\section{Introduction}\label{S:1} The Hardy operator on the set of positive integers is given by $I\varphi(n)=\sum_{l=0}^n\varphi(l)$. The problem of characterizing the positive weights $u,v:{\mathbb N}\to{\mathbb R}^+$ such that \begin{equation}\label{hardy1} \|I\varphi\|_{\ell^p(v)}\le C\|\varphi\|_{\ell^p(u)} \end{equation} for some positive $C$, depending only on $1<p<\infty$, $u$, and $v$, has long been considered and solved. It was only rather recently \cite{ars2002} that an analogous problem was considered on trees. Let $T$ be a tree with vertex set $V(T)\ni o$, where $o$ is a \textit{root} of $T$, and define, for $\varphi:V(T)\to\mathbb{R}_+$, the function $I\varphi:V(T)\to\mathbb{R}_+$ as $I\varphi(\alpha)=\sum_{\beta\in[o,\alpha]}\varphi(\beta)$, where $[o,\alpha]$ is the ``geodesic'' joining $\alpha$ to the root. In fact, $\mathbb{N}$ is a particular tree, but general trees might exhibit exponential growth, with respect to $n$, of the number of points at distance $n$ from a distinguished vertex. The usual dyadic tree is a typical example. We postpone the precise definition of the Hardy operator on trees to the next section, where the necessary notation is introduced. Characterizing the two-weight inequality for the Hardy operator on trees led to a new characterization of the Carleson measures for the Dirichlet space, a result originally due to Stegenga \cite{stegenga1980}, which applied, in fact, to a wide range of exponents, weights, and underlying spaces. In its simplest form, one wants to characterize the measures $\mu\ge0$ on the unit disc $\mathbb{D}=\{z\in\mathbb{C}:\ |z|<1\}$ in the complex plane such that \begin{equation}\label{hardy2} \iint_{\mathbb{D}}|f(z)|^2d\mu(z)\le C\iint_{\mathbb{D}}|f^\prime(z)|^2dxdy \end{equation} for all holomorphic functions satisfying $f(0)=0$. The connection between the holomorphic problem and the discrete one might be summarized as follows. 
The function $f$ on $\mathbb{D}$ is identified with the function $I\varphi$, the function $f^\prime$ with $\varphi$ (the ``derivative'' of $I\varphi$), and the unit disc $\mathbb{D}$ with the tree $T$ which indexes its dyadic Whitney decomposition. In 1985, E. Sawyer \cite{sawyer1985} considered the extension of (\ref{hardy1}) to the bi-linear case: $II\varphi(m,n)=\sum_{i=0}^m\sum_{j=0}^n\varphi(i,j)$, with $\varphi:\mathbb{N}^2\to\mathbb{R}_+$. He characterized the two-weight inequality for the bi-linear Hardy operator $II$; it should be mentioned that the tri-linear inequality still awaits a characterization. Recently, we and Giulia Sarfatti \cite{amps2018} considered the problem of characterizing the Carleson measures for the Dirichlet space on the bi-disc, which may be thought of as $\mathcal{D}(\mathbb{D}^2)\equiv\mathcal{D}(\mathbb{D})\otimes\mathcal{D}(\mathbb{D})$. The first step is reducing the problem to one on the bi-tree: the Cartesian product of two copies of the tree, with the corresponding Hardy operator defined by summation on Cartesian products of geodesics, as in Sawyer's result. We could not modify Sawyer's proof, however, to make it work in the bi-tree case. Our proof follows Stegenga's idea of proving a \textit{capacitary strong type inequality}, which is the heart of the argument. In this note, we prove some results in multi-linear potential theory, which might prove useful in extending the results in \cite{amps2018} to (i) polytrees with more than two factors, and (ii) the weighted setting. In Section \ref{SS:2.4} we prove that the capacity of a subset $E$ of the polytree $T^d=T\times\dots\times T$ is comparable to that of its projection $\mathcal{S}_b(E)$ onto the \textit{distinguished boundary} $(\partial T)^d$ of $T^d$. The novelty is that we consider the discrete problem arising from the study of the potential theory associated with \textit{weighted} Dirichlet spaces, which have not been investigated so far. 
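For readers who like a computational picture, the following is a minimal numerical sketch (our own illustration, not code from the cited works) of the two discrete operators just discussed: the linear Hardy operator $I$ on the non-negative integers and Sawyer's bi-linear operator $II$ on $\mathbb{N}^2$, both restricted to finite arrays.

```python
# Sketch (our illustration): the discrete Hardy operator I on {0,...,n} and
# Sawyer's bi-linear operator II on N^2, restricted to finite input arrays.

def hardy(phi):
    """(I phi)(n) = sum_{l=0}^{n} phi(l), for every n < len(phi)."""
    out, running = [], 0
    for value in phi:
        running += value
        out.append(running)
    return out

def hardy2(phi):
    """(II phi)(m, n) = sum_{i<=m} sum_{j<=n} phi[i][j] for a 2-D list phi."""
    rows = [hardy(row) for row in phi]         # partial sums in the second index
    cols = [hardy(col) for col in zip(*rows)]  # then in the first index
    return [list(row) for row in zip(*cols)]   # transpose back

print(hardy([1, 1, 1]))          # [1, 2, 3]
print(hardy2([[1, 2], [3, 4]]))  # [[1, 3], [4, 10]]
```

The bi-linear operator factors as two one-dimensional summations, one in each variable, which is exactly the product structure exploited on the bi-tree below.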
In Section \ref{oggi}, we give two noncapacitary sufficient conditions for a measure to satisfy the trace inequality for the multilinear Hardy operator on a polytree.\\ Throughout this paper we refer to some basic facts from potential theory, as presented in \cite[Chapter 2]{ah1996}. \section{Weighted $d$-tree and potential theory}\label{S:2} \subsection{A $d$-tree} As in \cite{amps2018} we start by considering the rooted, directed (away from the root), uniform infinite binary tree (\textit{a dyadic tree}). The order relation on the vertex set $V(T)$ is given by the direction: for $\alpha,\beta\in V(T)$ we say that $\alpha \leq \beta$, if one can get from $\beta$ to $\alpha$ following the directed edges. In other words, $\beta$ is one of the endpoints of the edges in the geodesic $[\alpha,o]$ connecting $\alpha$ and the root $o$. We also write $\alpha < \beta$, if $\alpha\leq\beta$ and $\alpha\neq \beta$. The boundary $\partial T$ of the tree is defined in a standard way; each point $\omega \in \partial T$ is encoded as an infinite directed sequence $[e^0,e^1,\dots]\subset E(T)$ of connected edges that starts at the root $o$ (i.e. $o$ is an endpoint of $e^0$). The order relation makes sense for $\partial T$ as well: given $\omega\in \partial T$ we say that $\omega \leq \alpha$, if and only if $\alpha$ is an endpoint of one of the edges $e^{k}$ encoding $\omega$, or $\alpha = \omega$. We write $\overline{T} := T\bigcup \partial T$. In what follows we identify the vertex set $V(T)$ and the tree itself, i.e. we assume that $\alpha\in T$ is always a vertex.\\ If $\alpha,\beta \in \overline{T}$, then there exists a unique point $\gamma \in \overline{T}$ that is \textit{the least common ancestor} of $\alpha$ and $\beta$; we denote it by $\alpha\wedge\beta$. 
Namely, we have that $\gamma\geq\alpha,\;\gamma\geq\beta$, and if there is another point $\tilde{\gamma}$ satisfying these relations, then $\tilde{\gamma}\geq\gamma$ (basically, $\gamma$ is the first intersection point of the geodesics connecting $\alpha$ and $\beta$ to the root). In particular, $\alpha\wedge\alpha = \alpha$. The total number of common ancestors of $\alpha$ and $\beta$ is denoted by $d_T(\alpha\wedge\beta)$ ($d_T(\alpha\wedge\beta) = dist_T(\alpha\wedge\beta,o)+1$, where $dist_T$ is the usual graph distance on $T$). $d_T$ can be infinite, for instance, $d_T(\omega\wedge\omega) = \infty$ when $\omega\in\partial T$. The \textit{predecessor set} (with respect to the geometry of $\overline{T}$) of a point $\alpha\in V(T)\cup\partial T$ is \[ \mathcal{P}(\alpha) = \{\beta\in \overline{T}:\; \beta\geq\alpha\}. \] In particular, every point is its own predecessor. The \textit{successor set} is \[ \mathcal{S}(\beta) := \{\alpha\in\overline{T}:\; \beta\in\mathcal{P}(\alpha)\},\quad \beta\in \overline{T}. \] Clearly $d_T(\alpha\wedge\beta) = \sharp\mathcal{P}(\alpha\wedge\beta)$.\par We are now ready to define the $d$-tree. Fix an integer $d$, and consider $T_1,T_2,\dots, T_d$ --- identical copies of the dyadic tree $T$. The vertex set $V(T^d)$ of the graph $T^d$ is defined as follows \begin{equation}\notag V(T^d):= V(T)^d = V(T_1)\times V(T_2)\times\dots\times V(T_d), \end{equation} i.e. $\alpha \in V(T^d)$, if $\alpha = (\alpha_1,\dots,\alpha_d)$ with $\alpha_j\in T_j,\; j=1,\dots,d$. Two vertices $\alpha,\beta\in V(T^d)$ are connected by an edge, if and only if there exists a number $1\leq j\leq d$ such that $\alpha_j$ and $\beta_j$ are connected by an edge in $T_j$, and $\alpha_k = \beta_k$ for any $k\neq j$. As before, we usually identify $V(T^d)$ and $T^d$.\\ The order relation on $T^d$ is induced by the order on its coordinate trees: we say that $\alpha \leq \beta$, if $\alpha_j \leq\beta_j$ for every $1\leq j \leq d$. 
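The definitions above are easy to experiment with. In the following sketch (our own bookkeeping device, not notation from the text) vertices of the dyadic tree are encoded as binary strings, the root being the empty string, so that $\beta\geq\alpha$ exactly when $\beta$ is a prefix of $\alpha$; the wedge and the order on the $d$-tree then act coordinatewise.

```python
# Sketch (our encoding): vertices of the dyadic tree T as binary strings,
# root = ""; beta >= alpha (beta is an ancestor of alpha) iff beta is a
# prefix of alpha.

def leq(alpha, beta):
    """alpha <= beta in the tree order: beta lies on the geodesic [alpha, o]."""
    return alpha.startswith(beta)

def wedge(alpha, beta):
    """alpha ^ beta: the least common ancestor = the longest common prefix."""
    k = 0
    while k < min(len(alpha), len(beta)) and alpha[k] == beta[k]:
        k += 1
    return alpha[:k]

def d_T(alpha):
    """d_T(alpha) = dist_T(alpha, o) + 1: number of ancestors, root included."""
    return len(alpha) + 1

# On the d-tree the order is coordinatewise, the wedge acts componentwise,
# and d_{T^d}(a ^ b) is the product of the coordinate ancestor counts.
def leq_d(a, b):
    return all(leq(aj, bj) for aj, bj in zip(a, b))

def wedge_d(a, b):
    return tuple(wedge(aj, bj) for aj, bj in zip(a, b))

def d_Td(a, b):
    result = 1
    for aj, bj in zip(a, b):
        result *= d_T(wedge(aj, bj))
    return result

print(wedge("010", "011"))                # "01"
print(d_Td(("010", "1"), ("011", "10")))  # d_T("01") * d_T("1") = 3 * 2 = 6
```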
The boundary of the $d$-tree is \[ \partial T^d = \bigcup_{D\subsetneq \{1,2,\dots,d\}}\prod_{j\in D}T_j\prod_{k\in \{1,2,\dots,d\}\setminus D}\partial T_k \] (the Cartesian products are taken according to the order of indices). The set $\partial T_1\times\partial T_2\times\dots\times\partial T_d$ is called the \textit{distinguished boundary} of $T^d$ and denoted by $(\partial T)^d$. We let $\overline{T}^d = T^d\bigcup\partial T^d$. As before, we define predecessor and successor sets of a vertex $\alpha = (\alpha_1,\dots,\alpha_d)$ using the same notation \[ \mathcal{P}(\alpha) = \mathcal{P}(\alpha_1)\times\dots \times\mathcal{P}(\alpha_d),\; \mathcal{S}(\alpha) = \mathcal{S}(\alpha_1)\times\dots\times\mathcal{S}(\alpha_d). \] Sometimes we specify the dimension by writing $\mathcal{S}_{T}(\alpha)$ for a point $\alpha$ in the tree $T$, and $\mathcal{S}_{T^d}(\alpha)$ for a point $\alpha$ in the $d$-tree (same goes for the predecessor sets). The part of $\mathcal{S}(\alpha)$ that lies on the distinguished boundary is denoted by $\partial \mathcal{S}(\alpha)$.\\ Similarly to the one-dimensional setting, we denote the number (possibly infinite) of common ancestors of $\alpha$ and $\beta$ by $d_{T^d}(\alpha\wedge\beta)$, where $\alpha\wedge\beta = (\alpha_1\wedge\beta_1,\dots,\alpha_d\wedge\beta_d)$ is the (unique) least common ancestor (in $T^d$) of $\alpha$ and $\beta$. We have \[d_{T^d}(\alpha\wedge\beta) = \prod_{j=1}^d d_T(\alpha_j\wedge\beta_j) = \sharp\mathcal{P}(\alpha\wedge\beta). \] We also write $d_T(\alpha_j)$ and $d_{T^d}(\beta)$ instead of $d_T(\alpha_j\wedge\alpha_j)$ and $d_{T^d}(\beta\wedge\beta)$. \\ \subsection{Potential theory on $d$-tree}\label{SS:2.35} Before we introduce the basics of potential theory on the $d$-tree we adapt our space to the conventions used in \cite{ah1996}. 
First we define a metric on $\overline{T}_j$: given $\alpha_j,\beta_j\in \overline{T}_j$ we set \begin{equation}\notag \delta_j(\alpha_j,\beta_j) := 2^{-d_T(\alpha_j\wedge\beta_j)} - \frac12\left(2^{-d_T(\alpha_j)} + 2^{-d_T(\beta_j)}\right); \end{equation} essentially, this is the distance associated with the graph distance on $T$ with weights $2^{-dist_T(\alpha_j,o)}$. Then we let \begin{equation}\label{e:31} \delta(\alpha,\beta) = \sum_{j=1}^d\delta_j(\alpha_j,\beta_j),\quad \alpha,\beta\in \overline{T}^d. \end{equation} Clearly, $\delta$ is a metric on $\overline{T}^d$.\\ We suggest two ways of interpreting a $d$-tree; the first one is less natural in a sense, but it allows us to properly use the machinery in \cite{ah1996}. The dyadic tree is a planar graph, and one can embed it into $\mathbb{R}^2$ in such a way that its boundary $\partial T_j$ is actually a classical ternary Cantor set $E_c$ on the unit interval. As a result we can assume that $\overline{T}_j\subset \mathbb{R}^2$; moreover, embedded with $\delta_j$ it is a locally compact Radon space, and Borel sets in $\overline{T}_j$ are Borel in $\mathbb{R}^2$. In the same vein, the points of $T^d$ embed into $\mathbb{R}^{2d}$. In particular $(\partial T)^d$ can be identified with $E_c^d$.\par Let $\pi$ be a positive Borel measure on $T^d$, that is, a collection of positive weights on the vertices of $T^d$ --- we always assume $\pi$ has zero mass on $\partial T^d$. Denote by $\mathbb{M}$ the (open) $d$-tree $T^d$ equipped with the measure $\pi$ and the family of Borel (with respect to the distance $\delta$) measurable sets. We define a kernel $G:\mathbb{R}^{2d}\times\mathbb{M}\rightarrow \mathbb{R}_+$ to be $G(\alpha,\beta) := \chi_{\mathcal{S}_{\beta}}(\alpha)$, where $\alpha\in \overline{T}^d\subset\mathbb{R}^{2d}$, $\beta\in T^d$ and $\mathcal{S}_{\beta} := \{\gamma\in\overline{T}^d:\; \gamma\leq\beta\}$ is the $\overline{T}^d$-successor set of $\beta$. 
It is easy to verify that $G$ is lower semicontinuous on $\overline{T}^d$ in the first variable, and measurable on $\mathbb{M}$ in the second variable. This means that we are now squarely in the context of Adams and Hedberg (\cite[Chapter 2.3]{ah1996}), and we can proceed with the potential theory. Given a non-negative Borel measure $\mu$ on $\overline{T}^d$ (which, again, is by extension Borel on $\mathbb{R}^{2d}$) and a non-negative $\pi$-measurable function $f$ on $\mathbb{M}$ we let \begin{subequations}\label{e:32} \begin{eqnarray} \label{e:32.1}& (\mathbb{I}f)(\alpha) := \int_{\mathbb{M}}G(\alpha,\beta)f(\beta)\,d\pi(\beta) = \sum_{\gamma\geq\alpha}f(\gamma)\pi(\gamma),\\ \label{e:32.2}& (\mathbb{I}^*\mu)(\beta) := \int_{\overline{T}^d}G(\alpha,\beta)\,d\mu(\alpha) = \int_{\mathcal{S}(\beta)}\,d\mu(\alpha). \end{eqnarray} \end{subequations} Observe that a measure supported on $T^d$ and a non-negative function are essentially the same objects --- a collection of masses assigned to the points of the $d$-tree. The potential theory generated by these two operators leads us to the notions of $\pi$-potential \begin{equation}\label{e:33} \mathbb{V}_{\pi}^{\mu} := (\mathbb{I}\mathbb{I}^*)(\mu) \end{equation} and capacity \begin{equation}\label{e:334} \capp_{\pi} E := \inf\left\{\int f^2\,d\pi:\; f\geq0,\,(\mathbb{I}f)(\alpha)\geq1,\; \alpha\in E\right\},\quad E\subset \overline{T}^d. \end{equation} Given two Borel measures $\mu,\nu\geq0$ on $\overline{T}^d$ we define their mutual energy to be \begin{equation}\label{e:337} \mathcal{E}_{\pi}[\mu,\nu] := \int_{\overline{T}^d}\mathbb{V}_{\pi}^{\mu}\,d\nu = \int_{\overline{T}^d}\mathbb{V}_{\pi}^{\nu}\,d\mu = \sum_{\alpha\in T^d}(\mathbb{I}^*\mu)(\alpha)(\mathbb{I}^*\nu)(\alpha)\pi(\alpha) = \langle\mathbb{I}^*\mu,\mathbb{I}^*\nu\rangle_{L^2(T^d,\pi)}, \end{equation} the last two equalities following from Tonelli's theorem. When $\mu=\nu$ we write $\mathcal{E}_{\pi}[\mu]$ instead, and we call it \textit{the energy of $\mu$}. 
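The operators \eqref{e:32} and the energy \eqref{e:337} can be computed by hand on small examples. The following finite sketch (our own illustration; the depth-2 truncation, the binary-string encoding, and the choice $\pi\equiv1$ are assumptions made purely for the demonstration) checks numerically that $\mathcal{E}_{\pi}[\mu]=\int\mathbb{V}_{\pi}^{\mu}\,d\mu$ for a measure made of two point masses.

```python
# Finite sketch (our illustration): (I f), (I* mu), the potential V = I I* mu
# and the energy on the dyadic tree truncated at depth 2, with weight pi = 1.
# Vertices are binary strings, root = ""; gamma >= alpha iff gamma is a
# prefix of alpha.

from itertools import product

VERTICES = [""] + ["".join(p) for n in (1, 2) for p in product("01", repeat=n)]

def I_f(f, alpha):
    """(I f)(alpha) = sum_{gamma >= alpha} f(gamma) pi(gamma), with pi = 1."""
    return sum(f.get(g, 0.0) for g in VERTICES if alpha.startswith(g))

def I_star(mu, beta):
    """(I* mu)(beta) = mu(S(beta)): the total mass lying under beta."""
    return sum(mass for alpha, mass in mu.items() if alpha.startswith(beta))

def potential(mu, alpha):
    """V^mu(alpha) = (I I* mu)(alpha) = sum over ancestors of alpha."""
    return sum(I_star(mu, g) for g in VERTICES if alpha.startswith(g))

def energy(mu, nu):
    """E[mu, nu] = <I* mu, I* nu>_{L^2(pi)} = sum over all vertices."""
    return sum(I_star(mu, a) * I_star(nu, a) for a in VERTICES)

mu = {"00": 1.0, "11": 1.0}    # two unit point masses at depth 2
print(I_star(mu, ""))          # 2.0: all mass lies below the root
print(potential(mu, "00"))     # 4.0 = I*mu("") + I*mu("0") + I*mu("00")
print(energy(mu, mu))          # 8.0, and indeed equals the integral of V^mu d mu
```

The last equality ($\mathcal{E}_{\pi}[\mu]=\int\mathbb{V}_{\pi}^{\mu}\,d\mu$) is exactly the Tonelli exchange used in \eqref{e:337}.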
Given a Borel set $E\subset \overline{T}^d$ there exists a uniquely defined \textit{equilibrium measure} $\mu_E\geq0$ that realizes the infimum in \eqref{e:334}, so that \[ \capp_{\pi} E = \int_{T^d}(\mathbb{I}^*\mu_E)^2\,d\pi = \mathcal{E}_{\pi}[\mu_E] = \mu_E(E) \] (see \cite{ah1996}). If $E$ is a compact set, then one also has $\supp\mu_E\subset E$.\par Another way to look at the $d$-tree (which is more convenient and tangible) is the dyadic rectangle representation. It is well known that a dyadic tree can be interpreted as a collection of dyadic subintervals of some basic interval (say, $[0,1]$), with a natural order given by inclusion. This approach is not without its own problems though, since $\partial T$ and $[0,1]$ do not have a one-to-one correspondence --- dyadic-rational points can be images of two different elements of $\partial T$. This obstacle, however, is not relevant in the context of the potential theory we have developed, since the measures we are working with do not distinguish these points. In other words, if the measure has finite energy for an appropriate choice of weight $\pi$, its total mass on the non-injective set is zero, see Lemma \ref{l:1}. In this representation, for every point $\alpha\in T^d$ there exists a unique dyadic rectangle $R_{\alpha} = \prod_{j=1}^d[k_j2^{-n_j},(k_j+1)2^{-n_j}]\subset [0,1]^d$ with $n_j\geq0$ and $0\leq k_j\leq 2^{n_j}-1$, and vice-versa, every such dyadic rectangle corresponds to a point $\alpha\in T^d$. In the same way, the \textit{distinguished boundary} can be roughly viewed as the unit cube $[0,1]^d$ (again, the problematic points are not seen by finite energy measures). The rest of $\partial T^d$ is visualized similarly. \\ This representation makes it clear that a $d$-tree (for $d\geq 2$) is \textit{not a tree}, since, for instance, every point has several geodesics connecting it to the root $(o_1,\dots,o_d)$, and $T^d$ has many cycles. 
However $T^d$ still has some structural properties inherited from the geometry of $T$; in particular, \textit{it does not have any directed cycles}. This allows us to salvage some of the arguments used in the one-dimensional case.\\ As usual, we write $A \lesssim B$ if there exists a constant $C$ (that depends only on $d$ and $\pi$, and whose value may change from line to line) such that $A \leq CB$, and $A\approx B$, if $A\lesssim B$ and $B\lesssim A$. \section{Properties of potentials and standard polynomial weights}\label{SS:2.4} \subsection{Basic properties of potentials and capacities} We call $\pi$ a product weight, if $\pi(\beta) = \prod_{j=1}^d\pi_j(\beta_j)$, where $\pi_j$ is a weight on $T_j$. \begin{lemma}\label{l:1} Assume $\pi$ is bounded away from zero. Then the following properties hold: \begin{enumerate} \item If $\mu\geq0$ is a Borel measure on $(\partial T)^d$ with finite energy, then $\mu (\{\omega_j\}\times\prod_{k\neq j}\partial T_k) = 0$ for any $\omega_j \in \partial T_j,\; 1\leq j\leq d$.\label{l:A1.s.1} \item Assume $\pi$ is a product weight. Let $E\subset \overline{T}^d$ be a Borel set. Define $E_j\subset \overline{T}_j$ to be its coordinate projections, i.e. $\alpha_j\in E_j$, if there exist points $\alpha_k\in \overline{T}_k,\; k\neq j$ such that $(\alpha_1,\dots,\alpha_j,\dots,\alpha_d) = \alpha \in E$. Then \begin{equation}\label{e:l1.1} \capp_{\pi} E \leq \prod_{j=1}^d \capp_{\pi_j}E_j. \end{equation} In particular, if $E$ is a product set, $E = E_1\times\dots\times E_d$, then we have equality in \eqref{e:l1.1}. \label{l:A1.s.0} \item Let $E$ be a Borel subset of $\overline{T}^d$ and $\mu\geq0$ be a Borel measure on $\overline{T}^d$ such that $\mathcal{E}_{\pi}[\mu]<\infty$ and $\mathbb{V}_{\pi}^{\mu} \leq 1$ q.a.e. on $E$. Then $\capp_{\pi} E \geq \mu(E)$. \label{l:A1.s.2} \item Let $E$ be a Borel subset of $\overline{T}^d$ and $\mu\geq0$ be a Borel measure on $\overline{T}^d$ with finite mass such that $\mu(E) \geq \mathcal{E}_{\pi}[\mu]$. 
Then $\capp_{\pi} E \geq \mu(E)$. \label{l:A1.s.2.5} \end{enumerate} \end{lemma} \begin{proof} \textit{Property \ref{l:A1.s.1}.}\;\; Assume $j=1$ and $\mu(\{\omega_1\}\times\prod_{k=2}^d\partial T_k) =\varepsilon >0$ for some $\omega_1\in \partial T_1$. Then we immediately have $(\mathbb{I}^*\mu)(\alpha_1,o_2,\dots,o_d) \geq \varepsilon$ for any $\alpha_1>\omega_1$, and \[ \mathcal{E}_{\pi}[\mu] = \sum_{\alpha\in T^d}(\mathbb{I}^*\mu)^2(\alpha)\pi(\alpha) \geq \sum_{\alpha_1>\omega_1}(\mathbb{I}^*\mu)^2(\alpha_1,o_2,\dots,o_d)\pi(\alpha_1,o_2,\dots,o_d) = \infty, \] since $\pi$ is bounded away from zero and the sum contains infinitely many terms (this is the only property of the weight we need here).\par \textit{Property \ref{l:A1.s.0}.}\;\; For every $1\leq j\leq d$ let $f_j$ be a function admissible for $E_j$ (so that $\sum_{\beta_j\geq\alpha_j}f_j(\beta_j)\pi_{j}(\beta_j) \geq 1$ for every $\alpha_j\in E_j$). Define \[ f(\beta) = \prod_{j=1}^df_j(\beta_j),\; \beta\in T^d. \] Since $\pi = \prod_{j=1}^d\pi_j$, we clearly have \[ \sum_{\beta\geq\alpha}f(\beta)\pi(\beta) = \prod_{j=1}^d\sum_{\beta_j\geq\alpha_j}f_j(\beta_j)\pi_j(\beta_j) \] for any $\alpha\in \overline{T}^d$. Therefore $f$ is admissible for $\prod_{j=1}^d E_j\supset E$. In the same fashion, $\|f\|^2_{L^2(T^d,d\pi)} = \prod_{j=1}^d\|f_j\|_{L^2(T_j,d\pi_j)}^2$, hence $\capp_{\pi}\left(\prod_{j=1}^dE_j\right) \leq \prod_{j=1}^d\capp_{\pi_j}E_j$, and \eqref{e:l1.1} follows immediately. In particular, the product of polar sets is a polar set as well.\\ To get the equality for product sets we turn to the dual definition of capacity: \[ \capp_{\pi_j} E_j = \sup\{\mu_j(E_j)^2:\;\supp\mu_j\subset E_j,\; \mathcal{E}_{\pi_j}[\mu_j]\leq 1\}. \] Now let $\mu_j$ be some admissible (in the sense above) measure for $E_j$. Define $\mu$ to be the usual extension of $\prod_{j=1}^d\mu_j$ to $\overline{T}^d$. 
As before, for any $\alpha\in \overline{T}^d$ one has \begin{equation}\notag \begin{split} &\mathbb{V}_{\pi}^{\mu}(\alpha) = \sum_{\beta\geq\alpha}(\mathbb{I}^*\mu)(\beta)\pi(\beta) = \sum_{\beta\geq\alpha}\pi(\beta)\int_{\mathcal{S}(\beta)}\,d\mu = \int_{\overline{T}^d}\sum_{\beta\geq\alpha}\,\pi(\beta)\chi_{\mathcal{S}(\beta)}(\tau)\,d\mu(\tau) =\\ &\int_{\overline{T}^d}\sum_{\beta\geq\alpha\wedge\tau}\,\pi(\beta)\,d\mu(\tau) = \int_{\overline{T}_1}\dots\int_{\overline{T}_d}\sum_{\beta_1\geq\alpha_1\wedge\tau_1}\pi_1(\beta_1)\cdot\dots\cdot\sum_{\beta_d\geq\alpha_d\wedge\tau_d}\pi_d(\beta_d)\,d\mu_1(\tau_1)\dots\,d\mu_d(\tau_d) = \prod_{j=1}^{d}\mathbb{V}_{\pi_j}^{\mu_j}(\alpha_j). \end{split} \end{equation} In particular, we observe that $\mathcal{E}_{\pi}[\mu] = \prod_{j=1}^d\mathcal{E}_{\pi_j}[\mu_j]$, hence the $\pi$-energy of $\mu$ is at most $1$. Combined with the fact that $\supp\mu\subset \prod_{j=1}^dE_j$ we obtain $\capp_{\pi}\left(\prod_{j=1}^dE_j\right) \geq \prod_{j=1}^d\capp_{\pi_j}E_j$.\par \textit{Property \ref{l:A1.s.2}}.\;\; Define, as usual, the restricted measure $\mu\vert_E$ by $\mu\vert_E(F) := \mu(E\bigcap F)$, and let $\mu_E$ be the equilibrium measure of $E$. Clearly, $\mathcal{E}_{\pi}[\mu_E] < \infty$, and $\mathbb{V}_{\pi}^{\mu\vert_E} \leq \mathbb{V}_{\pi}^{\mu}$. We have \[ \mathcal{E}_{\pi}[\mu\vert_E] = \int_{\overline{T}^d}\mathbb{V}_{\pi}^{\mu\vert_E}\,d\mu\vert_E \leq \int_{\overline{T}^d}\mathbb{V}_{\pi}^{\mu_E}\,d\mu\vert_E = \mathcal{E}_{\pi}[\mu_E,\mu\vert_E], \] since $\mathbb{V}_{\pi}^{\mu_E}\geq1$ q.a.e. on $E$. Hence, by positivity of the energy integral, \[ \mu(E) = \mu\vert_E(E) = \int_{\overline{T}^d}\,d\mu\vert_E \leq \mathcal{E}_{\pi}[\mu_E,\mu\vert_E] \leq \mathcal{E}_{\pi}[\mu_E] = \capp_{\pi} E. \] \textit{Property \ref{l:A1.s.2.5}}.\;\; Let $\mu_E$ be the equilibrium measure of $E$. Clearly \[ \mathcal{E}_{\pi}[\mu_E,\mu] = \int_{\overline{T}^d}\mathbb{V}^{\mu_E}_{\pi}\,d\mu \geq \mu(E) \geq \mathcal{E}_{\pi}[\mu]. 
\] By positivity of the energy integral it follows that $\capp_{\pi}E = \mathcal{E}_{\pi}[\mu_E] \geq \mathcal{E}_{\pi}[\mu_E,\mu] \geq \mu(E)$.\par \end{proof} \subsection{Standard polynomial weights and capacity of the boundary} From now on we restrict ourselves to a special class of weights --- the so-called standard polynomial weights, where $\pi_j(\beta_j) = 2^{s_jd_{T}(\beta_j)}$ for some $0\leq s_j <1$. This class is connected to the discrete representation of the weighted Dirichlet space on the polydisc, i.e. the space of analytic functions $f$ on $\mathbb{D}^d$, $f(z)=\sum_{a_1,\dots,a_d\geq0}\hat{f}(a_1,\dots,a_d)z^{a_1}_1\dots z^{a_d}_d$, which satisfy \[ \sum_{a_1,\dots,a_d}|\hat{f}(a_1,\dots,a_d)|^2(a_1+1)^{1-s_1}\cdot\dots\cdot(a_d+1)^{1-s_d} < +\infty. \] In this case there is a natural way to push down a measure $\mu$ defined on the whole $d$-tree to its distinguished boundary $(\partial T)^d$. \\ To do that we first need to define an analogue of the Lebesgue measure on $(\partial T)^d$. We start with a dyadic tree $T$. For any point $\alpha\in T$ we put \[ M(\partial\mathcal{S}(\alpha)) := 2^{-d_T(\alpha)+1} \] to be the ``length'' of a ``dyadic interval'' on $\partial T$. We see that $M$ can be extended to a Borel measure on $\partial T$ satisfying the property above (also, clearly, it has no mass on singletons). Since $M$ is finite, there exists a unique Borel measure $M_d$ on $(\partial T)^d$ such that \[ M_d(\mathcal{S}(\alpha)\bigcap(\partial T)^d) = \prod_{j=1}^d M(\partial\mathcal{S}(\alpha_j)) \] (observe that $M_d(\{\omega_j\}\times\prod_{k\neq j}\partial T_{k})=0$ for any $1\leq j\leq d$ and $\omega_j\in\partial T_j$). \\ Suppose now $\mu\geq0$ is a Borel measure on $\overline{T}^d$ with finite energy. 
By the disintegration theorem we can define a measure $\mu_b$ supported on $(\partial T)^d$ by \begin{equation}\label{e:lA.3} \begin{split} d\mu_b(\omega_1,\dots,\omega_d) := \sum_{D\subset\{1,\dots,d\}}\sum_{j\in D}\sum_{\beta_j>\omega_j}\frac{d\mu(\tau(D,\beta,\omega))}{\prod_{j\in D}M(\partial\mathcal{S}(\beta_j))}\,\prod_{j\in D}dM(\omega_j), \end{split} \end{equation} where $\tau(D,\beta,\omega)_j = \beta_j$, if $j\in D$, and $\tau(D,\beta,\omega)_j = \omega_j$ otherwise. Roughly speaking, we take the mass $\mu(\beta)$ and distribute it uniformly over $\partial\mathcal{S}(\beta)$; the part of $\mu$ on the distinguished boundary we leave as it is, and we do a mixed distribution on the rest of $\partial T^d$. \begin{theorem}\label{t:1} The potentials of $\mu$ and $\mu_b$ are equivalent, \begin{equation}\label{e:lA.3s} \mathbb{V}_{\pi}^{\mu}(\alpha) \approx \mathbb{V}_{\pi}^{\mu_b}(\alpha),\quad \alpha\in \overline{T}^d. \end{equation} \end{theorem} \textit{Proof of Theorem \ref{t:1}}.\\ Given $\alpha,\beta\in \overline{T}^d$ define $d_{\pi}(\alpha\wedge\beta) := \sum_{\gamma\geq\alpha\wedge\beta}\pi(\gamma)$. Since $\mathbb{V}_{\pi}^{\mu_b}(\zeta) = \int_{(\partial T)^d}d_{\pi}(\zeta\wedge\omega)\,d\mu_b(\omega)$, we want to compare the values of $d_{\pi}(\alpha\wedge\beta)$, and the average of $d_{\pi}$ taken over the boundary projections of $\alpha$ and $\beta$. \begin{lemma}\label{l:51} One has \begin{equation}\label{e:91} d_{\pi}(\alpha\wedge\beta) \approx \frac{1}{M_d(\partial\mathcal{S}(\alpha))M_d(\partial\mathcal{S}(\beta))}\int_{\partial\mathcal{S}(\alpha)}\int_{\partial\mathcal{S}(\beta)}d_{\pi}(\xi\wedge\omega)dM_d(\xi)\,dM_d(\omega) \end{equation} ($d_{\pi}$ is almost a martingale with respect to the measure $M$). 
\end{lemma} \begin{proof} Due to multiplicativity it is enough to prove that, say, \begin{equation}\notag d_{\pi_1}(\alpha_1\wedge\beta_1) \approx \frac{1}{M(\partial\mathcal{S}(\alpha_1))M(\partial\mathcal{S}(\beta_1))}\int_{\partial\mathcal{S}(\alpha_1)}\int_{\partial\mathcal{S}(\beta_1)}d_{\pi_1}(\xi_1\wedge\omega_1)dM(\xi_1)\,dM(\omega_1). \end{equation} If $\xi_1\leq\alpha_1$ and $\omega_1\leq\beta_1$, then $d_{\pi_1}(\xi_1\wedge\omega_1) \geq d_{\pi_1}(\alpha_1\wedge\beta_1)$, hence \[ d_{\pi_1}(\alpha_1\wedge\beta_1) \leq \frac{1}{M(\partial\mathcal{S}(\alpha_1))M(\partial\mathcal{S}(\beta_1))}\int_{\partial\mathcal{S}(\alpha_1)}\int_{\partial\mathcal{S}(\beta_1)}d_{\pi_1}(\xi_1\wedge\omega_1)dM(\xi_1)\,dM(\omega_1). \] To get the reverse inequality we first show that for any $\beta_1\in T_1$ and $\tau_1\in \overline{T}_1$ we have \begin{equation}\label{e:92} \frac{1}{M(\partial\mathcal{S}(\beta_1))}\int_{\partial\mathcal{S}(\beta_1)}d_{\pi_1}(\tau_1\wedge\omega_1)\,dM(\omega_1) \lesssim d_{\pi_1}(\tau_1\wedge\beta_1). \end{equation} If $\tau_1\geq\beta_1$ or these two points are not comparable, then, clearly, $d_{\pi_1}(\tau_1\wedge\beta_1) = d_{\pi_1}(\tau_1\wedge\omega_1)$ for $\omega_1\leq\beta_1$, and \eqref{e:92} is trivial. Hence from now on we assume that $\tau_1 < \beta_1$. First we note that since $\pi$ is a standard polynomial weight, one has $d_{\pi_1}(\gamma) \approx d_T(\gamma)$, if $s_1d_T(\gamma)\leq1$, and $d_{\pi_1}(\gamma) \approx \frac{1}{s_1}2^{s_1d_T(\gamma)}$, if $s_1d_T(\gamma)\geq1$, for any $\gamma\in \overline{T}_1$. Let $n:= d_T(\beta_1)$ and $N:= d_T(\tau_1)$. For every $n\leq k \leq N$ there exists exactly one point $\gamma_k\in T_1$ such that $\tau_1 \leq\gamma_k \leq\beta_1$, and $d_T(\gamma_k) = k$ (in particular $\gamma_n = \beta_1,\; \gamma_N = \tau_1$). Define \[ S_k = \partial\mathcal{S}(\gamma_k)\setminus\partial\mathcal{S}(\gamma_{k+1}),\quad n\leq k \leq N-1, \] and \[ S_N = \partial\mathcal{S}(\tau_1). 
\] If $\omega_1\in S_k$, then, clearly, $d_{\pi_1}(\tau_1\wedge\omega_1) \approx \frac{1}{s_1} 2^{s_1k}$ for $k\geq\frac{1}{s_1}$, and $d_{\pi_1}(\tau_1\wedge\omega_1) \approx k$ otherwise. Moreover, these sets are disjoint and form a covering of $\partial\mathcal{S}(\beta_1)$. Also $M(S_k) = 2^{-k+1}- 2^{-k} = 2^{-k},\; n\leq k \leq N-1$, and $M(S_N) = 2^{-N+1}$. We have \begin{equation}\notag \begin{split} &\frac{1}{M(\partial\mathcal{S}(\beta_1))}\int_{\partial\mathcal{S}(\beta_1)}d_{\pi_1}(\tau_1\wedge\omega_1)\,dM(\omega_1) \approx\\ & 2^{d_T(\beta_1)}\sum_{k=n}^N\int_{S_k}d_{\pi_1}(\tau_1\wedge\omega_1)\,dM(\omega_1) \approx 2^n\sum_{k=n}^{\left[\frac{1}{s_1}\right]}k\cdot M(S_k)+ \frac{1}{s_1}2^n\sum_{k=\max\left(n,\left[\frac{1}{s_1}\right]\right)}^N2^{s_1k}\cdot M(S_k) \lesssim\\ & 2^n\sum_{k=n}^{\left[\frac{1}{s_1}\right]}k2^{-k} + \frac{1}{s_1}2^n\sum_{k=\max\left(n,\left[\frac{1}{s_1}\right]\right)}^N2^{-k(1-s_1)}\leq \frac{10}{1-s_1}d_{\pi_1}(\tau_1\wedge\beta_1), \end{split} \end{equation} and we arrive at \eqref{e:92}. It follows immediately that \begin{equation}\notag \begin{split} &\frac{1}{M(\partial\mathcal{S}(\alpha_1))M(\partial\mathcal{S}(\beta_1))}\int_{\partial\mathcal{S}(\alpha_1)}\int_{\partial\mathcal{S}(\beta_1)}d_{\pi_1}(\xi_1\wedge\omega_1)\,dM(\xi_1)\,dM(\omega_1) \leq\\ &\frac{10}{1-s_1}\frac{1}{M(\partial\mathcal{S}(\alpha_1))}\int_{\partial\mathcal{S}(\alpha_1)}d_{\pi_1}(\xi_1\wedge\beta_1)\,dM(\xi_1) \leq \frac{100}{(1-s_1)^2} d_{\pi_1}(\alpha_1\wedge\beta_1). \end{split} \end{equation} \end{proof} We proceed with the proof of Theorem \ref{t:1}. Fix any point $\alpha\in \overline{T}^d$. We have \begin{equation}\notag \begin{split} &\mathbb{V}_{\pi}^{\mu_b}(\alpha) = \int_{(\partial T)^d}\prod_{j=1}^dd_{\pi_j}(\alpha_j\wedge\omega_j)\,d\mu_b(\omega_1,\dots,\omega_d). \end{split} \end{equation} Consider the first term (the one corresponding to the values of $\mu$ on $T^d$) in the expression for $d\mu_b$. 
By Tonelli's theorem and Lemma \ref{l:51} one has \begin{equation}\notag \begin{split} &\int_{(\partial T)^d}\prod_{j=1}^dd_{\pi_j}(\alpha_j\wedge\omega_j)\,\sum_{\beta\geq\omega}\frac{\mu(\beta)}{\prod_{j=1}^dM(\partial\mathcal{S}(\beta_j))}\,dM(\omega_1)\dots\,dM(\omega_d) = \\ &\sum_{\beta\in T^d}\mu(\beta)\cdot\prod_{j=1}^d\left(\frac{1}{M(\partial\mathcal{S}(\beta_j))}\int_{\partial\mathcal{S}(\beta_j)}d_{\pi_j}(\alpha_j\wedge\omega_j)\,dM(\omega_j)\right) \approx\\ &\sum_{\beta\in T^d}\mu(\beta)d_{\pi}(\alpha\wedge\beta). \end{split} \end{equation} Similarly, if we take one of the mixed terms in \eqref{e:lA.3}, say with $D = \{2,3,\dots,d\}$, we obtain \begin{equation}\notag \begin{split} &\int_{(\partial T)^d}\prod_{j=1}^dd_{\pi_j}(\alpha_j\wedge\omega_j)\,\sum_{j=2}^d\sum_{\beta_j\geq\omega_j}\frac{d\mu(\omega_1,\beta_2,\dots,\beta_d)}{\prod_{j=2}^dM(\partial\mathcal{S}(\beta_j))}\,dM(\omega_2)\dots\,dM(\omega_d) =\\ &\sum_{j=2}^d\sum_{\beta_j\in T_j}\int_{\partial T_1}d_{\pi_1}(\alpha_1\wedge\omega_1)d\mu(\omega_1,\beta_2,\dots,\beta_d) \cdot\prod_{j=2}^d\left(\frac{1}{M(\partial\mathcal{S}(\beta_j))}\int_{\partial\mathcal{S}(\beta_j)}d_{\pi_j}(\alpha_j\wedge\omega_j)\,dM(\omega_j)\right)\approx\\ &\sum_{j=2}^d\sum_{\beta_j\in T_j}\int_{\partial T_1}d_{\pi_1}(\alpha_1\wedge\omega_1)\prod_{j=2}^dd_{\pi_j}(\alpha_j\wedge\beta_j)d\mu(\omega_1,\beta_2,\dots,\beta_d) = \int_{\partial T_1\times T_2\times\dots\times T_d}d_{\pi}(\alpha\wedge\tau)\,d\mu(\tau). \end{split} \end{equation} The rest of the terms are handled in the same way. 
We arrive at \begin{equation}\notag \begin{split} &\mathbb{V}^{\mu_b}_{\pi}(\alpha) =\\ & \int_{(\partial T)^d}\sum_{D\subset\{1,\dots,d\}}\sum_{j\in D}\sum_{\beta_j>\omega_j}\prod_{j\in D}d_{\pi_j}(\alpha_j\wedge\beta_j)\prod_{j\in \{1,\dots,d\}\setminus D}d_{\pi_j}(\alpha_j\wedge\omega_j)\frac{d\mu(\tau(D,\beta,\omega))}{\prod_{j\in D}M(\partial\mathcal{S}(\beta_j))}\,\prod_{j\in D}dM(\omega_j) \approx\\ & \sum_{D\subset\{1,\dots,d\}}\int_{\prod_{j\in D} T_j\times\prod_{k\in \{1,\dots,d\}\setminus D}\partial T_k}d_{\pi}(\alpha\wedge\tau)\,d\mu(\tau) = \int_{\overline{T}^d}d_{\pi}(\alpha\wedge\tau)\,d\mu(\tau) = \mathbb{V}_{\pi}^{\mu}(\alpha), \end{split} \end{equation} here the Cartesian product is taken according to the order of indices. $\blacksquare$ \begin{corollary}\label{c:1} Given a compact set $E\subset \overline{T}^d$ define its boundary projection $\mathcal{S}_b(E)\subset(\partial T)^d$ to be \[ \mathcal{S}_b(E) = \bigcup_{\beta\in E}\partial\mathcal{S}(\beta). \] Then there exists a constant $C>1$ depending only on $d$ and $\pi$ such that \begin{equation}\label{e:cA.3} \capp_{\pi} \mathcal{S}_b(E) \leq\capp_{\pi} E \leq C\capp_{\pi} \mathcal{S}_b(E). \end{equation} \end{corollary} \begin{proof} The left inequality is trivial, since any function admissible for $E$ is also admissible for $\mathcal{S}_b(E)$. Now let $\mu$ and $\nu$ be the equilibrium measures for $E$ and $\mathcal{S}_b(E)$ respectively. 
By definition of $\mu_b$ \begin{equation}\notag \begin{split} &|\mu_b| := \int_{(\partial T)^d}\,d\mu_b = \int_{(\partial T)^d}\sum_{D\subset\{1,\dots,d\}}\sum_{j\in D}\sum_{\beta_j>\omega_j}\frac{d\mu(\tau(D,\beta,\omega))}{\prod_{j\in D}M(\partial\mathcal{S}(\beta_j))}\,\prod_{j\in D}dM(\omega_j)=\\ &\sum_{D\subset\{1,\dots,d\}}\sum_{j\in D}\sum_{\beta_j\in T_j}\int_{\prod_{j\in D} \partial\mathcal{S}(\beta_j)}\int_{\prod_{k\in \{1,\dots,d\}\setminus D}\partial T_k}\frac{d\mu(\tau(D,\beta,\omega))}{\prod_{j\in D}M(\partial\mathcal{S}(\beta_j))}\,\prod_{j\in D}dM(\omega_j) =\\ &\sum_{D\subset\{1,\dots,d\}}\int_{\prod_{j\in D} T_j\times\prod_{k\in \{1,\dots,d\}\setminus D}\partial T_k}\,d\mu(\tau) = \int_{\overline{T}^d}\,d\mu(\tau) =: |\mu|. \end{split} \end{equation} By Theorem \ref{t:1} and the equilibrium property \[ |\mu_b| = |\mu| = \int_{\overline{T}^d}\mathbb{V}_{\pi}^{\mu}\,d\mu \approx \int_{\overline{T}^d}\mathbb{V}_{\pi}^{\mu_b}\,d\mu \approx \int_{\overline{T}^d}\mathbb{V}_{\pi}^{\mu_b}\,d\mu_b. \] On the other hand, for every $C\in\mathbb{R}$ we have \begin{equation}\notag 0 \leq \int_{\overline{T}^d}\mathbb{V}_{\pi}^{\mu_b}\,d\mu_b -2C\int_{\overline{T}^d}\mathbb{V}_{\pi}^{\nu}\,d\mu_b + C^2\int_{\overline{T}^d}\mathbb{V}_{\pi}^{\nu}\,d\nu \leq \int_{\overline{T}^d}\mathbb{V}_{\pi}^{\mu_b}\,d\mu_b -2C|\mu_b| + C^2|\nu|, \end{equation} since $\nu$ is equilibrium for $\mathcal{S}_b(E)$ and $\mathbb{V}_{\pi}^{\nu} \geq 1$ q.a.e. on $\mathcal{S}_b(E)\supset\supp\mu_b$. Hence, if we take $C$ to be large enough, we obtain \begin{equation}\notag 0 \leq \int_{\overline{T}^d}\mathbb{V}_{\pi}^{\mu_b}\,d\mu_b -C|\mu_b| + C\left(C|\nu|-|\mu_b|\right)\leq C\left(C|\nu|-|\mu_b|\right). \end{equation} Therefore \[ C\capp_{\pi} \mathcal{S}_b(E) = C|\nu| \geq |\mu_b| = |\mu| = \capp_{\pi} E, \] and we get the second half of \eqref{e:cA.3}. \end{proof} Note that the condition $s_j<1$ imposed on the standard polynomial weights is essential. 
Indeed, in the proof of Lemma \ref{l:51} one can see that if $s_j\geq 1$ for some $j$, then the capacity of $\partial T_j$ (and hence of $(\partial T)^d$) becomes zero. In this case we essentially leave the domain of weighted graph Dirichlet spaces and move to Hardy spaces, for which the capacity is a much less convenient instrument. Also, since $\pi$ is uniform, the equilibrium measure of the distinguished boundary $(\partial T)^d$ is actually $C M_d$ with $C = C(d,\pi)$.\\ \section{Hardy inequality on $d$-tree and properties of trace measures}\label{oggi} Assume $\mu \geq 0$ is a Borel measure on $\overline{T}^d$, and $f\geq0$ is a function on $T^d$. The multilinear weighted Hardy inequality is \begin{equation}\label{e:53} \int_{\overline{T}^d}(\mathbb{I}f)^2\,d\mu \leq C\sum_{\alpha\in T^d}f^2(\alpha)\pi(\alpha) = C\|f\|^2_{L^2(T^d,\,d\pi)}, \end{equation} for some constant $C>0$. A measure $\mu$ is called a \textit{trace measure for the Hardy inequality} if \eqref{e:53} holds for any $f\geq0$ with a constant $C = C_{\mu}$ depending only on $\mu$ (and, of course, on the weight $\pi$ and the dimension $d$). There is a vast amount of literature on various types of trace inequalities of the form above (see e.g. \cite{arsw2014}, \cite{ks1988}, \cite{ks1986} and references therein). Here we concentrate on this particular discrete version and investigate the relationship between different necessary and sufficient conditions.\\ Inequality \eqref{e:53} means that the operator $\mathbb{I}$ is bounded when acting from $L^2(T^d,\,d\pi)$ to $L^2(\overline{T}^d,\,d\mu)$. Equivalently, the adjoint operator, which we denote by $\mathbb{I}^*_{\mu}$, is bounded: \begin{equation}\label{e:54} \|\mathbb{I}^*_{\mu}g\|^2_{L^2(T^d,\,d\pi)} \leq C\|g\|^2_{L^2(\overline{T}^d,\,d\mu)}, \end{equation} for any $\mu$-measurable $g\geq0$ on $\overline{T}^d$.
Since \begin{equation}\notag \begin{split} &\langle\varphi, \mathbb{I}f\rangle_{L^2(\overline{T}^d,\,d\mu)} = \int_{\overline{T}^d}\varphi(\alpha)\sum_{\beta\geq\alpha}f(\beta)\pi(\beta)\,d\mu(\alpha)= \int_{\overline{T}^d}\varphi(\alpha)\sum_{\beta\in T^d}\chi_{\mathcal{S}(\beta)}(\alpha)f(\beta)\pi(\beta)\,d\mu(\alpha) =\\ &\sum_{\beta\in T^d}\int_{\mathcal{S}(\beta)}\varphi(\alpha)\,d\mu(\alpha)f(\beta)\pi(\beta), \end{split} \end{equation} we clearly have \[ \mathbb{I}^*_{\mu}g(\alpha) = \int_{\mathcal{S}(\alpha)}g\,d\mu. \] Another reason to consider this inequality is to study the connection between the Hardy inequality on the $d$-tree and the Carleson embedding for weighted Dirichlet-type spaces on the polydisc, which has been well established in \cite{ars2008}, \cite{arsw2014} for $d=1$, and, recently, in \cite{amps2018} for $d=2$ and $\pi\equiv 1$ (the unweighted case).\\ We start with the dual inequality \eqref{e:54}. Let $\mu\geq0$ be a Borel measure on the $d$-tree with finite energy, and assume for simplicity that $\supp\mu\subset(\partial T)^d$ (one can pass to the general case by a careful application of Theorem \ref{t:1} above). A set $E\subset (\partial T)^d$ is called rectangular if $E$ is a union of a finite collection of 'dyadic rectangles' on $(\partial T)^d$; in other words, there exists a collection of points $\{\alpha^j\}_{j=1}^N$ such that \[ E = \bigcup_{j=1}^N\partial\mathcal{S}(\alpha^j). \] Now fix such a set $E$ and let $g := \chi_{E}$; plugging $g$ into \eqref{e:54} we obtain \begin{equation}\notag \int_{\overline{T}^d}\chi_{E}\,d\mu = \mu(E) \gtrsim C_{\mu}\|\mathbb{I}^*_{\mu}\chi_E\|^2_{L^2(T^d,\,d\pi)} = C_{\mu}\sum_{\alpha\in T^d}\left(\int_{\mathcal{S}(\alpha)}\chi_E\,d\mu\right)^2\pi(\alpha) = C_{\mu}\sum_{\alpha\in T^d} \left(\mu\left(\partial\mathcal{S}(\alpha)\bigcap E\right)\right)^2\pi(\alpha).
\end{equation} Using the dyadic rectangle interpretation from the end of Section \ref{SS:2.35} we can rewrite this inequality as \begin{equation}\label{e:55} \mu(E) \gtrsim C_{\mu}\sum_{Q}\mu\left(Q\bigcap E\right)^2\pi(Q),\quad \textup{for any rectangular}\; E, \end{equation} where $Q = Q_{\alpha} = \partial \mathcal{S}(\alpha)$ is the uniquely defined 'dyadic rectangle' representing a point $\alpha\in T^d$, and $\pi(Q_{\alpha}):= \pi(\alpha)$. Clearly, the expression on the right-hand side of \eqref{e:55} is just the $\pi$-energy of $\mu$ restricted to the set $E$; we call this inequality the \textit{global charge-energy} condition. Moreover, if we only consider those rectangles $Q$ that are inside $E$, we obtain \begin{equation}\label{e:56} \mu(E) \gtrsim C_{\mu}\sum_{Q\subset E}\mu(Q)^2\pi(Q),\quad \textup{for any rectangular}\; E, \end{equation} which is called the \textit{local charge-energy} condition (the reasoning being that the right-hand side can be viewed as a 'local' $\pi$-energy of $\mu$ on the set $E$).\\ One of the questions we are interested in is whether one of these necessary conditions is also sufficient for \eqref{e:54}. We start with the global charge-energy condition. By Property \ref{l:A1.s.2.5} from Lemma \ref{l:1} one has \[ \capp_{\pi}E \gtrsim C_{\mu}\mu(E),\quad \textup{for any rectangular}\; E, \] that is, $\mu$ is a \textit{subcapacitary} measure. In \cite{amps2018} it was shown (for $d=2$ and the unweighted case) that the subcapacitary property indeed implies the trace condition \eqref{e:53}. Note that the subcapacitary condition should hold for any rectangular set $E$; if $\mu(E) \leq \capp_{\pi}E$ only for some particular set $E$, this does not necessarily imply \eqref{e:55} for that set.\\ Consider now the weaker local charge-energy condition \eqref{e:56}. In \cite{ahmv2018} it was shown that it is still equivalent to the trace inequality for $d=2$ and $\pi\equiv1$.
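The adjoint formula $\mathbb{I}^*_{\mu}g(\alpha) = \int_{\mathcal{S}(\alpha)}g\,d\mu$ and the duality behind \eqref{e:54} can be checked directly on a finite model. The sketch below (depth, weight, function and measure are arbitrary test choices, not data from the text) verifies the exact identity $\langle g, \mathbb{I}f\rangle_{L^2(\mu)} = \langle \mathbb{I}^*_{\mu}g, f\rangle_{L^2(\pi)}$ on a finite dyadic tree with $d=1$, the boundary being modelled by the leaves:

```python
import random

N = 6                      # depth of the finite dyadic tree; leaves model the boundary
random.seed(0)

verts = [(l, k) for l in range(N + 1) for k in range(2 ** l)]
leaves = [(N, k) for k in range(2 ** N)]
pi = {v: random.uniform(0.1, 1.0) for v in verts}     # arbitrary test weight
f = {v: random.uniform(0.0, 1.0) for v in verts}
mu = {w: random.uniform(0.0, 1.0) for w in leaves}    # arbitrary boundary measure
g = {w: random.uniform(0.0, 1.0) for w in leaves}

def geodesic(w):
    # all beta >= w, i.e. the vertices whose successor set S(beta) contains w
    return [(l, w[1] >> (N - l)) for l in range(N + 1)]

def If(w):
    # Hardy operator: If(w) = sum_{beta >= w} f(beta) pi(beta)
    return sum(f[b] * pi[b] for b in geodesic(w))

def I_star_g(b):
    # adjoint: I*g(b) = integral of g over S(b) against mu
    l, k = b
    return sum(g[w] * mu[w] for w in leaves if (w[1] >> (N - l)) == k)

lhs = sum(g[w] * If(w) * mu[w] for w in leaves)       # <g, If> in L^2(mu)
rhs = sum(I_star_g(b) * f[b] * pi[b] for b in verts)  # <I*g, f> in L^2(pi)
assert abs(lhs - rhs) < 1e-8
```

The identity holds exactly (up to floating point rounding) because it is just the interchange of the sum over vertices and the integral over the boundary carried out above.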
For $d=1$ and general $\pi$, a proof can be found in \cite{arsw2014}; see also \cite{ahmv2018} for a Bellman function approach. Here we want to present a slightly different approach, based on the maximal function inequality. \begin{theorem}\label{p:1} Assume that a Borel measure $\mu\geq0$ with finite energy and $\supp\mu\subset (\partial T)^d$ satisfies \eqref{e:56}. Then the trace inequality \eqref{e:54} follows, if the maximal function inequality \begin{equation}\label{e:57} \int_{(\partial T)^d}g^2\,d\mu \gtrsim C_{\mu}\int_{(\partial T)^d}(\mathcal{M}_{\mu}g)^2\,d\mu \end{equation} holds for any $g\in L^2(\overline{T}^d,\,d\mu)$, where \[ (\mathcal{M}_{\mu}g)(\beta) = \sup_{\beta \leq\alpha\in T^d}\frac{\int_{Q_{\alpha}}g\,d\mu}{\mu(Q_{\alpha})},\quad \beta\in \overline{T}^d. \] \end{theorem} \begin{proof} Fix a function $g\in L^2(\overline{T}^d,\,d\mu)$ and some $k\in\mathbb{Z}$, and consider the set $E_{k}:= \{\omega\in (\partial T)^d:\; (\mathcal{M}_{\mu}g)(\omega)>2^k\}$. Clearly there exists a sequence $\{\alpha^j_{k}\}_{j=1}^{\infty}$ such that $E_{k} = \bigcup_{j=1}^{\infty}\partial\mathcal{S}(\alpha^j_{k})$. Approximating $E_{k}$ by the rectangular sets $E^{n}_{k} = \bigcup_{j=1}^{n}\partial\mathcal{S}(\alpha^j_{k})$ we see that the local charge-energy condition implies \[ \mu (E_k) \gtrsim C_{\mu}\sum_{Q\subset E_k}\mu(Q)^2\pi(Q),\quad k\in\mathbb{Z}. \] By a distribution function argument and the maximal function inequality \eqref{e:57} we have \begin{equation}\notag \begin{split} &C_{\mu}^{-1}\int_{(\partial T)^d}g^2\,d\mu \gtrsim \int_{(\partial T)^d}(\mathcal{M}_{\mu}g)^2\,d\mu \approx \sum_{k\in\mathbb{Z}}2^{2k}\mu(E_k) \gtrsim \sum_{k\in\mathbb{Z}}2^{2k}\sum_{Q\subset E_k}\mu(Q)^2\pi(Q) \gtrsim\\ &\sum_{\beta\in T^d}(\mathcal{M}_{\mu}g)^2(\beta)\mu^2(Q_{\beta})\pi(Q_{\beta}), \end{split} \end{equation} since $\bigcup_{k\in\mathbb{Z}} E_k = (\partial T)^d$.
On the other hand, \begin{equation}\notag \begin{split} &\sum_{\beta\in T^d}(\mathcal{M}_{\mu}g)^2(\beta)\mu^2(Q_{\beta})\pi(Q_{\beta}) = \sum_{\beta\in T^d}\left(\sup_{\beta \leq\alpha}\frac{\int_{Q_{\alpha}}g\,d\mu}{\mu(Q_{\alpha})}\right)^2\mu^2(Q_{\beta})\pi(Q_{\beta}) \geq \sum_{\beta\in T^d}\left(\frac{\int_{Q_{\beta}}g\,d\mu}{\mu(Q_{\beta})}\right)^2\mu^2(Q_{\beta})\pi(Q_{\beta})=\\ & \sum_{\beta\in T^d}\left(\int_{Q_{\beta}}g\,d\mu\right)^2\pi(Q_{\beta}) = \|\mathbb{I}^*_{\mu}g\|^2_{L^2(T^d,\,d\pi)}. \end{split} \end{equation} We are done. \end{proof} For $d=1$ this proposition solves the problem, since the maximal function operator is obviously bounded. In higher dimensions \eqref{e:57} fails for some measures; due to the presence of cycles in $T^d$, several rectangles can have a non-trivial intersection. However, the counterexamples to \eqref{e:57} that we are aware of are of a rather non-subcapacitary nature; that is, all of them also fail to satisfy \eqref{e:56}. Therefore one can ask whether the local charge-energy inequality can be transformed into some sufficient condition for the maximal function inequality. This might not be straightforward, since \eqref{e:56} and \eqref{e:57} scale differently. Another question is connected to the nature of the rectangular sets on which we test the trace inequality. In the one-dimensional case it is sufficient that \eqref{e:56} holds for all single rectangles (dyadic intervals). One would expect that a single box test, i.e. \eqref{e:56} for single rectangles, is no longer sufficient when $d\geq 2$. One might compare with the description of Carleson measures for the Hardy space on the bidisc \cite{chang1979}, but note that we have been discussing the dual inequality of the Hardy inequality \eqref{e:53}. The single box test for \eqref{e:53} is just a subcapacitary condition, and it fails to be sufficient already for $d=1$, since, generally, capacity is not additive.
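For $d=1$ the boundedness of $\mathcal{M}_{\mu}$ invoked above is Doob's martingale maximal inequality, valid for an arbitrary measure $\mu$ because the dyadic cells form a filtration. A small numerical check (the random $g$ and $\mu$ are arbitrary test choices) of the $L^2$ bound $\|\mathcal{M}_{\mu}g\|_{L^2(\mu)}\leq 2\|g\|_{L^2(\mu)}$:

```python
import random

N = 7                                                  # dyadic cells down to level N
random.seed(1)
leaves = list(range(2 ** N))
mu = {w: random.uniform(0.01, 1.0) for w in leaves}    # arbitrary test measure
g = {w: random.uniform(0.0, 1.0) for w in leaves}      # arbitrary test function

def average(l, k):
    # mu-average of g over the dyadic cell at level l with index k
    cell = [w for w in leaves if (w >> (N - l)) == k]
    return sum(g[w] * mu[w] for w in cell) / sum(mu[w] for w in cell)

def maximal(w):
    # M_mu g(w): sup of mu-averages over all dyadic cells containing the leaf w
    return max(average(l, w >> (N - l)) for l in range(N + 1))

lhs = sum(maximal(w) ** 2 * mu[w] for w in leaves)     # ||M_mu g||^2 in L^2(mu)
rhs = sum(g[w] ** 2 * mu[w] for w in leaves)           # ||g||^2 in L^2(mu)
assert lhs <= 4.0 * rhs   # Doob: ||M_mu g||_2 <= 2 ||g||_2 for any mu
```

It is precisely this filtration structure that is lost on $T^d$ for $d\geq2$, where a boundary point belongs to incomparable rectangles.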
In particular, the single box test for \eqref{e:53} on an unweighted dyadic tree asks that $\mu(Q) \lesssim \frac{1}{\log\frac{1}{|Q|}}$. However, if we ask a little bit more from this single box test, we can obtain sufficient conditions for $\mu$ to satisfy the trace inequalities \eqref{e:53} and \eqref{e:54}. \begin{proposition}\label{p:2} Let $\pi$ be a standard polynomial weight, and let $\varphi:[0,2]^d\rightarrow\mathbb{R}_+$ be a function, increasing in each variable, such that \begin{equation}\label{e:p2.1} \begin{split} &\int_{[0,2]^d}\frac{\varphi(t_1,\dots,t_d)}{t_1^{1+s_1}\cdot\dots\cdot t_d^{1+s_d}}\,dt_1\dots dt_d < +\infty. \end{split} \end{equation} If $\mu\geq0$ is a Borel measure on $(\partial T)^d$ satisfying \begin{equation}\label{e:p2.3} \mu(Q_{\alpha}) \leq \varphi(M(Q_{\alpha_1}),\dots,M(Q_{\alpha_d})), \end{equation} for any $\alpha = (\alpha_1,\dots,\alpha_d)\in T^d$, then $\mu$ is a trace measure for the Hardy inequality \eqref{e:53}. \end{proposition} \begin{proof} First we show that \eqref{e:p2.3} implies that $\mathbb{V}_{\pi}^{\mu}(\omega) \lesssim 1$ for any $\omega\in (\partial T)^d$. The weight $\pi$ is uniform; in particular, it (along with the estimate \eqref{e:p2.3}) depends only on the generation numbers of $\beta$, and not on the location of $\beta$ within $T^d$. Therefore it is enough to show the boundedness of the potential at one point only, say $\omega = (0,\dots,0)$ (i.e. the point that corresponds to the leftmost geodesic taken on each tree $T_j$,\; $j=1,\dots,d$). For any multi-index $n = (n_1,\dots,n_d)\in \mathbb{N}^d$ there exists a unique point $\beta(n)\geq\omega$ with those exact generation numbers (i.e.
$d(\beta_j(n)) = n_j$ and $M(Q_{\beta_j(n)}) = 2^{-n_j}$), hence \[ \mathbb{V}^{\mu}_{\pi}(\omega) = \sum_{\beta\geq\omega}\mu(Q_{\beta})\pi(Q_{\beta}) \leq \sum_{n\in\mathbb{N}^d}\varphi(2^{-n_1},\dots,2^{-n_d})2^{s_1n_1+\dots+s_dn_d} \lesssim \int_{[0,1]^d}\frac{\varphi(t_1,\dots,t_d)}{t_1^{1+s_1}\cdot\dots\cdot t_d^{1+s_d}}\,dt_1\dots dt_d. \] It follows immediately that for any $g\in L^2(\overline{T}^d,\,d\mu)$ one has \begin{equation}\notag \begin{split} &\|\mathbb{I}^*_{\mu}g\|^2_{L^2(T^d,\,d\pi)} = \sum_{\alpha\in T^d}\left(\int_{Q_{\alpha}}g\,d\mu\right)^2\pi(Q_{\alpha}) \leq \sum_{\alpha\in T^d}\int_{Q_{\alpha}}g^2\,d\mu\cdot \mu(Q_{\alpha})\pi(Q_{\alpha})\leq\\ &\int_{(\partial T)^d}\sum_{\alpha\geq \omega}\mu(Q_{\alpha})\pi(Q_{\alpha})g^2(\omega)\,d\mu(\omega) \lesssim C_{\mu}\int_{(\partial T)^d}g^2\,d\mu. \end{split} \end{equation} \end{proof}
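The sum-versus-integral comparison used in the last display can be illustrated numerically for $d=1$; the choices $\varphi(t)=t$ and $s=1/2$ below are arbitrary test data satisfying \eqref{e:p2.1}, and the factor $2^{1+s}$ is the crude constant one gets by comparing each dyadic term with the integral over the corresponding dyadic interval:

```python
s = 0.5                        # standard polynomial weight exponent, s < 1
phi = lambda t: t              # an increasing test function satisfying (e:p2.1)

# dyadic sum that bounds the potential V(omega) in the proof above
S = sum(phi(2.0 ** (-n)) * 2.0 ** (s * n) for n in range(200))

# midpoint-rule approximation of the integral of phi(t)/t^(1+s) over (0, 2]
cells = 2 ** 18
h = 2.0 / cells
integral = sum(phi((k + 0.5) * h) / ((k + 0.5) * h) ** (1 + s) * h
               for k in range(cells))

# the dyadic sum is controlled by the integral, here up to a factor 2^(1+s)
assert S <= 2.0 ** (1 + s) * integral
```

For this particular $\varphi$ both sides are explicit: the sum is the geometric series $\sum_n 2^{-n/2}\approx 3.414$ and the integral is $\int_0^2 t^{-1/2}\,dt = 2\sqrt{2}$, so the inequality holds with a large margin.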
\section{Introduction \label{introduction}} Quantum information theory (QIT) and condensed matter physics share an interest in entanglement \cite{Bell}. The main motivation behind this interest is twofold. On the one hand, in QIT entanglement is recognized as an essential resource for implementing quantum information tasks such as quantum computation, quantum cryptography and dense coding \cite{Nielsen}. It is also believed that protocols based on entangled quantum states are much faster than the classical ones. On the other hand, entanglement is a unique measure of the quantum correlation of a pure state in condensed matter physics. Thus, for condensed matter systems the entanglement can be considered as a kind of correlation which appears in pure states. The role of entanglement in quantum phase transitions (QPT) \cite{Sachdev} is of considerable interest \cite{Osterloh}. Quantum phase transitions occur at absolute zero temperature. They are induced by the change of an external parameter or coupling constant, and are driven by quantum fluctuations. Quantum as well as classical phase transitions are characterized by nonanalytic behavior in some physical properties of the system. A transition is often accompanied by a divergence in some correlation functions, but quantum systems possess additional correlations which do not exist in their classical counterparts, {\it the entanglement}. Entanglement is a direct measure of quantum correlations and shows nonanalytic behavior, such as a discontinuity, in the vicinity of the quantum critical point \cite{Wu,latorre}. In the past few years several studies have investigated the behavior of entanglement in the vicinity of the quantum critical point for different spin models \cite{Osterloh,Vidal1,Vidal2,Bose,Osborne,Verstraete} as well as itinerant systems \cite{Zanardi,Gu,Anfossi}.
In our recent work \cite{kargarian} we introduced the notion of renormalization of entanglement and showed that this notion truly describes the nonanalytic behavior of the derivative of entanglement close to the critical point of the Ising model in a transverse field (ITF). Moreover, we investigated how the critical point is reached upon increasing the size of the system via the renormalization group (RG) approach. The finite size scaling demonstrates that the RG of entanglement truly captures the critical behavior of the model. The renormalization of quantum states has also been introduced in terms of matrix product states \cite{fv}. In this work we apply the same approach to the anisotropic spin-1/2 Heisenberg (XXZ) model in order to investigate the critical behavior of the model, using the evolution of entanglement through the quantum renormalization group \cite{pfeuty, wilson} close to the quantum critical point. In this model the concurrence between two nearest-neighbor sites develops a maximum at the isotropic point ($\Delta=1$) without any singularity in its first derivative \cite{Shi3}, i.e., the first derivative vanishes at the critical point $\Delta=1$. However, in our approach a singularity, i.e., a divergence of the first derivative, is seen at the critical point. We expect our approach to be efficient and powerful for studying quantum phase transitions in various systems, since the thermodynamic limit of the model is simply captured through analytic RG equations. The quantum renormalization group with fixed boundary conditions gives rise only to the nontrivial fixed point $\Delta=1$ and the trivial fixed points $\Delta=0,\infty$ for the XXZ model, while for every point in the region $0\leq\Delta\leq1$ the model is critical, i.e., gapless.
A modification of the boundary conditions by means of the quantum group concept \cite{Mdelgado} restores the gapless properties of the region $0\leq\Delta\leq1$, although the critical behavior of the model and its scaling properties remain unchanged from the gapped phase. This article is organized as follows. In section-\ref{qrg} the main idea of the quantum renormalization group is briefly reviewed, and in section-\ref{xxz-qrg} this idea is applied to the XXZ model. In section-\ref{ent-rg} the main idea of this paper, the renormalization of the entanglement entropy and concurrence, is introduced, and section-\ref{qgqrg} is devoted to the quantum group analysis in order to get more insight into the critical features of the model. Finally, in section-\ref{conclusion} we summarize our results. \section{Quantum renormalization group \label{qrg}} The main idea of the RG method is the mode elimination or thinning of the degrees of freedom, followed by an iteration which reduces the number of variables step by step until a fixed point is reached. In Kadanoff's approach, the lattice is divided into blocks. Each block is treated independently to build the projection operator onto the lower energy subspace. The projection of the inter-block interaction is mapped to an effective Hamiltonian ($H^{eff}$) which acts on the renormalized subspace \cite{miguel1,langari}. The procedure starts by decomposing the Hamiltonian into two parts, \begin{eqnarray} \label{eq2} H=H^{B}+ H^{BB}, \end{eqnarray} where the block Hamiltonian $H^{B}$ is a sum of commuting terms, each acting on a different block. The inter-block interaction is represented by $H^{BB}$. The perturbative implementation of this method has been discussed in Refs.[\onlinecite{miguel1,jafari}]. We shortly present this approach to first-order correction. The zeroth order effective Hamiltonian is given by \begin{eqnarray} \label{eq5} H^{eff}_{0}=P_{0}H^{B}P_{0}, \end{eqnarray} where $P_{0}$ is a projection operator.
Since $H^{B}$ is a sum of disconnected block Hamiltonians \begin{eqnarray} \nonumber H^{B}=\sum_{I=1}^{N'}h_{I}^{B}, \end{eqnarray} one can search for a solution of $P_{0}$ in a factorised form \begin{eqnarray} \nonumber P_{0}=\prod_{I=1}^{N'}P_{0}^{I}, \end{eqnarray} where $N'$ is the number of blocks. In the standard quantum renormalization group, $P_{0}^{I}$ is given by \begin{eqnarray} \nonumber P_{0}^{I}=\sum_{i=1}^{k}|\psi_{i}\rangle\langle\psi_{i}|, \end{eqnarray} where $|\psi_{i}\rangle~(i=1,\cdots, k)$ are the $k$ lowest energy eigenstates of $h_{I}^{B}$. The interaction between blocks defines the first-order correction via \begin{eqnarray} \label{eq6} H_{1}^{eff}=P_{0}H^{BB}P_{0}. \end{eqnarray} The effective (renormalized) Hamiltonian is then \begin{equation} \label{eq7} H^{eff}=H^{eff}_{0}+H_{1}^{eff}. \end{equation} We will implement this approach in the next sections to obtain the quantum properties of the XXZ spin chain. \section{Renormalization of the XXZ model \label{xxz-qrg}} The Hamiltonian of the XXZ model on a periodic chain of $N$ sites is \begin{eqnarray} H(J,\Delta)=\frac{J}{4}\sum_{i}^{N}\left(\sigma_{i}^{x}\sigma_{i+1}^{x}+ \sigma_{i}^{y}\sigma_{i+1}^{y}+ \Delta\sigma_{i}^{z}\sigma_{i+1}^{z}\right) \label{eq23} \end{eqnarray} where $J, \Delta>0$; $J$ is the exchange coupling, $\Delta$ is the anisotropy parameter and $\sigma^{\alpha}$ ($\alpha=x, y, z$) are the Pauli matrices. For $\Delta=1$, the Hamiltonian is $SU(2)$ invariant, but for $\Delta\neq1$ the $SU(2)$ symmetry breaks down to the $U(1)$ rotational symmetry around the $z$-axis. The model is exactly solvable by means of the Bethe Ansatz as long as the rotational symmetry exists. It is known that for $0\leq\Delta\leq1$ the model is gapless with quasi-long-range order, where the correlations decay algebraically and there is no magnetic long-range order.
For $\Delta>1$, the symmetry is reduced to $Z_{2}$ and a gap opens; the model is in the universality class of the one-dimensional antiferromagnetic Ising chain. Indeed, the third term in the Hamiltonian causes ordering in the system, and as $\Delta$ tends to infinity the N\'{e}el state is the dominant phase of the system. The first two terms in the Hamiltonian spread the quantum fluctuations in the system and tend to destroy the N\'{e}el ordering. It is shown that the competition between the quantum fluctuations and the ordering yields a maximum of the concurrence between two nearest-neighbor sites, which shows a scaling behavior at the critical point $\Delta=1$ \cite{Shi1,Shi2}. To implement the idea of QRG to calculate the entanglement and concurrence we use a three-site block procedure, where the block Hamiltonian is \begin{eqnarray}\label{eq24} h_{I}^{B}&=&\frac{J}{4}\Big[\sigma_{1,I}^{x}\sigma_{2,I}^{x}+\sigma_{2,I}^{x}\sigma_{3,I}^{x}+ \sigma_{1,I}^{y}\sigma_{2,I}^{y}+\sigma_{2,I}^{y}\sigma_{3,I}^{y} \nonumber \\ &&+\Delta(\sigma_{1,I}^{z}\sigma_{2,I}^{z}+\sigma_{2,I}^{z}\sigma_{3,I}^{z})\Big]. \end{eqnarray} In this case the inter-block ($H^{BB}$) and intra-block ($H^{B}$) Hamiltonians are \begin{eqnarray} \label{eq25} H^{BB}=\frac{J}{4}\left[\sum_{I}^{N/3}(\sigma_{3,I}^{x}\sigma_{1,I+1}^{x}+ \sigma_{3,I}^{y}\sigma_{1,I+1}^{y}+\Delta\sigma_{3,I}^{z}\sigma_{1,I+1}^{z})\right], \end{eqnarray} \begin{eqnarray} \label{eq26} H^{B}=\sum_{I}^{N/3} h_{I}^{B}. \end{eqnarray} A remark is in order here: choosing a three-site block is essential to get a self-similar Hamiltonian after each RG step. An odd-site XXZ Hamiltonian has two degenerate ground states, which are used to construct the projection operator of the quantum RG (QRG).
These degenerate ground states are \begin{eqnarray} \label{eq27-1} |\phi_{0}\rangle=\frac{1}{\sqrt{2+q^{2}}}(|\uparrow\uparrow\downarrow\rangle+ q|\uparrow\downarrow\uparrow\rangle+|\downarrow\uparrow\uparrow\rangle),\end{eqnarray} \begin{eqnarray} \label{eq27-2} |\phi_{0}'\rangle=\frac{1}{\sqrt{2+q^{2}}}(|\uparrow\downarrow\downarrow\rangle+ q|\downarrow\uparrow\downarrow\rangle+|\downarrow\downarrow\uparrow\rangle), \end{eqnarray} where $q$ is \begin{equation} \label{eq28} q=\frac{-1}{2}[\Delta+\sqrt{\Delta^{2}+8}]. \end{equation} The corresponding energy is \begin{eqnarray} \label{eq29} e_{0}=-\frac{J}{4}[\Delta+\sqrt{\Delta^{2}+8}], \end{eqnarray} and $|\uparrow \rangle$, $|\downarrow \rangle$ are the eigenstates of $\sigma^{z}$. The projection operator ($P_{0}$) for the $I$-th block is defined as \begin{eqnarray} \label{eq30} P_{0}^{I}=|\Uparrow\rangle_{I}\langle\phi_{0}|+|\Downarrow\rangle_{I}\langle\phi_{0}'|, \end{eqnarray} where $|\Uparrow\rangle_{I}$ and $|\Downarrow \rangle_{I}$ are the renamed states of each block, representing the effective site degrees of freedom. The renormalization of the Pauli matrices is given by \begin{eqnarray} \label{eq31} P_{0}^{I}\sigma_{i,I}^{\alpha}P_{0}^{I}=\xi_{i}^{\alpha}{\sigma'}_{I}^{\alpha}~~~~~~(i=1,2,3~~;~~\alpha=x,y,z), \end{eqnarray} where \begin{eqnarray} \label{eq32} \xi_{1}^{x,y}=\xi_{3}^{x,y}=\frac{2q}{2+q^{2}}~~~&,&~~~ \xi_{1}^{z}=\xi_{3}^{z}=\frac{q^{2}}{2+q^{2}} \nonumber \\ \xi_{2}^{x,y}=\frac{2}{2+q^{2}}~~~&,&~~~ \xi_{2}^{z}=\frac{2-q^{2}}{2+q^{2}}, \end{eqnarray} where the indices 1, 2 and 3 refer to the sites labeled in a single block according to Fig.(\ref{3siteblock}).
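Eqs.(\ref{eq27-1})--(\ref{eq29}) can be cross-checked by diagonalizing the block Hamiltonian of Eq.(\ref{eq24}) numerically; the sketch below (the values of $J$ and $\Delta$ are arbitrary test choices) confirms the doubly degenerate ground level at $e_{0}$:

```python
import numpy as np

J, Delta = 1.0, 1.3        # arbitrary test couplings

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(a, site):
    # embed a single-site operator into the 3-site Hilbert space
    mats = [I2, I2, I2]
    mats[site] = a
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# three-site block Hamiltonian h_I^B of Eq. (eq24)
H = (J / 4) * (op(sx, 0) @ op(sx, 1) + op(sx, 1) @ op(sx, 2)
               + op(sy, 0) @ op(sy, 1) + op(sy, 1) @ op(sy, 2)
               + Delta * (op(sz, 0) @ op(sz, 1) + op(sz, 1) @ op(sz, 2)))

evals = np.linalg.eigvalsh(H)
e0 = -(J / 4) * (Delta + np.sqrt(Delta ** 2 + 8))   # Eq. (eq29)
assert np.allclose(evals[:2], e0)    # doubly degenerate ground level at e0
assert evals[2] > e0 + 1e-8          # separated from the rest of the spectrum
```

The two-fold degeneracy corresponds to the $S^{z}=\pm1/2$ states $|\phi_{0}\rangle$ and $|\phi_{0}'\rangle$, which is exactly what makes the identification with an effective spin-1/2 site possible.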
\begin{figure} \begin{center} \includegraphics[width=8cm]{3siteblock.eps} \caption{Three sites in a block labeled sequentially.} \label{3siteblock} \end{center} \end{figure} The effective Hamiltonian (using Eq.(\ref{eq7})) of the renormalized chain is again an XXZ chain with the scaled couplings \begin{eqnarray} \label{eq33} H^{eff}=\frac{J'}{4}\sum_{i}^{N/3}\left({\sigma}_{i}^{x}{\sigma}_{i+1}^{x}+{\sigma}_{i}^{y}{\sigma}_{i+1}^{y}+ \Delta'\,{\sigma}_{i}^{z}{\sigma}_{i+1}^{z}\right),\end{eqnarray} where \begin{eqnarray} \label{eq34} J'=J(\frac{2q}{2+q^{2}})^{2}~~~,~~~\Delta'=\Delta\frac{q^{2}}{4}. \end{eqnarray} The stable and unstable fixed points of the QRG equations are obtained by solving $\Delta'=\Delta\equiv\Delta^*$. The stable fixed points are located at zero and infinity, while $\Delta^*=1$ is the unstable fixed point which specifies the critical point of the model. Starting with any initial value $\Delta>1$, the coupling constant flows toward infinity, showing that the model falls into the universality class of the Ising model, while for $\Delta<1$ the stable fixed point $\Delta=0$ is reached. As we have mentioned previously, for $0\leq\Delta\leq1$ the model represents a spin fluid phase. The transition between the two phases is truly captured by the real space QRG. The main discrepancy between our results on the Ising model in a transverse field \cite{kargarian} (ITF) and the present ones on the XXZ model comes from the fact that the XXZ model is critical for all values of $0\leq\Delta\leq1$, but the QRG equations fail to show the whole critical region except $\Delta=0$. In fact, the QRG prescription only represents the masslessness property of the XY fixed point ($\Delta=0$). However, when the coarse graining procedure is implemented with appropriate boundary conditions, in order to get more correlation between blocks as is done in the quantum group method \cite{Mdelgado}, the critical line of the model is truly predicted (see section \ref{qgqrg}).
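The flow just described can be checked directly by iterating the recursion of Eq.(\ref{eq34}); a minimal sketch (the initial values and iteration counts are arbitrary choices):

```python
import math

def rg_step(d):
    # one step of the QRG map, Eq. (eq34): Delta' = Delta q^2 / 4, q from Eq. (eq28)
    q = -0.5 * (d + math.sqrt(d * d + 8.0))
    return d * q * q / 4.0

# Delta* = 1 is the (unstable) fixed point: q(1) = -2, so Delta' = Delta
assert abs(rg_step(1.0) - 1.0) < 1e-12

# Delta < 1 flows to the stable XY fixed point Delta = 0
d = 0.9
for _ in range(60):
    d = rg_step(d)
assert d < 1e-6

# Delta > 1 flows toward the Ising limit Delta -> infinity
d = 1.1
for _ in range(60):
    d = rg_step(d)
    if d > 1e6:
        break
assert d > 1e6
```

Linearizing the map at $\Delta^{*}=1$ gives $d\Delta'/d\Delta = 5/3 > 1$, which confirms the instability of the critical fixed point.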
\section{Renormalized entanglement and its scaling property \label{ent-rg}} \subsection{\emph{Entanglement and concurrence}} In this subsection we calculate the concurrence and entanglement of the XXZ chain using our proposal, which implements the idea of the renormalization group. As we have mentioned previously, a finite size block is treated exactly to calculate the physical quantities. The coupling constants of a finite block are renormalized via the QRG prescription to give the large size behaviour. The XXZ spin-1/2 Hamiltonian has two degenerate ground states in the three-site block; however, to define the density matrix we have to consider one of them. Thus, the density matrix is defined by \begin{equation} \label{eq35} \varrho=|\phi_{0}\rangle\langle\phi_{0}|, \end{equation} where $|\phi_{0}\rangle$ has been introduced in Eq.(\ref{eq27-1}). The results will be the same if we consider $|\phi'_{0}\rangle$ to construct the density matrix. There are basically two choices to define the concurrence and entanglement for a three-site block. ({\it 1}) The symmetric case, in which the concurrence between sites 1 and 3 is obtained by summing over the degrees of freedom of site 2, and the entanglement between site 2 and the remaining sites of the block is measured by the von Neumann entropy. ({\it 2}) We sum over site 1 or 3 and get the concurrence between the middle site (2) and one site at the corner of the block (see Fig.(\ref{3siteblock})). Without loss of generality we only concentrate on case ({\it 1}). The density matrix defined in Eq.(\ref{eq35}) is traced over the site 2 degrees of freedom to get the reduced density matrix of sites 1 and 3 ($\varrho_{13}$), which gives \begin{eqnarray} \label{eq36} \varrho_{13}=\frac{1}{2+q^{2}} \left( \begin{array}{cccc} q^{2} & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} \right).
\end{eqnarray} The corresponding eigenvalues of $\hat{R}=\varrho_{13}\tilde{\varrho}_{13}$ (where $\tilde{\varrho}_{13}= (\sigma^{y}_{1}\otimes\sigma^{y}_{3})\varrho_{13}^{\ast}(\sigma^{y}_{1}\otimes\sigma^{y}_{3})$) are, in ascending order, \begin{eqnarray} \label{eq37} \lambda_{1}= \lambda_{2}= \lambda_{3}=0\;\;,\;\; \lambda_{4}=\frac{4}{(2+q^{2})^{2}}. \end{eqnarray} Thus, the concurrence is \begin{eqnarray} \label{eq38} C_{13}=\max\lbrace\lambda_{4}^{1/2}, 0\rbrace=\frac{2}{2+q^{2}}. \end{eqnarray} According to Eq.(\ref{eq28}), $C_{13}$ is a function of $\Delta$. The renormalization of $\Delta$ defines the evolution of the concurrence as the size of the system becomes large. We have plotted in FIG.\ref{fig22} the value of $C_{13}$ versus $\Delta$ for different QRG iterations. \begin{figure} \begin{center} \includegraphics[width=8cm]{fig2.eps} \caption{(color online) Representation of the evolution of concurrence in terms of RG iterations (steps).} \label{fig22} \end{center} \end{figure} The plots of $C_{13}$ versus $\Delta$ for different QRG steps cross each other at the scale invariant critical point, $\Delta_c\equiv\Delta^*=1$. By increasing the size of the system (higher QRG steps) the concurrence develops two different behaviours which are separated at $\Delta_c$. After enough QRG steps the value of $C_{13}$ saturates for $0\leq\Delta<1$, which shows the existence of quantum correlations between two blocks at large distances in an infinite chain that is effectively described by a three-site model with the renormalized coupling constants. In this region ($0\leq\Delta\leq1$) the quantum fluctuations arising from the transverse interactions have the dominant effect and destroy any long-range order. In spite of the lack of long-range order, the evolution of the concurrence via RG indicates that the spin fluid phase contains quantum correlations, i.e., the qubits in the presence of quantum fluctuations are quantum correlated.
For $\Delta>1$, $C_{13}$ saturates asymptotically to \emph{zero}, representing the lack of quantum correlations in the Ising limit. We have also examined the entanglement of formation \cite{entanglement of formation} between sites 1 and 3, which can be obtained from the following relation: \begin{eqnarray} \label{eq39} E_1&=&-y\log_{2}(y)-(1-y)\log_{2}(1-y), \nonumber \\ y&=&\frac{1}{2}+\frac{1}{2}\sqrt{1-C_{13}^{2}}. \end{eqnarray} The entanglement of formation shows behavior similar to FIG.\ref{fig22} because of the monotonic relation between the concurrence and the entanglement of formation. In the symmetric case we can also study the entanglement between the middle site (2) and the remaining sites of the block. The amount of entanglement is given by the von Neumann entropy of the reduced density matrix obtained after tracing out the other sites of the block, \begin{eqnarray} \label{eq43} \varrho_{2}=\frac{1}{2+q^{2}} \left( \begin{array}{cc} 2 & 0 \\ 0 & q^{2} \\ \end{array} \right).\end{eqnarray} The von Neumann entropy, which gives the entanglement of site 2 with the rest of the block, is \begin{eqnarray} \label{eq44} E=-\frac{2}{2+q^{2}}\log_{2}\frac{2}{2+q^{2}}- \frac{q^{2}}{2+q^{2}}\log_{2}\frac{q^{2}}{2+q^{2}}. \end{eqnarray} The variation of the entanglement ($E$) versus $\Delta$ has been plotted in Fig.\ref{fig21}. Different plots show the evolution of $E$ under QRG iterations. In other words, the different steps of QRG show how the entanglement evolves as the size of the chain is increased. Similar to $C_{13}$, $E$ behaves as an order parameter which takes a nonzero value for $0\leq\Delta<1$ and zero for $\Delta>1$ in the infinite size limit. A \emph{nonzero} $E$ for $0\leq\Delta<1$ verifies again that the state of the model is entangled for $0\leq\Delta<1$, where the ground state is characterized by a gapless excitation and an algebraic decay of spin correlations (the spin fluid phase), while $E$ is zero for the N\'{e}el state ($\Delta>1$), which is not an entangled state.
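Both closed forms, Eq.(\ref{eq38}) for $C_{13}$ and Eq.(\ref{eq44}) for $E$, can be recovered numerically from $|\phi_{0}\rangle$ by explicit partial traces; a verification sketch (the value of $\Delta$ is an arbitrary test choice):

```python
import numpy as np

Delta = 0.7                                    # arbitrary test anisotropy
q = -0.5 * (Delta + np.sqrt(Delta ** 2 + 8))   # Eq. (eq28)

# |phi0> of Eq. (eq27-1) in the computational basis (up=0, down=1, site 1 first)
phi = np.zeros(8)
phi[0b001] = 1.0    # |up up down>
phi[0b010] = q      # |up down up>
phi[0b100] = 1.0    # |down up up>
phi /= np.linalg.norm(phi)
rho = np.outer(phi, phi).reshape(2, 2, 2, 2, 2, 2)   # (s1,s2,s3,s1',s2',s3')

# reduced density matrix of sites 1 and 3 (middle site traced out), Eq. (eq36)
rho13 = np.einsum('abcdbf->acdf', rho).reshape(4, 4)

# Wootters concurrence of rho13
sy = np.array([[0, -1j], [1j, 0]])
Y = np.kron(sy, sy)
lam = np.linalg.eigvals(rho13 @ Y @ rho13.conj() @ Y).real
lam = np.sqrt(np.clip(np.sort(lam), 0, None))
C13 = max(0.0, lam[-1] - lam[:-1].sum())
assert np.isclose(C13, 2.0 / (2.0 + q * q))          # Eq. (eq38)

# von Neumann entropy of the middle site, Eqs. (eq43)-(eq44)
rho2 = np.einsum('abcadc->bd', rho)
p = np.linalg.eigvalsh(rho2)
E = -sum(x * np.log2(x) for x in p if x > 1e-12)
x = 2.0 / (2.0 + q * q)
assert np.isclose(E, -x * np.log2(x) - (1 - x) * np.log2(1 - x))
```

This check only reproduces the single-block formulas; the RG evolution enters by replacing $\Delta$ with its renormalized value after each iteration of Eq.(\ref{eq34}).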
\begin{figure} \begin{center} \includegraphics[width=8cm]{fig3.eps} \caption{(color online) The evolution of the entanglement entropy as the size of the system grows (i.e. different RG iterations).} \label{fig21} \end{center} \end{figure} As previously discussed, the entanglement between the middle site (2) and the remaining ones tends to maximal entanglement in the coupling region $0\leq \Delta<1$ after a few RG steps (Fig.\ref{fig21}), while the entanglement of formation between sites 1 and 3 tends to a nonmaximal value; this is in fact a manifestation of the monogamy property of entanglement \cite{Coffman}. On the basis of the monogamy property, maximal entanglement between two parties restricts their entanglement with a third party and vice versa. \subsection{\emph{Nonanalytic and scaling behavior}} Nonanalytic behavior in some physical quantity is a feature of a second-order quantum phase transition. It is also accompanied by a scaling behavior, since the correlation length diverges and there is no characteristic length scale in the system at the critical point. There have been a number of theoretical studies on entanglement and quantum phase transitions. These studies show that the entanglement, as a direct measure of quantum correlations, indicates critical behavior such as the divergence of its derivative as the phase transition is crossed. Osterloh \emph{et al.} \cite{Osterloh} have verified that the entanglement in the vicinity of the critical point of the ITF and XX models in a transverse field shows a scaling behavior. It has also been shown that the entanglement between a block of spins and the rest of the system scales near the quantum critical point \cite{Vidal1}. The scaling properties of the entanglement with the size of the block differ at and away from the critical point. At the critical point, where the correlations decay algebraically, the entanglement scales logarithmically, while it saturates away from the critical point \cite{Latorre}.
This can be interpreted in the framework of the conformal field theory \cite{korepin} associated with the quantum phase transition and the central charge of the theory. Indeed, for the XXZ model the entanglement entropy, i.e. the entanglement of a block (sublattice bipartition) with the rest of the system, shows extremum behavior at the critical point \cite{Chen}. In this work, we adopt a distinct and convenient way to study the block entanglement via the renormalization group approach. As we have stated in the RG approach for the XXZ model, a large system, i.e. $N=3^{n+1}$, can be effectively described by three sites with the renormalized couplings of the $n$-th RG iteration. Thus, the entanglement between two renormalized sites represents the entanglement between two parts of the system, each effectively containing $N/3$ sites. In this respect we can speak of {\it block entanglement} - block-block entanglement or the entanglement between a block and the rest of the system. It is shown that the first derivative of the entanglement diverges as the critical point is reached. With this in mind, the first derivative of the entanglement measures is analyzed as a function of the coupling $\Delta$ at different RG steps, which reflect the size of the system. The derivative with respect to the coupling constant ($\frac{dF}{d\Delta}$) shows singular behavior at the critical point. It is given by \begin{equation} \label{eq20} \frac{dF^{(n)}}{d\Delta}=\frac{dF^{(n)}}{d\Delta_{n}}\frac{d\Delta_{n}}{d\Delta_{n-1}}{...}\frac{d\Delta_{1}}{d\Delta} \end{equation} where $F^{(n)}$ stands for the concurrence or the von Neumann entropy and $\Delta_{n}$ is the renormalized anisotropy coupling ($\Delta$) at the $n$-th RG iteration. The singular behavior is the result of the discontinuous change of $C$ at $\Delta=\Delta_c$ in the infinite-size limit. We concentrate only on the nonanalytic behavior of the entanglement between the middle site and the remaining sites of each block.
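The chain rule of Eq.~(\ref{eq20}) is easy to implement once the one-step recursion $\Delta\to\Delta'$ is known. The sketch below uses a \emph{hypothetical} smooth map \texttt{rg\_map} with an unstable fixed point at $\Delta=1$, standing in for the actual QRG recursion (Eq.~(\ref{eq34}), derived earlier in the paper), and checks the product of one-step derivatives against a direct finite difference of the $n$-fold composition.

```python
def rg_map(d):
    """Hypothetical stand-in for the one-step RG recursion Delta -> Delta'
    of Eq. (eq34); any smooth map with an unstable fixed point at Delta = 1
    illustrates the chain rule in the same way."""
    return d * (1.0 + 0.5 * (d - 1.0))

def d_rg_map(d):
    """Analytic derivative of the map above: f(d) = 0.5 d + 0.5 d^2."""
    return 0.5 + d

def chain_rule_derivative(delta, n):
    """dDelta_n/dDelta as a product of one-step derivatives, Eq. (eq20)."""
    prod, d = 1.0, delta
    for _ in range(n):
        prod *= d_rg_map(d)
        d = rg_map(d)
    return prod

def iterate(delta, n):
    """The n-fold composition Delta_n(Delta)."""
    for _ in range(n):
        delta = rg_map(delta)
    return delta

# check: chain-rule product agrees with a finite difference of Delta_n(Delta)
delta0, n, h = 0.9, 4, 1e-6
fd = (iterate(delta0 + h, n) - iterate(delta0 - h, n)) / (2.0 * h)
assert abs(chain_rule_derivative(delta0, n) - fd) < 1e-3 * abs(fd)
```

At the fixed point itself every factor in the product equals $f'(\Delta_c)$, so $\frac{d\Delta_n}{d\Delta}\big|_{\Delta_c}$ grows exponentially with $n$, which is the origin of the diverging derivative discussed below.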
We have plotted $\frac{dE}{d\Delta}$ versus $\Delta$ in Fig.\ref{fig34} for different RG iterations, which shows the singular behavior as the size of the system becomes large (higher RG steps). A more detailed analysis shows that the position of the minimum ($\Delta_m$) of $\frac{dE}{d\Delta}$ tends towards the critical point like $\Delta_{m}=\Delta_{c}+N^{-0.47}$, which has been plotted in Fig.\ref{fig35}. Moreover, we have derived the scaling behavior of $y\equiv|\frac{dE}{d\Delta}|_{\Delta_m}$ versus $N$. This has been plotted in Fig.\ref{fig36}, which shows a linear dependence of $\ln(y)$ on $\ln(N)$, i.e. $|\frac{dE}{d\Delta}|_{\Delta_m} \sim N^{0.47}$. These results justify that the RG implementation of entanglement truly captures the critical behavior of the XXZ model at $\Delta=1$. \begin{figure} \begin{center} \includegraphics[width=8cm]{fig4.eps} \caption{(color online) First derivative of the entanglement entropy, which tends to diverge as the number of RG iterations (steps) increases (cf. Fig.\ref{fig21}).} \label{fig34} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{fig5.eps} \caption{(color online) The scaling behavior of $\Delta_{m}$ in terms of the system size ($N$), where $\Delta_{m}$ is the position of the minimum in Fig.\ref{fig34}.} \label{fig35} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm]{fig6.eps} \caption{(color online) The logarithm of the absolute value of the minimum, $\ln(\mid dE/d\Delta\mid_{\Delta_{m}})$, versus the logarithm of the chain size, $\ln(N)$, which is linear and shows scaling behavior. Each point corresponds to the minimum value of a single plot in Fig.\ref{fig34}. } \label{fig36} \end{center} \end{figure} Besides the entanglement entropy, the two-point quantum correlation as measured by the concurrence develops a maximum at the critical point.
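The exponent quoted above is read off as the slope of $\ln(y)$ versus $\ln(N)$. The snippet below illustrates the fitting procedure on synthetic values generated from the quoted power law (the real inputs are the RG minima behind Fig.\ref{fig34}; the prefactor here is arbitrary).

```python
import math

# synthetic minima generated from the quoted law |dE/dDelta|_min ~ N^0.47,
# used only to illustrate how the exponent is extracted from a log-log plot
theta = 0.47
sizes = [3 ** n for n in range(2, 8)]          # N = 9, 27, ..., 2187
minima = [2.0 * N ** theta for N in sizes]     # arbitrary prefactor 2.0

# least-squares slope of ln(y) versus ln(N)
xs = [math.log(N) for N in sizes]
ys = [math.log(y) for y in minima]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
assert abs(slope - theta) < 1e-10
```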
The concurrence of two nearest-neighbor sites of a large chain has been calculated numerically, and the maximum at the critical point is due to the counterbalance of ordering and quantum fluctuations \cite{Shi3}. In the Ising limit, i.e. $\Delta \longrightarrow \infty$, the N\'{e}el order develops, which drives the concurrence to zero. The same trend is also seen in the renormalization of the entanglement entropy and the concurrence. As stated in section (2), the renormalization equations develop two different phases for the XXZ model, which are separated by the unstable fixed point $\Delta_c = 1$. For the $\Delta > 1$ region, which falls into the Ising universality class, both the entanglement entropy and the concurrence (Fig.\ref{fig22}) tend to zero as the iteration of the RG equations increases, i.e. the stable fixed point $\Delta \longrightarrow \infty$ is reached. A notable issue arises when the exponents of the scaling of $\Delta_{m}$ and of the minimum value of the von Neumann entropy, as discussed in the previous paragraphs, are compared with the corresponding ones of the concurrence. The first derivative of the concurrence tends to diverge at the quantum critical point. The position of the minimum and the minimum value itself scale as $\Delta_{m}=\Delta_{c}+N^{-0.46}$ and $|\frac{dC}{d\Delta}|_{\Delta_m} \sim N^{0.48}$, respectively, as the size of the system becomes large. The exponents are almost the same for the two measures of entanglement. This indicates that the entanglement entropy and the concurrence scale in the same manner in the large-size limit and are associated with the long-range behavior of the model close to the critical point. According to this intuitive picture of entanglement scaling and the critical properties of the model, we would like to emphasize that the exponent is directly related to the correlation length exponent close to the critical point.
The correlation length exponent, $\nu$, gives the behavior of the correlation length in the vicinity of $\Delta_{c}$, \begin{equation} \label{eq50} \xi \sim (\Delta-\Delta_{c})^{-\nu}. \end{equation} Under the RG transformations (Eq.(\ref{eq34})) the correlation length scales as $\xi \longrightarrow \xi^{(1)}=\xi/n_{B}$, where $n_{B}$ stands for the number of sites in each block, which is $n_{B}=3$ in our procedure. It is then easy to see that at the $n^{th}$ RG iteration \begin{equation} \label{eq51}\xi^{(n)} \sim (\Delta_{n}-\Delta_{c})^{-\nu}=\xi/n_{B}^{n}, \end{equation} which immediately leads to an expression for $\mid\frac{d\Delta_{n}}{d\Delta}\mid_{\Delta_{c}}$ in terms of $\nu$ and $n_{B}$, \begin{equation} \label{eq52}\mid\frac{d\Delta_{n}}{d\Delta}\mid_{\Delta_{c}}\sim N^{1/\nu}. \end{equation} Comparison with Eq.(\ref{eq20}) demonstrates that the exponent which governs the nonanalytic behavior of the entanglement entropy and the concurrence in the vicinity of the critical point is nothing but the inverse of the correlation length exponent. It should be noted that the scaling of the position of the minimum, $\Delta_{m}$ (Fig.\ref{fig35}), also follows from the behavior of the correlation length near the critical point. As the critical point is approached, in the limit of large system sizes (not the thermodynamic limit), the correlation length almost covers the whole system, i.e. $\xi \sim N $, and a simple comparison with Eq.(\ref{eq50}) yields the scaling of the position as $\Delta_{m}=\Delta_{c}+N^{-1/\nu}$. \section{further insight from quantum group \label{qgqrg}} As carried out in Secs.~\ref{qrg} and \ref{xxz-qrg}, the (block) quantum renormalization group is conceptually and technically simple, but it may yield poor quantitative results. This was the main reason for the slow development of QRG through the 1980s in favor of the powerful numerical quantum Monte Carlo method.
However, there was a comeback of the RG in the 1990s as one of the most powerful methods for dealing with the zero-temperature properties of quantum systems. Wilson was the first to associate the failure of the QRG with the role of the boundary conditions when applying it to the tight-binding model. In fact, the success of the density matrix renormalization group (DMRG) as developed by White \cite{white} stems from the way in which it takes into account the effect of boundary conditions in terms of the quantum correlations in the ground state of the system. Later, it was shown that the breakdown of the QRG is rooted in its failure to take into account the entanglement in the ground state of the system \cite{JOsborne}. However, Martin-Delgado \emph{et al.} \cite{Mdelgado} proposed a QRG prescription which implements the concept of the quantum group in the renormalization group approach. In this approach the effect of the boundary conditions is imposed in terms of boundary magnetic fields on each block. The boundary fields cancel each other when collecting all blocks into the whole chain. This method was used to describe the critical line of the XXZ model for $0\leq\Delta\leq1$, and its basic idea relies on "restoring" the rotational symmetry of the model by adding appropriate boundary terms to the model. The open chain Hamiltonian is defined as \begin{widetext} \begin{eqnarray} \nonumber H=\frac{J}{4}\sum_{i=1}^{N}h_{i,i+1}~~~,~~~ h_{i,i+1}=\Big[\sigma_{i}^{x}\sigma_{i+1}^{x}+\sigma_{i}^{y}\sigma_{i+1}^{y}+ (\frac{q+q^{-1}}{2})\sigma_{i}^{z}\sigma_{i+1}^{z} -(\frac{q-q^{-1}}{2})(\sigma_{i}^{z}-\sigma_{i+1}^{z})\Big]~~~,~~ \end{eqnarray} \end{widetext} where $q$ is an arbitrary complex number. To construct a real-space RG, we have considered a three-site block (Fig.\ref{3siteblock}), which is important in order to get a renormalized Hamiltonian of the same form as the original one.
In this case the intra-block ($H^{B}$) and inter-block ($H^{BB}$) Hamiltonians are: \begin{widetext} \begin{eqnarray} \nonumber h_{I}^{B}&=&\frac{J}{4}\Big[(\sigma_{1,I}^{x}\sigma_{2,I}^{x}+\sigma_{2,I}^{x}\sigma_{3,I}^{x}+ \sigma_{1,I}^{y}\sigma_{2,I}^{y}+\sigma_{2,I}^{y}\sigma_{3,I}^{y}) +(\frac{q+q^{-1}}{2})(\sigma_{1,I}^{z}\sigma_{2,I}^{z}+\sigma_{2,I}^{z}\sigma_{3,I}^{z}) -(\frac{q-q^{-1}}{2})(\sigma_{1,I}^{z}-\sigma_{3,I}^{z})\Big],\\ \nonumber H^{BB}&=&\frac{J}{4}\sum_{I=1}^{N/3}\Big[(\sigma_{3,I}^{x}\sigma_{1,I+1}^{x}+ \sigma_{3,I}^{y}\sigma_{1,I+1}^{y}) +(\frac{q+q^{-1}}{2})(\sigma_{3,I}^{z}\sigma_{1,I+1}^{z}) -(\frac{q-q^{-1}}{2})(\sigma_{3,I}^{z}-\sigma_{1,I+1}^{z})\Big]. \end{eqnarray} \end{widetext} In this way the correlations between blocks are taken into account in the coarse-graining procedure in terms of the boundary fields of each block. The ground state of the block Hamiltonian is doubly degenerate and given by \begin{widetext} \begin{eqnarray} \nonumber |\psi_{0}\rangle=\frac{1}{\sqrt{2(q+q^{-1}+1)}}(-q^{1/2}|\uparrow\uparrow\downarrow\rangle+ (q^{1/2}+q^{-1/2})|\uparrow\downarrow\uparrow\rangle-q^{-1/2}|\downarrow\uparrow\uparrow\rangle) ,\\ \nonumber |\psi_{0}'\rangle=\frac{1}{\sqrt{2(q+q^{-1}+1)}}(-q^{1/2}|\uparrow\downarrow\downarrow\rangle+ (q^{1/2}+q^{-1/2})|\downarrow\uparrow\downarrow\rangle-q^{-1/2}|\downarrow\downarrow\uparrow\rangle) . \end{eqnarray} \end{widetext} and the corresponding energy is $e_{0}=-\frac{J}{4}(2+q+q^{-1})$.
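As a sanity check, the doubly degenerate ground state and its energy $e_{0}$ can be confirmed by exact diagonalization of the $8\times 8$ block Hamiltonian; a minimal sketch for real $q>0$ (for which $h_{I}^{B}$ is Hermitian) follows.

```python
import numpy as np

# Pauli matrices and 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(a, site):
    """Embed a single-site operator at `site` (0, 1, 2) of a 3-site block."""
    mats = [I2, I2, I2]
    mats[site] = a
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def block_hamiltonian(q, J=1.0):
    """Three-site quantum-group block Hamiltonian h_I^B for real q > 0."""
    delta = 0.5 * (q + 1.0 / q)   # (q + q^-1)/2
    bnd = 0.5 * (q - 1.0 / q)     # boundary-field strength (q - q^-1)/2
    h = (op(sx, 0) @ op(sx, 1) + op(sx, 1) @ op(sx, 2)
         + op(sy, 0) @ op(sy, 1) + op(sy, 1) @ op(sy, 2)
         + delta * (op(sz, 0) @ op(sz, 1) + op(sz, 1) @ op(sz, 2))
         - bnd * (op(sz, 0) - op(sz, 2)))
    return 0.25 * J * h

for q in (1.0, 2.0, 0.5):
    ev = np.linalg.eigvalsh(block_hamiltonian(q))   # sorted ascending
    e0 = -0.25 * (2.0 + q + 1.0 / q)                # e0 = -(J/4)(2+q+q^-1)
    assert abs(ev[0] - e0) < 1e-10                  # ground-state energy
    assert abs(ev[1] - e0) < 1e-10                  # twofold degeneracy
```

The invariance of the check under $q\to q^{-1}$ reflects the spin-flip symmetry relating $|\psi_{0}\rangle$ and $|\psi_{0}'\rangle$.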
The effective Hamiltonian with renormalized couplings is \begin{widetext} \begin{eqnarray} \nonumber H^{eff}=\frac{J'}{4}\left[\sum_{i}^{N/3}({\sigma}_{i}^{x}{\sigma}_{i+1}^{x}+{\sigma}_{i}^{y}{\sigma}_{i+1}^{y})+ (\frac{q'+q'^{-1}}{2})({\sigma}_{i}^{z}{\sigma}_{i+1}^{z}) -(\frac{q'-q'^{-1}}{2})(\sigma_{i}^{z}-\sigma_{i+1}^{z})\right], \end{eqnarray} \end{widetext} where \begin{eqnarray} \label{51} q'=q~~,~~ J'=\xi^{2}(q)J~~,~ ~ \xi(q)=\frac{q+q^{-1}+2}{2(q+q^{-1}+1)}. \end{eqnarray} The remarkable result of Eq.(\ref{51}) is that the coupling constant $q$, when set as a pure phase, or alternatively $\Delta=(\frac{q+q^{-1}}{2})$, does not flow under the RG transformation, which correctly predicts a line of critical models in the range $|\Delta|<1$, while $J^{m}$, the value of $J$ after $m$ RG iterations, goes to zero in the limit $m\rightarrow\infty$, which in turn defines the scale of energy. This means that in the critical region the entanglement measure, either the concurrence or the entanglement entropy, does not evolve through the renormalization of the anisotropic coupling constant, i.e. the RG does not rescale the anisotropy parameter ($\Delta'=\Delta$), since every point in this region is a fixed point. The density matrix is constructed from either $|\psi_{0}\rangle$ or $|\psi_{0}'\rangle$. We then sum over the degrees of freedom of sites 1 and 3 to get the reduced density matrix of site 2 and the rest of the system. The von Neumann entropy, i.e. the entanglement of site 2 with the whole system, is obtained from the eigenvalues of the reduced density matrix (similar to Eq.(\ref{eq44})). The entanglement of site 2 ($E_q$) has been plotted in Fig.\ref{fig50} versus the anisotropy parameter. For $0 < \Delta\leq 1$ we observe a decrease of the entanglement versus $\Delta$, which is the effect of the anisotropy to reduce the quantum correlations, while it is maximal at $\Delta=0$.
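For $q$ a pure phase, $q+q^{-1}=2\Delta$ is real and Eq.~(\ref{51}) reduces to $\xi=(\Delta+1)/(2\Delta+1)$. A few lines suffice to display the advertised flow: $\Delta$ is fixed while $J$ is suppressed (note that at the free-fermion point $\Delta=0$ one has $\xi=1$ exactly, so $J$ does not flow there).

```python
def xi(delta):
    """xi(q) of Eq. (51), rewritten with Delta = (q + q^-1)/2 for q a pure
    phase: xi = (Delta + 1) / (2 Delta + 1)."""
    return (delta + 1.0) / (2.0 * delta + 1.0)

def flow(delta, J=1.0, steps=20):
    """Iterate Eq. (51): Delta is a fixed point, J is rescaled by xi^2."""
    for _ in range(steps):
        J = xi(delta) ** 2 * J    # q (hence Delta) does not flow
    return delta, J

d, J = flow(0.5, J=1.0, steps=20)
assert d == 0.5      # the anisotropy does not renormalize
assert J < 1e-4      # the energy scale J^m -> 0
```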
Moreover, the reported values for $0 < \Delta\leq 1$ do not evolve under the RG transformation, which shows that they are already the $N\rightarrow \infty$ values. For $\Delta>1$ we get the evolution of $E_q$ under the RG transformation. For higher RG iterations we get a zero value of the entanglement, which manifests the uncorrelated nature of the (classical) Ising state. The nonanalytic behavior, which manifests itself in the first derivative of the entanglement, appears as the critical point $\Delta_{c}=1$ is approached from the Ising (gapped) phase and is accompanied by scaling with exponents like those found before. \begin{figure} \begin{center} \includegraphics[width=8cm]{fig7.eps} \caption{(color online) Quantum group manifestation of the evolution of the entanglement entropy at different RG iterations (steps).} \label{fig50} \end{center} \end{figure} \section{conclusions \label{conclusion}} In this work the relation between entanglement and quantum phase transitions via the renormalization group procedure has been addressed. We have used the idea of the renormalization group to study the quantum information properties of the XXZ chain. The entanglement entropy and the concurrence are used as two measures of quantum correlations. In order to explore the critical behavior of the XXZ model, the evolution of both the entanglement entropy and the concurrence through the renormalization of the lattice was examined. As the number of RG iterations increases, the entanglement entropy as well as the concurrence develops two different values on the two sides of the quantum critical point, where the phase transition between the spin fluid phase and the Ising-like phase occurs. The phase transition becomes manifest in the diverging behavior of the first derivative of the two measures. This divergence of the entanglement measures is accompanied by scaling behavior near the critical point (as the size of the system becomes large).
The scaling behavior characterizes how the critical point of the model is approached as the system size is increased. The fact that both measures of entanglement scale with the same exponent is a significant result of the large-scale behavior of the model near the critical point. It is also shown that the nonanalytic behavior of both measures of entanglement is inherited from the correlation length exponent in the vicinity of the critical point. This suggests that the behavior of the entanglement near the critical point is directly connected to the quantum critical properties of the model. To compensate for the poor results of the QRG in the critical region of the XXZ model, we resort to the quantum group method in order to take into account the correlations between blocks. In this way the massless region is truly captured and a realistic picture of the evolution of the concurrence or entanglement entropy is obtained. However, the nonanalytic and scaling behavior is the same as before as the critical point of the model is approached from the gapped phase. The approach presented here can be used to obtain the quantum information properties of quantum systems in a simple way. In other words, we get the properties of a large system by dealing with a small block, which makes it possible to obtain analytic results. However, the quantum renormalization group usually suffers from poor quantitative accuracy. One should obtain more accurate results by combining the proposed idea with a more elaborate numerical method such as the density matrix renormalization group or the exact diagonalization method. \begin{acknowledgments} This work was supported in part by the Center of Excellence in Complex Systems and Condensed Matter (www.cscm.ir). \end{acknowledgments} \section*{References}
\section{Introduction} The past few years have witnessed significant progress on various object-level visual recognition tasks, such as object detection \cite{felzenswalb10, fidler13}, object segmentation \cite{Smin12cls, arbelaez2012semantic}, etc. Understanding how different parts of an object are related and where the parts are located has become an increasingly important topic in computer vision. There is extensive study of some part-level visual recognition tasks, such as human pose estimation (predicting joints) \cite{yang2011articulated, toshev2013deeppose} and landmark localization (predicting keypoints) \cite{belhumeur2011localizing, liubird}, but there are only a few works on semantic part segmentation, such as human parsing \cite{bo2011shape, dongdeformable, dongtowards, yamaguchi2012parsing} and car parsing \cite{thomas2008using, eslami2012generative, Lu2014}. In some applications (e.g., activity analysis), it would be of great use if computers could output richer part segmentation instead of just giving a set of keypoints/landmarks, a bounding box or an entire object segment. \begin{figure}[t] \begin{center} \includegraphics[width=0.4\textwidth]{figures/introfigure_new.pdf} \caption{Different visual recognition tasks: (a) semantic labeling with pixelwise object and background label. (b) object detection which outputs a bounding box. (c) part detection which gives a bounding box for each part. (d,e,f) semantic part segmentation with pixelwise part label. We study the semantic part segmentation problem in this paper. Best viewed in color.} \label{fig:introfigure} \end{center} \end{figure} We make an attempt at the challenging task of semantic part segmentation for animals in this paper. Since animals often have homogeneous appearance (e.g., fur) over the whole body, mid-level segmentation methods \cite{Carreira12, Arbelaez11} cannot output quality proposals for semantic parts.
Besides, current classifiers are not able to distinguish between different semantic parts since the parts often have similar appearance. Thus we cannot simply adopt the popular object segmentation pipeline \cite{Smin12cls, arbelaez2012semantic} by treating each semantic part as an object. This tells us that the shape information of semantic parts is necessary for part segmentation. But there is a large amount of variability in part shapes due to different animal viewpoints and poses (see (d,e,f) in Figure \ref{fig:introfigure}). Therefore, it is very challenging to build a model that effectively combines animal appearance, part shape and spatial relations among parts under varying viewpoints and poses, while still allowing efficient learning and inference. Inspired by \cite{jin2006context,zhu2007stochastic,zhu2010learning,kokkinos2011inference}, we adopt a compositional model, which is able to capture long-range relations among parts while still enabling efficient inference since parts are arranged in a hierarchical manner. The intuition behind compositional models is that articulated objects are often built by compositions of parts, which in turn are built by compositions of more elementary subparts. Specifically, in this paper, we build a mixture of compositional models to represent the animal and part shapes/boundaries. Each mixture is able to handle local deformation of shapes, and different mixtures deal with global variations due to viewpoints and poses. We incorporate edge, appearance, and part cues into the compositional model by using algorithms for edge detection, semantic labeling and part detection. It is of significant importance to design efficient \textit{inference} and \textit{learning} algorithms for the compositional model. We develop the constrained generalized distance transform (CGDT) algorithm, which extends the distance transform algorithm in \cite{felzenszwalb2004distance}. This algorithm allows us to perform efficient linear-time inference for the model.
Besides, we design a novel algorithm to learn the compositional models for animal and part boundaries under various poses and viewpoints from the part-level annotation, and we learn the parameters of the model using latent SVM. In order to segment highly deformable animal legs, we first perform part segmentation using our compositional model for large parts, such as head, neck and torso. Given these segmentation results, we can narrow down the search region for legs since legs are almost always underneath the torso. Then we segment legs by combining symmetric structure and appearance information. Our experiments are conducted on two animal classes: horse and cow. We use a newly annotated dataset on Pascal VOC 2010 \cite{chendetect} which provides pixelwise semantic part annotations. We focus on segmenting fully observable animals in this paper and leave the occlusion and truncation issues for future study. Self-occlusion due to poses/viewpoints can be handled by our model. We compare our algorithm with a method that combines the state-of-the-art animal part detection \cite{chendetect} and object segmentation \cite{BharathECCV2014}. The experiments show that our method achieves much better part segmentation than the baseline, which demonstrates the effectiveness of our method. In summary, our contribution is threefold. Firstly, we develop a novel method for animal part segmentation by introducing a mixture of compositional models coupled with shape and appearance. Secondly, we propose an algorithm to learn the compositional models of object and part shapes given part-level pixelwise annotations. Thirdly, we develop the constrained generalized distance transform (CGDT) algorithm to achieve linear-time inference for our model. \section{Related Work} In terms of method, our work is related to \cite{zhu2010learning, zhu2011max}, which used compositional models for horse segmentation.
But they did not incorporate strong appearance cues into their compositional shape model, and they modeled only a few poses and viewpoints. Besides, our inference is much faster than their compositional inference algorithm. There was also work on automatically learning the compositional structure/hierarchical dictionary \cite{zhu2008unsupervised, fidler2007towards}, but those algorithms did not consider semantic parts and were not evaluated on challenging datasets. In terms of task, our work is related to human parsing/clothes parsing \cite{bo2011shape, dongdeformable, dongtowards, yamaguchi2012parsing}. They generated segment proposals by superpixel/over-segmentation algorithms, and then used these segments as building blocks for the whole human body by either a compositional method or an And-Or graph. Note that our task is inherently quite different from clothes parsing because animals often have roughly homogeneous appearance throughout the whole body, while in the human parsing datasets humans often have different appearance (e.g., color) for different parts due to clothes. So their superpixel/over-segmentation algorithms cannot output good segment proposals for animal parts. Besides, in challenging datasets like Pascal VOC, cluttered background and unclear boundaries further degrade the superpixel quality. Therefore, the superpixel-based methods for human parsing are not appropriate for our animal part segmentation task. Our work also bears a similarity to \cite{zhu2012face} in the spirit that a mixture of graphical models is used to capture global variation due to viewpoints/poses. But our compositional model is able to capture the spatial relation between children nodes while still achieving linear-complexity inference, and we develop an algorithm to learn the mixtures of compositional models. Besides, our task is part segmentation for animals of various poses and viewpoints, which appears more challenging than landmark localization for faces in \cite{zhu2012face}.
There is a large body of work in the literature on modeling object shape, such as \cite{shotton2005contour, ferrari2010images, wu2010learning, kokkinos2007unsupervised}. But these works were aimed only at object-level detection or segmentation. Furthermore, none of them combined shape representation with strong appearance information. \section{Compositional Model combining Shape and Appearance} We develop a compositional model to represent the animal shape/boundary under various viewpoints and poses. We formulate the compositional part-subpart relation by a probabilistic graphical model. Let $v$ denote the parent node which represents the part and $ch(v)$ denote the children nodes which represent the constituent subparts. The location of part $v$ is denoted by $S_v = (x_v, y_v)$ and the locations of its subparts $ch(v)$ are denoted by $S_{ch(v)}$. The probability distribution for the part-subpart composition is modeled as a Gibbs distribution \begin{equation} P(S_{ch(v)}|S_v) = \begin{cases} \frac{1}{Z}{\exp(-\psi (S_{ch(v)}))}, &\text{if} \ \ S_v=f(S_{ch(v)}) \\ 0 , &\text{otherwise.} \end{cases} \end{equation} Here $f(S_{ch(v)})$ is a deterministic function. In this paper, we limit the number of subparts of any part to two, i.e., $|ch(v)|=2$, and write $ch(v) = (v_1,v_2)$. We set $f(S_{ch(v)}) = (S_{v_1}+S_{v_2})/2$, which means that the location of a part is the average location of its children subparts. The potential function $\psi(S_{ch(v)})$ represents the relation between the two children subparts. We have \begin{equation} \psi(S_{ch(v)})=w_v \cdot (dx_v^2,dy_v^2 ), \end{equation} where $dx_v = x_{v_2} -x_{v_1} -\Delta x_v $ and $dy_v = y_{v_2} -y_{v_1} -\Delta y_v$. Here $\Delta S_v = (\Delta x_v, \Delta y_v)$ is the anchor location of part $v$'s second subpart $v_2$ relative to its first subpart $v_1$, and $(dx_v, dy_v)$ is the displacement of the second subpart $v_2$ relative to its anchor location.
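The composition rule $f$ and the pairwise potential $\psi$ just defined can be sketched in a few lines (the weights $w_v$ and the anchor offset $\Delta S_v$ are learned later in the paper; arbitrary values are used here for illustration):

```python
def compose(s1, s2):
    """Part location as the average of its two subpart locations:
    S_v = f(S_ch(v)) = (S_v1 + S_v2) / 2."""
    return ((s1[0] + s2[0]) / 2.0, (s1[1] + s2[1]) / 2.0)

def pair_potential(s1, s2, anchor, w=(1.0, 1.0)):
    """psi(S_ch(v)) = w . (dx^2, dy^2): the cost of displacing the second
    subpart from its anchor offset relative to the first subpart."""
    dx = s2[0] - s1[0] - anchor[0]
    dy = s2[1] - s1[1] - anchor[1]
    return w[0] * dx * dx + w[1] * dy * dy

# subparts sitting exactly at the anchored relative offset cost nothing
assert pair_potential((0, 0), (4, 2), anchor=(4, 2)) == 0.0
assert compose((0, 0), (4, 2)) == (2.0, 1.0)
```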
In summary, a part node $v$ is uniquely specified by its children $ch(v)$ and the spatial relation $\Delta S_v$ between children. In terms of the parent-children relation, our compositional model is similar to the prevailing pictorial structure \cite{felzenszwalb2005pictorial} and deformable part model \cite{felzenswalb10}. But our model is able to capture mutual relation between children. \begin{figure}[t] \centering \begin{subfigure}[b]{150pt} \includegraphics[width=150pt]{figures/composition_figure2.pdf} \caption{} \end{subfigure}\hspace{0cm} \begin{subfigure}[b]{80pt} \includegraphics[width=80pt]{figures/polarity_figure.pdf} \caption{} \end{subfigure} \caption{(a) Illustration of compositional model for a particular horse shape. Red for head, blue for neck and green for torso. Due to space limitation, the leaf nodes (oriented edgelet of eight orientations) are not shown. (b) Three types of polarity value for a leaf node with a horizontal orientation. Green dot represents center location and red line segment represents orientation. Best viewed in color.} \label{fig:modelfigure} \end{figure} An object can be modeled by repeating the part-subpart compositions, as shown in Figure \ref{fig:modelfigure} (a). Mathematically, we use a probabilistic graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ to model the object. This graph has a hierarchical structure with levels $l \in \{1,...,L\}$, and $\mathcal{V} = \cup_{l=1}^L {\mathcal{V}_l}$, where $\mathcal{V}_l$ denotes the node set at level-$l$. At the top level (level-$L$), there is one root node, representing the object (i.e., $|\mathcal{V}_L|=1$). The leaf node is $v \in \mathcal{V}_1$. If a part node $v$ is at level-$l$, i.e., $ v \in \mathcal{V}_l$, then its children subparts must be at level-$(l-1)$, i.e., $ch(v) \subset \mathcal{V}_{l-1}$. And as mentioned above, for any part, we limit the number of subparts to be two. So there are in total $2^{L-1}$ leaf nodes and $2^L-1$ nodes in the graph $\mathcal{V}$. 
Any part-subpart pair constructs an edge in this graph, i.e., $(v,t) \in \mathcal{E}$ if $t \in ch(v)$. There is no edge between two children subparts of one parent part, i.e., $(s,t) \notin \mathcal{E}$ if $s \in ch(v), t \in ch(v)$. Thus the hierarchical graph $\mathcal{G}$ has a tree-like structure\footnote{Precisely, it is not a tree since two children are connected. But we prefer calling it a tree structure in this paper for explanation purposes.}. The probability distribution for the object is specified by products of part-subpart probabilities \begin{equation} P(S_{\mathcal{V}}) = \prod_{v \in \mathcal{V} \setminus \mathcal{V}_1}{ P(S_{ch(v)}|S_v)} P(S_{\mathcal{V}_L}). \end{equation} We assume $P(S_{\mathcal{V}_L})$ is uniformly distributed. The compositional model introduced above can be viewed as a prior shape model for the object since it characterizes the spatial relation between parts and subparts. To specify a fully generative model for the object, we need to define a likelihood function \begin{equation} P(\mathbf{I} | S_{\mathcal{V}}) = \frac{1}{Z}{\exp(-\sum_{v \in \mathcal{V}}{ \phi(S_v,\mathbf{I})})}. \end{equation} The MAP inference performs \begin{equation} \max_{S_{\mathcal{V}}} P(S_{\mathcal{V}}|\mathbf{I}) \propto P(\mathbf{I} | S_{\mathcal{V}}) P(S_{\mathcal{V}}) , \end{equation} which is equivalent to minimizing the energy \begin{equation} \begin{split} E(\mathbf{I}) = \min_{S_{\mathcal{V}}} E(S_{\mathcal{V}},\mathbf{I}) = \sum_{v \in \mathcal{V}}{ \phi(S_v,\mathbf{I})} + \sum_{ \substack{v \in \mathcal{V} \setminus \mathcal{V}_1 \\ S_v=f(S_{ch(v)})}} {\psi(S_{ch(v)})}. \label{eq:energymin} \end{split} \end{equation} \subsection{Feature for Unary Term} Next we explain the potential function (unary term), which interacts with the image. We assume that the parameters are shared by parts, which will be discussed in detail in Section \ref{sec:paralearning}.
Specifically, the potential function for a leaf node $v \in \mathcal{V}_1$ is modeled as \begin{equation} \phi(S_v,\mathbf{I}) = \phi^{\text{edge}}(S_v,\mathbf{I}) + \phi^{\text{app}}(S_v,\mathbf{I}). \end{equation} The first term $\phi^{\text{edge}}(S_v,\mathbf{I})$ characterizes how well the orientation at location $S_v$ in the image matches the model orientation $\theta_v$. In the experiment we use the gPb edge detection result \cite{Arbelaez11}, which outputs a pixelwise confidence score for eight orientations. Thus \begin{equation} \phi^{\text{edge}}(S_v,\mathbf{I}) = w_v^{\text{edge}} \cdot gPb(\theta_v,S_v, \mathbf{I}). \end{equation} To incorporate appearance information, each leaf node $v$ is associated with a polarity value $a_v$ (specified by the model, like $\theta_v$) indicating which side of the leaf node is the object side, and which side is the non-object (background) side. We extract a square centered at location $S_v$, and obtain the object-side region and non-object-side region based on the orientation, as shown in Figure \ref{fig:modelfigure} (b). We use the semantic labeling result \cite{mottaghirole} as the appearance feature. It gives a pixelwise segmentation result for 34 classes, including 20 object classes from Pascal VOC and another 14 background classes. Each pixel is associated with a 34-dimensional vector with each component being the confidence score for the corresponding class. We average the feature vectors of the object-side region and the non-object-side region, and then concatenate them to make a 68-dimensional feature vector denoted by $SemLab(\theta_v,S_v, \mathbf{I})$. We use the confidence scores of all classes to deal with inaccurate semantic labeling and context information. Thus we have \begin{equation} \phi^{\text{app}}(S_v,\mathbf{I}) = w_v^{\text{app}} \cdot SemLab(\theta_v,S_v, \mathbf{I}).
\end{equation} For a non-leaf node $v \in \mathcal{V}_l, l > 1$, the unary term $\phi(S_v,\mathbf{I})$ indicates the confidence of part $v$ being at location $S_v$. The confidence score can come from a part detection algorithm \cite{chendetect} for animals: \begin{equation} \phi(S_v,\mathbf{I}) = w_v^{\text{part}} \cdot PartDet(S_v, \mathbf{I}). \end{equation} For example, if $v$ represents the horse head, $PartDet(S_v, \mathbf{I})$ can be the horse head detection score. \subsection{Mixture of Poses and Viewpoints} We have so far introduced a compositional model for an animal of a single viewpoint and pose. In order to model various poses and viewpoints, we use a set of nodes at the top level ($v \in \mathcal{V}_L$), each of which represents the animal shape for one viewpoint and pose. In other words, we use a mixture model with each mixture being a node at the top level. Section \ref{sec:structlearn} will introduce how to learn the mixtures. \section{Inference for Compositional Model} Given an image, the goal of inference is to find the best mixture $v \in \mathcal{V}_L$ (i.e., the best viewpoint and pose) and specify the locations of all its descendants $S_{tree(v)}$, in particular the locations of all leaf nodes as boundary landmarks. We can then connect adjacent landmarks of each semantic part to produce the part segmentation result. For each mixture $v \in \mathcal{V}_L$, we solve the minimization problem (\ref{eq:energymin}) by standard dynamic programming on the tree $tree(v)$, and then select the mixture with the minimal energy as the best mixture. The dynamic programming algorithm involves a bottom-up process starting from the leaf nodes to find the minimal energy, followed by a top-down process to find the best configuration. The search is done over every pixel in the image grid. Denote the image grid by $\mathcal{D}=\{1,...,W\} \times \{1,...,H\}$; the size of the image grid is $|\mathcal{D}| = W \times H$.
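The mixture selection just described can be sketched as follows, where `parse_tree` is a hypothetical stand-in for the bottom-up/top-down dynamic programming on one compositional tree.

```python
def best_mixture(mixtures, image, parse_tree):
    """Run DP on every top-level mixture and keep the one with minimal energy.

    parse_tree(v, image) is assumed to return (energy, leaf_locations) for
    mixture v; the winner gives the viewpoint/pose and the boundary landmarks.
    """
    best = None
    for v in mixtures:
        energy, locations = parse_tree(v, image)
        if best is None or energy < best[1]:
            best = (v, energy, locations)
    return best  # (best mixture, its minimal energy, its landmark locations)

# Toy check with a made-up parser whose energy is minimized by mixture 3.
parse_tree = lambda v, img: (abs(v - 3), [v])
assert best_mixture([1, 2, 3, 4], None, parse_tree) == (3, 0, [3])
```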
The core of dynamic programming is to solve the following minimization problem for each non-leaf node \begin{equation} \begin{split} E(S) = &\min_{\substack{ \{ S_{1},S_{2} \} \\ 2S = S_{1}+S_{2}}} { \phi(S_{1},S_{2}) + E_{1}(S_{1}) + E_{2}(S_{2}) } \\ = &\min_{\substack{ \{ S_{1} \} \\ 2S-S_{1} \in \mathcal{D} }} { \phi(S_{1},2S - S_{1}) + E_{1}(S_{1}) + E_{2}(2S - S_{1})}. \label{eq:originprob} \end{split} \end{equation} Here $S$, $S_1$ and $S_2$ denote the locations of the parent (part) node and the two child (subpart) nodes respectively, and $E(S)$, $E(S_1)$ and $E(S_2)$ denote the energy functions in the dynamic programming. For simplicity, we drop the subscript $v$ since this applies to every non-leaf node. Exact solution of problem (\ref{eq:originprob}) requires quadratic complexity $O(|\mathcal{D}|^2)$, which is too slow in practice. This motivates us to design an algorithm with linear complexity $O(|\mathcal{D}|)$. We therefore approximate problem (\ref{eq:originprob}) by $$ E(S) \approx \min_{\substack{ \{ S_{1} \} \\ 2S-S_{1} \in \mathcal{D} }} { \phi(S_{1},2S - S_{1}) + E_{1}(S_{1}) + E_{2}(2S - S^*_{1})}, $$ \begin{equation} S^*_1 = \arg \min_{\substack{ \{ S_{1} \} \\ 2S-S_{1} \in \mathcal{D} }} {\phi(S_{1},2S -S_{1}) + E_{1}(S_{1}) }. \label{eq:newprob} \end{equation} \begin{figure*}[!t] \centering \begin{equation} (x_1^*,y_1^*) = \arg \min_{\substack{ \{y_{1}\} \\ 1 \leq 2y-y_{1} \leq H }} \{ \ \ 4w^y(y-y_{1}-\frac{\Delta y}{2})^2 + \min_{\substack{ \{ x_{1} \}\\ 1 \leq 2x-x_{1} \leq W }} {4w^x(x-x_{1}-\frac{\Delta x}{2})^2 + E_{1}(x_{1},y_{1}) } \ \ \}.
\label{eq:expprob} \end{equation} \end{figure*} \renewcommand{\algorithmicrequire}{\textbf{Initialization:}} \renewcommand{\algorithmicensure}{\textbf{Process:}} \begin{algorithm}[!t] \caption{The CGDT algorithm} \label{alg:pseu_cdt} \begin{algorithmic}[1] \REQUIRE ~~\\ $range(1) =u^{-1}(1); \ range(2)=l^{-1}(1); $ $idx(1)=1; \ k=1; $ \ENSURE ~~\\ \STATE For \ $z=2 \ \ \text{to} \ \ n$ \STATE \quad \quad $s=\frac{(g(z)+h^2(z))-(g(idx(k))+h^2(idx(k)))}{2h(z)-2h(idx(k))}$; \STATE \quad \quad Project $s$ onto interval $[u^{-1}(z),l^{-1}(z)]$; \STATE \quad \quad While $s \leq range(k)$ \STATE \quad \quad \quad \quad $k = k-1$; \STATE \quad \quad \quad \quad $s=\frac{(g(z)+h^2(z))-(g(idx(k))+h^2(idx(k)))}{2h(z)-2h(idx(k))}$; \STATE \quad \quad \quad \quad Project $s$ onto interval $[u^{-1}(z),l^{-1}(z)]$; \STATE \quad \quad end \STATE \quad \quad If $s > range(k+1)$ \STATE \quad \quad \quad \quad $k = k+1; \ \ idx(k)=z; $ \STATE \quad \quad \quad \quad $range(k+1)=l^{-1}(z)$; \STATE \quad \quad Else \STATE \quad \quad \quad \quad $k=k+1; \ \ idx(k)=z; $ \STATE \quad \quad \quad \quad $range(k)=s; \ \ range(k+1) = l^{-1}(z)$; \STATE \quad \quad end \STATE end \STATE Fill in the value of $\gamma(x)$ using $range(k)$ and $idx(k)$. \end{algorithmic} \end{algorithm} The reason for making this approximation is that we can then solve problem (\ref{eq:newprob}) efficiently in linear time using the constrained generalized distance transform algorithm developed in Section \ref{sec:CDT}. We will validate this approximation by experiment in Section \ref{sec:infer}. \subsection{Constrained Generalized Distance Transform (CGDT) Algorithm} \label{sec:CDT} First note that since the variables $S_1 = (x_{1}, y_{1})$ are separable, we can translate the 2-dimensional problem (\ref{eq:newprob}) into two 1-dimensional problems by first minimizing over one variable ($x_1$) and then over the other ($y_1$), as shown in Equation (\ref{eq:expprob}).
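Stepping back to the approximation (\ref{eq:newprob}): since it evaluates the exact objective of (\ref{eq:originprob}) at the feasible point $S_1^*$, the approximate energy can never fall below the exact minimum. A minimal 1-D sketch with hypothetical potentials illustrates this bound.

```python
def exact_energy(S, phi, E1, E2, grid):
    """Exact min over S1 with S2 = 2S - S1 constrained to the grid."""
    feas = [s1 for s1 in grid if (2 * S - s1) in grid]
    return min(phi(s1, 2 * S - s1) + E1[s1] + E2[2 * S - s1] for s1 in feas)

def approx_energy(S, phi, E1, E2, grid):
    """Two-step approximation: choose S1* from phi + E1 only, then add E2."""
    feas = [s1 for s1 in grid if (2 * S - s1) in grid]
    s1_star = min(feas, key=lambda s1: phi(s1, 2 * S - s1) + E1[s1])
    return phi(s1_star, 2 * S - s1_star) + E1[s1_star] + E2[2 * S - s1_star]

# Toy 1-D check with made-up potentials: the approximation upper-bounds
# the exact minimum at every parent location S.
grid = set(range(10))
phi = lambda a, b: 0.1 * (a - b) ** 2
E1 = {s: (s - 3) ** 2 for s in grid}
E2 = {s: (s - 6) ** 2 for s in grid}
for S in range(2, 8):
    assert approx_energy(S, phi, E1, E2, grid) >= exact_energy(S, phi, E1, E2, grid) - 1e-9
```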
Next we show how to efficiently solve these two similar 1-dimensional subproblems. To this end, we consider a slightly more general problem of the form \begin{equation} \gamma(x) = \min_{l(x) \leq z \leq u(x)} {(x-h(z))^2 + g(z)}, \label{eq:coreprob} \end{equation} where $h(z)$, $u(x)$ and $l(x)$ are all non-decreasing. In Equation (\ref{eq:expprob}), for the inner minimization, we set $h(z) = z + \frac{\Delta x}{2}$ and $l(x)=2x-W,u(x)=2x-1$; and for the outer minimization, we set $h(z) = z + \frac{\Delta y}{2}$ and $l(y)=2y-H,u(y)=2y-1$. Note that problem (\ref{eq:coreprob}) becomes the ordinary generalized distance transform \cite{felzenszwalb2004distance} if we ignore the constraint $l(x) \leq z \leq u(x)$. Inspired by \cite{felzenszwalb2004distance}, $\gamma(x)$ can be viewed as the lower envelope of a set of truncated parabolas $(x-h(z))^2+g(z)$, each truncated to $u^{-1}(z) \leq x \leq l^{-1}(z)$. The algorithm proceeds in two steps. In the first step we obtain the lower envelope of all the truncated parabolas by computing the boundary points between adjacent selected parabolas while keeping the truncation constraint satisfied. In the second step we fill in the values $\gamma(x)$ using the lower envelope obtained in step one. Pseudocode is provided in Algorithm \ref{alg:pseu_cdt}, where we use $range(k)$ and $range(k+1)$ to indicate the range of the $k$-th parabola in the lower envelope, and $idx(k)$ to indicate the grid location $z$ of the $k$-th parabola in the lower envelope. \section{Learning for Compositional Model} \subsection{Structure Learning} \label{sec:structlearn} Structure learning refers to learning the hierarchical graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ to represent the animal and part shapes under various poses and viewpoints.
Specifically, for each non-leaf node $v$, we need to learn the part-subpart relation $ch(v)$ and $\Delta S_v$; and for each leaf node $v \in \mathcal{V}_1$, we need to learn the orientation $\theta_v$ and polarity $a_v$. We consider eight orientations equally distributed from 0 to $\pi$, and three polarity values for each orientation, representing object region on one side, object region on the other side, and object region on both sides respectively, as shown in Figure \ref{fig:modelfigure} (b). Thus there are in total 24 types of leaf nodes at level one. Note that leaf nodes are shared across different mixtures. We use compositional models to represent big semantic parts such as head, neck and torso, and we discuss segmenting legs in Section \ref{sec:legs}. The structure learning algorithm proceeds in the following four steps. Visualizations are provided in the supplementary material. 1. Clustering: Given part-level annotations, we extract the masks for head, neck and torso and assign them different values (1 for head, 2 for neck, and 3 for torso). Then we resize each example by the maximal side length. We apply the K-medoids clustering algorithm to find K representative shapes from the training data, and we build K compositional mixtures based on these K representative shapes. 2. Sampling: We evenly sample a fixed number of landmarks along the boundary of each semantic part. 3. Matching: We match each landmark to one of the 24 leaf nodes. 4. Composing: Starting from the landmarks (leaf nodes), we compose every two adjacent nodes (children) into a higher-level node (parent) and record the spatial relation between the two children. The parent location is the average of the two child locations. We run this procedure level-by-level up to the top level.
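The composing step can be sketched as follows, with hypothetical landmark coordinates: at every level, consecutive pairs of nodes are merged into a parent placed at their average location, and the offset between the two children is recorded as the spatial relation.

```python
def compose(points):
    """One composing pass: merge adjacent node pairs into parents.

    points: list of (x, y) child locations, assumed even in length.
    Returns (parents, offsets): each parent is the average of its two
    children; each offset (child2 - child1) records their spatial relation.
    """
    parents, offsets = [], []
    for i in range(0, len(points), 2):
        (x1, y1), (x2, y2) = points[i], points[i + 1]
        parents.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0))
        offsets.append((x2 - x1, y2 - y1))
    return parents, offsets

# Two passes build a 3-level tree from four made-up collinear landmarks.
parents, offsets = compose([(0, 0), (2, 0), (4, 0), (6, 0)])
assert parents == [(1.0, 0.0), (5.0, 0.0)] and offsets == [(2, 0), (2, 0)]
top, _ = compose(parents)
assert top == [(3.0, 0.0)]   # root of the toy tree
```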
\subsection{Parameter Learning}\label{sec:paralearning} The parameters of the compositional model are $w_v$ and $w_v^{\text{part}}$ for non-leaf nodes, and $w_v^{\text{edge}}$ and $w_v^{\text{app}}$ for leaf nodes. To reduce the model complexity, we assume that parameters are shared across parts, so the parameter vector becomes $\mathbf{w} = (w,w^{\text{part}},w^{\text{edge}},w^{\text{app}})$. These parameters strike a balance between the prior shape ($w$), appearance cues ($w^{\text{app}}$), orientation confidence ($w^{\text{edge}}$) and part confidence ($w^{\text{part}}$). The sharing allows us to learn the model parameters from a small amount of training data. Note that the energy function $E({S_{\mathcal{V}}},\mathbf{I};\mathbf{w})$ is of the form \begin{equation} E({S_{\mathcal{V}}},\mathbf{I};\mathbf{w}) = \mathbf{w} \cdot \phi({S_{\mathcal{V}}},\mathbf{I}). \end{equation} The training dataset is denoted by $\{ (\mathbf{I}_i, y_i) \}_{i=1}^n$, where $y_i \in \{+1,-1\}$. The positive examples are object bounding box images and the negative examples are bounding box images of other objects. Since we do not have the location information for all parts/subparts $S_{\mathcal{V}}$, we adopt a latent SVM to learn the parameters $\mathbf{w}$: \begin{equation} \min_{\mathbf{w}} \quad \frac{1}{2} ||\mathbf{w}||^2+C\sum_{i=1}^n{\max(0,1-y_i F(\mathbf{I}_i;\mathbf{w}))}, \end{equation} where the score function is defined as $F(\mathbf{I}_i;\mathbf{w}) = - \min_{S_{\mathcal{V}}} E({S_{\mathcal{V}}},\mathbf{I}_i;\mathbf{w})$. \section{Segmenting Legs} \label{sec:legs} Considering the extremely high variability of animal legs, we take a coarse-to-fine approach to segment legs. Specifically, after segmenting the animal body (head, neck, torso), we can narrow down the search region for legs since we know that most of the time the legs appear underneath the torso.
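Before moving on, the latent SVM objective above can be written out as a short sketch; here `min_energy` is a hypothetical routine standing in for the inference $\min_{S_{\mathcal{V}}} E(S_{\mathcal{V}},\mathbf{I};\mathbf{w})$.

```python
def latent_svm_objective(w, data, min_energy, C=1.0):
    """0.5 * ||w||^2 + C * sum_i max(0, 1 - y_i F(I_i; w)), with the score
    F(I; w) = -min_S E(S, I; w) as in the paper.

    w: parameter vector (list of floats); data: list of (image, y) pairs;
    min_energy(image, w): hypothetical inference routine returning min_S E.
    """
    obj = 0.5 * sum(wi * wi for wi in w)
    for image, y in data:
        score = -min_energy(image, w)         # F(I; w)
        obj += C * max(0.0, 1.0 - y * score)  # hinge loss
    return obj

# Toy check: "images" stand in for their own minimal energies, so both
# examples have margin 2 and contribute zero hinge loss.
w = [1.0, 0.0]
min_energy = lambda image, w: image
assert abs(latent_svm_objective(w, [(-2.0, +1), (2.0, -1)], min_energy) - 0.5) < 1e-9
```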
Then in the refined search region, we detect symmetric regions using the algorithm in \cite{lee2013detecting}, since animal legs often have a roughly symmetric structure. Next we compute a confidence score for each detected symmetric region $R$, and make a prediction by thresholding this score: \begin{equation} \text{score}(R) = w^{\text{obj}} \cdot {fea}(R). \end{equation} Here $w^{\text{obj}}$ denotes the parameters corresponding to the features extracted within the object region, i.e., the first half of $w^{\text{app}}$, and $fea(R)$ is the average 34-dimensional feature vector within region $R$. \section{Experiments} \label{sec:exp} In this section, we report part segmentation results for horses and cows. We also conduct diagnostic experiments for our model. In addition, we validate by experiment that our approximate inference is much faster than exact inference while losing little accuracy. {\bf Dataset:} We use a newly annotated dataset on Pascal VOC 2010 \cite{chendetect} to evaluate our part segmentation algorithm. It provides pixelwise semantic part annotations for each object instance. Since we focus on the non-occlusion and non-truncation case, for each animal class we manually select the fully observable animal instances in both the trainval and test sets. We use this refined dataset for training and testing, and we will release it. For horses and cows, there are roughly 150 fully observable bounding box images in trainval and test respectively. Considering the various poses and viewpoints of animals and the cluttered background in Pascal VOC images, we believe the fully observable animal bounding box images are a suitable testbed for our algorithm. We use the bounding box images with part annotations in the Pascal trainval set for structure learning.
As for parameter learning, we use the bounding box images from the Pascal VOC trainval set as positive examples and randomly select a subset of bounding box images of other object classes from the Pascal VOC trainval set as negative examples. We use the bounding box images from the Pascal VOC test set for testing. {\bf Setup:} We consider head, neck, torso and leg as semantic parts. In the structure learning, we set the number of boundary landmarks to 8 for the head, 8 for the neck and 16 for the torso. Thus each compositional tree has 6 levels and 32 leaf nodes. The head node and neck node are at the 4-th level and the torso node is at the 5-th level. We only consider the head part score (i.e., $w_v^{\text{part}}$ is non-zero only if $v$ refers to the head part) since the head is the most discriminative part for animals. Our algorithm outputs the best mixture and the locations of all parts/subparts. We use the locations of all leaf nodes as boundary landmarks, and connect the adjacent leaf nodes of each semantic part to make a closed contour as the part segmentation result. We use intersection-over-union (IOU) as the performance measure. \begin{figure}[!t] \centering \includegraphics[width=260pt]{figures/mixtureNum.pdf} \caption{The performance variation with the number of mixtures for four semantic parts. The effect of having appearance cues and head cues is also shown in this figure.} \label{fig:mixtureNum} \end{figure} \subsection{Efficient Inference} \label{sec:infer} Recall that we make the approximation (\ref{eq:newprob}) in order to allow efficient linear-time inference. We now provide results demonstrating that this approximation loses little in accuracy and gains much in speed. Let $E(\mathbf{I})$ denote the exact minimal energy (quadratic complexity) and $\tilde{E}(\mathbf{I})$ denote the minimal energy computed by our efficient approximate algorithm (linear complexity). We measure the error by $\frac{(E(\mathbf{I}) - \tilde{E}(\mathbf{I}))}{E(\mathbf{I})}$.
We compute this error on all test images of horses and obtain a 0.53\% average error. Furthermore, we compute the average location difference of all leaf nodes between exact inference and our approximate inference algorithm, obtaining an average error of 1.78 pixels, or 1.11\% when normalized by the maximal side length. The above results show that our approximation is extremely accurate. As for speed, the average parsing time per image (maximal side length resized to 160) is about 10 seconds using our inference algorithm, while the average parsing time for exact inference is about 10 minutes per image. This demonstrates the significant speedup of our fast approximate inference algorithm. \subsection{Model Diagnostics} Our diagnostic experiments are based on horse images. {\bf Number of Mixtures:} The structure learning algorithm uses $K$-medoids clustering to find $K$ representative shapes. Figure \ref{fig:mixtureNum} shows how the segmentation performance varies with the parameter $K$ for each semantic part. Intuitively, as the number of mixtures increases, our mixtures of compositional models are able to capture more deformations and variations of animal and part boundaries. Therefore, the segmentation performance improves with the number of mixtures. In particular, small parts (head and neck) benefit significantly from an increased number of mixtures. {\bf Appearance and Head Cues:} We can also see from Figure \ref{fig:mixtureNum} that the performance drops if we do not use appearance cues from semantic labeling or head cues from animal part detection. This result indicates that combining appearance and part information is necessary for localizing boundaries, although these cues are not always correct. {\bf Deformation Ability of the Compositional Model:} Figure \ref{fig:deform} shows that each mixture deals with local deformation, while different mixtures handle large global variation due to poses and viewpoints.
Thus our mixtures of compositional models are able to capture various shape variations. {\bf Failure Cases:} Figure \ref{fig:failure} shows three typical failure cases. In (a), the horse is in a very rare pose which cannot be captured by any mixture. In (b), the semantic labeling result is wrong and the horse boundary is unclear due to dark lighting. An incorrect body segmentation often leads to wrong leg segmentation (e.g., cases (a) and (b)). In (c), the legs are mistakenly segmented although the horse body segmentation is correct. This is because both the detected symmetric structure (red region on the image) and the semantic labeling result are incorrect. \begin{table}[t] \centering \begin{tabular}{l*{6}{c}r} \hline Method &head & neck+torso & leg\\ \hline Our model &34.82 & 55.62 &28.56\\ PD+OS &26.77 & 53.79 & 11.18 \\ \hline PD+GT &38.66 & 60.63 & 19.36 \\ \hline \end{tabular} \vspace{0.3cm} \begin{tabular}{l*{6}{c}r} \hline Method &head & neck & torso & neck+torso & leg\\ \hline Our model &47.21 & 38.01 & 61.02 & 66.74 & 38.18\\ PD+OS &37.32 & N/A & N/A & 60.35 & 27.47 \\ \hline PD+GT &56.64 & N/A & N/A & 67.96 & 40.95 \\ \hline \end{tabular} \caption {Part segmentation results for horses (bottom) and cows (top). The performance measure is IOU (\%). PD+OS refers to the method that combines part detection bounding box and object segmentation. PD+GT refers to the method that combines part detection bounding box and groundtruth segmentation.}\label{tab:horseresult} \end{table} \subsection{Comparison} {\bf Baseline:} There has been a lack of work on semantic part segmentation for animals. However, part-based object detection work \cite{chendetect} is able to output part-level bounding boxes, and there are also many object segmentation works that give object-level segments. Therefore, it is straightforward to combine part detection and object segmentation to output part-level segmentation results. Take the head as an example.
We treat the part of the object segment that lies inside the head bounding box as the head segment. This method is our comparison baseline. We use the state-of-the-art object segmentation algorithm \cite{BharathECCV2014} in the experiment. We conduct our experiments on two animal classes: horse and cow. Table \ref{tab:horseresult} shows quantitative results and Figure \ref{fig:visual} gives some part segmentation visualizations. The horse model has 60 mixtures and the cow model has 30 mixtures. Since the part detection method \cite{chendetect} treats neck+torso as one part, we do not have detection bounding boxes for the neck and torso separately. For cows we did not split neck and torso since the cow neck is always small, in contrast to the long horse neck. We can see that our part segmentation results are significantly better than those of the baseline method (PD+OS). We can also see that our results are only a little lower than those of the method (PD+GT) that combines the part detection bounding box and the groundtruth animal segmentation. Note that this is an ``oracle'' method since the groundtruth segmentation is never available at test time. This result further validates the effectiveness of our method.
\begin{figure*}[t] \centering \hspace{0.4cm} \begin{subfigure}[b]{100pt} \centering \includegraphics[height=100pt]{figures/composition2_figure.pdf} \end{subfigure} \hspace{1.1cm} \begin{subfigure}[b]{320pt} \centering \includegraphics[height=50pt]{figures/mod2_det1.png} \includegraphics[height=50pt]{figures/mod2_det2.png} \includegraphics[height=50pt]{figures/mod2_det3.png} \includegraphics[height=50pt]{figures/mod2_det4.png}\\ \includegraphics[height=50pt]{figures/mod2_fig1.png} \includegraphics[height=50pt]{figures/mod2_fig2.png} \includegraphics[height=50pt]{figures/mod2_fig3.png} \includegraphics[height=50pt]{figures/mod2_fig4.png} \end{subfigure} \vspace{0.2cm} \hspace{0.5cm} \begin{subfigure}[b]{100pt} \centering \includegraphics[height=100pt]{figures/composition1_figure.pdf} \end{subfigure} \hspace{1.0cm} \begin{subfigure}[b]{320pt} \centering \includegraphics[height=50pt]{figures/mod6_det3.png} \includegraphics[height=50pt]{figures/mod6_det4.png} \includegraphics[height=50pt]{figures/mod6_det5.png} \includegraphics[height=50pt]{figures/mod6_det6.png} \\ \includegraphics[height=50pt]{figures/mod6_fig3.png} \includegraphics[height=50pt]{figures/mod6_fig4.png} \includegraphics[height=50pt]{figures/mod6_fig5.png} \includegraphics[height=50pt]{figures/mod6_fig6.png} \end{subfigure} \caption{Two mixtures and corresponding landmark localization results. For each mixture, the left figure is the compositional model, the top row on the right is the landmark localization results, and the bottom row on the right is the input images. 
We can see that each mixture deals with local deformation, and different mixtures handle large variation due to poses and viewpoints.} \label{fig:deform} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[b]{450pt} \includegraphics[height=50pt]{figures/horse_seg1.png} \includegraphics[height=50pt]{figures/horse_seg2.png} \includegraphics[height=50pt]{figures/horse_seg3.png} \includegraphics[height=50pt]{figures/horse_seg4.png} \includegraphics[height=50pt]{figures/horse_seg5.png} \includegraphics[height=50pt]{figures/horse_seg6.png} \includegraphics[height=50pt]{figures/horse_seg8.png} \includegraphics[height=50pt]{figures/horse_seg9.png} \includegraphics[height=50pt]{figures/horse_img1.png} \includegraphics[height=50pt]{figures/horse_img2.png} \includegraphics[height=50pt]{figures/horse_img3.png} \includegraphics[height=50pt]{figures/horse_img4.png} \includegraphics[height=50pt]{figures/horse_img5.png} \includegraphics[height=50pt]{figures/horse_img6.png} \includegraphics[height=50pt]{figures/horse_img8.png} \includegraphics[height=50pt]{figures/horse_img9.png} \end{subfigure} \begin{subfigure}[b]{10pt} \centering \includegraphics[height=100pt]{figures/horse_colormap.pdf} \end{subfigure} \vspace{0.2cm} \begin{subfigure}[b]{450pt} \includegraphics[height=50pt]{figures/cow_seg1.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_seg2.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_seg3.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_seg4.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_seg5.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_seg6.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_seg7.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_seg8.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_seg9.png} \includegraphics[height=50pt]{figures/cow_img1.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_img2.png} \hspace{0.03cm} 
\includegraphics[height=50pt]{figures/cow_img3.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_img4.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_img5.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_img6.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_img7.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_img8.png} \hspace{0.03cm} \includegraphics[height=50pt]{figures/cow_img9.png} \end{subfigure} \begin{subfigure}[t]{10pt} \centering \includegraphics[height=80pt]{figures/cow_colormap.pdf} \end{subfigure} \caption{Typical semantic part segmentation results from various viewpoints and poses for horses (top) and cows (bottom). Best viewed in color.} \label{fig:visual} \end{figure*} \begin{figure*}[!t] \begin{subfigure}[b]{170pt} \includegraphics[height=45pt]{figures/fail2_img.png} \includegraphics[height=45pt]{figures/fail2_seg.png} \includegraphics[height=45pt]{figures/fail2_map.pdf} \caption{} \end{subfigure} \hspace{0.01cm} \begin{subfigure}[b]{160pt} \includegraphics[height=45pt]{figures/fail1_img.png} \includegraphics[height=45pt]{figures/fail1_seg.png} \includegraphics[height=45pt]{figures/fail1_map.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{170pt} \includegraphics[height=45pt]{figures/fail3_img.png} \includegraphics[height=45pt]{figures/fail3_seg.png} \includegraphics[height=45pt]{figures/fail3_map.pdf} \caption{} \end{subfigure} \caption{Three typical failure cases. For each case, the left is the image, the middle is the part segmentation result, and the right is the semantic labeling result. (a) rare pose. (b) mistaken semantic labeling and unclear boundary. (c) correct body segment but wrong leg segment due to mistaken semantic labeling and symmetric structure (red region on the image).} \label{fig:failure} \end{figure*} \section{Conclusion} In this paper, we built a mixture of compositional models combining shape and appearance for the animal part segmentation task.
We proposed a novel structure learning algorithm to learn the mixture of compositional trees, which are able to represent animal shapes of various poses and viewpoints. We also developed a linear complexity algorithm to significantly speed up the inference of the compositional model. We tested our method on horses and cows from the Pascal VOC dataset. The experimental results showed that our method achieves much better part segmentation results than the baseline method. As for future work, we will deal with the occlusion and truncation issues, and enable part sharing when learning the compositional models. \newpage \section*{Supplementary Material} \section*{A. Effect of Increasing Training Data} Our model has a small number of parameters due to parameter sharing across parts, which enables learning the model parameters from limited training data. Figure \ref{fig:supple} shows that the segmentation performance only slightly increases with the number of training images. We can see that our model performs very well even when using 30 training bounding box images. This indicates that our compositional model is able to learn the model parameters from a very limited number of training examples, which we consider another advantage of our model. \begin{figure}[!h] \centering \includegraphics[width = 0.45\textwidth]{figures/supple} \caption{The segmentation performance with respect to the number of training images.} \label{fig:supple} \end{figure} \section*{B. Visualization of Structure Learning Algorithm} The structure learning algorithm in Section 5.1 of the paper includes four steps: clustering, sampling, matching, and composing. Figure \ref{fig:structlearn} shows the intermediate results from clustering, sampling and composing (the final compositional model visualized in a flat manner). \begin{figure}[!t] \centering \includegraphics[width = 0.45\textwidth]{figures/structlearn_new} \caption{Visualization of results from each step of the structure learning algorithm.
Red for head, blue for neck, and green for torso. In (c), the line segment refers to the oriented edge. Best viewed in color.} \label{fig:structlearn} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width = \textwidth]{figures/mixture60} \caption{Visualization of 60 mixtures for horses. Red for head, blue for neck, and green for torso. Best viewed in color.} \label{fig:mixture60} \end{figure*} \section*{C. Visualization of Compositional Models} Figure \ref{fig:mixture60} shows all 60 mixtures we used for horse images. Due to space limitations, the shapes are visualized in a flat manner. The hierarchical visualizations of the compositional models are shown in Figure 2 and Figure 4 of the paper. \section*{D. Proof of Algorithm 1} In this section, we provide a brief proof of Algorithm 1 in the paper. We consider the following problem \begin{equation*} \gamma(x) = \min_{l(x) \leq z \leq u(x)} {(x-h(z))^2 + g(z)}, \end{equation*} where $h(z)$, $u(x)$ and $l(x)$ are all non-decreasing. The variables $x$ and $z$ are defined on a 1-dimensional grid $\{1,2,...,n\}$. Inspired by \cite{felzenszwalb2004distance}, $\gamma(x)$ can be viewed as the lower envelope of a set of truncated parabolas $(x-h(z))^2+g(z)$, each truncated to $u^{-1}(z) \leq x \leq l^{-1}(z)$. The algorithm proceeds in two steps. In the first step, we obtain the lower envelope of all the truncated parabolas by computing the boundary points between adjacent selected parabolas while keeping the truncation constraint satisfied. In the second step, we fill in the values $\gamma(x)$ using the lower envelope obtained in step one. In the paper, we use $range(k)$ and $range(k+1)$ to indicate the range of the $k$-th parabola in the lower envelope, and $idx(k)$ to indicate the grid location $z$ of the $k$-th parabola in the lower envelope. In this proof, for notational simplicity, we use $r(k)$ for $range(k)$ and $i(k)$ for $idx(k)$.
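As a reference for checking the envelope construction below, the problem above can also be solved by a naive $O(n^2)$ brute-force sweep; the sketch uses hypothetical bound and cost functions.

```python
def cgdt_naive(g, h, l, u, n):
    """Brute-force gamma(x) = min over l(x) <= z <= u(x) of (x - h(z))^2 + g(z).

    g, h are functions of grid points z in {1..n}; l, u give per-x bounds.
    Returns a dict x -> gamma(x), with None where no z is feasible.
    """
    gamma = {}
    for x in range(1, n + 1):
        feasible = [z for z in range(1, n + 1) if l(x) <= z <= u(x)]
        gamma[x] = min((x - h(z)) ** 2 + g(z) for z in feasible) if feasible else None
    return gamma

# With the constraint inactive this is the ordinary generalized distance
# transform; with l(x) = u(x) = x the minimum is pinned to z = x.
gamma = cgdt_naive(g=lambda z: z, h=lambda z: z, l=lambda x: 1, u=lambda x: 5, n=5)
assert gamma[3] == 3   # attained at z = 2 or z = 3
gamma = cgdt_naive(g=lambda z: z, h=lambda z: z, l=lambda x: x, u=lambda x: x, n=5)
assert gamma[4] == 4   # forced z = x, so gamma(x) = g(x)
```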
\begin{figure}[!h] \centering \includegraphics[width = 0.45\textwidth]{figures/proof_start} \caption{The lower envelope computation for two parabolas.} \label{fig:proof_start} \end{figure} As shown in Figure \ref{fig:proof_start}, the lower envelope computation for two parabolas is as follows. We first compute the intersection point \begin{equation} s = \frac{(g(z)+h^2(z))-(g(z_1)+h^2(z_1))}{2h(z)-2h(z_1)}. \end{equation} For $x \leq s$, the lower envelope is the left parabola rooted at $(h(z_1),g(z_1))$; for $x > s$, the lower envelope is the right parabola rooted at $(h(z),g(z))$. The algorithm proceeds with $z$ going from $1$ to $n$. Each time we check the parabola rooted at $(h(z),g(z))$, and update the lower envelope set accordingly. Now suppose there are already $k$ parabolas selected in the lower envelope set. For a new value $z$, we compute the lower envelope between the parabola rooted at $(h(z),g(z))$ and the rightmost parabola in the lower envelope set, rooted at $(h(i(k)),g(i(k)))$. We can easily compute their intersection \begin{equation} s = \frac{(g(z)+h^2(z))-(g(i(k))+h^2(i(k)))} {2h(z)-2h(i(k)) }. \end{equation} To satisfy the truncation constraint, we project $s$ onto the interval $[u^{-1}(z), \ l^{-1}(z)]$. We consider the following three cases for computing the boundary points. We use $s^*$ to denote the projected $s$. \begin{figure}[h] \centering \includegraphics[width = 0.45\textwidth]{figures/proof_case1} \caption{Case 1. Best viewed in color.} \label{fig:proof_case1} \end{figure} \begin{figure}[h] \centering \includegraphics[width = 0.45\textwidth]{figures/proof_case2} \caption{Case 2. Best viewed in color.} \label{fig:proof_case2} \end{figure} \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{figures/proof_case3} \caption{Case 3. Best viewed in color.} \label{fig:proof_case3} \end{figure} Figure \ref{fig:proof_case1} shows the first case, where $s^* > r(k+1)$.
Note that Algorithm 1 in the paper automatically implies that \begin{equation} r(k+1) = l^{-1}(i(k)). \end{equation} In this case, we add the current parabola induced by $h(z)$ to the lower envelope set. Its range is $[r(k+1),l^{-1}(z)]$, and we do not need to consider the other parabolas in the envelope set. Figure \ref{fig:proof_case2} shows the second case, where $s^* \in [r(k),r(k+1)]$. In this case, we add the current parabola induced by $h(z)$ to the lower envelope set, and its range is $[s,l^{-1}(z)]$. As in the first case, we do not need to consider the other parabolas in the envelope set either. Figure \ref{fig:proof_case3} shows the third case, where $s^* < r(k)$. In this case, we can only guarantee that, for the parabola induced by $h(z)$, the range $[r(k),l^{-1}(z)]$ is definitely in the lower envelope set. This means that we remove the $k$-th parabola (induced by $h(i(k))$) from the lower envelope. But for $x<r(k)$, we have to compare the parabola induced by $(h(z),g(z))$ with the other parabolas in the lower envelope set by iteratively decreasing $k$, and for each new $k$ we repeat the same operations discussed above. Note that each parabola is removed at most once from the lower envelope set, so the algorithm runs in linear complexity. After obtaining the boundary points, we need to fill in the values of $\gamma(x)$. The difficulty is that some boundary points are not continuous, e.g., in Figure \ref{fig:proof_case1} and Figure \ref{fig:proof_case4}. For Figure \ref{fig:proof_case1}, at the boundary point we select the value given by the left parabola, and for Figure \ref{fig:proof_case4}, we select the value given by the right parabola. Note that Figure \ref{fig:proof_case2} and Figure \ref{fig:proof_case4} both belong to the second case, where $s^* \in [r(k),r(k+1)]$. The difference is that in Figure \ref{fig:proof_case2} we have $s^* = s$ since $u^{-1}(z) < s$, while in Figure \ref{fig:proof_case4} we have $s^* = u^{-1}(z)$ since $u^{-1}(z) > s$.
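As a quick numerical sanity check on the intersection formula used throughout this proof (the parabola parameters below are arbitrary, made-up values):

```python
def parabola_intersection(h1, g1, h2, g2):
    """x where (x - h1)^2 + g1 == (x - h2)^2 + g2; requires h1 != h2."""
    return ((g2 + h2 ** 2) - (g1 + h1 ** 2)) / (2.0 * h2 - 2.0 * h1)

# Both parabolas attain the same value at the returned intersection point.
s = parabola_intersection(1.0, 0.5, 3.0, 0.0)
assert abs(s - 1.875) < 1e-9
assert abs(((s - 1.0) ** 2 + 0.5) - ((s - 3.0) ** 2 + 0.0)) < 1e-9
```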
\begin{figure}[h] \centering \includegraphics[width= 0.45\textwidth]{figures/proof_case4} \caption{ Discontinuous boundary. Best viewed in color.} \label{fig:proof_case4} \end{figure} \newpage {\small \bibliographystyle{ieee}
\section{Introduction} As the intensive EUV radiation from newly formed star(s) ionizes the star-facing surface layer of a molecular cloud, the ionization heating induces a shock which propagates into the cloud, compressing material to form highly condensed core(s). These cores are potential sites for EUV radiation triggered new star formation. Simultaneously, the H\hspace{1pt}$\alpha$ emission produced by the recombination of ions with electrons creates a bright rim around the star-facing side of the cloud. The structures formed are termed Bright Rimmed Clouds (BRCs), the best candidates for studying the feedback of massive stars on their surrounding and parental molecular cloud(s). This process is the Radiation Driven Implosion (RDI) mechanism for EUV radiation triggered star formation in BRCs. Multi-wavelength observations have revealed various physical and morphological properties of BRCs at H\hspace{1pt}{\sc ii}\ regions. BRCs observed so far tend to be classified as types A, B and C, in increasing order of the curvature of their bright rim \citep{SugitaniEtAl1991-1, SugitaniOgura1994-1}, though some BRCs are found in an M-shaped morphology \citep{Osterbrock-1957, UrquhartEtAl2006-1, Karr-2005}. Starting with a uniform and spherical molecular cloud illuminated by plane-parallel ionizing radiation from one side, theoretical modelling based on the RDI mechanism successfully revealed the formation process of standard type A, B or C BRCs. These simulated BRCs are all symmetric about their structural axis, and the latter aligns with the radiation flux direction, i.e., the line connecting the tip of a BRC and the centre of the exciting star \citep{Bertoldi1989-1, LeflochLazareff1994-1, KesselBurkert2003-1, MiaoEtAl2006-1, MiaoEtAl2009-1, Gritschneder2009-1, MiaoEtAl2010-1, BisbasEtAl2011-1, HaworthHarries2012-1, Tremblin-2012}. 
However, further investigation of the observed structures at H\hspace{1pt}{\sc ii}\ boundaries finds that: i) some BRCs do not show a comet-like morphology, but fragment-core linear structures perpendicular to the EUV radiation flux direction (e.g. \citet{ChauhanEtAl2011-1}); ii) most type A, B and C BRCs are not symmetric about their structural axes, which also do not necessarily align with the radiation flux direction of their exciting stars \citep{Thompson-2004, MorganEtAl2004-1}. As shown in Figure \ref{angle-distribution}, there is a distribution in the angle between the axis of a BRC and the direction from the tip of the BRC to the centre of the exciting star, based on the observational data on groups of BRCs in about 10 emission nebul\ae\ \citep{Osterbrock-1957}; iii) there are more objects with structures which cannot be categorized as type A, B or C than those which can be; iv) in those asymmetrical BRC structures, RDI triggered multi-star or sequential formation is often found not only at the head of BRCs, but also in the more compressed side layer \citep{FukudaEtAl2013-1, Makela-2013, Sicilia-Aguilar-2014, Panwar-2014}. It is clearly difficult to accommodate the variety of observed structural and physical features with RDI modelling starting from a spherical cloud. \begin{figure} \center \includegraphics[width=0.47\textwidth]{figure1.png} \caption{The distribution of the inclination angle $\varphi$, which is defined as the angle between the major axis and the $x$ axis as shown in Figure \ref{orientation}, of the BRCs in about 10 emission nebul\ae\ \citep{Osterbrock-1957}.} \label{angle-distribution} \end{figure} Observations of the physical properties of molecular clouds have revealed that spherical clouds are very rare \citep{Gammie-2003,Doty-2005, RathborneEtAl2009-1} and there are many physical processes which lead to the formation of prolate molecular clouds \citep{Hao-2011, Gholipour-2013}. 
To investigate the mechanism for the development of structures of various morphologies other than standard (i.e., symmetrical) BRCs at H\hspace{1pt}{\sc ii}\ boundaries, using an initially non-spherical cloud, such as a prolate cloud, in RDI modelling seems the most feasible choice. Two additional geometric parameters are introduced: the ratio of the major ($a$) to minor ($b$) axis, $\gamma = \frac{a}{b}$, and the inclination angle $\varphi$; the configuration is illustrated in Figure \ref{orientation}. In this way the variety of morphological structures that RDI simulations can produce is expected to increase greatly. \begin{figure} \center \includegraphics[width=0.47\textwidth]{figure2.pdf} \caption{The initial geometry of a prolate cloud in simulations. $x_{\mathrm{p}}$, $x_{\mathrm{a}}$ and $x_{\mathrm{max}} $ are the $x$ projections of the apex, the semi-major axis $a$ and the farthest point of the cloud from the $z$ axis respectively.} \label{orientation} \end{figure} A set of RDI simulations was conducted using an initially prolate cloud with $\varphi = 0 {}^{\circ} $, which we call `perpendicular' to the EUV radiation flux. It was found that a perpendicular prolate cloud evolves into a fragment-core linear structure perpendicular to the ionizing flux, covered by a bright top layer on the star side. Both the initial physical/geometrical conditions of the cloud and the strength of the radiation flux affect the distribution of condensed cores over the structure, the total condensed core mass and the triggered star formation time. The sporadic fragment-core structures at H\hspace{1pt}{\sc ii}\ boundaries can be interpreted as the effect of the interaction between a perpendicular prolate cloud and a plane-parallel EUV radiation field. More details are presented in \citet{Kinnear2014-1}. 
In this paper, we expand the investigation to the evolution of a prolate cloud at H\hspace{1pt}{\sc ii}\ boundaries, with the cloud's major axis inclined to the EUV radiation flux by an angle $ 0{}^{\circ} \le (90{}^{\circ} - \varphi) \le 90{}^{\circ}$. This is performed in order to understand the mechanism for the formation of a broad range of asymmetrical structures and the consequent triggered star formation inside them. In the rest of the paper, we first present a brief description of the codes used, the initial conditions of the clouds for the simulations, and a derived quantity which gives an indication of the dynamical evolution of prolate clouds at different H\hspace{1pt}{\sc ii}\ boundaries. In the results and discussion section we present simulation results and discuss the physical mechanism responsible for the formation of different morphological structures at H\hspace{1pt}{\sc ii}\ boundaries. Finally, conclusions are drawn. \section{The codes and initial conditions} \label{code-initial} \subsection{The codes} \label{code} An updated Smoothed Particle Hydrodynamics (SPH) code, {\sc Code ii}, is used for all of the simulations investigated in this paper; it is an extended version of {\sc Code i}\ \citep{NelsonLanger1999-1}. {\sc Code i}\ is an SPH code for investigating the effect of an isotropic FUV interstellar background radiation on the evolution of a molecular cloud, solving the full set of hydrodynamic equations, including the self-gravity of the cloud, together with a chain of chemical differential equations. Based on {\sc Code i}, a numerical solver with a ray-tracing algorithm \citep{KesselBurkert2000-1} for the plane-parallel EUV radiative transfer equation was implemented, as well as the relevant heating and cooling processes. We refer to this version as {\sc Code ii}; it can be used to investigate the interaction of plane-parallel ionizing radiation from massive star(s) with a molecular cloud of arbitrary initial geometry. 
More detailed descriptions of the code can be found in \citet{MiaoEtAl2006-1} and \citet{Kinnear2014-1}. Owing to the asymmetrical morphology of the simulated clouds, the condensed cores triggered by RDI induced shocks are not usually aligned with the structural axis, and multi-star formation is also frequently observed. To identify the positions of these triggered condensed cores and analyse their physical properties, a code called `{\sc CoreFinder}' is used, for which a detailed description is included in \citet{Kinnear2014-1}. For the simulations presented in this paper, all `cores' are defined as regions of at least 0.03 M$_{\odot}$\ ($> 100$ SPH particles) in which the density of each sampled SPH particle satisfies $n \ge 10^6$ $n_{\mathrm{H}_2}$ cm${}^{-3}$. As in \citet{Kinnear2014-1}, the cores formed in the simulations are the first generation of RDI triggered objects. Simulations of the subsequent generations of RDI triggered star formation become excessively slow after the initial extremely high density core(s) form, which leads to infinitesimally small time steps and the effective halt of the simulation. This could be avoided by the use of the `sink' particles of \citet{BateBurkert1997-1}, but these are not currently implemented in {\sc Code ii}. All of the column density images are produced using the software {\sc SPLASH}, which is specially designed for processing SPH numerical data \citep{Splash}. \subsection{Initial conditions} \label{initil} \subsubsection{Initial geometry} The geometrical shape of a prolate cloud in the simulations is described by a pair of initial parameters ($a$, $\gamma$), and its orientation to the plane-parallel ionizing radiation flux by $ 0{}^{\circ} \le \varphi \le 90{}^{\circ}$, as shown in Figure \ref{orientation}. The ionizing flux onto the surface of the cloud on the star side is treated as plane-parallel to the $z$ axis, given the assumed large ratio of the distance to the illuminating star to the cloud size. 
In Figure \ref{orientation}, $x_{\mathrm{p}}$ is the $x$ coordinate of the apex of the cloud. Letting $r$ be the distance from the centre of a point on the surface of the prolate cloud in the $xz$ plane, and $\theta$ the angle between $r$ and the $x$ axis, we have \begin{equation} r = \frac{a}{\sqrt{\cos^2 (\theta - \varphi) + \gamma^2 \sin^2(\theta - \varphi)}}. \end{equation} The maximum $x$ coordinate and the $x$ projection of the semi-major axis are \begin{eqnarray} x_{\mathrm{max}} & = & a \sqrt{\cos^2 \varphi + \frac{\sin^2 \varphi}{\gamma^2}}, \label{xmax} \\ x_{\mathrm{a}} & = & a \cos \varphi. \label{xa} \end{eqnarray} As an important indicator of the dynamical evolution of a prolate cloud with an inclination angle $\varphi$, the effective area illuminated by the EUV radiation is \begin{equation} A = \frac{\pi a x_{\mathrm{max}}}{\gamma} = \frac{\pi a^2}{\gamma} \sqrt{\cos^2 \varphi + \frac{\sin^2 \varphi}{\gamma^2}}, \label{eff_area} \end{equation} which is a decreasing function of $\varphi$, with maximum $\pi a b$ at $\varphi = 0{}^{\circ}$ and minimum $\pi b^2$ at $\varphi = 90{}^{\circ}$. For the convenience of the following discussion, we call the half cloud with $\varphi \le \theta \le \pi + \varphi $ the `front' semi-ellipsoid, and that with $\pi + \varphi < \theta \le 2 \pi + \varphi $ the `back' semi-ellipsoid. \subsubsection{Initial mass distribution} As in our investigation of the evolution of a perpendicular prolate cloud at an H\hspace{1pt}{\sc ii}\ boundary \citep{Kinnear2014-1}, all of the molecular clouds in our simulations start with a uniform density, which is rendered by a glass-like distribution of SPH particles created using {\sc Gadget-2} \citep{Springel2005-1}. A glass-like distribution is taken as a good approximation to a uniform mass distribution. We choose the initial mass of the cloud to be 30 M$_{\odot}$, the same as used in \citet{Kinnear2014-1}. 
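The geometric quantities of the initial-geometry subsection above can be collected into a short numerical sketch (hypothetical Python helpers; the checks below only use the limiting values $\pi a b$ and $\pi b^2$ stated in the text):

```python
import math

def r_surface(a, gamma, theta, phi):
    """Distance from the cloud centre to the surface at angle theta
    (the polar form of the inclined ellipse in the xz plane)."""
    u = theta - phi
    return a / math.sqrt(math.cos(u) ** 2 + gamma ** 2 * math.sin(u) ** 2)

def x_max(a, gamma, phi):
    """x projection of the surface point farthest from the z axis."""
    return a * math.sqrt(math.cos(phi) ** 2 + math.sin(phi) ** 2 / gamma ** 2)

def illuminated_area(a, gamma, phi):
    """Effective EUV illuminated area A = pi * a * x_max / gamma."""
    return math.pi * a * x_max(a, gamma, phi) / gamma
```

At $\varphi = 0$ this reduces to $\pi a b$ and at $\varphi = 90^{\circ}$ to $\pi b^2$, reproducing the limits quoted in the text.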
The number of SPH particles for each molecular cloud is determined by the mass resolution of $3.0 \times 10^{-4}$ M$_{\odot}$\ per SPH particle, i.e., 100,000 particles, which is higher than required by the convergence test of {\sc Code ii}. A zero initial velocity field is set for all of the molecular clouds in the simulations. Since the objective of our investigation is to explore the evolution of the RDI triggered collapse of an initially inclined prolate cloud, no cloud which is initially unstable against its self-gravity, before interacting with the ionizing radiation flux, should be chosen. Stability is assured by applying the Jeans criterion for an isolated prolate cloud, in terms of the Jeans number $J$, the ratio of the initial gravitational energy of a prolate cloud to its thermal energy, which can be expressed as \citep{Bastien1983-1} \begin{equation} J = \frac{\pi \; G \; \rho\; \mu\; b^2}{15\; e\; R_{\mathrm{g}}\; T} \ln{\left(\frac{1 + e }{1 - e }\right)} \le 1.0, \label{jeansnumber} \end{equation} where $\rho$, $T$ and $\mu$ are the mass density, the initial temperature and the mean molecular mass of the prolate cloud respectively, $G$ and $R_{\mathrm{g}}$ are the standard physical constants, and the eccentricity is \begin{equation} e = \sqrt{1 - \frac{b^2}{a^2} }= \frac{\sqrt{\gamma^2 - 1}}{\gamma}. \end{equation} Using the relation between the density, mass $M$ and volume of a cloud, we can derive the condition on the major axis $a$ of an initially stable prolate cloud, \begin{eqnarray} a \ge a_{\mathrm{crit}} = 0.052 \; \frac{M^*\; \gamma}{T\; \sqrt{\gamma^2 - 1}} \ln \left(\frac{\gamma + \sqrt{\gamma^2-1}}{\gamma - \sqrt{\gamma^2 - 1}}\right), \label{majoraxis} \end{eqnarray} where $M^*$ is the mass of the prolate cloud in units of solar masses, and $a$ and $a_{\mathrm{crit}}$ are in units of parsecs. 
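Equation \ref{majoraxis} is easy to check numerically. The sketch below (a hypothetical helper) reproduces, for a 30 M$_{\odot}$\ cloud at 60 K (the cloud mass and initial temperature used in this paper), the critical semi-major axis $a_{\mathrm{crit}} \approx 0.079$ pc quoted later for $\gamma = 2$:

```python
import math

def a_crit(m_star, temp, gamma):
    """Critical semi-major axis (pc) for Jeans stability of a prolate
    cloud, following the quoted form
    0.052 * M* * gamma / (T * sqrt(gamma**2 - 1)) * ln(...).

    m_star: cloud mass in solar masses; temp: temperature in K.
    Valid for gamma > 1; the gamma -> 1 limit is 2 * 0.052 * m_star / temp.
    """
    s = math.sqrt(gamma ** 2 - 1.0)
    return 0.052 * m_star * gamma / (temp * s) * math.log((gamma + s) / (gamma - s))
```

Note the spherical limit: as $\gamma \to 1$, $\ln((\gamma+s)/(\gamma-s)) \to 2s$, so $a_{\mathrm{crit}} \to 2 \times 0.052\, M^*/T$, i.e. 0.052 pc for these parameters.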
For a given molecular cloud of mass $M^*$, initial temperature $T$ and axial ratio $\gamma$, a minimum value $a_{\mathrm{crit}}$ can thus be estimated; the major axis of an initially gravitationally stable cloud should satisfy $a > a_{\mathrm{crit}}$. \subsection{Boundary condition} The clouds in our simulations are subject to an isotropic interstellar background FUV radiation of one Habing unit \citep{Habing1968-1} and an ionizing EUV radiation with a flux of $ 10^{9}$ cm$^{-2}$ s$^{-1}$ (typical of the boundary of an H\hspace{1pt}{\sc ii}\ region) directed parallel to the $z$-axis (along the negative $z$ direction), as illustrated in Figure \ref{orientation}. A constant pressure boundary condition is applied, with the value of the external pressure set equivalent to an external medium of atomic hydrogen of $n\mathrm{(H\hspace{1pt}{\sc i})} = 10 $ $n_{\mathrm{H}_2}$ cm${}^{-3}$\ and temperature 100 K. An outflow condition is imposed at a fixed boundary a few times the initial size of the cloud \citep{NelsonLanger1999-1}. \subsection{The ionization penetration depth parameter} In order to classify the initial dynamical regime of a perpendicular prolate cloud ($\varphi = 0$), a dimensionless parameter, \ensuremath{d_{\mathrm{EUV}}}, was defined as the ratio of the physical ionizing radiation penetration depth to the semi-minor axis of the cloud. For this scenario, $\ensuremath{d_{\mathrm{EUV}}} = \frac{F_{\mathrm{EUV}} \, \gamma }{\alpha_{\mathrm{B}} \, a \, n^2}$ \citep{Kinnear2014-1}. When $\varphi \ne 0 {}^{\circ}$, we modify the definition to the ratio of the physical ionization penetration depth to the characteristic depth of the cloud, defined as half of the longest path through the cloud in the direction of the radiation, which is always the depth of the cloud at $x=0$, $y=0$. 
\begin{eqnarray} \ensuremath{d_{\mathrm{EUV}}} & = & \frac{\frac{F_{\mathrm{EUV}}}{\alpha_{\mathrm{B}} n^2}}{r(\theta = 90{}^{\circ})} \nonumber \\ & = & \frac{F_{\mathrm{EUV}} \, \gamma }{\alpha_{\mathrm{B}} \, a \, n^2} \sqrt{\frac{\sin^2 \varphi}{\gamma^2} + \cos^2 \varphi} \nonumber \\ & = & 1.6 \times 10^3 \, \frac{F^*_{\mathrm{EUV}}\, \gamma}{n^2 \, a^*} \sqrt{\frac{\sin^2 \varphi}{\gamma^2} + \cos^2 \varphi} \label{pene-depth} \end{eqnarray} where the major axis $a^{*}$ is in units of pc, $F_{\mathrm{EUV}}^*$ is the EUV ionizing radiation flux in units of $10^9$ cm$^{-2}$ s$^{-1}$, and $\alpha_\mathrm{B}$ is the recombination coefficient of hydrogen ions with electrons under the `on-the-spot' approximation, with the value $2.0 \times 10^{-13}$ cm$^3$ s$^{-1}$ at a temperature of about $10^4$ K \citep{DysonWilliams1997-1}. This is taken as a constant, since the equilibrium temperature of the ionized material is $\approx 10^4$ K and the dependence of $\alpha_\mathrm{B}$ on temperature is weak around that temperature. As already discussed in \citet{Kinnear2014-1}, when $\ensuremath{d_{\mathrm{EUV}}} \ll 1$ for $\varphi = 0$, the prolate cloud is in the RDI triggered shock dominant regime, and the collapse of the cloud proceeds in the `foci convergence' mode, i.e., two high density cores form at the two ends of a filament; when $\ensuremath{d_{\mathrm{EUV}}} \le 1$, the cloud can still be triggered by RDI to collapse, but through `linear convergence', i.e., a few high density cores form along the whole filament; when $\ensuremath{d_{\mathrm{EUV}}} \ge 1$, the cloud is in the photo-ionization dominant regime and would be photo-evaporated. As \ensuremath{d_{\mathrm{EUV}}}\ can take a similar range of values for all $\varphi$ in Equation \ref{pene-depth}, we would expect it to play a similar role for $\varphi \neq 0$. 
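Equation \ref{pene-depth} is straightforward to evaluate. The hypothetical sketch below reproduces the \ensuremath{d_{\mathrm{EUV}}}\ values quoted later in this paper for cloud G1200(5) ($a^* = 0.784$ pc, $\gamma = 2$, $n = 1200$ $n_{\mathrm{H}_2}$ cm${}^{-3}$, $F^*_{\mathrm{EUV}} = 1$), namely 0.224\% at $\varphi = 45{}^{\circ}$ and 0.283\% at $\varphi = 0{}^{\circ}$:

```python
import math

def d_euv(f_star, n, a_star, gamma, phi):
    """Ionization penetration depth parameter (dimensionless).

    f_star: EUV flux in units of 1e9 cm^-2 s^-1; n: cloud density in
    H2 cm^-3; a_star: semi-major axis in pc; phi: inclination angle
    in radians.
    """
    geometry = math.sqrt(math.sin(phi) ** 2 / gamma ** 2 + math.cos(phi) ** 2)
    return 1.6e3 * f_star * gamma / (n ** 2 * a_star) * geometry
```

The geometric factor interpolates between 1 at $\varphi = 0$ and $1/\gamma$ at $\varphi = 90{}^{\circ}$, so \ensuremath{d_{\mathrm{EUV}}}\ decreases monotonically with inclination for a fixed cloud.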
\section{Results and discussions} Three sets of simulations were conducted using prolate clouds of different geometrical shapes ($\gamma$), inclination angles ($\varphi$), initial densities ($n$) and ionising fluxes ($F_{\mathrm{EUV}}$), to investigate the effect of their variations on the evolution of a prolate cloud. Table \ref{seriessummary} contains a summary of the ranges of properties for each set. All of the clouds have an initial temperature of 60 K and are illuminated by a 50,000 K star providing a flux at the cloud, $F_0 = 10^9$ cm$^{-2}$ s$^{-1}$ (or 2.0 $\times 10^9$ cm$^{-2}$ s$^{-1}$, when specified), the same as that for \citet{Kinnear2014-1}. In this section, we first describe the evolution of a prolate cloud of an inclination angle of 45${}^{\circ}$ in detail, then investigate the influence of changing initial inclination angle $\varphi$ on the evolution of the same cloud. Next we discuss how the initial geometry of a prolate cloud affects the morphological evolution of a cloud and RDI triggered star formation inside it. Finally we propose a formation mechanism for the variety in morphological structures found at H\hspace{1pt}{\sc ii}\ boundaries. \begin{table} \centering \begin{tabular}{lcccc} \cline{1-5} & $n$ & $\gamma$ & $\varphi$ & $F_0$ \\ Series name & $n_{\mathrm{H}_2}$ cm${}^{-3}$\ & & ${}^{\circ}$ & $\times 10^{9}$ cm$^{-2}$ s$^{-1}$ \\ G1200 & 1200 & 1-10 & 0-90 & 1 \\ density \& flux & 100-1200 & 2 & 60 & 1 \& 2 \\ irregular & 400 & 2-3.5 & 60-85 & 2\\ \cline{1-5} \end{tabular} \caption{Summary of the test series conducted. $n$ is initial density, $\gamma$ the axial ratio, $\varphi$ the inclination angle and $F_0$ the incident flux.} \label{seriessummary} \end{table} \begin{table} \centering \begin{tabular}{lcccc} \cline{1-5} \multicolumn{5}{c}{G1200} \\ No. 
& $\gamma$ & $a_{\mathrm{crit}}$ (pc) & $a_{1200}$ (pc) & \ensuremath{d_{\mathrm{EUV}}}\ (\%) \\ 1 & 1.00 & 0.052 & 0.494 & 0.225 \\ 2 & 1.25 & 0.060 & 0.573 & 0.218 \\ 3 & 1.50 & 0.067 & 0.648 & 0.219 \\ 4 & 1.75 & 0.073 & 0.718 & 0.221 \\ 5 & 2.00 & 0.079 & 0.784 & 0.224 \\ 6 & 2.25 & 0.084 & 0.849 & 0.228 \\ 7 & 2.50 & 0.089 & 0.910 & 0.232 \\ 8 & 2.75 & 0.093 & 0.970 & 0.237 \\ 9 & 3.00 & 0.097 & 1.028 & 0.241 \\ 10 & 3.25 & 0.101 & 1.084 & 0.246 \\ 11 & 3.50 & 0.104 & 1.139 & 0.250 \\ 12 & 3.75 & 0.107 & 1.193 & 0.256 \\ 13 & 4.00 & 0.110 & 1.245 & 0.260 \\ 14 & 4.50 & 0.116 & 1.347 & 0.268 \\ 15 & 5.00 & 0.121 & 1.445 & 0.277 \\ 16 & 5.50 & 0.126 & 1.540 & 0.285 \\ 17 & 6.00 & 0.130 & 1.632 & 0.292 \\ 18 & 7.00 & 0.138 & 1.808 & 0.307 \\ 19 & 8.00 & 0.145 & 1.977 & 0.320 \\ \cline{1-5} \end{tabular} \caption{Parameters of the molecular clouds of the G1200 series with an inclination angle $\varphi = 45{}^{\circ}$. From left to right, columns 1-3 are the identifying number, the axial ratio and the critical semi-major axis defined by Equation \ref{majoraxis}. Columns 4-5 are the semi-major axis and \ensuremath{d_{\mathrm{EUV}}}\ defined by Equation \ref{pene-depth}. All of the semi-major axes and critical semi-major axes are in units of pc and the penetration depth is in \%.} \label{G1200(5)-angle-45} \end{table} \subsection{Evolution of cloud G1200(5) with $\varphi = 45{}^{\circ}$} \label{G1200_5} Table \ref{G1200(5)-angle-45} lists all of the relevant parameters of clouds with initial density 1200 $n_{\mathrm{H}_2}$ cm${}^{-3}$\ and inclination angle $\varphi = 45{}^{\circ}$, for variations of the initial axial ratio $\gamma$. The cloud considered here is the fifth in the G1200 set, denoted G1200(5). It has an initial shape defined by $\gamma = 2.0$, and its major axis (0.784 pc) is $\approx 10$ times its critical major axis (0.079 pc) derived using Equation \ref{majoraxis}, indicating its initial stability against gravitational collapse. 
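The $a_{1200}$ column in Table \ref{G1200(5)-angle-45} follows from holding the cloud mass and uniform density fixed while varying $\gamma$: the ellipsoid volume $4\pi a b^2/3 = 4\pi a^3/(3\gamma^2)$ is then constant, so $a$ scales as $\gamma^{2/3}$. A minimal sketch (the spherical value 0.494 pc is taken from the table):

```python
def semi_major(a_sphere, gamma):
    """Semi-major axis at fixed mass and uniform density.

    The ellipsoid volume V = 4*pi*a*b**2/3 = 4*pi*a**3/(3*gamma**2) is
    held constant, so a scales as gamma**(2/3) times the spherical
    (gamma = 1) radius a_sphere.
    """
    return a_sphere * gamma ** (2.0 / 3.0)
```

This reproduces the tabulated values, e.g. 0.784 pc at $\gamma = 2$ and 1.977 pc at $\gamma = 8$, to within rounding.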
Figure \ref{col-den-1200-45} shows an evolutionary sequence of the column density of the cloud over a period of 0.13 Myr. We also plot the corresponding mean axial density distribution along the $x$ axis in Figure \ref{axial-den-1200-45}, using the established binning method for describing axial profiles of SPH particle properties \citep{Kinnear2014-1, NelsonLanger1999-1, Nelson-1993}. This view provides a qualitative picture of the location of the condensed cores formed by the RDI effect in the shocked layer of gas in the cloud. It is seen from the above two figures that 28 kyr after the radiation flux is switched on, a shock is established by the ionization heating. This shock starts compressing the cloud surface on the star-facing side. A slightly compressed thin layer of mean density $\approx 3.0 \times 10^3$ $n_{\mathrm{H}_2}$ cm${}^{-3}$\ appears at the EUV exposed surface, as shown in the upper-middle panels of Figures \ref{col-den-1200-45} and \ref{axial-den-1200-45}. The density at $x = 0.49$ pc is slightly higher than the axial mean density, which reflects the very early stage of condensed core formation around the apex of the cloud. From $t = 0.028$ to 0.11 Myr, the shock continues propagating into the cloud. The density in the thin shocked layer continues increasing and the structure of the head becomes more distinctive. The non-head mean axial density and the central density of the head increase to $10^4$ and $10^5$ $n_{\mathrm{H}_2}$ cm${}^{-3}$\ respectively at 0.054 Myr, and to $2.0 \times 10^4$ and $10^6$ $n_{\mathrm{H}_2}$ cm${}^{-3}$\ respectively at 0.08 Myr. At $t = 0.11$ Myr, the density in the shocked layer increases dramatically towards the apex of the cloud (at $x = 0.32$ pc). The density jumps from $\approx 8 \times 10^4$ to $8 \times 10^5$ $n_{\mathrm{H}_2}$ cm${}^{-3}$\ at $x = 0.24$ pc, and then to the maximum $4 \times 10^6$ $n_{\mathrm{H}_2}$ cm${}^{-3}$\ at the apex. 
It is interesting to find from the magnified image in the left panel of Figure \ref{zooming-col-1200-45} that a filament structure forms at the head, aligned with the EUV radiation direction. The structure has a length of roughly 0.03 pc and its top end coincides with the apex of the cloud. The gas contained in both the shocked surface layer and the filament becomes further compressed by $t = 0.13$ Myr. At this stage the maximum mean densities in the shocked layer and in the filament are $ 10^5 - 10^6 $ and $ > 10^9$ $n_{\mathrm{H}_2}$ cm${}^{-3}$\ respectively. The right panel of Figure \ref{zooming-col-1200-45} further reveals the formation of two high density cores in close proximity. These dual cores may be the result of fragmentation of the filament-like structure observed at $t=0.11$ Myr. \begin{figure*} \center \includegraphics[width=0.85\textwidth]{figure3.pdf} \caption{Sequential evolution of the column number density for cloud G1200(5) with the inclination angle $\varphi = 45{}^{\circ}$. The time sequence runs left to right, then top to bottom.} \label{col-den-1200-45} \end{figure*} \begin{figure*} \center \includegraphics[width=0.85\textwidth]{figure4.pdf} \caption{Axial mean density distribution along the $x$ axis of the cloud G1200(5), corresponding to the column density evolutionary snapshots in Figure \ref{col-den-1200-45}.} \label{axial-den-1200-45} \end{figure*} \begin{figure} \center \includegraphics[width=0.5\textwidth]{figure5.pdf} \caption{Magnification of the column density profiles at times $t = 0.11$ (left) and 0.13 Myr (right) at the head of the cloud G1200(5) with $\varphi = 45{}^{\circ}$.} \label{zooming-col-1200-45} \end{figure} Overall, the cloud G1200(5) with an initial inclination angle of 45${}^{\circ}$ evolves to an asymmetrical type B BRC, with double RDI triggered high density cores embedded at its head, very close to the apex of the BRC. The cores are potential sites for new star formation. 
At the same time the apex of the cloud moves down from $z = 0.61$ pc to $z = 0.035$ pc, corresponding to a net recoil velocity of $\left< v_z \right> = 4.3$ km s$^{-1}$, due to the rocket effect caused by the gas evaporating from the star-facing surface irradiated by the ionizing radiation \citep{OortSpitzer1955-1}. Next we investigate the effect of the inclination of a prolate cloud on its dynamical evolution. \subsection{Evolution of G1200(5) with $\bf 0{}^{\circ} \le \varphi \le 90 {}^{\circ} $} To investigate the dependence on the inclination, the cloud G1200(5) is simulated for a range of inclinations $0{}^{\circ} \le \varphi \le 90 {}^{\circ}$. The sequence of inclination angles is chosen as 0, 15, 30, 45, 60, 75 and 90${}^{\circ}$. The values of \ensuremath{d_{\mathrm{EUV}}}\ for these seven clouds are listed in Table \ref{G1200(5)-diff-angle}, from which we can see that the ionizing penetration depth decreases with $\varphi$ and is very shallow, with values $\ll 1$. As a result, all of these clouds are in the RDI induced shock-dominated regime and triggered collapse is expected. 
\begin{table} \centering \begin{tabular}{llllllll} \cline{1-8} $\varphi ({}^{\circ})$ & 0 & 15 & 30 & 45 & 60 & 75 & 90 \\ \ensuremath{d_{\mathrm{EUV}}} & \multirow{2}{*}{0.283} & \multirow{2}{*}{0.276} & \multirow{2}{*}{0.255} & \multirow{2}{*}{0.224} & \multirow{2}{*}{0.187} & \multirow{2}{*}{0.155} & \multirow{2}{*}{0.141} \\ (\%) &&&&&&&\\ \cline{1-8} \end{tabular} \caption{Variation of \ensuremath{d_{\mathrm{EUV}}}\ with $\varphi$ for cloud G1200(5).} \label{G1200(5)-diff-angle} \end{table} \subsubsection{Morphological variation with $\varphi$} The evolutionary sequences of the 7 clouds are found to be qualitatively similar to that of the $\varphi = 45{}^{\circ}$ case, i.e., the formation of a condensed layer on the star-facing surface of the cloud and of highly condensed core(s) near the apex, although the final morphologies themselves depend on the initial inclination angle of the cloud. As the general evolution is so similar, the clouds are not discussed individually; we focus instead on comparing their final morphologies and the physical properties of any high density core(s). In Figure \ref{col-den-1200-all}, the final snapshots of the column density of cloud G1200(5) from the 6 simulations with $\varphi = 0, 15, 30, 60, 75$ and $90{}^{\circ}$ are presented. The equivalent for $\varphi = 45{}^{\circ}$ can be found in Figure \ref{col-den-1200-45}. When the inclination angle $\varphi$ is small ($\le 30{}^{\circ}$), $x_a \approx x_{\mathrm{max}}$ and the EUV radiation mainly illuminates the surface of the front semi-ellipsoid. The effect of the RDI is to drive a pseudo-plane-parallel shock propagating into the cloud, and the prolate cloud evolves to a filamentary structure with a condensed surface layer on the star side and highly condensed core(s) at the apex (when $\varphi \ne 0{}^{\circ}$) or at either the apex or the two ends (when $\varphi = 0{}^{\circ}$). 
For the latter case, the evolution has been analysed in detail by \citet{Kinnear2014-1}. \begin{figure*} \includegraphics[width=0.85\textwidth]{figure6.pdf} \caption{The final snapshots of the column density from six simulations of G1200(5) with different initial inclination angles; left to right, then top to bottom: 0, 15, 30, 60, 75 and 90${}^{\circ}$.} \label{col-den-1200-all} \end{figure*} When $ 45 {}^{\circ} \le \varphi < 90{}^{\circ}$, the EUV radiation illuminates not only the surface of the front semi-ellipsoid but also the top part of the back semi-elliptical surface, so that the shock induced by RDI creates a condensed and curved surface layer, with the apex as the convergence point of the two compressed curved surface layers on opposing sides of the apex. The region around the apex is also the site for the formation of high density core(s). These clouds evolve to asymmetrical BRCs of different morphologies. The morphological sequence is similar to that of the cloud with $\varphi = 45{}^{\circ}$, as shown in Figure \ref{col-den-1200-45}. Comparing the three final morphologies of the simulations at 45, 60 and 75\deg\ in Figures \ref{col-den-1200-45} and \ref{col-den-1200-all}, we can see that as the inclination angle increases, the final structure of the asymmetrical BRC becomes increasingly extended in both the head and the tail. This is because the effective EUV illumination area decreases with $\varphi$, as described by Equation \ref{eff_area}, so the total volume of the cloud affected by the RDI induced shock becomes smaller. Consequently the original prolate shape of the cloud is less distorted. When $\varphi = 90{}^{\circ} $, the cloud G1200(5) develops into a symmetrical type B/C BRC, but with a wide tail structure whose widest part is defined by the initial semi-minor axis. 
The morphology developed is different from that of a standard type B/C BRC structure, which has a much narrower tail, if one assumes that the progenitor cloud was initially spherical \citep{LeflochLazareff1994-1, KesselBurkert2003-1, MiaoEtAl2006-1, MiaoEtAl2009-1}. Observations have also reported several type B/C BRCs with wide-tail morphological structures, e.g. SFO74 \citep{Thompson-2004,Kusune2014-1}. In summary, a variety of non-standard morphological structures can be obtained by changing the inclination of cloud G1200(5): at low inclination angles, linear fragment-core structures form; at intermediate inclinations, asymmetrical type B and C structures; and as the inclination approaches 90${}^{\circ}$, these return to symmetrical type B and C structures. \subsubsection{The location of high density cores} Figure \ref{dis-x-1200-r2-all} illustrates the distribution of the $x$-displacements of the condensed cores formed in the 7 simulations with different initial inclination angles $\varphi$. A point to note is that, where cores form in very close proximity, their positions cannot be distinguished on the scale used, regardless of the marker size. It can be seen that when $\varphi > 0{}^{\circ} $, core(s) always form on the positive $x$ side, and the change of the $x$ displacement with $\varphi$ follows the trend of $x_{\mathrm{p}}$ with $\varphi$. This illustrates an extremely strong preference for high density core(s) to form around the apex $x_{\mathrm{p}}$ in each simulation. \begin{figure} \center \includegraphics[width=0.5\textwidth]{figure7.pdf} \caption{The $x$ displacement of the cores formed in G1200(5) with different initial inclination angles. 
The short `-' indicates $x_{\mathrm{max}}$, `+' $x_{\mathrm{p}}$, `$\circ$' for the cores with $10^6 \le n_{\mathrm{peak}} \le 10^{12}$ $n_{\mathrm{H}_2}$ cm${}^{-3}$, and `$\bullet$' for the extremely high density cores with $n_{\mathrm{peak}} > 10^{12}$ $n_{\mathrm{H}_2}$ cm${}^{-3}$.} \label{dis-x-1200-r2-all} \end{figure} \begin{figure} \center \includegraphics[width=0.43\textwidth]{figure8.pdf} \caption{The evolution of maximum density of the core(s) in the cloud G1200(5) with different initial inclination angles.} \label{max_den_with_time_plot} \end{figure} \begin{figure} \center \includegraphics[width=0.45\textwidth]{figure9.pdf} \caption{The variation of the total core mass of the cloud G1200(5) over different initial inclination angles. The symbols and their represented properties of cores are shown in the legend box.} \label{total_core_mass_E06-08} \end{figure} \subsubsection{RDI triggered core formation efficiency} In order to examine the effect of varying inclination angle on the efficiency of RDI triggered high density core formation, we investigate the angle dependence of the core formation time and the accumulated core mass. Figure \ref{max_den_with_time_plot} shows the evolution of the maximum density in the 7 simulations with different inclination angles. In general, the evolution of $n_{\mathrm{max}}$ in each simulation passes through three different phases: quasi-linear, quasi-stable and steeply rising, although the time of each phase varies with inclination angle $\varphi$. These phases may correspond to the different stages of RDI triggered high density core formation. Beginning with initial compression by an RDI induced shock, mass accumulates to form the condensed region, which finally collapses to form high density core(s). The time for a simulation to reach the highest density (high density core formation) decreases with inclination angle, from 0.19 down to 0.086 Myr for simulations with $\varphi = 0$ and $90{}^{\circ}$ respectively. 
This is because the curvature of the EUV radiation illuminated surface around the apex of the prolate cloud increases with $\varphi$, which is beneficial for mass accumulation towards a geometric focus underneath the curved surface. This decreases the path length to the focus, and so also the time for accumulating mass around it and triggering gravitational instability of the condensed region; albeit with less material having been `swept up' by the front. Due to this shortening path length, the total mass of the high density core(s) is therefore expected to decrease with $\varphi$. The relation between the total high density core mass and inclination angle $\varphi$ is shown in Figure \ref{total_core_mass_E06-08}, which reveals that the total high density ($ n \ge 10^6 $ $n_{\mathrm{H}_2}$ cm${}^{-3}$) core mass is about 4.1 M$_{\odot}$\ in the simulation with $\varphi = 0$, and then decreases with $\varphi$, falling to 0.4 M$_{\odot}$\ at $\varphi = 90{}^{\circ}$. This is consistent with the above expectation. However the total mass of the extremely high density core(s) ($n > 10^8$ $n_{\mathrm{H}_2}$ cm${}^{-3}$) does not show a similarly clear relation with $\varphi$. This may be for several reasons. Firstly, as the simulation is curtailed before the end of all of the dynamical processes, extremely high density cores may be on the cusp of formation, but have not yet had the opportunity. Secondly, a large degree of random variation is expected with such gravitational instability-based collapse. Accumulating mass in compact high density regions can be fairly consistent, but the collapse of these regions or parts of these regions to extremely high densities through gravitational instability is chaotic and varies strongly with the exact configuration of the dynamics.
From the above results, it is understood that as a cloud is rotated from $\varphi = 0$ to $90{}^{\circ}$, the morphology of the cloud varies from fragment-core filamentary structures to varied asymmetrical BRCs, then to standard symmetrical BRCs as $\varphi$ approaches $90{}^{\circ}$. At high $\varphi$, RDI triggered star formation occurs earlier with lower masses at high densities. However, for the sampling performed, such a pattern is not obvious for the extremely high density core formation, which appears to be chaotic. Next we look at the effect of the initial shape of an inclined prolate cloud on its final morphological structure and RDI triggered star formation efficiency. \subsection{Evolution of G1200 series with ${\bf 1.0 \le \gamma \le 8.0}$} The initial shape of the clouds investigated here is defined by the ratio of the major to the minor axis, which is in the range $ 1.0 \le \gamma \le 8.0 $. When the initial density and total mass are held constant, the major axis changes with $\gamma$. The inclination angle is kept at 45${}^{\circ}$. The ionizing radiation penetration depth ranges from 0.225 to 0.320\% as listed in Table \ref{G1200(5)-angle-45}. \subsubsection{Morphology of final structures} The dynamical evolution of these clouds is also qualitatively similar to that discussed in Section \ref{G1200_5}. We focus on the analysis of the effect of the initial shape of a cloud on the final morphology, the location(s) of the condensed core(s) and the total core masses. Figure \ref{col-den-45-all-gamma} presents six simulation results of cloud G1200 with different initial major to minor axis ratios $\gamma = $1.0, 1.5, 2.0, 2.5, 3.0, 5.0, selected from the simulation results for 19 different initial clouds, described in Table \ref{G1200(5)-angle-45}.
With increasing $\gamma$, the final morphology changes from a standard (symmetrical) type A BRC ($\gamma = 1.0$), to an asymmetric type A BRC ($\gamma =1.5$), then to an asymmetric type B BRC ($\gamma = 2.0$), and then to an asymmetrical type C BRC ($\gamma = 2.5$). With further increase ($\gamma \ge 3.0$) only a filamentary structure can form. \begin{figure*} \includegraphics[width=0.85\textwidth]{figure10.pdf} \caption{The final snapshots of the column density from six simulations of G1200 with $\gamma$ equal to the values shown in the top-left corner of each panel. The inclination angle is 45${}^{\circ}$ for all of the initial clouds.} \label{col-den-45-all-gamma} \end{figure*} In these six simulations, high density cores form at the head of the structure except in the case of $\gamma = 2.5$, where the highly condensed core is embedded in the small `nose' structure just outside the head. It is interesting to notice that the morphological structure of SFO46 (CG1) from observations \citep{harju-1990, haikala-2010, Makela-2013} bears a similar feature to that of $\gamma = 2.5$. We leave the discussion of the formation mechanism of this morphology to Section \ref{diff-initial-den-flux}, where we present and analyse more results of various kinds of morphological structures. It is also emphasised that we do not intend to compare the detailed physical structures between simulation and observation of SFO46 in this paper, but only to point out the link between the interesting features of these simulations and such observations. Detailed comparison of physical properties requires further simulations, and will be addressed in a subsequent paper. \subsubsection{Displacement of the high density cores} \begin{figure} \center \includegraphics[width=0.45\textwidth]{figure11.pdf} \caption{The $x$ displacement of the condensed cores in the G1200 series with an inclination of $\varphi = 45{}^{\circ}$ for varied $\gamma$.
The short `-' indicates $x_{\mathrm{max}}$, `+' $x_{\mathrm{p}}$, `$\circ$' for the cores with $10^6 \le n_{\mathrm{peak}} \le 10^{12}$ $n_{\mathrm{H}_2}$ cm${}^{-3}$, and `$\bullet$' for the extremely high density cores with $n_{\mathrm{peak}} > 10^{12}$ $n_{\mathrm{H}_2}$ cm${}^{-3}$.} \label{disp_norm_den_1200_ang_45} \end{figure} Figure \ref{disp_norm_den_1200_ang_45} displays the $x$-displacement of condensed core(s) with changing geometry, based on the simulation results of 19 clouds with different $\gamma$. With increasing $\gamma$, $x_{\mathrm{p}}$ moves further in the $+x$ direction. The condensed cores follow the same pattern. A very clear trend is exhibited of core formation occurring at or around the apex for all ratios. \subsubsection{The total core mass} \begin{figure} \center \includegraphics[width=0.45\textwidth]{figure12.pdf} \caption{Total mass of condensed cores in the cloud G1200 series at an inclination of 45${}^{\circ}$ for varied axial ratio $\gamma$.} \label{mass_r2_1200_all_gamma_criteria} \end{figure} The trend of the total mass of both the high density core(s) (empty circles) and the extremely high density cores (solid black circles) is a decrease with $\gamma$. This is illustrated in Figure \ref{mass_r2_1200_all_gamma_criteria}. The total mass of high density cores has an approximate pattern of a maximum mass of 2.0 M$_{\odot}$\ at $\gamma \approx 2.25$ (with an outlier as high as almost 3.0 M$_{\odot}$\ at $\gamma = 2.75$) and a minimum of 0.6 M$_{\odot}$\ at $\gamma = 8$ (with an unusually low value of only around 0.2 M$_{\odot}$\ for $\gamma = 6.0$). Clouds with lower ellipticity appear to have a higher probability of collecting greater mass in the final cores. This can be expected as the mass per unit length along the major axis decreases with $\gamma$. As a result, the amount of mass able to collapse toward the apex region decreases, which is similar to the case when $\varphi = 0{}^{\circ}$ \citep{Kinnear2014-1}.
When $\varphi \ne 0{}^{\circ}$, due to the wide range of variation in the final morphologies of the clouds, the amplitude of the fluctuation is similarly increased. From the same figure, we can see that the mass of the extremely condensed cores (solid black circles) in a cloud also decreases with $\gamma$, in a similar way to the high density cores. The mass of these cores corresponds to the precursor mass for possible proto-stars forming later in the evolution of the cloud, having a maximum of $\approx 1.6$ and a minimum of $\approx 0.2$ M$_{\odot}$. \subsection{Morphological evolution of clouds of different initial density and under different ionizing flux} \label{diff-initial-den-flux} \begin{table} \centering \begin{tabular}{lcccc} \cline{1-5} Index & $n$ & $a$ & $\ensuremath{d_{\mathrm{EUV}}}(F_1)$ & $\ensuremath{d_{\mathrm{EUV}}}(F_2)$\\ & $n_{\mathrm{H}_2}$ cm${}^{-3}$ & pc & \% & \% \\ G100 & 100 & 1.797 & 11.779 & 23.558 \\ G200 & 200 & 1.427 & 3.708 & 7.416 \\ G400 & 400 & 1.132 & 1.17 & 2.34 \\ G600 & 600 & 0.989 & 0.594 & 1.188 \\ G700 & 700 & 0.940 & 0.460 & 0.920 \\ G800 & 800 & 0.899 & 0.368 & 0.736 \\ G1000 & 1000 & 0.834 & 0.254 & 0.508 \\ G1200 & 1200 & 0.784 & 0.187 & 0.374 \\ \cline{1-5} \end{tabular} \caption{Parameters of molecular clouds of mass 30 M$_{\odot}$, an inclination angle $\varphi = 60{}^{\circ}$, $\gamma =2$, and $a_{\mathrm{crit}} = 0.079$ (pc).
From left to right, the columns are the cloud index, initial number density, the major axis and \ensuremath{d_{\mathrm{EUV}}}\ defined by Equation \ref{pene-depth}, with the ionization fluxes $F_1$ and $F_2$ being $1.0 \times 10^{9}$ and $ 2.0 \times 10^{9}$ cm$^{-2}$ s$^{-1}$ respectively.} \label{clouds-diff-n} \end{table} After the investigation of the effects of both the initial shape and the inclination angle of a prolate cloud on its RDI triggered dynamical evolution, we now explore the consequences of changing the initial density of a cloud and the ionizing radiation flux on the morphological development of a cloud of mass 30 M$_{\odot}$. This is essentially to examine the consistency of the \ensuremath{d_{\mathrm{EUV}}}\ parameter in characterising a wider variety of cloud parameters. A sequence of 8 different initial densities is chosen as 100, 200, 400, 600, 700, 800, 1000, and 1200 $n_{\mathrm{H}_2}$ cm${}^{-3}$. Two different ionization fluxes of $F_1 = 1.0 \times 10^{9}$ and $F_2 = 2.0 \times 10^{9}$ cm$^{-2}$ s$^{-1}$ are applied, for a total of 16 simulations. An inclination angle of $60{}^{\circ}$ and $\gamma = 2.0$ is used in all cases, because the simulation of G1200(5) exhibited the formation of the `nose' structure in the vicinity of these geometries. As such, we can also expect to gain some insight into how the `nose' structure develops and changes with the initial properties of a cloud in this set of simulations. The corresponding parameters of the clouds are listed in Table \ref{clouds-diff-n}. It is seen that the change in EUV radiation penetration depth over the initial density range ($100 \le n \le 1200$ $n_{\mathrm{H}_2}$ cm${}^{-3}$) is almost two orders of magnitude, a much more dramatic variation than from changing $\gamma$ or $\varphi$.
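As a check on the tabulated values, note that each $F_2$ entry in Table \ref{clouds-diff-n} is exactly double its $F_1$ counterpart, and the $F_1$ entries are consistent with a single power law in $n$. The following sketch is our own illustration, not part of the simulation code: the $F\,n^{-5/3}$ scaling is inferred by combining an ionization balance $F \propto n^2 d$ with the fixed-mass relation $a \propto n^{-1/3}$, and it reproduces every tabulated \ensuremath{d_{\mathrm{EUV}}}\ value to better than one per cent.

```python
# Hypothetical scaling check (not from the paper's code): if the absolute
# penetration depth d satisfies an ionization balance F ~ alpha * n^2 * d,
# and the semi-major axis of a fixed-mass cloud scales as a ~ n^(-1/3),
# then the relative depth d_EUV = d / (2a) should scale as F * n^(-5/3).
table_F1 = {100: 11.779, 200: 3.708, 400: 1.17, 600: 0.594,
            700: 0.460, 800: 0.368, 1000: 0.254, 1200: 0.187}

def d_euv(n, flux_ratio=1.0, n_ref=100, d_ref=11.779):
    """Relative EUV penetration depth (%) under the assumed F * n^(-5/3)
    scaling, normalised to the tabulated G100(F_1) value."""
    return d_ref * flux_ratio * (n_ref / n) ** (5.0 / 3.0)

# The predicted values agree with the Table entries to better than 1%.
for n, d_tab in table_F1.items():
    assert abs(d_euv(n) - d_tab) / d_tab < 0.01
```

The factor-of-two between the $F_2$ and $F_1$ columns then follows immediately from `flux_ratio=2.0`.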
\begin{figure*} \includegraphics[width=0.95\textwidth]{figure13.pdf} \caption{The final snapshot of the column density of clouds of initial mass 30 M$_{\odot}$, $\gamma = 2.0$ and $\varphi = 60 {}^{\circ}$, but different initial densities as shown in the top-left corner of each panel. The ionization flux is 10$^{9}$ cm$^{-2}$ s$^{-1}$.} \label{density_differences_flux_1e9} \end{figure*} \begin{figure*} \includegraphics[width=0.95\textwidth]{figure14.pdf} \caption{The final snapshot of the column density of the same 8 clouds as in Figure \ref{density_differences_flux_1e9} (initial mass 30 M$_{\odot}$, $\gamma = 2.0$ and $\varphi = 60 {}^{\circ}$) but with increased incident flux of $2 \times 10^{9}$ cm$^{-2}$ s$^{-1}$.} \label{density_differences_flux_2e9} \end{figure*} \subsubsection{An overview of the final morphological structures} Figure \ref{density_differences_flux_1e9} displays the final snapshots of the column density of the 8 simulations with ionization flux $F_1$. The morphology of the final structure of a cloud changes from an asymmetrical type C BRC to a filamentary structure and then to an irregular structure, as the initial density is decreased from 1200 to 100 $n_{\mathrm{H}_2}$ cm${}^{-3}$, which leads to an increase of \ensuremath{d_{\mathrm{EUV}}}\ from 0.187 to 11.779\%. As shown in the top row panels, an asymmetrical type C BRC is developed in G1200, G1000, G800, and G700, with \ensuremath{d_{\mathrm{EUV}}}\ of 0.187, 0.254, 0.368 and 0.460\% respectively. Although they all bear an asymmetric type C BRC morphology, the minor axis becomes narrower with increasing \ensuremath{d_{\mathrm{EUV}}}, as more cloud material is photo-evaporated from their star facing surfaces. When the initial density $n = 600$ $n_{\mathrm{H}_2}$ cm${}^{-3}$\ ($\ensuremath{d_{\mathrm{EUV}}} = 0.594$\%), the prolate cloud evolves into a filamentary structure, as shown in the first panel in the bottom row of Figure \ref{density_differences_flux_1e9}.
Clouds of initial densities of 400, 200 and 100 $n_{\mathrm{H}_2}$ cm${}^{-3}$\ (\ensuremath{d_{\mathrm{EUV}}}\ = 1.17, 3.708 and 11.779 \% respectively) are all seen to form irregular structures. \begin{figure} \center \includegraphics[width=0.45\textwidth]{figure15.pdf} \caption{The core formation time of the $F_1$ and $F_2$ group clouds vs their initial densities.} \label{init_den_vs_core_time} \end{figure} \begin{figure} \center \includegraphics[width=0.45\textwidth]{figure16.pdf} \caption{The total mass of the cores formed in $F_1$ and $F_2$ group clouds vs their initial densities.} \label{core_mass_50_002_1e6} \end{figure} The final morphological structures from the simulations using the doubled ionization flux $F_2$ are presented in Figure \ref{density_differences_flux_2e9}. Similar morphological structure sequences to those seen in Figure \ref{density_differences_flux_1e9} are observed. An asymmetrical type C BRC is formed from G1200, G1000 and G800, a filamentary structure from G700, and irregular structures in G600, G400, G200 and G100. The only difference is that the morphological transition points shift to clouds of higher initial density compared to the $F_1$ simulations. The first transition from type C BRC to filamentary morphology occurs around G600($F_1$) and G700($F_2$), and the second transition from filamentary to irregular morphology is around G400($F_1$) and G600($F_2$). These appear to correspond to similar \ensuremath{d_{\mathrm{EUV}}}\ in each case, for both fluxes. The former transition occurs at $\ensuremath{d_{\mathrm{EUV}}} \approx 0.8$\% and the latter at $\ensuremath{d_{\mathrm{EUV}}} \approx 1.1$\%. This indicates that \ensuremath{d_{\mathrm{EUV}}}\ remains a characteristic parameter for the dynamical evolution of a cloud over the parameter space so far investigated.
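The two transition values can be condensed into a simple rule of thumb. The sketch below is our own paraphrase of the text, using the approximate boundaries quoted above; clouds lying close to a boundary, e.g. G600($F_1$) at 0.594\%, may fall on either side.

```python
# Rule-of-thumb morphology classifier based on the approximate transition
# values quoted in the text (d_EUV in per cent), for the 30 Msun,
# gamma = 2.0, phi = 60 deg series. The thresholds are indicative only:
# clouds near a boundary can end up on either side.
BRC_TO_FILAMENT = 0.8        # first transition:  type C BRC -> filamentary
FILAMENT_TO_IRREGULAR = 1.1  # second transition: filamentary -> irregular

def morphology(d_euv_percent):
    """Expected final morphology as a function of d_EUV (in per cent)."""
    if d_euv_percent < BRC_TO_FILAMENT:
        return "asymmetrical type C BRC"
    if d_euv_percent < FILAMENT_TO_IRREGULAR:
        return "filamentary"
    return "irregular"

# Examples consistent with Figures 13 and 14:
# G1200(F_1), 0.187% -> BRC; G700(F_2), 0.920% -> filamentary;
# G100(F_1), 11.779% -> irregular.
```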
Figures \ref{init_den_vs_core_time} and \ref{core_mass_50_002_1e6} present the core formation time and the total mass of the RDI triggered condensed cores vs the initial density of the clouds for the above two groups of simulated clouds, with black lines for the $F_1$ group and red for $F_2$. It is shown that the core formation time in both groups of clouds decreases with the initial density of a cloud. This is because of the stronger initial gravitational binding in the high density cloud. For the same reason, the total mass of the condensed cores follows an increasing trend with the initial density of the cloud. However the effect of the strength of the EUV radiation flux on the efficiency of triggered core formation vs initial density is twofold. When the initial density is below 600 $n_{\mathrm{H}_2}$ cm${}^{-3}$\ the clouds illuminated by the higher flux ($F_2$) take a longer time to form condensed cores, and collect slightly more mass in their condensed cores than the clouds illuminated by the lower flux ($F_1$). This trend is reversed when the initial density is higher than 600 $n_{\mathrm{H}_2}$ cm${}^{-3}$, i.e., clouds illuminated by the higher flux take less time to form condensed cores and collect less mass into their condensed cores than the clouds illuminated by the lower flux. Overall, the information delivered in the above two figures suggests that the RDI triggered star formation efficiency is much higher (shorter core formation time and higher total core mass) in molecular clouds of shorter \ensuremath{d_{\mathrm{EUV}}}\ (corresponding to higher initial density) in both the $F_1$ and $F_2$ group clouds. From the above two sets of simulations, we can see that when $\ensuremath{d_{\mathrm{EUV}}} \ge 1.0\%$, the deep penetration of the EUV radiation into the cloud causes irregular structure formation.
A closer look at the above two morphological variation sequences shows that with increasing \ensuremath{d_{\mathrm{EUV}}}, the tiny `nose' structure around the apex of the asymmetrical type C BRC gradually grows and becomes an irregular `horse-head' structure. In order to understand how the `nose' and `horse-head' structures form, we further investigate the evolutionary process of one of the above clouds, which presents a clear horse-head-like morphology. \subsubsection{Growing from `nose' to `horse-head' structure} We chose the cloud G400($F_2$) to analyse its evolutionary process due to its formation of a particularly distinctive `horse-head' structure. Figure \ref{column-den400} describes a six-snapshot column density evolutionary sequence of cloud G400($F_2$), which shows a growing process from a small `nose' to a `horse-head' morphological structure. The corresponding RDI induced shock velocity profiles for each of the snapshots are plotted in Figure \ref{velocity-evolution}, which reveals the kinematic mechanism for the morphology formation in G400($F_2$). From Figure \ref{column-den400}, we can see that a shocked layer forms at $t = 0.046$ Myr, which surrounds most of the surface of the front semi-ellipsoid and a small part of the top surface of the back semi-ellipsoid. The maximum magnitude of the shock velocity is $v_{\mathrm{s}} = 12$ km s$^{-1}$, calculated as the peak velocity of non-ionized gas within the condensed shock front. Most of the shocked layer surrounding the front semi-ellipsoid propagates into the cloud smoothly toward the ($+x$, $-z$) direction. Very close to the apex, the shock velocity of the front semi-ellipsoid is toward the direction of ($x$, $-z$), and that of the back semi-ellipsoid is ($-x$, $-z$); the gas material from the two sides of the apex converges beneath the apex to make a small `nose' structure at $t = 0.084$ Myr, as shown in the top-right panels of Figures \ref{column-den400} and \ref{velocity-evolution}.
While a small `nose' structure forms at the point ($x$, $z$) $\approx$ (0.2, 0.2) pc with $v_{\mathrm{x}} \approx 0$, below the nose structure the RDI induced shock keeps pushing the surface layer of the front semi-ellipsoid toward the ($+x$, $-z$) direction at the shock velocity. This makes the below-nose region of the BRC gradually develop a displacement along the $x$ direction relative to the nose. The nose becomes a short horizontal fragment at $t = 0.13$ Myr, in which the direction of the RDI induced shock velocity is mainly along the $z$ direction. The short horizontal fragment gradually forms a `hook' at $t = 0.17$ Myr, as shown in the lower middle panels of Figures \ref{column-den400} and \ref{velocity-evolution}. Further propagation of the shocked layer below the hook toward the ($+x$, $-z$) direction makes the `hook' structure grow to look like a `horse-head', as shown in the last panels of the above two figures. The final `horse-head' structure bears some resemblance to the well known Horsehead Nebula. It is seen from the above analysis that the appearance of a `horse-head' morphology in this instance is a consequence of growing a small `nose' structure. The `nose' structure forms if the gas around the apex of an ellipsoidal cloud converges beneath that apex due to a strong RDI induced shock effect, as described above. This requires that the cloud have a reasonably high inclination angle, i.e., $\varphi > 45{}^{\circ}$ in our simulations. However the formation of a `nose' structure does not necessarily lead to a `horse-head' structure. For example, a small `nose' formation is seen in the final morphologies of G700-1200 in the $F_1$ series and G800-1200 in the $F_2$ series. Whether the `nose' can grow into a `horse-head' also depends on the initial ionization penetration depth, i.e., $\ensuremath{d_{\mathrm{EUV}}} > 1.0$\%.
The latter is to guarantee a high enough shock velocity $\left(v_{\mathrm{s}} \sim \sqrt{\frac{F_{\mathrm{Lyman}}}{n}}\right)$ \citep{Bertoldi1989-1}, which can push the majority of the gas below the nose toward the ($+x$, $-z$) direction. \begin{figure*} \includegraphics[width=0.70\textwidth]{figure17.pdf} \caption{The column density evolution of cloud G400($F_2$), with an inclination angle of 60${}^{\circ}$.} \label{column-den400} \end{figure*} \begin{figure*} \includegraphics[width=0.70\textwidth]{figure18.pdf} \caption{The evolution of the RDI induced shock velocity of cloud G400($F_2$), overlaid on the corresponding cross-section density snapshot. The magnitude of the velocity is between 2 and 12 km s$^{-1}$.} \label{velocity-evolution} \end{figure*} \subsection{Development of other irregular morphologies} \label{diff-morphology} Further simulations were conducted to explore whether other irregular morphologies, different from a `horse-head', form as a consequence of the variation of the initial conditions of the cloud and the ionization flux. A series of 16 simulations of G400($F_2$) with different $\gamma = 2.0, 2.5, 3.0, 3.5$ and $\varphi = 60, 70, 80, 85{}^{\circ}$ was conducted. For all 16 clouds, $\ensuremath{d_{\mathrm{EUV}}} > 1.0$\%. In this section, we concentrate on the description of the morphological variations, as the underlying mechanisms appear to be similar, and they are of interest due to their variety rather than their exact evolution. \begin{figure*} \includegraphics[width=0.95\textwidth]{figure19.pdf} \caption{The final morphologies of the clouds G400($F_2$) of different initial geometry and inclination angle.
The axis scales vary between panels, because of the different initial scales of the major and minor axes for the clouds depicted.} \label{diff-morphology-N400} \end{figure*} The first column of panels in Figure \ref{diff-morphology-N400} presents the final morphologies of G400($F_2$) with $\gamma = 2$ and 4 different initial inclination angles, which show that when the initial angle $\varphi$ increases from 60${}^{\circ}$, the `horse-head' in the final structure becomes longer in the $z$ direction. The connection point between the `horse-head' and the below-horse-head part moves downward with $\varphi$. At 85${}^{\circ}$, the whole morphology is an `elephant trunk' like structure. This is because the `nose' structure formed in the first tens of kyr of the evolution increasingly stretches along the $z$ axis with increasing $\varphi$. The first row of panels in Figure \ref{diff-morphology-N400} describes the morphology change of cloud G400($F_2$) with inclination angle $\varphi =60{}^{\circ}$ and varied $\gamma$. When the initial prolate cloud becomes more elliptical, in addition to a `horse-head' structure, the RDI induced shock also triggers linear gravitational instability along the direction of the major axis in some clouds. The 16 panels in Figure \ref{diff-morphology-N400} present 16 different final morphologies which appear in the clouds with the initial ($\gamma, \varphi$) combinations stated above. We can conclude that the larger the inclination angle, the longer the `horse-head' sub-structure; and that the more elliptical the cloud initially, the thinner the final morphology, although, as in previous instances, the structural details differ across the range of $\gamma$. Through viewing more simulation results obtained using different combinations of ($F_{\mathrm{Lyman}}, n, \gamma, \varphi$), we find that the variety of the final morphology is extensive for $\ensuremath{d_{\mathrm{EUV}}} > 1.0$\%, although it is impossible to show all of them in this paper.
\section{Conclusions} The results from the simulations conducted show that uniform density prolate clouds at an inclination to a plane parallel EUV radiation field can evolve to a variety of morphological structures often found at various H\hspace{1pt}{\sc ii}\ boundaries. The EUV radiation penetration depth, \ensuremath{d_{\mathrm{EUV}}}, of a given cloud can be a good indicator as to whether it will evolve to a regular BRC or to irregular structures. The final morphology of the clouds of initial density ($1200$ $n_{\mathrm{H}_2}$ cm${}^{-3}$) and ellipticity ($\gamma = 2$), with $\ensuremath{d_{\mathrm{EUV}}} < 1.0$\%, changes from a filament to an asymmetrical BRC, then to a symmetrical BRC, as the inclination angle $\varphi$ increases from $0$ to $90{}^{\circ}$. The core formation time and the total mass of the RDI triggered condensed cores both decrease with $\varphi$. Therefore EUV radiation triggered star formation is quicker in high $\varphi$ prolate clouds, but the total mass of the triggered stars is higher for low $\varphi$ clouds. For clouds of fixed initial density ($1200$ $n_{\mathrm{H}_2}$ cm${}^{-3}$) and inclination angle ($\varphi = 45{}^{\circ}$), as $\gamma$ increases from 1 to 8, the final morphology changes from a symmetrical to an asymmetrical BRC, then to a filamentary structure. The RDI triggered star formation efficiency decreases with the increase of $\gamma$, due to a greater quantity of gas material being evaporated from the larger cross sectional area presented to the incident radiation. It is found that the final morphology of a prolate cloud is very sensitive to the initial density. For fixed initial shape ($\gamma = 2.0$) and fixed inclination angle ($\varphi = 60 {}^{\circ}$), the morphology of a cloud changes from an asymmetric type C BRC to a filament and then to irregular structures, such as a `horse-head', as the initial density decreases from 1200 to 100 $n_{\mathrm{H}_2}$ cm${}^{-3}$.
EUV radiation triggered star formation efficiency is found to be higher in regular BRCs (with $\ensuremath{d_{\mathrm{EUV}}} < 1.0$\%) than in irregular structures (with $\ensuremath{d_{\mathrm{EUV}}} > 1.0$\%). Based on the above systematic investigation of the evolution of molecular clouds (or clumps) at an H\hspace{1pt}{\sc ii}\ boundary, we can now suggest a unified formation mechanism for the different morphological structures seen in the published images of a variety of H\hspace{1pt}{\sc ii}\ regions: giant molecular clouds (GMCs) are clumpy, and the numerous clumps are of different shapes, from spherical to highly elliptical, with their major axes not exhibiting a strong preference for any particular alignment within their environment. When a star or a cluster of stars forms inside a GMC, the interaction between the ionizing radiation and the surrounding molecular clumps of different shapes and orientations creates structures of various morphologies. The spherical and prolate clumps aligned with the radiation flux form standard type BRCs; the inclined prolate clumps with a shallow ionizing radiation penetration depth form asymmetrical BRCs or inclined linear structures; and the inclined prolate clumps with a deep radiation penetration depth form a variety of irregular structures, of which the well known `horse-head' structure is only one. In our next paper, we are going to investigate the evolution of the physical properties of a few observed asymmetrical BRCs and the RDI triggered star formation inside them. \section{Acknowledgement} Timothy Kinnear thanks the University of Kent for providing his PhD studentship.
\section{Introduction} In this work we consider the initial value problem (IVP) for the $L^2$ critical inhomogeneous nonlinear Schr\"odinger (INLS) equation \begin{equation} \begin{cases} i \partial_t u + \Delta u + |x|^{-b} |u|^{\frac{4-2b}{N}}u = 0, \,\,\, x \in \mathbb{R}^N, \,t>0,\\ u(0) = u_0 \in H^1(\mathbb{R}^N), \end{cases} \label{PVI} \end{equation} where $N\geq 1$ and $0<b<2$. For $b=0$, \eqref{PVI} reduces to the IVP associated to the classical nonlinear Schr\"odinger (NLS) equation. This model is called $L^2$ critical since the scaling symmetry $u(x,t)\mapsto \lambda^{\frac{N}{2}}u(\lambda x, \lambda^2 t)$ leaves the $L^2$ norm invariant. The local well-posedness of the IVP \eqref{PVI} was obtained by \citet[Appendix K]{GENSTU} (see also \citet{CARLOS}). Moreover, \citet{G12} also proved that this problem is globally well-posed below the ground state threshold. The solutions of the IVP \eqref{PVI} satisfy mass and energy conservation laws given respectively by \begin{equation}\label{Mass} M[u(t)] =\int |u(x,t)|^2\, dx =M[u_0] \end{equation} and \begin{equation}\label{Energy} E[u(t)] =\frac12 \int |\nabla u(x,t)|^2\, dx - \frac{N}{4-2b+2N}\int |x|^{-b}|u(x,t)|^{\frac{4-2b}{N}+2}\,dx=E[u_0]. \end{equation} Our main result is the following. \begin{thm}\label{Blowup} Let $N\geq 1$ and $0<b<2$. If $u_0\in H^1(\mathbb{R}^N) $ and $E[u_0]< 0,$ then the corresponding solution $u(t)$ to \eqref{PVI} blows up in finite time. \end{thm} The previous theorem was first obtained by \citet{ogawa1991blow} for the classical radial NLS equation when $N\geq 2$ and, combining this with a scaling argument, the same authors in \cite{OT91PAMS} were able to improve the result to the non-radial NLS equation in dimension one. Applying the same ideas, \citet{dinh2017blowup} extended these results to the INLS model under identical conditions: radial for $N\geq 2$ and non-radial for $N=1$.
Here we refine the argument of \citet{ogawa1991blow} to treat the non-radial INLS equation in all spatial dimensions, without relying on the scaling argument of \cite{OT91PAMS}. We should point out that Theorem \ref{Blowup} is still unknown for the classical NLS equation in the non-radial case when $N\geq 2$. Recently, several papers reported results for the non-radial INLS equation that so far can only be obtained for the radial NLS equation \cite{BL21}, \cite{CC21}, \cite{CF21}, \cite{GM21} and \cite{M21}. The present paper is another contribution in this direction. The new tool in the INLS setting is the decaying factor $|x|^{-b}$, which provides control, away from the origin, of the terms arising from the nonlinearity. For the NLS equation this type of control is usually achieved by an application of a radial Sobolev embedding due to \citet[Lemma 1]{Strauss}. This paper is organized as follows. In Section \ref{sec2}, we introduce the basic notation and establish a non-radial interpolation estimate. The last section is devoted to the proof of Theorem \ref{Blowup}. \section{Notation and Preliminaries}\label{sec2} In this section we introduce the basic notation used throughout the manuscript. The symbol $c$ will denote various positive constants whose exact values are not essential in our analysis. We write $a \lesssim b$ to denote $a \leq c\,b$ for some positive constant $c$. Similarly we define $a \gtrsim b$. The spaces $L^p(\mathbb{R}^N)$ and $H^{1}(\mathbb{R}^N)$ will be abbreviated as $L^p$ and $H^{1}$, with the norms denoted by $\|\cdot\|_p$ and $\|\cdot\|_{H^{1}}:=\|\cdot\|_{2}+\|\nabla \cdot\|_{2}$, respectively. We also consider the functional space $W^{1,\infty}=\{f\in L^{\infty}: \nabla f\in L^{\infty}\}$. Next, we obtain an interpolation estimate that will be very useful in the proof of our main result. \begin{lemma}[Non-radial interpolation estimate] Let $N\geq 1$, $0<b <2$ and let $\phi$ be a positive real valued function.
\begin{itemize} \item If $N\neq 2$ and $\phi^{\frac{1}{2-b}} \in W^{1,\infty}$, then for all $f\in H^1$ we have \begin{equation}\label{Interp1} \int \phi|f|^{\frac{4-2b}{N} +2}dx\lesssim \left(\| \nabla \left(\phi^{\frac{1}{2-b}}\right) f \|_2+\| \phi^{\frac{1}{2-b}} \nabla f\|_2\right)^{2-b}\|f\|_2^{\frac{4+b(N-2)}{N}}. \end{equation} \item If $N= 2$ and $\phi^{\frac{1}{2-\frac{b}{2}}} \in W^{1,\infty}$, then for all $f\in H^1$ we have \begin{equation}\label{Interp2} \int \phi|f|^{4-b}dx\lesssim \left(\| \phi^{\frac{1}{2-\frac{b}{2}}} f\|_2+\| \nabla (\phi^{\frac{1}{2-\frac{b}{2}}}) f \|_2+\| \phi^{\frac{1}{2-\frac{b}{2}}} \nabla f\|_2\right)^{2-\frac{b}{2}}\|f\|_2^{2-\frac{b}{2}}. \end{equation} \end{itemize} \end{lemma} \begin{proof} For $N\geq 3$, we apply the H\"older inequality and the Sobolev embedding to obtain \begin{eqnarray*} \int \phi|f|^{\frac{4-2b}{N} +2}dx&=&\int (\phi|f|^{2-b})(|f|^{\frac{4-2b}{N} +b})dx\\ &\lesssim & \|\phi |f|^{2-b}\|_{\frac{2N}{(N-2)(2-b)}}\||f|^{\frac{4-2b}{N} +b}\|_{\frac{2N}{4+b(N-2)}}\\ &=& \|\phi^{\frac{1}{2-b}}f\|^{2-b}_{\frac{2N}{N-2}}\|f\|^{\frac{4+b(N-2)}{N} }_{2}\\ &\lesssim & \|\nabla (\phi^{\frac{1}{2-b}}f)\|^{2-b}_{2}\|f\|^{\frac{4+b(N-2)}{N} }_{2}, \end{eqnarray*} which implies the desired inequality. Next, when $N=1$, we first claim that \begin{equation}\label{OTN1} \|\phi^{\frac{1}{4-2b}}f\|_{\infty}\lesssim \|f\|^{1/2}_2\left(\|\left(\phi^{\frac{1}{2-b}}\right)' f\|_2+\|\phi^{\frac{1}{2-b}} f'\|_2\right)^{1/2}.
\end{equation} Indeed, by an approximation argument we may assume that $f$ has compact support and therefore \begin{eqnarray*} \phi^{\frac{1}{2-b}}f^2(x)&=&\frac{1}{2}\left(\int_{-\infty}^{x}\left(\phi^{\frac{1}{2-b}}f^2\right)'ds+\int_{x}^{+\infty}\left(\phi^{\frac{1}{2-b}}f^2\right)'ds \right)\\ &\lesssim& \int \left(\phi^{\frac{1}{2-b}}\right)'f^2ds +\int \phi^{\frac{1}{2-b}} f' fds \\ &\lesssim & \|f\|_2\left(\|\left(\phi^{\frac{1}{2-b}}\right)'f\|_2+\|\phi^{\frac{1}{2-b}} f'\|_2\right) \end{eqnarray*} and \eqref{OTN1} is proved. Using this inequality we have \begin{eqnarray*} \int \phi|f|^{4-2b +2}dx&=&\int \left|\phi^{\frac{1}{4-2b}}f\right|^{4-2b}|f|^{2}dx\\ &\lesssim & \|f\|^{2-b}_2\left(\| \left(\phi^{\frac{1}{2-b}}\right)' f\|_2+\| \phi^{\frac{1}{2-b}} f'\|_2\right)^{2-b} \|f\|^2_{2}, \end{eqnarray*} so \eqref{Interp1} also holds in this case. Finally, we consider the case $N=2$ and use the following Sobolev embedding (see, for instance, \citet[Proposition 4.18]{DD12}) \begin{equation}\label{SEI1} \|f\|_{L^r}\leq c\|f\|_{H^{1}}, \,\, \mbox{for all}\,\, r\in[2,+\infty), \end{equation} together with the Hölder inequality to obtain \begin{eqnarray*} \int \phi|f|^{4-b}dx&=&\int |\phi^{\frac{1}{2-\frac{b}{2}}}f|^{2-\frac{b}{2}}|f|^{2-\frac{b}{2}}dx\\ &\lesssim & \||\phi^{\frac{1}{2-\frac{b}{2}}}f|^{2-\frac{b}{2}}\|_{\frac{4}{b}} \||f|^{2-\frac{b}{2}}\|_{\frac{2}{2-\frac{b}{2}}}\\ &= &\|\phi^{\frac{1}{2-\frac{b}{2}}}f\|_{\frac{8-2b}{b}}^{2-\frac{b}{2}}\|f\|_{2}^{2-\frac{b}{2}}\\ &\lesssim & \|\phi^{\frac{1}{2-\frac{b}{2}}}f\|_{H^1}^{2-\frac{b}{2}}\|f\|_{2}^{2-\frac{b}{2}}. \end{eqnarray*} Thus, from the definition of the $H^1$-norm we deduce inequality \eqref{Interp2}.
\end{proof} \begin{rem} The classical Gagliardo-Nirenberg inequality (see, for instance, inequality (I.2) in \citet{W83}) $$ \int | f|^{2\sigma+2}dx\leq C \|\nabla f\|^{N\sigma}_2\|f\|_2^{2+\sigma(2-N)}, \quad \mbox{if} \quad 0<\sigma<\frac{2}{N-2} $$ implies, for $\sigma = \frac{2-b}{N}$ and assuming $\phi^{\frac{1}{\frac{4-2b}{N}+2}} f\in H^1$, that \begin{equation}\label{Interp3} \int \phi | f|^{\frac{4-2b}{N}+2}dx\leq C \|\nabla (\phi^{\frac{1}{\frac{4-2b}{N}+2}} f)\|^{2-b}_2\|\phi^{\frac{1}{\frac{4-2b}{N}+2}} f\|_2^{\frac{4+b(N-2)}{N}}. \end{equation} The main difference between inequalities \eqref{Interp1}-\eqref{Interp2} and \eqref{Interp3} is the power of the function $\phi$. As we will see later, to prove our main result we need this power to be greater than $1/2$ and therefore inequality \eqref{Interp3} is not enough to close the argument. \end{rem} \section{The proof of Theorem \ref{Blowup}}\label{sec3} Let $u_0\in H^1$ be such that $E[u_0]<0$ and assume by contradiction that the corresponding solution $u(t)$ of \eqref{PVI} exists globally in time. For a bounded non-negative radial function $\phi\in C^{\infty}(\mathbb R^N)$, define $\phi_R(x)=R^2\phi\left(\frac{x}{R}\right)$ and \begin{align}\label{virial} z_R(t)=\displaystyle\int\phi_R|u(t)|^2\,dx, \end{align} for $R>0$ to be chosen later. It is clear that $$ z_R(t)\leq R^2\|\phi\|_{\infty}\|u_0\|_2^2, $$ by the mass conservation \eqref{Mass}.
From direct computations (see, for instance, Proposition 7.2 in \cite{FG20}), we have the following virial identities \begin{equation}\label{zR'2} z_R'(t)=2\mbox{Im}\int \nabla\phi_R\cdot\nabla u(t)\overline{u}(t)\,dx \end{equation} and \begin{align}\label{zR''22} z_R''(t)=&4\mbox{Re} \sum_{j,k=1}^{N}\int \partial_ju(t)\,\partial_k\overline u(t)\,\partial^2_{jk}\phi_R\,dx-\int |u(t)|^2 \Delta^2\phi_R\,dx\nonumber\\ &-\frac{4-2b}{N+2-b}\int|x|^{-b}|u(t)|^{\frac{4-2b}{N}+2}\Delta\phi_R\,dx\nonumber\\ &+\frac{2N}{N+2-b}\int\nabla\left(|x|^{-b}\right)\cdot \nabla\phi_R|u(t)|^{\frac{4-2b}{N}+2}\,dx. \end{align} Recall that $$ \partial_j=\frac{x_j}{r}\partial_r \,\,\, \mbox{and}\,\,\, \partial^2_{kj}=\left(\frac{\delta_{kj}}{r}-\frac{x_kx_j}{r^3}\right)\partial_r + \frac{x_kx_j}{r^2}\partial^2_r $$ where $\partial_r$ denotes the radial derivative with respect to $r=|x|$. From these relations, since $\phi$ is radial, we deduce \begin{eqnarray*} \sum_{j,k=1}^{N}\partial_ju\,\partial_k\overline u\,\partial^2_{jk}\phi_R&=& \sum_{j,k=1}^{N}\partial_ju\,\partial_k\overline u\left[\left(\frac{\delta_{kj}}{r}-\frac{x_kx_j}{r^3}\right)\partial_r\phi + \frac{x_kx_j}{r^2}\partial^2_r\phi\right]\\ &=& \frac{\partial_r\phi}{r}|\nabla u|^2+\left(\frac{\partial_r^2\phi}{r^2}-\frac{\partial_r\phi}{r^3}\right)\left(\sum_{j,k=1}^{N} (x_j\partial_ju)\,\overline{(x_k\partial_ku)}\right)\\ &=& \frac{\partial_r\phi}{r}|\nabla u|^2+\left(\frac{\partial_r^2\phi}{r^2}-\frac{\partial_r\phi}{r^3}\right)|x\cdot \nabla u|^2. \end{eqnarray*} Moreover, it is easy to see that $$ \Delta \phi = \frac{N-1}{r}\partial_r\phi +\partial^2_r\phi $$ and $$ \nabla\left(|x|^{-b}\right) \cdot \nabla \phi=-b|x|^{-b}\frac{\partial_r\phi}{r}, $$ since $\nabla\left(|x|^{-b}\right)=-b|x|^{-b-2}x$.
Therefore, we can rewrite the identities \eqref{zR'2}-\eqref{zR''22} as \begin{align}\label{virial1} z_R'(t)=2\,\mbox{Im}\int \partial_r\phi_R\frac{x\cdot \nabla u(t)}{r}\overline{u}(t)\,dx \end{align} and \begin{align}\label{virial2} z_R''(t)=&\,\,4\int \frac{\partial_r\phi_R}{r}|\nabla u(t)|^2\,dx+4\int \left(\frac{\partial_r^2\phi_R}{r^{2}}-\frac{\partial_r \phi_R}{r^3}\right)|x\cdot \nabla u(t)|^2\,dx-\int|u(t)|^2 \Delta^2\phi_R \,dx \nonumber\\ &+\frac{4-2b}{N+2-b}\int \left[-\partial^2_r\phi_R -\left(N-1+\frac{bN}{2-b}\right)\frac{\partial_r \phi_R}{r}\right]|x|^{-b}|u(t)|^{\frac{4-2b}{N}+2}\,dx. \end{align} Continuing from above, we use the energy conservation \eqref{Energy} to obtain \begin{align}\label{zR''} z_R''(t)=2E[u_0]+K_1+K_2+K_3, \end{align} where \begin{align} K_1=&-4\int \left(2-\frac{\partial_r\phi_R}{r}\right)|\nabla u(t)|^2\,dx-4\int \left(\frac{\partial_r \phi_R}{r^3}-\frac{\partial^2_r\phi_R}{r^2}\right)|x\cdot \nabla u(t)|^2\,dx,\\ K_2=&\frac{2}{N+2-b}\int\left[(2-b)(2-\partial^2_r\phi_R)+(2N-2+b)\left(2-\frac{\partial_r \phi_R}{r}\right)\right]|x|^{-b}|u(t)|^{\frac{4-2b}{N}+2}\,dx,\\ K_3=&-\int|u(t)|^2\Delta^2\phi_R\,dx. \end{align} Now, we define a function $\phi_R$ such that \begin{equation}\label{phicond} \partial_r\phi_R(r)-r{\partial^2_r\phi_R(r)}\geq 0, \,\, \mbox{for all} \,\, r=|x|\in \mathbb{R}. \end{equation} Indeed, inspired by the work of \citet{ogawa1991blow}, we first consider, for $k\in \mathbb{N}$ to be chosen later, the following function \begin{align}\label{v(r)} v(r)= \left\{ \begin{array}{ll} 2r, &\mbox{ if } 0\leq r \leq 1\\ 2r-2(r-1)^k, &\mbox{ if } 1< r \leq 1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\\ \mbox{smooth and}\,\, v'<0, &\mbox{ if } 1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}< r < 2\\ 0,&\mbox{ if }r\geq 2. \end{array} \right.
\end{align} \begin{rem} Note that the function $f(r)=2r-2(r-1)^k$ for $r\geq 1$ has an absolute maximum at $r=1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}$ and therefore $f'\left(1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\right)=0$. \end{rem} Define the radial function $$ \phi(r)=\int_0^{r}v(s)\,ds. $$ Recall that $\phi_R(r)=R^2\phi\left(\frac{r}{R}\right)$, which implies \begin{equation}\label{phiR} \partial_r\phi_R(r)=Rv\left(\frac{r}{R}\right) \quad \mbox{and} \quad \partial^2_r\phi_R(r)=v'\left(\frac{r}{R}\right). \end{equation} It is easy to see that inequality \eqref{phicond} holds for $0<r\leq R$ and $r\geq 2R$ by direct computation, and for $R\left(1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\right)< r < 2R$ by \eqref{phiR} and the fact that $v'<0$ and $v\geq 0$ in this region. It remains to consider the region $R<r\leq R\left(1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\right)$. By the definition \eqref{v(r)} of $v$ and relations \eqref{phiR}, in this region we have \begin{equation}\label{phiR-int} \partial_r\phi_R(r)=2R\left[\frac{r}{R}-\left(\frac{r}{R}-1\right)^k\right] \quad \mbox{and} \quad \partial^2_r\phi_R(r)=2-2k\left(\frac{r}{R}-1\right)^{k-1}. \end{equation} Thus \begin{eqnarray*} \partial_r\phi_R(r)-{r\partial^2_r\phi_R(r)}&=&2r\left(\frac{r}{R}-1\right)^{k-1}\left[k-\frac{R}{r}\left(\frac{r}{R}-1\right)\right]\\ &\geq &2r\left(\frac{r}{R}-1\right)^{k-1}\left[k-\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\right]>0. \end{eqnarray*} In view of inequality \eqref{phicond}, the second integral in the definition of $K_1$ is non-negative, and since $\partial_r\phi_R(r)=2r$ for $0<r\leq R$, we obtain \begin{align}\label{K_1} K_1\leq &-\int_{|x|> R}\Phi_{1,R}|\nabla u(t)|^2\,dx, \end{align} where $\Phi_{1,R}=4\left(2-\frac{\partial_r\phi_R}{r}\right)$.
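For the record, the maximum location used in the remark above, as well as the positivity of the bracket $k-\left(\frac{1}{k}\right)^{\frac{1}{k-1}}$, follow from a one-line computation (this small verification is ours, added for completeness):

```latex
f'(r) = 2 - 2k(r-1)^{k-1} = 0
\iff (r-1)^{k-1} = \frac{1}{k}
\iff r = 1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}},
\qquad
f''(r) = -2k(k-1)(r-1)^{k-2} < 0 \ \text{for } r>1,
```

so the critical point is an absolute maximum of $f$ on $[1,\infty)$; moreover, for $k\geq 2$ we have $\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\leq 1<k$, hence $k-\left(\frac{1}{k}\right)^{\frac{1}{k-1}}>0$.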
Moreover, using the fact that $\partial^2_r\phi_R(r)=2$ for $0<r\leq R$, we have \begin{align} \mbox{supp} \left[(2-b)(2-\partial^2_r\phi_R)+(2N-2+b)\left(2-\frac{\partial_r \phi_R}{r}\right)\right]\subset (R,\infty), \end{align} which implies $$ K_2=\int_{|x|> R}\Phi_{2,R}|x|^{-b}|u(t)|^{\frac{4-2b}{N}+2}\,dx, $$ where $\Phi_{2,R}=\frac{2}{N+2-b}\left[(2-b)(2-\partial^2_r\phi_R)+(2N-2+b)\left(2-\frac{\partial_r \phi_R}{r}\right)\right]$. It is clear that $\Phi_{1,R}(r), \Phi_{2,R}(r)\geq 0$, since by definition $\partial^2_r\phi_R(r)\leq 2$ and $\partial_r\phi_R(r)\leq 2r$ for all $r=|x|\in \mathbb{R}$. Now we use the decay of $|x|^{-b}$ away from the origin to estimate $K_2$. To fix ideas, we only consider the case $N\neq 2$. When $N=2$ the proof is completely analogous, applying inequality \eqref{Interp2} instead of \eqref{Interp1} (see also Remark \ref{N=2} below for more details). From the inequality \eqref{Interp1} and Young's inequality, we deduce \begin{eqnarray*} K_2&\lesssim & \frac{1}{R^b}\int_{|x|> R}\Phi_{2,R}|u(t)|^{\frac{4-2b}{N}+2}\,dx\\ &\lesssim & \frac{1}{R^b} \left(\|\nabla (\Phi_{2,R}^{\frac{1}{2-b}})u(t)\|_2+\|\Phi_{2,R}^{\frac{1}{2-b}}\nabla u(t)\|_2 \right)^{2-b}\|u_0\|_2^{\frac{4-2b+Nb}{N}}\\ &\lesssim & \varepsilon \left(\|\nabla (\Phi_{2,R}^{\frac{1}{2-b}})u(t)\|_2+\|\Phi_{2,R}^{\frac{1}{2-b}}\nabla u(t)\|_2 \right)^{2} +\frac{\|u_0\|_2^{\frac{2(4-2b+Nb)}{bN}}}{\varepsilon^{\frac{2-b}{b}}R^2}. \end{eqnarray*} It is clear that $\Phi_{2,R}\in L^{\infty}$. We claim that \begin{equation}\label{nablaPhi} \left|\nabla \left(\Phi_{2,R}^{\frac{1}{2-b}}(r)\right)\right|\lesssim \frac{1}{R},\,\,\mbox{for all}\,\, r=|x|\in \mathbb{R}. \end{equation} Indeed, if $r\leq R$ or $r\geq 2R$, then $\nabla \left(\Phi_{2,R}^{\frac{1}{2-b}}(r)\right)=0$ and the desired inequality holds.
In the intermediate region we first consider $R<r\leq R\left(1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\right)$, where, in view of \eqref{phiR-int}, we obtain \begin{eqnarray*} \left|\nabla \left(\Phi_{2,R}^{\frac{1}{2-b}}(r)\right)\right|&=&\left|\partial_r\left[\left(\frac{r}{R}-1\right)^{\frac{k-1}{2-b}}\left(\frac{4}{N+2-b}\left[k(2-b)+(2N-2+b)\left(1-\frac{R}{r}\right)\right]\right)^{\frac{1}{2-b}}\right]\right|\\ &\lesssim & \frac{1}{R}\left(\frac{r}{R}-1\right)^{\frac{k-1}{2-b}-1}+\frac{R}{r^2}\left(\frac{r}{R}-1\right)^{\frac{k-1}{2-b}} \\ &\lesssim & \frac{1}{R}, \end{eqnarray*} if we assume that $\frac{k-1}{2-b}-1>0$, that is, $k>3-b$. Finally, when $ R\left(1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\right)<r< 2R$ we have that $\Phi_{2,R}(r) \gtrsim 1$ and from \eqref{phiR} we deduce \begin{eqnarray*} \left|\nabla \left(\Phi_{2,R}^{\frac{1}{2-b}}(r)\right)\right|&=&\left|\partial_r\left[\left(\frac{2}{N+2-b}\left[(2-b)\left(2-v'\left(\frac{r}{R}\right)\right)+(2N-2+b)\left(2-\frac{R}{r}v\left(\frac{r}{R}\right)\right)\right]\right)^{\frac{1}{2-b}}\right]\right|\\ &\lesssim & \Phi_{2,R}^{\frac{b-1}{2-b}}(r)\left(\frac{1}{R}\left|v''\left(\frac{r}{R}\right)\right|+\frac{R}{r^2}\left|v\left(\frac{r}{R}\right)\right|+\frac{1}{r}\left|v'\left(\frac{r}{R}\right)\right|\right)\\ &\lesssim & \frac{1}{R}. \end{eqnarray*} Returning to the bound of $K_2$ and using estimate \eqref{nablaPhi}, we get \begin{align}\label{K_2} K_2\lesssim & \,\, \varepsilon \left(\|\nabla (\Phi_{2,R}^{\frac{1}{2-b}})u(t)\|^{2}_2+\|\Phi_{2,R}^{\frac{1}{2-b}}\nabla u(t)\|^{2}_2 \right) +\frac{\|u_0\|_2^{\frac{2(4-2b+Nb)}{bN}}}{\varepsilon^{\frac{2-b}{b}}R^2}\nonumber \\ \lesssim &\,\, \varepsilon \int_{|x|> R}\Phi_{2,R}^{\frac{2}{2-b}}\left|\nabla u(t)\right|^2dx +\frac{\varepsilon \|u_0\|_2^2}{R^2}+\frac{\|u_0\|_2^{\frac{2(4-2b+Nb)}{bN}}}{\varepsilon^{\frac{2-b}{b}}R^2}.
\end{align} Finally, since $\|\Delta^2\phi_R\|_{\infty}\lesssim 1/R^2$, we obtain from the mass conservation \eqref{Mass} the crude estimate \begin{equation}\label{K_3} K_3\lesssim \frac{\|u_0\|_2^2}{R^2}. \end{equation} Inserting estimates \eqref{K_1}, \eqref{K_2} and \eqref{K_3} into the right-hand side of \eqref{zR''}, we infer that there exists $c>0$ such that \begin{align}\label{zR''2} z_R''(t)\leq& \,\,2E[u_0]+\int_{|x|> R} \left(c\varepsilon\Phi_{2,R}^{\frac{2}{2-b}}-\Phi_{1,R}\right)\left|\nabla u(t)\right|^2dx \nonumber \\ &+c\frac{\varepsilon\|u_0\|_2^2}{R^2}+c\frac{\|u_0\|_2^{\frac{2(4-2b+Nb)}{bN}}}{\varepsilon^{\frac{2-b}{b}}R^2}. \end{align} Next, we claim that for sufficiently small $\varepsilon>0$ \begin{equation}\label{Phivare} c\varepsilon\Phi_{2,R}^{\frac{2}{2-b}}(r)-\Phi_{1,R}(r)\leq 0, \,\,\mbox{for all}\,\, r> R. \end{equation} We first consider the region $R<r\leq R\left(1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\right)$. By relations \eqref{phiR-int}, in this region we have \begin{align} c\varepsilon\Phi_{2,R}^{\frac{2}{2-b}}(r)-\Phi_{1,R}(r)=&c\varepsilon\left(\frac{r}{R}-1\right)^{\frac{2(k-1)}{2-b}}\left(\frac{4}{N+2-b}\left[k(2-b)+(2N-2+b)\left(1-\frac{R}{r}\right)\right]\right)^{\frac{2}{2-b}}\\ &-8\frac{R}{r}\left(\frac{r}{R}-1\right)^{k} \\ = &\left(\frac{r}{R}-1\right)^{k}\left[c\varepsilon\left(\frac{r}{R}-1\right)^{\frac{bk-2}{2-b}}\left(\frac{4}{N+2-b}\left[k(2-b)\right.\right.\right.\\ &+\left.\left.\left.(2N-2+b)\left(1-\frac{R}{r}\right)\right]\right)^{\frac{2}{2-b}}-8\frac{R}{r}\right]\\ \leq & \left(\frac{r}{R}-1\right)^{k}\left[c\varepsilon\left(\frac{1}{k}\right)^{\frac{bk-2}{(2-b)(k-1)}}\left(\frac{4}{N+2-b}\left[k(2-b)\right.\right.\right.\\ &+\left.\left.\left.(2N-2+b)\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\right]\right)^{\frac{2}{2-b}}-\frac{8}{1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}}\right], \end{align} if we assume that $bk-2>0$, that is, $k>2/b$.
So, we can choose $\varepsilon>0$ sufficiently small such that \eqref{Phivare} holds in this case. Now we turn our attention to the region $R\left(1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\right)< r$. Since $v'(r/R)\leq 0$ in this region, from \eqref{phiR} we first have \begin{equation}\label{Reg2} \frac{\partial_r \phi_R}{r}(r)= \frac{R}{r}v\left(\frac{r}{R}\right)\leq \frac{1}{1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}}v\left(1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}\right)= 2-2\frac{\left(\frac{1}{k}\right)^{\frac{k}{k-1}}}{1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}} \end{equation} and $$ \left|\partial^2_r\phi_R(r)\right|=\left|v'\left(\frac{r}{R}\right)\right|\leq \|v'\|_{\infty}. $$ The last two inequalities imply $\Phi_{2,R}(r)\lesssim 1$. Moreover, from \eqref{Reg2}, we have $$ \Phi_{1,R}(r)=4\left(2-\frac{R}{r}v\left(\frac{r}{R}\right)\right)\geq 8\frac{\left(\frac{1}{k}\right)^{\frac{k}{k-1}}}{1+\left(\frac{1}{k}\right)^{\frac{1}{k-1}}}. $$ Therefore there exists $\varepsilon>0$ such that \eqref{Phivare} also holds in this case. Note that in both cases considered above $\varepsilon>0$ was chosen independently of $R>0$. Finally, collecting estimates \eqref{zR''2}, \eqref{Phivare} and taking $R>0$ sufficiently large we deduce $$ \frac{d^2}{dt^2}\displaystyle\int\phi_R|u(t)|^2\,dx= z_R''(t)\leq E[u_0]<0, $$ and standard arguments imply that the solution blows up in finite time, concluding the proof of Theorem \ref{Blowup}. \begin{rem}\label{N=2} In the case $N=2$, the inequality \eqref{zR''2} may be replaced by \begin{align}\label{zR''3} z_R''(t)\leq& \,\,2E[u_0]+\int_{|x|> R} \left(c\varepsilon\Phi_{2,R}^{\frac{2}{2-\frac{b}{2}}}-\Phi_{1,R}\right)\left|\nabla u(t)\right|^2dx \nonumber \\ &+c\frac{\varepsilon\|u_0\|_2^2}{R^2}+c\frac{\|u_0\|_2^{\frac{2(4-b)}{b}}}{\varepsilon^{\frac{4-b}{b}}R^4}+c\frac{\|u_0\|_2^{4-b}}{R^b}, \end{align} taking into account the inequality \eqref{Interp2} to estimate $K_2$.
The same arguments employed above imply \begin{equation*}\label{Phivare2} c\varepsilon\Phi_{2,R}^{\frac{2}{2-\frac{b}{2}}}(r)-\Phi_{1,R}(r)\leq 0, \,\,\mbox{for all}\,\, r> R, \end{equation*} for sufficiently small $\varepsilon>0$, independent of $R>0$, as long as $k> 4/b$ for $v(r)$ given by \eqref{v(r)}. Moreover, the last three terms on the right-hand side of \eqref{zR''3} can be made small, for $R>0$ sufficiently large, concluding the proof of Theorem \ref{Blowup} in this case as well. \end{rem} \vspace{0.5cm} \noindent \textbf{Acknowledgments.} M.C. was partially supported by Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior - CAPES. L.G.F. was partially supported by Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior - CAPES, Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico - CNPq and Funda\c{c}\~ao de Amparo a Pesquisa do Estado de Minas Gerais - Fapemig/Brazil.
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}} \IEEEPARstart{C}{loud} computing has facilitated the deployment of systems for big data analytics and Web services. To exploit cloud computing resources efficiently, we can either choose for a specific task~\cite{tccPerf} the most cost-effective cloud configuration, i.e., the types and numbers of virtual machine instances; or, we can optimize the system performance for a specific deployment setting so as to reduce the total computing resources in demand~\cite{bestconfig}. In fact, modern systems are exposing an increasing number of configurable parameters that have strong impacts on system performance and are thus denoted as \emph{PerfConfs}, e.g., \emph{innodb\_buffer\_pool\_size} and \emph{executor.cores} in Figure~\ref{fig:curve}. Tuning the PerfConfs of a system well can speed up performance severalfold~\cite{acts}, requiring no change to the system design. Unfortunately, to meet the diversity of applications and deployment settings, the number and the complexity of PerfConfs have increased to a level exceeding the comprehension capability of human beings~\cite{asilomar}. We see an emerging need for automating the tuning of PerfConfs~\cite{ieee1,metis,boat} for much higher system performance. Existing solutions to auto-tuning PerfConfs for systems in the cloud are either \emph{experience-driven} or \emph{data-driven}. Approaches based on heuristics-guided search~\cite{ga,evoParatune} and analytical modeling~\cite{starfish,tccEnsemble} rely heavily on human experience and knowledge, belonging to the experience-driven category. Experience-driven tuning requires human intervention for each specific case and has limited applicability. Approaches using Bayesian optimization~\cite{ottertune} or other machine-learning models~\cite{capes} exploit data to train models for optimization, thus falling into the data-driven category.
Data-driven tuning can be applied wherever sufficient tuning samples are provided, thus attracting increasing popularity~\cite{boat,opentuner}. However, running tuning tests in the cloud and collecting large samples are expensive under pay-as-you-go pricing, while the required sample size is proportional to the dimension of the configuration space~\cite{learningTheory}. Sample scarcity and high dimensionality pose two challenges for data-driven configuration tuning. In this paper, we take the data-driven approach to address the problem of auto-tuning performance for systems in the cloud by adjusting PerfConf settings. Our main idea is to \emph{tackle performance tuning as a comparison problem and model the performance comparison relations of the limited samples}. In contrast to the common exploitation of performance prediction models~\cite{cartModel}, we adopt the classification method for the comparison modeling, as it brings two benefits that directly address the sample scarcity challenge of data-driven tuning. First, the classification model for the comparison problem can have a training set whose size grows quadratically with the original sample set, as it takes pairs of original samples as input and such pairs can be constructed through \emph{permuting} every pair of the original samples. Second, we can generate even more training samples based on manual tuning experiences. As the manual tuning process usually goes through numerous trials and comparisons, tuning experiences are usually summarized in comparison-based rules, e.g., \textit{increasing memory cache sizes leads to higher performance}. We can generate more training samples for the classification models based on such tuning rules, while this is impossible for performance prediction modeling~\cite{rf}. But two problems remain to be solved. The first is about dimensionality, i.e., \emph{how to effectively represent the input without increasing its dimensions}.
If we directly concatenate two PerfConf settings, the input dimension for the model is doubled, leading again to sample scarcity~\cite{learningTheory}. If we take the division or difference of a PerfConf pair, different pairs will collide, i.e., different inputs are mapped to the same output. We propose to induce samples by constructing a bijection from a $2d$-dimensional space to a $d$-dimensional one. The second is about model accuracy, i.e., \emph{how to find the best PerfConf setting using an imprecise model}. The found PerfConf setting should lead to the best system performance attainable within a given budget of time and computing resources. Machine learning models are generally not a hundred percent accurate~\cite{learningTheory}. Even if we train a model with enough samples, the model can still misjudge some comparison relations. We must robustly find a well-performing PerfConf setting even if some predictions are incorrect. We propose a clustering-based tuning algorithm that can exploit the imprecise classification model. \begin{figure}[t] \vspace{-12pt} \subfloat[MySQL performance under different workloads.]{ \includegraphics[width=.22\textwidth,height=71pt]{workloadNonlinear}% \label{fig:curve:mysql} }\vspace{-3pt} \hspace{1pt} \subfloat[Spark performance under different environments.]{ \includegraphics[width=.22\textwidth,height=71pt]{sparkEnvironment}% \label{fig:curve:spark} }\vspace{-3pt} \caption{Performance-PerfConf curves are \emph{nonlinear}, \emph{nonsmooth}, and \emph{system-/workload-/environment-specific}.}\vspace{-6pt} \label{fig:curve} \end{figure} We thus present ClassyTune, which is, to the best of our knowledge, the first automatic performance tuning system that exploits a classification model to find the best PerfConf setting within a limited sample set.
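To make the comparison-modeling idea above concrete, the following minimal sketch (our own illustration, not the ClassyTune implementation; all names are hypothetical) permutes every ordered pair of $n$ original PerfConf-performance samples into a labeled comparison sample, yielding a training set of size $n(n-1)$:

```python
from itertools import permutations

def build_comparison_set(samples):
    """Turn PerfConf-performance samples into pairwise comparison samples.

    samples: list of (perfconf_vector, performance) tuples.
    Returns ((conf_a, conf_b), label) pairs, where label is 1 when
    conf_a achieved strictly higher performance than conf_b, else 0.
    Every ordered pair is kept, so n samples yield n * (n - 1) pairs.
    """
    return [
        ((conf_a, conf_b), 1 if perf_a > perf_b else 0)
        for (conf_a, perf_a), (conf_b, perf_b) in permutations(samples, 2)
    ]
```

A binary classifier trained on such pairs can then serve as a cheap surrogate oracle for the question "does configuration A outperform configuration B".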
In ClassyTune, we use a \emph{classifier} to predict whether one PerfConf setting yields better performance than another. Taking this classification approach, ClassyTune can construct a useful model for auto-tuning with only a limited number of original PerfConf-performance samples, while the common auto-tuning methods would require tens of times more samples~\cite{smartconf,ottertune,bestconfig}. The classifier model can make a prediction in a time multiple orders of magnitude shorter than an actual tuning test takes to run. We can thus use the model as a surrogate of the system and take a systematic approach towards tuning with an imprecise model. ClassyTune consists of three components for sampling, modeling and searching, respectively. The sampling component outputs a database of PerfConf-performance samples; the modeling component outputs a classification-based model; and, the searching component finds, in a best-effort manner, the PerfConf setting with the highest performance. Decoupling the system into three components allows the reuse of the intermediate tuning outputs, i.e., the database and the model. As a result, ClassyTune can be used not only for tuning, but also for system analysis.
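The sample-induction idea mentioned earlier, representing a pair of settings without doubling the input dimension, can be illustrated coordinate-wise with Cantor's classic digit-interleaving bijection between $[0,1)^2$ and $[0,1)$. The sketch below is our own fixed-precision illustration, not ClassyTune's actual construction; a robust implementation would manipulate digit strings or integers to avoid floating-point rounding:

```python
def interleave(a, b, digits=8):
    """Map a pair (a, b), each in [0, 1), to one value in [0, 1) by
    interleaving their decimal digits, in the spirit of Cantor's proof.

    At a fixed precision of `digits` decimals per input the mapping is
    invertible: even-indexed digits of the result come from `a`,
    odd-indexed ones from `b`.
    """
    da = f"{a:.{digits}f}"[2:]  # digits after the decimal point
    db = f"{b:.{digits}f}"[2:]
    mixed = "".join(x + y for x, y in zip(da, db))
    return float("0." + mixed)

def deinterleave(c, digits=8):
    """Inverse of interleave at the same fixed precision."""
    dc = f"{c:.{2 * digits}f}"[2:]
    return float("0." + dc[0::2]), float("0." + dc[1::2])
```

Applying this per coordinate maps a $2d$-dimensional pair of normalized PerfConf vectors to a $d$-dimensional model input without collisions.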
The intermediate outputs, especially the model, can inform users about relations between PerfConfs and performance. In this work, we make the following contributions:\vspace{-3pt} \begin{itemize} \item We propose a data-driven performance auto-tuning approach, unprecedentedly adopting a classification model for representing the performance comparison relations between PerfConf settings ($\S$\ref{sec:modeling}).\vspace{3pt} \item We propose to address the input dimension problem through sample induction that constructs a bijection based on Cantor's proof ($\S$\ref{sec:sampleMapping}).\vspace{3pt} \item We propose a clustering-based auto-tuning method that exploits the imprecise classification model ($\S$\ref{sec:tuning}).\vspace{3pt} \item We implement the above solutions in ClassyTune ($\S$\ref{sec:implement}) and evaluate the system in extensive and comprehensive experiments, using 7 popular systems and 14 common application workloads in the cloud ($\S$\ref{sec:eval}).\vspace{3pt} \item We present a customer's use case to show how ClassyTune can be used and help users reduce the cloud computing resources needed to run an online stateless service ($\S$\ref{sec:usecase}). \end{itemize} \section{Motivation and Related Work}% \label{sec:motivate}% This section examines the modeling challenges for the data-driven methods of automatic performance tuning based on PerfConf setting adjustments. These challenges motivate our work over the related works, which are summarized at the end of this section. \subsection{Challenge: Non-Smooth Complicated Curves}% \label{sec:curve}% PerfConf-performance curves are formed by taking PerfConfs as input and the system performance as output. Different systems have different performance curves. In fact, this curve is not only related to the system, but also very sensitive to the workloads, the deployment environments and the computing resources~\cite{acts}.
Figure~\ref{fig:curve} plots the curves for database system MySQL and the distributed online processing system Spark. Among the four plotted curves, two for MySQL and two for Spark, none demonstrates linearity. The performance is not in direct proportion to the PerfConf input. For example, Figure~\ref{fig:curve:mysql} plots the throughput of MySQL under two workloads of read-only and TPC-C, given \texttt{\small buffer\_pool\_size} as input. The throughputs of MySQL are not directly proportional to the size of buffer pool. Figure~\ref{fig:curve:spark} plots the job durations for Spark under the standalone and cluster deployments respectively. The performances demonstrate no linearity with the number of executor cores either. Even for the same system, changes to the workload, the deployment environment, or the computing resources can also lead to different PerfConf-performance curves. Changing the workload from read-only to TPC-C leads to two completely different performance curves for MySQL, as shown in Figure~\ref{fig:curve:mysql}. Changing the deployment from the standalone mode to the cluster mode also changes the shape of Spark's performance curve, as illustrated in Figure~\ref{fig:curve:spark}. Generally, it would not be wise to use linear models to map PerfConf-performance relations due to non-linearity. As the system, workload, environment, and computing resources are factors influencing the curve shape, PerfConf-performance models should be constructed with regard to a specific \emph{combination} of these factors, making model reuse infeasible. In sum, tuning tests and samples must be collected specifically for such a combination, leading to the sample scarcity challenge ($\S$\ref{sec:scarcity}). The non-smooth property of performance curves is also a challenge to the popular data-driven auto-tuning method of Bayesian optimization (BO)~\cite{BO}. 
BO-based auto-tuning enables an effective use of the sampling budget by guiding the sampling and search process with an acquisition function. The common application of BO adopts a Gaussian process prior to get a closed-form acquisition function. Unfortunately, this adoption requires the objective function to be differentiable. But not every objective function is differentiable. In fact, it is shown that the performance surfaces of several popular cloud systems are non-differentiable~\cite{acts}. Violating this assumption can invalidate an optimization process based on BO. \subsection{Challenge: The Sample Size} \label{sec:scarcity} Data-driven auto-tuning methods commonly exploit machine learning algorithms for modeling. We illustrate the sample size challenge with the common performance-prediction-based modeling~\cite{cartModel}. We model the PerfConf-performance relation by three machine learning methods. As performance is a continuous value, these models are \emph{regression} models, including boosted decision tree (B\_CART), support vector regression (SVR) and random forest regression (RFR). The decision tree model CART is effective in performance modeling for simple systems~\cite{cartModel} and has thus recently been applied to performance tuning~\cite{perfRank}. SVR can increase the sample set to twice as large, partially alleviating the sample scarcity. As a robust ensemble model, RFR combines the advantages of statistical reasoning and machine learning approaches~\cite{rf}. We have also tried linear regression, which has been used in a state-of-the-art related work for feature selection~\cite{ottertune}, but the model is too imprecise to be useful due to the reason described in Section~\ref{sec:curve}. We measure the above models using the max prediction error, which is the max difference between the real performances and the model predictions, divided by the corresponding real performance.
The equation is $\max(\{\frac{|y_i-\hat{y}_i|}{y_i}\}_{i\in[0,n-1]})$, where $n$ is the number of samples, $y$ denotes the real performances and $\hat{y}$ the performances predicted by a model. We use 100 samples to construct each model over 10 PerfConfs. \begin{figure}[t] \hspace{-1pt} \subfloat[Max differences between predicted and real performances.]{ \includegraphics[width=.255\textwidth,height=73pt]{diffx}% \label{fig:sample:maxdiff} }\vspace{-3pt} \hspace{1pt} \subfloat[Errors reduced as samples added (Hadoop-KMeans/RFR).]{ \includegraphics[width=.195\textwidth,height=73pt]{sampleSize}% \label{fig:sample:sizes} }\vspace{-3pt} \caption{Highly inaccurate performance predictions \textbf{due to limited samples}, but adding samples reduces errors.}\vspace{-12pt} \label{fig:sample} \end{figure} As demonstrated in Figure~\ref{fig:sample:maxdiff}, the max prediction errors of these models can be very high, more than twice the real performances. While the complexity of the PerfConf-performance curves is one reason, the scarcity of samples is the other. In fact, the model inaccuracy can be decreased given more samples (Figure~\ref{fig:sample:sizes}), but the cost of obtaining a large sample set can be high. Many tuning solutions require a database of thousands of samples for tuning 10 parameters~\cite{ottertune}. Models based on neural networks would require more samples even for just two PerfConfs~\cite{capes}. Worse still, these samples must be collected for each specific combination of system, workload, environment and computing resources. This makes the precise prediction of system performance almost impossible, because collecting a large number of samples for \emph{every} such combination is impractical, if not impossible. Hence, we are facing the problem of how to obtain proper samples for model construction. Sample scarcity also has negative impacts on the tuning process of BO.
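For reference, the max prediction error metric defined above is straightforward to compute; a minimal sketch (ours, assuming strictly positive measured performances):

```python
def max_prediction_error(y_true, y_pred):
    """Max relative prediction error: max_i |y_i - yhat_i| / y_i.

    y_true: real measured performances (assumed strictly positive).
    y_pred: performances predicted by a model, same length as y_true.
    """
    return max(abs(y - yh) / y for y, yh in zip(y_true, y_pred))
```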
With BO, the GP model can be trained with limited samples and later updated with more samples as the acquisition function drives the sampling process. However, with a GP model trained with limited samples, the tuning process based on BO can be very ineffective. As demonstrated in Figure~\ref{fig:gpr}, a BO model with very few samples cannot locate the best points for sampling as well as one with more samples does. \begin{figure} \hspace{-1pt} \subfloat[Better optimized performances for a larger initial training set.]{ \includegraphics[width=.22\textwidth,height=73pt]{gprTomcatBest}% \label{fig:gpr:best} }\vspace{-3pt} \hspace{1pt} \subfloat[Better next point predictions for a larger initial training set.]{ \includegraphics[width=.22\textwidth,height=73pt]{gprTomcatNext}% \label{fig:gpr:next} }\vspace{-3pt} \caption{\textbf{Sample size matters:} tuning Tomcat by BO with a GP prior.}\vspace{-6pt} \label{fig:gpr} \end{figure} \subsection{Challenge: Irrelevant Optimization Steps} Data-driven tuning methods like Bayesian optimization optimize and sample stepwise towards the final optimization goal~\cite{ottertune}. In comparison, many other data-driven tuning methods train a model after collecting a large sample set and then optimize on the final model~\cite{perfRank}. There exists a question on \emph{whether we should optimize stepwise or integrally}. We look into the optimization process of BO. At each step, BO algorithms determine the next sampling point by \emph{optimizing} a carefully designed acquisition function~\cite{BO}. Acquisition functions determine how to explore the input space. The commonly used acquisition function is the expected improvement (EI) function, which represents the expected improvement from sampling a given point. The prior probability model on $f$ is needed in the EI computation. This probability model is usually assumed to be described by a Gaussian process (GP)~\cite{ottertune}.
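For the EI function mentioned above: under a Gaussian posterior with mean $\mu(x)$ and standard deviation $\sigma(x)$ at a candidate point, EI has a well-known closed form. The sketch below states that generic textbook formula (maximization convention); it is our illustration, not code from any system cited here:

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """Closed-form expected improvement under a Gaussian posterior.

    mu, sigma: posterior mean and standard deviation at a candidate point.
    best: best objective value observed so far (maximization convention).
    xi: optional exploration margin.
    """
    if sigma <= 0.0:
        return max(mu - best - xi, 0.0)
    z = (mu - best - xi) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # Phi(z)
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # phi(z)
    return (mu - best - xi) * cdf + sigma * pdf
```

At each BO step, the next sample is the candidate maximizing this quantity over the configuration space.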
Assuming the GP prior, a priori knowledge over $f$ is required to set the covariance function and hyper-parameters. We follow the common practice in choosing the covariance function and hyper-parameters~\cite{BO}. Figure~\ref{fig:gpr:next} demonstrates how the BO method runs toward the final result by optimizing the EI acquisition function at each step. Even though the current EI acquisition function is optimized to find the next sample point at every step, the found point is not necessarily a better one. In fact, it is a worse one in many cases, as demonstrated in Figure~\ref{fig:gpr:next}. When the total number of samples is small, the resulting model might even fail to find a better point in the following steps, e.g., the optimization process with a small initial sample set, represented by the dotted line in Figure~\ref{fig:gpr}. These facts indicate that we do not need to optimize at every step in the optimization process. We can wait until enough samples are collected. We should optimize integrally on a large sample set, instead of stepwise on a small one. Besides, instead of trying a single point at each step, we can simultaneously try multiple points. With these understandings, we design ClassyTune. \subsection{Related Work} Solutions to automatic performance tuning have been proposed for specific types of systems, e.g., storage systems~\cite{capes} and databases~\cite{ottertune}. Auto-tuning solutions for general systems also exist, e.g., BestConfig~\cite{bestconfig}, BOAT~\cite{boat}, and SmartConf~\cite{smartconf}. Performance tuning requires the support of a flexible system architecture. Thus, auto-tuning solutions for general systems implement system architectures supporting the whole process of auto-tuning PerfConfs, including manipulating the system under tune, running tuning tests and computing the optimization results. At the core of configuration tuning lies a black-box optimization problem.
The solutions to this black-box optimization problem can be divided into two categories, i.e., \emph{experience-driven tuning} and \emph{data-driven tuning}. Classic experience-driven tuning methods include the heuristics-based search approach~\cite{bestconfig,ga,sa} and the control-theory based approach~\cite{smartconf}. Tuning based on manually specified models~\cite{starfish,tccEnsemble} also belongs to this category. As heuristics are highly related to human experience, they might be useful for some systems but not others. Besides, the search-based approach can only produce stable results when the searched space is large enough. Control-theory based auto-tuning iteratively applies a change to inputs and monitors feedback to decide the next step. This approach is only applicable to cases where the number of PerfConfs is a handful. There also exist auto-tuning tools that decide the configuration settings based on expert-provided guidelines or experts' answers to a set of questions~\cite{questiontune1}. Like manually specified models, they have only limited applicability. Different heuristics-driven tuning methods can be assembled for use in auto-tuning, as the OpenTuner framework does~\cite{opentuner}. Data-driven tuning approaches exploit data to guide tuning, instead of experience-based heuristics or manually specified models. Such approaches typically train a model on a given data set and optimize the model towards the tuning objective~\cite{ottertune}. Due to the large number of PerfConfs, the model-based approach demands a large sample set to train useful regression models on performance~\cite{learningTheory}. Bayesian optimization is a popular data-driven tuning approach~\cite{capes,ottertune,metis}, as it requires only a limited number of samples to train the optimization model.
For the BO method with a GP prior, a priori knowledge over the black-box function is required to set the covariance function and hyper-parameters of the GP model. Unfortunately, such knowledge requires a deep understanding of the optimization problem and the covariance function, which is a difficult task for common users. Facebook's Spiral system~\cite{spiral} is an industrial practice of integrating data-driven methods for predicting the current best setting of PerfConfs. A recent work, BOAT~\cite{boat}, enables the blending of experience-driven tuning and data-driven tuning. It proposes an optimization framework to integrate human knowledge into the Bayesian optimization process, making the black-box optimization partially white. ClassyTune takes a classification approach to performance auto-tuning, which is completely different from previous works. ClassyTune addresses the sample scarcity problem in auto-tuning by two measures, i.e., permuting sample pairs to form inputs and generating samples from tuning experiences. Through data generation, ClassyTune transfers expert knowledge and experiences to the auto-tuning process. Like BestConfig~\cite{bestconfig}, ClassyTune has an architecture that can work with both experience-driven and data-driven tuning methods. The difference between the two architectures is that ClassyTune can save all collected tuning samples for future modeling purposes and expose the tuning model to inform users about PerfConf-performance relations, while BestConfig cannot. The classification model can be used effectively as the surrogate of the system in analysis. In comparison, models directly predicting performances are too imprecise to rely upon~\cite{perfRank}, while models like Bayesian optimization~\cite{BO,metis} can only predict the next best points and cannot be used in such analyses. \section{Design Overview} ClassyTune is a data-driven performance auto-tuning tool for systems in the cloud.
It addresses the problem of auto-tuning system PerfConfs within a given number of tuning tests. A set of PerfConf-performance samples can be collected from the given number of tuning tests. Taking a comparison-based perspective, ClassyTune models the relation between each pair of PerfConf-performance samples. This comparison-based modeling enables the generation of even more samples based on tuning experiences, further attacking the sample scarcity challenge. The modeling process trains a classifier for predicting whether the first PerfConf setting has a higher performance than the second in a pair of PerfConf settings. Section~\ref{sec:modeling} presents the details of the comparison-based modeling based on classification. Unlike performance-prediction based methods, ClassyTune does not need to assume whether the performance curve is linear or non-linear, thanks to its classification-based method. But, like other machine learning models, the trained classifier is not a hundred percent accurate. It is an imprecise classifier. To tune with the imprecise classifier, ClassyTune adopts a clustering-based method. Naive exploitations of the imprecise classifier will fail to find the best PerfConf setting due to occasionally incorrect predictions. ClassyTune uses the trained classifier as the surrogate of the system. ClassyTune clusters a set of good PerfConf settings output by the classifier to locate promising spaces for searching the best PerfConf setting. Section~\ref{sec:tuning} presents the details of the tuning process based on an imprecise classifier. The overall architecture and implementation of ClassyTune is presented in Section~\ref{sec:implement}. ClassyTune consists of three main components, i.e., sampling, modeling and searching ($\S$\ref{sec:components}). The sampling and searching components can interact with the system under tune. The interaction mainly consists of adjusting the PerfConf settings of the system under tune.
The interactions are automated and driven by the tuning process when needed. The three components interact in a way as defined by the tuning algorithm ($\S$\ref{sec:algo}). Implementation details are also provided ($\S$\ref{sec:sysdetails}).\vspace{-6pt} \section{Modeling Comparisons} \label{sec:modeling} In this section, we first formulate the comparison-based view for performance tuning. We then detail how to induce training samples and model comparison relations by classification for the auto-tuning task.\vspace{-6pt} \subsection{The Comparison-Based View} We model the performance-comparison relations between pairs of PerfConf settings. This comparison-based model takes a pair of PerfConf settings $(X_1, X_2)$ as input and outputs $1$ if the first setting has a performance better than the second, i.e., $f(X_1)-f(X_2)>0$, or $0$ otherwise. Hence, it can be represented by the function $g$ defined as:\vspace{-3pt} \begin{equation} g(X_1, X_2)= \begin{cases} 1 & \text{if $f(X_1)-f(X_2)>0$},\\ 0 & \text{otherwise}. \end{cases} \label{eq:classifier} \end{equation}\vspace{-6pt} We exploit the above comparison-based model to tackle the auto-tuning problem. We relate the comparison relation to \textbf{each dimension difference between an input pair}. We propose a mapping to encode this dimension difference and construct a new set of samples ($\S$\ref{sec:sampleMapping}). With the constructed sample set, we can exploit classifiers to model the comparison relations between input pairs. We choose the classifier with the best trade-off between computation overhead and accuracy for modeling. This comparison-based approach has the following advantages over other approaches. First, the tuning problem can easily fit into the comparison-based modeling perspective. 
As performance tuning is usually formalized as an optimization problem, the problem of finding the optimal PerfConf setting is in fact to find one setting that has a better performance as \emph{compared} to all other PerfConf settings. Second, modeling the comparison relations is more robust than directly modeling performance. During sample collection, the performance measurements are in fact prone to noise, leading to a variance of measurements. But even if two measurements are inaccurate due to noise or fluctuation, their comparison result can still be correct. In case some comparisons do not have correct results due to a high variance of measurements, there still exist many other correct comparison relations to rely upon. In comparison, such a high variance of measurements can completely divert the modeling of performance predictions. Third, comparison-based modeling leads to a natural augmentation of the data set, partially alleviating the sample scarcity problem. With comparison-based modeling, the training set consists of PerfConf pairs and their performance comparison results. This training set is mapped from the original set of PerfConf-performance samples. The mapping is a permutation of the original sample set. Thus, for the same sample collection effort, comparison-based modeling can have a training set quadratically larger than that of direct performance modeling. Besides, we can generate even more training samples based on manual tuning experiences, which are commonly expressed as comparison-based rules. This is impossible for performance prediction modeling. Finally, comparison-based modeling provides straightforward means for users to gauge the influences of PerfConfs on the performance. In manual tuning, we would actually observe whether a change of PerfConf values leads to an increase or decrease of the performance. This is exactly a comparison process.
In fact, when we analyze systems, we make similar comparison-based observations as well. Thus, comparison-based modeling aligns well with the thinking of human beings.\vspace{-6pt} \subsection{Inducing Samples for Modeling}% \label{sec:sampleMapping} The performance comparison result can be viewed as the performance change result if the first PerfConf setting is changed to the second one. Put another way, the performance change is actually related to the first PerfConf setting and the value difference regarding the second PerfConf setting. Hence, we can represent a pair of PerfConf settings by encoding in each dimension the value of the first setting and the corresponding difference respectively. For each dimension, we need to construct a bijection for an effective encoding. With such a bijection, we can construct a larger sample set without increasing the input dimension. Cantor's proof provides the solution to constructing such a bijection~\cite{sfcbook}. Probably sounding counter-intuitive, it has been shown in cardinal arithmetic that the cardinality of the set $[0,1]\times[0,1]$ (the unit square) is equal to that of the set $[0,1]$. The cardinality of a set, also called its size, is a measure of the number of elements of the set; for a finite set, it is simply the number of its elements. \emph{Two sets have the same cardinality if there exists a bijection between the two sets.} This result was first demonstrated by Cantor and later proved based on space-filling curves (SFC), which are curved lines twisting and turning enough to fill the whole of any finite space~\cite{sfcbook}. Space-filling curves provide one way of constructing a bijection from the unit square to the unit interval, mapping from the $2d$-dimensional space to the $d$-dimensional space. For each PerfConf, we thus construct the bijection from two values into one value using SFC, specifically the z-ordering method~\cite{sfcbook}.
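The z-ordering interleaves the binary representations of the two values into one value. A minimal sketch for integer-quantized inputs (the bit width `nbits` is our assumption):

```python
def z_value(x1: int, x2: int, nbits: int = 6) -> int:
    """Interleave the bits of x1 and x2 (z-ordering): bit i of x1
    lands at output position 2*i+1, bit i of x2 at position 2*i."""
    z = 0
    for i in range(nbits):
        z |= ((x1 >> i) & 1) << (2 * i + 1)
        z |= ((x2 >> i) & 1) << (2 * i)
    return z

# The ordering of the two inputs matters:
# z_value(0b000100, 0b000101) differs from z_value(0b000101, 0b000100)
```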
The mapped value in the unit interval is called the z-value. The z-value of a point in multiple dimensions is simply calculated by interleaving the binary representations of its coordinate values. For example, given the $i$th-dimension values $X^{(i)}_1=0b000100$ and $X^{(i)}_2=0b000101$, we get the z-value of $(X^{(i)}_1,X^{(i)}_2)$ as $0b000000110001$. The order of the two input variables actually matters. In the example, the z-value of $(X^{(i)}_2,X^{(i)}_1)$ is $0b000000110010$. Note that this z-ordering mapping can actually be modeled by a function with the modulo operator and simple arithmetic operators. We construct a new sample set quadratically larger than the original set of PerfConf-performance samples by permuting every pair of original samples. The permutation generates $P_n^2=n\times(n-1)$ samples from the original $n$ samples. On construction, we exploit the above SFC method to map pairs of PerfConf settings into a space with the same dimensions as the number of PerfConfs. It is common practice that inputs are normalized before training machine learning models. Assuming that $X_1,X_2$ are normalized and transformed into the unit interval $[0,1]$, the SFC-based bijection is $h(X_1,X_2)=\overrightarrow{X_{1,2}}$ with $X_1,X_2,\overrightarrow{X_{1,2}}\in [0,1]^d$. We can generate even more training samples based on historical tuning experiences. Experiences useful for sample generation are comparison-based rules, for example, \emph{increasing the value of PerfConf X leads to a higher performance}. For any given PerfConf setting, we can increase the value of PerfConf $X$ and obtain pairs of PerfConf settings. We can then induce new training samples based on the above sample induction method. As long as the experience-based rule holds, we can generate as many training samples as needed. However, we must be careful about two things. First, the experience-based rule must be correct; otherwise, the model trained on the generated samples would be wrong.
Second, we must introduce no data skewness and take samples uniformly distributed in the input space; otherwise, the trained model can be misguiding.\vspace{-6pt} \subsection{Modeling Comparison as Classification} \label{sec:classify}% We can model the comparison-based relations using the machine learning method of classification, the model of which is called a \emph{classifier}. A classification problem is to decide which class a given input belongs to. Given pairs of PerfConfs, we classify their performance comparison results into two classes, i.e., the first better than the second, and otherwise. For example, a PerfConf pair $(X_1,X_2)$ is classified into one class if $X_1$ performs better than $X_2$, i.e., when $g(X_1,X_2)=1$; otherwise, it is classified into the other class. With the sample induction $h(X_1,X_2)=\overrightarrow{X_{1,2}}$ as defined in Section~\ref{sec:sampleMapping}, we can transform $g$ of Eq. (\ref{eq:classifier}) into the following function $g'$:\vspace{-6pt} \begin{equation} g'(h(X_1,X_2))= \begin{cases} 1 & \text{if $f(X_1)-f(X_2)>0$},\\ 0 & \text{otherwise}. \end{cases}% \label{eq:classifierbij} \end{equation} where $g(X_1,X_2)=g'(h(X_1,X_2))=g'(\overrightarrow{X_{1,2}})$. The input space of $g'$ has the same dimensions as that of $f$, i.e., half the input dimensions of $g$, but with quadratically more training samples than those for $f$. We can now construct a classifier on the sample set $\{(\overrightarrow{X},g'(\overrightarrow{X}))\}$ with enough samples. We might also train a classifier for telling whether one configuration setting is better than the default configuration setting. But this way of constructing a classifier cannot solve the problem of sample scarcity. As our target is to exploit classifier models to solve the tuning problem, our focus is how to use the machine learning model, instead of improving the model.
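The pairwise sample induction can be sketched as follows (a sketch under our own assumptions: plain concatenation stands in for the per-dimension SFC mapping $h$, which in ClassyTune keeps the input dimension at $d$; an off-the-shelf binary classifier would then be fit on the result):

```python
from itertools import permutations

def induce_pairs(X, y):
    """Permute every ordered pair of the n original samples into
    n*(n-1) training samples for the comparison classifier g'.
    Concatenation is a stand-in for the SFC mapping h (hypothetical)."""
    Xp, yp = [], []
    for i, j in permutations(range(len(X)), 2):
        Xp.append(X[i] + X[j])              # encode the ordered pair
        yp.append(1 if y[i] > y[j] else 0)  # label: first performs better?
    return Xp, yp

settings = [[0.1, 0.9], [0.5, 0.5], [0.8, 0.2]]  # toy PerfConf settings
perf = [120.0, 150.0, 90.0]                       # toy performances
Xp, yp = induce_pairs(settings, perf)             # 3 originals -> 6 induced samples
```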
We do not tune the hyper-parameters of the classifier, as this is a problem as difficult as the one that the classifier is trained for. Rather, we bear in mind that the classifier is not precise. We thus design algorithms that can exploit imprecise predictions by such a classifier to fulfill tuning-related tasks. \textbf{Classification vs. ranking.} As related works formulate tuning as an optimization problem, some would think that modeling tuning as a \emph{ranking} problem~\cite{perfRank} would be more natural than as a comparison one. We do not address the tuning problem with ranking models but with classification models, for two reasons. First, the input space of configuration tuning generally has continuous dimensions, so any given range contains more points than the total number of natural numbers. As ranking is in fact a mapping from natural numbers to inputs, this indicates that ranking is an inadequate way of modeling. Second, configuration tuning is to find the top input(s) in the set, rather than ordering all inputs. Given a ranking model, obtaining any comparison result is straightforward; but given a classification model for comparison, finding the ranking is an NP-hard problem~\cite{orderNPhard}. In other words, the ranking model incorporates more information than the classification model. That said, like directly predicting performance, performance ranking also does more than required.\vspace{-6pt} \section{Tuning with an Imprecise Classifier} \label{sec:tuning} With the comparison-based classification model, ClassyTune can compare any pair of PerfConf settings. Since we can now use the trained model as the surrogate of the real system, our goal becomes to find the best one in a sufficiently large set of $N$ PerfConf settings. \textbf{Strawman.} One naive solution is to sample $N$ PerfConf settings and use the classifier to compare every pair of them.
In order to find the optimal setting, $N$ must be sufficiently large to cover the whole space of PerfConfs. Unfortunately, pairing every two of the $N$ PerfConf settings would lead to a set with a daunting size of $C_N^2$. Even though the classifier can predict in a sufficiently short time, this processing time would \emph{add up to a long duration}. Worse still, as the classifier is not a hundred percent correct, \emph{some results would be contradicting}, making it impossible to deduce the real optimal. \textbf{A better strawman.} An alternative solution is to run a knockout tournament among the huge set of $N$ PerfConf settings. In each comparison, i.e., each prediction by the classifier, the winning PerfConf setting is kept for the next round of comparison, while the other one is discarded directly. After $\log_2 N$ rounds of binary comparisons, we finally reach the last pair of winning PerfConf settings. And, the final winner would be the optimal. However, as we have mentioned, the classifier is not a hundred percent correct; thus, \emph{the actual optimal setting might have been discarded because it loses in just one false comparison.} \textbf{Our solution.} ClassyTune takes a systematic approach towards tuning. Rather than trying to improve the precision of the model, ClassyTune recognizes that the trained model can only make a large portion of predictions correct. It exploits this fact and finds the top setting in a best-effort manner through three phases, i.e., finding a list of good PerfConf settings, locating promising areas with optimal settings, and searching for the optimal setting.\vspace{-6pt} \subsection{Finding Better PerfConf Settings} ClassyTune does not compare every pair of PerfConf settings. Rather, in the training phase, it keeps the best PerfConf setting in the training set along with the trained model.
When given the large set of $N$ PerfConf settings, ClassyTune uses the trained model to compare each of the $N$ settings with the best PerfConf setting in the training set. The settings that win in these comparisons are kept in a list. Even though the trained model might not be completely correct in these comparisons, it is very likely that many of these winning settings are better than the best PerfConf setting in the training set. We take \emph{a list of winning settings} output by the imprecise classifier. We do not keep the single PerfConf setting that wins the most comparisons, in contrast to the way BO with the GP prior takes one optimal setting at each step. Given the same imprecision rate, finding a list of winning settings reduces the probability that we find no PerfConf setting better than the best one in the training set. Furthermore, we do not directly output this list of winning PerfConf settings as optimal ones. Rather, we use them to locate promising areas for finding the real optimal setting. The reasons include: 1) as the model is not a hundred percent accurate, some of the winning settings might not even be good settings; and, 2) the space of PerfConfs is too large, such that the $N$ settings might not be representative enough for finding the optimal one. \subsection{Clustering to Locate Promising Subspaces} In fact, we believe good settings are close to each other and are possibly located in a few promising areas. Generally, the optimal PerfConf setting is surrounded by good settings that are better than many others. Likewise, the areas where many good settings are located are promising places where the optimal setting might be found. We denote such areas as the \emph{promising} subspaces. For the set of winning PerfConf settings, ClassyTune uses the clustering algorithm of KMeans to find out where the good PerfConf settings cluster.
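These two steps — filtering winners against the pivot and clustering them — might be sketched as follows (a self-contained sketch: `predicts_better` is a hypothetical stand-in for the trained comparison classifier, a plain Lloyd's iteration stands in for a library KMeans, and choosing $k$ via the elbow criterion is omitted):

```python
import random

def predicts_better(setting, pivot):
    """Hypothetical stand-in for the trained comparison classifier;
    here it compares a toy objective (sum of the coordinates)."""
    return sum(setting) > sum(pivot)

def kmeans_centers(points, k, iters=20, seed=0):
    """Plain Lloyd's iteration returning k cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

rng = random.Random(1)
candidates = [(rng.random(), rng.random()) for _ in range(200)]
pivot = (0.6, 0.6)  # best setting seen in the training set (toy)
winners = [s for s in candidates if predicts_better(s, pivot)]
centers = kmeans_centers(winners, k=2)  # centers of promising subspaces
```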
To determine the number of promising areas, i.e., the number of clusters, we exploit the elbow criterion~\cite{elbowclusternum} to find the best number $k$ for clustering. We then run the KMeans algorithm to cluster the winning PerfConf settings into $k$ clusters, whose centers are then computed. The promising subspaces are located around these centers. \subsection{Searching for the Best} Now, we have the centers of the promising subspaces, but we have not yet set their boundaries. We set the boundaries of the promising subspaces based on the PerfConf settings that we have already evaluated. As none of the evaluated settings is expected to be better than the list of winning settings, we should not consider settings lying farther from a center than that center's closest evaluated neighbor. Hence, for each center, we find at each dimension its closest neighbor in the set of evaluated settings; and, the value of this neighboring setting is used as the boundary of this dimension for the center. After doing so for each dimension of all centers, we have bounded all promising subspaces. Within the specified number of tuning tests, we then sample in the promising subspaces so that a good coverage of the areas is guaranteed~\cite{bestconfig}. These sampled PerfConf settings are then evaluated in the system to decide which exactly is the best. The final best is output as the suggested setting for an optimal performance.\vspace{-6pt} \section{The ClassyTune System} \label{sec:implement} The overall architecture of ClassyTune is illustrated in Figure~\ref{fig:ClassyTune}. Like BestConfig~\cite{bestconfig} and Ottertune~\cite{ottertune}, ClassyTune only needs the users to provide a list of PerfConfs along with their valid ranges, and scripts to set PerfConf values and get system performances, for tuning a new system and its application workload. ClassyTune has three main components, i.e., sampling, modeling and searching.
These components interact through data flows; thus they can be located on one server or on multiple servers. The results of sampling and modeling are produced as intermediate outputs for reuse in subsequent tasks. The two intermediate outputs are the database of PerfConf-performance samples and the classifier model. The final output of the tuning process is the best PerfConf setting found within the given number of tuning tests. \begin{figure}[!t] \centering \includegraphics[width=0.46\textwidth]{compareTune}\vspace{-6pt} \caption{ClassyTune: the architecture \& the tuning process.}\vspace{-12pt} \label{fig:ClassyTune} \end{figure} \subsection{Main Components}% \label{sec:components}% \textbf{Sampling.} Different from common machine learning tasks, configuration tuning allows the learning process to freely choose the points to sample in the input space. As all values in a range are valid for a dimension, sample values on each dimension should spread across the corresponding range so that the underlying relations impacting comparison results can be represented and learned. According to our practical experience, we find the latin hypercube sampling (LHS) method~\cite{lhs} used in ClassyTune very effective and fit for the purpose. It can (1) uniformly cover the whole range on each dimension and (2) sample a given number of points. In comparison, uniform random sampling does not necessarily cover the whole range, while grid sampling might not be able to sample a required number of points. Other sampling methods that satisfy the two properties like LHS can also be used with ClassyTune. The output of the sampling phase is a database of PerfConf-performance samples. \textbf{Modeling.} ClassyTune exploits the database of PerfConf-performance samples to construct new samples for training the comparison-based model. ClassyTune tries different classification methods to train the comparison-based model ($\S$\ref{sec:classify}).
This comparison-based modeling enables ClassyTune to discover the latent principle underlying how changing an input leads to a change of performance. In comparison to direct performance modeling, ClassyTune simplifies the task by not requiring precise predictions of the performance numbers. Rather, ClassyTune only cares about the relative relations of the resulting performances. The constructed model is output as an intermediate result of the tuning process. It can later be exploited by other analysis tasks for the system. \textbf{Searching.} Based on the classifier, we search the configuration space thoroughly for a set of best points. The classifier is used to decide whether a configuration setting is superior to any other configuration. \emph{This prediction takes much less time than actually evaluating a configuration setting for the performance.} Like other model-based tuning solutions~\cite{metis,ottertune,cartModel}, ClassyTune exploits the trained model as a surrogate. Different from some Bayesian-optimization based solutions that explicitly solve an optimization equation, ClassyTune takes a systematic approach to optimization, adopting a three-phase searching process. The found candidate settings are evaluated in the system for verification. \subsection{The Tuning Algorithm} \label{sec:algo} The whole tuning process of ClassyTune is implemented as illustrated in Algorithm~\ref{algo:opt}. Given a set of PerfConf-performance samples as input, we first induce a new sample set for training a binary classifier (Line~1). Then, we find the best PerfConf setting in the original sample set (Line~2). Using the best PerfConf setting in the training set as the pivot, ClassyTune compares each of the $N$ PerfConf settings with this pivot (Line~3-5). All the winning settings are put in a winner set (Line~6-7). Second, ClassyTune encloses the areas in which the winner settings cluster (Line~8-9).
These areas are the promising subspaces where optimal settings might locate. Even though the classifier might have mispredicted some winners, the location of the promising subspaces might be shifted a little bit but would not be completely missed. Third, to actually find the optimal settings, ClassyTune proposes to resample in these subspaces and evaluate the sampled PerfConf settings in the system (Line~10-11). The best setting will be output as the solution (Line~12-14). \setlength{\textfloatsep}{0.6\baselineskip} \begin{algorithm}[!t] \footnotesize \caption{ClassyTune: classification-based tuning.}% \label{algo:opt}% \KwIn{$X, y$\tcp*{\scriptsize PerfConf settings,performance}} \KwIn{$m$\tcp*{\scriptsize \# of validating PerfConf settings}} \KwOut{$bestX$\tcp*{\scriptsize the optimal PerfConf setting}} \tcc{\scriptsize induce samples, train classifier} $clf$=\texttt{\footnotesize FIT}(\texttt{\footnotesize SET\_INDUCE}($X,y$))\; $idxMax$=\texttt{\footnotesize ARGSORT}($y$)[-1]\tcp*{\scriptsize index of best $y$} \tcc{\scriptsize sample many points in the space} $S\leftarrow\{X\}$ \tcp*{\scriptsize $\mid S\mid>1000\times DIM$($X$)} $Xp$=\texttt{\footnotesize PAIR\_INDUCE}($S$,$X$[$idxMax$])\; $Yp$=$clf$.\texttt{\footnotesize PREDICT}($Xp$)\; \tcc{\scriptsize points better than $X$[$idxMax$]} $idxList$=\texttt{\footnotesize IDX\_WHERE}($Yp$,$yp_i>0$)\; $X_s$=$S$[$idxList$]\; \tcc{\scriptsize compute best \# of clusters} $k$=\texttt{\footnotesize BEST\_CLUSTER\_NUM}($X_s$)\; \tcc{\scriptsize cluster points to promising subspaces} $C$=\texttt{\footnotesize KMEANS\_FIT\_AND\_GET\_CENTERS}($k$,$X_s$)\; \tcc{\scriptsize sampling in promising subspaces by \texttt{\footnotesize LHS}} $X\_candidates\leftarrow$\texttt{\footnotesize LHS}($C$, $m/k$)\; $y\_candidates\leftarrow$\texttt{\footnotesize EVALUATE}($X\_candidates$)\; $idxMax$=\texttt{\footnotesize ARGSORT}($y\_candidates$)[-1]\; $bestX\leftarrow X\_candidates$[$idxMax$]\; return $bestX$\; \end{algorithm} 
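The LHS resampling step (Line~10 of Algorithm~\ref{algo:opt}) can be sketched in pure Python as follows (a sketch; the per-subspace bounds `lo`/`hi` are assumed to come from the boundary-setting step of Section~\ref{sec:tuning}):

```python
import random

def latin_hypercube(n, lo, hi, seed=0):
    """Draw n points via LHS inside the box given by per-dimension
    bounds lo/hi: each dimension is split into n equal strata, and
    each stratum contributes exactly one coordinate value."""
    rng = random.Random(seed)
    cols = []
    for dim in range(len(lo)):
        strata = [(i + rng.random()) / n for i in range(n)]  # one value per stratum
        rng.shuffle(strata)                                  # decouple the dimensions
        cols.append([lo[dim] + s * (hi[dim] - lo[dim]) for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]

# sample 8 candidate settings inside one promising subspace (toy bounds)
points = latin_hypercube(8, lo=(0.2, 0.4), hi=(0.6, 0.8))
```

Unlike uniform random sampling, every one of the $n$ strata on every dimension receives exactly one sample, which gives the coverage property the text requires.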
\subsection{Implementation} \label{sec:sysdetails} \textbf{Data types for sample representation.} Note that we need to use data types with higher precision to represent the induced samples. In our implementation, we use the \emph{double} data type to represent the induced sample values and \emph{float} for the original ones. However, the lengthy tail of a decimal is very likely to lose its significance in the model training process. Thanks to the sparsity of samples, it is rare that the induced inputs collide with the original ones. The disadvantage of the induction is that the latent relations between configuration pairs could become even more obscure. However, as we have mentioned in Section~\ref{sec:sampleMapping}, the sample induction can actually be modeled as a function of modulo and other simple arithmetic operators. Luckily, as demonstrated by many real-world applications, some classification algorithms can represent highly complex input data~\cite{evalClassifier1}. \textbf{Other implementation details.} We implement ClassyTune using Python and R, with only about 2000 lines of code. The interactions with the system under tune are implemented through shell scripts. ClassyTune maximizes a scalar performance metric. The scalar performance metric can be defined and specified through a utility function~\cite{bestconfig}, with user-concerned performance goals as inputs. \begin{table}[!t] \caption{The Evaluated Systems and Variables}\vspace{-6pt}% \label{tbl:suts}% \centering \begin{tabular}{p{1.2cm}lp{0.5cm}c} \toprule[1.2pt] {\small \textbf{System}} & {\small \textbf{Description}} & {\small \textbf{Lang.}} & \textbf{\small Workloads}\\ \midrule[0.8pt] \textbf{HDFS} & Dist. filesystem & Java & PageRank,\\ \textbf{YARN} & Dist.
processing & Java & Join,\\ \textbf{Hive} & Data analytics & Java & KMeans\\ \midrule[0.2pt] \textbf{Spark} & Data processing & Scala &PageRank,TeraSort,KMeans\\ \midrule[0.2pt] \textbf{MySQL} & DB server & C++ & readOnly,readWrite,TPC-C\\ \midrule[0.2pt] \textbf{PostgreSQL} & DB server & C & readOnly,readWrite,TPC-C\\ \midrule[0.2pt] \textbf{Cassandra} & NoSQL DB & Java & readWrite(YCSB-a)\\ \midrule[0.2pt] \textbf{Tomcat} & Web server & Java & Web exploration\\ \bottomrule[1.2pt] \end{tabular} \end{table} \section{Evaluation}% \label{sec:eval}% \subsection{Experimental Settings} We evaluate ClassyTune over 7 cloud systems that are implemented in different languages and support a variety of applications. These systems are listed in Table~\ref{tbl:suts}. To provide an example of tuning co-deployed cloud systems, we tune Hive and Hadoop together for offline data analytical workloads. We choose these systems in accordance with related works~\cite{smartconf,bestconfig,ottertune} for an easy comparison. We believe our choice is representative of a large number of cloud systems. We choose 14 application workloads following the choice of related works~\cite{smartconf,bestconfig,ottertune}, as listed in Table~\ref{tbl:suts}. The cases of Tomcat and Cassandra are relatively simple compared to the other systems, so only the Web-exploration and read-write workloads are chosen for them respectively. The other systems are evaluated on three typical workloads. The distributed processing systems of Spark and Hive plus Hadoop are evaluated under analytical and machine learning workloads, generated by the HiBench benchmark. The transactional (readWrite) and readOnly workloads for databases are generated by the SysBench benchmark. We also include the TPC-C workload, the current industrial standard for evaluating the performance of OLTP systems. For each system, we choose 10 influential PerfConfs for tuning, unless mentioned otherwise.
Related works taking the model-based approach typically use a similar number of parameters, around 7 to 16, with 8 achieving the best results when tuning a fixed set of parameters~\cite{metis,ottertune}. We choose the PerfConfs to tune in accordance with related works. These PerfConfs control various aspects of systems, including but not limited to network, CPU, memory, storage, indexing, caching and buffering. Performance metrics are application-specific. We adopt the performance metrics commonly used for the evaluated workloads. While workloads on Spark and Hive plus Hadoop are tuned for a shorter processing time (or task duration), workloads on the other systems are tuned for higher throughputs. Our experimental platform consists of 12 servers. Each server has two 12-core Intel Xeon E5620 CPUs with 32GB RAM. CentOS 6.5 and JVM 1.7 are installed. For each evaluation, one server is used to generate workloads. Standalone SUTs run on one server, while distributed SUTs are hosted by four servers.\vspace{-6pt} \subsection{Selecting the Classification Model} \label{sec:modelselect}% We empirically study which classifier is best suited to ClassyTune. Many machine learning methods exist to model the comparison relations, e.g., logistic regression (LR for short), decision tree (DT), support vector machine (SVM), neural networks (NN) and XGBoost. While the former three are classic methods for binary classification, neural networks have been applied to many real applications and have made significant progress in big-data scenarios. XGBoost (XGB for short) belongs to the algorithm family of gradient-boosted trees~\cite{gbt}, which have been shown to be among the best classifiers~\cite{evalClassifier1}. In binary classification problems with small data, algorithms from the family of gradient-boosted trees rank among the top. XGBoost has been used to achieve state-of-the-art results on many machine learning challenges.
\begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth,height=83pt]{classifier5}\vspace{-6pt} \caption{Percentage of winning settings found by different classifiers: XGB outperforms all the other classifiers, while the kernel method SVM, exploiting covariance functions, fails in most cases.} \label{fig:classifier} \end{figure} In comparison-based tuning, the key to success is to recognize the whole set of PerfConf settings that are better than, i.e., winning, a given one. We evaluate the above five classifiers on how well they recognize the winning settings. Each classifier is trained on a set of 50 original samples and tested on 20 samples whose performances are higher than that of the best sample in the training set. We measure how many of the 20 samples a trained classifier can recognize. The results are plotted in Figure~\ref{fig:classifier}. From Figure~\ref{fig:classifier}, we can see that XGBoost can find almost all the winning settings for all systems. Therefore, we choose XGBoost as the classifier model in ClassyTune.\vspace{-6pt} \subsection{Tuning Efficacy} \textbf{Comparing to performance-prediction based tuning.} We have tried predicting winning settings using \emph{regression} models on the same set of original samples as in Figure~\ref{fig:classifier}. We use the decision-tree based regression model, which has been shown to perform best in predicting system performances~\cite{cartModel}. But the model trained on the same sample set \emph{fails to find any} of the winning samples. This again confirms the validity of taking a comparison-based approach.\vspace{3pt} \textbf{Compared to other auto-tuning methods}. To demonstrate the tuning efficacy of ClassyTune, we compare ClassyTune with two state-of-the-art tuning approaches, i.e., the search-based approach~\cite{bestconfig} and the Gaussian-process (GP) based Bayesian optimization (BO) approach~\cite{metis,ottertune}.
Besides, \emph{these two approaches are among the few auto-tuning proposals that work on a limited number of samples}. We do not compare with approaches based on control theory~\cite{smartconf} or reinforcement learning~\cite{capes} because they are only applicable to a handful of configuration parameters. We exploit the open-source implementation BestConfig\footnote{\scriptsize http://github.com/zhuyuqing/bestconf/} for the evaluation of the search-based approach. As no open-source implementation can be found for the GP-based BO tuning approach~\cite{metis,ottertune}, we implemented it exploiting the Python package of a GP-based BO implementation\footnote{\scriptsize https://github.com/thuijskens/bayesian-optimization}. \begin{figure}[!t] \vspace{-9pt} \centering \subfloat[Throughputs of Web server, NoSQL database, and databases.]{ \includegraphics[width=.23\textwidth]{cqualityThroughput}% \label{fig:efficacy:throughput} }\vspace{-3pt} \subfloat[Running times of Spark and Hadoop jobs.]{ \includegraphics[width=.23\textwidth]{cqualityDuration}% \label{fig:efficacy:duration} }\vspace{-3pt} \caption{ClassyTune/BestConfig/GP-based BO (GP-BO) improving performances over those under default settings.} \label{fig:efficacy} \end{figure} \begin{figure*}[t] \centering \hspace{3pt} \begin{minipage}{0.3\textwidth} \centering \vspace{6pt} \includegraphics[width=\textwidth,height=97pt]{cdbs} \caption{Auto-tuning compared to manual tuning: databases/TPC-C.}\vspace{-12pt} \label{fig:dbs} \end{minipage} \hspace{6pt} \begin{minipage}{0.64\textwidth} \subfloat[PageRank on Spark]{ \centering% \includegraphics[width=.43\textwidth,height=85pt]{promisingPagerankSpark}% \label{fig:cluster:spark} } \hspace{3pt} \subfloat[PageRank on Hive-Hadoop]{ \centering% \includegraphics[width=.43\textwidth,height=85pt]{promisingPagerankHadoop}% \label{fig:cluster:hadoop} } \caption{Promising subspaces (bounded by circles) with optimal settings (i.e., evaluated points) as located by ClassyTune.}\vspace{-12pt}
\label{fig:cluster} \end{minipage} \end{figure*} For each combination of tuning solutions, systems and workloads, we run the tuning experiment three times and report the average performance improvement. In each tuning experiment, we tune within 100 tests, following the evaluation methodology of related works~\cite{bestconfig}. Figure~\ref{fig:efficacy} shows that ClassyTune can find configurations better than, and occasionally as good as, those output by the two state-of-the-art solutions. ClassyTune can improve throughputs to as much as about $8\times$ that under the default setting, and decrease execution times to as little as about $1/4$. Specifically, it has improved the throughputs of Tomcat by 76\%, Cassandra by 4\%, MySQL/transactions by 654\%, MySQL/reads by 256\%, PostgreSQL/transactions by 228\% and PostgreSQL/reads by 33\%. It reduces the execution time by 58\% for Spark/PageRank, 72\% for Spark/TeraSort, 50\% for Spark/KMeans, 6\% for Hive-Hadoop/PageRank, 7\% for Hive-Hadoop/Join and 22\% for Hive-Hadoop/KMeans. Even for the complex co-deployed system of Hive-Hadoop, ClassyTune can still improve the performance, reducing the execution time of the KMeans workload by as much as 22\%. In comparison, the search-based method and the GP-based BO method cannot tune such a complex system to a performance as good as ClassyTune's. ClassyTune tunes several systems to a performance much higher than the state-of-the-art solutions do, e.g., Spark/PageRank and MySQL/txns in Figure~\ref{fig:efficacy}. For the other systems, ClassyTune outperforms the state-of-the-art solutions only by a small margin. No system can be tuned arbitrarily well by changing PerfConf settings alone. There is an upper bound on the performance improvement that tuning PerfConf settings can achieve, although this bound can hardly be determined for the high-dimensional continuous space of PerfConfs.
The performances that ClassyTune has tuned to are the best we have found for the corresponding combinations of systems, workloads and environments. We have tried testing each combination over thousands of different PerfConf settings, but we never found a setting better than the one suggested by ClassyTune.\vspace{3pt} \textbf{Compared to manual and expert tuning.} To further demonstrate the effectiveness of ClassyTune, we also evaluate the performances tuned by ClassyTune against those tuned manually or by expert knowledge. We experiment with databases under the TPC-C workload. To enable the comparison, we adopt the settings suggested by online resources and related works~\cite{ottertune} for the manual setting. Before automatic tuning appeared, a common way of tuning databases was to use scripts written by experts based on their knowledge and expertise. We exploit two tuning scripts, for MySQL\footnote{\scriptsize https://launchpad.net/mysql-tuning-primer} and PostgreSQL\footnote{\scriptsize http://pgfoundry.org/projects/pgtune/} respectively. These scripts are also evaluated in a related work~\cite{ottertune}. We also demonstrate the tuning results of GP-based BO and BestConfig. Figure~\ref{fig:dbs} presents the results. ClassyTune can improve the system performance to about $3.2\times$ of that under the manually tuned configuration. In fact, human beings can hardly fully capture the characteristics of complicated workloads; thus auto-tuning methods find PerfConf settings with better performances than the manually tuned and script-tuned ones. Moreover, the latent relations between PerfConfs and performances are better captured when modeled in the way of ClassyTune than in the way of GP-based BO. Therefore, ClassyTune has an advantage in both database cases, while the BO-based and the search-based approaches perform slightly worse than the script-based approach on tuning PostgreSQL.
We believe that the number of samples is an influential factor, and that ClassyTune acquires its advantage from the comparison-based modeling.\vspace{-6pt} \subsection{Understanding the Comparison-Based Tuning Process} \textbf{Have winning PerfConf settings been recognized?} We measure whether ClassyTune can correctly identify all PerfConf settings better than a given one. As plotted in Figure~\ref{fig:classifier}, the classifier model can almost perfectly identify the list of winning PerfConf settings when only 50 samples are provided. This fact supports our design choice of locating promising subspaces by clustering these winning PerfConf settings.\vspace{3pt} \textbf{Are promising subspaces located?} We examine whether ClassyTune actually locates the promising subspaces. To better view the PerfConf-performance relations, we run a tuning experiment with 1000 tests for Spark/PageRank and Hive-Hadoop/PageRank respectively. We select the most influential PerfConf, \texttt{\small spark.default.parallelism} for Spark and \texttt{\small mapreduce.job.maps} for Hive-Hadoop. We plot all the sampled points in the sampling phase and the evaluated points in the searching phase. The results are shown in Figure~\ref{fig:cluster}. For both systems, the evaluated points cluster in the space, as circled in the figure. Moreover, these clusters have short execution times, i.e., higher performances, than other sampled points. In other words, ClassyTune has successfully located the promising subspaces and recognized a set of good settings.\vspace{3pt} \textbf{Imprecision is alleviated by the systematic approach.} We further verify the impact of classifiers' imprecision on tuning. We choose to evaluate on Tomcat-webExplore and PostgreSQL-reads because the classifiers display the most difference on the former and the least on the latter in Figure~\ref{fig:classifier}.
XGB, DT and LR improve the performances to 1.76, 1.71 and 1.73 respectively for tuning Tomcat/WebExplore, while they improve them to 1.33, 1.25 and 1.24 respectively for PostgreSQL/reads. We can see that the differences between the improved performances are not as large as those between the percentages of winning settings found. In fact, the tuning results of ClassyTune do not rely solely on the precision of the classifier. Rather, after the classifier pins down the promising areas, we take a systematic approach by resampling in the areas using the LHS method. This result leads us to think that, while exploiting machine learning models is beneficial, taking a systematic approach to the goal also helps to reduce the effects brought about by the imprecision of machine learning models.\vspace{-6pt} \subsection{Sample Induction Method} We evaluate whether the bijection-based sample induction actually performs better than the simple way of directly taking the difference (i.e., using the minus operation). We also compare our sample induction method with the direct concatenation of two PerfConf settings. We evaluate the three methods on the percentage of winning settings they can find. In the experiments, we use the XGBoost classifier for all three sample induction methods. Results are illustrated in Figure~\ref{fig:sampleinduce}. \begin{figure}[!t] \centering \includegraphics[width=0.47\textwidth,height=85pt]{sampleinduce}\vspace{-6pt} \caption{Percentage of winning settings found: sample induction based on Cantor's proof outperforms others.} \label{fig:sampleinduce} \end{figure} Our sample induction method based on Cantor's proof performs the best for all systems. As we have mentioned in Section~\ref{sec:sampleMapping}, this sample induction method can be modeled as a function of modulo and simple arithmetic operators, although it is seemingly complicated.
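The reference to Cantor's proof suggests a pairing bijection between $\mathbb{N}^2$ and $\mathbb{N}$. As an illustration of that idea in Python (not necessarily the exact encoding of Section~\ref{sec:sampleMapping}), the classic Cantor pairing function and its inverse:

```python
# Cantor pairing: a bijection N x N -> N, so a pair of values can be
# packed into a single number without collisions. By contrast, taking
# the difference collides: (3, 1) and (4, 2) both map to 2.
def cantor_pair(a, b):
    s = a + b
    return s * (s + 1) // 2 + b

def cantor_unpair(z):
    # recover the diagonal index s, then b and a
    s = int(((8 * z + 1) ** 0.5 - 1) // 2)
    b = z - s * (s + 1) // 2
    return s - b, b
```

Injectivity is exactly what the difference method lacks, consistent with its weaker results in Figure~\ref{fig:sampleinduce}.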
On the one hand, functions with modulo and simple arithmetic operators can easily be learned by common machine learning algorithms~\cite{learningTheory}. On the other hand, our sample induction method feeds the model with the real independent factors, i.e., PerfConfs. In comparison, the concatenation method mixes independent factors with correlated ones, while also increasing the input dimension. And the difference method performs worse than our method because the difference computation can lead to collisions of mappings. \subsection{High-dimensional Tuning} We demonstrate ClassyTune's advantages for tuning in a high-dimensional input space. We choose a tuning space with $30$ PerfConfs and constrain the tuning within $100$ tests. We compare ClassyTune to the two state-of-the-art auto-tuning methods, i.e., the search-based~\cite{bestconfig} and the GP-based BO~\cite{metis,ottertune} approach. Manual tuning is not applicable to high-dimensional tuning because it is very difficult, if not impossible, for human beings to comprehend relations in a high-dimensional space~\cite{asilomar}. Script-based tuning is also based on human experience, making it inapplicable to high-dimensional tuning as well. We tune MySQL and PostgreSQL under the TPC-C workload respectively. The tuning results are presented in Figure~\ref{fig:hiditune:perf}. First, increasing the dimension leads to a larger input space with possibly even better results, e.g., for MySQL/TPC-C. The performance improvements are higher than those in a 10-dimensional input space, as demonstrated in Figure~\ref{fig:efficacy}. ClassyTune outperforms the other auto-tuning methods in both high- and low-dimensional cases. For high-dimensional tuning, the advantage of ClassyTune over the other methods is more obvious. ClassyTune improves the performance of MySQL/TPC-C by more than \emph{six} times, while the GP-based BO and the search-based BestConfig can only improve it by four times.
Second, some systems may have only a limited number of effective PerfConfs, e.g., PostgreSQL/TPC-C, for which the performance improvements are similar for both high- and low-dimensional tuning. Even so, ClassyTune still has a slight advantage over the other auto-tuning methods. \subsection{Tuning Time} We have mentioned that the GP-based BO method has a high computation overhead. For the tuning results in Figure~\ref{fig:hiditune:perf}, we record the tuning time for both ClassyTune and GP-based BO. The tuning time includes the time for model training and model optimization. As GP-based BO is a stepwise method, its tuning time sums up the computation time of all steps. We carry out the auto-tuning processes of ClassyTune and GP-based BO five times each, and report the averages of the tuning results and the tuning times. The results are plotted in Figure~\ref{fig:hiditune:time}. ClassyTune involves a tuning time of no more than 200 seconds, while GP-based BO requires more than 550 seconds. Within a much shorter tuning time, ClassyTune finds a better PerfConf setting than the GP-based BO method. The GP-BO method has a heavy computation overhead because its tuning process involves the covariance matrix computation, which is carried out stepwise. Taking an integral approach to auto-tuning, ClassyTune trains a model once and then spends the rest of its time searching the input space thoroughly based on the trained model.
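The gap can be understood with a rough cost model (illustrative assumptions only, not measured constants): GP-based BO refits its model at every step, and each refit involves a covariance-matrix factorization that is roughly cubic in the number of samples seen so far, whereas the integral approach trains once on all samples.

```python
# hypothetical unit costs: cubic model fitting, constant factors ignored
def gp_bo_cost(n_steps):
    # stepwise: refit after every new sample, so the cubic fits add up
    return sum(t ** 3 for t in range(1, n_steps + 1))

def classytune_cost(n_samples):
    # integral approach: a single training pass over all samples
    return n_samples ** 3
```

Under this toy model, 100 tuning tests make the stepwise total more than an order of magnitude larger than the one-shot cost; real constants differ, but the asymmetry is consistent with the measured 550-second versus 200-second gap.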
If necessary, ClassyTune can further reduce its tuning time by searching fewer points.\vspace{-6pt} \begin{figure}[!t] \vspace{-9pt} \centering \subfloat[ClassyTune outperforms other auto-tuning methods.]{ \includegraphics[width=.23\textwidth,height=68pt]{hidi}% \label{fig:hiditune:perf} }\vspace{-3pt} \subfloat[Total tuning times for ClassyTune and GP-BO respectively.]{ \includegraphics[width=.23\textwidth,height=68pt]{ttime}% \label{fig:hiditune:time} }\vspace{-3pt} \caption{High-dimensional tuning results: tuning 30 PerfConfs for databases/TPC-C.} \label{fig:hiditune} \end{figure} \subsection{Cloud Resource Reduction via Performance Tuning} \label{sec:usecase} ClassyTune can bring about the five benefits of automatic performance tuning~\cite{acts}, like related works~\cite{bestconfig,ottertune,smartconf,smarthillclimbing} do. Here, we present a real use case from UTuned's customers to show how ClassyTune enables cloud resource reduction via performance tuning. In this case, ClassyTune is used to tune a small online querying service deployed in the cloud. The application workload accesses the service by connecting to a stateless Web service cluster running a Spring Boot\footnote{\scriptsize https://spring.io/projects/spring-boot} application, which sends user queries to the backend. Before tuning, the service is deployed on a three-node cluster, supporting a throughput of around 9000 composite operations per second. There is a resource planning question of \emph{whether all three nodes are needed or one node can be removed, given that the workload throughput must be guaranteed}. To answer this resource planning question, we deploy the service on clusters of one to three nodes respectively. For each deployment, we test its performance under the default PerfConf setting. Then, we use ClassyTune to tune for the best performance. Under the tuned PerfConf setting, we test the service performance again. All the performance results are listed in Table~\ref{tbl:case}.
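The resource-planning answer can be read off mechanically from the throughputs in Table~\ref{tbl:case}; a small Python sketch of the computation:

```python
# throughputs (composite ops/s) from Table 3 and the workload requirement
required = 9000
default_tp = {1: 3647.4, 2: 7972.4, 3: 9295.1}
tuned_tp = {1: 4376.8, 2: 9341.3, 3: 11905.2}

def min_nodes(throughputs, required):
    # smallest cluster size whose throughput meets the requirement
    return min(n for n, t in throughputs.items() if t >= required)

saving = 1 - min_nodes(tuned_tp, required) / min_nodes(default_tp, required)
```

This reproduces the conclusion of the use case: three default nodes versus two tuned nodes, i.e., a one-third resource reduction.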
\begin{table}[!h] \vspace{-12pt} \caption{Service Throughputs: Default vs. Tuned}\vspace{-6pt} \label{tbl:case}% \centering \begin{tabular}{|c|c|c|} \hline \textbf{Node \#} & \textbf{Default} (err. rate)& \textbf{ClassyTune} (err. rate)\\ \hline \textbf{1} & 3647.4 (17.6\%) & 4376.8 (8.9\%) \\ \hline \textbf{2} & 7972.4 (9.9\%) & \textbf{\normalsize 9341.3} (5.4\%) \\ \hline \textbf{3} &\textbf{\normalsize 9295.1} (9.7\%) & 11905.2 (2.2\%) \\ \hline \end{tabular} \end{table}\vspace{-6pt} For the target workload, a two-node cluster with a well-tuned PerfConf setting is the most cost-effective. Without tuning, it would require one more node, i.e., 50\% more computing resources, to satisfy the application workload. From Table~\ref{tbl:case}, we can see that a one-node deployment, tuned or untuned, cannot support the application workload. While an untuned two-node deployment cannot meet the throughput requirement, it can fully support the workload after being tuned by ClassyTune. For the three-node deployment, performance tuning enables it to support an even heavier application workload. In sum, we have reduced the cloud resource requirements (and \emph{costs}) of an online service by 33\% through performance tuning with ClassyTune.\vspace{-6pt} \section{Conclusion} This paper proposes a data-driven auto-tuning system, ClassyTune, which can auto-tune system performance by adjusting PerfConfs within a limited number of tuning tests. ClassyTune exploits and models the comparison relations between PerfConfs via classification algorithms, instead of the typical performance-based model. Thanks to the comparison-based modeling, we can induce and generate more samples for training the classification model. Like other machine learning models, the classification model is not a hundred percent correct. If exploited naively, the imprecision of the model could divert the performance tuning process such that no better PerfConf can be found.
To guarantee that a good PerfConf setting is found despite this imprecision, we propose a clustering-based approach to auto-tuning that exploits the imprecise classification model. Extensive experiments on seven systems commonly used in the cloud show that ClassyTune outperforms expert tuning and the state-of-the-art auto-tuning solutions, especially for high-dimensional inputs, while its computation overhead is much lighter than that of the state-of-the-art GP-based BO method. An illustrative use case shows how performance tuning by ClassyTune improves the system performance and enables a 33\% reduction of cloud computing resources for an online stateless service. \ifCLASSOPTIONcaptionsoff \newpage \fi \balance \bibliographystyle{IEEEtran}
\section{Introduction} The topic of thermodynamic systems in the presence of gravity has been discussed on many occasions and in many forms \cite{1968Lynd1,2003Oppenheim1,2009Campa1}, and has raised many questions on how to deal with the long-range effects of gravity in the thermodynamic analysis of systems, where concepts such as the isolation of a system in one regard or another are important. Non-stationary equilibrium situations, negative heat capacities or simply divergences in the thermodynamic limit have plagued the analyses, and the conventional, very successful thermodynamic framework of Boltzmann-Gibbs statistics had to be adapted and modified in order to account for the peculiarities of the thermodynamics of a gravitational system. Based on a Boltzmann-Gibbs analysis of the self-gravitating gas \cite{2002VS1}, the statistical analysis and subsequent calculation of thermodynamic properties have been carried out \cite{2018Esca1} assuming a generalized framework intended to describe a system with non-extensive properties, due to the presence of long-range forces such as gravity. The adopted generalized framework, i.e., Tsallis generalized $q$-statistics, has been developed in order to account for non-extensive effects, entailing an additional parameter $q$ in the statistical analysis. This work is an extension of these previous investigations that generalizes the self-gravitating gas to a more realistic system featuring a centrally placed compact object, like e.g. a black hole, around which the gas extends. Due to some peculiarities and open questions, we will not continue using the non-extensive $q$-statistics, but rather return to the conventional Boltzmann-Gibbs statistics, in order to get a first impression of the results. Other generalizations can be thought of, which will be commented on in the last section.
\section{Statistical mechanics and thermodynamics of a self-gravitating gas} I will briefly review the most important steps in the analysis of a self-gravitating gas \cite{2018Esca1,2002VS1}, from the system's properties to the peculiarities of the thermodynamic analysis and some of its outcomes. The governing force of the self-gravitating gas is the gravitational attraction between its $N$ identical constituent particles, which are otherwise moving freely; thus the Hamiltonian of the system is \begin{equation} \label{eq:H} \mathcal{H} = \mathcal{T} + \mathcal{U} = \sum_{i=1}^{N}\frac{p_i^2}{2m}-Gm^2 \sum_{1\leq i<j \leq N} \frac{1}{\left| \mb{q_i}-\mb{q_j} \right|_A} \,, \end{equation} where $G$ is the gravitational constant, $m$ the mass of an individual particle, and $A$ represents a short-range cutoff imposed in order to avoid the unphysical collapse of the system to a point. This Hamiltonian is the basis for a thermodynamic analysis which can in principle be done in different ensembles, like the microcanonical one, where the energy of the system is kept constant, or the canonical one, where instead the temperature is fixed and energy can be exchanged with a reservoir. In the microcanonical ensemble, the central thermodynamic quantity from which everything is derived is the entropy, i.e., the logarithm of the sum over microstates $\Omega (E,V,N)$, \begin{equation}\label{eq:S-MCE} S = k_B \ln \Omega (E,V,N) \,, \end{equation} where \begin{equation}\label{eq:Omega} \Omega = \frac{(2\pi m)^{3N/2}}{N! \,h^{3N} \Gamma\left(\frac{3N}{2}+1\right)} \int d^{3N}q\ \big[E-\mathcal{U}\big]^{3N/2-1} \,. \end{equation} From the entropy, one can obtain important thermodynamic quantities such as the temperature of the gas, or the equation of state, i.e., the relation between pressure, temperature and volume.
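Explicitly, the temperature and the equation of state follow from the standard microcanonical relations
\begin{equation}
\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N}\,, \qquad \frac{P}{T} = \left(\frac{\partial S}{\partial V}\right)_{E,N}\,.
\end{equation}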
\\ In the case of the canonical ensemble, the starting point is the partition function, defined as \begin{equation} Z = \frac{1}{N!h^{3N}}\int d^{3N}q \, d^{3N}p \, \exp_q{\left(-\beta \mathcal{H}(\textbf{p},\textbf{q}) \right)} \,, \end{equation} and everything else, like the equation of state, is derived from that quantity. The temperature in this ensemble is fixed, so it cannot be calculated. \\ Following the definition of these basic thermodynamic functions, calculations can be simplified by the assumption of a weak gravitational interaction, i.e., the gravitational contribution can be treated as a small correction to the ideal gas, and results can be obtained analytically in this case. \\ Under this additional assumption, further quantities of thermodynamic interest can be calculated, like the heat capacity and other response functions of the system. Both the equation of state and the heat capacity have been calculated and compared in the framework of Tsallis statistics \cite{2018Esca1}, and the details can be found there. \\ An important point for further investigations is the question of the statistical framework, which is closely connected to the choice of thermodynamic limit. Tsallis' non-extensive statistics naturally features a modification of the thermodynamic limit, in which the thermodynamic state variables result in convergent functions. In the case of the conventional Boltzmann-Gibbs statistics, another modification of the thermodynamic limit has to be adopted \cite{2002VS1} in order to obtain convergent results. In the following, we will employ Boltzmann-Gibbs statistics with the modified thermodynamic limit. \section{Addition of the central gravitational potential} As a modification of the basic setup of a simple self-gravitating gas, many complications can be thought of. The simplest case perhaps is the addition of a central potential, to model the situation of a self-gravitating gas around a black hole.
We will start with the assumption of an external gravitational field caused by a mass $M$ with Schwarzschild radius $r_S=2GM/c^2$ at the center of the configuration, restricting the movement of the gas to the region between the radius of the innermost stable circular orbit (ISCO) at $r_{ISCO} = 3 r_S$ and infinity. This will make a difference in the integrals contained in the sum over microstates and the partition function, respectively. Moreover, the central potential will influence every particle in the gas. The generalized Hamiltonian thus reads \begin{equation} \label{eq:Hgen} \mathcal{H} = \sum_{i=1}^{N}\frac{p_i^2}{2m}-Gm^2 \sum_{1\leq i<j \leq N} \frac{1}{\left| \mb{q_i}-\mb{q_j} \right|_{A}} - G m M \sum_{i=1}^{N} \frac{1}{\left| \mb{q_i}-\mb{r} \right |_{A}} \,, \end{equation} with $\mb{r}$ denoting the center of mass of the compact object. To simplify calculations, we choose $\mb{r}=0$. \\ The computational procedure for extracting the thermodynamic equation of state is fairly analogous to the case of a simple self-gravitating gas, and differs only in the restriction of the range of integration, due to the fact that we consider a ring-like structure, or even a flat two-dimensional disk shape. This restriction will manifest itself in the definition of the virial coefficients $b_i$, which will be slightly different. \\ The interesting question is whether the modification of the system will lead to differences in the thermodynamic limit, i.e., facilitate the calculation of otherwise divergent functions, or modify the qualitative dependence on the number of particles in the thermodynamic limit. Preliminary results indicate that this is not the case, and that the modifications are limited to the virial coefficients of the problem. \section{Outlook} We have discussed the thermodynamic properties of a self-gravitating gas under the influence of a central gravitational field caused by a heavy mass at the center of the configuration.
Building on the analysis of a self-gravitating gas cloud consisting of ideal particles \cite{2018Esca1,2002VS1}, an additional term accounting for the central gravitational potential was added to the analysis, and the resulting thermodynamic state variables were calculated. Preliminary results indicate slight modifications of the state variables, depending on the new parameter, the mass of the central object. The goal is to generalize the analysis of a simple self-gravitating gas so as to eventually be able to make predictions on the thermodynamic behavior of matter around a compact object, i.e., an accretion disk of sorts. Besides the inclusion of a central compact object, the gas itself can be modified in its properties, e.g., by considering non-ideal interactions between the particles. This could be accounted for in an exact way by adding additional particle-particle interactions to the Hamiltonian, with the corresponding coupling constant, like for example an electromagnetic charge. The different strengths of gravity and any other interactions that may be added have to be weighed against each other, and approximations could be applied. Another possibility would be to include effective potentials used in condensed matter systems, Mie-type potentials like the Lennard-Jones one or others, in order to describe different variations of the gas. Investigations in this direction would represent the first steps towards the description of non-ideal fluids in gravitational contexts: either gas clouds of interacting particles, or non-ideal fluids constituting accretion disks or clouds around a central compact object. Further generalizations include rotation of the system, or charge of the central object. Importantly, these results should then be connected to results of other calculations on accretion disks, in particular the accretion of (charged) dust particles in a spherical shell or torus structure \cite{2017Schr1}. \section*{Acknowledgments} C. G.
was supported by funding from the DFG Research Training Group 1620 `Models of Gravity'. \bibliographystyle{ws-procs975x65}
\section{The effect of applying Hadamard, S and Hadamard gates to $|0 \rangle^{\otimes^n}$} In this section we determine the effect of applying the Hadamard, S and Hadamard gates to the quantum state $|0\rangle ^{\otimes^n}$, that is, we derive a formula for \begin{equation} |\alpha \rangle= H^{\otimes^n}S^{\otimes^n}H^{\otimes^n}|0\rangle ^{\otimes^n} \end{equation} and we show that in the final superposition $|\alpha \rangle=\sum_{z=0}^{2^n-1} a_z |z\rangle$ the amplitude $a_z$ of a single state $|z\rangle$ depends only on the Hamming weight $w(z)$. It is known \cite{10.5555/1972505} that, given any computational basis state $|x\rangle$, $0\leq x < 2^n$, \begin{equation} \label{eq:hadamard} |\psi \rangle =H^{\otimes^n}|x\rangle= \dfrac{1}{\sqrt{2^n}} \sum_{z=0}^{2^n-1} (-1)^{x \cdot z}|z\rangle \end{equation} We start with the following Lemma. \begin{lemma} \label{lemma:application_S_port} Let $0\leq x <2^n$. Then \[ S^{\otimes^n} |x \rangle = i^{w(x)} |x_{n-1}x_{n-2} \dots x_0 \rangle \] \end{lemma} \begin{proof} By induction on $n$; the base case $n=1$ is straightforward. So suppose that the statement holds for $n-1$.
Then \begin{align*} S^{\otimes^n} |x \rangle &= S|x_{n-1}\rangle \otimes S|x_{n-2}\rangle \dots \otimes S|x_0\rangle = S^{\otimes^{n-1}} | x_{n-1}\dots x_1 \rangle \otimes S|x_0\rangle =\\ &= i^{\sum_{j=1}^{n-1} x_j} | x_{n-1} \dots x_1 \rangle \otimes S|x_0\rangle = \text{ (by the induction hypothesis) }\\ &= i^{\sum_{j=1}^{n-1} x_j} | x_{n-1} \dots x_1 \rangle \otimes i^{x_0} |x_0\rangle = \\ &=i^{w(x)} |x_{n-1}x_{n-2} \dots x_0 \rangle \end{align*} \end{proof} \noindent Now by Lemma \ref{lemma:application_S_port} and equation \eqref{eq:hadamard}, we have that \begin{align*} |\psi_1 \rangle= S^{\otimes^n}H^{\otimes^n}|0\rangle ^{\otimes^n}=S^{\otimes^n}\dfrac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} |x\rangle =\dfrac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} i^{w(x)} |x\rangle \end{align*} and applying the Hadamard gate to $|\psi_1 \rangle$, by \eqref{eq:hadamard}, we have that \begin{align*} |\psi_2 \rangle= H^{\otimes^n}|\psi_1\rangle =\dfrac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} i^{w(x)}\left[ \dfrac{1}{\sqrt{2^n}}\sum_{z=0}^{2^n-1}(-1)^{x \cdot z} |z\rangle\right ] \end{align*} and reordering the terms of the sum we have that \begin{align*} |\psi_2 \rangle= \dfrac{1}{2^n}\sum_{z=0}^{2^n-1} \sum_{x=0}^{2^n-1}(-1)^{x \cdot z} i^{w(x)} |z\rangle=\dfrac{1}{2^n}\sum_{z=0}^{2^n-1} \left( \sum_{x=0}^{2^n-1}i^{w(x)} (-1)^{x \cdot z} \right) |z\rangle \end{align*} \noindent So in order to compute the amplitudes of $|\psi_2 \rangle$ we need to compute the sum \[ \sum_{x=0}^{2^n-1} i^{w(x)} (-1)^{x \cdot z} \] for every $0 \leq z<2^n$. We will do this in the following two theorems. First of all we need the following Lemma. \begin{lemma} \label{lemma:preliminary_lemma} Let $0 \leq z< 2^{2m+1}$ and $0 \leq x< 2^{2m+1}$ and let $z_{2m}z_{2m-1}\dots z_0$ and $x_{2m}x_{2m-1}\dots x_0$ be the binary representations, respectively, of $z$ and $x$.
We have that \begin{align} \label{eq:prliminary} \sum_{x=2^{2m}}^{2^{2m+1}-1} &i^{w(x)}(-1)^{\sum_{j=0}^{2m} z_j \cdot x_j}= i(-1)^{z_{2m}} \sum_{x=0}^{2^{2m}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m-1} z_j \cdot x_j} \end{align} \end{lemma} \begin{proof} We note that, on the left-hand side of equation \eqref{eq:prliminary}, for every element of the sum we have $x_{2m}=1$. Therefore $\sum_{j=0}^{2m} z_j \cdot x_j= \sum_{j=0}^{2m-1} z_j \cdot x_j +z_{2m}$. Based on this we have that \begin{align*} \sum_{x=2^{2m}}^{2^{2m+1}-1} &i^{w(x)}(-1)^{\sum_{j=0}^{2m} z_j \cdot x_j}= (-1)^{z_{2m}} \sum_{x=2^{2m}}^{2^{2m+1}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m-1} z_j \cdot x_j} \end{align*} Furthermore, for the same reason, if $2^{2m}\leq x <2^{2m+1}$ and $\bar{x} = x - 2^{2m}$, so that $0\leq \bar{x} <2^{2m}$, then $w(x)=w(\bar{x})+1$, and this proves equation \eqref{eq:prliminary}. \end{proof} \begin{theorem} \label{lemma:first_lemma} Let $0 \leq z < 2^n$, $z \in \mathbb N$. If $n=2m$ is even we have that \begin{equation} \label{eq:first_lemma} \sum_{x=0}^{2^n-1} i^{w(x)}(-1)^{z \cdot x} = (-1)^{w(z)}i^{m+w(z)}2^m \end{equation} \end{theorem} \begin{proof} We prove equation \eqref{eq:first_lemma} by induction on $m$; the base case $m=1$ is easily verified for all $z \in \{0,1,2,3\}$. So suppose the statement holds for all $h \leq m$ and for all $0 \leq z < 2^{2m}$.
Then, for any $0\leq z <2^{2m+2}$ we have \begin{align} \sum_{x=0}^{2^{2m+2}-1} i^{w(x)}(-1)^{z \cdot x} =&\sum_{x=0}^{2^{2m}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m-1} z_j \cdot x_j} \label{first_term}\\ +&\sum_{x=2^{2m}}^{2^{2m+1}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m} z_j \cdot x_j}+\label{second_term}\\ +&\sum_{x=2^{2m+1}}^{2^{2m+2}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m+1} z_j \cdot x_j}\label{third_term} \end{align} Now, by equation \eqref{eq:prliminary} and by the induction hypothesis, we have that \eqref{second_term} is equal to \begin{align} \sum_{x=2^{2m}}^{2^{2m+1}-1} &i^{w(x)}(-1)^{\sum_{j=0}^{2m} z_j \cdot x_j}= i(-1)^{z_{2m}} \sum_{x=0}^{2^{2m}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m-1} z_j \cdot x_j}= \nonumber\\ &=i(-1)^{z_{2m}} (-1)^{w(z_{\text{-2}})}i^{m+w(z_{\text{-2}})}2^m \label{final_second_term} \end{align} Likewise, in the term \eqref{third_term}, for each $x$ in the sum the bit $x_{2m+1}$ is always set to 1, so we have that \eqref{third_term} is, by equation \eqref{eq:prliminary}, equal to \begin{align} \sum_{x=2^{2m+1}}^{2^{2m+2}-1}& i^{w(x)}(-1)^{\sum_{j=0}^{2m+1} z_j \cdot x_j}=i(-1)^{z_{2m+1}} \sum_{x=0}^{2^{2m+1}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m} z_j \cdot x_j} \label{final_third_term} \end{align} Now, by repeatedly applying equation \eqref{eq:prliminary} and the induction hypothesis, we have that the sum on the right-hand side of equation \eqref{final_third_term} is \begin{align} \sum_{x=0}^{2^{2m+1}-1} & i^{w(x)}(-1)^{\sum_{j=0}^{2m} z_j \cdot x_j}=\nonumber\\ =& \sum_{x=0}^{2^{2m}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m-1} z_j \cdot x_j}+\sum_{x=2^{2m}}^{2^{2m+1}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m} z_j \cdot x_j}=\nonumber\\ &= (-1)^{ w(z_{\text{-2}}) }i^{m+w( z_{\text{-2}} ) }2^m+i(-1)^{z_{2m}}\sum_{x=0}^{2^{2m}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m-1} z_j \cdot x_j}=\nonumber\\ &=(-1)^{w(z_{\text{-2}})}i^{m+w(z_{\text{-2}})}2^m \left[ 1 +i(-1)^{z_{2m}}\right] \label{eq:insidesum} \end{align} So if we replace \eqref{eq:insidesum} in \eqref{final_third_term} and sum together \eqref{first_term},
\eqref{final_second_term} and \eqref{final_third_term} we obtain \begin{align} a_z =&(-1)^{w(z_{\text{-2}})}i^{m+w(z_{\text{-2}})}2^m \left[ 1+i(-1)^{z_{2m}}+i(-1)^{z_{2m+1}}+i^2 (-1)^{z_{2m}+z_{2m+1}}\right]=\nonumber\\ =&(-1)^{w(z_{\text{-2}} )}i^{m+1+w(z_{\text{-2}})}2^m \left[-i+(-1)^{z_{2m}}+(-1)^{z_{2m+1}}+i (-1)^{z_{2m}+z_{2m+1}}\right] \label{final_formula} \end{align} Now if we denote $P=(-1)^{w(z_{\text{-2}})}i^{m+1+w(z_{\text{-2}})}2^m$ we have that \eqref{final_formula} is \begin{align*} a_z= \left \{ \begin{array}{rl} 2 P & \text{ if $z_{2m}=z_{2m+1}=0$ }\\ -2i P & \text{ if $z_{2m} \neq z_{2m+1}$ }\\ -2 P & \text{ if $z_{2m}=z_{2m+1}=1$ }\\ \end{array} \right . \end{align*} and it is now easy to verify that \[ a_z= (-1)^{w(z)} i^{m+1+w(z)}2^{m+1} \] for every $0\leq z <2^{2m+2}$, and this proves the induction step. \end{proof} \begin{theorem} \label{lemma:last_lemma} Let $n=2m+1$ be an odd natural number, $m \in \mathbb N$, and let $0 \leq z < 2^n$, $z \in \mathbb N$. Then \begin{equation} \label{eq:odd_case} \sum_{x=0}^{2^n-1} i^{w(x)}(-1)^{z \cdot x} = (-1)^{w(z)}i^{m+w(z)}2^m(1+i) \end{equation} \end{theorem} \begin{proof} First of all we note that equation \eqref{eq:odd_case} holds if $m=0$ and $z \in \{0,1\}$. So in the following we suppose that $m\geq 1$.
We have that \begin{align*} a_z &=\sum_{x=0}^{2^{2m+1}-1} i^{w(x)}(-1)^{z \cdot x} =\nonumber\\ =&\sum_{x=0}^{2^{2m}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m-1} z_j \cdot x_j} +\sum_{x=2^{2m}}^{2^{2m+1}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m} z_j \cdot x_j} \end{align*} and, by Theorem \ref{lemma:first_lemma} and by equation \eqref{eq:prliminary}, we have \begin{align} \label{eq:final_intermediary} a_z &=(-1)^{w(z_{\text{-1}})}i^{m+w(z_{\text{-1}})}2^m+i(-1)^{z_{2m}} \sum_{x=0}^{2^{2m}-1} i^{w(x)}(-1)^{\sum_{j=0}^{2m-1} z_j \cdot x_j}=\nonumber\\ =&(-1)^{w(z_{\text{-1}})}i^{m+w(z_{\text{-1}})}2^m+i(-1)^{z_{2m}}(-1)^{w(z_{\text{-1}})}i^{m+w(z_{\text{-1}})}2^m=\nonumber\\ =& (-1)^{w(z_{\text{-1}})}i^{m+w(z_{\text{-1}})}2^m \left [ 1+i(-1)^{z_{2m}} \right] \end{align} Let $z_{2m}z_{2m-1} \dots z_0$ be the binary representation of $z$. Suppose first that $z_{2m}=0$. Then equation \eqref{eq:final_intermediary} becomes \begin{equation} (-1)^{w(z)}i^{m+w(z)}2^m+ (-1)^{w(z)}i^{m+w(z)+1}2^m \end{equation} and the Theorem is therefore proved. So suppose that $z_{2m}=1$. Then equation \eqref{eq:final_intermediary} becomes \begin{equation} (-1)^{w(z)-1}i^{m+w(z)-1}2^m+ (-1)^{w(z)}i^{m+w(z)}2^m \end{equation} but observing that \begin{equation} (-1)^{w(z)}i^{m+w(z)+1}=(-1)^{w(z)-1}i^{m+w(z)-1} \end{equation} we have that also in this case the Theorem is satisfied. \end{proof} As an example, we have computed the amplitudes $a_z$ (disregarding the normalization factor) for $n \in \{3,4\}$ and we report them in Table \ref{tbl:amplitude_example}.
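The closed forms in the two theorems above are easy to check against a direct evaluation of the sum; the following is a minimal brute-force sketch (the function names are ours, and $w$ is the Hamming weight as in the text).

```python
def w(x):
    # Hamming weight of x, as in the text
    return bin(x).count("1")

def amp(n, z):
    # direct evaluation of sum_{x=0}^{2^n-1} i^{w(x)} (-1)^{x.z}
    # note (-1)^{x.z} = (-1)^{w(x AND z)}
    return sum(1j**w(x) * (-1)**w(x & z) for x in range(2**n))

def closed_form(n, z):
    # Theorem statements for even n = 2m and odd n = 2m + 1
    m = n // 2
    even = (-1)**w(z) * 1j**(m + w(z)) * 2**m
    return even if n % 2 == 0 else even * (1 + 1j)

for n in (3, 4):
    for z in range(2**n):
        assert abs(amp(n, z) - closed_form(n, z)) < 1e-9
print(amp(3, 0), amp(4, 0))   # -2+2i and -4, matching the table entries
```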
\begin{table} \begin{center} \begin{tabular}{|c|c|} \hline $\mathbf{|z\rangle }$ & $\mathbf{a_z}$ \\ \hline $|000\rangle$ & $-2+2i$\\ \hline $|001\rangle$ & $2+2i$\\ \hline $|010\rangle$ & $2+2i$\\ \hline $|011\rangle$ & $2-2i$\\ \hline $|100\rangle$ & $2+2i$\\ \hline $|101\rangle$ & $2-2i$\\ \hline $|110\rangle$ & $2-2i$\\ \hline $|111\rangle$ & $-2-2i$\\ \hline \end{tabular} \begin{tabular}{|c|c|} \hline $\mathbf{|z\rangle }$ & $\mathbf{a_z}$ \\ \hline $|0000\rangle$ & $-4$\\ \hline $|0001\rangle$ & $4i$\\ \hline $|0010\rangle$ & $4i$\\ \hline $|0011\rangle$ & $4$\\ \hline $|0100\rangle$ & $4i$\\ \hline $|0101\rangle$ & $4$\\ \hline $|0110\rangle$ & $4$\\ \hline $|0111\rangle$ & $-4i$\\ \hline $|1000\rangle$ & $4i$\\ \hline $|1001\rangle$ & $4$\\ \hline $|1010\rangle$ & $4$\\ \hline $|1011\rangle$ & $-4i$\\ \hline $|1100\rangle$ & $4$\\ \hline $|1101\rangle$ & $-4i$\\ \hline $|1110\rangle$ & $-4i$\\ \hline $|1111\rangle$ & $-4$\\ \hline \end{tabular} \end{center} \caption{Left: the amplitudes $a_z$ for $n=3$. Right: the amplitudes $a_z$ for $n=4$. In order to get the final amplitudes one should multiply them by a suitable normalization factor. \label{tbl:amplitude_example}} \end{table} \section{Doubling the amplitude of the search problem's solution states} In this section we consider a quantum circuit for doubling the amplitude of the solution states of the search problem. By a search problem we refer, in general, to the problem of finding a solution of some NP-complete problem. As in the Grover algorithm, we will use the intrinsic quantum mechanical parallelism and an oracle $f(|x \rangle)\in \{0,1\}$, specifically designed for the problem at hand, which returns $1$ if $x$ is a solution of the problem and $0$ otherwise. In particular, in order to present the results of this paper in detail, we will use a quantum oracle for the Partition Problem (PP). In the PP we have a finite set of elements $E$ and a function $s: E \rightarrow \mathbb N^+$.
We want to find a subset $E' \subset E$ such that $\sum_{e \in E'} s(e) = \sum_{e \in E \setminus E'} s(e)$. From now on, without loss of generality, we take the set $E$ to be the set of the first $|E|$ naturals, that is, $E = \{0,1,\dots , n-1\}$. Furthermore, we note that if the PP has a solution $E'$ then $E \setminus E'$ is also a solution. The PP is well known to be an NP-complete problem \cite{Kar72}.\\ We describe, in the following, an application of the gates described in the previous section in a quantum circuit dealing with the PP (see Figure \ref{fig:circuit}). While the following results are stated for the PP, they can be adapted to any other search problem. Denote by $\mathcal S$ the quantity $\frac{1}{2}\sum_{e \in E} s(e)$. Recall that the PP has a solution only if $\mathcal S$ is an integer. We use the two's complement representation of $-\mathcal S $, requiring $m=\lceil \log_2{\mathcal S}\rceil +1$ qubits. Then, for each $e \in E$, we use $k_e=\lceil \log_2{s(e)}\rceil+1$ qubits to encode $s(e)$. These qubits will remain constant in every phase of the circuit and therefore we will not consider them in the reasoning that follows. We use $n$ qubits to encode a subset $E'$ of $E$. If $|x_{n-1}x_{n-2}\dots x_0\rangle$ is the state of those $n$ qubits, then an element $e$, $0 \leq e < n$, is included in the set $E'$ if and only if $x_e=1$. We will use $m$ qubits, denoted in the following by $|\sigma\rangle$, to store the sum $\sigma = - \mathcal S +\sum_{e \in E'}s(e)$ for the elements selected in $|x\rangle$. In this way $|\sigma\rangle= |0\rangle^{\otimes^m}$ for a solution $|x\rangle$ of the PP. We also use a control qubit $|c\rangle$. So we have four groups of qubits: $|x\rangle$, $|\sigma\rangle$, $|c\rangle$ and the sets of qubits used to represent the constants $s(e)$ for each element of $E$.
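The bit-string encoding of $E'$ just described can be mirrored classically; the following sketch uses a hypothetical instance $s = [1,2,3]$ (not an instance from the text) and also illustrates that solutions come in complementary pairs.

```python
# Hypothetical PP instance with s(0)=1, s(1)=2, s(2)=3; not taken from the text.
s = [1, 2, 3]
n, total = len(s), sum(s)
S = total // 2                      # the target sum; here S = 3 is an integer

# x encodes E' exactly as the qubit register |x> does: e in E' iff bit x_e = 1
def subset_sum(x):
    return sum(s[e] for e in range(n) if (x >> e) & 1)

solutions = [x for x in range(2**n) if subset_sum(x) == S]
print(solutions)                    # prints [3, 4]: |011> and its complement |100>

# Solutions come in complementary pairs, as noted above.
mask = 2**n - 1
assert all((x ^ mask) in solutions for x in solutions)
```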
Note that the number of qubits of the circuit, $n+m+1+\sum_{e \in E} k_e$, is polynomial in the size of a concise specification of the PP. At the beginning of the circuit we have the following superposition: \begin{align*} |\varphi_0 \rangle = |0 \rangle^{\otimes^n} |\sigma\rangle|c\rangle \end{align*} where $\sigma$ is set to the two's complement representation of $-\mathcal S $ and $|c\rangle$ is set to $|1\rangle$. Then, we apply the Hadamard gate to the first $n$ qubits, obtaining \begin{align*} |\varphi_1 \rangle &= \left (H^{\otimes^n}\otimes I^{m+1} \right)|\varphi_0 \rangle = \dfrac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} |x\rangle |\sigma\rangle|c\rangle \end{align*} Next, we use each qubit $x_e$ to conditionally add the element $s(e)$ to $|\sigma \rangle$. If there exists a solution to the PP then, in the final superposition of $|\sigma \rangle$, the amplitude of the state $|x\rangle|0 \rangle^{\otimes^m} |c\rangle$ will not be $0$. The states $|x\rangle$ for which $|\sigma \rangle$ is zero will be referred to as the \emph{solution states} of the PP. The control qubit $|c\rangle$ will be set to zero exactly for those states for which $|\sigma \rangle=|0\rangle^{\otimes^m}$. At this point we apply an uncomputation step in order to set $|\sigma\rangle=|-\mathcal S\rangle $. Now, if we apply the $S$ gate to the first $n$ qubits we obtain, by Lemma \ref{lemma:application_S_port}, \begin{align*} |\varphi_2 \rangle &=\left ( S^{\otimes^n}\otimes I^{m+1} \right)|\varphi_1 \rangle = \dfrac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n-1} i^{w(x)}|x\rangle |\sigma\rangle|c\rangle \end{align*} After this, we apply the Hadamard gate to the first $n$ qubits again. This operation is controlled by the control qubit in such a way that the Hadamard gate is applied only to non-solution states. For the sake of simplicity we suppose, in the following, that the PP has only two solutions, whose numeric representations are $y$ and its bitwise complement $\overline y$.
By equation \eqref{eq:hadamard}, we obtain \begin{align} |\varphi_2 \rangle &\xrightarrow{\text{contr } H^{\otimes^n} }|\varphi_3 \rangle = \dfrac{1}{\sqrt{2^n}} \sum_{z \in \{y, \overline y\}} i^{w(z)} |z\rangle |\sigma\rangle|0\rangle+ \nonumber \\ +& \dfrac{1}{2^n}\sum_{z=0}^{2^n-1} \sum_{x \notin \{y, \overline y\}} i^{w(x)}(-1)^{x\cdot z}|z\rangle |\sigma\rangle|1\rangle=\nonumber \\ =&\dfrac{1}{2^n}\left[ \sqrt{2^n}\sum_{z \in \{y, \overline y\}} i^{w(z)} |z\rangle|\sigma\rangle|0\rangle + \sum_{z=0}^{2^n-1} \sum_{x \notin \{y, \overline y\}} i^{w(x)}(-1)^{x\cdot z}|z\rangle|\sigma\rangle|1\rangle \right] \label{eq:final_amplitude_solution} \end{align} Now we want to quantify the amplitudes of the states $|y\rangle|\sigma\rangle|1\rangle$ and $|\overline y\rangle|\sigma\rangle|1\rangle$ in equation \eqref{eq:final_amplitude_solution}. We consider only the state $|y\rangle|\sigma\rangle|1\rangle$, since the same arguments apply to the state $|\overline y\rangle|\sigma\rangle|1\rangle$.
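This residual amplitude can also be checked by brute force; a minimal sketch for the instance $n=3$, $y=|011\rangle$ used as a worked example later in the text (the function and variable names are ours):

```python
def w(x):
    # Hamming weight, as in the text
    return bin(x).count("1")

n, y = 3, 0b011
y_bar = y ^ (2**n - 1)               # bitwise complement of y

# amplitude on |y> coming from the non-solution states:
# sum over x not in {y, y_bar} of i^{w(x)} (-1)^{x.y}
b_y = sum(1j**w(x) * (-1)**w(x & y)
          for x in range(2**n) if x not in (y, y_bar))
print(b_y)                           # 3 - 3i for this instance
```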
The amplitude $b_y$ for the state $|y\rangle|\sigma\rangle|1\rangle$ (in the following we disregard the normalization factor $1/2^n$) is given by the following formula \begin{align} b_y =\sum_{x \notin \{y, \overline y\}} i^{w(x)}(-1)^{x\cdot y} \label{eq:final_amplitude_solution_for_a_signle_solution} \end{align} We may write the above sum as \begin{align} b_y= \sum_{x \notin \{y, \overline y\}} i^{w(x)}(-1)^{x\cdot y}=\sum_{x=0}^{2^n-1} i^{w(x)}(-1)^{x\cdot y}- \sum_{x\in{\{y, \overline{y}\}}} i^{w(x)}(-1)^{x\cdot y} \end{align} We have that \begin{align} \sum_{x\in{\{y, \overline{y}\}}} i^{w(x)}(-1)^{x\cdot y}&=i^{w(y)}(-1)^{y\cdot y} + i^{n-w(y)}(-1)^{\overline{y} \cdot y}=\nonumber\\ =&i^{w(y)}(-1)^{{w(y)}} + i^{n-w(y)}\label{eq:generic_z_term_first} \end{align} Then, recalling that $i^{-x}=i^{x}$ when $x$ is even and $i^{-x}=-i^{x}$ when $x$ is odd, we have two cases: if $w(y)$ is even then \begin{align} i^{w(y)}(-1)^{{w(y)}} &+ i^{n-w(y)}=i^{w(y)}(-1)^{{w(y)}} + i^{n+w(y)}=\nonumber\\ &=i^{w(y)}(1+ i^n) \end{align} while if $w(y)$ is odd \begin{align} i^{w(y)}(-1)^{{w(y)}} &+ i^{n-w(y)}=i^{w(y)}(-1)^{{w(y)}} - i^{n+w(y)}=\nonumber\\ &=-i^{w(y)}(1+ i^n) \end{align} For simplicity of notation, in the following we denote $w(y)$ simply by $\overline{w}$. We have that if $n=2m$ is even then, by Theorem \ref{lemma:first_lemma}, $b_y$ is \begin{align} \label{eq:duplicate_n even} b_y= \left\{ \begin{array}{ll} (-1)^{\overline{w}} i^{m+{\overline{w}}} 2^m- i^{\overline{w}}(1+i^{2m})& \text{ if $\overline{w}$ is even}\\ & \\ (-1)^{\overline{w}} i^{m+{\overline{w}}} 2^m + i^{\overline{w}}(1+i^{2m}) & \text{ if $\overline{w}$ is odd}\\ \end{array} \right . 
\end{align} while if $n=2m+1$ is odd, by Theorem \ref{lemma:last_lemma}, $b_y$ is \begin{align}\label{eq:duplicate_n odd} b_y= \left\{ \begin{array}{ll} (-1)^{\overline{w}}i^{m+\overline{w}}2^m(1+i)- i^{\overline{w}}(1+i^{2m+1})& \text{ if $\overline{w}$ is even}\\ & \\ (-1)^{\overline{w}}i^{m+\overline{w}}2^m(1+i)+ i^{\overline{w}}(1+i^{2m+1}) & \text{ if $\overline{w}$ is odd}\\ \end{array} \right . \end{align} It is immediate to check that in equations \eqref{eq:duplicate_n even} and \eqref{eq:duplicate_n odd} the term $i^{\overline{w}}(1+i^{n})$ becomes negligible, with respect to the other term in the equation, as $m$ grows. We conclude that the amplitude of the state $|y\rangle|\sigma\rangle|1\rangle$ is almost the same as the amplitude of the state $|y\rangle|\sigma\rangle|0\rangle$, thus effectively doubling the probability of obtaining the state $|y\rangle$ at the end of the circuit. For example, if $n=2m+1=3$ and $|y\rangle =|011\rangle$, we have that $b_y=3-3i$, so that the probability of getting $|y\rangle$ is, by \eqref{eq:final_amplitude_solution}, $\dfrac{1}{64}\left [|2\sqrt{2}|^2+|3-3i|^2 \right ]=\dfrac{26}{64}=.40625$, which is exactly the output of the Quirk simulator. \begin{figure}[htbp] \centering \includegraphics[width=7.6cm]{circuit} \caption{The circuit exploiting $S$ gates.} \label{fig:circuit} \end{figure} \section{Conclusion and future work} We presented here a quantum algorithm for doubling the amplitude of the states corresponding to the solutions of the partition problem. This is interesting because, if we were able to iterate such a doubling, we could obtain a polynomial-time quantum algorithm for solving an NP-complete problem.
Possible future work should focus on: generalizing the mathematical results to instances of search problems with more than 2 solutions; finding out whether this algorithm can be combined with the Grover algorithm in order to speed up the latter by a factor of $p$, with $p\geq 2$; and, more importantly, checking if and how it is possible to iterate the doubling of the amplitude in order to obtain a polynomial-time algorithm for solving the search problem. \clearpage \bibliographystyle{abbrv}
\section{Introduction} \label{S1} One of the most remarkable twists that the braneworld scenario has introduced in our view of physics is that the fundamental scale of gravity could be significantly closer to scales currently accessible by experiments than previously thought. In the braneworld paradigm, the standard model of physics is localized to a four dimensional brane while gravity (and possibly other fields) propagate in the entire space, the bulk. In the 4-D perspective, this results in the rescaling of many couplings and mass scales present in the theory, thus providing an alternative approach to the hierarchy problem \cite{Ant, Ant2, Ant3, Ant4, TeV1, TeV2}. Naturally, an important problem in the study of this type of theories is understanding the possible ways in which the standard model can be localized to a brane \cite{Rev1, Rev2}; different mechanisms to localize matter and gauge fields to a brane may have distinctive features with relevant implications for braneworld phenomenology. In addition, several aspects of the standard model's rich structure could be understood in terms of how physics is arranged in the bulk. A simple mechanism for the confinement of higher dimensional fermions to a domain wall was proposed long ago by Rubakov and Shaposhnikov \cite{fermions1} and is based purely on field theoretical considerations. In their proposal, the wave functions of fermion zero modes concentrate near the existing domain walls, generating 4-D massless chiral fermions attached to them. This mechanism has given rise to interesting braneworld scenarios with clear consequences for physics beyond the standard model. One is the split fermion scenario, proposed by Arkani-Hamed and Schmaltz \cite{fermions2}. 
Here, bulk fermions are split into different positions around the brane, offering a simple solution to the hierarchy problem and the proton decay problem: the separation between chiral fermions along the extra dimension generates exponentially suppressed couplings between them (for example, Yukawa couplings) \cite{fermions3, fermions4}. In the case of gauge fields, a mechanism for their localization (closely related to the confinement of fermions) is also available. This is the case of the quasilocalization of gauge fields, proposed by Dvali, Gabadadze and Shifman \cite{gauge} (see \cite{alt1, alt2} for alternatives). Here, the interaction between bulk gauge fields and the ``already'' localized fermions induces gauge kinetic terms on the brane. The result is a 4-D effective theory consisting of gauge fields mediating interactions between the localized fermions. An interesting feature of this type of mechanism is the appearance of a crossover scale $r_{c}$: at distances below this scale the propagation of gauge fields along the brane is manifestly four-dimensional, whereas above this scale the propagation becomes five-dimensional. In this paper we put together both types of confining mechanisms ---for fermions and gauge fields--- to reproduce the basic structure of the electroweak sector of the standard model. We show that the gauge symmetry exhibited by bulk fermions can be broken down through their confinement to a domain wall, giving rise to non-trivial subgroup representations. More precisely, by starting with a five-dimensional $SU(3)$ gauge theory in the bulk, we obtain an $SU(2) \times U(1)$ chiral theory on the brane, with all the basic requirements of the electroweak model. The key ingredient of the present proposal is that the positions at which 5-D fermions end up localized depend on their $SU(3)$ charges. 
This makes it possible, for example, to break the $\mathbf{10}$ and $\bar{\mathbf{6}}$ representations of $SU(3)$ down to the lepton and quark representations of $SU(2) \times U(1)$, respectively, and confine them to a single brane. In this construction it is possible to identify the Higgs field with the fifth component of the localized bulk gauge field. Additionally, new fields inevitably appear in the resulting 4-D effective theory. These are: a left-handed neutrino with zero hypercharge, and a massive vector field coupling the new neutrino to the other left-handed leptons. This article is organized as follows: In Sec. \ref{S2} we introduce the split fermion scenario and explain how the localization of $SU(3)$ fermions at different positions in the bulk is produced. Then, in Sec. \ref{S3}, we analyze the confinement of gauge fields. There we argue that the gauge symmetry of the localized fermions is transferred to the gauge fields near the brane. Finally, in Sec. \ref{S4}, we show that the electroweak model can be constructed by putting these two mechanisms together. There, the hierarchy problem is also addressed. \section{Confinement of fermions} \label{S2} In this section we describe the localization of bulk fermions to a domain wall. We start with the split fermion scenario and then move to a more complex setup where the localization of fermions depends on their charges. \subsection{Split fermions} Consider a 5-D system consisting of a spin-1/2 fermion $\Psi$ and a real scalar field $\Phi$. To describe the 5-D space-time we use coordinates $x^{A}$ with $A = 1, \ldots , 5$. The Lagrangian for the system is \begin{eqnarray} \mathcal{L}^{(5)} = - \bar \Psi \left[\gamma^{A} \partial_{A} + m + y \, \Phi \right] \Psi - \frac{1}{2} (\partial_{A} \Phi)^{2} - V(\Phi). \nonumber\\ \label{eq2: L-split} \end{eqnarray} Here $m$ is the mass of the bulk fermion $\Psi$ and $y$ is a Yukawa coupling.
Additionally, $\gamma^{A}$ are the 5-D gamma-matrices in a basis where \begin{eqnarray} \gamma^{5} = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right), \end{eqnarray} which is the usual four-dimensional $\gamma^{5}$ matrix. For the time being we disregard the presence of gauge fields. Let us consider the following potential for the scalar $\Phi$: \begin{eqnarray} V(\Phi) = \frac{\sigma}{4} \left[ \Phi^{2} - v^{2} \right]^{2}. \end{eqnarray} To discuss solutions to this system we use $z = x^{5}$ to distinguish the extra-dimension and coordinates $x^{\mu}$ with $\mu = 1, \cdots , 4$ to parameterize the usual 4-D space-time. Then, the scalar field $\Phi$ is found to have a kink solution of the form: \begin{eqnarray} \Phi(z) = v \tanh \left( k z \right), \end{eqnarray} where $k = v \sqrt{\sigma / 2} \,$. The corresponding domain wall, centered at $z=0$, is coupled to the fermion field through the $y$-term. The equation of motion for $\Psi$ reads: \begin{eqnarray} \left[ \gamma^{\mu} \partial_{\mu} + \gamma^{5} \partial_{z} + m + y \, \Phi (z) \right] \Psi = 0. \label{eq: Fermion-Wall} \end{eqnarray} Notice that the translational invariance along $z$ is broken. Thus, in order to solve Eq. (\ref{eq: Fermion-Wall}) we define left and right handed helicities $\Psi_{L}$ and $\Psi_{R}$, by $\gamma^{5} \Psi_{L} = +\Psi_{L}$ and $\gamma^{5} \Psi_{R} = -\Psi_{R}$, and expand them as: \begin{eqnarray} \Psi_{L,R} = \sum_{n} \Psi^{n}_{L,R} = \sum_{n} a^{L,R}_{n}(z) \, \psi_{L,R}^{n}(x), \label{eq: expansion} \end{eqnarray} where $a_{n}^{L,R}(z)$ are Kaluza-Klein coefficients, $\psi^{n}_{L,R}(x)$ are 4-D left and right-handed spinor fields, and $n$ labels the expansion mode. Inserting the expansion (\ref{eq: expansion}) back into Eq. 
(\ref{eq: Fermion-Wall}) we find the following equations for the coefficients $a_{0} (z)$ and $a_{n} (z)$ with $n > 0$: \begin{eqnarray} \left[\pm \partial_{z} + m + y \, \Phi \right] a_{0}^{L,R} = 0, \label{eq: zero} \\ \left[ -\partial_{z}^{2} + ( m + y \, \Phi)^{2} \mp y \, (\partial_{z} \Phi) \right] a_{n}^{L,R} = \mu_{n}^{2} a_{n}^{L,R}, \label{eq: non-zero} \end{eqnarray} where $\pm$ stands for the left and right-handed helicities. At this stage, it is convenient to define the following ``confinement'' length scale: \begin{eqnarray} \Delta = \frac{1}{\sqrt{|y v k|}}. \end{eqnarray} Then, in general, solutions to Eq. (\ref{eq: non-zero}) provide modes with masses $\mu_{n}^{2}$ of order $\Delta^{-2}$. From now on we assume that $\Delta$ is sufficiently small so that nonzero modes can be integrated out without affecting the theory at low energies. Solving Eq. (\ref{eq: zero}) the zero modes are found to be \begin{eqnarray} \Psi_{L,R} = A \exp \Big\{ \mp \! \int^{z}_{0} \!\!\! \left[ m + y\, \Phi (z) \right] \, dz \Big\} \, \psi_{L,R}(x), \label{eq: solution} \end{eqnarray} where the factor $A$ is a normalization constant introduced in such a way that \begin{eqnarray} \int d z \, |\Psi|^{2} = |\psi(x)|^{2}. \end{eqnarray} Notice that only one of these two solutions is normalizable: if $y>0$ ($y<0$) then the left (right) handed fermion is normalizable. Additionally, observe that if $m=0$ then the fermion wave function is centered at $z = 0$, otherwise its localization is shifted with respect to the brane. To appreciate this, let us analyze the linear behavior $\Phi \simeq v k z$ near $z = 0$ for the case $y>0$. Then, if we assume that $m^{-1} \gg k \Delta^{2}$ (so the linear expansion $\Phi \simeq v k z$ makes sense), we obtain \begin{eqnarray} \Psi_{L} \sim \frac{1}{\sqrt{\Delta}} \exp \left[ - \frac{1}{2} \Delta^{-2} (z - z_{0})^{2} \right] \, \psi_{L}(x), \end{eqnarray} where $z_{0} = - m \Delta^{2}$. 
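As a sanity check of the Gaussian approximation above, the exact zero-mode profile can be evaluated numerically; all parameter values below are hypothetical illustrations chosen so that $m^{-1} \gg k \Delta^{2}$, not values fixed by the text.

```python
import numpy as np

# Hypothetical parameters in units with k = 1; chosen so 1/m >> k * Delta^2.
v, k, y, m = 1.0, 1.0, 5.0, 0.5
Delta = 1.0 / np.sqrt(abs(y * v * k))

# Zero-mode profile a_L(z) ~ exp(-int_0^z [m + y v tanh(k z')] dz')
#                          = exp(-m z - (y v / k) * log cosh(k z))
z = np.linspace(-3.0, 3.0, 4001)
profile = np.exp(-m * z - (y * v / k) * np.log(np.cosh(k * z)))

z_peak = z[np.argmax(profile)]
print(z_peak, -m * Delta**2)   # the peak sits close to z0 = -m * Delta^2
```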
Thus, the fermion wave function has a width $\Delta$ and is centered at $z_{0}$. Figure \ref{F1} sketches the confinement of the bulk fermion near the domain wall. \begin{figure}[ht] \begin{center} \includegraphics[width=0.34\textwidth]{fig1.eps} \caption[Confinement]{The figure sketches the confinement of the bulk fermion near the domain wall located at $z = 0$. The fermion wave function is centered at position $z_{0} = - m \Delta^{2}$.} \label{F1} \end{center} \end{figure} We can now compute the 4-D effective Lagrangian for $\psi_{L}(x)$ by integrating out the extra-dimension: \begin{eqnarray} \mathcal{L}^{(4)} = - \bar \psi_{L} ( \gamma^{\mu} \partial_{\mu} ) \psi_{L}. \label{eq2: L-split-eff} \end{eqnarray} Notice that in the limit $\Delta \rightarrow 0$ ($z_{0} \rightarrow 0$), we obtain a thin brane of the form: \begin{eqnarray} \mathcal{L}^{(5)} = \delta (z) \mathcal{L}^{(4)}. \end{eqnarray} There is an interesting consequence related to the shift of the fermions' positions with respect to the domain wall: Consider a scenario in which two bulk fermions $\Psi^{1}$ and $\Psi^{2}$, with masses $m_{1}$ and $m_{2}$, are coupled to a wall in such a way that $y_{1} = y >0$ and $y_{2} = -y < 0$. If the original 5-D Lagrangian contains a term such as \begin{eqnarray} H \bar \Psi^{1} \Psi^{2} + \mathrm{h.c.}, \end{eqnarray} where $H$ is a given bulk field (a scalar, for example), then the 4-D effective Lagrangian will contain a Yukawa term of the form: \begin{eqnarray} \sim ( H \, \bar \psi^{1}_{L} \, \psi^{2}_{R} + \mathrm{h.c.} ) \, e^{ - r^{2} / 4 \Delta^{2} }, \label{eq: Yukawa-supp} \end{eqnarray} where $r = r_{1} - r_{2}$ is the separation between the two fermion wave functions, with $r_{1} = - m_{1} \Delta^{2}$ and $r_{2} = + m_{2} \Delta^{2}$. Physically, this means an exponential suppression of the 4-D Yukawa coupling for the pair ($\psi^{1}_{L}$, $\psi^{2}_{R}$), offering an interesting solution to the hierarchy problem.
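To get a feel for the suppression factor $e^{-r^{2}/4\Delta^{2}}$, one can tabulate it for a few hypothetical mass choices (in units where $\Delta = 1$; none of these numbers come from the text).

```python
import math

# Hypothetical bulk masses in units where Delta = 1; not values from the text.
Delta = 1.0
for m1, m2 in [(1.0, 1.0), (3.0, 3.0), (6.0, 6.0)]:
    r = -(m1 + m2) * Delta**2        # r = r1 - r2 with r1 = -m1 D^2, r2 = +m2 D^2
    suppression = math.exp(-r**2 / (4 * Delta**2))
    print(m1, m2, suppression)       # e^-1, e^-9, e^-36: many orders of magnitude
```

The point of the sketch is that order-one splittings of the bulk masses already generate exponentially large hierarchies among the effective 4-D Yukawa couplings.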
\subsection{Confining $SU(3)$ fermions} We now proceed to analyze the localization of fermions produced by ``charged'' domain walls. Assume that space-time is described by a 5-D manifold $M$ with topology \begin{eqnarray} M = \mathbb{R}^{4} \times S^{1}, \label{eq: topology} \end{eqnarray} where $S^{1}$ is the one-dimensional circle and $\mathbb{R}^{4}$ is the 4-D Lorentzian space. In this case, the coordinate $z = x^{5} \in [0,L]$ is the spatial coordinate parameterizing $S^{1}$, with $L$ the size of the compact extra-dimension. Let us consider the existence of 5-D bulk fermions transforming nontrivially under the $SU(3)$ gauge symmetry. They are described by the following Lagrangian: \begin{eqnarray} \mathcal{L}_{\Psi}^{(5)} = - \bar \Psi [\gamma^{A} D_{A} + Y(\Phi) ] \Psi. \label{eq2: Lagrangian1} \end{eqnarray} The covariant derivative is $D_{A} \Psi = (\partial_{A} - i E^{\alpha}_{A} T_{\alpha}) \Psi$, where $E^{\alpha}_{A}$ are $SU(3)$ bulk gauge fields. Here $\alpha = 1, \ldots , 8$ and $T_{\alpha}$ are the $SU(3)$ generators acting on $\Psi$. Observe that we are considering a coupling term $Y(\Phi)$ where $\Phi = \Phi^{\alpha} T_{\alpha}$ is a scalar field that transforms in the adjoint representation of $SU(3)$. In order to construct $SU(3)$ representations we proceed conventionally: We choose $T_{3}$ and $T_{8}$ as the Cartan generators and construct states as their simultaneous eigenstates, with charges \begin{eqnarray} Q = (T_{3}, T_{8}). \end{eqnarray} Assume that $\Phi$ is governed by the following $SU(3)$ gauge invariant potential: \begin{eqnarray} V(\Phi) = \frac{\sigma}{4} \left[ \Phi^{\alpha} \Phi_{\alpha} - v^{2} \right]^{2}. \end{eqnarray} Nonzero vacuum expectation values $\langle \Phi \rangle$ are expected and, in general, they correspond to linear combinations of $\langle \Phi^{3} \rangle$ and $\langle \Phi^{8} \rangle$. Furthermore, since we are assuming the compact topology (\ref{eq: topology}), the system admits nontrivial topological solutions.
Take for instance the case of a single winding-number solution \begin{eqnarray} \langle \Phi (z) \rangle = \Phi_{0} \left[ \cos (k z) T_{3} + \sin(k z) T_{8} \right], \label{eq: Phi(z)} \end{eqnarray} where $k = 2 \pi / L$ and $\Phi_{0}^{2} = v^{2} - k^{2}/\sigma$. Notice that we have chosen $\langle \Phi^{8} \rangle = 0$ at $z = 0$. We can now proceed in the same way as before: we expand $\Psi$ in modes (\ref{eq: expansion}) and find zero-mode solutions of the form \begin{eqnarray} \Psi_{L,R} = A \exp \left\{ \mp \int^{z}_{0} \!\!\! Y (z) \, dz \right\} \, \psi_{L,R}(x), \label{eq: solution2} \end{eqnarray} where $Y(z) \equiv Y[ \langle \Phi(z) \rangle ]$. To discuss the consequences of solution (\ref{eq: Phi(z)}) with some transparency, let us consider the following simple example: take a Yukawa coupling of the form \begin{eqnarray} Y(\Phi) = y \Phi = y \Phi^{\alpha} T_{\alpha}, \label{eq: Yukawa} \end{eqnarray} and consider matter fields $\Psi$ belonging to the $\mathbf{3}$ [the fundamental representation of $SU(3)$]. In this case the confinement scale must be defined as \begin{eqnarray} \Delta = \frac{1}{\sqrt{|y \Phi_{0} k|}}. \end{eqnarray} Thus, once again, the masses $\mu_{n}^{2}$ of the nonzero-mode solutions [Eq. (\ref{eq: non-zero})] are found to be of order $\Delta^{-2}$. To work out the consequences of the Yukawa coupling (\ref{eq: Yukawa}) on the $\mathbf{3}$ we choose $\Psi^{i}$ (with $i = 1,2,3$) to have the following $SU(3)$-charges (see Fig. \ref{F2}): \begin{eqnarray} Q (\Psi^{1}) &=& (-1/2,+\sqrt{3}/6), \\ Q(\Psi^{2}) &=& (+1/2,+\sqrt{3}/6), \\ Q(\Psi^{3}) &=& (0,- \sqrt{3}/3).
\end{eqnarray} \begin{figure}[ht] \begin{center} \includegraphics[width=0.3\textwidth]{fig2.eps} \caption[Three]{The figure shows the $SU(3)$-charges, $T_{3}$ and $T_{8}$, of fermions $\Psi^{i}$ (with $i = 1,2,3$) in the fundamental representation $\mathbf{3}$.} \label{F2} \end{center} \end{figure} In this way, replacing (\ref{eq: Yukawa}) into (\ref{eq: solution2}), it is possible to see that the positions at which the fermion wave functions end up centered depend on their $SU(3)$-charges and their chirality. Observe, for instance, that in the present realization left- and right-handed fermions are localized at diametrically opposite positions in the $S^{1}$ circle. Also, it can be seen that if $|y \Phi_{0}| \gg k$, then the widths of the fermion wave functions become of order $\Delta$ and the overlap between fermions located at different positions becomes very small. The following table provides the position of each state for the case $y \Phi_{0} > 0$: \begin{center} \begin{tabular}{l c l | l c l} Fermion & \hspace{1em} & Position ($z$) \hspace{1em} & \hspace{1em} Fermion & \hspace{1em} & Position ($z$) \\ \hline \hspace{1em} $\Psi^{3}_{R}$ & & \hspace{1em} 0 & \hspace{2em} $\Psi^{3}_{L}$ & & \hspace{1em} L/2 \\ \hspace{1em} $\Psi^{1}_{R}$ & & \hspace{1em} 2L/3 & \hspace{2em} $\Psi^{1}_{L}$ & & \hspace{1em} L/6 \\ \hspace{1em} $\Psi^{2}_{R}$ & & \hspace{1em} 5L/6 & \hspace{2em} $\Psi^{2}_{L}$ & & \hspace{1em} L/3 \\ \end{tabular} \end{center} Notice that the fundamental representation has been split up among several branes. Figure \ref{F3} shows the way in which $\Psi^{3}$ of the fundamental representation is split. \begin{figure}[ht] \begin{center} \includegraphics[width=0.4\textwidth]{fig3.eps} \caption[Three conf]{The figure shows the way in which $\Psi^{3}$ is confined.
The same representations but with different chiralities end up on branes located at diametrically opposite positions in the $S^{1}$ circle.} \label{F3} \end{center} \end{figure} We can now compute the 4-D effective theory for the matter fields localized at any desired brane of our example. Let us compute, for instance, the effective Lagrangian $\mathcal{L}_{\mathrm{eff}}$ at the first brane ($z=0$), taking into account the presence of the gauge field $E^{\alpha}_{A}$. In the limit $\Delta \rightarrow 0$ (with $L$ fixed), we obtain: \begin{eqnarray} \mathcal{L}_{\mathrm{eff}} = - \delta(z) \bar \psi^{3}_{R} \gamma^{\mu} \bigg[ \partial_{\mu} + i \frac{\sqrt{3}}{3} E^{8}_{\mu} \bigg] \psi^{3}_{R}. \label{eq: eff} \end{eqnarray} Here the delta function appears in the limit $\Delta \rightarrow 0$ after considering the proper normalization factor $A$ in Eq. (\ref{eq: solution2}). Notice the appearance of the induced current \begin{eqnarray} J_{8}^{\mu} = - i \frac{\sqrt{3}}{3} \bar \psi^{3}_{R} \gamma^{\mu} \psi^{3}_{R}, \end{eqnarray} which couples to the gauge field component $E^{8}_{\mu}$ in (\ref{eq: eff}). The appearance of such currents will be important to understand the localization of gauge fields (Sec. \ref{S3}). \subsection{Generalization of the mechanism} \label{S2-Gen} In general, given a nonzero v.e.v. for a scalar field $\Phi (z)$, the position $z$ at which the fermion wave function $\Psi$ is centered is determined by the condition \begin{eqnarray} Y(z) \, \Psi = 0, \end{eqnarray} where $Y(z) = Y[\Phi (z)]$. The chirality of such a state is determined by the sign of the derivative $\partial_{z} Y(\Phi)$ at the given position. To be more precise, if $\partial_{z} Y(\Phi) > 0$ ($\partial_{z} Y(\Phi) < 0$), then the confined fermion is left (right) handed. \section{Localization of gauge fields} \label{S3} We now focus on the gauge sector of the model.
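As a consistency check of the rule just stated, the brane positions listed in the table for the fundamental $\mathbf{3}$ can be recovered numerically from the condition $Y(z) = 0$: for a state of charge $(T_{3}, T_{8})$ with the coupling (\ref{eq: Yukawa}), the branes sit at the zeros of $\cos(kz)T_{3} + \sin(kz)T_{8}$. A minimal sketch (we set $L = 1$; the chirality assignments additionally require the sign of $\partial_{z} Y$ and are omitted here):

```python
import numpy as np

L = 1.0
k = 2 * np.pi / L

# (T3, T8) charges of the fundamental 3, as in Fig. 2
charges = {
    "Psi1": (-0.5,  np.sqrt(3) / 6),
    "Psi2": (+0.5,  np.sqrt(3) / 6),
    "Psi3": ( 0.0, -np.sqrt(3) / 3),
}

def brane_positions(t3, t8):
    """Zeros of cos(k z)*t3 + sin(k z)*t8 on [0, L): two per winding."""
    z0 = (np.arctan2(-t3, t8) / k) % L  # atan2 handles t3 = 0 or t8 = 0
    return sorted({round(z0, 9), round((z0 + L / 2) % L, 9)})

for name, (t3, t8) in charges.items():
    print(name, brane_positions(t3, t8))
```

The output reproduces the positions of the table: $\{0, L/2\}$ for $\Psi^{3}$, $\{L/6, 2L/3\}$ for $\Psi^{1}$ and $\{L/3, 5L/6\}$ for $\Psi^{2}$.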
The localization of gauge fields to domain walls is ensured by the already localized fermionic fields; this is the quasilocalization of gauge fields \cite{gauge}. The interaction of the currents localized at the branes with the 5-D gauge fields induces an effective 4-D theory on the brane; it is produced by one-loop contributions to the effective action coming from the brane currents. \subsection{Quasilocalization of gauge fields} For simplicity, we focus only on the localization of gauge fields to the first brane ($z=0$) and neglect the effect of the coupling between $E^{\alpha}_{A}$ and $\Phi$ on the 5-D behavior of $E^{\alpha}_{A}$ near the brane. Now, assume that the spinor fields are already confined and that the overlap between different branes is very small ($\Delta^{-1} \gg k$). Then, in general, the Lagrangian for the gauge fields $E^{\alpha}_{A}$ about the brane at $z=0$ is given by \begin{eqnarray} \mathcal{L}^{(5)}_{\mathrm{G}} = -\frac{1}{4 g^{2}} F^{\alpha}_{A B} F_{\alpha}^{A B} + \delta(z) E^{\alpha}_{A} J_{\alpha}^{A}(x) \label{eq: gauge}, \end{eqnarray} where $F^{\alpha}_{A B} = \partial_{A} E^{\alpha}_{B} - \partial_{B} E^{\alpha}_{A} + C^{\alpha}_{\beta \gamma} E^{\beta}_{A} E^{\gamma}_{B}$ (here $C^{\alpha}_{\beta \gamma}$ are the $SU(3)$ structure constants) and $g$ is the gauge coupling. As mentioned, the currents $J^{A}_{\alpha}(x)$, localized at the branes, appear as a consequence of the covariant derivative $D_A \Psi = (\partial_A - i E_A^{\alpha} T_{\alpha}) \Psi$. To continue, it is important to observe that, in general, the currents $J^{A}_{\alpha}(x)$ do not transform covariantly under the full set of gauge symmetry transformations [as in Eq. (\ref{eq: eff})]. This is because the various components of the $SU(3)$-spinor representations end up at different positions along the fifth dimension.
In fact, since the effective terms for the gauge fields are induced by loops of these currents, the transformation properties of $J^{A}_{\alpha}(x)$ will be transferred to the confined gauge fields. Take, for instance, the case of our previous example in which the 4-D effective theory is given by Eq. (\ref{eq: eff}). There, $\psi^{3}_{R}$ provides the current $J_{8}^{\mu} = - i \frac{\sqrt{3}}{3} \bar \psi^{3}_{R} \gamma^{\mu} \psi^{3}_{R}$, which couples only to $E^{8}_{\mu}$. Then, a one-loop correction induces the following Lagrangian for $E^{8}_{\mu}$ at the brane: \begin{eqnarray} \mathcal{L}^{(4)} = -\frac{1}{4 \lambda^{2}} (\partial_{\mu} E^{8}_{\nu} - \partial_{\nu} E^{8}_{\mu})^{2}, \end{eqnarray} where \begin{eqnarray} \lambda^{-2} = \frac{N}{12 \pi^{2}} \ln (\Lambda/\mu). \end{eqnarray} Here, $\Lambda$ and $\mu$ are the ultraviolet and infrared cutoff scales of the 5-D theory and $N = 1/3$ (which comes from the coefficient $\sqrt{3}/3$ in $J_{8}^{\mu}$). \subsection{Localization of $SU(2) \times U(1)$ gauge fields} Let us now specialize to the case in which the localized currents preserve the $SU(2) \times U(1)$ transformation properties at the first brane $z=0$. Then it makes sense to perform the following decomposition of the five-dimensional $SU(3)$ gauge field $E^{\alpha}_{A}$: \begin{eqnarray} W_{\mu}^{a} &=& E_{\mu}^{a} \qquad \mathrm{with} \qquad a = 1,2,3, \\ V_{\mu}^{i} &=& E_{\mu}^{i} \qquad \mathrm{with} \qquad i = 4,5,6,7, \\ \phi^{i} &=& E_{5}^{i} \qquad \mathrm{with} \qquad i = 4,5,6,7, \\ B_{\mu} &=& E_{\mu}^{8}. \end{eqnarray} In the limit $\Delta \rightarrow 0$, the other components of $E_{A}^{\alpha}$ are decoupled from the matter fields confined to the branes (this is because these components couple spinor fields with different chiralities, which necessarily end up at different branes).
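For orientation, the induced coupling can be evaluated from the one-loop formula above; a minimal sketch, where the cutoff ratio $\Lambda/\mu$ is purely illustrative:

```python
import math

def induced_coupling(N, cutoff_ratio):
    """Solve 1/lambda^2 = N/(12*pi^2) * ln(Lambda/mu) for lambda."""
    inv_lambda2 = N / (12 * math.pi**2) * math.log(cutoff_ratio)
    return 1.0 / math.sqrt(inv_lambda2)

# N = 1/3 comes from the coefficient sqrt(3)/3 in J_8^mu; the induced
# coupling depends only logarithmically on the cutoff ratio:
for ratio in (1e2, 1e8, 1e15):
    print(ratio, induced_coupling(1 / 3, ratio))
```

The logarithmic dependence means $\lambda$ varies only mildly over many orders of magnitude in $\Lambda/\mu$.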
In this decomposition, the only nonzero structure constants are $C_{a b}^{c}$, $C_{i j}^{a}$ and $C_{i j }^{8}$ (and obvious permutations of indices). Then, the current term can be expressed as \begin{eqnarray} E^{\alpha}_{A} J^{A}_{\alpha} = W^{a}_{\mu} J_{a}^{\mu}(x) + B_{\mu} J^{\mu}(x) + V^{i}_{\mu} J_{i}^{\mu}(x) + \phi^{i} J_{i}(x), \nonumber\\ \end{eqnarray} and the 4-D induced action for the now localized fields $W_{\mu}^{a}$, $V_{\mu}^{i}$, $B_{\mu}$ and $\phi^{i}$ at the first brane ($z=0$) becomes \begin{eqnarray} \mathcal{L}^{(4)}_{\mathrm{G}} &=& - \frac{1}{4 \lambda^{2}_{H}} H^{a}_{\mu \nu} H_{a}^{\mu \nu} - \frac{1}{4 \lambda^{2}_{G}} G_{\mu \nu} G^{\mu \nu} \nonumber\\ && - \frac{1}{2 \lambda^{2}_{\phi}} |D \phi|^{2} - \frac{1}{4 \lambda^{2}_{Q}} Q^{i}_{\mu \nu} Q_{i}^{\mu \nu} + \mathcal{L}_{V} . \qquad \label{eq: induced} \end{eqnarray} Here $H_{\mu \nu}^{a}$, $Q_{\mu \nu}^{i}$, $G_{\mu \nu}$ and $D_{\mu} \phi^{i}$ are defined as \begin{eqnarray} H_{\mu \nu}^{a} &=& \partial_{\mu} W_{\nu}^{a} - \partial_{\nu} W_{\mu}^{a} + C^{a}_{b c} W_{\mu}^{b} W_{\nu}^{c} , \nonumber\\ Q_{\mu \nu}^{i} &=& \partial_{\mu} V_{\nu}^{i} - \partial_{\nu} V_{\mu}^{i} + C^{i}_{a j} W_{\mu}^{a} V_{\nu}^{j} + C^{i}_{j a} V_{\mu}^{j} W_{\nu}^{a} \nonumber\\ && + C^{i}_{8 j} B_{\mu} V_{\nu}^{j} + C^{i}_{j 8} V_{\mu}^{j} B_{\nu} , \nonumber\\ G_{\mu \nu} &=& \partial_{\mu} B_{\nu} - \partial_{\nu} B_{\mu}, \nonumber\\ D_{\mu} \phi^{i} &=& \partial_{\mu} \phi^{i} + C^{i}_{a j} W^{a}_{\mu} \phi^{j} + C^{i}_{8 j} B_{\mu} \phi^{j} . \end{eqnarray} Additionally, in Eq.
(\ref{eq: induced}) we have introduced $\mathcal{L}_{V}$, which contains interaction terms between the vector field $V_{\mu}^{i}$ and the rest of the induced fields \begin{eqnarray} \mathcal{L}_{V} = - \frac{1}{4 \lambda^{2}_{1}} \left( R_{\mu \nu}^{a} R^{\mu \nu}_{a} + K_{\mu \nu} K^{\mu \nu} \right) - \frac{1}{2 \lambda^{2}_{2}} H^{a}_{\mu \nu} R^{\mu \nu}_{a} \nonumber\\ - \frac{1}{2 \lambda^{2}_{3}} G_{\mu \nu} K^{\mu \nu} - \frac{1}{2 \lambda^{2}_{4}} \left( S_{\mu}^{a} S^{\mu}_{a} + S_{\mu} S^{\mu} \right), \qquad \label{eq: L-V} \end{eqnarray} where we have defined: $R_{\mu \nu}^{a} = C^{a}_{i j} V_{\mu}^{i} V_{\nu}^{j}$, $S_{\mu}^{a} = C^{a}_{i j} V_{\mu}^{i} \phi^{j}$, $S_{\mu} = C_{i j}^{8} V^{i}_{\mu} \phi^{j} / \sqrt{3}$ and $K_{\mu \nu} = C^{8}_{i j} V_{\mu}^{i} V_{\nu}^{j} / \sqrt{3}$. Finally, the various couplings $\lambda_{H}$, $\lambda_{G}$, $\lambda_{Q}$ and $\lambda_{\phi}$ in (\ref{eq: induced}), and $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$ and $\lambda_{4}$ in (\ref{eq: L-V}), are, in general, found to be of the form \begin{eqnarray} \frac{1}{\lambda^{2}} = \frac{N}{12 \pi^{2}} \ln \frac{\Lambda}{\mu}, \label{eq: lambda} \end{eqnarray} where $N$ measures the number of fermions present in the different loops, also taking into account the values of the various $SU(3)$-charges and combinatorics. For example, we have \begin{eqnarray} N_{H} = \mathrm{Tr} \left( T_{3}^{2} \right), \quad \mathrm{and} \quad N_{G} = \mathrm{Tr} \left( T_{8}^{2} \right), \end{eqnarray} where the traces run over all charged fermions appearing in the loops that induce the first and second terms of (\ref{eq: induced}). Notice, however, that the values of the $\lambda$-couplings may change when taking into account the split of fermions. For instance, as we shall see in Sec. \ref{hierarchy}, the $Y$ coupling of Eq. (\ref{eq2: Lagrangian1}) could induce the split of fermions around a single brane (for example, the first brane at $z=0$).
This would result in a modification of the way in which the induced 4-D effective theory is computed, and therefore of the way $N$ is computed in (\ref{eq: lambda}). Nevertheless, the values of the $\lambda$-couplings should all remain of the same order. \subsection{Gauge theory near the brane} The complete Lagrangian describing the behavior of the gauge field $E_{A}^{\alpha}$ near the first brane now reads \begin{eqnarray} \mathcal{L}^{(5)}_{\mathrm{G}} = -\frac{1}{4 g^{2}} F^{\alpha}_{A B} F_{\alpha}^{A B} + \delta(z) \mathcal{L}^{(4)}_{\mathrm{G}} \label{eq: gauge2}, \end{eqnarray} where $\mathcal{L}^{(4)}_{\mathrm{G}}$ is the induced Lagrangian (\ref{eq: induced}). To study the propagation of gauge fields on the braneworld it is convenient to define a crossover scale $r_{c} = g^{2} / 2 \lambda^{2}$. Then, the physics taking place at the brane can be shown to have two different regimes \cite{gauge}: at large distances $r \gg r_{c}$ the propagator of the gauge fields becomes five-dimensional, whereas at short distances $r \ll r_{c}$ it becomes four-dimensional. \section{Confining the electroweak model to a brane} \label{S4} We now turn to the confinement of the electroweak model. Our approach consists of adding a new scalar field to the model so as to give a richer structure to the localization mechanism generated by the $Y$-coupling. We then show that leptons can be obtained from the $\mathbf{10}$ representation of $SU(3)$, while quarks can be obtained from the $\mathbf{\bar 6}$. \subsection{Construction of the electroweak brane} To start, assume the existence of the same scalar field $\Phi = \Phi^{\alpha} T_{\alpha}$ (as discussed previously) and an additional scalar field $\Theta = \Theta^{\alpha} T_{\alpha}$, also transforming in the adjoint representation of $SU(3)$.
The dynamics of this scalar is governed by the following $SU(3)$ gauge invariant potential: \begin{eqnarray} U \propto \left[ \Theta^{\alpha} \Theta_{\alpha} - u^{2} \right]^{2}, \end{eqnarray} where $u$ is a constant parameter of the theory. Now, consider the following $Y$-coupling: \begin{eqnarray} Y = - y \left( \frac{1}{2} \{ \Phi , \Theta \} - \frac{1}{4} \Theta^{\alpha} \Phi_{\alpha} + p \frac{\sqrt{3}}{2} u \, \Phi \right) , \label{eq: Y} \end{eqnarray} where $\{ \, , \}$ denotes anticommutation. In the previous equation, $p$ is a parameter of the model that depends on the representation on which $Y$ is acting; in the present construction we take $p = 1$ if $Y$ couples to the $\mathbf{10}$, and $p=-1/3$ if $Y$ couples to the $\mathbf{\bar{6}}$. Other gauge invariant terms can also be included in (\ref{eq: Y}) without modifying the main results of this section (we come back to this point towards the end of this section). We now focus on the case in which $\Theta$ acquires the following v.e.v.: \begin{eqnarray} \langle \Theta \rangle = u \, T_{8}. \end{eqnarray} Then, after the scalars have acquired their respective v.e.v.'s, we are left with the following $z$-dependent coupling: \begin{eqnarray} (y \Phi_{0} u)^{-1} Y &=& - \left[ ( T_{8} + p \sqrt{3}/2 )T_{8} - 1/4 \right] \sin(k z) \nonumber\\ && - \left[ T_{8} + p \sqrt{3}/2 \right] T_{3} \cos(kz). \label{eq: Y(z)gen} \end{eqnarray} Similar to our previous example, in this case the widths of the fermion wave functions are of order $\Delta$ (the confining length scale), which is now found to be \begin{eqnarray} \Delta = \frac{1}{\sqrt{|y \Phi_{0}| u k}}. \end{eqnarray} In what follows we analyze separately the confinement of leptons (from the $\mathbf{10}$) and quarks (from the $\mathbf{\bar 6}$). \subsection{Leptons} Here we study the action of $Y$ on the $\mathbf{10}$ (where $p=1$) and show that the fermions confined to the domain wall can be identified with the usual leptons of the electroweak model.
\subsubsection{Confining leptons} To proceed it is convenient to consider the decomposition of $SU(3)$ into $SU(2)$ subgroups (see Fig. \ref{F4}). The $\mathbf{10}$ has the following decomposition: $\mathbf{10} = \mathbf{1} \oplus \mathbf{2} \oplus \mathbf{3} \oplus \mathbf{4}$, with the following $T_{8}$-charges: $T_{8} = -\sqrt{3}, -\sqrt{3}/2, \, 0, +\sqrt{3}/2$. \begin{figure}[ht] \begin{center} \includegraphics[width=0.35\textwidth]{fig4.eps} \caption[Ten dec]{The figure shows the $\mathbf{10}$ representation and its decomposition into $SU(2)$ subgroups: this is $\mathbf{10} = \mathbf{1} \oplus \mathbf{2} \oplus \mathbf{3} \oplus \mathbf{4}$ with charges $T_{8} = -\sqrt{3}, -\sqrt{3}/2, \, 0, +\sqrt{3}/2$, respectively.} \label{F4} \end{center} \end{figure} Using this notation, we can work out the localization produced by the $Y$-coupling to the first brane at $z=0$. First, observe from Eq. (\ref{eq: Y(z)gen}) that all of those states in the $\mathbf{10}$ with $(T_{8} + \sqrt{3}/2) T_{3} = 0$ give $Y=0$ at $z = 0$. Then, following the reasoning of Sec. \ref{S2-Gen}, a chiral fermion from each one of these states will confine to $z = 0$. The precise chirality of each state depends on the sign of $\partial_{z} Y(z)$. In the present case, assuming $y > 0$, the confined states are: the right-handed $SU(2)$-singlet $R \equiv \psi^{\mathbf{1}}_{R}$ with charge $Q=(0,-\sqrt{3})$; the two left-handed components of the $SU(2)$-doublet $L \equiv \psi^{\mathbf{2}}_{L}$ with charges $Q=(-1/2,-\sqrt{3}/2)$ and $Q=(+1/2,-\sqrt{3}/2)$; and only one left-handed component from the triplet $N \equiv \psi^{\mathbf{3}}_{L}$, with charge $Q=(0,0)$. States with opposite chirality are confined to a ``mirror-brane'' located at $z=L/2$, and any other states are confined elsewhere. Figure \ref{F5} shows those components of the $\mathbf{10}$ that confine to $z=0$. 
\begin{figure}[ht] \begin{center} \includegraphics[width=0.35\textwidth]{fig5.eps} \caption[Ten conf]{The figure shows those states of the $\mathbf{10}$ that end up localized to $z=0$. The labels $L$ and $R$ indicate the chirality of the confined states.} \label{F5} \end{center} \end{figure} Now, the 4-D effective Lagrangian for the massless leptons at the first brane is found to be \begin{eqnarray} \mathcal{L}_{\mathrm{lep}}^{(4)} &=& - \bar L \Big[ \gamma^{\mu} \partial_{\mu} - i \gamma^{\mu} W^{a}_{\mu} T_{a} + i \frac{\sqrt{3}}{2} \gamma^{\mu} B_{\mu} \Big] L \nonumber\\ && - \bar R \Big[ \gamma^{\mu} \partial_{\mu} + i \sqrt{3} \gamma^{\mu} B_{\mu} \Big] R - \bar N \gamma^{\mu} \partial_{\mu} N + \mathcal{L}_{\mathrm{I}}^{(4)}, \nonumber\\ && \label{eq: leptons} \end{eqnarray} where $\mathcal{L}_{\mathrm{I}}^{(4)}$ contains interaction terms involving $\phi^{i}$ and $V^{i}_{\mu}$ \begin{eqnarray} \mathcal{L}_{\mathrm{I}}^{(4)} = - i \alpha \, \phi^{i} \bar R \, T_{i} \, L - i \beta \, V^{i}_{\mu} \bar N \gamma^{\mu} \, T_{i} \, L + \mathrm{h.c.}, \label{eq: leptons-I} \end{eqnarray} where $\alpha$ and $\beta$ are coefficients that appear from the overlap between wave functions of different widths. In the present case, $\alpha = \beta = (5)^{1/4} / \sqrt{3}$. In Eqs. (\ref{eq: leptons}) and (\ref{eq: leptons-I}), $T_{a}$ and $T_{i}$ denote the action of the corresponding $SU(3)$-generators on the $SU(2)$-doublet $L = \psi^{\mathbf{2}}_{L}$. We can rewrite the $T_{i}$'s in Eq. 
(\ref{eq: leptons-I}) to obtain a more transparent notation \begin{eqnarray} \mathcal{L}_{\mathrm{I}}^{(4)} = - i \alpha \frac{\sqrt{3}}{2} \phi^{i} \bar R \, t_{i} \, L - i \beta V^{i}_{\mu} \bar N \gamma^{\mu} \, s_{i} \, L + \mathrm{h.c.}, \label{eq: leptons-I2} \end{eqnarray} where $t_{i}$ and $s_{i}$ with $i = 4,5,6,7$, are $1 \times 2$ matrices acting on $L$ given by \begin{eqnarray} t_{4} = s_{6} = (1, 0),&& \qquad t_{5} = - s_{7} = i ( 1, 0), \nonumber\\ t_{6} = s_{4} = (0, 1),&& \qquad t_{7} = - s_{5} = i (0, 1). \end{eqnarray} \subsubsection{Confining gauge fields} The form of the theory presented in Eqs. (\ref{eq: leptons}) and (\ref{eq: leptons-I}) corresponds to an $SU(2) \times U(1)$ gauge theory with four massless chiral states. Therefore we can deduce the quasilocalization of gauge fields to the first brane as discussed in Sec. \ref{S3} [with the same Lagrangian shown in (\ref{eq: induced})]. \subsubsection{Comparison with the electroweak model} We can now compare this theory with the lepton sector of the electroweak model. The two left-handed components $L$ and the right-handed fermion $R$ can be identified with the usual counterparts of the electroweak model, and $W^{a}_{\mu}$ and $B_{\mu}$ with the $SU(2) \times U(1)$ gauge fields with couplings $g_{1} = \lambda_{H}$ and $g_{2} = \sqrt{3} \lambda_{G}$ respectively. One of the most interesting aspects of this model, however, is the appearance of two additional fields, namely the vector field $V_{\mu}^{i}$ and the left-handed neutrino $N$ (which has a zero-hypercharge). Observe that this neutrino interacts only with the other left-handed particles $L$ through $V_{\mu}^{i}$. If we further assume that $|\phi|$ develops a nonzero v.e.v. $\phi_{0}$ (which can not be ruled out by symmetries), then $\phi^{i}$ takes the role of the Higgs field. If this is the case, two of the chiral states ($R$ and one of the $L$'s) mix together to form an electron, while the other two remain massless (neutrinos). 
The electroweak parameters are then found to be as follows: The electron mass is $m_{e}^{2} = 3 \phi_{0}^{2} \lambda_{\phi}^{2} / 2$, the $W$-boson's mass is $M_{W}^{2} = \phi_{0}^{2} \lambda_{H}^{2}/4$, and the electroweak angle is $\sin^{2} \theta_{W} = 3 \lambda_{G}^{2}/(\lambda_{H}^{2} + 3 \lambda_{G}^{2})$. It is very important for this model that the existence of $V^{i}_{\mu}$ does not conflict with observations. Fortunately, in the case of a nonzero v.e.v. $\phi_{0}$, the four-component vector field $V_{\mu}^{i}$ becomes massive, with $M_{V}^{2} = \phi_{0}^{2} \lambda_{\phi}^{2} \lambda_{Q}^{2} / 4 \lambda_{4}^{2}$. \subsection{Quarks} The case of quarks can be analyzed in exactly the same way as that of leptons. Here we need to consider the value $p = -1/3$ in the $Y$-coupling. Having said this, recall that the $\mathbf{\bar 6}$ can be decomposed into $\mathbf{\bar 6} = \mathbf{1} \oplus \mathbf{2} \oplus \mathbf{3}$ with the following $T_{8}$ charges: $T_{8} = +2 \sqrt{3}/3, + \sqrt{3}/6, -\sqrt{3}/3$ (see Fig. \ref{F6}). \begin{figure}[ht] \begin{center} \includegraphics[width=0.35\textwidth]{fig6.eps} \caption[Six dec]{The figure shows the $\mathbf{\bar 6}$ representation and its decomposition into $SU(2)$ subgroups: this is $\mathbf{\bar 6} = \mathbf{1} \oplus \mathbf{2} \oplus \mathbf{3}$ with charges $T_{8} = +2 \sqrt{3}/3, + \sqrt{3}/6, -\sqrt{3}/3$, respectively.} \label{F6} \end{center} \end{figure} Then, we obtain the following four massless chiral fermions confined to the first brane: the right-handed $SU(2)$-singlet $\psi_{R}^{\mathbf{1}}$ with charge $Q=(0,+2/\sqrt{3})$; the two left-handed components of the $SU(2)$-doublet $\psi_{L}^{\mathbf{2}}$ with charges $Q=(-1/2,+1/2\sqrt{3})$ and $Q=(+1/2,+1/2\sqrt{3})$; and only one right-handed component from the triplet $\psi_{R}^{\mathbf{3}}$, with charge $Q=(0,-1/\sqrt{3})$. Figure \ref{F7} shows those components of the $\mathbf{\bar{6}}$ that confine to $z=0$.
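The confined spectra quoted above for the $\mathbf{10}$ ($p=1$) and the $\mathbf{\bar 6}$ ($p=-1/3$) can be checked directly from Eq. (\ref{eq: Y(z)gen}): a state is confined to $z=0$ when $(T_{8} + p\sqrt{3}/2)T_{3} = 0$, and its chirality follows from the sign of $\partial_{z} Y$ there. A minimal sketch, assuming $y \Phi_{0} u > 0$ and using the decompositions quoted in the text:

```python
import math

SQ3 = math.sqrt(3)

# (T3, T8) charges from the SU(2) decompositions quoted in the text
TEN = ([(0.0, -SQ3)] +
       [(t3, -SQ3 / 2) for t3 in (-0.5, 0.5)] +
       [(t3, 0.0) for t3 in (-1.0, 0.0, 1.0)] +
       [(t3, SQ3 / 2) for t3 in (-1.5, -0.5, 0.5, 1.5)])
SIX = ([(0.0, 2 * SQ3 / 3)] +
       [(t3, SQ3 / 6) for t3 in (-0.5, 0.5)] +
       [(t3, -SQ3 / 3) for t3 in (-1.0, 0.0, 1.0)])

def confined_states(charges, p):
    """States with Y(0) = 0, and their chirality from sign(dY/dz)."""
    out = []
    for t3, t8 in charges:
        if not math.isclose((t8 + p * SQ3 / 2) * t3, 0.0, abs_tol=1e-12):
            continue  # Y(0) != 0: this state is confined elsewhere
        dY = -((t8 + p * SQ3 / 2) * t8 - 0.25)  # sign of dY/dz at z = 0
        out.append((t3, t8, "L" if dY > 0 else "R"))
    return out

print("10 :", confined_states(TEN, 1))
print("6b :", confined_states(SIX, -1 / 3))
```

In each representation exactly four states survive at $z=0$: one $R$ singlet, the two $L$ doublet components and one triplet component ($L$ for the $\mathbf{10}$, $R$ for the $\mathbf{\bar 6}$), in agreement with the states listed in the text.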
\begin{figure}[ht] \begin{center} \includegraphics[width=0.35\textwidth]{fig7.eps} \caption[Six conf]{The figure shows those states of the $\mathbf{\bar 6}$ that end up localized to $z=0$. The labels $L$ and $R$ indicate the chirality of the confined states.} \label{F7} \end{center} \end{figure} When the effective Lagrangian is computed we find the appropriate quantum numbers for this sector to be identified with the quarks of the standard model. A significant difference from the lepton case, however, is the absence of interactions between quarks and the vector field $V_{\mu}^{i}$. \subsection{Solving the hierarchy problem} \label{hierarchy} We have seen that the electron and $W$-boson masses are $m_{e} = \sqrt{3/2} \phi_{0} \lambda_{\phi} $ and $M_{W} = \phi_{0} \lambda_{H}/2$ respectively. What is more, the quark masses are found to be proportional to $\phi_{0} \lambda_{\phi}$, of the same order as the electron mass. This is just the hierarchy problem in the particular case of the present model (recall that the $\lambda$-couplings are all of the same order). A simple way to correct this problem is to introduce a new term in the definition of $Y$. For example, we could consider a new coupling $Y'$ of the form: \begin{eqnarray} Y' = Y - y \, q \, v \, \Theta, \end{eqnarray} where $q$ is a dimensionless coefficient that could depend on the representation on which $Y'$ is acting [observe the similarity of the new term with the old one, $- y \, p \, (\sqrt{3}/2) \, u \, \Phi$, in $Y$]. Then, after the scalars have acquired the v.e.v.'s discussed before, the $Y'$ coupling becomes: \begin{eqnarray} Y'(z) = Y(z) - q (y v u ) T_{8}. \end{eqnarray} The second term of this expression resembles the 5-D mass term of Eq. (\ref{eq2: L-split}). Therefore, the fermion wave functions will be split around the branes and an exponential factor [like the one of Eq. (\ref{eq: Yukawa-supp})] will appear suppressing the couplings of Eq. (\ref{eq: leptons-I2}).
This results in a hierarchy between the mass scales of quarks, leptons and electroweak gauge bosons. Observe that in the definition of $Y$ we could also include terms proportional to $\Phi^{2}$ and $\Theta^{2}$ with coefficients depending on the representation. They would provide additional terms contributing to the split of fermions around the brane. \subsection{About the other branes} To finish, let us briefly mention that other branes are also formed in the bulk. They appear from the localization of the rest of the states in the $\mathbf{10}$ and $\mathbf{\bar 6}$ representations. The most interesting brane is the ``mirror brane'' at $z=L/2$, which contains a copy of the electroweak model obtained at the first brane $z=0$ but with states having opposite chiralities. The rest of the branes (also determined by the condition $Y=0$) all contain different versions of $U(1)$ abelian gauge theories. Figure \ref{F8} shows the 5-D configuration obtained in the construction. \begin{figure}[ht] \begin{center} \includegraphics[width=0.37\textwidth]{fig8.eps} \caption[Bulk configuration]{The figure sketches the disposition of branes in the 5-D bulk. The electroweak brane is located at $z=0$, while the mirror brane (a copy of the first brane but containing matter with opposite chirality) is located at $z=L/2$.} \label{F8} \end{center} \end{figure} \section{Conclusions} \label{S5} In this paper we have obtained a simple realization of the electroweak model confined to a brane. The mechanism consisted in breaking the $SU(3)$ gauge symmetry down to $SU(2) \times U(1)$ through the localization of bulk fermions to the brane. The localization was produced by the coupling $Y$, of Eq. (\ref{eq: Y}), between $SU(3)$ fermions and scalar fields with nonzero vacuum expectation values. As in the split fermion scenario, the four-dimensional fermions at the brane were found to be chiral.
This allowed us to achieve the electroweak chiral structure by localizing those states (within given $SU(3)$-representations) with appropriate charges to the same brane. For example, the lepton sector was obtained from the $\mathbf{10}$ representation, while the quark sector was obtained from the $\mathbf{\bar{6}}$. Remarkably, in this model it was possible to identify the Higgs field with the fifth component of the $SU(3)$ bulk gauge field (see \cite{gh1, gh2, gh3, gh4, gh5, gh6} for similar approaches). One problem with this result, however, is the apparent difficulty in generating the appropriate potential for the Higgs. Whether it is possible to obtain such a potential in this particular setup remains an open question. Another feature of the present construction is the presence of two new fields coupled to the lepton sector of the standard model: a four-component vector field $V_{\mu}^{i}$ (that transforms like the Higgs under $SU(2) \times U(1)$ symmetry transformations) and a left-handed neutrino $N$ (with zero hypercharge). The existence of these particles opens up interesting phenomenological possibilities. For instance, the nonobservation of $V$-boson pair production at LEP \cite{exp-W} indicates the constraint \begin{eqnarray} M_{V} > 104~\mathrm{GeV}. \end{eqnarray} Nevertheless, from the results of this paper we should not expect a value of $M_{V}$ significantly higher than $M_{Z}$ and $M_{W}$. At the same time, the mechanism generating the hierarchy between leptons, quarks and gauge bosons is also suppressing the couplings between $V$ and leptons. If this is the case, then we could expect new phenomena associated with extra dimensions in lepton-collider experiments in the near future. Let us finish by mentioning that an important question that still needs to be addressed within this model is how to include the mixing between different families of leptons and quarks.
For instance, in the case of leptons, the new neutrino $N$ could be playing some relevant role in the mixing of neutrinos. \section*{Acknowledgements} The author is grateful to Anne C. Davis, Daniel Cremades, David Tong and Guy Moore for useful comments. This work is supported in part by DAMTP (Cambridge) and MIDEPLAN (Chile).
\section{Introduction}In the literature, there have been various forms of renormalization group equation that describe running behaviors in the framework of field theories. In a sense, the known renormalization group equations could be cast into the following two categories: (a) renormalization group equations derived from the standard algorithm of perturbative renormalization\cite{Stuek,Gell,Callan,Syman,tHooft,Weinb}; (b) the Wilsonian renormalization group equation\cite{Wilson} and exact renormalization group equations\cite{Weg,Nicoll,Polch,Wett,Ell,BBreview} that work essentially in the nonperturbative regime. In this report, we wish to explore another avenue of renormalization group equation: nonperturbative running of couplings in the presence of tight constraints. The objects under consideration are closed-form $T$ matrices obtained from Lippmann-Schwinger equations (LSE) or Schr\"odinger equations. Earlier studies of conventional renormalization group equations within the LSE could be found in Refs.\cite{Frederico1,Frederico2}. Our studies originate from the effective field theory approach to the nuclear force; for reviews, see\cite{review1b,review2a,reviewEM,review2b,review5}. For energies much below the pion mass, one could simply work with the simpler pionless effective field theory or contact potentials, so that closed-form solutions to Lippmann-Schwinger equations are feasible\cite{YH,JPA,AOP}. However, these closed-form $T$ matrices impose tight constraints that preclude the conventional renormalization program (established in perturbation theory) from working, and hence they are usually discarded or circumvented. Nevertheless, an elaborate analysis of the closed-form $T$ matrices showed that the tight constraints could actually be exploited to arrive at a novel nonperturbative scenario for effective field theory renormalization\cite{AOP}.
In such scenarios, not all parameters from effective field theory loops can be absorbed into the effective field theory couplings; some of them must be treated as physical, and hence renormalization group invariant, parameters to be fixed independently\cite{AOP}. The conceptual foundation of such a treatment lies in the underlying theory perspective: effective field theory is only a simple description of certain phenomena, so it is conceivable that some ingredients of the regular properties of the underlying theory are not adequately described by effective field theory couplings but show up through effective field theory (loop) integrals in nonperturbative contexts with various truncations. Hence the real issue in effective field theory is to 'fix' the effective field theory loop integrations in ways compatible with underlying theory principles as far as possible. Such an underlying theory perspective naturally allows us to employ a general parametrization of the effective field theory integrals (and the associated subtraction), without resorting to special prescriptions like power divergence subtraction (PDS) and additional treatments\cite{KSW1,KSW2,Gege}. In a sense, the tight constraints imposed by the closed-form $T$ matrices may leave some of the parameters from effective field theory loop integrals encoding physical content, and may render the running behaviors of effective field theory couplings quite different from those in perturbative regimes. Thus, examining the nonperturbative running couplings and their properties may yield further conceptual gains. Although our discussion is given in the much simpler pionless effective field theory, the concepts and scenarios discussed here could in principle be extended to other effective field theories.
As we employ a general parametrization of effective field theory loop integrals, which is a crucial component of our approach, our discussion should be useful for all physical studies that are beset with ultra-violet divergences in nonperturbative contexts, where conventional programs are actually infeasible. In other words, we employ a simple and yet nontrivial setting to demonstrate an alternative scenario of renormalization in nonperturbative contexts and its utilities, which has so far been overlooked in most of the literature. This report is organized as follows: In Sec. II, we present the closed-form on-shell $T$ matrices in the $^1S_0$ channel and the uncoupled $P$ channels at truncation order $\mathcal{O}(Q^4)$, together with some linear relations among the factors parameterizing the closed-form $T$ matrices; Sec. III is devoted to the plausible reasoning on the issue of effective field theory renormalization and renormalization group invariance in nonperturbative contexts, which in turn leads us to the version of renormalization group equation in the presence of tight constraints imposed by closed-form $T$ matrices; In Sec. IV, we utilize the tight constraints to obtain nonperturbative running couplings in the $^1S_0$ channel and the uncoupled $P$ channels, and explore their properties and implications; The summary is given in Sec. V. \section{Closed-form $T$ matrices in pionless effective field theory for $NN$ scattering} \subsection{Parametrization of the on-shell $T$}In pionless effective field theory, the $NN$ interaction becomes a contact one. The contact potential truncated at order $\mathcal{O}(Q^4)$ in the $^1S_0$ channel reads:\bea V_{^1S_0}=C_0+C_2\left(q^2+{q^{\prime}}^2\right)+C_4\left(q^4+{q^{\prime}}^4\right)+\tilde{C}_4q^2{q^{\prime}}^2,\eea from which it is easy to read off the potentials truncated at orders $\mathcal{O}(Q^0)$ and $\mathcal{O}(Q^2)$. 
The closed-form $T$ matrices for such contact potentials could be found by solving the LSE via the trick employed in Ref.\cite{Maryland} and elaborated in Ref.\cite{YH}, where the convolution integral is parameterized in a general manner. Then, the on-shell $T$ matrix for the $^1S_0$ channel at order $\mathcal{O}(Q^4)$ reads\cite{YH},\bea\label{Tn3}&&\frac{1}{T(p)}=\mathcal{I}_0+\frac{N_{S;0}+N_{S;1}p^2+N_{S;2}p^4}{D_{S;0}+D_{S;1}p^2+D_{S;2}p^4+D_{S;3}p^6},\\&& \mathcal{I}_0\equiv J_0+i\frac{M_N}{4\pi}p,\ p\equiv\sqrt{M_NE}.\eea The detailed expressions of $[N_{\cdots}]$ and $[D_{\cdots}]$ are listed in \ref{appA}, where we also give the definition of the prescription-dependent parameters $[J_{2n+1},n>0]$ that arise from the convolution integration. At this stage they are referred to as a general parametrization of regularization/renormalization. Setting higher-order couplings to zero yields the corresponding lower-order results for $[N_{\cdots}]$ and $[D_{\cdots}]$; for example, setting $C_4=\tilde{C}_4=0$ leads us back to $\mathcal{O}(Q^2)$:\bea&&\frac{1}{T(p)}=\mathcal{I}_0+\frac{N_{S;0}}{D_{S;0}+D_{S;1}p^2},\\&&N_{S;0} =(1-C_2J_3)^2,\ D_{S;0}=C_0+C_2^2J_5,\ D_{S;1}=2C_2-C_2^2J_3.\eea Setting all couplings except $C_0$ to zero leads us to $\mathcal{O}(Q^0)$: $1/T(p)=\mathcal{I}_0+1/C_0.$ For the uncoupled $P$-wave channels, we have at order $\mathcal{O}(Q^4)$:\bea&&V_{^{2s+1}P_j}=C_{P;2}qq^{\prime}+C_{P;4}\left(q^2+{q^{\prime}}^2\right)qq^{\prime}\nonumber\\&& \rightarrow\frac{1}{T(p)}=\mathcal{I}_0+\frac{N_{P;0}+N_{P;1}p^2}{D_{P;0}p^2+D_{P;1}p^4}.\eea The detailed expressions for $[N_{\cdots}]$ and $[D_{\cdots}]$ are listed in \ref{appB}. At order $\mathcal{O}(Q^0)$, the $T$ matrix does not exist, as $V_{^{2s+1}P_j}=0$. 
At order $\mathcal{O}(Q^2)$, we have\bea&&\frac{1}{T(p)}=\mathcal{I}_0+\frac{N_{P;0}}{D_{P;0}p^2}, \nonumber\\&&N_{P;0}=1-C_{P;2}J_3,\ D_{P;0}=C_{P;2}.\eea The off-shell version of the $T$ matrix at generic order is more involved, and the $\mathcal{O}(Q^4)$ case for the $^1S_0$ and uncoupled $P$ channels will be presented in \ref{appC}. Here, to get a rough idea, we list the $\mathcal{O}(Q^2)$ case:\bea^1S_0:&&{T_S(q,q^\prime;p)}=\frac{\tilde{D}_S(q,q^\prime;p)+\check{\delta}_S(q,q^\prime;p)\check{O}_{\texttt{\tiny off}}} {N_{S;0}+\mathcal{I}_0\left(\sum_{i=0}^1D_{S;i}p^{2i}\right)},\quad\check{O}_{\texttt{\tiny off}}\equiv\left(p^2-q^2\right)\left(p^2-{q^\prime}^2\right),\nonumber\\&& \tilde{D}_S(q,q^\prime;p)=V_S(q,q^\prime)+C^2_2\left[J_5+J_3\left(p^2-q^2-{q^{\prime}}^2\right)\right],\quad\check{\delta}_S(q,q^\prime)=-\mathcal{I}_0C^2_2.\\P:&&{T_P(q,q^\prime;p)}= \frac{\tilde{D}_P(q,q^\prime;p)+\check{\delta}_P(q,q^\prime;p)\check{O}_{\texttt{\tiny off}}}{N_{P;0}+\mathcal{I}_0D_{P;0}p^2},\nonumber\\&&\tilde{D}_P(q,q^\prime;p)=V_P(q,q^\prime), \quad\check{\delta}_P(q,q^\prime)=0.\eea It is obvious that the term containing $\check{O}_{\texttt{\tiny off}}$ is a pure off-shell part; the on-shell momentum or energy enters the game 'everywhere' in the functional form of the $T$ matrix, while the off-shell momenta $q,q^\prime$ appear only in the numerators. It is a simple exercise to verify that on shell, $\tilde{D}_S(p,p;p)=\sum_{i=0}^1D_{S;i}p^{2i}$ at this order. We would also like to note that the off-shell $T$ matrix coincides exactly with the on-shell $T$ matrix only at the leading order in the $^1S_0$ channel, making it a very special case. 
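As a numerical cross-check of the on-shell reductions above, the following Python sketch evaluates the $\mathcal{O}(Q^2)$ inverse amplitude in the $^1S_0$ channel from the factors listed in this section; the unitarity term $iM_Np/4\pi$ is omitted, the input values are hypothetical, and exact rational arithmetic is used so that the check is not clouded by rounding. Setting $C_2=0$ must collapse the result to the $\mathcal{O}(Q^0)$ form $1/T=J_0+1/C_0$.

```python
from fractions import Fraction as F

def inv_T_1s0_Q2(p2, C0, C2, J3, J5, J0):
    """Real part of 1/T in the 1S0 channel at O(Q^2); the unitarity
    term i*M_N*p/(4*pi) is omitted.  N, D factors as listed above."""
    N0 = (1 - C2*J3)**2
    D0 = C0 + C2**2*J5
    D1 = 2*C2 - C2**2*J3
    return J0 + N0/(D0 + D1*p2)

# hypothetical numerical inputs, purely to exercise the formulas
C0, C2, J3, J5, J0, p2 = F(3), F(1, 2), F(1, 5), F(2, 7), F(1, 3), F(4)
# setting C2 = 0 must reproduce the O(Q^0) result 1/T = J0 + 1/C0
assert inv_T_1s0_Q2(p2, C0, F(0), J3, J5, J0) == J0 + 1/C0
```

The same function can be reused to probe the prescription dependence by varying $J_3$ and $J_5$ at fixed couplings.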
\subsection{Linear constraints for $[N_{\cdots},D_{\cdots}]$}Interestingly, there exist some linear relations among the factors $[N_{S,\cdots},D_{S,\cdots}]$ at each order of truncation:\bea \label{relation1s0-0}\mathcal{O}(Q^0):&&N_{S;0}=1,\ D_{S;0}=C_0.\\\label{relation1s0-2}\mathcal{O}(Q^2):&&N_{S;0}+D_{S;1}J_3=1,\nonumber\\&&D_{S;0}J_3+D_{S;1}J_5 =C_0J_3+2C_2J_5.\\\label{relation1s0-4}\mathcal{O}(Q^4):&&N_{S;2}+D_{S;3}J_3=0,\ N_{S;1}+D_{S;2}J_3+D_{S;3}J_5=0,\nonumber\\&&N_{S;0}+D_{S;1}J_3+D_{S;2}J_5+D_{S;3}J_7=1, \nonumber\\&&\sum_{n=0}^{3}D_{S;n}J_{2n+3}=C_0J_3+2C_2J_5+(2C_4+\tilde{C}_4)J_7.\eea All these relations could be readily verified. Given this evidence, we are tempted to conjecture the following linear relations for the $^1S_0$ channel at a generic order $\Delta$:\bea\label{NDrelation1s0}\mathcal{O}(Q^\Delta):&&N_{S;k}+\sum_{n=k+1}^{\Delta-1} D_{S;n}J_{2(n-k)+1}=0,\ 1\leq k\leq\Delta-2,\nonumber\\&&N_{S;0}+\sum_{n=1}^{\Delta-1}D_{S;n}J_{2n+1}=1,\nonumber\\&&\sum_{n=0}^{\Delta-1}D_{S;n}J_{2n+3}=\sum_{n=0}^{\Delta/2} \left(a_nC_{2n}+b_n\tilde{C}_{2n}+\cdots\right)J_{2n+3}.\eea Here, $a_n=b_n=\cdots=1$ for diagonal entries and $a_{n}=b_n=\cdots=2$ for off-diagonal entries. Similarly, for the $P$ channels, we have verified up to $\mathcal{O}(Q^4)$ and again conjecture that\bea\label{relationP2}\mathcal{O}(Q^2):&&N_{P;0}+D_{P;0}J_3=1,\nonumber\\&&D_{P;0}J_5 =C_{P;2}J_5.\\\label{relationP4}\mathcal{O}(Q^4):&&N_{P;1}+D_{P;1}J_3=0,\nonumber\\&&N_{P;0}+D_{P;0}J_3+D_{P;1}J_5=1,\nonumber\\&&D_{P;0}J_5+D_{P;1}J_7=C_{P;2}J_5+2C_{P;4}J_7.\\ \label{NDrelationP}\mathcal{O}(Q^\Delta):&& N_{P;k}+\sum_{n=k}^{\Delta-3}D_{P;n}J_{2(n-k)+3}=0,\quad1\leq k\leq\Delta-3,\nonumber\\&&N_{P;0}+\sum_{n=0}^{\Delta-3}D_{P;n}J_{2n+3}=1, \nonumber\\&&\sum_{n=0}^{\Delta-3}D_{P;n}J_{2n+5}=\sum_{n=1}^{\Delta/2}\left(a_nC_{P;2n}+b_n\tilde{C}_{P;2n}+\cdots\right)J_{2n+5}.\eea Again, $a_n=b_n=\cdots=1$ for diagonal entries and $a_{n}=b_n=\cdots=2$ for off-diagonal entries. 
We have verified that these relations or identities are also valid at order $\mathcal{O}(Q^6)$ in the $^1S_0$ channel. In Sec. IV, we will see that these relations are crucial for finding the solutions of nonperturbative running couplings. At present we could provide neither a rigorous proof nor a sound interpretation of these relations; further explorations in the future are worthwhile. There might be some intriguing contents in these relations that could be illuminating. In pionless effective field theory, the contact couplings are dominated by pion-exchange loop diagrams in the range $l\in(m_\pi,M_N)$; such intimate relations between $[C_{\cdots}]$ and $[J_{\cdots}]$ imply that they should come from different 'sides' of the same regularities of an underlying theory, say, covariant chiral perturbation theory, or quantum chromodynamics (QCD). It is also obvious that the 'contents' of these relations depend upon specific prescriptions of $[J_{\cdots}]$. They would reduce to very uninteresting and uninformative ones in dimensional schemes like PDS\cite{KSW1,KSW2}. In a sense, the regularization scheme really matters in the nonperturbative regime: it may reveal or hide some intricacies of the underlying theory, in contrast to conventional wisdom. 
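The $\mathcal{O}(Q^2)$ relations above are simple enough to check mechanically. The Python sketch below substitutes random rational values (purely hypothetical) for the couplings and the $[J_{\cdots}]$ parameters and confirms, with exact arithmetic, the two $^1S_0$ identities of Eq.(\ref{relation1s0-2}) and the first $P$-channel identity of Eq.(\ref{relationP2}):

```python
from fractions import Fraction as F
import random

rng = random.Random(0)
rnd = lambda: F(rng.randint(-9, 9), rng.randint(1, 9))

for _ in range(100):
    C0, C2, CP2, J3, J5 = (rnd() for _ in range(5))
    # 1S0 factors at O(Q^2), as listed in Sec. II
    N0 = (1 - C2*J3)**2
    D0 = C0 + C2**2*J5
    D1 = 2*C2 - C2**2*J3
    assert N0 + D1*J3 == 1                    # first O(Q^2) relation
    assert D0*J3 + D1*J5 == C0*J3 + 2*C2*J5   # second O(Q^2) relation
    # uncoupled P channels at O(Q^2)
    NP0, DP0 = 1 - CP2*J3, CP2
    assert NP0 + DP0*J3 == 1
```

Since the identities hold for arbitrary rational inputs, they are in fact algebraic identities in $[C_{\cdots},J_{\cdots}]$, consistent with the claim that they are prescription-independent at fixed truncation order.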
\section{Renormalization group invariance in the nonperturbative regime} \subsection{Standard renormalization group equation as a 'decoupling theorem' from underlying components}Let us first digress a little on the general form of the renormalization group equation from the underlying theory perspective, where the 'corrections' from the 'underlying components' ($\{\sigma\}$ that render an effective field theory well defined in the ultra-violet region) to the canonical scaling laws in effective field theories could be readily interpreted as 'decoupling theorems': The scalings of the 'underlying components' ($\sum_{\sigma}d_{\sigma}\sigma\partial_{\sigma}$) could at most contribute the scalings of local effective field theory (composite) operators $[O_i]$ in the 'decoupling limits', provided that the effective field theory is local and covariant\cite{PLB625}:\bea&&\sum_{\sigma}d_{\sigma}\sigma\partial_{\sigma}\Gamma^{(n)}([p],[g];\{\sigma\})\nonumber\\&&\Rightarrow \sum_{\bar{c}}d_{\bar{c}}{\bar{c}}\partial_{\bar{c}}\Gamma^{(n)}([p],[g];\{\bar{c}\})=\sum_{O_i}\delta_{O_i}I_{O_i}\Gamma^{(n)}([p],[g];\{\bar{c}\}),\eea where $[p]$ and $[g]$ denote the external momenta and couplings (including masses) in a complete $n$-point function $\Gamma^{(n)}$, $d_{\cdots}$ the mass dimensions, $\{\bar{c}\}$ the constants from the 'decoupling limits'\footnote{In the conventional algorithm of perturbative renormalization, they arise from the subtraction procedure.}, and $\delta_{O_i}$ the anomalous dimension of $O_i$. This somewhat 'primitive' form of renormalization group equation puts renormalizable and nonrenormalizable theories on the same footing and could be transformed into various other forms, from which we could readily recover, as natural corollaries, some well-known low-energy theorems derived in QCD in renormalized form\cite{PLB625,JPA40}. 
In covariant perturbation theory, the 'anomalous' contributions of $\{\bar{c}\}$ to the scaling laws come from the logarithmic terms like $\delta_{O_i}\ln\frac {\bar{c}}{m}$ ($d_{\bar{c}}=1=d_m$, $m\in[g]$, i.e., $m$ is a mass in effective field theory) that arise from loops. However, when various truncations are employed, the standard renormalization group equation may no longer be ensured. Furthermore, sticking to the standard renormalization group equation and the associated wisdom would lead us nowhere in front of the tight constraints that arise from the combination of truncations and nonperturbative contexts. For example, for the on-shell closed-form $T$ matrices given in Sec. II.A, the scaling law simply reads\bea\label{RGET}\left\{p\partial_p+\sum_{C_{\cdots}}d_{C_{\cdots}}C_{\cdots}\partial_{C_{\cdots}} +\sum_{J_{\cdots}}d_{J_{\cdots}}J_{\cdots}\partial_{J_{\cdots}}-2\right\}{T}(p;[C_{\cdots}];[J_{\cdots}])=0,\eea where $\sum_{C_{\cdots}} d_{C_{\cdots}}C_{\cdots}\partial_{C_{\cdots}}$ denotes the contribution from contact couplings and $\sum_{J_{\cdots}}d_{J_{\cdots}}J_{\cdots}\partial_{J_{\cdots}}$ denotes that from loop integrals (which should correspond to $\sum_{\bar c}d_{\bar{c}}{\bar{c}}\partial_{\bar{c}}$). Since $[C_{\cdots}]$ and $[J_{\cdots}]$ are all power-like dimensional constants and highly intertwined on the same footing in the homogeneous polynomial factors ($[N_{\cdots}]$ and $[D_{\cdots}]$) of the closed-form $T$ matrices, the scaling of $[J_{\cdots}]$ could no longer be simply cast into scaling anomalies in terms of local operators and absorbed into contact couplings. Actually, there is in general a mismatch between $[C_{\cdots}]$ and $[J_{\cdots}]$ just due to truncation\cite{JPA}. Such closed-form solutions would usually be deemed a disaster and discarded or circumvented altogether. Here, we take this difficulty as a motivation to reexamine the whole issue from general principles, so as to make sense out of the tight constraints. 
In our view, the best choice is to resort to the underlying theory perspective on effective field theory renormalization in nonperturbative contexts, so as to turn the tight constraints into virtues to be exploited, which will be discussed and explicated below in Sec. III.B. \subsection{Nonperturbative scenario of effective field theory renormalization in underlying theory perspective}Our take on the issue is based on the following observations\cite{YH,JPA,AOP}: First, in the underlying theory perspective, ultra-violet divergences arise in effective field theory because the effective field theory projection operation $\breve{{\mathcal{P}}}_{\texttt{\tiny EFT}}$ does not commute with loop integrations\cite{YH},\bea\texttt{CT}\equiv[\breve{{\mathcal{P}}}_{\texttt{\tiny EFT}},\int\!d^Dl]\neq0.\eea Then subtraction automatically follows in each loop as a rearrangement of this commutator ($\texttt{CT}$),\bea\breve{{\mathcal{P}}}_{\texttt{\tiny EFT}}\!\!\int\!d^Dl\ \underbrace{[f(l,\cdots)]}_{UT}=\int\!d^Dl\ \underbrace{\breve{{\mathcal{P}}}_{\texttt{\tiny EFT}}\!\left[f(l,\cdots)\right]}_{EFT}+\underbrace{\texttt{CT} \left[f(l,\cdots)\right]}_{counterterm},\eea giving rise to the parameters (here, the $[J_{\cdots}]$) to be 'fixed'. Second, there are intrinsic mismatches between these parameters and the effective field theory couplings in the closed-form $T$ matrices\cite{JPA}: the 'matched' ones could be absorbed into the effective field theory couplings $[C_{\cdots}]$ and make the latter 'run', while the 'unmatched' ones are separately constrained to be renormalization group invariant or physical, giving rise to the following scenario for effective field theory renormalization: $$\mathcal{S}\equiv[C_{\cdots}(\mu)]\oplus[J_{\cdots}^{\texttt{\tiny(phys)}}, J_{\cdots}(\mu)]=[C_{\cdots}(\mu),J_{\cdots}(\mu)]\oplus[J_{\cdots}^{\texttt{\tiny(phys)}}]$$ with $\mu$ a running scale. 
Consequently, $[C_{\cdots}(\mu)]$ and $[J_{\cdots}(\mu)]$ must conspire in the remaining renormalization group invariants and join with $[J_{\cdots}^{\texttt{\tiny(phys)}}]$ to parameterize the closed-form $T$ matrices\cite{AOP}. Therefore, we need to find these renormalization group invariants and then solve for the running couplings in terms of these renormalization group invariants and the running parameters $[J_{\cdots}(\mu)]$, which are the subjects of the next two subsections. This conception of renormalization in the face of tight constraints deviates from the standard wisdom established in perturbative contexts. \subsection{Renormalization group invariance of the shape of the on-shell $T$}First, the dependence of an on-shell $T$ matrix on $p$ (the functional shape) is physical, since it is related to the phase shift in the following way in channel $L$:\bea\Re\left\{-\frac{4\pi p^{2L}}{M_NT(p)}\right\}=p^{2L+1}\cot\delta_L(p).\eea In the present work, the functional shape of the closed-form on-shell $T$ matrices is completely encoded in the ratios like $[N_{L;i}/N_{L;0}, D_{L;j}/N_{L;0}]$ and the parameter $J_0$, so these ratios must be physical and hence renormalization group invariant in a general sense:\bea& &\left\{\frac{\sum_{i}\left(N_{L;i}/N_{L;0}\right)p^{2i}}{\sum_j\left(D_{L;j}/N_{L;0}\right)p^{2j}}+J_0p^{2L}\right\}_{{RG\ inv}}\nonumber\\&&\label{RG-ND-npt}\Longrightarrow\delta_{{RG}}(J_0)=0,\ \delta_{{RG}}\left(\frac{N_{L;i}}{N_{L;0}}\right)=0,\ \delta_{{RG}}\left(\frac{D_{L;j}}{N_{L;0}}\right)=0,\ \forall i,j.\eea Here, '$\delta_{{RG}}$' denotes the variations in renormalization prescriptions. In the case of infinitesimal variations, they are nothing but the homogeneous renormalization group equations satisfied by these (physical) ratios, a fact long established in renormalization theory. 
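The claim that only the ratios matter can be made concrete: rescaling all the $N_{L;i}$ and $D_{L;j}$ by a common factor (mimicking a change of prescription that leaves the ratios intact) does not change the on-shell $1/T$. A minimal Python sketch, with purely hypothetical coefficients and exact rational arithmetic:

```python
from fractions import Fraction as F

def inv_T(p2, J0, N, D):
    """Real part of 1/T (L=0) for polynomial factors N, D (coefficient lists)."""
    num = sum(n*p2**i for i, n in enumerate(N))
    den = sum(d*p2**j for j, d in enumerate(D))
    return J0 + num/den

J0, p2 = F(1, 3), F(7)
N = [F(1), F(2), F(-1, 2)]            # hypothetical N_{S;i}
D = [F(4), F(1), F(3), F(1, 5)]       # hypothetical D_{S;j}
lam = F(9, 4)                         # common rescaling of all N, D factors
scaled = inv_T(p2, J0, [lam*n for n in N], [lam*d for d in D])
assert scaled == inv_T(p2, J0, N, D)  # only N_i/N_0 and D_j/N_0 matter
```

In other words, the individual $N_{L;i},D_{L;j}$ are prescription-dependent, while the shape of $1/T$ sees only the equivalence class under a common rescaling, exactly the content of Eqs.(\ref{RG-ND-npt}).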
In a complete formulation in terms of the underlying theory, renormalization group invariance should read $\delta_{RG}\left\{\cdots\right\}=\left(\sum_{\sigma}d_{\sigma}\sigma \partial_{\sigma}-\sum_{O_i}\delta_{O_i}I_{O_i}\right)\left\{\cdots\right\}=0,$ while here in pionless effective field theory, it reads $$\delta_{{RG}}=\sum_{J_{\cdots}} d_{J_{\cdots}}J_{\cdots}(\mu)\,{d}_{J_{\cdots}(\mu)}=\sum_{J_{\cdots}}d_{J_{\cdots}}J_{\cdots}(\mu)\,\partial_{J_{\cdots}(\mu)}+\sum_{C_{\cdots},J_{\cdots}}\gamma_{C_{\cdots}; J_{\cdots}}C_{\cdots}(\mu)\,\partial_{C_{\cdots}(\mu)},\ \gamma_{C_{\cdots};J_{\cdots}}\equiv\partial{C}_{\cdots}/\partial{J_{\cdots}},$$ where the contributions from $[J_{\cdots}^{\texttt{\tiny(phys)}}]$ must be excluded due to the tight constraints. By contrast, in Wilsonian renormalization group equations, $\delta_{{RG}}$ is simply implemented as $\Lambda{d}_\Lambda$ ($\Lambda$ denoting the cutoff scale), which is okay for perturbative issues. However, in the presence of tight constraints it might be problematic. The reason is as follows: Using a universal cutoff $\Lambda$, each of the $[J_{\cdots}]$ is $\Lambda$-dependent. However, the tight constraints require some of them, say $[J_{\cdots}^{\texttt{\tiny(phys)}}]$, to be 'physical' or renormalization group invariant. As the variation with $\Lambda$ also induces changes in $[J_{\cdots}^{\texttt{\tiny(phys)}}]$ (denoted as $[\Lambda{\partial_\Lambda}]_{\texttt{\tiny(Phys)}}$), this component must be excluded to ensure real renormalization group invariance, i.e., $\delta_{RG}=\Lambda{d_\Lambda} -[\Lambda{\partial_\Lambda}]_{\texttt{\tiny(Phys)}}$. Evidently, this is a mission impossible within the conventional form of Wilsonian renormalization group equations\footnote{Unless some sophisticated ingredients are introduced to alleviate the pressure; see a recent effort in this direction in Ref.\cite{EGeM}.}. 
One must seek tractable ways to realize the operation $\delta_{RG}=\Lambda{d}_\Lambda-[\Lambda{\partial_\Lambda}]_{\texttt{\tiny(Phys)}}$. Obviously, the general parametrization of loop integrals plays a pivotal role in the foregoing discussion. Now we should note that $J_0$ becomes renormalization group invariant or physical at a generic truncation order; it corresponds to the constant part of the fixed-point solution of $NN$ scattering in Ref.\cite{Birse}. As the ratios that satisfy Eqs.(\ref{RG-ND-npt}) are physical or renormalization group invariant, the effective field theory couplings must 'run' in ways that exactly cancel the running parameters so as to keep these ratios intact. Then the running couplings could be found in reverse from these physical ratios, which is the task of the following sections. \subsection{Renormalization group invariance of ERE parameters}Since the effective range expansion (ERE) is actually a Taylor expansion around $p=0$, the functional shape of the $T$ matrices is also completely encoded in the ERE factors:\bea\label{ERT}\Re\left\{-\frac{4\pi p^{2L}}{M_NT(p)}\right\}=p^{2L+1}\cot\delta_L(p)=-\frac{1}{a}+\frac{1}{2}r_ep^2 +\sum_{k=2}^{\infty}v_kp^{2k},\eea with $a$ and $r_e$ being known as the scattering length and effective range in channel $L$. All the ERE factors are physical observables and hence could serve as the renormalization group invariants for our purpose, as they are rational functions in terms of $[N_{L;i},D_{L;j}]$ and $J_0$. Actually, one could retrieve the renormalization group invariant ratios defined in Sec. III.C from the ERE factors through appropriate combinations, at any given order of truncation. 
To illustrate, we take the $^1S_0$ channel at order $\mathcal{O}(Q^2)$ as a simple but nontrivial example, where\bea&&a^{-1}=\frac{4\pi}{M_N}(N_{S;0}{D_{S;0}}^{-1}+J_0),\ r_e=\frac{8\pi}{M_N}N_{S;0}D_{S;1}D_{S;0}^{-2},\nonumber\\&&v_k=(-1)^{k-1}\frac{4\pi}{M_N}N_{S;0} D_{S;1}^kD_{S;0}^{-k-1},\ k\geq2.\eea It is obvious that the following ratios are also renormalization group invariant:\bea&&\Xi_0\equiv\frac{M_Nr_ev_k}{8\pi v_{k+1}}+\frac{M_N}{4\pi a}=J_0,\ \Xi_1\equiv\frac{8\pi v_{k+1}}{M_Nr_ev_k}=-\frac{D_{S;0}}{N_{S;0}}, \nonumber\\&&\Xi_2\equiv\frac{8\pi v^2_{k+1}}{M_Nr_ev_k^2}=\frac{D_{S;1}}{N_{S;0}},\eea just equivalent to Eqs.(\ref{RG-ND-npt}) at $\mathcal{O}(Q^2)$. Such solutions are always feasible precisely due to truncation, which is exploited here rather than circumvented. We should note in passing that, in the PDS prescription, where $J_{\cdots}=0, J_0\neq0$, most of the ERE parameters of the $^1S_0$ channel turn out to be rational functions of the contact couplings, like\bea a^{-1}=\frac{4\pi}{M_N}\left(C^{-1}_0+J_0\right),\ r_e=F_2(C_0,C_2),\ v_k=F_k(C_0,[C_{2j}]),\ k\geq2.\eea One might think that this will lead us back to the Kaplan-Savage-Wise (KSW) running\cite{KSW1,KSW2} in this channel. However, with {\em the combinations} given above, we could always arrive at {\em an alternative solution} that is truly 'nonperturbative' in essence, see Sec. IV.C. This alternative has so far been overlooked in most of the literature. The well-known KSW running could be recovered for $C_0$ in the $^1S_0$ channel, but only at the lowest order of truncation. We should also note that the deductions of the foregoing two subsections (III.C and III.D) do not mean the renormalization group invariant ratios are exact physical numbers, but that they allow for systematic effective field theory corrections according to normal power counting rules, which will be further addressed in Sec. IV.F. 
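The $\Xi$ combinations can be verified directly from the $\mathcal{O}(Q^2)$ expressions for $a^{-1}$, $r_e$ and $v_k$. In the Python sketch below (exact rationals; `kappa` stands for $4\pi/M_N$, and all input values are hypothetical), the combinations reproduce $J_0$, $-D_{S;0}/N_{S;0}$ and $D_{S;1}/N_{S;0}$ for any $k\geq2$, confirming that they are $k$-independent:

```python
from fractions import Fraction as F

# hypothetical exact inputs; kappa stands for 4*pi/M_N
kappa, J0, N0, D0, D1 = F(2), F(1, 3), F(4, 9), F(5), F(-3, 2)

a_inv = kappa*(N0/D0 + J0)                       # 1/a at O(Q^2)
r_e   = 2*kappa*N0*D1/D0**2                      # effective range
v     = lambda k: (-1)**(k - 1)*kappa*N0*D1**k/D0**(k + 1)

for k in (2, 3, 4):                              # combinations are k-independent
    Xi0 = r_e*v(k)/(2*kappa*v(k + 1)) + a_inv/kappa
    Xi1 = 2*kappa*v(k + 1)/(r_e*v(k))
    Xi2 = 2*kappa*v(k + 1)**2/(r_e*v(k)**2)
    assert (Xi0, Xi1, Xi2) == (J0, -D0/N0, D1/N0)
```

Here $M_Nr_ev_k/(8\pi v_{k+1})$ becomes `r_e*v(k)/(2*kappa*v(k+1))` since $M_N/8\pi=1/(2\kappa)$, and likewise for the other two combinations.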
\section{Nonperturbative 'running' in the $^1S_0$ and uncoupled $P$ channels} \subsection{$^1S_0$ channel at orders $\mathcal{O}(Q^0)$ and $\mathcal{O}(Q^2)$}Let us warm up with these two orders\cite{YH}. At order $\mathcal{O}(Q^0)$, one could see from either Eq.(\ref{relation1s0-0}) or the $T$ matrix itself\bea\frac{1}{T(p)}=J_0+i\frac{M_Np}{4\pi}+\frac{1}{C_0}\eea that there is only one constraint, from the scattering length: $a^{-1}=\frac {4\pi}{M_N}\left(C^{-1}_0+J_0\right)$. That is, only at this order and in the $^1S_0$ channel is $J_0(\mu)$ a running parameter; then $C_0(\mu)$ runs as below\bea\label{KSW1,KSW2} C_0(\mu)=\frac{1}{\frac{M_N}{4\pi}a^{-1}-J_0(\mu)},\eea which is exactly the KSW running for $C_0$. This is because there is only one parameter, $J_0$, from the loop integral, which could hence be readily matched by the coupling $C_0$ in the $^1S_0$ channel at the lowest order. However, this is no longer true in higher channels and/or at higher truncation orders, as long as the closed-form $T$ matrices are concerned. Thus, the nonperturbative running couplings begin to show up from truncation order two. To proceed at order $\mathcal{O}(Q^2)$, we introduce the renormalization group invariant ratios\bea\alpha_0\equiv\frac{D_{S;0}}{N_{S;0}},\ \alpha_1\equiv\frac{D_{S;1}}{N_{S;0}}.\eea Then Eqs.(\ref{relation1s0-2}) and the $T$ matrix become the following\bea&&1+\alpha_1J_3=N^{-1}_{S;0},\\\label{relations2T}&&\alpha_0J_3+\alpha_1J_5=[C_0J_3+2C_2J_5]N^{-1}_{S;0},\\& &\frac{1}{T(p)}=J_0+i\frac{M_Np}{4\pi}+\frac{1}{\alpha_0+\alpha_1p^2},\eea where it is obvious that $J_0$ is renormalization group invariant, while $J_3$ and $J_5$ are running parameters and thus denoted as $J_3(\mu)$ and $J_5(\mu)$ henceforth within this order\cite{YH}. Combined with the expression of $N_{S;0}$ listed in Sec. 
II, we have\bea N_{S;0}=\frac{1}{1+\alpha_1J_3(\mu)}=\left[1-C_2J_3(\mu)\right]^2,\eea from which the running $C_2(\mu)$ is easy to obtain:\bea C_{2\pm}(\mu)=\left(1\pm\theta_{S}^{-\frac{1} {2}}(\mu)\right)\frac{1}{J_3(\mu)},\ \theta_{S}(\mu)\equiv N_{S;0}^{-1}=1+\alpha_1J_3(\mu).\eea From the 'boundary condition' for $C_2$: $\left.C_2\right|_{J_{3,5}=0} ={\scriptstyle\frac{1}{2}}\alpha_1,$ we have\bea C_{2}(\mu)=\left(1-\theta_{S}^{-\frac{1}{2}}(\mu)\right)\frac{1}{J_3(\mu)}=\left[1-\left(1-\frac{\alpha_1}{\theta_{S}(\mu)} J_3(\mu)\right)^{\frac{1}{2}}\right]\frac{1}{J_3(\mu)}.\eea Finally, we could find the following running coupling $C_0(\mu)$ from Eq.(\ref{relations2T}):\bea C_{0}(\mu)=\frac{\alpha_0}{\theta_{S}(\mu)}-\left[1-\left(1-\frac{\alpha_1}{\theta_{S} (\mu)}J_3(\mu)\right)^{\frac{1}{2}}\right]^2 \frac{J_5(\mu)}{J_3^2(\mu)}.\eea Parameterizing $[J_{2n+1},n\geq1]$ in terms of a single running scale '$\mu$' as $$J_{2n+1}=\tilde{j}_{2n+1}\frac{M_N}{4\pi}\mu^{2n+1}$$ with $\tilde{j}_{2n+1}$ dimensionless, it is easy to see that the infrared and ultra-violet fixed points of such nonperturbative running couplings are\cite{YH}:\bea&&C^{(IR)}_0=\alpha_0,\ C^{(IR)}_2= {\scriptstyle\frac{1}{2}}\alpha_1,\\&&C^{(UV)}_0=0,\ C^{(UV)}_2=0.\eea If $\tilde{j}_3\alpha_1$ is negative, then the factor $\theta_{S}^{-\frac{1}{2}}(\mu)$ in $C_2$ and $C_0$ would develop a singularity at a finite value of $\mu$, implying that the effective field theory description breaks down at that scale. This phenomenon is quite generic at higher truncation orders, see Sec. IV.B. It is actually consistent with the fact that pionless effective field theory breaks down beyond the scale of the pion mass by definition. This could also happen in coupled channels\cite{AOP}. So, the ultra-violet fixed points obtained here are not trustworthy; they are even problematic, i.e., divergent at higher orders, see Sec. IV.B below. 
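The closed-form solutions above can be checked by direct substitution. In the Python sketch below, the exact hypothetical values are chosen so that $\theta_S$ is a perfect square (making $\theta_S^{-1/2}$ exactly representable); the running $C_2(\mu)$ and $C_0(\mu)$ then rebuild $N_{S;0},D_{S;0},D_{S;1}$ whose ratios reproduce the invariants $\alpha_0,\alpha_1$, as required:

```python
from fractions import Fraction as F

alpha0, alpha1 = F(2), F(5)        # hypothetical RG-invariant ratios
J3, J5 = F(1, 4), F(3, 7)          # running parameters at some scale mu
theta = 1 + alpha1*J3              # theta_S(mu) = 9/4, a perfect square
rt = F(2, 3)                       # exact value of theta**(-1/2)
assert rt**2 == 1/theta

C2 = (1 - rt)/J3                   # running C_2(mu)
C0 = alpha0/theta - (1 - rt)**2*J5/J3**2   # running C_0(mu)

N0 = (1 - C2*J3)**2                # rebuild the polynomial factors
D0 = C0 + C2**2*J5
D1 = 2*C2 - C2**2*J3
assert N0 == 1/theta                          # N_{S;0} = theta_S**-1
assert (D0/N0, D1/N0) == (alpha0, alpha1)     # invariant ratios reproduced
```

Repeating the check with other admissible values of $J_3(\mu),J_5(\mu)$ leaves the ratios unchanged, which is precisely the cancellation between running couplings and running parameters described in Sec. III.C.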
\subsection{$^1S_0$ channel at order $\mathcal{O}(Q^4)$} Now we consider the order $\mathcal{O}(Q^4)$, where things become more complicated. In similar fashion, we introduce the following notations for the renormalization group invariant ratios\bea \beta_i\equiv\frac{N_{S;i}}{N_{S;0}},\ \alpha_i\equiv \frac{D_{S;i}}{N_{S;0}},\eea with which Eqs.(\ref{relation1s0-4}) and the $T$ matrix become\bea\label{C0relation1}&&\beta_2+\alpha_3J_3=0,\ \beta_1+\alpha_2J_3+\alpha_3J_5=0,\nonumber\\&&1+\alpha_1J_3+\alpha_2J_5 +\alpha_3J_7=\frac{1}{N_{S;0}},\\\label{C0relation2}&&\alpha_0J_3+\alpha_1J_5 +\alpha_2J_7+\alpha_3J_9=\frac{C_0J_3+2C_2J_5+(2C_4+\tilde{C}_4)J_7}{N_{S;0}},\\&&\frac{1}{T(p)}= J_0+i\frac{M_Np}{4\pi}+\frac{1+\beta_1p^2+\beta_2p^4}{\alpha_0+\alpha_1p^2+\alpha_2p^4+\alpha_3p^6}.\eea Now it is clear that at this order, $J_0$, $J_3$ and $J_5$ are all renormalization group invariants, with:\bea J_3=-\frac{\beta_2}{\alpha_3},\ J_5=\frac{\alpha_2\beta_2-\beta_1\alpha_3}{\alpha_3^2},\eea while $J_7,J_9$ are running parameters and will be denoted as $J_7(\mu),J_9(\mu)$ henceforth in this subsection. To find the running couplings, let us start with $C_4(\mu)$, which is easiest to obtain: using the expression of $D_{S;3}$ (\ref{appA}) and $D_{S;3}=\alpha_3N_{S;0}$, the result reads\bea&&C_4(\mu) =\pm\alpha_3\left[\beta_2\theta_{S}(\mu)\right]^{-\frac{1}{2}},\nonumber\\&&\theta_{S}(\mu)\equiv N_{S;0}^{-1}=1+\alpha_1J_3+\alpha_2J_5+\alpha_3J_7(\mu).\eea Similarly, the factor $\theta^{-\frac{1}{2}}_S(\mu)$ in $C_4(\mu)$ develops a singularity provided $\tilde{j}_7\alpha_3<0$ (it is reasonable to suppose that $1+\alpha_1J_3+\alpha_2J_5$ is positive), so that $\theta_S=0$ around some finite value of $\mu$, which signals the breakdown of the effective field theory description beyond that scale. 
For later convenience, we introduce the following notation for $C_4(\mu)$:\bea C_4(\mu)=s\alpha_3\left[\beta_2\theta_S(\mu)\right]^{-\frac{1}{2}},\ s^2=1.\eea For the remaining couplings, it is convenient to proceed in the order of $\tilde {C}_4$, then $C_2$ and finally $C_0$. After the elimination of $C_0$ and $C_2$ in the factors $[N_{\cdots}]$ and $[D_{\cdots}]$, we find that\bea&&\tilde{C}_{4\pm}=-\frac{\beta_1}{\eta}+2J_3\frac{\alpha_3\gamma\pm\left(\beta_2\zeta \right)^{\frac{1}{2}}}{\eta^2},\eea with $\gamma, \eta$ and $\zeta$ being functions of $[\alpha_{\cdots},\beta_{\cdots}]$ given in \ref{appD}. In the limits $[J_{\cdots}\Rightarrow0]$, we find that $\tilde{C}_{4\pm}\Rightarrow\alpha_2\pm2C_4,$ so only the negative sign is compatible with the following 'boundary conditions': $\alpha_0\Rightarrow C_0,\ \alpha_1\Rightarrow 2C_2,\ \alpha_2\Rightarrow 2C_4+\tilde{C}_4,\ \alpha_3\Rightarrow0.$ So, in the following solutions of $C_2$ and $C_0$, we will use\bea&&\tilde{C}_{4}=-\frac{\beta_1\eta+2\left[\beta_2\gamma +J_3\left(\beta_2\zeta\right)^{\frac{1}{2}}\right]}{\eta^2}=\frac{\alpha_3}{\beta_2}\Phi_4,\eea with $\Phi_4$ also given in \ref{appE}. To our surprise, $\tilde{C}_4$ is renormalization group invariant, an intricate point impossible to see without exploiting the tight constraints. 
With $C_4(\mu)$ and $\tilde{C}_4$ given above, $C_2$ is then obtained as\bea C_2(\mu)=&&-\frac{\alpha_3}{\beta_2}+\frac{s\alpha_3}{\sqrt{\beta_2\theta_{S}(\mu)}}\left\{\frac{\alpha_2} {\alpha_3}-\frac{\beta_1}{2\beta_2}-\frac{\Phi_4}{2}\left[\frac{\alpha_3}{\beta_2}J_7(\mu)\right.\right.\nonumber\\&&\left.\left.+\left(\frac{\alpha_2}{\alpha_3} -\frac{\beta_1+s\sqrt{\beta_2\theta_S(\mu)}}{\beta_2}\right)^2\right]\right\}.\eea Finally, with $C_4(\mu),\tilde{C}_4$ and $C_2(\mu)$ given above, $C_0$ could simply be solved for using Eq.(\ref{C0relation2})\bea C_0(\mu)=&&\frac{\alpha_0}{\theta_{S}(\mu)}+\left(\frac{\beta_1}{\beta_2}-\frac{\alpha_2}{\alpha_3}\right)\left[\frac{\alpha_1}{\theta_{S}(\mu)} -\frac{2\alpha_3}{\beta_2}+\frac{s\alpha_3}{\sqrt{\beta_2\theta_S(\mu)}}\left(\frac{2\alpha_2}{\alpha_3}-\frac{\beta_1}{\beta_2}\right.\right.\nonumber\\&&\left.\left. -\left(\frac{\alpha_2}{\alpha_3}-\frac{\beta_1+s\sqrt{\beta_2\theta_S(\mu)}}{\beta_2}\right)^2\Phi_4\right)\right]+\frac{\alpha_3}{\beta_2} \left[\frac{\alpha_3}{\beta_2}\Phi_4-\frac{\alpha_2}{\theta_S(\mu)}\right.\nonumber\\&&\left.+\frac{s\alpha_3\left(2-\left(\frac{\beta_1}{\beta_2}-\frac{\alpha_2}{\alpha_3}\right) \Phi_4\right)}{\sqrt{\beta_2\theta_S(\mu)}}\right]J_7(\mu)-\frac{\alpha^2_3}{\beta_2\theta_S (\mu)}J_9(\mu).\eea Again, the presence of the same factor $\theta^{-\frac{1}{2}}_S(\mu)$ in $C_2(\mu)$ and $C_0(\mu)$ means that these running couplings suffer from the same probable singularity as $C_4(\mu)$ does. Thus it is not an 'accident' for $C_4(\mu)$ but true for all the contact couplings, which is actually compatible with the anticipation that pionless effective field theory fails by definition beyond the scale of the pion mass. As a consistency check, we also verified that the running couplings truncated at order $\mathcal{O}(Q^4)$ reproduce the $\mathcal{O}(Q^2)$ ones by taking $\alpha_3\rightarrow0$ and $\alpha_2\rightarrow0$, see \ref{appD}. 
From the above running couplings, it is straightforward to read off the infrared and ultra-violet fixed points qualitatively:\bea&&C^{(IR)}_0=finite,\ C^{(IR)}_2=finite,\ C^{(IR)}_4=finite,\\&& C^{(UV)}_0=\infty,\ C^{(UV)}_2=\infty,\ C^{(UV)} _4=0.\eea It is clear that the ultra-violet fixed points at this order are problematic and not trustworthy, as noted above; in other words, it does not make sense to let $\mu$ go to infinity. Obviously, the remarks made at order $\mathcal{O}(Q^2)$ also apply here. Some remarks are in order: A). It does not make sense to extrapolate to the ultra-violet when there is at least one probable singularity (its position will be denoted as $\mu_{sing}$) in the running couplings beyond the order $\mathcal{O}(Q^0)$, via the 'universal' factor $\sqrt{1/\theta_S(\mu)}$. The running couplings turn into complex ones as $\mu$ goes beyond $\mu_{sing}$, where the effective field theory should be inconsistent. The problematic ultra-violet fixed points of $C_0$ and $C_2$ also warn us against ultra-violet extrapolation. That means the effective field theory only makes sense in a rather narrow window $\mu\in[0,\mu_{sing})$. Plugging in the power counting to be specified in Sec. IV.F, it is easy to see from $\theta_S(\mu)=0$ at order $\mathcal{O}(Q^2)$ that the possible pole is located at $\alpha_1J_3=-1$, thus $$\mu_{sing}=\left(-2\tilde{\alpha}_1\tilde{j}_3\right)^{-1/3}\Lambda_{\not\pi}\sim2^{-1/3} \Lambda_{\not\pi},\ \alpha_1=2\frac{4\pi\tilde{\alpha}_1}{M_N}\Lambda_{\not\pi}^{-3},\ \tilde{\alpha}_1\sim\tilde{j}_3\sim\mathcal{O}(1).$$ A similar result could also be derived at order $\mathcal{O}(Q^4)$. This is in perfect accordance with the fact that effective field theory is applicable only in a limited range. As far as we know, such informative contents have not been appreciated in the literature yet. 
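The estimate of $\mu_{sing}$ can be reproduced numerically. The sketch below (the function name is ours) solves $\theta_S(\mu)=1+\alpha_1J_3(\mu)=0$ for $\mu/\Lambda_{\not\pi}$, assuming the parametrizations $\alpha_1=2(4\pi\tilde{\alpha}_1/M_N)\Lambda_{\not\pi}^{-3}$ and $J_3(\mu)=\tilde{j}_3(M_N/4\pi)\mu^{3}$ used above:

```python
def mu_sing_over_Lambda(alpha1_t, j3_t):
    """Solve theta_S(mu) = 1 + 2*alpha1_t*j3_t*(mu/Lambda)^3 = 0.
    Returns mu_sing/Lambda, or None if no real positive root exists."""
    prod = 2.0*alpha1_t*j3_t
    if prod >= 0.0:
        return None                  # theta_S never vanishes for mu > 0
    return (-prod)**(-1.0/3.0)

# natural-sized inputs with alpha1_t*j3_t < 0 give mu_sing ~ 2**(-1/3)*Lambda
print(mu_sing_over_Lambda(1.0, -1.0))
```

For $\tilde{\alpha}_1\tilde{j}_3\sim-1$ the window closes at $\mu_{sing}\approx0.79\,\Lambda_{\not\pi}$, i.e., slightly below the nominal breakdown scale of the pionless theory, in line with the estimate in remark A).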
That such contents have gone unnoticed is quite understandable: they stem from the tight constraints, which carry this information in encrypted form and are simply missed once the constraints are circumvented one way or another. B). Note that the running behaviors presented above follow from the assumption that $J_{2n+1}\sim\frac{M}{4\pi}\mu^{2n+1}.$ Under other assumptions the running behaviors would evidently differ. Thus, the running behaviors in nonperturbative contexts depend nontrivially on the specific prescription in use, in contrast to perturbative cases. This is in contrast to the ones based on the PDS prescription\cite{KSW1,KSW2}, which eliminates the nontrivial parameters $[J_{2n+1}, n>0]$ that turn out to be the true sources of running in the closed-form $T$ matrices. C). It is clear that the tight constraints at higher orders stop the running of the 'lower' parameters in $[J_{\cdots}]$ and turn them into physical or renormalization group invariant parameters to be determined separately, while the 'highest' ones of $[J_{\cdots}]$ run. This is a novel feature of pionless effective field theory (perhaps true for all effective field theories with similar truncations) in nonperturbative regimes. For example, at order $\mathcal{O}(Q^2)$, the physical or renormalization group invariant parameter is the 'lowest' one, $J_0$, while $J_3$ and $J_5$ would become renormalization group invariant beyond this order. Again, such a scenario has not been considered in most of the literature, which circumvents the tight constraints one way or another. D). A more pleasing and distinctive feature of our approach is that the running couplings at lower orders of truncation (corresponding to lower energies or longer distances) depend on the parameters ($[\alpha_{\cdots};J_{\cdots}]$) governing interactions at higher orders of truncation (corresponding to higher energies or shorter distances), not the other way around.
This is in perfect accordance with the general anticipation that the (UV) renormalization of shorter-distance interactions should be independent of the long-distance processes, while the (UV) renormalization of longer-distance interactions can be affected by the short-distance details. This is true in any channel at any given order of truncation, as can be readily read off from the explicit expressions listed in Sec. IV.A and Sec. IV.B. As a further rationale, we note that such 'directional' dependence naturally accommodates the 'decoupling procedure', by which we mean the procedure of scaling down so that the higher-order parameters tend to vanish ('decouple') and one smoothly goes over to the lower-order results. This also serves as a consistency check of the correctness of our calculations; an instance is given in \ref{appD}. This is in sheer contrast to the KSW scheme and the like, where the situation is reversed: higher-order couplings are affected by the lower-order ones. \subsection{Contrast with the KSW scheme}Here we compare our nonperturbative 'running' with that obtained in Refs.\cite{KSW1,KSW2}. The latter is obtained through a Taylor expansion of $Re\{1/T\}$ in terms of $p^2$ in PDS. Take the $^1S_0$ channel for example: $1/C_0$ conspires with $J_0$ to produce the scattering length $a$. As $J_0$ is unconstrained in such a scheme, we are led to the well-known KSW running of $C_0$ obtained in Refs.\cite{KSW1,KSW2}. The rest of the contact couplings $[C_2,\cdots]$ also 'run' according to the understanding of the expansion in Refs.\cite{KSW1,KSW2}. Here, we can only recover the KSW running for $C_0$ at the lowest truncation order. Once going over to higher truncation orders, the KSW running is lost as long as the closed form is kept intact, no matter what prescription one uses.
For example, working with the closed-form $T$-matrices in PDS, the only superficially prescription-dependent parameter $J_0$ 'stands' alone in the inverted $T$-matrices and cannot mix with the couplings as long as the closed form is kept intact, i.e., not further manipulated:\bea\frac{1}{T_{^1S_0}(p)}=J_0+i\frac{M_N}{4\pi}p+\frac{1}{C_0+2C_2p^2+\cdots}.\eea That means $J_0$ must in principle be determined otherwise, and there is no running at all in such circumstances. Of course, different values of $J_0$ and the couplings would describe different 'physics'\cite{EPL,AOP}. In short, our approach has led us to an alternative and yet natural solution or scenario for effective field theories with truncations, in addition to the KSW scheme and the like, which could at least serve as a supplementary choice. Moreover, PDS is a very special prescription in that it is the least informative: most of the information derived above in a general parametrization is lost. The linear relations degenerate into one trivial line, $N_{L}=1$, for any uncoupled $L$ channel, and the four remarks presented above would also be gone: A'). No trace of the possible $\mu_{sing}$; B'). No running $[J_{\cdots}]$; C'). Only one renormalization group invariant, $J_0$; D'). No sign of 'directional' relations between the interactions at different orders. The drastic change of the informational contents implies that the prescription does matter to a certain degree due to the tight constraints; a more general parametrization is more informative. \subsection{Uncoupled $P$ channels at order $\mathcal{O}(Q^2)$}Here, we present the results for the uncoupled $P$ channels at order $\mathcal{O}(Q^2)$.
Introducing the notation\bea\alpha_{P;0}\equiv \frac{D_{P;0}}{N_{P;0}},\eea Eqs.(\ref{relationP2}) and the $T$ matrix become\bea&&1+\alpha_{P;0}J_3=\theta_{P}(\mu),\ \alpha_{P;0}=C_{P;2}\theta_{P}(\mu),\nonumber\\&&\frac{1}{T(p)}=J_0+i\frac{M_Np}{4\pi}+\frac{1}{\alpha_{P;0}p^2}.\eea Now it is clear that $J_0$ is renormalization group invariant, while $J_3$ is not constrained and hence 'runs'; it is then obvious that\bea C_{P;2}(\mu)=\frac{\alpha_{P;0}} {1+\alpha_{P;0}J_3(\mu)}=\frac{1}{\alpha_{P;0}^{-1}+J_3(\mu)}.\eea The functional form looks similar to that of the $^1S_0$ channel truncated at order $\mathcal{O}(Q^0)$: $C_0\Leftrightarrow C_{P;2}$. It is easy to read off the infrared and ultra-violet fixed points\bea C^{(IR)}_{P;2}=\alpha_{P;0},\ C^{(UV)}_{P;2}=0.\eea At this order, there might be a pole in $C_{P;2}$ at finite $\mu$ provided $\alpha_{P;0}J_3<0$. \subsection{Uncoupled $P$ channels at order $\mathcal{O}(Q^4)$}Again, we first introduce the notations\bea\alpha_{P;i}\equiv\frac{D_{P;i}}{N_{P;0}},\ \beta_{P;i}\equiv\frac{N_{P;i}} {N_{P;0}},\eea then we have from Eqs.(\ref{NDrelationP})\bea&&\beta_{P;1}+\alpha_{P;1}J_3=0,\nonumber\\&&1+\alpha_{P;0}J_3+\alpha_{P;1}J_5=\theta_P(\mu),\nonumber\\&& \alpha_{P;0}J_5+\alpha_{P;1}J_7=[C_{P;2}J_5+2C_{P;4}J_7]\theta_P(\mu),\\&&\frac{1}{T(p)}=J_0+i\frac{M_Np}{4\pi}+\frac{1+\beta_{P;1}p^2}{\alpha_{P;0} p^2+\alpha_{P;1}p^4}.\eea From these equations we see that $J_0$ and $J_3$ are renormalization group invariant, while $J_5$ and $J_7$ 'run'.
Imposing boundary conditions similar to those in the $^1S_0$ case, we find the following running couplings in the nonperturbative regime:\bea C_{P;4}(\mu)&&=\left[1-\left(1-\frac{\alpha_{P;1}}{\theta_{P}(\mu)}J_5(\mu)\right)^{\frac{1}{2}}\right]\frac{1}{J_5(\mu)},\\ C_{P;2}(\mu)&&=\frac{\alpha_{P;0}}{\theta_{P}(\mu)}-\left[1-\left(1-\frac{\alpha_{P;1}}{\theta_{P}(\mu)}J_5(\mu)\right)^{\frac{1}{2}}\right]^2\frac{J_7(\mu)}{J^2_5(\mu)}.\eea The infrared and ultra-violet fixed points can be found from the foregoing expressions:\bea&&C^{(IR)}_{P;2}=\alpha_{P;0},\ C^{(IR)}_{P;4}={\scriptstyle\frac{1}{2}}\alpha_{P;1},\\& &C^{(UV)}_{P;2}=0,\ C^{(UV)}_{P;4}=0.\eea Note again the interesting similarity in the functional forms of the nonperturbative running couplings between the $^1S_0$ channel and the $P$ channels at proportionate truncation orders: $C_{0}\Leftrightarrow C_{P;2},\ C_{2}\Leftrightarrow C_{P;4}.$ We speculate that this similarity may persist at higher truncation orders. The remarks made above in Sec. IV.B also apply to the uncoupled $P$ channels. \subsection{Power counting, truncation and related issues}At this point, one may ask how power counting is manifested in our approach. The answer lies in the renormalization group invariant ratios $[N_{\cdots}/N_{\cdots;0},D_{\cdots}/N_{\cdots;0}]$ or $[\beta_{\cdots},\alpha_{\cdots}]$ that parameterize the closed-form $T$ matrices. That means, for a power counting to be meaningful for an effective field theory, it must manifest itself in the renormalized objects.
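The infrared fixed points of the $P$-channel couplings quoted earlier in this subsection can be checked symbolically. The sketch below (symbol names are ours) expands the running couplings for small $J_5$ at fixed $\theta_P$; sending $\theta_P\rightarrow1$ and $J_7\rightarrow0$ afterwards, as appropriate in the infrared, reproduces $C^{(IR)}_{P;2}=\alpha_{P;0}$ and $C^{(IR)}_{P;4}=\frac{1}{2}\alpha_{P;1}$:

```python
import sympy as sp

# alpha_{P;0}, alpha_{P;1}, J_5, J_7 and theta_P as independent symbols
a0, a1, J5, J7, th = sp.symbols('alpha_P0 alpha_P1 J5 J7 theta_P', positive=True)

root = sp.sqrt(1 - a1 * J5 / th)
C_P4 = (1 - root) / J5
C_P2 = a0 / th - (1 - root) ** 2 * J7 / J5 ** 2

# leading behaviour as J5 -> 0 (the infrared direction)
lim4 = sp.limit(C_P4, J5, 0)
lim2 = sp.limit(C_P2, J5, 0)
assert sp.simplify(lim4 - a1 / (2 * th)) == 0
assert sp.simplify(lim2 - (a0 / th - a1 ** 2 * J7 / (4 * th ** 2))) == 0

# theta_P -> 1 and J7 -> 0 then give the quoted infrared fixed points
assert sp.simplify(lim4.subs(th, 1) - a1 / 2) == 0
assert sp.simplify(lim2.subs({th: 1, J7: 0}) - a0) == 0
```

The ultra-violet statements $C^{(UV)}_{P;2}=C^{(UV)}_{P;4}=0$ follow in the same expansion once $J_5,J_7$ grow without bound, as implied by the scaling assumption on $[J_{2n+1}]$.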
As a very naive guess for such a power counting, we may have\bea\label{EFTPC}|\beta_{L;i}| \sim\Lambda_{\not\pi}^{-2i},\ |\alpha_{L;j}|\sim\eta\frac{4\pi}{M_N}\Lambda^{-2j-2L-1}_{\not\pi},\ \forall i>0,\ j\geq0,\ \eta=1(\texttt{\small diagonal}),2(\texttt{\small off-diagonal}).\eea With such a power counting, and noting that the renormalization group invariant $J_0$ can be counted as $J_0\sim|\alpha^{-1}_{S;0}|\sim\frac{M_N}{4\pi}\Lambda_{\not\pi}$, one arrives at a large scattering length provided $J_0$ and $\alpha_{S;0}$ are of opposite sign, namely,\bea a^{-1}=-\frac{4\pi}{M_N}\left(J_0+\alpha^{-1}_{S;0}\right)\sim-\mathcal{O}(\Lambda_{\not\pi}) +\mathcal{O}(\Lambda_{\not\pi})\sim\mathcal{O}(\epsilon\Lambda_{\not\pi}),\ \epsilon\ll1.\eea This is natural to achieve, in contrast to the KSW scheme, which suffers from extra large ERE form factors and other problems; see Refs.\cite{Cohen,FMS} and references therein. In other words, it is the cooperation of the renormalization group invariant ratios AND the renormalization group invariant parameter $J_0$ that determines whether the scattering length is unnaturally large or not. Thus we may simply work out a natural power counting for the ratios $[\beta_{\cdots},\alpha_{\cdots}]$ that can accommodate both natural and unnatural scenarios for $NN$ scattering through an appropriate choice of $J_0$ (more complicated situations may be achieved as more renormalization group invariant parameters $[J_{\cdots}^{\texttt{\tiny(phys)}}]$ become available). It is clear from our presentation that at each order of truncation, the parameters $[J_{2k+1},k>0]$ with lower mass dimensions become 'unmatched' with the effective field theory couplings and hence renormalization group invariants, while the ones with the highest mass dimensions 'run'. Thus in the limit $\Delta\rightarrow\infty$, all $[J_{\cdots}]$ would become renormalization group invariants, and the running couplings would tend to renormalization group invariants, too.
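The near-cancellation mechanism for the large scattering length is elementary to check numerically. In the toy sketch below the scales and the degree of cancellation $\epsilon=0.05$ are assumed purely for illustration; the inverse scattering length then comes out of order $\epsilon\Lambda_{\not\pi}$, i.e., unnaturally small:

```python
import numpy as np

# Toy inputs (assumed, not fitted): nucleon mass and Lambda_pislash in MeV
M_N, Lam = 939.0, 140.0
J0 = (M_N / (4 * np.pi)) * Lam          # J_0 ~ (M_N / 4 pi) Lambda
eps = 0.05                              # assumed degree of cancellation
alpha_S0_inv = -(1.0 - eps) * J0        # alpha_{S;0}^{-1}: opposite sign, nearly cancelling J_0

# a^{-1} = -(4 pi / M_N) (J_0 + alpha_{S;0}^{-1}) = -eps * Lambda for these inputs
a_inv = -(4 * np.pi / M_N) * (J0 + alpha_S0_inv)

assert np.isclose(abs(a_inv), eps * Lam)    # |a^{-1}| = 7 MeV << Lambda = 140 MeV
```

Both terms are individually of order $\Lambda_{\not\pi}$; only their sum is small, which is the content of the counting displayed in Eq.(\ref{EFTPC}) combined with an appropriate $J_0$.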
From the underlying theory perspective, the restoration of renormalization group invariance in this limit is no surprise. As $\Delta\rightarrow\infty$, the truncation tends to be removed completely, so a complete description would be recovered. This is a general claim from the effective field theory/underlying theory duality perspective that should be applicable to all consistent field theories. It is natural to see that the effective field theory power counting rules delineated in Eq.(\ref{EFTPC}) automatically allow for corrections/adjustments of the renormalization group invariant ratios $[\beta_{\cdots},\alpha_{\cdots}]$ in a systematic manner, in order to yield physical ERE factors. At a given order of effective field theory truncation $\Delta$, the differences between the exact values and the theoretical values of these ratios are of higher order:\bea\frac{\beta_{L;i}^{(\Delta)} -\beta_{L;i}^{(\texttt{\tiny phys})}}{\beta_{L;i}^{(\Delta)}}=\mathcal{O}\left(\frac{Q^{\delta}} {\Lambda_{\not\pi}^{\delta}}\right),\quad\frac{\alpha_{L;j}^{(\Delta)}-\alpha_{L;j}^{(\texttt{\tiny phys})}}{\alpha_{L;j}^{(\Delta)}}=\mathcal{O}\left(\frac{Q^{\delta^\prime}} {\Lambda_{\not\pi}^{\delta^\prime}}\right),\quad\delta\geq1,\ \delta^\prime\geq1.\eea Of course, these power counting rules can be readily transcribed into the effective field theory couplings via the renormalization group equations given in Sec.III.C and Sec.III.D, which are expressed in terms of the prescription independence of the ratios or of their algebraic combinations, i.e., the physical ERE parameters. We should note that these renormalization group equations are derived in exactly the same fashion as in the standard algorithm for deriving renormalization group equations, where prescription independence of certain objects is the starting point, involving no additional assumptions. So far, our discussion has been limited to the simpler pionless effective field theory. The realistic situations with pion exchanges would make the running behaviors more complicated.
For example, rather than simple fixed points, there might be limit cycles\cite{limitcycleWil,limitcycle1,limitcycle2}. In our presentation, the complicated running behaviors might seem to stem from truncations in nonperturbative contexts. In this connection, we note that the extra divergences or parameters, which arise in effective field theory because truncations do not commute with loop integrations, must correspond to some well-defined quantities if one could calculate within the underlying theory. Therefore, the tight constraints and the running behaviors thus derived must reflect at least part of the 'truth', provided the effective field theory is a rational one. Before closing this section, we wish to issue the following remarks: First, the primary distinction between our renormalization group equations and others is that the tight constraints are intensively exploited, with a general parametrization of loop integrals, in demanding that physical properties or observables be insensitive to variations in prescription. In Wilsonian or exact renormalization group equations\cite{Birse,Harada}, there is no room for more sophisticated situations (Sec. III.B), hence one may be led to flawed judgement about effective field theory power counting\cite{EGeM}. Second, therefore, the most natural starting point for deriving power counting is simply the basic idea of a low energy expansion of a fundamental field theory in terms of local interactions of low energy effective degrees of freedom, and we would generically arrive at a natural power counting. Both natural and unnatural low energy behaviors can be achieved with the help of parameters like $J_0$, in accordance with the tight constraints. Third, a general parametrization would actually allow us to accommodate the St\"uckelberg--Petermann form of the renormalization group equation\cite{Stuek}. Fourth, as is shown in Sec.
IV.B, the (probable) singularities in the nonperturbative running couplings mean that the effective theory is only consistent within a limited window. In contrast, as the sliding scale is not limited at all, it is somewhat inconsistent to apply Wilsonian or exact renormalization group equations to effective field theories that only make sense below a finite scale. Therefore, our presentation above at least provides an important conceptual supplement to renormalization group equations in effective field theory contexts. \section{Summary}In this report, we have examined the running behaviors of the contact couplings of pionless effective field theory in nonperturbative contexts for the $^1S_0$ and uncoupled $P$ channels. Starting with the closed-form $T$ matrices in a general parametrization of divergent integrals, some interesting linear relations among the factors and parameters of the $T$ matrices were presented. Working from the underlying theory perspective, we arrived at novel renormalization group equations by exploiting the tight constraints imposed by the closed-form $T$ matrices and obtained running couplings that exhibit probable singularities. We then demonstrated that the novel renormalization group equations and their solutions can be quite informative about the physics delineated by an effective field theory and its underlying theory, provided that the tight constraints imposed by the closed-form $T$-matrices are adequately exploited rather than discarded. Brief comparisons with other literature were also presented. We also wish to note the following distinctive aspects of our study of renormalization group equations: 1). The starting point is the underlying theory perspective, in the sense of being UV complete. 2). A general parametrization of the loop integrals in effective field theory is employed and proves to be crucial. 3). The tight constraints are exploited rather than circumvented, making certain parameters 'physical'. 4).
The renormalization group equations are derived within the circumstances of tight constraints. 5). The running couplings are tractable from such tightly constrained renormalization group equations. 6). More (globally) structural and self-consistent properties of the effective field theory are encrypted into the running couplings due to the tight constraints. 7). It is shown that the Wilsonian renormalization group equation and the like are incompatible with tight constraints due to the rigidity of the cutoff regulator, and hence probably problematic for issues with tight constraints. 8). The power counting should be derived naturally right from the basic idea of effective field theory rather than from renormalization group equations. For most nonperturbative problems, a relativistic or covariant framework is very difficult to come by; one is often forced to work with certain non-relativistic expansions and other truncations and then encounters tight constraints of various forms. Our discussion here amounts to pointing out an alternative way to make sense of these constraints rather than resorting to means of circumventing them altogether. \section*{Acknowledgement}We are deeply grateful to the anonymous referees for their criticisms that greatly improved the presentation of this manuscript. This project is supported in part by the National Natural Science Foundation of China under Grant No. 11435005 and by the Ministry of Education of China.
\section*{Introduction}\label{intro} \numberwithin{equation}{section} The notion of {\em vertex algebra}, originally introduced by Borcherds \cite{B}, presents a remarkable connection between mathematics and theoretical physics. The vertex algebra theory led to important breakthroughs in multiple areas such as automorphic forms, finite simple groups and soliton equations; see, e.g., the books by E. Frenkel and Ben-Zvi \cite{FBZ}, I. Frenkel, Lepowsky and Meurman \cite{FLM} and Kac \cite{Kac2}. Some of the most extensively studied examples of vertex algebras come from the theory of affine Kac--Moody Lie algebras; see the books by Kac \cite{Kac} and Lepowsky and Li \cite{LLi}. Motivated by a parallel between the development of the theories of affine Lie algebras and quantum affine algebras, as well as by further applications to two-dimensional statistical models and the quantum Yang--Baxter equation, I. Frenkel and Jing \cite{FJ} formulated a fundamental problem of generalizing the vertex algebra theory to the quantum case. The notion of {\em quantum vertex algebra} was introduced by Etingof and Kazhdan \cite{EK} based on the ideas of E. Frenkel and Reshetikhin \cite{FR2}. The examples of quantum vertex algebras were constructed in \cite{EK} as quantizations of the quasiclassical structure on the universal affine vertex algebra in type $A$ when the classical $R$-matrix is of rational, trigonometric and elliptic type. Recently, a structure theory of quantum vertex algebras was developed by De Sole, Gardini and Kac \cite{DGK} and the Etingof--Kazhdan construction was generalized to the rational $R$-matrix in types $B$, $C$ and $D$ by Butorac, Jing and the author \cite{BJK}. On the other hand, several more general related notions, in particular, of {\em $h $-adic nonlocal vertex algebra} and of its {\em module}, were introduced and extensively studied by Li \cite{Li}. 
They present analogues of the corresponding notions, coming from the Li nonlocal vertex algebra theory \cite{LiG1} and the Bakalov--Kac field algebra theory \cite{BK}, which are defined over the commutative ring $\mathbb{C}[[h]]$, thus being compatible with Etingof--Kazhdan's theory. Moreover, the notion of $h$-adic nonlocal vertex algebra module, which presents a generalization of vertex algebra module, appears to provide the right setting for the study of representations of double Yangians and of Etingof--Kazhdan's quantum vertex algebras associated with the rational $R$-matrix; see \cite{Li} and \cite{c11} respectively. However, Li's subsequent results \cite{Li1} suggest that the solution of the original Frenkel--Jing problem of associating quantum vertex algebras to quantum affine algebras requires a new concept of {\em $\phi$-coordinated module}. Following such an approach, Li, Tan and Wang \cite{LTW} recently established a correspondence between restricted modules for the Ding--Iohara algebra of level $0$ associated with the affine Lie algebra $\widehat{\mathfrak{sl}}_2$ \cite{DI} and $\phi$-coordinated modules for certain quantum vertex algebra. The definition of a $\phi$-coordinated module $W$ for a quantum vertex algebra $V$, as given in \cite{Li1}, is characterized by a certain deformed version of the weak associativity property. Roughly speaking, it requires that the expressions $$ \big((z_1-z_2)^p\hspace{1pt} Y_W(u,z_1)Y_W(v,z_2)\big)\big|_{z_1=\phi(z_2,z_0)} \big. \quad\text{and}\quad (\phi(z_2,z_0) -z_2)^p\hspace{1pt} Y_W\left(Y(u,z_0)v,z_2\right) $$ coincide for all $u,v\in V$, where $Y(z)$ is the vertex operator map on $V$, $Y_W(z)$ the $\phi$-coordinated module map, $\phi(z_2,z_0)\in\mathbb{C}((z_2))[[z_0]]$ an associate of the one-dimensional additive formal group and $p\geqslant 0$ an integer depending on $u,v$. 
While setting $\phi(z_2,z_0)=z_2 +z_0$ leads to the usual weak associativity property, a different choice of the associate appears to be required in order to adapt the theory to quantum affine algebras; see \cite{Li1}. Let $\mathfrak{g}_N=\mathfrak{gl}_N ,\mathfrak{sl}_N$. In this paper, we consider the quantum affine vertex algebra $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ associated with the trigonometric $R$-matrix, as defined by Etingof and Kazhdan \cite{EK}. We should mention that $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ can be also regarded as an associative algebra over $\mathbb{C}[[h]]$, which is topologically generated by the coefficients of certain Taylor series organized into the matrix $T^+(u)\in\mathop{\mathrm{End}}\mathbb{C}^N \otimes \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N) [[u]]$, subject to certain dual Yangian-type defining relations. As a quantum vertex algebra, its vertex operator map $Y(z)$ is given in the form of {\em quantum currents} $\mathcal{T} (u)$, which go back to Reshetikhin and Semenov-Tian-Shansky \cite{RS}. 
Furthermore, the $\mathcal{S}$-locality property for $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$, which is a quantum analogue of the locality in the corresponding affine vertex algebra, comes from the {\em quantum current commutation relation} which, in this particular setting, can be expressed as \begin{align} &\mathcal{T}_1(u_1)R_{21} (e^{-u_1 +u_2 -hc} ) \mathcal{T}_2(u_2) R_{21}(e^{-u_1 +u_2})^{-1}\nonumber\\ &\qquad\qquad= R_{12}(e^{-u_2 +u_1})^{-1}\mathcal{T}_2(u_2)R_{12}(e^{-u_2+u_1-hc})\mathcal{T}_1(u_1),\label{intr1} \end{align} where $R(x)=R_{12}(x)$ is the trigonometric $R$-matrix of type $A$.\footnote{We explain the precise meaning of relations \eqref{intr1} and \eqref{intr2} in Subsection \ref{subsec021}.} On the other hand, the original quantum current commutation relation in \cite{RS} is given in the multiplicative form, \begin{align} &\mathcal{L}_1(x_1) R_{21}(x_2 e^{-hc}/ x_1) \mathcal{L}_2(x_2) R_{21}(x_2 /x_1)^{-1}\nonumber\\ &\qquad\qquad= R_{12}(x_1 /x_2)^{-1}\mathcal{L}_2(x_2 ) R_{12}(x_1 e^{-hc}/ x_2)\mathcal{L}_1(x_1).\label{intr2} \end{align} Its significance comes from Ding's quantum current realization of the quantum affine algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$ \cite{D}, which relies on the famous Ding--Frenkel isomorphism \cite{DF}. The algebra generators are given as coefficients of matrix entries of the {\em quantum current} $\mathcal{L}(x)$, so that $ \mathcal{L}(x)$ belongs to $ \mathop{\mathrm{End}}\mathbb{C}^N\otimes {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N) [[x^{\pm 1}]] $, while the defining relations at the level $c\in\mathbb{C}$ are given by \eqref{intr2}, along with one more family of relations in the $\mathfrak{g}_N= \mathfrak{sl}_N$ case. As in \cite[Sect. 
5]{Li1}, in this paper we consider the $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$-modules for the associate $\phi(z_2,z_0)=z_2 e^{z_0}$, which connects commutation relations \eqref{intr1} and \eqref{intr2}. More specifically, by applying the substitutions $x_i=ze^{u_i}$ with $i=1,2$, multiplicative relation \eqref{intr2} takes the additive form as in \eqref{intr1}. It is worth noting that both additive and multiplicative forms of the trigonometric $R$-matrix naturally occur in the theories of quantum groups and exactly solvable models; see \cite{PS,FRT,J}. As in the rational $R$-matrix case \cite{c11}, the multiple copies of quantum currents $\mathcal{L}(x_i)$ with $i=1,\ldots ,n$ can be organized into the operators $\mathcal{L}_{[n]}(x_1,\ldots,x_n)$ in the variables $x_1,\ldots ,x_n$, which satisfy a certain generalized version of commutation relation \eqref{intr2}. Roughly speaking, such operators take the place of the normal-ordered products of $n$ quantum currents. In particular, for any {\em restricted} $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$-module, i.e. for any module $W$ such that $\mathcal{L}(x)w$ belongs to $\mathop{\mathrm{End}}\mathbb{C}^N\otimes W ((x))[[h]]$ for all $w\in W$, the series $\mathcal{L}_{[n]}(x_1,\ldots,x_n)w$ possesses only finitely many negative powers of the variables $x_1,\ldots ,x_n$ modulo $h^k$ for all $k\geqslant 0$ and $w\in W$. By combining Ding's quantum current realization \cite{D} with Li's theory of $\phi$-coordinated modules \cite{Li1} and Cherednik's fusion procedure for the trigonometric $R$-matrix \cite{C} in the $\mathfrak{g}_N=\mathfrak{sl}_N$ case, we establish the following correspondence between restricted modules for the quantum affine algebra and $\phi$-coordinated modules for the Etingof--Kazhdan quantum vertex algebra, which is the main result of this paper. \begin{mthm*}\label{mainthm1} Let $\mathfrak{g}_N=\mathfrak{gl}_N ,\mathfrak{sl}_N$.
Let $W$ be a restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$-module of level $c\in\mathbb{C}$. There exists a unique structure of $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$-module on $W$, where $\phi(z_2,z_0) = z_2 e^{z_0}$, such that \begin{equation}\label{formula} Y_W(T_{[n]}^+(u_1,\ldots ,u_n)\mathop{\mathrm{\boldsymbol{1}}},z) = \mathcal{L}_{[n]}(x_1,\ldots ,x_n)\big|_{x_1 = ze^{u_1},\ldots, x_n = ze^{u_n}}\big. \quad\text{for all }n\geqslant 1. \end{equation} Conversely, let $(W,Y_W)$ be a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$-module, where $\phi(z_2,z_0) = z_2 e^{z_0}$. There exists a unique structure of restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$-module of level $c$ on $W$ such that \begin{equation}\label{moduleformula} \mathcal{L} (z) = Y_W (T^+(0)\hspace{-1pt}\mathop{\mathrm{\boldsymbol{1}}}, z). \end{equation} Moreover, a topologically free $\mathbb{C}[[h]]$-submodule $W_1$ of $W$ is a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c} (\mathfrak{g}_N)$-submodule of $W$ if and only if $W_1$ is a $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$-submodule of $W$. \end{mthm*} In order to establish this correspondence, some minor modifications had to be made to the definitions of quantum affine algebra and $\phi$-coordinated module. More specifically, both notions were redefined over the ring $\mathbb{C}[[h]]$ and suitably completed, so that they are compatible with Etingof--Kazhdan's definition of quantum vertex algebra. Finally, we recall that the universal affine vertex algebra, which governs the representation theory of the corresponding affine Lie algebra $\widehat{\mathfrak{g}}_N$, is constructed on the vacuum module over the universal enveloping algebra $ {\rm U} (\widehat{\mathfrak{g}}_N)$; see \cite{FZ,Lian}.
In contrast, $\overline{\mathcal{V}}_{\hspace{-1pt}c} (\mathfrak{g}_N)$ is not the vacuum module over $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$, although its quantum vertex algebra structure turns into the corresponding affine vertex algebra in the classical limit. Furthermore, it is not clear whether the {\em vacuum module $\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ at the level $c$} over the quantum affine algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$ possesses any natural quantum vertex algebra-like structure that governs the representation theory of $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$. However, we have the following simple consequence of the \hyperref[mainthm1]{Main Theorem}: \begin{kor}\label{maincor} Let $\mathfrak{g}_N=\mathfrak{gl}_N ,\mathfrak{sl}_N$. The vacuum module $\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ over the quantum affine algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$ is a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$-module. Moreover, $\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ is an irreducible $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$-module if and only if it is an irreducible $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c} (\mathfrak{g}_N)$-module. \end{kor} The paper is organized as follows. In Sections \ref{sec02} and \ref{sec0102}, we introduce the notation and provide preliminary definitions and results on restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$-modules and on $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c} (\mathfrak{g}_N)$-modules respectively. In Section \ref{sec05}, we prove the \hyperref[mainthm1]{Main Theorem}. 
Finally, in Section \ref{newsec02}, we discuss a connection between the families of central elements of the quantum affine algebra and the quantum affine vertex algebra established by $\phi$-coordinated module map \eqref{formula}. \section{Restricted modules for the quantum affine algebra}\label{sec02} In this section, we recall some basic properties of the trigonometric $R$-matrix of type $A$. Next, we recall Ding's quantum current realization of the quantum affine algebra in type $A$ and the corresponding notion of restricted module. Also, we derive certain properties of the quantum currents which are required in the following sections. Finally, we demonstrate how the \hyperref[mainthm1]{Main Theorem} implies Corollary \ref{maincor}. \subsection{Trigonometric \texorpdfstring{$R$}{R}-matrix}\label{sec01} We use the standard tensor notation, i.e. for any $$A=\sum_{i,j,k,l=1}^N a_{ijkl} \hspace{1pt} e_{ij}\otimes e_{kl}\,\in\, \mathop{\mathrm{End}}\mathbb{C}^N\otimes\mathop{\mathrm{End}}\mathbb{C}^N$$ and indices $r,s=1,\ldots , m$ such that $r\neq s$, where $m\geqslant 2$ and $e_{ij}\in\mathop{\mathrm{End}} \mathbb{C}^N$ are the matrix units, we denote by $A_{rs}$ the element of the algebra $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes m}$, \begin{equation}\label{notation} A_{rs}=\sum_{i,j,k,l=1}^N a_{ijkl}\hspace{1pt} (e_{ij})_r (e_{kl})_s,\quad\text{where}\quad (e_{ij})_p = 1^{\otimes (p-1)} \otimes e_{ij} \otimes 1^{\otimes{(m-p)}}. \end{equation} Let $N\geqslant 2$ be an integer and $h$ a formal parameter. 
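The embedding \eqref{notation} is straightforward to realize in components. The following numerical sketch (an illustration of the notation only; the helper names are ours) builds $A_{rs}$ from Kronecker products and checks, for $m=2$, that $A_{12}=A$ and $A_{21}=PAP$ with $P$ the swap operator, and, for $m=3$, that $A_{12}=A\otimes1$:

```python
import numpy as np

N = 2

def unit(i, j):
    # matrix unit e_ij in End(C^N)
    M = np.zeros((N, N))
    M[i, j] = 1.0
    return M

def emb(A, r, s, m):
    # A_{rs}: place A = sum a_{ijkl} e_ij (x) e_kl at sites r, s (1-based)
    # of the m-fold tensor product, identities elsewhere
    a = A.reshape(N, N, N, N)   # a[i, k, j, l] = a_{ijkl}
    out = np.zeros((N ** m, N ** m))
    for i in range(N):
        for j in range(N):
            for k in range(N):
                for l in range(N):
                    ops = [np.eye(N)] * m
                    ops[r - 1] = unit(i, j)
                    ops[s - 1] = unit(k, l)
                    term = ops[0]
                    for O in ops[1:]:
                        term = np.kron(term, O)
                    out += a[i, k, j, l] * term
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((N * N, N * N))

# swap operator P = sum e_ij (x) e_ji on C^N (x) C^N
P = sum(np.kron(unit(i, j), unit(j, i)) for i in range(N) for j in range(N))

assert np.allclose(emb(A, 1, 2, 2), A)                   # A_{12} = A for m = 2
assert np.allclose(emb(A, 2, 1, 2), P @ A @ P)           # A_{21} = P A P
assert np.allclose(emb(A, 1, 2, 3), np.kron(A, np.eye(N)))   # A_{12} = A (x) 1 for m = 3
```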
Introduce the trigonometric $R$-matrix of type $A$ by \begin{align} {\overline{R}}(x) =&\sum_{i=1}^N e_{ii}\otimes e_{ii} + e^{-h/2} \frac{1-x}{1-e^{-h}x}\sum_{\substack{i,j=1\\i\neq j}}^N e_{ii}\otimes e_{jj}\nonumber\\ &+\frac{\left(1-e^{-h}\right)x}{1-e^{-h}x}\sum_{\substack{i,j=1\\i> j}}^N e_{ij}\otimes e_{ji} +\frac{1-e^{-h}}{1-e^{-h}x}\sum_{\substack{i,j=1\\i< j}}^N e_{ij}\otimes e_{ji}.\label{Rbar} \end{align} $R$-matrix \eqref{Rbar} can be regarded as a rational function in the variables $x$ and $e^{h/2}$, i.e. as an element of $(\mathop{\mathrm{End}}\mathbb{C}^N )^{\otimes 2} (x,e^{h/2})$. It satisfies the {\em Yang--Baxter equation} \begin{equation}\label{YBE} {\overline{R}}_{12}(x/y) {\overline{R}}_{13}(x) {\overline{R}}_{23}(y)= {\overline{R}}_{23}(y) {\overline{R}}_{13}(x) {\overline{R}}_{12}(x/y) \end{equation} and it possesses the {\em unitarity property} \begin{equation}\label{unitrig} {\overline{R}}_{12}(x)\hspace{1pt} {\overline{R}}_{21}(1/x) =1, \end{equation} where, in accordance with \eqref{notation}, the subscripts indicate the copies in the tensor product algebra $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes m}$ with $m=3$ in \eqref{YBE} and $m=2$ in \eqref{unitrig}. Recall the formal Taylor Theorem, \begin{equation}\label{taylor} b(z+z_0)=e^{z_0\frac{\partial}{\partial z}} b(z)= \sum_{k=0}^\infty \frac{z_0^k}{k!} \frac{\partial^k}{\partial z^k} b(z)\quad\text{for}\quad b(z)\in V[[z^{\pm 1}]], \end{equation} where $V$ is a vector space. 
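As a side remark, not part of the formal development, properties \eqref{YBE} and \eqref{unitrig} can be sanity-checked numerically for small $N$ by specializing $e^{-h/2}$ and the spectral parameters to generic complex values. The following sketch does this for $N=2$; the helper names `Rbar` and `emb` and the chosen numerical values are ours.

```python
import numpy as np

N = 2
q = 0.6          # numerical stand-in for e^{-h/2}, so that e^{-h} = q**2
x, y = 0.3, 1.7  # generic spectral parameters, away from the poles of Rbar

def E(i, j):
    m = np.zeros((N, N)); m[i, j] = 1.0; return m  # matrix unit e_{ij}

def Rbar(x):
    """The trigonometric R-matrix, built term by term from its four sums."""
    R = sum(np.kron(E(i, i), E(i, i)) for i in range(N))
    R = R + sum(q * (1 - x) / (1 - q**2 * x) * np.kron(E(i, i), E(j, j))
                for i in range(N) for j in range(N) if i != j)
    R = R + sum((1 - q**2) * x / (1 - q**2 * x) * np.kron(E(i, j), E(j, i))
                for i in range(N) for j in range(N) if i > j)
    R = R + sum((1 - q**2) / (1 - q**2 * x) * np.kron(E(i, j), E(j, i))
                for i in range(N) for j in range(N) if i < j)
    return R

def emb(R, m, r, s):
    """A_{rs}: embed R = sum a_{ijkl} e_ij (x) e_kl into factors r, s of m factors."""
    a = R.reshape(N, N, N, N)  # a[i, k, j, l] is the coefficient of e_ij (x) e_kl
    out = np.zeros((N**m, N**m))
    for i in range(N):
        for j in range(N):
            for k in range(N):
                for l in range(N):
                    fac = [np.eye(N)] * m
                    fac[r - 1], fac[s - 1] = E(i, j), E(k, l)
                    term = fac[0]
                    for f in fac[1:]:
                        term = np.kron(term, f)
                    out = out + a[i, k, j, l] * term
    return out

# Yang-Baxter equation: R12(x/y) R13(x) R23(y) = R23(y) R13(x) R12(x/y)
lhs = emb(Rbar(x / y), 3, 1, 2) @ emb(Rbar(x), 3, 1, 3) @ emb(Rbar(y), 3, 2, 3)
rhs = emb(Rbar(y), 3, 2, 3) @ emb(Rbar(x), 3, 1, 3) @ emb(Rbar(x / y), 3, 1, 2)
assert np.allclose(lhs, rhs)

# unitarity: R12(x) R21(1/x) = 1
assert np.allclose(emb(Rbar(x), 2, 1, 2) @ emb(Rbar(1 / x), 2, 2, 1), np.eye(N * N))
```

Both assertions pass for any generic choice of $q$, $x$, $y$ away from the poles of $\overline{R}(x)$.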
Due to \eqref{taylor}, we can regard the $R$-matrix $ {\overline{R}}(x)$ as an element of $(\mathop{\mathrm{End}}\mathbb{C}^N )^{\otimes 2} [[x,h]]$ via the expansion \begin{equation}\label{exp} \frac{1}{1-e^{ah}x}= \frac{1}{1-(x+(e^{ah}-1)x)}= \sum_{k=0}^{\infty} \frac{(e^{ah}-1)^k x^k}{k!}\frac{\partial^k}{\partial x^k}\left(\frac{1}{1-x}\right), \quad a\in\mathbb{C}, \end{equation} where \begin{equation}\label{expansions} e^{ah}=\sum_{k\geqslant 0} (ah)^k /k!\in\mathbb{C}[[h]]\quad\text{and}\quad (1-x)^{-1}=\sum_{k\geqslant 0}x^k\in\mathbb{C}[[x]]. \end{equation} Due to \cite{FR}, there exists a unique series $f_q (x)$ in $\mathbb{C}(q)[[x]]$ such that \begin{equation}\label{fqh} f_q (xq^{2N}) = f_q (x)\frac{\left(1-xq^2\right)\left(1-xq^{2N-2}\right)}{\left(1-x\right)\left(1-xq^{2N}\right)}. \end{equation} As demonstrated in \cite{KM}, the series $f_q (x)$ can be expressed as \begin{equation}\label{f2} f_q (x)=1+\sum_{k=1}^{\infty} f_{q,k} \left(\frac{x}{1-x}\right)^k, \end{equation} where all $f_{q,k} (q-1)^{-k} \in\mathbb{C}(q)$ are regular at $q=1$. Hence, applying the substitution $q=e^{h/2}$ to \eqref{f2} and using the expansions in \eqref{expansions} we obtain \begin{equation}\label{f3} f(x)\coloneqq 1+\sum_{k=1}^{\infty} f_{k} \left(\frac{x}{1-x}\right)^k\in\mathbb{C}[[x,h]],\quad\text{where}\quad f_k \coloneqq (f_{q,k})\left|_{q=e^{h/2}}\right.\in h^k\mathbb{C}[[h]] . \end{equation} By \cite[Equation (2.11)]{KM} series \eqref{f3} satisfies \begin{equation}\label{fqhqf} f (x)f (xe^h)\ldots f (xe^{(N-1)h})=\frac{1-x}{1-xe^{(N-1)h}}. 
\end{equation} The normalized $R$-matrix \begin{equation}\label{R} R(x)=f(x) {\overline{R}}(x)\,\in\, \mathop{\mathrm{End}}\mathbb{C}^N \otimes \mathop{\mathrm{End}}\mathbb{C}^N[[x,h]] \end{equation} possesses the {\em crossing symmetry properties} \begin{equation}\label{csym} R(xe^{Nh})^{t_1} D_1 ( R(x)^{-1})^{t_1}=D_1\quad\text{and}\quad (R(x)^{-1})^{t_2} D_2 R(xe^{Nh})^{t_2} = D_2, \end{equation} where $D $ denotes the diagonal matrix \begin{equation}\label{matrix497} D=\mathop{\mathrm{diag}}\left(e^{ (N-1)h/2 },e^{ (N-3)h/2},\ldots ,e^{- (N-1)h/2} \right) \end{equation} and $t_i$ denotes the transposition applied on the tensor factor $i=1,2$; see \cite{FR}. Express the $R$-matrix $R(x)$ defined by \eqref{R} as \begin{equation}\label{R2} R(x)=g(x)R^+(x),\quad \text{where}\quad g(x)=\frac{f(x) }{1-e^{-h}x},\quad R^+(x)=\left(1-e^{-h}x\right) {\overline{R}}(x). \end{equation} Clearly, $R^+ (x)$ is a polynomial with respect to the variable $x$, i.e. $R^+ (x)$ belongs to $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes 2} [[h]][x]$. On the other hand, as $(e^{ah}-1) x\in xh\mathbb{C}[[h]]$, we conclude by \eqref{exp} and \eqref{f3} that $g(x)$ admits the presentation \begin{equation}\label{g} g(x)=\sum_{k=0}^{\infty} g_k \frac{x^k}{\left(1-x\right)^{k+1}},\quad\text{where}\quad g_k\in h^k\mathbb{C}[[h]]\quad\text{and}\quad g_0 =1. \end{equation} Denote by $\mathbb{C}_* (z_1,\ldots ,z_n)$ the localization of the ring of Taylor series $\mathbb{C}[[z_1,\ldots ,z_n]]$ at $\mathbb{C}[z_1,\ldots ,z_n]^{\times}$. Consider the unique embedding $ \mathbb{C}_* (z_1,\ldots ,z_n)\to \mathbb{C}((z_1))\ldots ((z_n))$. Extending the embedding to the $h$-adic completion of $\mathbb{C}_* (z_1,\ldots ,z_n)$ we obtain the map \begin{equation}\label{iotas} \mathop{\iota_{z_1,\ldots ,z_n}}\colon \mathbb{C}_* (z_1,\ldots ,z_n)[[h]]\to \mathbb{C}((z_1))\ldots ((z_n))[[h]]. 
\end{equation} As in \cite{KM}, we now apply the substitution $x=e^u$ to the normalized $R$-matrix $R(x)$ given by \eqref{R}. First, replacing the variable $x$ by $e^u$ in \eqref{g} we obtain $$ g(e^u)=\sum_{k=0}^{\infty} g_k \frac{e^{ku}}{\left(1-e^u\right)^{k+1}}= \sum_{k=0}^{\infty} g_k \frac{e^{ku}\left(\frac{u}{1-e^{u}}\right)^{k+1}}{u^{k+1}} \in\mathbb{C}_*(u)[[h]] $$ since all numerators $e^{ku}u^{k+1}(1-e^{u})^{-k-1}$ belong to $\mathbb{C}[[u]]$ and $g_k\in h^k\mathbb{C}[[h]]$. By applying the embedding $\mathop{\iota_{u}}$ we get $\mathop{\iota_{u}} g(e^u) \in \mathbb{C}((u))[[h]]$. Next, as $R^+ (x)$ is a polynomial with respect to the variable $x$, by applying the substitution $x=e^u$ we obtain $R^+(e^u)$, which belongs to $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes 2}\, [[h,u]]$. Finally, there exists a unique $\psi\in 1+h\mathbb{C}[[h]]$ such that the $R$-matrix \begin{equation}\label{rplusg} R(e^u)\coloneqq \psi \mathop{\iota_{u}} g(e^u) R^+ (e^u) \,\in\, \mathop{\mathrm{End}}\mathbb{C}^N\otimes\mathop{\mathrm{End}}\mathbb{C}^N((u))[[h]] \end{equation} possesses the {\em unitarity property} \begin{equation}\label{uni} R_{12}(e^u) R_{21}(e^{-u}) =1 \end{equation} and the {\em crossing symmetry properties} \begin{equation}\label{csym2} R(e^{u+Nh})^{t_1} D_1 ( R(e^u)^{-1})^{t_1}=D_1\quad\text{and}\quad (R(e^u)^{-1})^{t_2} D_2 R( e^{u+Nh})^{t_2} = D_2; \end{equation} see \cite[Prop. 1.2]{EK4} and \cite[Prop. 2.1]{KM}. Of course, $R$-matrix \eqref{rplusg} also satisfies the {\em Yang--Baxter equation} \begin{equation}\label{yberat} R_{12}(e^u) R_{13}(e^{u+v}) R_{23}(e^v)=R_{23}(e^v)R_{13}(e^{u+v}) R_{12}(e^u). \end{equation} In what follows, whenever it is clear from the context, we omit the embedding symbol $\mathop{\iota}$ and write, e.g., $f(e^u)$ instead of $\mathop{\iota_{u}} f(e^u)$. Furthermore, in the multiple variable case, we employ the usual expansion convention where the choice of the embedding is determined by the order of the variables. 
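The formal expansions used in this subsection, such as \eqref{exp}, admit a quick numerical sanity check (ours, not part of the formal argument): since $\frac{\partial^k}{\partial x^k}\frac{1}{1-x}=\frac{k!}{(1-x)^{k+1}}$, the $k$-th summand in \eqref{exp} equals $\left((e^{ah}-1)x\right)^k(1-x)^{-k-1}$, so truncating the sum at $k=K$ approximates $(1-e^{ah}x)^{-1}$ up to terms of order $h^{K+1}$. The sample values below are arbitrary.

```python
import math

# generic sample values; the truncation order K controls the O(h^{K+1}) error
a, h, x, K = 1.0, 0.05, 0.4, 8
t = math.exp(a * h) - 1  # the quantity e^{ah} - 1, which is of order h

# the k-th derivative of 1/(1-x) is k!/(1-x)^{k+1}, so the k-th summand
# of the expansion is (t*x)**k / (1-x)**(k+1)
partial = sum((t * x)**k / (1 - x)**(k + 1) for k in range(K + 1))
exact = 1 / (1 - math.exp(a * h) * x)

assert abs(exact - partial) < 1e-10
```

The truncation error is of size $\left(\frac{(e^{ah}-1)x}{1-x}\right)^{K+1}$, which for the sample values above is far below the chosen tolerance.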
For example, if $\sigma$ is a permutation in the symmetric group $\mathfrak{S}_n$, then $f(e^{u_{\sigma_1} +\ldots +u_{\sigma_n}})$ denotes $\mathop{\iota_{u_{\sigma_1} ,\ldots ,u_{\sigma_n}}} f(e^{u_{\sigma_1} +\ldots +u_{\sigma_n}})\in\mathbb{C}((u_{\sigma_1}))\ldots ((u_{\sigma_n} ))[[h]]$. In particular, by $R_{13}(e^{u+v})$ in \eqref{yberat} is denoted $\mathop{\iota_{u,v}} g(e^{u+v}) R_{13}^+ (e^{u+v}) $. \subsection{Quantum affine algebra}\label{subsec021} Ding's quantum current realization of the quantum affine algebra of type $A$ was given in \cite[Prop. 3.1]{D}. We slightly modify the original definition \cite[Def. 3.1]{D} in order to make the setting compatible with the quantum vertex algebra theory; see Remark \ref{comparison} for more details. Our exposition starts in parallel with \cite[Subsection 2.1]{c11}, where a certain quantum current algebra associated with the suitably normalized Yang $R$-matrix was introduced. We omit some simple proofs as they present a straightforward generalization of the arguments from the aforementioned paper to the trigonometric case. For any integer $N\geqslant 2$ denote by $ {\rm F}(N)$ the associative algebra over the ring $\mathbb{C}[[h]]$ generated by the elements $1$, $C$ and $\lambda_{ij}^{(r)}$, where $i,j=1,\ldots, N$ and $r\in\mathbb{Z}$, subject to the defining relations $$C\cdot a=a\cdot C\quad\text{and}\quad 1\cdot a=a\cdot 1=a\qquad\text{for all }a\in {\rm F}(N),$$ i.e. $1$ is the unit and $C$ is a central element in $ {\rm F}(N)$. Introduce the Laurent series \begin{equation}\label{lambda} \lambda_{ij}(x)=\delta_{ij}-h\sum_{r\in\mathbb{Z}}\lambda_{ij}^{(r)}x^{-r-1}\,\in\, {\rm F}(N)[[x^{\pm 1}]],\quad\text{where } i,j=1,\ldots ,N, \end{equation} and arrange them into the matrix $\mathcal{L}(x)\in\mathop{\mathrm{End}}\mathbb{C}^N\otimes {\rm F}(N)[[x^{\pm 1}]]$, \begin{equation}\label{LAMBDA} \mathcal{L}(x)=\sum_{i,j=1}^N e_{ij}\otimes \lambda_{ij}(x). 
\end{equation} We now introduce a certain completion of the algebra $ {\rm F}(N)$ which is suitable for expressing the defining relations for the quantum affine algebra. For an integer $p\geqslant 1$ let $ {\rm I}_p(N)$ be the left ideal in $ {\rm F}(N)$ generated by all $\lambda_{ij}^{(r)}$, where $i,j=1,\ldots, N$ and $r\geqslant p-1$. Define the completion of $ {\rm F}(N)$ as the inverse limit $$ \widetilde{\rm F} (N) = \lim_{\longleftarrow} \hspace{1pt} {\rm F}(N)\hspace{1pt} /\hspace{1pt} {\rm I}_p(N).$$ The algebra $ \widetilde{\rm F} (N)$ is naturally equipped with the $h$-adic topology and its $h$-adic completion is equal to $ \widetilde{\rm F} (N)[[h]]$. For any integer $p\geqslant 1$ let $ {\rm I}_p^h (N)$ be the $h$-adically completed left ideal in $ \widetilde{\rm F}(N)[[h]]$ generated by $ {\rm I}_p(N)$ and the element $h^p \cdot 1$. We generalize the tensor notation from \eqref{notation} to the matrix $\mathcal{L}(x)$ so that the subscript indicates the copy in the corresponding tensor product algebra, \begin{equation}\label{notation2} \mathcal{L}_r (x)=\sum_{i,j=1}^N 1^{\otimes (r-1)}\otimes e_{ij}\otimes 1^{\otimes (m-r)} \otimes \lambda_{ij}(x) \,\in\, (\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes m} \otimes {\rm F} (N)[[x^{\pm 1}]]. \end{equation} Employing such notation for $m=2$ and $r=1,2$ we introduce the expressions \begin{align*} &\mathcal{L}_{[2]}^{(1)}(x,y)=\mathcal{L}_1(x) R_{21}(ye^{-hC}/x) \mathcal{L}_2(y) R_{21}(y/x)^{-1},\\ &\mathcal{L}_{[2]}^{(2)}(x,y)= R_{12}(x/y)^{-1}\mathcal{L}_2(y) R_{12}(xe^{-hC}/y)\mathcal{L}_1(x). \end{align*} In accordance with the discussion in Subsection \ref{sec01}, the $R$-matrices $R_{21}(ye^{ahC}/x)^{\pm 1}$ and $R_{12}(xe^{ahC}/y)^{\pm 1}$ with $a\in \mathbb{C}$ are regarded as Taylor series with respect to $y/x$ and $x/y$ respectively. 
By arguing as in \cite[Lemma 2.1]{c11}, one can prove \begin{lem}\label{LR} The expressions $\mathcal{L}_{[2]}^{(1)}(x,y)$ and $\mathcal{L}_{[2]}^{(2)}(x,y)$ are well-defined elements of $$ \mathop{\mathrm{End}}\mathbb{C}^N \otimes\mathop{\mathrm{End}}\mathbb{C}^N \otimes \widetilde{\rm F}(N)[[ x^{\pm 1},y^{\pm 1},h]]. $$ Moreover, for any integer $p\geqslant 1$ both $\mathcal{L}_{[2]}^{(1)}(x,y) $ and $\mathcal{L}_{[2]}^{(2)}(y,x)$\footnote{Notice the swapped variables in this term.} modulo $ {\rm I}_p^h(N) $ belong to $$ \mathop{\mathrm{End}}\mathbb{C}^N \otimes \mathop{\mathrm{End}}\mathbb{C}^N \otimes {\rm F}(N) [[x^{\pm 1} ]]((y)). $$ \end{lem} By Lemma \ref{LR}, there exist elements $\lambda_{i\hspace{1pt} j\hspace{1pt} k\hspace{1pt} l}^{(r,s;t)}$ in $ \widetilde{\rm F}(N)[[h]]$ such that \begin{align*} &\mathcal{L}_{[2]}^{(t)}(x,y)=\sum_{i,j,k,l=1}^N\sum_{r,s\in\mathbb{Z}} e_{ij}\otimes e_{kl}\otimes \lambda_{i\hspace{1pt} j\hspace{1pt} k\hspace{1pt} l}^{(r,s;t)}\hspace{1pt} x^{-r-1}y^{-s-1} \quad\text{for}\quad t=1,2. \end{align*} Let $ {\rm J}(N)$ be the ideal in the algebra $ \widetilde{\rm F}(N)[[h]]$ generated by all elements \begin{equation}\label{defrel} \lambda_{i\hspace{1pt} j\hspace{1pt} k\hspace{1pt} l}^{(r,s;1)}- \lambda_{i\hspace{1pt} j\hspace{1pt} k\hspace{1pt} l}^{(r,s;2)},\qquad \text{where} \qquad r,s\in\mathbb{Z}\quad\text{and}\quad i,j,k,l=1,\ldots ,N. \end{equation} Introduce the completion of $ {\rm J}(N)$ as the inverse limit $$ \widetilde{\rm J} (N)=\lim_{\longleftarrow} \hspace{1pt} {\rm J}(N)\hspace{1pt} /\hspace{1pt} {\rm J}(N)\cap {\rm I}_p (N). $$ Note that the $h$-adic completion $[ \widetilde{\rm J}(N)][[h]]$ of $$ [ \widetilde{\rm J}(N)]=\textstyle \left\{a\in \widetilde{\rm F} (N)[[h]]\,:\, h^n a\in \widetilde{\rm J}(N)\text{ for some integer }n\geqslant 0 \right\} $$ is also an ideal in $ \widetilde{\rm F}(N)[[h]]$. Following \cite[Def. 
3.1]{D}, we define the {\em (completed) quantum affine algebra} $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ as the quotient of the algebra $ \widetilde{\rm F}(N)[[h]]$ by the ideal $[ \widetilde{\rm J} (N)][[h]]$, \begin{equation}\label{quotient} {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)\hspace{1pt} =\hspace{1pt} \widetilde{\rm F}(N)[[h]]\hspace{1pt} /\hspace{1pt} [ \widetilde{\rm J} (N)][[h]]. \end{equation} Denote the images of the elements $1$, $C$ and $\lambda_{ij}^{(r)}$ in quotient \eqref{quotient} again by $1$, $C$ and $\lambda_{ij}^{(r)}$. Also, denote by $\lambda_{ij}(x)$ and $\mathcal{L}(x)$ the corresponding series in $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)[[x^{\pm 1}]]$ and $\mathop{\mathrm{End}}\mathbb{C}^N \otimes {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N) [[x^{\pm 1}]]$ respectively. Defining relations \eqref{defrel} for the algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ can be expressed by the {\em quantum current commutation relation} \begin{align}\label{qc} \mathcal{L}_1(x) R_{21}(ye^{-hC}/x) \mathcal{L}_2(y) R_{21}(y/x)^{-1}=R_{12}(x/y)^{-1}\mathcal{L}_2(y) R_{12}(xe^{-hC}/y)\mathcal{L}_1(x), \end{align} as given by Reshetikhin and Semenov-Tian-Shansky \cite{RS}. As the images of the elements $\lambda_{i\hspace{1pt} j\hspace{1pt} k\hspace{1pt} l}^{(r,s;1)}$ and $\lambda_{i\hspace{1pt} j\hspace{1pt} k\hspace{1pt} l}^{(r,s;2)}$ in quotient \eqref{quotient} coincide, we denote them by $\lambda_{i\hspace{1pt} j\hspace{1pt} k\hspace{1pt} l}^{(r,s)}$. Also, we write \begin{equation}\label{eldva} \mathcal{L}_{[2]}(x,y)=\sum_{i,j,k,l=1}^N\sum_{r,s\in\mathbb{Z}} e_{ij}\otimes e_{kl}\otimes \lambda_{i\hspace{1pt} j\hspace{1pt} k\hspace{1pt} l}^{(r,s)}\hspace{1pt} x^{-r-1}y^{-s-1} \end{equation} and $\mathcal{L}_{[1]}(x)=\mathcal{L} (x)$. Observe that both sides of relation \eqref{qc} coincide with $\mathcal{L}_{[2]}(x,y)$. 
Motivated by \cite{RS}, we refer to the series $\mathcal{L}(x)$ as {\em quantum currents}. Our next goal is to derive a certain generalized version of \eqref{qc} consisting of $n+m$ quantum currents. For integers $n,m\geqslant 1$ introduce the functions depending on the variable $z$ and the families of variables $x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_m)$ with values in the space $(\mathop{\mathrm{End}}\mathbb{C}^{N})^{\otimes n} \otimes (\mathop{\mathrm{End}}\mathbb{C}^{N})^{\otimes m} $ by \begin{align} R_{nm}^{12}(zxe^{ah}/y)= \prod_{i=1,\dots,n}^{\longrightarrow} \prod_{j=n+1,\ldots,n+m}^{\longleftarrow} R_{ij}(zx_i e^{ah}/y_{j-n}),\label{rnm12}\\ R_{nm}^{21}(ye^{ah}/zx)= \prod_{i=1,\dots,n}^{\longleftarrow} \prod_{j=n+1,\ldots,n+m}^{\longrightarrow} R_{ji}( y_{j-n}e^{ah}/zx_i),\label{rnm123} \end{align} where $a\in\mathbb{C}$ and the arrows indicate the order of the factors. For example, we have $$R_{22}^{12}(zx/y)=R_{14}R_{13}R_{24}R_{23} \quad\text{and}\quad R_{22}^{21}(y/zx)=R'_{32}R'_{42}R'_{31}R'_{41},$$ where $R_{ij}= R_{ij}(zx_i /y_{j-n})$ and $R'_{ji}=R_{ji}( y_{j-n}/zx_i)$. The corresponding functions associated with the $R$-matrix $R^+(x)$ given by \eqref{R2}, $R_{nm}^{+12}(zxe^{ah}/y)$ and $R_{nm}^{+21}(ye^{ah}/zx)$, can be defined analogously. Note that the evaluations of \eqref{rnm12} and \eqref{rnm123} at $z=1$ are well-defined. We denote them by $R_{nm}^{12}(xe^{ah}/y)$ and $R_{nm}^{21}(ye^{ah}/x)$ respectively. 
Next, for any integer $n\geqslant 1$ and the family of variables $x=(x_1,\ldots ,x_n)$ define the functions with values in $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}$ by \begin{align} &R_{[n,a]}(x)=\prod_{i=1,\ldots ,n-1}^{\longrightarrow}\prod_{j=i+1,\ldots ,n}^{\longrightarrow}R_{ji}(x_j e^{-ah} /x_i)^{-1},\label{r1}\\ &\cev{R}_{[n,a]}(x)=\prod_{i=1,\ldots ,n-1}^{\longleftarrow}\prod_{j=i+1,\ldots ,n}^{\longleftarrow}R_{ji}(x_j e^{-ah} /x_i)^{-1},\label{r2} \end{align} where $a\in\mathbb{C} $ and the arrows again indicate the order of the factors. For example, for $n=4$ we have $$ R_{[4,a]}(x)=R_{21}R_{31}R_{41}R_{32}R_{42}R_{43} \quad\text{and}\quad \cev{R}_{[4,a]}(x)=R_{43}R_{42}R_{32}R_{41}R_{31}R_{21}, $$ where $R_{ji}=R_{ji}(x_j e^{-ah} /x_i)^{-1}$. If $a=0$, we omit the second subscript and write $$ R_{[n]}(x)=R_{[n,0]}(x)\quad\text{and}\quad\cev{R}_{[n]}(x)=\cev{R}_{[n,0]}(x). $$ Finally, for any integer $n\geqslant 2$ we generalize $\mathcal{L}_{[2]}(x,y)$, as given by \eqref{eldva}, by setting \begin{equation}\label{Ln} \mathcal{L}_{[n]}(x)=\hspace{-4pt}\prod_{i=1,\ldots ,n}^{\longrightarrow}\hspace{-4pt} \left(\mathcal{L}_{i}(x_i)R_{i+1\hspace{1pt} i }(x_{i+1}e^{-hC}/x_i) \ldots R_{n\hspace{1pt} i}(x_{n}e^{-hC}/x_i) \right)\,\cdot\, \cev{R}_{[n]}(x). \end{equation} Denote by $ {\rm I}_p^h (\widehat{\mathfrak{gl}}_N), {\rm I}_p (\widehat{\mathfrak{gl}}_N)$ the images of the left ideals $ {\rm I}_p^h (N), {\rm I}_p (N) \subset \widetilde{\rm F}(N)[[h]]$ in the algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ with respect to the canonical map $ \widetilde{\rm F}(N)[[h]]\to {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$. 
In the next proposition, we use the superscripts $1,2,3$ to indicate the following tensor factors: $$ \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}}^{1} \otimes \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes m}}^{2}\otimes \smalloverbrace{ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)}^{3}. $$ The proposition can be proved by using Lemma \ref{LR}, Yang--Baxter equation \eqref{YBE}, quantum current commutation relation \eqref{qc} and arguing as in \cite[Prop. 2.4 and 2.5]{c11}. \begin{pro}\label{qcgenpro} For any integers $n,m\geqslant 1$ and the families of variables $x=(x_1,\ldots ,x_n)$ and $y=(y_1,\ldots ,y_m)$ we have: \begin{enumerate} \item The expression $\mathcal{L}_{[n]}(x)$ is a well-defined element of $$(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n} \otimes {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)[[x_1^{\pm 1},\ldots ,x_n^{\pm 1}]].$$ \item For any $p\geqslant 1$ the element $\mathcal{L}_{[n]}(x)$ modulo $ {\rm I}_p^h (\widehat{\mathfrak{gl}}_N)$ belongs to $$(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n} \otimes {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N) ((x_1,\ldots ,x_n)).$$ \item The following quantum current commutation relation holds: \begin{align} &\mathcal{L}_{[n]}^{13}(x) R_{nm}^{21} (ye^{-hC}/x ) \mathcal{L}_{[m]}^{23}(y)R_{nm}^{21} ( y/x)^{-1}\nonumber\\ &\qquad\qquad=R_{nm}^{12}(x / y)^{-1}\mathcal{L}_{[m]}^{23}(y)R_{nm}^{12}(xe^{-hC} / y) \mathcal{L}_{[n]}^{13}(x).\label{qcgen} \end{align} Moreover, both sides of \eqref{qcgen} coincide with $\mathcal{L}_{[n+m]}(x,y)$. 
\end{enumerate} \end{pro} Generalizing \eqref{eldva} we denote the coefficients of the matrix entries in \eqref{Ln} as follows: $$ \mathcal{L}_{[n]}(x)=\sum_{i_1,j_1,\ldots,i_n,j_n=1}^N\sum_{r_1,\ldots ,r_n\in\mathbb{Z}} e_{i_1 j_1}\otimes\ldots \otimes e_{i_n j_n}\otimes \lambda_{i_1\hspace{1pt} j_1\ldots i_n\hspace{1pt} j_n}^{(r_1,\ldots ,r_n)}\hspace{1pt} x^{-r_1-1}_1\ldots x_n^{-r_n-1}. $$ Our next goal is to introduce the quantum affine algebra associated with the affine Lie algebra $\widehat{\mathfrak{sl}}_N$. Let $P^h $ be the $h$-permutation operator, $$ P^h = \sum_{i=1}^N e_{ii}\otimes e_{ii} + e^{h/2}\sum_{\substack{i,j=1\\i> j}}^N e_{ij}\otimes e_{ji} +e^{-h/2}\sum_{\substack{i,j=1\\i< j}}^N e_{ij}\otimes e_{ji}\in\mathop{\mathrm{End}}\mathbb{C}^N \otimes \mathop{\mathrm{End}}\mathbb{C}^N [[h]]. $$ Consider the action of the symmetric group $\mathfrak{S}_n$ on the space $(\mathbb{C}^N)^{\otimes n}$ which is given by $\sigma_i\mapsto P_{\sigma_i}^h=P_{i\hspace{1pt} i+1}^h$ for $i=1,\ldots ,n-1$, where $\sigma_i$ is the transposition $(i,i+1)$. For a reduced decomposition of a permutation $\sigma=\sigma_{i_1}\ldots \sigma_{i_k} \in \mathfrak{S}_n$ set $P_\sigma^h= P^h_{\sigma_{i_1}}\ldots P^h_{\sigma_{i_k}}$. Let $A^{(n)}$ be the image of the normalized anti-symmetrizer with respect to this action, so that \begin{equation}\label{anti} A^{(n)}=\frac{1}{n!}\sum_{\sigma\in\mathfrak{S}_n} \mathop{\rm sgn}\sigma \cdot P_\sigma^h . \end{equation} Define the {\em quantum determinant} of the matrix $\mathcal{L}(x)$ by \begin{equation}\label{qdet497} {\rm qdet}\hspace{1pt} \mathcal{L} (x)= {\rm tr}_{1,\ldots ,N} \,A^{(N)}\hspace{1pt}\mathcal{L}_{[N]}(x_1,\ldots ,x_N)\big|_{x_1 = x,\ldots, x_N = xe^{-(N-1)h}}\big. \hspace{1pt} D_1\ldots D_N , \end{equation} where the trace is taken over all $N$ copies of $\mathop{\mathrm{End}}\mathbb{C}^N$ and the matrix $D$ is given by \eqref{matrix497}. 
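The properties of the $\mathfrak{S}_n$-action just described can be tested numerically (a check of ours, not part of the paper's formalism) by specializing $e^{h/2}$ to a generic nonzero number: $P^h$ is an involution satisfying the braid relation, so \eqref{anti} is well-defined and idempotent, and for $N=2$ the antisymmetrizer $A^{(3)}$ vanishes, mirroring $\Lambda^3\mathbb{C}^2=0$.

```python
import numpy as np

N = 2
t = 1.3  # numerical stand-in for e^{h/2}; any generic nonzero value works

def E(i, j):
    m = np.zeros((N, N)); m[i, j] = 1.0; return m  # matrix unit e_{ij}

# h-permutation operator P^h on C^N (x) C^N
Ph = sum(np.kron(E(i, i), E(i, i)) for i in range(N))
Ph = Ph + sum(t * np.kron(E(i, j), E(j, i))
              for i in range(N) for j in range(N) if i > j)
Ph = Ph + sum((1 / t) * np.kron(E(i, j), E(j, i))
              for i in range(N) for j in range(N) if i < j)

# embeddings P^h_{12}, P^h_{23} into the three-fold tensor product
P12 = np.kron(Ph, np.eye(N))
P23 = np.kron(np.eye(N), Ph)

assert np.allclose(Ph @ Ph, np.eye(N * N))            # involution
assert np.allclose(P12 @ P23 @ P12, P23 @ P12 @ P23)  # braid relation

# normalized antisymmetrizers A^{(2)} and A^{(3)}
A2 = (np.eye(N * N) - Ph) / 2
A3 = (np.eye(N**3) - P12 - P23 + P12 @ P23 + P23 @ P12 - P12 @ P23 @ P12) / 6

assert np.allclose(A2 @ A2, A2)  # idempotent
assert np.allclose(A3, 0)        # for N = 2 the third antisymmetrizer vanishes
```

Since $P^h$ squares to the identity and satisfies the braid relation, $P^h_\sigma$ is independent of the chosen reduced decomposition of $\sigma$, which is what makes \eqref{anti} well-defined.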
The quantum determinant is a formal power series in the variable $x$ with coefficients in the quantum affine algebra, i.e. $ {\rm qdet}\hspace{1pt} \mathcal{L} (x) $ belongs to $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)[[x^{\pm 1}]]$. Indeed, the substitution $x_1 = x,\ldots, x_N = xe^{-(N-1)h}$ in \eqref{qdet497} is well-defined due to the second assertion of Proposition \ref{qcgenpro}. Furthermore, all coefficients $d_r$ of the quantum determinant \begin{equation}\label{qdetc} {\rm qdet}\hspace{1pt} \mathcal{L} (x)= 1 - h\sum_{r\in\mathbb{Z}} d_r x^r \end{equation} belong to the center of the quantum affine algebra at the level $c\in\mathbb{C}$; see Proposition \ref{qdetpro}. Let $ {\rm I}_ {\rm qdet}\hspace{1pt}$ be the ideal in the algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ generated by the elements $d_r$, where $r\in \mathbb{Z}$. Introduce its completion as the inverse limit $$ \widetilde{ {\rm I}}_ {\rm qdet}\hspace{1pt}=\lim_{\longleftarrow} \hspace{1pt} {\rm I}_ {\rm qdet}\hspace{1pt}\hspace{1pt} /\hspace{1pt} {\rm I}_ {\rm qdet}\hspace{1pt}\cap {\rm I}_p (\widehat{\mathfrak{gl}}_N). $$ The $h$-adic completion $[\widetilde{ {\rm I}}_ {\rm qdet}\hspace{1pt}][[h]]$ of $$ [\widetilde{ {\rm I}}_ {\rm qdet}\hspace{1pt}]=\textstyle \left\{a\in {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)\,:\, h^n a\in \widetilde{ {\rm I}}_ {\rm qdet}\hspace{1pt}\text{ for some integer }n\geqslant 0 \right\} $$ is also an ideal in $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$. Define the {\em (completed) quantum affine algebra} $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{sl}}_N)$ as the quotient of the algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ by the relation $ {\rm qdet}\hspace{1pt} \mathcal{L} (x)= 1$, i.e. 
$$ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{sl}}_N)\hspace{1pt} =\hspace{1pt} {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)\hspace{1pt}/\hspace{1pt} [\widetilde{ {\rm I}}_ {\rm qdet}\hspace{1pt}][[h]]. $$ \begin{rem}\label{comparison} In Ding's definition \cite[Def. 3.1]{D}, the quantum affine algebra is introduced as an associative algebra over the field $\mathbb{C}(q)$. However, as our goal is to study quantum vertex algebras associated to quantum affine algebras, we used the identification $q=e^{h/2}$ and introduced the quantum affine algebra as a suitably completed associative algebra over the commutative ring $\mathbb{C}[[h]]$. Thus we established a setting compatible with Etingof--Kazhdan's notion of quantum vertex algebra \cite[Sect. 1.4]{EK}, which, in particular, is required to be a topologically free $\mathbb{C}[[h]]$-module; see also Li's notion of $h$-adic quantum vertex algebra \cite[Def. 2.20]{Li}. Furthermore, in contrast with Ding's realization, we use normalized $R$-matrix \eqref{R} instead of \eqref{Rbar}. Such a choice of the $R$-matrix enables the constructions of certain large families of central elements of the quantum affine algebra at the critical level and of the topological generators of the quantum Feigin--Frenkel center, as demonstrated in \cite{FJMR} and \cite{KM} respectively; see also Section \ref{newsec02}. \end{rem} \subsection{Restricted modules}\label{subsec022} Recall that a $\mathbb{C}[[h]]$-module $W$ is said to be {\em torsion-free} if $h w\neq 0$ for all nonzero $w\in W$ and that $W$ is said to be {\em separable} if $\cap_{n\geqslant 1} h^n W=0$. Moreover, $W$ is said to be {\em topologically free} if it is separable, torsion-free and complete with respect to the $h$-adic topology; see \cite[Chapter XVI]{Kas}. Let $\mathfrak{g}_N=\mathfrak{gl}_N ,\mathfrak{sl}_N$. By arguing as in \cite[Prop. 
2.2]{c11} one can show that the algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$ is topologically free. Define a {\em restricted} $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$-module $W$ as a topologically free $\mathbb{C}[[h]]$-module such that \begin{equation}\label{restricted} \mathcal{L}(x)w\in \mathop{\mathrm{End}}\mathbb{C}^N\otimes W((x))[[h]]\quad \text{for all } w\in W. \end{equation} \begin{pro}\label{restricted496} Let $W$ be a restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$-module. Then for any $n\geqslant 1$ and the variables $x=(x_1,\ldots ,x_n) $ we have \begin{equation}\label{restrictedf} \mathcal{L}_{[n]}(x)w\in (\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}\otimes W((x_1,\ldots ,x_n))[[h]]\quad \text{for all } w\in W. \end{equation} \end{pro} \begin{prf} Apply quantum current commutation relation \eqref{qc} to an arbitrary element of a restricted module. For every integer $k\geqslant 0$ the left hand side contains finitely many negative powers of the variable $y$ modulo $h^k$, while the right hand side contains finitely many negative powers of the variable $x$ modulo $h^k$. Hence the statement of the proposition holds for $n=2$. The case $n>2$ is proved by induction on $n$, relying on \eqref{qcgen}. \end{prf} \begin{rem}\label{linremark} Note that \eqref{restrictedf} implies $\mathcal{L}_{[n]}(x)\in(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}\otimes\mathop{\mathrm{Hom}}(W,W((x_1,\ldots,x_n))[[h]])$ for all $n\geqslant 1$. Hence we can apply the substitutions $x_1 = ze^{u_1}, \ldots, x_n=ze^{u_n}$, thus getting \begin{align} \mathcal{L}_{[n]}(x_1,\ldots ,x_n)\big|_{x_1 = ze^{u_1},\ldots , x_n=ze^{u_n}}\big. \in (\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}\otimes\mathop{\mathrm{Hom}}(W,W((z))[[h,u_1,\ldots ,u_n]]).\label{lin} \end{align} We will often denote the expression in \eqref{lin} more briefly by $\mathcal{L}_{[n]}(x)\left|_{x_i = ze^{u_i}}\right.$. 
\end{rem} As usual, a $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$-module $W$ is said to be of {\em level $c$} if the central element $C\in {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$ acts on $W$ as a scalar multiplication by some $c\in\mathbb{C}$. Denote by $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)_c$ the {\em quantum affine algebra at the level $c$}, i.e. the quotient of $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$ by the ideal generated by the element $C-c$. Let $ {\rm K}_c $ be the left ideal in the algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)_c$ generated by all elements $$ \lambda_{i_1\hspace{1pt} j_1\hspace{1pt}\ldots\hspace{1pt} i_n\hspace{1pt} j_n }^{(r_1,\ldots ,r_n)}\quad\text{such that}\quad r_k\geqslant 0\text{ for at least one integer }k=1,\ldots,n,$$ where $n\geqslant 1$, $i_1,\ldots, i_n,j_1,\ldots ,j_n=1,\ldots , N$ and $r_1,\ldots,r_n\in \mathbb{Z}$. Introduce the completion of $ {\rm K}_c $ as the inverse limit $$\widetilde{ {\rm K}}_c =\lim_{\longleftarrow} \hspace{1pt} {\rm K}_c \hspace{1pt} /\hspace{1pt} {\rm K}_c \cap {\rm I}_p (\widehat{\mathfrak{g}}_N).$$ Then the $h$-adic completion $[\widetilde{ {\rm K}}_c][[h]]$ of $$[\widetilde{ {\rm K}}_c]=\left\{a\in {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)_c\,:\, h^n a\in \widetilde{ {\rm K}}_c\text{ for some }n\geqslant 0\right\}$$ is also a left ideal in $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)_c$. 
Define the {\em vacuum module $\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ at the level $c$} over the quantum affine algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$ as the quotient of $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)_c $ by its left ideal $[\widetilde{ {\rm K}}_c][[h]]$, \begin{equation}\label{quotient2} \mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)\,=\, {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)_c\,/\, [\widetilde{ {\rm K}}_c][[h]]. \end{equation} Observe that the canonical map $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)_c\to \mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ maps the left ideal $ {\rm I}_p^h(\widehat{\mathfrak{g}}_N)$ to $h^p \mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)$. Denote by $\mathop{\mathrm{\boldsymbol{1}}}$ the image of the unit $1 \in {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$ with respect to this map. \begin{pro}\label{free2} The vacuum module $\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ is a topologically free $\mathbb{C}[[h]]$-module. Moreover, it is a restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$-module. \end{pro} \begin{prf} The first assertion is verified by arguing as in \cite[Prop. 2.2]{c11}. As for the second assertion, we first observe that all elements \begin{equation}\label{elsofform} \lambda_{i_1\hspace{1pt} j_1\hspace{1pt}\ldots\hspace{1pt} i_n\hspace{1pt} j_n }^{(r_1,\ldots ,r_n)}\mathop{\mathrm{\boldsymbol{1}}}\qquad \text{such that}\qquad n\geqslant 0\quad\text{and}\quad r_k< 0 \text{ for all }k=1,\ldots,n \end{equation} span an $h$-adically dense $\mathbb{C}[[h]]$-submodule of $\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)$. 
Indeed, this follows from the fact that each monomial $\lambda_{k_1\hspace{1pt} l_1}^{(s_1)}\ldots \lambda_{k_m\hspace{1pt} l_m}^{(s_m)}\mathop{\mathrm{\boldsymbol{1}}}\in \mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ can be expressed using elements \eqref{elsofform}. This is done by employing crossing symmetry properties \eqref{csym} and invertibility of the trigonometric $R$-matrix to move all $R$-matrices which appear on the right hand side of \begin{equation}\label{qcvac} \mathcal{L}_{[a+b]}(x,y)\mathop{\mathrm{\boldsymbol{1}}}=\mathcal{L}_{[a]}^{13}(x) R_{ab}^{21} (ye^{-hC}/x ) \mathcal{L}_{[b]}^{23}(y)R_{ab}^{21} ( y/x)^{-1}\mathop{\mathrm{\boldsymbol{1}}}, \end{equation} where $a+b=m$, $x=(x_1,\ldots ,x_a)$ and $y=(y_1,\ldots ,y_b)$, to the left hand side (for more details see Remark \ref{csrem}), and then taking the coefficient of $ x_1^{-s_1-1}\hspace{-2pt}\ldots x_{a}^{-s_a -1} y_1^{-s_{a+1}-1}\hspace{-2pt}\ldots y_b^{-s_m-1} $ at the matrix entry $e_{k_1\hspace{1pt} l_1}\otimes \ldots \otimes e_{k_m\hspace{1pt} l_m}$. Note that \eqref{qcvac} follows from Proposition \ref{qcgenpro}. Therefore, it is sufficient to check that $\mathcal{L}(z)w$ belongs to $\mathop{\mathrm{End}}\mathbb{C}^N\otimes \mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)((z))[[h]]$ for all $w\in \mathcal{V}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ of the form as in \eqref{elsofform}. However, as \eqref{qcvac} contains only nonnegative powers of the variables $x_1,\ldots ,x_a,y_1,\ldots ,y_b$, this follows by setting $a=1$ and $b=n$ in \eqref{qcvac}, then moving $R_{1n}^{21} (ye^{-hC} /x )$ and $R_{1n}^{21} ( y/x)^{-1}$ to the left hand side and, finally, by taking the coefficient of $ y_1^{-r_{1}-1}\ldots y_n^{-r_n-1} $ at the matrix entries $e_{ij}\otimes e_{i_1\hspace{1pt} j_1}\otimes \ldots \otimes e_{i_n\hspace{1pt} j_n}$ for $i,j=1,\ldots ,N$. \end{prf} Observe that Proposition \ref{free2} and the \hyperref[mainthm1]{Main Theorem} imply Corollary \ref{maincor}. 
\section{\texorpdfstring{$\phi$}{phi}-Coordinated modules for the quantum affine vertex algebra}\label{sec0102} In this section, we recall Etingof--Kazhdan's construction of the quantum affine vertex algebra associated with trigonometric $R$-matrix in type $A$. Next, we suitably modify Li's definition of $\phi$-coordinated module, thus establishing the setting for the \hyperref[mainthm1]{Main Theorem}. \subsection{Quantum affine vertex algebra}\label{subsec012} We follow \cite{EK3,EK4} to introduce the $R$-matrix algebras $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{g}}_N)$; see also \cite{FRT,RS}. Let $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ be the associative algebra over the ring $\mathbb{C}[[h]]$ generated by elements $t_{ij}^{(-r)}$, where $i,j=1,\ldots ,N$ and $r=1,2,\ldots$, subject to the defining relations \begin{equation}\label{rtt} R(e^{u-v})\hspace{1pt} T_{1}^+(u)\hspace{1pt} T_2^+ (v)= T_2^+ (v)\hspace{1pt} T_{1}^+(u)\hspace{1pt} R(e^{u-v}), \end{equation} where $T^+(u) \in \mathop{\mathrm{End}}\mathbb{C}^N \otimes \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)[[u]]$ is given by $$ T^+(u) =\sum_{i,j=1}^N e_{ij}\otimes t_{ij}^+ (u)\qquad\text{for}\qquad t_{ij}^+(u)=\delta_{ij}-h\sum_{r=1}^{\infty}t_{ij}^{(-r)}u^{r-1}\in \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)[[u]]. $$ As in \eqref{notation2}, we use subscripts in \eqref{rtt} to indicate copies in the tensor product algebra $\mathop{\mathrm{End}}\mathbb{C}^N \otimes \mathop{\mathrm{End}}\mathbb{C}^N \otimes \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$. Note that the $R$-matrix $R(e^{u-v})$ in defining relation \eqref{rtt} can be replaced by $R^+ (e^{u-v})$. 
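As a simple illustration of defining relations \eqref{rtt}, consider the case $N=1$, which plays no role in the sequel: then $\mathop{\mathrm{End}}\mathbb{C}^1=\mathbb{C}$, so $R(e^{u-v})$ acts as an invertible scalar series and cancels from both sides of \eqref{rtt}. The defining relations thus reduce to
$$
t_{11}^+(u)\hspace{1pt} t_{11}^+(v)=t_{11}^+(v)\hspace{1pt} t_{11}^+(u),
$$
i.e. the algebra $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{gl}}_1)$ is commutative.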
Define the {\em quantum determinant} of the matrix $T^+(u)$ by \begin{equation}\label{qdetvoa} {\rm qdet}\hspace{1pt} T^+ (u)= {\rm tr}_{1,\ldots ,N} \,A^{(N)}\hspace{1pt} T_1^+(u)\ldots T_N^+(u-(N-1)h)\hspace{1pt} D_1\ldots D_N , \end{equation} where the trace is taken over all $N$ copies of $\mathop{\mathrm{End}}\mathbb{C}^N$ and the matrix $D$ is given by \eqref{matrix497}. The quantum determinant $ {\rm qdet}\hspace{1pt} T^+ (u)$ belongs to $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N) [[u]]$. Moreover, its coefficients $\delta_r$, which are given by \begin{equation}\label{detkoef} {\rm qdet}\hspace{1pt} T^+ (u)=1-h\sum_{r\geqslant 0}\delta_r u^r, \end{equation} belong to the center of the algebra $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$; see the proof of \cite[Prop. 3.10]{KM}. Define the algebra $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{sl}}_N)$ as the quotient of $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ over the $h$-adically completed ideal generated by the elements $\delta_0,\delta_1,\ldots$. Hence we have the following relation in $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{sl}}_N)$: \begin{equation}\label{qdetvoa2} {\rm qdet}\hspace{1pt} T^+ (u)=1 . \end{equation} Let $\mathfrak{g}_N =\mathfrak{gl}_N, \mathfrak{sl}_N$.
For positive integers $n$ and $m$ we extend the notation in \eqref{rnm12} and \eqref{rnm123} by introducing the functions depending on the variable $z$ and the families of variables $u = (u_1 ,\ldots ,u_n )$ and $v = (v_1 ,\ldots ,v_m )$ with values in the space $(\mathop{\mathrm{End}}\mathbb{C}^N )^{\otimes n} \otimes (\mathop{\mathrm{End}}\mathbb{C}^N )^{\otimes m}$ by \begin{align} &R_{nm}^{12}(e^{z+u-v+ah})= \prod_{i=1,\dots,n}^{\longrightarrow} \prod_{j=n+1,\ldots,n+m}^{\longleftarrow} R_{ij} (e^{z+u_i-v_{j-n}+ah}),\label{rnm12exp}\\ &R_{nm}^{21}(e^{z+u-v+ah})= \prod_{i=1,\dots,n}^{\longleftarrow} \prod_{j=n+1,\ldots,n+m}^{\longrightarrow} R_{ji}( e^{z+u_i-v_{j-n}+ah}),\label{rnm123exp} \end{align} where $a\in\mathbb{C}$. Note that the expansion convention, as introduced at the end of Subsection \ref{sec01}, is applied on every factor on the right hand side, i.e. $$R_{ij} (e^{z+u_i-v_{j-n}+ah}) = \psi \mathop{\iota_{z,u_i,v_{j-n}}} g (e^{z+u_i-v_{j-n}+ah})\hspace{1pt} R_{ij}^+ (e^{z+u_i-v_{j-n}+ah}).$$ If the variable $z$ is omitted in \eqref{rnm12exp} or \eqref{rnm123exp}, the embeddings $\mathop{\iota_{u_i,v_{j-n}}}$ are applied on the corresponding normalizing functions $g(e^{u_i-v_{j-n}+ah})$ instead. The functions $R_{nm}^{+12}(e^{z+u-v+ah})$ and $R_{nm}^{+21}(e^{z+u-v+ah})$ corresponding to the $R$-matrix $R^+(x)$ given by \eqref{R2} can be defined analogously. Denote by $\mathop{\mathrm{\boldsymbol{1}}}$ the unit in the algebra $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{g}}_N) $.
We recall \cite[Lemma 2.1]{EK}: \begin{lem}\label{lemma21} For any $c\in\mathbb{C}$ there exists a unique operator series $$T^*(u)\in\mathop{\mathrm{End}}\mathbb{C}^N \otimes \mathop{\mathrm{Hom}}( \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{g}}_N) [[h]], \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{g}}_N) ((u))[[h]] )$$ such that for all $n\geqslant 0$ we have \begin{align} &T^{*}_{1} (u)\hspace{1pt} T_{2}^{+}(v_1)\ldots T_{n+1}^+(v_n)\mathop{\mathrm{\boldsymbol{1}}}\nonumber\\ &\qquad\qquad = R_{1n}^{12}(e^{u-v+hc/2})^{-1} T_{2}^{+}(v_1)\ldots T_{n+1}^+(v_n) R_{1n}^{12}(e^{u-v-hc/2})\mathop{\mathrm{\boldsymbol{1}}}.\label{tstar} \end{align} \end{lem} In order to indicate action \eqref{tstar}, which is uniquely determined by the scalar $c\in\mathbb{C}$, we denote the topologically free $\mathbb{C}[[h]]$-module $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{g}}_N) [[h]]$ by $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$. Following \cite{EK}, we introduce the operators on $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n} \otimes \overline{\mathcal{V}}_{c}(\mathfrak{g}_N)$ by \begin{align*} T_{[n]}^{+}(u|z)=T_{1}^{+}(z+u_1)\ldots T_{n}^{+}(z+u_n)\quad\text{and}\quad T_{[n]}^*(u|z)=T_{1}^*(z+u_1)\ldots T_{n}^*(z+u_n). \end{align*} By the expansion convention from Subsection \ref{sec01}, the operator $T_{[n]}^* (u|z)$ contains only nonnegative powers of the variables $u_1,\ldots ,u_n$ as the embeddings $\mathop{\iota_{z,u_i}}$ are applied on its corresponding factors. If the variable $z$ is omitted, we write \begin{equation}\label{rnm1234t} T_{[n]}^{+}(u )=T_{1}^{+}(u_1)\ldots T_{n}^{+}(u_n)\quad\text{and}\quad T_{[n]}^*(u)=T_{1}^*(u_1)\ldots T_{n}^*(u_n). \end{equation} The next proposition, as given in \cite[Prop. 2.2]{EK}, is verified using \eqref{rtt} and \eqref{tstar}. 
In relations \eqref{rtt2}--\eqref{rtt3}, the superscripts $1,2,3$ indicate the tensor factors as follows: $$ \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}}^{1} \otimes \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes m}}^{2}\otimes \smalloverbrace{\overline{\mathcal{V}}_c(\mathfrak{g}_N)}^{3}. $$ For example, the superscripts $1,3$ in $T_{[n]}^{*13}(u|z_1)$ indicate that the operator $T_{[n]}^{*}(u|z_1)$ is applied on the tensor factors $1,\ldots ,n$ and $n+m+1$. \begin{pro} For any integers $n,m\geqslant 1$ and the families of variables $u=(u_1,\ldots ,u_n)$ and $v=(v_1,\ldots ,v_m)$ the following equalities hold on $\overline{\mathcal{V}}_c(\mathfrak{g}_N)$: \begin{align} &R_{nm}^{12}(e^{z_1-z_2+u-v})T_{[n]}^{*13}(u|z_1)T_{[m]}^{*23}(v|z_2) =\,T_{[m]}^{*23}(v|z_2)T_{[n]}^{*13}(u|z_1)R_{nm}^{12}(e^{z_1-z_2+u-v}),\label{rtt2}\\ &R_{nm}^{12}(e^{z_1-z_2+u-v})T_{[n]}^{+13}(u|z_1)T_{[m]}^{+23}(v|z_2) =\,T_{[m]}^{+23}(v|z_2)T_{[n]}^{+13}(u|z_1)R_{nm}^{12}(e^{z_1-z_2+u-v}),\label{rtt1}\\ &R_{nm}^{\hspace{1pt} 12}(e^{z_1-z_2+u-v+hc/2})T_{[n]}^{*13}(u|z_1)T_{[m]}^{+23}(v|z_2) =\,T_{[m]}^{+23}(v|z_2)T_{[n]}^{*13}(u|z_1)R_{nm}^{\hspace{1pt} 12}(e^{z_1-z_2+u-v-hc/2}). \label{rtt3} \end{align} \end{pro} From now on, the tensor products are understood as $h$-adically completed. The notion of {\em quantum vertex algebra} was introduced by Etingof and Kazhdan \cite{EK}. It is defined as a quadruple $(V,Y,\mathop{\mathrm{\boldsymbol{1}}},\mathcal{S})$ such that \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item[1. ] $V$ is a topologically free $\mathbb{C}[[h]]$-module. \item[2. ] $Y=Y(z)$ is the {\em vertex operator map}, i.e. 
a $\mathbb{C}[[h]]$-module map \begin{align*} Y \colon V\otimes V&\to V((z))[[h]]\\ u\otimes v&\mapsto Y(z)(u\otimes v)=Y(u,z)v=\sum_{r\in\mathbb{Z}} u_r v \hspace{1pt} z^{-r-1} \end{align*} which satisfies the {\em weak associativity}: for any $u,v,w\in V$ and $n\in\mathbb{Z}_{\geqslant 0}$ there exists $p\in\mathbb{Z}_{\geqslant 0}$ such that \begin{align} &(z_0 +z_2)^p\hspace{1pt} Y(u,z_0 +z_2)Y(v,z_2)\hspace{1pt} w - (z_0 +z_2)^p\hspace{1pt} Y\big(Y(u,z_0)v,z_2\big)\hspace{1pt} w \in h^n V[[z_0^{\pm 1},z_2^{\pm 1}]].\label{associativity} \end{align} \item[3. ] $\mathop{\mathrm{\boldsymbol{1}}}$ is the {\em vacuum vector}, i.e. a distinguished element of $V$ satisfying \begin{equation}\label{v1} Y(\mathop{\mathrm{\boldsymbol{1}}} ,z)v=v,\quad Y(v,z)\mathop{\mathrm{\boldsymbol{1}}}\in V[[z]]\quad\text{and}\quad \lim_{z\to 0} Y(v,z)\hspace{1pt}\mathop{\mathrm{\boldsymbol{1}}} =v\quad\text{for all }v\in V. \end{equation} \item[4. ] $\mathcal{S}=\mathcal{S}(z)$ is the {\em braiding map}, i.e. a $\mathbb{C}[[h]]$-module map $ V\otimes V\to V\otimes V\otimes\mathbb{C}((z))[[h]]$ which satisfies the $\mathcal{S}$-{\em locality}: for any $u,v\in V$ and $n\in\mathbb{Z}_{\geqslant 0}$ there exists $p\in\mathbb{Z}_{\geqslant 0}$ such that for all $w\in V$ \begin{align} (z_1-z_2)^{p}\hspace{1pt} Y(z_1)\big(1\otimes Y(z_2)\big)\big(\mathcal{S}(z_1 -z_2)(u\otimes v)\otimes w\big)& \nonumber\\ -(z_1-z_2)^{p}\hspace{1pt} Y(z_2)\big(1\otimes Y(z_1)\big)(v\otimes u\otimes w) &\in h^n V[[z_1^{\pm 1},z_2^{\pm 1}]].\label{locality} \end{align} \end{enumerate} The given data should possess several other properties, which we omit as they are not used in this paper; for a complete definition see \cite[Sect. 1.4]{EK}. Finally, we recall Etingof--Kazhdan's construction \cite[Thm.
2.3]{EK} in the trigonometric $R$-matrix case: \begin{thm}\label{EK:qva} For any $c\in \mathbb{C}$ there exists a unique quantum vertex algebra structure on $\overline{\mathcal{V}}_c(\mathfrak{g}_N)$ such that the vertex operator map $Y$ is given by \begin{equation}\label{qva1} Y\big(T_{[n]}^+ (u)\mathop{\mathrm{\boldsymbol{1}}},z\big)=T_{[n]}^+ (u|z)\hspace{1pt} T_{[n]}^* (u|z+hc/2)^{-1}, \end{equation} the vacuum vector is $\mathop{\mathrm{\boldsymbol{1}}}\in \overline{\mathcal{V}}_c(\mathfrak{g}_N)$ and the braiding map $\mathcal{S}(z)$ is defined by the relation \begin{align} \mathcal{S}(z)\big(R_{nm}^{ 12}(e^{z+u-v})^{-1} T_{[m]}^{+24}(v) R_{nm}^{ 12}(e^{z+u-v-h c}) T_{[n]}^{+13}(u)(\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}}) \big)&\nonumber\\ =T_{[n]}^{+13}(u) R_{nm}^{ 12}(e^{z+u-v+h c})^{-1} T_{[m]}^{+24}(v) R_{nm}^{ 12}(e^{z+u-v})(\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}})\label{qva2}& \end{align} for operators on $ (\mathop{\mathrm{End}}\mathbb{C}^{N})^{\otimes n} \otimes (\mathop{\mathrm{End}}\mathbb{C}^{N})^{\otimes m}\otimes \overline{\mathcal{V}}_c(\mathfrak{g}_N) \otimes \overline{\mathcal{V}}_c(\mathfrak{g}_N) $. \end{thm} \begin{rem}\label{csrem} Crossing symmetry properties \eqref{csym2} of $R$-matrix \eqref{rplusg} can be expressed using the ordered product notation as \begin{equation}\label{csym_equiv} (D_1 R(e^{u+hN}) D_1^{-1})\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}} R(e^{u})^{-1}=1 \quad\text{and}\quad (D_2 R(e^u)^{-1} D_2^{-1})\mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} R(e^{u+hN})=1, \end{equation} where the subscript RL (LR) indicates that the first tensor factor of $D_i R(e^u)^{-1} D_i^{-1}$, $i=1,2$, is applied from the right (left) while the second tensor factor is applied from the left (right). Such notation naturally extends to the products of multiple $R$-matrices such as \eqref{rnm12exp} and \eqref{rnm123exp}. 
For example, by \eqref{csym_equiv}, we have \begin{equation}\label{crossingexample} \left(D_{[m]}^2 R_{nm}^{ 12}(e^{z+u-v-h (N+ c)})^{-1}(D_{[m]}^2)^{-1}\right) \mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} R_{nm}^{ 12}(e^{z+u-v-h c}) =1, \end{equation} where $D_{[m]}^2 =1^{\otimes n}\otimes D^{\otimes m}$ and the subscript LR now indicates that the tensor factors $1,\ldots ,n$ ($n+1,\ldots ,n+m$) are applied from the left (right). As with \eqref{csym_equiv}, one can write crossing symmetry properties \eqref{csym} of $R$-matrix \eqref{R} using the ordered product notation. As before, the notation naturally extends to the multiple $R$-matrix products such as \eqref{rnm12}--\eqref{r2}. \end{rem} Combining \eqref{qva2} and \eqref{crossingexample} we find the explicit formula for the action of the braiding, \begin{align} &\mathcal{S}(z)\big( T_{[n]}^{+13}(u)T_{[m]}^{+24}(v) (\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}}) \big) =\left(D_{[m]}^2 R_{nm}^{ 12}(e^{z+u-v-h (N+ c)})^{-1}(D_{[m]}^2)^{-1}\right)\nonumber \\ & \qquad\mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} \big( R_{nm}^{ 12}(e^{z+u-v}) T_{[n]}^{+13}(u) R_{nm}^{ 12}(e^{z+u-v+h c})^{-1} T_{[m]}^{+24}(v) R_{nm}^{ 12}(e^{z+u-v})(\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}})\big). \label{sop1} \end{align} \begin{rem}\label{remark17} As with \eqref{exp}, by formal Taylor Theorem \eqref{taylor} we have $$ \frac{1}{1-xe^{u-v+ah}}=\sum_{k=0}^{\infty} \frac{(e^{u-v+ah}-1)^k x^k}{k!}\frac{\partial^k}{\partial x^k}\left(\frac{1}{1-x}\right). $$ Therefore, due to \eqref{R2}, we can regard the $R$-matrix $R(xe^{u-v+ah})$ as an element of $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes 2}(x)[[u,v,h]]$ for any $a\in\mathbb{C}$, i.e. as a rational function in the variable $x$. Clearly, applying the embedding $\mathop{\iota_{x,u,v}}$ we obtain an element of $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes 2}((x))[[u,v,h]]$. 
\end{rem} We now extend the notation \eqref{rnm12exp} and \eqref{rnm123exp} by introducing the functions depending on the variable $x$ and the families of variables $u = (u_1 ,\ldots ,u_n )$ and $v = (v_1 ,\ldots ,v_m )$ with values in the space $(\mathop{\mathrm{End}}\mathbb{C}^N )^{\otimes n} \otimes (\mathop{\mathrm{End}}\mathbb{C}^N )^{\otimes m}$ by \begin{align} &R_{nm}^{12}(xe^{u-v+ah})= \prod_{i=1,\dots,n}^{\longrightarrow} \prod_{j=n+1,\ldots,n+m}^{\longleftarrow} R_{ij} (xe^{u_i-v_{j-n}+ah}),\label{rnm12expx}\\ &R_{nm}^{21}(xe^{u-v+ah})= \prod_{i=1,\dots,n}^{\longleftarrow} \prod_{j=n+1,\ldots,n+m}^{\longrightarrow} R_{ji}( xe^{u_i-v_{j-n}+ah}),\label{rnm123expx} \end{align} where $a\in\mathbb{C}$. In accordance with Remark \ref{remark17}, the $R$-matrices in \eqref{rnm12expx} and \eqref{rnm123expx} are regarded as rational functions in the variable $x$. We use the map given by the following lemma in Definition \ref{phimod} below to introduce the notion of $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$-module. \begin{lem}\label{eshet} There exists a unique $\mathbb{C}[[h]]$-module map $$\widehat{\mathcal{S}}(x)\colon \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)\otimes \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N) \to \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)\otimes \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N) (x)[[h]]$$ such that \begin{align} &\widehat{\mathcal{S}}(x)\big( T_{[n]}^{+13}(u)T_{[m]}^{+24}(v) (\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}}) \big) =\left(D_{[m]}^2 R_{nm}^{ 12}(xe^{u-v-h (N+ c)})^{-1}(D_{[m]}^2)^{-1}\right)\nonumber \\ &\qquad\mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} \big( R_{nm}^{ 12}(xe^{u-v}) T_{[n]}^{+13}(u) R_{nm}^{ 12}(xe^{u-v+h c})^{-1} T_{[m]}^{+24}(v) R_{nm}^{ 12}(xe^{u-v})(\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}})\big).
\label{sop1hat} \end{align} Moreover, the map $\widehat{\mathcal{S}}(x)$ satisfies \begin{align} \widehat{\mathcal{S}}(x)\big(R_{nm}^{ 12}(xe^{u-v})^{-1} T_{[m]}^{+24}(v) R_{nm}^{ 12}(xe^{u-v-h c}) T_{[n]}^{+13}(u)(\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}}) \big)&\nonumber\\ =T_{[n]}^{+13}(u) R_{nm}^{ 12}(xe^{u-v+h c})^{-1} T_{[m]}^{+24}(v) R_{nm}^{ 12}(xe^{u-v})(\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}})&.\label{qva2hat} \end{align} \end{lem} \begin{prf} The map $\widehat{\mathcal{S}}(x)$ is well-defined by \eqref{sop1hat}, i.e. it maps the ideal of relations \eqref{rtt}, and \eqref{qdetvoa2} in the $\mathfrak{g}_N=\mathfrak{sl}_N$ case, to itself. Indeed, this follows by a straightforward calculation which relies on the identity $$ R_{12}(e^{u-v})D_1 D_2 =D_2 D_1 R_{12}(e^{u-v}) $$ and the following version of Yang--Baxter equation \eqref{YBE}: $$ R_{12}(e^{u-v}) R_{13} (xe^{u +\alpha h}) R_{23} (xe^{ v+\alpha h}) = R_{23} (xe^{ v+\alpha h})R_{13} (xe^{u +\alpha h})R_{12}(e^{u-v}),\quad \alpha\in\mathbb{C}. $$ Moreover, the proof in the $\mathfrak{g}_N=\mathfrak{sl}_N$ case employs identity \eqref{fqhqf} and some properties of the anti-symmetrizer $A^{(N)}$, which are given by \eqref{497a}, \eqref{497d} and $$ A^{(N)}T_1^+(u) T_2^+(u-h)\ldots T_N^+(u-(N-1)h)=T_N^+(u-(N-1)h)\ldots T_2^+(u-h)T_1^+(u)A^{(N)}, $$ see \cite[Equality (3.12)]{KM}. As for relation \eqref{qva2hat}, it follows from \eqref{sop1hat} and the equality $$ \left(D_{[m]}^2 R_{nm}^{ 12}(xe^{u-v-h (N+ c)})^{-1}(D_{[m]}^2)^{-1}\right)\mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} R_{nm}^{ 12}(xe^{u-v-h c}) =1, $$ which is verified by using crossing symmetry properties \eqref{csym}; recall Remark \ref{csrem}. 
\end{prf} \subsection{\texorpdfstring{$\phi$}{phi}-Coordinated modules} The notion of $\phi$-coordinated module, where $\phi$ is an associate of the one-dimensional additive formal group, was introduced by Li \cite{Li1}. As in \cite[Sect. 5]{Li1}, throughout this paper we consider the associate \begin{equation}\label{phi} \phi(z_2,z_0) = z_2 e^{z_0}. \end{equation} Before we proceed to the definition of $\phi$-coordinated module, we introduce some notation. Let $V$ be a topologically free $\mathbb{C}[[h]]$-module and $a_1,\ldots ,a_n,k>0$ arbitrary integers. Suppose that some element $A$ of $\mathop{\mathrm{Hom}}(V,V[[z_1^{\pm 1},z_2^{\pm 1},u_1,\ldots ,u_n]])$ can be expressed as \begin{gather} A=B+ u_1^{a_1} C_1 +\ldots + u_n^{a_n} C_n +h^k C_{n+1} \qquad\text{for some}\label{decompmod}\\ B\in\mathop{\mathrm{Hom}}(V,V((z_1,z_2))[[u_1,\ldots ,u_n,h]]),\,\, C_1,\ldots , C_{n+1}\in\mathop{\mathrm{Hom}}(V,V[[z_1^{\pm 1},z_2^{\pm 1},u_1,\ldots ,u_n]]). \nonumber \end{gather} To indicate the fact that $A $ possesses a decomposition as in \eqref{decompmod}, we write \begin{equation}\label{asups} A\in\mathop{\mathrm{Hom}}(V,V((z_1,z_2))[[u_1,\ldots ,u_n]] )\mod u_1^{a_1},\ldots, u_n^{a_n},h^k. \end{equation} Note that the substitution \begin{equation}\label{bsupst} B\big|_{z_1=\phi(z_2,z_0)}\big. =\mathop{\iota_{z_2,z_0,u_1,\ldots ,u_n}}\left( B(z_1,z_2,u_1,\ldots , u_n)\big|_{z_1=\phi(z_2,z_0)}\big.\right) \end{equation} is well-defined even though the substitution $\textstyle A\big|_{z_1=\phi(z_2,z_0)}\big.$ does not exist in general. In what follows, the substitution $z_1=\phi(z_2,z_0)$ is always understood as in \eqref{bsupst}, i.e. the given expression is expanded in nonnegative powers of the variable $z_0$. In order to simplify our notation, we denote \eqref{bsupst} as \begin{equation}\label{asupst} A \big|_{z_1=\phi(z_2,z_0)}^{ \,{\rm mod\,\,} u_1^{a_1},\ldots, u_n^{a_n},h^k}\big.
=A(z_1,z_2,u_1,\ldots , u_n)\big|_{z_1=\phi(z_2,z_0)}^{ \,{\rm mod\,\,} u_1^{a_1},\ldots, u_n^{a_n},h^k}\big. . \end{equation} The element $B$ as in \eqref{decompmod} is clearly unique modulo $$ \sum_{i=1}^n u_i^{a_i}\mathop{\mathrm{Hom}}(V,V[[z_1^{\pm 1},z_2^{\pm 1},u_1,\ldots ,u_n]]) + h^k \mathop{\mathrm{Hom}}(V,V[[z_1^{\pm 1},z_2^{\pm 1},u_1,\ldots ,u_n]]). $$ Let $\mathfrak{g}_N =\mathfrak{gl}_N, \mathfrak{sl}_N$. The following definition of $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$-module is based on \cite[Def. 3.4]{Li1}, which we slightly modify in order to make it compatible with Etingof--Kazhdan's quantum vertex algebra theory; see Remark \ref{napomena} for more details. \begin{defn}\label{phimod} A {\em $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$-module} is a pair $(W,Y_W)$ such that $W$ is a topologically free $\mathbb{C}[[h]]$-module and $Y_W=Y_W(z)$ is a $\mathbb{C}[[h]]$-module map \begin{align*} Y_W \colon \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)\otimes W&\to W((z))[[h]]\\ u\otimes w&\mapsto Y_W(z)(u\otimes w)=Y_W(u,z)w=\sum_{r\in\mathbb{Z}} u_r w \hspace{1pt} z^{-r-1} \end{align*} which satisfies $Y_W(\mathop{\mathrm{\boldsymbol{1}}},z)w=w $ for all $w\in W$; the {\em weak associativity}: for any $u,v\in \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ and $k\in\mathbb{Z}_{\geqslant 0}$ there exists $p\in\mathbb{Z}_{\geqslant 0}$ such that \begin{align} &(z_1-z_2)^p\hspace{1pt} Y_W(u,z_1)Y_W(v,z_2)\in\mathop{\mathrm{Hom}} (W,W((z_1,z_2)) )\mod h^k\qquad\text{and}\qquad\label{associativitymod0}\\ &\big((z_1-z_2)^p\hspace{1pt} Y_W(u,z_1)Y_W(v,z_2)\big)\big|_{z_1=\phi(z_2,z_0)}^{ \,{\rm mod\,\,} h^k} \big. 
\nonumber\\ &\qquad- (\phi(z_2,z_0) -z_2)^p\hspace{1pt} Y_W\left(Y(u,z_0)v,z_2\right)\hspace{1pt} \in\hspace{1pt} h^k \mathop{\mathrm{Hom}}(W,W[[z_0^{\pm 1},z_2^{\pm 1}]]);\label{associativitymod} \end{align} and the {\em $\widehat{\mathcal{S}}$-locality}: for any $u,v\in \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ and $k\in\mathbb{Z}_{\geqslant 0}$ there exists $p\in\mathbb{Z}_{\geqslant 0}$ such that \begin{align} &(z_1-z_2)^{p}\hspace{1pt} Y_W(z_1)\big(1\otimes Y_W(z_2)\big)\mathop{\iota_{z_1,z_2}}\big(\widehat{\mathcal{S}}(z_1/z_2)(u\otimes v)\otimes w\big)\label{localitymod}\\ &\qquad-(z_1-z_2)^{p}\hspace{1pt} Y_W(z_2)\big(1\otimes Y_W(z_1)\big)(v\otimes u\otimes w)\hspace{1pt} \in\hspace{1pt} h^k W[[z_1^{\pm 1},z_2^{\pm 1}]] \quad\text{for all }w\in W.\nonumber \end{align} Let $W_1$ be a topologically free $\mathbb{C}[[h]]$-submodule of $W$. A pair $(W_1,Y_{W_1})$ is said to be a {\em $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$-submodule} of $W$ if $Y_W(v,z)w$ belongs to $W_1$ for all $v\in \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ and $w\in W_1$, where $Y_{W_1}$ denotes the restriction and corestriction of $Y_W$, $$Y_{W_1} (z)=Y_{W} (z)\Big|_{\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)\otimes W_1}^{ W_1((z))[[h]]}\Big.\colon \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)\otimes W_1 \to W_1 ((z))[[h]].$$ \end{defn} \begin{rem}\label{assocremark} Regarding the weak associativity, note that \eqref{associativitymod0} and \eqref{associativitymod} employ the notation introduced in \eqref{asups} and \eqref{asupst} for $n=0$, i.e. there are no variables $u_1,\ldots ,u_n$. Next, observe that the $\widehat{\mathcal{S}}$-locality already implies that there exists $p\in\mathbb{Z}_{\geqslant 0}$ such that \eqref{associativitymod0} holds. 
However, we still include this requirement in the definition as it ensures that the integer $p$ is large enough so that the substitution $z_1=\phi(z_2,z_0)$ in \eqref{associativitymod} is well-defined. Finally, the motivation for expressing the weak associativity in the form as in \eqref{associativitymod0} and \eqref{associativitymod} is given in \cite[Rem. 3.2]{Li1}. \end{rem} \begin{rem}\label{napomena} As with the quantum affine algebra in the previous section, we introduce the notion of $\phi$-coordinated module over the ring $\mathbb{C}[[h]]$ instead of a field in order to make it compatible with the Etingof--Kazhdan quantum vertex algebra theory; cf. original definition \cite[Def. 3.4]{Li1}. Furthermore, unlike the original definition, we require that the $\phi$-coordinated module map $Y_W(z)$ possesses $\widehat{\mathcal{S}}$-locality property \eqref{localitymod}. The general theory developed by Li suggests that \eqref{localitymod} might be omitted from the definition, due to the fact that the vertex operator map $Y(z)$ already possesses $\mathcal{S}$-locality property \eqref{locality}; see \cite[Prop. 5.6]{Li1}. However, we include the $\widehat{\mathcal{S}}$-locality in the definition in order to emphasize the importance of quantum current commutation relation \eqref{qc}. More specifically, in the proof of the \hyperref[mainthm1]{Main Theorem}, we derive the $\widehat{\mathcal{S}}$-locality property directly from the quantum current commutation relation; see Lemma \ref{lemlem4}. \end{rem} Introduce the series $$\delta(z)=\sum_{k\in\mathbb{Z}} z^k\in\mathbb{C}[[z^{\pm 1}]]\quad\text{and}\quad \log(1+z)=-\sum_{k=1}^{\infty} \frac{(-z)^{k}}{k}\in z\mathbb{C}[[z]]. $$ The following Jacobi-type identity was established in \cite[Prop. 5.9]{Li1}. 
Although, in contrast with \cite{Li1}, we consider quantum vertex algebras and $\phi$-coordinated modules defined over the ring $\mathbb{C}[[h]]$, the next proposition can be proved by arguing as in the proofs of \cite[Lemma 5.8]{Li1} and \cite[Prop. 5.9]{Li1}. \begin{pro}\label{Jacobi_prop} Let $W$ be a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$-module, where $\phi(z_2,z_0)=z_2 e^{z_0}$. For any $u,v\in\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{g}_N)$ we have \begin{align} &(z_2 z)^{-1} \delta\left(\frac{z_1 -z_2}{z_2 z}\right) Y_W(z_1) (1 \otimes Y_W(z_2))(u\otimes v) \label{Jacobi1}\\ &\qquad -(z_2 z)^{-1} \delta\left(\frac{z_2-z_1}{-z_2 z}\right) Y_W(z_2)(1\otimes Y_W(z_1)) \mathop{\iota_{z_2,z_1}}\hspace{-1pt} \widehat{\mathcal{S}}(z_2 /z_1)(v\otimes u)\label{Jacobi2}\\ =&\, z_1^{-1}\delta\left(\frac{z_2 (1+z)}{z_1}\right) Y_W\left(Y(u,\log(1+z))v,z_2\right).\label{Jacobi3} \end{align} \end{pro} \section{Proof of the \texorpdfstring{\hyperref[mainthm1]{Main Theorem}}{Main Theorem}}\label{sec05} In this section we prove the \hyperref[mainthm1]{Main Theorem}. The proof is divided into four parts, Subsections \ref{sec051}--\ref{sec054}. In Subsection \ref{sec051}, we obtain some properties of the normalizing functions for the trigonometric $R$-matrix which are required in the later stages of the proof; see Lemmas \ref{arofexlemma}--\ref{lemma4}. In Subsection \ref{sec052}, we demonstrate how to establish the $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$-module structure on a restricted module of level $c$ for the quantum affine algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$; see Lemmas \ref{lemlem1}--\ref{lemlem4}. The key ingredient in this part of the proof is Ding's quantum current realization and, in particular, the fact that quantum current commutation relation \eqref{qc} resembles $\widehat{\mathcal{S}}$-locality property \eqref{localitymod}. 
In Subsection \ref{sec053}, we use Li's Jacobi-type identity, as given in Proposition \ref{Jacobi_prop}, to establish the structure of restricted module of level $c$ for the quantum affine algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ on a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$-module; see Lemma \ref{lemlem5}. Finally, we finish the proof in the $\mathfrak{g}_N=\mathfrak{gl}_N$ case by showing that the $\mathbb{C}[[h]]$-submodules invariant with respect to the action of the quantum affine algebra and with respect to the corresponding action of the quantum vertex algebra coincide; see Lemma \ref{lemlem6}. In Subsection \ref{sec054}, we use the fusion procedure for the two-parameter trigonometric $R$-matrix to extend the results to the $\mathfrak{g}_N=\mathfrak{sl}_N$ case, thus completing the proof of the \hyperref[mainthm1]{Main Theorem}; see Lemmas \ref{profuzion}--\ref{lemlemE}. \subsection{Normalizing functions }\label{sec051} Introduce the function $r(x)$ by \begin{equation}\label{arofexrr31} r(x)=- xe^h (1-e^{h}x)^{-1} f(x)^{-1}, \end{equation} where $f(x)$ is given by \eqref{f3}. \begin{lem}\label{arofexlemma} The function $r(x)\in\mathbb{C}[[x,h]]$ satisfies \begin{equation}\label{arofexrr} R_{21}(x)^{-1}=r(x) R_{12}^+(1/x). \end{equation} Moreover, it admits the presentation \begin{equation}\label{arofex} r(x)=\sum_{k=0}^{\infty} r_k \frac{x^{k+1}}{(1-x)^{k+1}} \quad\text{such that}\quad r_{k}\in h^k \mathbb{C}[[h]]\quad\text{and}\quad r_0=-e^{h}. \end{equation} \end{lem} \begin{prf} By combining unitarity property \eqref{unitrig} and \eqref{R2} we obtain \begin{align*} &R_{21}(x)^{-1}=\left(f(x) {\overline{R}}_{21}(x)\right)^{-1}=f(x)^{-1} {\overline{R}}_{21}(x)^{-1} =f(x)^{-1} {\overline{R}}_{12}(1/x)\\ = &\,f(x)^{-1} (1-e^{-h}x^{-1})^{-1} R_{12}^+ (1/x) =-xe^h (1-e^{h}x)^{-1} f(x)^{-1} R_{12}^+ (1/x) =r(x) R_{12}^+(1/x), \end{align*} as required. 
Next, by using \eqref{f3} we find \begin{equation}\label{lemaf} f(x)^{-1}=\sum_{l=0}^{\infty} \left(-\sum_{k=1}^{\infty} f_k\left( \frac{x}{1-x}\right)^k \right)^l=1+\sum_{k=1}^{\infty} \beta_k\left( \frac{x}{1-x}\right)^k \end{equation} for some $\beta_k\in h^k\mathbb{C}[[h]]$. It is clear that the product of \eqref{exp} for $a=1$, \eqref{lemaf} and $-xe^h $ is equal to $r(x)$ and, furthermore, that it admits presentation \eqref{arofex}. \end{prf} We use the following lemma in the proofs of weak associativity and $\widehat{\mathcal{S}}$-locality of the $\phi$-coordinated module map, as well as to establish the restricted module structure on a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$-module; see Lemmas \ref{lemlem3}, \ref{lemlem4} and \ref{lemlem5} respectively. \begin{lem}\label{poleslemma} Let $F=g^{\pm 1}$ or $F=r^{\pm 1}$. For any integers $a_1,a_2,k> 0$ and $\alpha\in\mathbb{C}$ there exists an integer $p\geqslant 0$ such that the coefficients of all monomials \begin{equation}\label{monomi} u_1^{a_1'} u_2^{a_2'} h^{k'}, \qquad\text{where}\qquad 0\leqslant a_1'< a_1,\quad 0\leqslant a_2'< a_2\quad\text{and}\quad 0\leqslant k'< k, \end{equation} in $ (z_1 - z_2)^p F(z_1 e^{u_1 -u_2 +\alpha h} / z_2)$ belong to $\mathbb{C}[z_1 , z_2^{\pm 1}]$ and such that the coefficients of all monomials \eqref{monomi} in \begin{equation}\label{compare} \left((z_1 - z_2)^p F(z_1 e^{u_1 -u_2 +\alpha h} / z_2)\right)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} u_1^{a_1}, u_2^{a_2}, h^{k}}\big. \quad\text{and}\quad z_2^p (e^{z_0}-1)^p F(e^{z_0+u_1-u_2+\alpha h}) \end{equation} coincide. \end{lem} \begin{prf} Set $\delta=0$ for $F=g$ and $\delta=1$ for $F=r$, i.e. $\delta=\delta_{F,r}$, so that we can consider both cases simultaneously. Let $U=\mathbb{C}[[x,x_0,h]]$. Recall \eqref{g} and \eqref{arofex}. 
As the map $\mathop{\iota_{x}}$ commutes with the partial differential operator $\partial / \partial x$, by using Taylor Theorem \eqref{taylor} we find $$ \mathop{\iota_{x,x_0}} F(x+x_0) = \sum_{l=0}^{\infty} \frac{x_0^l} {l!} \frac{\partial^l}{\partial x^l} \mathop{\iota_{x}} F(x) = \sum_{l,s=0}^{\infty} \frac{F_s\hspace{1pt} x_0^l}{l!} \mathop{\iota_{x}} \frac{\partial^l}{\partial x^l} \left(\frac{x^{s+\delta}} {\left(1-x\right)^{s+1}}\right)\in U, $$ where $F_s =g_s$ for $F=g$ and $F_s =r_s$ for $F=r$. By \eqref{g} and \eqref{arofex}, every $F_s$ belongs to $h^s \mathbb{C}[[h]]$, so all summands with $s\geqslant k$ are trivial modulo $h^k U$. Hence the given expression modulo $U_0\coloneqq x_0^{a_1+a_2+k-2} U + h^k U$ contains only finitely many nonzero summands and, consequently, only finitely many terms $(1-x)^{s+1}$ in the denominator. Therefore, there exists an integer $p\geqslant 0$ such that $$ \mathop{\iota_{x,x_0}} (1-x)^p F(x+x_0) = \sum_{l,s=0}^{\infty} \frac{F_s\hspace{1pt} x_0^l}{l!} \mathop{\iota_{x}}\hspace{1pt} (1-x)^p \frac{\partial^l}{\partial x^l} \left(\frac{x^{s+\delta}} {\left(1-x\right)^{s+1}}\right) \in \mathbb{C}[x,x_0,h]\mod U_0 , $$ where the equality holds modulo $U_0$ and the map $\mathop{\iota_{x}}$ can be omitted on the right hand side as $p$ can be chosen so that $(1-x)^p$ cancels all negative powers of $(1-x)$ modulo $U_0$. By applying the substitution $ (x,x_0)=(z_1 /z_2 , z_1 (e^{u_1-u_2+\alpha h}-1) /z_2 ) $ to \begin{equation}\label{modulodp} \mathop{\iota_{x,x_0}} (1-x)^p F(x+x_0)\mod U_0 \end{equation} and then multiplying the resulting expression by $(-z_2)^p$ we get \begin{equation}\label{drugiput2} (z_1 - z_2)^p F(z_1 e^{u_1 -u_2 +\alpha h} / z_2)\in\mathbb{C}[z_1,z_2^{\pm 1},u_1,u_2,h]\mod V_0 \end{equation} for $V_0=u_1^{a_1}V + u_2^{a_2}V + h^{k}V$ and $V=\mathbb{C}[[z_1,z_2^{\pm 1},u_1,u_2,h]]$, thus proving the first assertion of the lemma.
Set $W_0 =u_1^{a_1}W + u_2^{a_2}W + h^{k}W$ for $W=\mathbb{C} [[z_0, z_2 ,u_1,u_2,h]]$. As \eqref{modulodp} is a polynomial in the variables $x$ and $x_0$, by applying the substitution $ (x,x_0)=(e^{z_0}, e^{z_0}(e^{u_1 -u_2 +\alpha h} -1)) $ to \eqref{modulodp} and then multiplying the resulting expression by $(-z_2)^p$ we get \begin{equation}\label{drugiput3} z_2^p (e^{z_0}-1)^p F(e^{z_0+u_1-u_2+\alpha h})\mod W_0, \end{equation} where, by the expansion convention from Subsection \ref{sec01}, $F(e^{z_0+u_1-u_2+\alpha h})$ stands for $\mathop{\iota_{z_0,u_1,u_2}} F(e^{z_0+u_1-u_2+\alpha h})$.\footnote{Note that the expression $\mathop{\iota_{x,x_0}} (1-x)^p F(x+x_0)$ is considered modulo $U_0$ because, otherwise, the aforementioned substitution would not be well-defined (although the same substitution is well-defined when applied to $(1-x)^p F(x+x_0)$ with $F(x+x_0)$ being regarded as a rational function with respect to the variables $x$ and $x_0$).} Finally, as \eqref{drugiput2} modulo $V_0$ is a polynomial with respect to the variables $z_1 /z_2$ and $z_2$, by applying the substitution $z_1=z_2 e^{z_0}$ we again obtain \eqref{drugiput3}, thus proving the second assertion of the lemma. If $F=g^{-1}$ or $F=r^{-1}$, one easily checks that \begin{equation}\label{onemoreref} F(x)= \sum_{s=0}^\infty F_s \frac{x^{s-\delta_{ F,r^{-1}}}}{(1-x)^{s-1}}\quad\text{for some}\quad F_s\in h^s\mathbb{C}[[h]], \end{equation} so the lemma is verified by arguing as above. \end{prf} We now recall a certain useful consequence of \cite[Lemma 2.7]{Li1}, as given in \cite[Rem. 2.8]{Li1}: For any $A(z_1,z_2),B(z_1,z_2)\in\mathbb{C}((z_1,z_2))$, the equality \begin{equation}\label{rem28} A(z_1,z_2)\big|_{z_1=z_2 e^{z_0}}\big. =B(z_1,z_2)\big|_{z_1=z_2 e^{z_0}}\big. \quad\text{implies}\quad A(z_1,z_2) =B(z_1,z_2). 
\end{equation} Since the $\mathbb{C}[[h]]$-module $\mathbb{C}((z_1,z_2))[[h]]$ is $h$-adically separated, implication \eqref{rem28} clearly extends to any $A(z_1,z_2),B(z_1,z_2)\in\mathbb{C}((z_1,z_2))[[h]]$. \begin{lem}\label{lemma3} In $\mathbb{C}((u))[[h]]$ we have \begin{equation}\label{arofexg} r(e^{-u}) =\psi^2 g(e^u). \end{equation} Moreover, for any integers $a_1,a_2,k> 0$ and $\alpha\in\mathbb{C}$ there exists an integer $p\geqslant 0$ such that the coefficients of all monomials \eqref{monomi} in \begin{equation}\label{arofexg2} (z_1 - z_2)^p \hspace{1pt} r(z_2 e^{-u_1+u_2-\alpha h}/z_1)\qquad\text{and}\qquad (z_1 - z_2)^p\hspace{1pt} \psi^2\hspace{1pt} g(z_1 e^{u_1-u_2+\alpha h}/z_2) \end{equation} coincide. \end{lem} \begin{prf} By \cite[Prop. 2.1]{KM} we have $ \psi^2 f(e^u)= f(e^{-u})^{-1}$. Therefore, using \eqref{arofexrr31} we get \begin{align*} r(e^{-u})&= - e^{-u+h}(1-e^{-u+h})^{-1}f(e^{-u})^{-1}=- \psi^2 e^{-u+h}(1-e^{-u+h})^{-1}f(e^{u})\\ &= \psi^2 (1-e^{u-h})^{-1}f(e^{u})=\psi^2 g(e^u), \end{align*} as required, where the last equality follows from \eqref{R2}. Next, by Lemma \ref{poleslemma} and \eqref{arofexg}, there exists $p\geqslant 0$ such that the coefficients of all monomials \eqref{monomi} in both expressions in \eqref{arofexg2} belong to $\mathbb{C}[z_1^{\pm 1},z_2^{\pm 1}] $ and such that the coefficients of all monomials \eqref{monomi} in \begin{align*} &\left( (z_1 - z_2)^p r(z_2 e^{-u_1+u_2 -\alpha h}/z_1)\right)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} u_1^{a_1}, u_2^{a_2}, h^{k}}\big. \qquad\text{and}\qquad\\ & \left( (z_1 - z_2)^p \psi^2 g(z_1 e^{u_1 -u_2+\alpha h}/z_2)\right)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} u_1^{a_1}, u_2^{a_2}, h^{k}}\big. \end{align*} coincide. The second assertion of the lemma now follows by implication \eqref{rem28}. \end{prf} The next lemma, which relies on Lemma \ref{lemma3}, will be used in the proof of $\widehat{\mathcal{S}}$-locality of the $\phi$-coordinated module map; see Lemma \ref{lemlem4}.
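To illustrate the mechanism behind Lemma \ref{poleslemma} and the comparison \eqref{compare}, consider the toy choice $F(x)=(1-x)^{-1}$ (a purely illustrative example; this $F$ is not one of $g^{\pm 1}$, $r^{\pm 1}$), with the variables $u_1,u_2,h$ suppressed and $p=1$. Expanding in nonnegative powers of $z_1/z_2$, the product telescopes,
\[
(z_1-z_2)\hspace{1pt} F(z_1/z_2)=(z_1-z_2)\sum_{s=0}^{\infty}\frac{z_1^s}{z_2^s}=-z_2\in\mathbb{C}[z_1,z_2^{\pm 1}],
\]
and the substitution $z_1=z_2 e^{z_0}$ leaves $-z_2$ unchanged, which agrees with
\[
z_2\hspace{1pt}(e^{z_0}-1)\hspace{1pt} F(e^{z_0})=z_2\hspace{1pt}(e^{z_0}-1)(1-e^{z_0})^{-1}=-z_2,
\]
exactly the shape of the two expressions compared in \eqref{compare}.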
\begin{lem}\label{lemma4} {\rm (1)} Let $F=g^{\pm 1}$ or $F=r^{\pm 1}$. There exists $\widehat{F}(x,u,v)$ in $\mathbb{C}(x)[[u,v,h]]$ such that for all $\alpha\in\mathbb{C}$ the following equality in $\mathbb{C}((z))[[u,v,h]]$ holds: \begin{equation}\label{fzuv} \widehat{F}(e^z,u,v-\alpha h)= F(e^{z+u-v+\alpha h}). \end{equation} {\rm (2)} For any integers $n,m>0$, the families of variables $u=(u_1,\ldots ,u_n)$, $v=(v_1,\ldots ,v_m)$ and $c\in\mathbb{C}$ there exist functions $ \widehat{G}(x,u,v), \widehat{H}(x,u,v) \in\mathbb{C}(x)[[u_1,\ldots ,u_n,v_1,\ldots ,v_m,h]]$ such that the following equalities hold in $\mathbb{C}((z))[[u_1,\ldots ,u_n,v_1,\ldots ,v_m,h]]$: \begin{align} &\qquad\qquad \widehat{G}(e^z,u,v)= G(z,u,v)\quad\text{and}\quad \widehat{H}(e^z,u,v)= H(z,u,v),\qquad\text{where}\nonumber\\ &G(z,u,v)=\prod_{i=1}^n \prod_{j=1}^m g(e^{z+u_i-v_j-h(N+c)})^{-1} g(e^{z+u_i-v_j+hc})^{-1} g(e^{z+u_i-v_j})^2 ,\label{gzuv}\\ &H(z,u,v)=\prod_{i=1}^n \prod_{j=1}^m g(e^{-z+u_i-v_j-h(N+c)})^{-1} r(e^{z-u_i+v_j-hc})^{-1} g(e^{-z+u_i-v_j}) r(e^{z-u_i+v_j}).\label{fzuv2} \end{align} {\rm (3)} Let $a_1,\ldots ,a_n,b_1,\ldots ,b_m ,k>0$ be arbitrary integers and $\mathop{\iota}=\mathop{\iota_{z_1,z_2,u_1,\ldots,u_n,v_1,\ldots ,v_m}}$ the embedding. There exists an integer $p\geqslant 0$ such that the coefficients of all monomials $$ u_1^{a'_1}\ldots u_n^{a'_n} v_1^{b'_1}\ldots v_{m}^{b'_m} h^{k'},\qquad \text{where}\qquad 0\leqslant a'_{i} < a_i,\, \quad 0\leqslant b'_j < b_j \quad\text{and}\quad 0\leqslant k'< k $$ in $ (z_1 -z_2)^p \mathop{\iota} \widehat{G}(z_1 /z_2,u,v)$ and $(z_1 -z_2)^p \mathop{\iota} \widehat{H}(z_2 /z_1,u,v)$ coincide. \end{lem} \begin{prf} Due to \eqref{g}, \eqref{arofex} and \eqref{onemoreref}, we can regard $g(x)^{\pm 1}$ and $r(x)^{\pm 1}$ as elements of $\mathbb{C}(x)[[h]]$. Let $F=g^{\pm 1}$ or $F=r^{\pm 1}$ and write $F(x)=\sum_{s=0}^{\infty} F_s(x) h^s$ for some $F_s(x)\in \mathbb{C}(x)$. 
Applying formal Taylor Theorem \eqref{taylor} to $z\mapsto \mathop{\iota_{z}} F_s(e^{z})$ we get for any $\alpha\in\mathbb{C}$ $$ \mathop{\iota_{z,u,v,h}} F_s(e^{z+u-v+\alpha h}) =\sum_{l= 0}^\infty \frac{(u-v+\alpha h)^l}{l!}\frac{\partial^l}{\partial z^l} \mathop{\iota_{z}} F_s (e^z)\quad\text{in}\quad \mathbb{C}((z))[[u,v,h]]. $$ The partial differential operator $\partial/\partial z$ commutes with the map $\mathop{\iota_{z}}$ and all $\frac{\partial^l}{\partial z^l} F_s (e^z)$ can be naturally regarded as elements of $\mathbb{C}(e^z)$. Hence we can introduce functions $\widehat{F}_{ l,s}(x)\in\mathbb{C}(x)$ by the requirement $\widehat{F}_{ l,s}(e^z)=\frac{\partial^l}{\partial z^l}F_s (e^z) $. The first statement of the lemma now clearly follows as the function $\widehat{F}(x,u,v)\in\mathbb{C}(x)[[u,v,h]]$ satisfying \eqref{fzuv} can be defined by $$\widehat{F}(x,u,v)=\sum_{ s=0}^{\infty}\left(\sum_{ l=0}^{\infty}\frac{(u-v)^l }{l!} \widehat{F}_{l,s}(x)\right)h^s .$$ The second statement is proved by applying the first statement to each factor of \eqref{gzuv} and \eqref{fzuv2}. Finally, by \eqref{arofexg} we have $G(z,u,v)=H(-z,u,v)$, so the third statement follows by Lemma \ref{lemma3}. \end{prf} \subsection{Establishing the \texorpdfstring{$\phi$}{phi}-coordinated \texorpdfstring{$\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$}{Vc(glN)}-module structure}\label{sec052} Let $W$ be a restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$-module of level $c\in \mathbb{C}$. In this subsection, we prove the first assertion of the \hyperref[mainthm1]{Main Theorem}, i.e. we show that \eqref{formula} defines a unique structure of $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$-module on $W$, where $\phi(z_2,z_0) = z_2 e^{z_0}$. The proof is divided into four lemmas which verify all requirements imposed by Definition \ref{phimod}.
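As a brief aside, the formal Taylor expansions \eqref{taylor} used in the proofs above are mechanical enough to be sanity-checked with a computer algebra system. The following sympy sketch (an illustration of ours; the sample function $F$ and the truncation order $N$ are arbitrary choices, not part of the construction) verifies at low order that $\sum_{l}\frac{x_0^l}{l!}\frac{\partial^l}{\partial x^l}F(x)$ recovers the expansion of $F(x+x_0)$ in nonnegative powers of $x_0$:

```python
# Illustrative check (not from the construction itself): for a rational
# function F, the formal Taylor series sum_l x0^l/l! * d^l F/dx^l agrees
# with the expansion of F(x + x0) in nonnegative powers of x0.
import sympy as sp

x, x0 = sp.symbols('x x0')
F = x / (1 - x)   # sample rational function; any F regular at generic x works
N = 8             # truncation order in x0

taylor = sum(x0**l / sp.factorial(l) * sp.diff(F, x, l) for l in range(N))
direct = F.subs(x, x + x0)

# The two expansions agree modulo x0^N.
residual = sp.series(sp.together(direct - taylor), x0, 0, N).removeO()
assert sp.simplify(residual) == 0
```

In the proofs of Lemmas \ref{poleslemma} and \ref{lemma4} the same computation is carried out with $F=g^{\pm 1},r^{\pm 1}$, whose coefficients lie in $\mathbb{C}[[h]]$; the truncation in the formal parameters there plays the role of $N$ here.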
\begin{lem}\label{lemlem1} Formula \eqref{formula}, together with $Y_W(\mathop{\mathrm{\boldsymbol{1}}},z)=1_W$, defines a unique $\mathbb{C}[[h]]$-module map $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)\otimes W \to W((z))[[h]]$. \end{lem} \begin{prf} First, we note that the right hand side of \eqref{formula} is well-defined, as was discussed in Remark \ref{linremark}. Next, we recall that the algebra $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ is spanned by all coefficients of all matrix entries of $T_{[n]}^+(u)$, $n\geqslant 1$, and $\mathop{\mathrm{\boldsymbol{1}}}$; see \cite[Sect. 3.4]{EK3} or \cite[Prop. 2.4]{KM}\footnote{ We should mention that the notation in this paper slightly differs from \cite{KM}. In particular, the algebra $\textrm{U}(R)$, as defined in \cite[Sect. 2]{KM}, coincides with the algebra $ \overline{{\rm U}}_{\hspace{-1pt}h\hspace{-1pt}}^{+\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ defined in Subsection \ref{subsec012}. }. In order to prove the lemma, we have to show that $v\mapsto Y_W(v,z)$ annihilates the ideal of relations \eqref{rtt}. More specifically, it is sufficient to check that for any integers $n\geqslant 2$ and $i=1,2,\ldots ,n-1$ and the family of variables $u=(u_1,\ldots ,u_n)$ the expression \begin{align} R_{i\hspace{1pt} i+1}(e^{u_i -u_{i+1}} )T_{[n]}^+ (u)\mathop{\mathrm{\boldsymbol{1}}} - P_{i\hspace{1pt} i+1}T_{[n]}^+ (u_{i+1,i})\mathop{\mathrm{\boldsymbol{1}}} P_{i\hspace{1pt} i+1}R_{i\hspace{1pt} i+1}(e^{u_i -u_{i+1}} ),\label{expr01} \end{align} where $u_{ i+1,i}=(u_1,\ldots ,u_{i-1},u_{i+1},u_i,u_{i+2},\ldots ,u_n)$, belongs to the kernel of $v\mapsto Y_W(v,z)$. Let $x=(x_1,\ldots ,x_n)$ and $x_{i+1,i}=(x_1,\ldots ,x_{i-1},x_{i+1},x_i,x_{i+2},\ldots ,x_n)$.
Using Yang--Baxter equation \eqref{YBE} and commutation relation \eqref{qc} one can prove the identity \begin{equation}\label{expr03} R_{i\hspace{1pt} i+1} (x_i /x_{i+1})\mathcal{L}_{[n]}(x)=P_{i\hspace{1pt} i+1}\mathcal{L}_{[n]}(x_{ i+1,i})P_{i\hspace{1pt} i+1}R_{i\hspace{1pt} i+1} (x_i /x_{i+1}). \end{equation} By Proposition \ref{restricted496}, all matrix entries of $\mathcal{L}_{[n]}(x)$ belong to $\mathop{\mathrm{Hom}}(W,W((x_1,\ldots ,x_n))[[h]])$, so all matrix entries in \eqref{expr03} belong to $$\mathop{\mathrm{Hom}}(W,W((x_{i+1}))((x_1,\ldots ,x_i,x_{i+2},\ldots ,x_n))[[h]]).$$ Recall $R$-matrix decomposition \eqref{R2}. By \eqref{onemoreref} the function $g(x_i /x_{i+1})^{-1}$ belongs to $\mathbb{C}[x_{i+1}^{-1}][[h,x_{i}]]$, so we can multiply \eqref{expr03} by $g(x_i /x_{i+1})^{-1}$, thus getting \begin{equation}\label{expr04} R_{i\hspace{1pt} i+1}^+ (x_i /x_{i+1})\mathcal{L}_{[n]}(x)=P_{i\hspace{1pt} i+1}\mathcal{L}_{[n]}(x_{ i+1,i})P_{i\hspace{1pt} i+1}R_{i\hspace{1pt} i+1}^+ (x_i /x_{i+1}). \end{equation} Since the $R$-matrix $R_{i\hspace{1pt} i+1}^+ (x_i /x_{i+1})$ is a polynomial in $x_i/x_{i+1}$, all matrix entries of both sides in \eqref{expr04} belong to $\mathop{\mathrm{Hom}}(W,W ((x_1,\ldots ,x_n))[[h]])$. Therefore, we can apply the substitutions $x_i =ze^{u_i}$ with $i=1,\ldots ,n$ to \eqref{expr04}, thus getting the following equality in $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n} \otimes \mathop{\mathrm{Hom}}(W,W((z))[[h,u_1,\ldots ,u_n]])$: $$ R_{i\hspace{1pt} i+1}^+ (e^{u_i -u_{i+1}}) \cdot \left(\mathcal{L}_{[n]}(x)\right)\big|_{x_i = ze^{u_i}}\big. =P_{i\hspace{1pt} i+1}\left(\mathcal{L}_{[n]}(x_{i+1,i})\right)\big|_{x_i = ze^{u_i}}\big.\hspace{1pt} P_{i\hspace{1pt} i+1} \hspace{1pt} R_{i\hspace{1pt} i+1}^+ (e^{u_i -u_{i+1}}). 
$$ Multiplying the equality by $\psi g (e^{u_i -u_{i+1}})\in\mathbb{C}((u_{i+1}))[[h,u_{i}]]$ and using \eqref{rplusg} we find $$ R_{i\hspace{1pt} i+1} (e^{u_i -u_{i+1}}) \left(\mathcal{L}_{[n]}(x)\right)\big|_{x_i = ze^{u_i}}\big. -P_{i\hspace{1pt} i+1}\left(\mathcal{L}_{[n]}(x_{i+1,i})\right)\big|_{x_i = ze^{u_i}}\big. \hspace{1pt} P_{i\hspace{1pt} i+1}\hspace{1pt} R_{i\hspace{1pt} i+1} (e^{u_i -u_{i+1}})=0. $$ As the left hand side coincides with the image of \eqref{expr01}, with respect to $Y_W(z)$, we conclude that \eqref{formula} defines a $\mathbb{C}[[h]]$-module map $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)\otimes W \to W[[z ^{\pm 1}]]$, as required. Moreover, by Remark \ref{linremark} its image belongs to $ W((z))[[h]]$. Finally, it is clear that the $\mathbb{C}[[h]]$-module map $Y_W(z)$ is uniquely determined by \eqref{formula}. \end{prf} The next lemma follows from $\widehat{\mathcal{S}}$-locality property \eqref{localitymod} which is verified in Lemma \ref{lemlem4} below; recall Remark \ref{assocremark}. Nonetheless, we provide the direct proof as the underlying calculations are required in the proof of Lemma \ref{lemlem3}. \begin{lem}\label{lemlem2} The map $Y_W(z)$ satisfies \eqref{associativitymod0}, i.e. for any $u,v\in \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$ and $k\in\mathbb{Z}_{\geqslant 0}$ there exists $p\in\mathbb{Z}_{\geqslant 0}$ such that \begin{equation}\label{1associativitymod0} (z_1-z_2)^p\hspace{1pt} Y_W(u,z_1)Y_W(v,z_2)\in\mathop{\mathrm{Hom}} (W,W((z_1,z_2)) )\mod h^k . \end{equation} \end{lem} \begin{prf} For any integers $n,m\geqslant 1$ and families of variables $u=(u_1,\ldots ,u_n)$ and $v=(v_1,\ldots ,v_m)$ we have \begin{equation}\label{exprr01} Y_W(T_{[n]}^{+13}(u)\mathop{\mathrm{\boldsymbol{1}}},z_1 )Y_W(T_{[m]}^{+23}(v)\mathop{\mathrm{\boldsymbol{1}}},z_2 ) =\left(\mathcal{L}_{[n]}^{13} (x)\right)\big|_{x_i =z_1e^{u_i}}\big. \left(\mathcal{L}_{[m]}^{23} (y)\right)\big|_{y_j =z_2e^{v_j}}\big. 
, \end{equation} where $x=(x_1,\ldots ,x_n)$ and $y=(y_1,\ldots ,y_m)$. The coefficients in \eqref{exprr01} are operators on the multiple tensor product with superscripts $1,2,3$ indicating the tensor factors: $$ \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}}^{1} \otimes \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes m}}^{2} \otimes \smalloverbrace{W}^{3}. $$ Let us rewrite the right hand side in \eqref{exprr01}. The third assertion of Proposition \ref{qcgenpro} implies \begin{equation}\label{exprr03} \mathcal{L}_{[n]}^{13}(x) R_{nm}^{21} (ye^{-hc} / x) \mathcal{L}_{[m]}^{23}(y)=\mathcal{L}_{[n+m]}(x,y)R_{nm}^{21} (y/ x). \end{equation} By expressing the second crossing symmetry relation in \eqref{csym} in the variable $x=y_j e^{-h(N+c)}/x_i$, then applying the transposition $t_2$ and finally conjugating the resulting equality by the permutation operator $P$ we find $$ \left(D_1 R_{21}(y_j e^{-h(N+c)}/x_i)^{-1} D_1^{-1} \right) \mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}} R_{21} (y_j e^{-hc}/x_i) =1. $$ Furthermore, due to Lemma \ref{arofexlemma}, we can write this equality as $$ r(y_j e^{-h(N+c)}/x_i) \left( D_1 R_{12}^+ (x_i e^{h(N+c)}/y_j) D_1^{-1} \right) \mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}} R_{21} (y_j e^{-hc}/x_i) =1. 
$$ Hence we have \begin{gather} r(x,y) \left( D_{[n]}^1 R_{nm}^{+12}(x e^{h(N+c)}/y) \left(D_{[n]}^1\right)^{-1}\right)\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}} R_{nm}^{21} (ye^{-hc} / x) =1,\qquad \text{where}\label{temp9}\\ D_{[n]}^1 =D^{\otimes n}\otimes 1^{\otimes m} \quad\text{and}\quad r(x,y)=\prod_{i=1}^n \prod_{j=1}^m r(y_j e^{-h(N+c)}/x_i) .\label{rfunction} \end{gather} Using \eqref{temp9} we can move $ R_{nm}^{21} (ye^{-hc} / x)$ in \eqref{exprr03} to the right hand side, which gives us \begin{align} &\mathcal{L}_{[n]}^{13}(x) \mathcal{L}_{[m]}^{23}(y)= r(x,y) \left( D_{[n]}^1 R_{nm}^{+12}(x e^{h(N+c)}/y) \left(D_{[n]}^1\right)^{-1}\right)\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}}\left(\mathcal{L}_{[n+m]}(x,y) R_{nm}^{21} (y/ x)\right) \nonumber\\ =&\, r(x,y) \hspace{1pt} g(x,y) \left( D_{[n]}^1 R_{nm}^{+12}(x e^{h(N+c)}/y) \left(D_{[n]}^1\right)^{-1}\right)\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}}\left(\mathcal{L}_{[n+m]}(x,y) R_{nm}^{+21} (y/ x)\right), \label{exprr04} \end{align} where the second equality comes from \eqref{R2} and the function $g(x,y)$ is given by \begin{equation}\label{gfunction} g(x,y)=\prod_{i=1}^n \prod_{j=1}^m g(y_j /x_i) . \end{equation} Let $a_1,\ldots ,a_n,b_1,\ldots ,b_m ,k>0$ be arbitrary integers. We now apply the substitutions \begin{equation}\label{subs} x_i=z_1 e^{u_i}, \quad y_j=z_2 e^{v_j}\qquad\text{for}\qquad i=1,\ldots ,n,\quad j=1,\ldots ,m \end{equation} to \eqref{exprr04}, thus getting \eqref{exprr01}, and then consider the coefficients of all monomials \begin{equation}\label{monomials} u_1^{a'_1}\ldots u_n^{a'_n} v_1^{b'_1}\ldots v_{m}^{b'_m} h^{k'},\qquad \text{where} \qquad 0\leqslant a'_{i} < a_i,\quad 0\leqslant b'_j < b_j \quad\text{and}\quad k'< k. 
\end{equation} First, as the $R$-matrix $R^+(w)$ is a polynomial with respect to the variable $w$, we conclude by Proposition \ref{restricted496} and Remark \ref{linremark} that \begin{align} &\left( \left( D_{[n]}^1 R_{nm}^{+12}(x e^{h(N+c)}/y) \left(D_{[n]}^1\right)^{-1} \right) \mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}} \left( \mathcal{L}_{[n+m]}(x,y)R_{nm}^{+21} (y/ x) \right) \right) \bigg|_{\substack{x_i =z_1e^{u_i},\, y_j =z_2e^{v_j}}}\bigg. \label{prviclan}\\ &\in (\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}\otimes(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes m}\otimes\mathop{\mathrm{Hom}}(W,W((z_1,z_2))[[u_1,\ldots,u_n,v_1,\ldots ,v_m, h]]). \nonumber \end{align} Next, by Lemma \ref{poleslemma} there exists an integer $p\geqslant 0$, which depends on the choice of integers $a_1,\ldots ,a_n,b_1,\ldots ,b_m,k$, such that the coefficients of all monomials \eqref{monomials} in \begin{equation}\label{drugiclan} (z_1 -z_2)^p \big( r(x,y) g(x,y)\big)\big|_{x_i =z_1e^{u_i},\,y_j =z_2e^{v_j}} \big. \end{equation} belong to $\mathbb{C}[z_1^{\pm 1},z_2^{\pm 1}]$. Finally, we observe that the coefficients of all monomials \eqref{monomials} in the product of \eqref{prviclan} and \eqref{drugiclan} coincide with the corresponding coefficients in $$ (z_1 -z_2)^p \hspace{1pt} Y_W(T_{[n]}^{+13}(u)\mathop{\mathrm{\boldsymbol{1}}},z_1 )Y_W(T_{[m]}^{+23}(v)\mathop{\mathrm{\boldsymbol{1}}},z_2 ). $$ Therefore, by the preceding discussion, these coefficients belong to $$ (\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}\otimes(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes m}\otimes\mathop{\mathrm{Hom}}(W,W((z_1,z_2))) ,$$ which implies the statement of the lemma. \end{prf} \begin{lem}\label{lemlem3} The map $Y_W(z)$ satisfies weak associativity \eqref{associativitymod}, i.e.
for any $u,v\in \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$ and $k\in\mathbb{Z}_{\geqslant 0}$ there exists $p\in\mathbb{Z}_{\geqslant 0}$ such that \eqref{1associativitymod0} holds and such that \begin{align} &\big((z_1-z_2)^p\hspace{1pt} Y_W(u,z_1)Y_W(v,z_2)\big)\big|_{z_1= z_2 e^{z_0}}^{ \,{\rm mod\,\,} h^k} \big. \nonumber\\ &\qquad- z_2^p ( e^{z_0} -1)^p\hspace{1pt} Y_W\left(Y(u,z_0)v,z_2\right)\hspace{1pt} \in\hspace{1pt} h^k \mathop{\mathrm{Hom}}(W,W[[z_0^{\pm 1},z_2^{\pm 1}]]).\label{1associativitymod} \end{align} \end{lem} \begin{prf} Let $n,m,a_1,\ldots ,a_n,b_1,\ldots ,b_m ,k>0$ be arbitrary integers, $u=(u_1,\ldots ,u_n)$ and $v=(v_1,\ldots ,v_m)$ the families of variables. Consider the coefficients of all monomials \eqref{monomials} in the expression \begin{equation}\label{left} \big((z_1-z_2)^p\hspace{1pt} Y_W(T_{[n]}^{+13} (u)\mathop{\mathrm{\boldsymbol{1}}},z_1)Y_W(T_{[m]}^{+23} (v) \mathop{\mathrm{\boldsymbol{1}}},z_2)\big)\big|_{z_1= z_2 e^{z_0}}^{ \,{\rm mod\,\,} u_1^{a_1},\ldots , u_n^{a_n},v_1^{b_1},\ldots ,v_m^{b_m},h^k}\big., \end{equation} which corresponds to the first summand in \eqref{1associativitymod}. As demonstrated in the proof of Lemma \ref{lemlem2}, they coincide with the coefficients of all monomials \eqref{monomials} in the product \begin{align} &\left( \hspace{-2pt}\left( \hspace{-1pt} \left( D_{[n]}^1 R_{nm}^{+12}(x e^{h(N+c)}/y) \left(D_{[n]}^1\right)^{-1} \right) \mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}} \left( \mathcal{L}_{[n+m]}(x,y) R_{nm}^{+21} (y/ x) \right)\hspace{-1pt} \right) \Big|_{\substack{x_i =z_1e^{u_i},\,y_j =z_2e^{v_j}}}\Big.\right) \bigg|_{z_1= z_2 e^{z_0}}\bigg. \label{tren1}\\ & \times \left( (z_1 -z_2)^p \cdot \big( r(x,y) g(x,y)\big)\big|_{\substack{x_i =z_1e^{u_i},\,y_j =z_2e^{v_j}}}\big. \right) \Big|_{z_1= z_2 e^{z_0}}^{ \,{\rm mod\,\,} u_1^{a_1},\ldots , u_n^{a_n},v_1^{b_1},\ldots ,v_m^{b_m},h^k}\Big. 
\label{tren2} \end{align} for a suitably chosen integer $p\geqslant 0$ (which depends on $a_1,\ldots ,a_n,b_1,\ldots ,b_m ,k$). Recall that the functions $r$ and $g$ are given by \eqref{rfunction} and \eqref{gfunction}. First, we observe that the coefficients of all monomials \eqref{monomials} in factor \eqref{tren1} coincide with the corresponding coefficients in \begin{equation}\label{tren3} \left( D_{[n]}^1 R_{nm}^{+12}(e^{z_0 +u-v+h(N+c)}) \left(D_{[n]}^1\right)^{-1} \right) \mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}} \left(\hspace{-1pt} \bigg( \mathcal{L}_{[n+m]}(x,y) \Big|_{\substack{x_i =z_1e^{u_i}\\y_j =z_2e^{v_j}}}\Big. \bigg) \bigg|_{z_1= z_2 e^{z_0}}\bigg. R_{nm}^{+21} (e^{-z_0-u+v}) \hspace{-1pt}\right)\hspace{-1pt}\hspace{-1pt}. \end{equation} Next, we turn to factor \eqref{tren2}. Due to Lemma \ref{poleslemma}, we can assume that the integer $p$ was chosen so that the coefficients of all monomials \eqref{monomials} in factor \eqref{tren2} coincide with the coefficients of all monomials \eqref{monomials} in $$ z_2^p (e^{z_0} -1)^p \prod_{i=1}^n \prod_{j=1}^m r(e^{-z_0-u_i+v_j-h(N+c)}) g(e^{-z_0-u_i+v_j}). $$ Moreover, by \eqref{arofexg}, this is equal to \begin{equation}\label{left3} z_2^p (e^{z_0} -1)^p \psi^{2nm} \prod_{i=1}^n \prod_{j=1}^m g(e^{z_0+u_i-v_j+h(N+c)}) g(e^{-z_0-u_i+v_j}). \end{equation} Finally, we conclude that the coefficients of all monomials \eqref{monomials} in \eqref{left} coincide with the coefficients of the corresponding monomials in the product of \eqref{tren3} and \eqref{left3}. Consider the expression \begin{equation}\label{right1} z_2^p (e^{z_0} -1)^p \hspace{1pt} Y_W(Y(T_{[n]}^{+13} (u)\mathop{\mathrm{\boldsymbol{1}}},z_0)T_{[m]}^{+23} (v)\mathop{\mathrm{\boldsymbol{1}}},z_2), \end{equation} which corresponds to the second summand in \eqref{1associativitymod}. 
By \eqref{qva1} it is equal to \begin{equation}\label{exprt01} z_2^p (e^{z_0} -1)^p \hspace{1pt} Y_W( T_{[n]}^{+13} (u|z_0)T_{[n]}^{*13} (u|z_0+hc/2)^{-1} T_{[m]}^{+23} (v)\mathop{\mathrm{\boldsymbol{1}}},z_2). \end{equation} Since $T^*(u)\mathop{\mathrm{\boldsymbol{1}}} =\mathop{\mathrm{\boldsymbol{1}}}$, by combining relation \eqref{rtt3} and the first crossing symmetry relation in \eqref{csym_equiv} we obtain \begin{align} &T_{[n]}^{*13} (u|z_0+hc/2)^{-1} T_{[m]}^{+23} (v)\mathop{\mathrm{\boldsymbol{1}}}\nonumber \\ &\qquad= \left(D_{[n]}^{1} R_{nm}^{12}(e^{z_0+u-v +h(N+c)}) (D_{[n]}^{1})^{-1}\right)\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}}\left(T_{[m]}^{+23} (v)\mathop{\mathrm{\boldsymbol{1}}} R_{nm}^{12}(e^{z_0+u-v})^{-1} \right).\label{ge3} \end{align} Introduce the functions $$ g_1(u,v,z_0)=\psi^{nm}\prod_{i=1}^n \prod_{j=1}^m g(e^{z_0+u_i-v_j+h(N+c)}), \quad g_2(u,v,z_0)=\psi^{nm} \prod_{i=1}^n \prod_{j=1}^m g(e^{-z_0-u_i+v_j}). $$ By \eqref{rplusg} we have \begin{equation}\label{ge1} R_{nm}^{12}(e^{z_0+u-v +h(N+c)})=g_1(u,v,z_0)R_{nm}^{+12}(e^{z_0+u-v +h(N+c)}). \end{equation} Furthermore, by combining \eqref{rplusg} and unitarity property \eqref{uni} we find \begin{equation}\label{ge2} R_{nm}^{12}(e^{z_0+u-v })^{-1}=R_{nm}^{21}(e^{-z_0-u+v })=g_2(u,v,z_0)R_{nm}^{+21}(e^{-z_0-u+v }). 
\end{equation} Using \eqref{ge1} and \eqref{ge2} we rewrite the right hand side of \eqref{ge3} as \begin{align} &g_1(u,v,z_0)g_2(u,v,z_0) \left(D_{[n]}^{1} R_{nm}^{+12}(e^{z_0 +u-v+h(N+c)}) (D_{[n]}^{1})^{-1}\right)\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}}\left(T_{[m]}^{+23} (v)\mathop{\mathrm{\boldsymbol{1}}} R_{nm}^{+21}(e^{-z_0-u+v }) \right).\label{ge4} \end{align} Next, we employ \eqref{ge4} and then \eqref{formula} to express \eqref{exprt01} as \begin{align} &\left(D_{[n]}^{1} R_{nm}^{+12}(e^{z_0 +u-v +h(N+c)}) (D_{[n]}^{1})^{-1}\right)\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}}\Big(Y_W(T_{[n]}^{+13} (u|z_0)T_{[m]}^{+23} (v)\mathop{\mathrm{\boldsymbol{1}}},z_2) R_{nm}^{+21}(e^{-z_0-u+v })\Big)\nonumber\\ &\qquad\times z_2^p (e^{z_0} -1)^p \hspace{1pt} g_1(u,v,z_0)\hspace{1pt} g_2(u,v,z_0)\nonumber\\ =&\left(D_{[n]}^{1} R_{nm}^{+12}(e^{z_0 +u-v +h(N+c)}) (D_{[n]}^{1})^{-1}\right)\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}} \Big(\mathcal{L}_{[n+m]} (x,y) \Big|_{\substack{x_i =z_2e^{z_0+u_i}\\\hspace{-13pt}y_j =z_2e^{v_j}}}\Big. \hspace{1pt} R_{nm}^{+21}(e^{-z_0-u+v })\Big)\nonumber\\ &\qquad\times z_2^p (e^{z_0} -1)^p \hspace{1pt} g_1(u,v,z_0)\hspace{1pt} g_2(u,v,z_0).\nonumber \end{align} Note that $z_2^p (e^{z_0} -1)^p g_1(u,v,z_0)g_2(u,v,z_0)$ is equal to \eqref{left3} and that $$ \mathcal{L}_{[n+m]} (x,y) \Big|_{\substack{x_i =z_2e^{z_0+u_i}\\\hspace{-13pt}y_j =z_2e^{v_j}}}\Big. = \left( \mathcal{L}_{[n+m]}(x,y) \Big|_{\substack{x_i =z_1e^{u_i}\\y_j =z_2e^{v_j}}}\Big. \right) \bigg|_{z_1= z_2 e^{z_0}}\bigg. . $$ Therefore, the product of \eqref{tren3} and \eqref{left3} is equal to \eqref{right1}, so we conclude that the coefficients of all monomials \eqref{monomials} in \eqref{left} and in \eqref{right1} coincide, as required. \end{prf} \begin{lem}\label{lemlem4} The map $Y_W(z)$ satisfies $\widehat{\mathcal{S}}$-locality \eqref{localitymod}, i.e. 
for any $u,v\in \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$ and $k\in\mathbb{Z}_{\geqslant 0}$ there exists $p\in\mathbb{Z}_{\geqslant 0}$ such that for all $w\in W$ \begin{align} &(z_1-z_2)^{p}\hspace{1pt} Y_W(z_1)\big(1\otimes Y_W(z_2)\big)\mathop{\iota_{z_1,z_2}}\big(\widehat{\mathcal{S}}(z_1/z_2)(u\otimes v)\otimes w\big)\nonumber\\ &\qquad-(z_1-z_2)^{p}\hspace{1pt} Y_W(z_2)\big(1\otimes Y_W(z_1)\big)(v\otimes u\otimes w) \in h^k W[[z_1^{\pm 1},z_2^{\pm 1}]].\label{1localitymod} \end{align} \end{lem} \begin{prf} Let $n,m,a_1,\ldots ,a_n,b_1,\ldots ,b_m ,k>0$ be arbitrary integers, $u=(u_1,\ldots ,u_n)$ and $v=(v_1,\ldots ,v_m)$ the families of variables. We will apply $Y_W(z_1)\big(1\otimes Y_W(z_2)\big)\mathop{\iota_{z_1,z_2}}\widehat{\mathcal{S}}(z_1/z_2)$, which corresponds to the first summand in \eqref{1localitymod}, to \begin{equation}\label{elment} T_{[n]}^{+13}(u)\hspace{1pt} T_{[m]}^{+24}(v)(\mathop{\mathrm{\boldsymbol{1}}}\otimes\mathop{\mathrm{\boldsymbol{1}}}) \end{equation} and then consider the coefficients of all monomials \eqref{monomials} in the resulting expression. Note that the superscripts $1,2,3, 4$ in \eqref{elment} indicate the tensor factors as follows: $$ \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}}^{1} \otimes \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes m} }^{2} \otimes \smalloverbrace{\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)}^{3}\otimes \smalloverbrace{\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)}^{4} . 
$$ Applying the map $\widehat{\mathcal{S}}(z_1/z_2)$, given by \eqref{sop1hat}, to \eqref{elment} and then using \eqref{R2} we get \begin{align} & \widehat{G}(z_1/z_2,u,v)\left(D_{[m]}^2 R_{nm}^{ + 12}(z_1 e^{u-v-h (N+ c)}/z_2)^{-1} (D_{[m]}^2)^{-1}\right)\label{sop5} \\ & \mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} \left( R_{nm}^{ + 12}(z_1 e^{u-v}/z_2) T_{[n]}^{+13}(u) R_{nm}^{ +12}(z_1 e^{u-v+h c}/z_2) ^{-1} T_{[m]}^{+24}(v) R_{nm}^{ + 12}(z_1 e^{u-v}/z_2)(\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}})\right),\nonumber \end{align} where the function $\widehat{G}(x,u,v)$ is given by Lemma \ref{lemma4}. As we only consider the coefficients of monomials \eqref{monomials}, it is sufficient to carry out the calculations modulo $U$, where \begin{align*} &U= \sum_{i=1}^n u_i^{a_i} V + \sum_{j=1}^m v_j^{b_j} V +h^k V\qquad\text{and}\qquad \\ & V=(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes (n+m)} \otimes\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)^{\otimes 2} ((z_1))((z_2)) [[u_1,\ldots ,u_n,v_1,\ldots ,v_m,h]] . \end{align*} By Lemma \ref{lemma4}, there exists an integer $p\geqslant 0$ such that the image of the product of $(z_1-z_2)^p$ and \eqref{sop5} with respect to the map $\mathop{\iota}=\mathop{\iota_{z_1,z_2,u_1,\ldots,u_n,v_1,\ldots ,v_m}}$ coincides with \begin{align} & (z_1-z_2)^p\mathop{\iota} \widehat{H}(z_2/z_1,u,v)\left(D_{[m]}^2 R_{nm}^{ + 12}(z_1 e^{u-v-h (N+ c)}/z_2)^{-1} (D_{[m]}^2)^{-1}\right)\label{ssp5} \\ & \mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} \left( R_{nm}^{+ 12}(z_1 e^{u-v}/z_2) T_{[n]}^{+13}(u) R_{nm}^{ +12}(z_1 e^{u-v+h c}/z_2) ^{-1} T_{[m]}^{+24}(v) R_{nm}^{+ 12}(z_1 e^{u-v}/z_2)(\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}})\right)\nonumber \end{align} modulo $U$, where the function $\widehat{H}(x,u,v)$ is given by Lemma \ref{lemma4}. Note that there are only finitely many monomials \eqref{monomials}. 
Therefore, due to Lemma \ref{poleslemma}, we can assume that $p=4p_0$ for some integer $p_0\geqslant 0$ such that all coefficients of monomials \eqref{monomials} in \begin{alignat*}{2} &\mathop{\iota} \left( (z_1-z_2)^{p_0}\hspace{1pt} R_{nm}^{ 12}(z_1 e^{u-v-h (N+ c)}/z_2)^{-1}\right) ,\quad &&\mathop{\iota}\left( (z_1-z_2)^{p_0}\hspace{1pt} R_{nm}^{ 12}(z_1 e^{u-v}/z_2)\right) , \\ &\mathop{\iota} \left( (z_1-z_2)^{p_0}\hspace{1pt} R_{nm}^{ 21}(z_2 e^{-u+v-h c}/z_1)\right) ,\quad &&\mathop{\iota}\left( (z_1-z_2)^{p_0}\hspace{1pt} R_{nm}^{ 21}(z_2 e^{-u+v}/z_1)^{-1} \right) \end{alignat*} belong to $ (\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes (n+m)} ((z_1,z_2)) $. Due to the definition of the function $\widehat{H}(x,u,v)$, see in particular \eqref{fzuv2}, we conclude by \eqref{R2} and \eqref{arofexrr} that the expression in \eqref{ssp5} equals \begin{align} & (z_1-z_2)^p \mathop{\iota} \left(D_{[m]}^2 R_{nm}^{ 12}(z_1 e^{u-v-h (N+ c)}/z_2)^{-1} (D_{[m]}^2)^{-1}\right) \mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} \Big( R_{nm}^{ 12}(z_1 e^{u-v}/z_2)\Big.\label{ssp6} \\ & \Big. \times T_{[n]}^{+13}(u) R_{nm}^{ 21}(z_2 e^{-u+v-h c}/z_1) T_{[m]}^{+24}(v) R_{nm}^{ 21}(z_2 e^{-u+v}/z_1)^{-1}(\mathop{\mathrm{\boldsymbol{1}}}\otimes \mathop{\mathrm{\boldsymbol{1}}})\Big)\mod U.\nonumber \end{align} Next, we apply $Y_W(z_1)(1\otimes Y_W(z_2)) $ to \eqref{ssp6}, thus getting \begin{align} & (z_1-z_2)^p \left(D_{[m]}^2 \mathop{\iota}\left( R_{nm}^{ 12}(z_1 e^{u-v-h (N+ c)}/z_2)^{-1}\right) (D_{[m]}^2)^{-1}\right) \mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} \Big( \mathop{\iota}\left( R_{nm}^{ 12}(z_1 e^{u-v}/z_2)\right)\Big. \label{sssp7} \\ & \Big. \times \mathcal{L}_{[n]}^{13}(x)\big|_{x_i = z_1 e^{u_i}} \big. \mathop{\iota}\left( R_{nm}^{ 21}(z_2 e^{-u+v-h c}/z_1)\right) \mathcal{L}_{[m]}^{23}(y) \big|_{y_j = z_2 e^{v_j}} \big. 
\mathop{\iota}\left( R_{nm}^{ 21}(z_2 e^{-u+v}/z_1)^{-1}\right)\Big) \nonumber \end{align} modulo $U_0$, where $x=(x_1,\ldots ,x_n)$ and $y=(y_1,\ldots ,y_m)$ are the families of variables and \begin{align*} &U_0= \sum_{i=1}^n u_i^{a_i} V_0 + \sum_{j=1}^m v_j^{b_j} V_0 +h^k V_0\qquad\text{for} \\ & V_0=(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes (n+m)}\otimes \mathop{\mathrm{Hom}} (W,W[[z_1^{\pm 1},z_2^{\pm 1}]])[[u_1,\ldots ,u_n,v_1,\ldots ,v_m]]. \end{align*} By employing quantum current commutation relation \eqref{qcgen} we rewrite \eqref{sssp7} as \begin{align} & \left(D_{[m]}^2 \mathop{\iota}\left((z_1-z_2)^{p_0} R_{nm}^{ 12}(z_1 e^{u-v-h (N+ c)}/z_2)^{-1}\right) (D_{[m]}^2)^{-1}\right)\nonumber\\ &\mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} \Big( \mathop{\iota}\left((z_1-z_2)^{p_0} R_{nm}^{ 12}(z_1 e^{u-v}/z_2)\right) \mathop{\iota}\left( (z_1-z_2)^{p_0} R_{nm}^{ 12}(z_1 e^{u-v}/z_2)^{-1}\right) \Big. \label{ssp9} \\ & \Big. \times \mathcal{L}_{[m]}^{23}(y) \big|_{y_j = z_2 e^{v_j}} \big. \mathop{\iota}\left((z_1-z_2)^{p_0} R_{nm}^{ 12}(z_1 e^{u-v-hc}/z_2)\right) \mathcal{L}_{[n]}^{13}(x)\big|_{x_i = z_1 e^{u_i}} \big. \Big)\mod U_0. \nonumber \end{align} Observe that all products in \eqref{ssp9} are well-defined modulo $U_0$ due to our choice of the integer $p=4p_0$ and Remark \ref{linremark}. Canceling the $R$-matrices $ R_{nm}^{ 12}(z_1 e^{u-v}/z_2)^{\pm 1} $ and then using the following consequence of the second crossing symmetry relation in \eqref{csym}, which is verified by arguing as in Remark \ref{csrem}, $$ \left(D_{[m]}^2 R_{nm}^{ 12}(z_1 e^{u-v-h (N+ c)}/z_2)^{-1} (D_{[m]}^2)^{-1}\right) \mathop{\hspace{-2pt}\underset{\text{LR}}{\cdot}\hspace{-2pt}} R_{nm}^{ 12}(z_1 e^{u-v-hc}/z_2)=1, $$ the expression in \eqref{ssp9} simplifies to \begin{align} (z_1 -z_2)^p \hspace{1pt} \mathcal{L}_{[m]}^{23}(y) \big|_{y_j = z_2 e^{v_j}} \big.
\mathcal{L}_{[n]}^{13}(x)\big|_{x_i = z_1 e^{u_i}}\mod U_0.\label{sopa} \end{align} Finally, consider the expression which corresponds to the second summand in \eqref{1localitymod}, i.e. which is obtained by applying $(z_1 -z_2)^p \hspace{1pt} Y_W(z_2)(1\otimes Y_W(z_1))$ to $T_{[n]}^{+14}(u)\hspace{1pt} T_{[m]}^{+23}(v)(\mathop{\mathrm{\boldsymbol{1}}}\otimes\mathop{\mathrm{\boldsymbol{1}}})$. Clearly, all its coefficients with respect to monomials \eqref{monomials} coincide with the corresponding coefficients in \eqref{sopa}, so the $\widehat{\mathcal{S}}$-locality follows. \end{prf} \subsection{Establishing the restricted \texorpdfstring{$ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$}{Uh(glNhat)}-module structure}\label{sec053} Let $(W,Y_W)$ be a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$-module for some $c\in\mathbb{C}$, where $\phi(z_2,z_0) = z_2 e^{z_0}$. In this subsection, which consists of two lemmas, we finish the proof of the \hyperref[mainthm1]{Main Theorem} in the $\mathfrak{gl}_N$ case. \begin{lem}\label{lemlem5} Formula \eqref{moduleformula} defines a unique structure of restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$-module of level $c$ on $W$. \end{lem} \begin{prf} The uniqueness is clear as \eqref{moduleformula} determines the action of all generators of $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ on $W$. We now use the Jacobi-type identity given in Proposition \ref{Jacobi_prop} to check that \eqref{moduleformula} satisfies defining relation \eqref{qc} for the algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$ at the level $c$. Let $n\geqslant 0$ be an arbitrary integer. 
Choose $p\geqslant 0$ such that the expressions \begin{align} &\mathop{\iota_{z_2,z_1}}\hspace{-1pt} (z_1 -z_2)^p R_{12}(z_2 /z_1)^{-1}\hspace{1pt} T_{23}^+ (0)\hspace{1pt} R_{12}(z_2 e^{-hc}/z_1)\hspace{1pt} T_{14}^+(0) (\mathop{\mathrm{\boldsymbol{1}}}\otimes\mathop{\mathrm{\boldsymbol{1}}})\qquad\text{and}\qquad\label{jac1}\\ &\mathop{\iota_{z_1,z_2}} (z_1 -z_2)^p R_{12}(z_2 /z_1)^{-1}\hspace{1pt} T_{23}^+ (0)\hspace{1pt} R_{12}(z_2 e^{-hc}/z_1)\hspace{1pt} T_{14}^+(0) (\mathop{\mathrm{\boldsymbol{1}}}\otimes\mathop{\mathrm{\boldsymbol{1}}}), \label{jac2} \end{align} whose coefficients belong to $ (\mathop{\mathrm{End}}\mathbb{C}^N )^{\otimes 2} \otimes \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)^{\otimes 2} $, coincide modulo $ h^n$. Note that the embedding map $\mathop{\iota_{z_1,z_2}}$ in \eqref{jac2} can be omitted as both $R$-matrices are Taylor series in $z_2 /z_1$, i.e. they consist of nonnegative powers of $z_2/z_1$. Furthermore, we can assume that the integer $p$ is chosen so that expression \eqref{jac1} modulo $h^n$ is a polynomial in the variables $z_1^{\pm 1}, z_2^{\pm 1}$. Hence the embedding map $\mathop{\iota_{z_2,z_1}}\hspace{-1pt}$ can also be omitted when regarding \eqref{jac1} modulo $h^n$. Applying first term \eqref{Jacobi1} of the Jacobi identity to \eqref{jac2} we get \begin{align*} &(z_2 z)^{-1} \delta\left(\frac{z_1 -z_2}{z_2 z}\right) Y_W(z_1) (1 \otimes Y_W(z_2))\\ &\times (z_1 -z_2)^p R_{12}(z_2 /z_1)^{-1}\hspace{1pt} T_{23}^+ (0)\hspace{1pt} R_{12}(z_2 e^{-hc}/z_1)\hspace{1pt} T_{14}^+(0) (\mathop{\mathrm{\boldsymbol{1}}}\otimes\mathop{\mathrm{\boldsymbol{1}}}). \end{align*} Using \eqref{moduleformula} we rewrite this as \begin{align*} &(z_2 z)^{-1} \delta\left(\frac{z_1 -z_2}{z_2 z}\right) (z_1 -z_2)^p\hspace{1pt} R_{12}(z_2 /z_1)^{-1} \mathcal{L}_2(z_1) R_{12}(z_2 e^{-hc}/z_1) \mathcal{L}_1(z_2) .
\end{align*} Due to the well-known $\delta$-function identity, \begin{equation}\label{delta567} x\delta(x)=\delta(x), \end{equation} by multiplying by $(z_2 z)^{-p}$ and then taking the residue $\mathop{\mathrm{Res}}_{z_2 z}$ we obtain \begin{equation}\label{jac3} R_{12}(z_2 /z_1)^{-1} \hspace{1pt}\mathcal{L}_2(z_1)\hspace{1pt} R_{12}(z_2 e^{-hc}/z_1)\hspace{1pt} \mathcal{L}_1(z_2). \end{equation} We now turn to second term \eqref{Jacobi2} of the Jacobi identity. Choose $r\geqslant 0$ such that \begin{alignat*}{2} &A(z_1,z_2)\coloneqq\mathop{\iota_{z_2,z_1}}\hspace{-1pt} (z_1 - z_2)^r R_{12}(z_2e^{hc}/z_1)^{-1}, \quad &&B(z_1,z_2)\coloneqq\mathop{\iota_{z_2,z_1}}\hspace{-1pt} (z_1 - z_2)^r \psi^2 R_{21}(z_1e^{-hc}/z_2),\\ &C(z_1,z_2)\coloneqq\mathop{\iota_{z_2,z_1}}\hspace{-1pt} (z_1 - z_2)^r R_{12}(z_2/z_1), \quad &&D(z_1,z_2)\coloneqq\mathop{\iota_{z_2,z_1}}\hspace{-1pt} (z_1 - z_2)^r \psi^{-2} R_{21}(z_1/z_2)^{-1} \end{alignat*} belong to $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes 2} ((z_1,z_2))$ modulo $h^n$. As with \eqref{jac2}, observe that the embedding map $\mathop{\iota_{z_2,z_1}}\hspace{-1pt}$ can be omitted in the definitions of $B(z_1,z_2)$ and $D(z_1,z_2)$ above. By combining \eqref{rplusg} and unitarity property \eqref{uni} we find $$ A(z_1,z_2)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} h^n}\big. =B(z_1,z_2)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} h^n}\big. \quad\text{and}\quad C(z_1,z_2)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} h^n}\big. =D(z_1,z_2)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} h^n}\big. . $$ Therefore, by the implication in \eqref{rem28} we conclude that \begin{equation}\label{jac4} A(z_1,z_2)=B(z_1,z_2)\mod h^n\qquad\text{and}\qquad C(z_1,z_2)=D(z_1,z_2)\mod h^n. \end{equation} Consider \eqref{jac1} modulo $h^n$. 
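As a brief aside, identity \eqref{delta567} and the iterated versions of it used below can be checked directly; the computation is standard, assuming the usual expansion of the formal delta function as a two-sided power series:

```latex
% A standard verification, assuming the usual formal delta function
% \delta(x) = \sum_{r \in \mathbb{Z}} x^r; multiplication by an integer
% power of x merely reindexes the two-sided sum:
x^k\hspace{1pt}\delta(x)
  = \sum_{r\in\mathbb{Z}} x^{r+k}
  = \sum_{r\in\mathbb{Z}} x^{r}
  = \delta(x)
  \qquad\text{for all } k\in\mathbb{Z}.
```

In particular, taking $x=(z_1-z_2)/(z_2 z)$ and $k=p$ converts the factor $(z_1-z_2)^p$ into $(z_2 z)^p$, which explains the multiplication by $(z_2 z)^{-p}$ above.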
Applying second term \eqref{Jacobi2} of the Jacobi identity we get\footnote{Note that, in contrast with \eqref{Jacobi1} and \eqref{Jacobi3}, the vectors $u$ and $v$ in \eqref{Jacobi2} are swapped, so that in \eqref{delta1234} we have $T_{24}^+ (0)$ and $T_{13}^+(0)$ instead of $T_{23}^+ (0)$ and $T_{14}^+(0)$.} \begin{align} &-(z_2 z)^{-1} \delta\left(\frac{z_2-z_1}{-z_2 z}\right) Y_W(z_2)(1\otimes Y_W(z_1)) \mathop{\iota_{z_2,z_1}}\hspace{-1pt}\big( \widehat{\mathcal{S}}(z_2 /z_1)\big. \nonumber\\ &\times (z_1 -z_2)^p R_{12}(z_2 /z_1)^{-1}\hspace{1pt} T_{24}^+ (0)\hspace{1pt} R_{12}(z_2 e^{-hc}/z_1)\hspace{1pt} T_{13}^+(0) (\mathop{\mathrm{\boldsymbol{1}}}\otimes\mathop{\mathrm{\boldsymbol{1}}})\big.\big)\mod h^n .\label{delta1234} \end{align} By using explicit formula \eqref{qva2hat} for the map $\widehat{\mathcal{S}}(x)$ we rewrite \eqref{delta1234} as \begin{align*} &-(z_2 z)^{-1} \delta\left(\frac{z_2-z_1}{-z_2 z}\right) Y_W(z_2)(1\otimes Y_W(z_1)) \\ &\times (z_1 -z_2)^p\hspace{1pt} T_{13}^+(0)\mathop{\iota_{z_2,z_1}}\hspace{-1pt}\left( R_{12}(z_2e^{hc} /z_1)^{-1}\right) T_{24}^+ (0)\mathop{\iota_{z_2,z_1}}\hspace{-1pt}\left( R_{12}(z_2 /z_1) \right) (\mathop{\mathrm{\boldsymbol{1}}}\otimes\mathop{\mathrm{\boldsymbol{1}}})\mod h^n . 
\end{align*} Next, the application of \eqref{moduleformula} gives us \begin{align} &-(z_2 z)^{-1} \delta\left(\frac{z_2-z_1}{-z_2 z}\right) (z_1 -z_2)^p \mathcal{L}_1 (z_2)\nonumber \\ &\times \mathop{\iota_{z_2,z_1}}\hspace{-1pt}\left( R_{12}(z_2e^{hc} /z_1)^{-1}\right) \mathcal{L}_2 (z_1)\mathop{\iota_{z_2,z_1}}\hspace{-1pt}\left( R_{12}(z_2 /z_1) \right) \mod h^n .\label{delta6778} \end{align} Note that \eqref{delta567} implies $$ \delta\left(\frac{z_2-z_1}{-z_2 z}\right)=\frac{(z_2-z_1)^{r}}{(-z_2 z)^{r}}\delta\left(\frac{z_2-z_1}{-z_2 z}\right), $$ so that we can use both equalities in \eqref{jac4} to rewrite \eqref{delta6778} as \begin{align*} &-(z_2 z)^{-1} \delta\left(\frac{z_2-z_1}{-z_2 z}\right) (z_1 -z_2)^p \mathcal{L}_1 (z_2)\hspace{1pt} R_{21}(z_1e^{-hc} /z_2)\hspace{1pt} \mathcal{L}_2 (z_1)\hspace{1pt} R_{21}(z_1 /z_2)^{-1} \mod h^n, \end{align*} where the embedding maps $\mathop{\iota_{z_2,z_1}}\hspace{-1pt}$ are omitted as both $R$-matrices consist of nonnegative powers of $z_1/z_2$. Finally, multiplying by $(z_2 z)^{-p}$ and taking the residue $\mathop{\mathrm{Res}}_{z_2 z}$ we get \begin{equation}\label{jac5} - \mathcal{L}_1 (z_2)\hspace{1pt} R_{21}(z_1e^{-hc} /z_2)\hspace{1pt} \mathcal{L}_2 (z_1)\hspace{1pt} R_{21}(z_1 /z_2)^{-1} \mod h^n. \end{equation} By applying third term \eqref{Jacobi3} of the Jacobi identity to \eqref{jac1} we get \begin{align} &z_1^{-1}\delta\left(\frac{z_2 (1+z)}{z_1}\right) Y_W (z_2)\left(Y(\log(1+z))\otimes 1\right)\nonumber\\ &\times\mathop{\iota_{z_2,z_1}}\hspace{-1pt} (z_1 -z_2)^p R_{12}(z_2 /z_1)^{-1}\hspace{1pt} T_{23}^+ (0)\hspace{1pt} R_{12}(z_2 e^{-hc}/z_1)\hspace{1pt} T_{14}^+(0) (\mathop{\mathrm{\boldsymbol{1}}}\otimes\mathop{\mathrm{\boldsymbol{1}}})\mod h^n\label{jac7}. 
\end{align} As before, by \eqref{rplusg} and unitarity property \eqref{uni} there exists $s\geqslant 0$ such that \begin{equation}\label{temp_uni8} \mathop{\iota_{z_2,z_1}}\hspace{-1pt} (z_1-z_2)^s R_{12}(z_2 e^{-hc}/z_1) = \mathop{\iota_{z_2,z_1}}\hspace{-1pt} (z_1-z_2)^s \psi^{-2} R_{21}(z_1 e^{hc}/z_2)^{-1} \mod h^n. \end{equation} Using the $\delta$-function identities \begin{equation}\label{theiid} \left(\frac{z_1}{z_2}\right)^l \delta\left(\frac{z_2(1+z)}{z_1}\right)=(1+z)^l\,\delta\left(\frac{z_2(1+z)}{z_1}\right), \end{equation} which follow directly from \eqref{delta567}, one can easily derive $$ \delta\left(\frac{z_2(1+z)}{z_1}\right)=\frac{(z_1-z_2)^k}{(z_2 z)^k}\delta\left(\frac{z_2(1+z)}{z_1}\right), $$ in particular for $k=p,s$. Therefore, we can employ \eqref{temp_uni8} to rewrite \eqref{jac7} as \begin{align*} & \psi^{-2} z_1^{-1}\delta\left(\frac{z_2 (1+z)}{z_1}\right) ( z_2 z)^{p} \hspace{1pt} Y_W (z_2)\left(Y(\log(1+z))\otimes 1\right) \\ &\times \mathop{\iota_{z_2,z_1}}\hspace{-1pt}\left( R_{12}(z_2 /z_1)^{-1}\right) T_{23}^+ (0)\mathop{\iota_{z_2,z_1}}\hspace{-1pt}\left( R_{21}(z_1 e^{hc}/z_2)^{-1}\right) T_{14}^+(0) (\mathop{\mathrm{\boldsymbol{1}}}\otimes\mathop{\mathrm{\boldsymbol{1}}})\mod h^n. \end{align*} Next, using definition \eqref{qva1} of the vertex operator map and \eqref{theiid} we get \begin{align*} &\psi^{-2} z_1^{-1}\delta\left(\frac{z_2 (1+z)}{z_1}\right) ( z_2 z)^{p} \hspace{1pt} Y_W (z_2) \mathop{\iota_{z_2,z_1}}\hspace{-1pt}\left( R_{12}(z_2 /z_1)^{-1}\right)\\ &\times T_{23}^+ (\log(1+z))\hspace{1pt} T_{23}^* (\log(1+z) +hc/2)^{-1} \hspace{1pt} R_{21}((1+z) e^{hc})^{-1} \hspace{1pt} T_{13}^+(0) \mathop{\mathrm{\boldsymbol{1}}} \mod h^n. 
\end{align*} Finally, we use relation \eqref{rtt3} to swap the operators $T^*_{23}$ and $T^+_{13}$, and then we employ the identity $T^*(x)\mathop{\mathrm{\boldsymbol{1}}} =\mathop{\mathrm{\boldsymbol{1}}}$, thus getting \begin{align} & \psi^{-2} z_1^{-1}\delta\left(\frac{z_2 (1+z)}{z_1}\right) ( z_2 z)^{p} \hspace{1pt} Y_W (z_2) \mathop{\iota_{z_2,z_1}}\hspace{-1pt}\left( R_{12}(z_2 /z_1)^{-1}\right) \nonumber\\ &\times T_{23}^+ (\log(1+z))\hspace{1pt} T_{13}^+(0) \mathop{\iota_{z_2,z_1}}\hspace{-1pt}\left( R_{21}(z_1 /z_2)^{-1}\right) \mathop{\mathrm{\boldsymbol{1}}} \mod h^n. \label{thisistheend} \end{align} It is clear that the application of the module map $Y_W (z_2)$ in \eqref{thisistheend} will not produce any negative powers of the variable $z$. Therefore, multiplying \eqref{thisistheend} by $(z_2 z)^{-p}$ and then taking the residue $\mathop{\mathrm{Res}}_{z_2 z}$ we obtain $0\, \,{\rm mod\,\,} h^n$. Hence combining the Jacobi-type identity from Proposition \ref{Jacobi_prop} with \eqref{jac3} and \eqref{jac5} we obtain the equality \begin{align*} & \mathcal{L}_1 (z_2)\hspace{1pt} R_{21}(z_1e^{-hc} /z_2)\hspace{1pt} \mathcal{L}_2 (z_1)\hspace{1pt} R_{21}(z_1 /z_2)^{-1} \\ & - R_{12}(z_2 /z_1)^{-1} \hspace{1pt}\mathcal{L}_2(z_1)\hspace{1pt} R_{12}(z_2 e^{-hc}/z_1)\hspace{1pt} \mathcal{L}_1(z_2) =0 \mod h^n \end{align*} for operators on $W$. As the integer $n$ was arbitrary, we conclude that the given equality holds modulo $h^n$ for all $n\geqslant 0$, i.e. it holds exactly. Hence we have proved that \eqref{moduleformula} satisfies quantum current commutation relation \eqref{qc}, so that it defines the structure of $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$-module of level $c$ on $W$, as required. In the end, in order to finish the proof, it remains to observe that $W$ is a topologically free $\mathbb{C}[[h]]$-module and, furthermore, a restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$-module by Definition \ref{phimod}.
\end{prf} The next lemma completes the proof of the \hyperref[mainthm1]{Main Theorem} for $\mathfrak{g}_N =\mathfrak{gl}_N$. \begin{lem}\label{lemlem6} A topologically free $\mathbb{C}[[h]]$-submodule $W_1$ of $W$ is a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c} (\mathfrak{gl}_N)$-submodule of $W$ if and only if it is a $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$-submodule of $W$. \end{lem} \begin{prf} Suppose that $W_1$ is a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$-submodule of $W$. Then $$\mathcal{L} (z)w=Y_W(T^+(0)\mathop{\mathrm{\boldsymbol{1}}},z)w \,\in\, \mathop{\mathrm{End}}\mathbb{C}^N \otimes W_1 ((z))[[h]]\quad\text{for any }w\in W_1,$$ so $W_1$ is clearly a $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$-submodule of $W$. Conversely, suppose that $W_1$ is a topologically free $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$-submodule of $W$. Clearly, $W_1$ is a restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$-module of level $c$, so by Proposition \ref{restricted496} we have $$ \mathcal{L}_{[n]}(x_1,\ldots ,x_n)w\,\in\,\left(\mathop{\mathrm{End}}\mathbb{C}^N\right)^{\otimes n} \otimes W_1 ((x_1,\ldots ,x_n))[[h]]\quad\text{for all }n\geqslant 1\text{ and } w\in W_1. $$ Applying the substitutions $x_i=ze^{u_i}$ with $i=1,\ldots ,n$ we get $$ \mathcal{L}_{[n]}(x)\big|_{x_i = ze^{u_i}}\big. w = Y_W(T_{[n]}^+(u)\mathop{\mathrm{\boldsymbol{1}}},z) w \,\in\,\left(\mathop{\mathrm{End}}\mathbb{C}^N\right)^{\otimes n} \otimes W_1 ((z))[[u_1,\ldots ,u_n,h]]. $$ By \cite[Sect. 3.4]{EK3}, see also \cite[Prop.
2.4]{KM}, the coefficients of all matrix entries of $T_{[n]}^+(u)$, $n\geqslant 1$, and $\mathop{\mathrm{\boldsymbol{1}}}$ span an $h$-adically dense $\mathbb{C}[[h]]$-submodule of $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$, so we conclude that $W_1$ is a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$-submodule of $W$, as required. \end{prf} \subsection{Proof of the \texorpdfstring{\hyperref[mainthm1]{Main Theorem}}{Main Theorem} in the \texorpdfstring{$\mathfrak{sl}_N$}{slN} case}\label{sec054} For any integer $n=1,\ldots ,N$ set $$ u_{[n]}=(u,u-h,\ldots ,u-(n-1)h)\quad\text{and}\quad x_{[n]}=(x,xe^{-h},\ldots ,xe^{-(n-1)h}) . $$ Let $ P^{(n)}\colon x_1\otimes\ldots \otimes x_n \mapsto x_n\otimes\ldots \otimes x_1 $ be the permutation operator on $(\mathbb{C}^N)^{\otimes n}$. Write \begin{align*} &\mathcal{L}_{[n]}(x_{[n]})=\mathcal{L}_{[n]}(x_1,\ldots ,x_n)\big|_{x_1 = x,\ldots, x_n = xe^{-(n-1)h}}\big. , \\ &\cev{\mathcal{L}}_{[n]} (x_{[n]}) =P^{(n)}\, \mathcal{L}_{[n]}(x_n,\ldots ,x_1)\big|_{x_1 = x,\ldots, x_n = xe^{-(n-1)h}}\big. \, P^{(n)} . \end{align*} We first list some useful properties of the anti-symmetrizer $A^{(n)}$ defined by \eqref{anti}. \begin{lem} \label{profuzion} For any $n=1,\ldots ,N$ we have \begin{gather} A^{(n)}\hspace{1pt} \mathcal{L}_{[n]}(x_{[n]}) = \cev{\mathcal{L}}_{[n]}(x_{[n]}) \hspace{1pt} A^{(n)} ,\label{497c}\\ A^{(n)}\hspace{1pt} D_1\ldots D_n = D_1\ldots D_n \hspace{1pt} A^{(n)} \label{497a},\\ A^{(N)}\hspace{1pt} \overline{R}_{1N}^{12}(y/x_{[N]})= \cev{\overline{R}}_{1N}^{12}(y/x_{[N]})\hspace{1pt} A^{(N)}= A^{(N)} e^{-(N-1)h/2} \hspace{1pt}\frac{x-e^{(N-1)h}y}{x-y},\label{497d} \end{gather} where the arrow in $\cev{\overline{R}}_{1N}^{12}(y/x_{[N]})$ indicates the reversed order of factors. 
The coefficients in \eqref{497d} belong to $\mathop{\mathrm{End}}\mathbb{C}^N \otimes(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes N}[[h]]$ and the anti-symmetrizer $A^{(N)}$ is applied on the tensor factors $2,\ldots ,N+1$. \end{lem} \begin{prf} Equality \eqref{497c} is verified by using Yang--Baxter equation \eqref{YBE}, generalized quantum current commutation relation \eqref{qcgen} and the following case of the fusion procedure for the two-parameter $R$-matrix $ {\overline{R}}(x,y)=(xe^{-h/2}-ye^{h/2}) {\overline{R}}(x/y)$ going back to \cite{C}, \begin{equation}\label{fuzion} \prod_{i=1,\ldots ,n-1}^{\longrightarrow}\prod_{j=i+1,\ldots ,n}^{\longrightarrow} {\overline{R}}_{ij}(xe^{-(i-1)h},xe^{-(j-1)h}) = n!\, x^{\frac{n(n-1)}{2}} \prod_{0\leqslant i< j\leqslant n-1} (e^{-ih}-e^{-jh})\hspace{1pt} A^{(n)}. \end{equation} Equality \eqref{497a} follows from the identities $$ D_1 D_2 =D_2 D_1\quad\text{and}\quad {\overline{R}}(x,y) D_1 D_2 =D_2 D_1 {\overline{R}}(x,y) $$ while \eqref{497d} is established in the proof of \cite[Lemma 4.3]{FJMR}. \end{prf} Let $W$ be a restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{sl}}_N)$-module of level $c\in \mathbb{C}$. \begin{lem}\label{lemlem11} Formula \eqref{formula}, together with $Y_W(\mathop{\mathrm{\boldsymbol{1}}},z)=1_W$, defines a unique structure of $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{sl}_N)$-module on $W$, where $\phi(z_2,z_0) = z_2 e^{z_0}$. \end{lem} \begin{prf} In order to prove the lemma, it is sufficient to verify that \eqref{formula}, together with $Y_W(\mathop{\mathrm{\boldsymbol{1}}},z)=1_W$, defines a $\mathbb{C}[[h]]$-module map $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{sl}_N)\otimes W \to W((z))[[h]]$. Indeed, all other properties of the aforementioned map are recovered by arguing as in the $\mathfrak{g}_N =\mathfrak{gl}_N$ case; see Subsection \ref{sec052}. 
Therefore, we have to show that the map $v\mapsto Y_W (v,z)$ preserves the ideal of relations \eqref{rtt} and \eqref{qdetvoa2}. However, it is sufficient to consider \eqref{qdetvoa2} as relations \eqref{rtt} are already taken care of in the proof of Lemma \ref{lemlem1}. Let $n$ and $m$ be nonnegative integers. Introduce the families of variables $v=(v_1,\ldots ,v_n)$ and $w=(w_1,\ldots ,w_m)$. Consider the image of the expression \begin{equation}\label{231} T_{[n]}^{+13}(v) \hspace{1pt} {\rm qdet}\hspace{1pt} T^+(u)\hspace{1pt} T_{[m]}^{+23}(w)\mathop{\mathrm{\boldsymbol{1}}} \in (\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes (n+m)} \otimes \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{sl}_N)[[v_1,\ldots, v_n,u,w_1,\ldots ,w_m]] \end{equation} with respect to $Y_W(z)$. Introduce the tensor product \begin{equation}\label{232} \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes n}}^{1} \otimes \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes N}}^{2} \otimes \smalloverbrace{(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes m}}^{3} \otimes \smalloverbrace{\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{sl}_N)}^{4} \end{equation} and write $$ A^{(N)}_2=1^{\otimes n}\otimes A^{(N)} \otimes 1^{\otimes m}\quad\text{and}\quad D_{[N]}^2=1^{\otimes n}\otimes D_{[N]} \otimes 1^{\otimes m}=D_{n+1}\ldots D_{n+N}. $$ By the definition of the quantum determinant given by \eqref{qdetvoa}, using the labels in \eqref{232} to indicate the corresponding tensor factors, \eqref{231} can be expressed as \begin{equation}\label{233} {\rm tr}_{n+1,\ldots , n+N}\,A^{(N)}_2\hspace{1pt} T_{[n]}^{+14}(v) \hspace{1pt} T_{[N]}^{+24}(u_{[N]})\hspace{1pt} T_{[m]}^{+34}(w)\hspace{1pt} D_{[N]}^2 \mathop{\mathrm{\boldsymbol{1}}} .
\end{equation} By \eqref{formula}, the image of \eqref{233} with respect to $Y_W(z)$ equals \begin{equation}\label{234} {\rm tr}_{n+1,\ldots , n+N}\,A^{(N)}_2\hspace{1pt} \mathcal{L}_{[n+N+m]}(\overline{x},x_{[N]},\overline{y})\Big|_{\substack{x_1 = ze^{v_1},\,\ldots, \, x_n = ze^{v_n},\, x=ze^u\\ y_1 = ze^{w_1},\, \ldots,\, y_m = ze^{w_m}}}\Big. \hspace{1pt} D_{[N]}^2, \end{equation} where $ \overline{x}=(x_1,\ldots ,x_n)$ and $\overline{y}=(y_1,\ldots ,y_m). $ Using generalized quantum current commutation relation \eqref{qcgen} we transform $A^{(N)}_2\mathcal{L}_{[n+N+m]}(\overline{x},x_{[N]},\overline{y})$ and bring it to the form \begin{align} &A^{(N)}_2\hspace{1pt} \mathcal{L}_{[n]}^{14}(\overline{x})\hspace{1pt} R_{nN}^{21}(x_{[N]}e^{-hc}/\overline{x})\hspace{1pt} R_{nm}^{31}(\overline{y}e^{-hc}/\overline{x})\hspace{1pt} \mathcal{L}_{[N]}^{24}(x_{[N]})\nonumber \\ &\times R_{Nm}^{32}(\overline{y}e^{-hc}/x_{[N]})\hspace{1pt} \mathcal{L}_{[m]}^{34}(\overline{y})\hspace{1pt} R_{Nm}^{32}(\overline{y}/x_{[N]})^{-1}\hspace{1pt} R_{nm}^{31}(\overline{y} /\overline{x})^{-1}\hspace{1pt} R_{nN}^{21}(x_{[N]} /\overline{x})^{-1}\label{235}. \end{align} By employing \eqref{fqhqf} and \eqref{497d} one can verify the following identities: $$ \begin{aligned} &A^{(N)}_2\hspace{1pt} R_{nN}^{21}(x_{[N]}e^{-hc}/\overline{x}) = e^{-n(N-1)h/2} A^{(N)}_2,\quad& &A^{(N)}_2\hspace{1pt} R_{nN}^{21}(x_{[N]} /\overline{x})^{-1} = e^{n(N-1)h/2} A^{(N)}_2,\\ &A^{(N)}_2\hspace{1pt} R_{Nm}^{32}(\overline{y}e^{-hc}/x_{[N]}) = e^{-m(N-1)h/2} A^{(N)}_2,\quad& &A^{(N)}_2\hspace{1pt} R_{Nm}^{32}(\overline{y}/x_{[N]})^{-1} = e^{m(N-1)h/2} A^{(N)}_2.
\end{aligned} $$ As the anti-symmetrizer $A^{(N)}_2$ commutes with the terms $R_{nm}^{31}(\overline{y}e^{-hc}/\overline{x})$, $R_{nm}^{31}(\overline{y} /\overline{x})^{-1}$, $\mathcal{L}_{[n]}^{14}(\overline{x})$ and $\mathcal{L}_{[m]}^{34}(\overline{y})$, by combining the above identities and \eqref{497c}, we rewrite \eqref{235} as \begin{align}\label{237} \mathcal{L}_{[n]}^{14}(\overline{x})\hspace{1pt} R_{nm}^{31}(\overline{y}e^{-hc}/\overline{x})\hspace{1pt} A^{(N)}_2\hspace{1pt} \mathcal{L}_{[N]}^{24}(x_{[N]})\hspace{1pt} \mathcal{L}_{[m]}^{34}(\overline{y})\hspace{1pt} R_{nm}^{31}(\overline{y} /\overline{x})^{-1} . \end{align} Note that the expression in \eqref{234} is obtained from \eqref{237} by applying the substitutions $$ x_1 = ze^{v_1},\,\ldots,\, x_n = ze^{v_n},\, x=ze^u,\,y_1 = ze^{w_1},\,\ldots,\, y_m = ze^{w_m}, $$ then multiplying by $D_{[N]}^2$ from the right and, finally, taking the trace $ {\rm tr}_{n+1,\ldots , n+N}$. However, as $D_{[N]}^2$ commutes with the terms $\mathcal{L}_{[m]}^{34}(\overline{y})$ and $R_{nm}^{31}(\overline{y} /\overline{x})^{-1}$, it is clear that applying the aforementioned transformations to \eqref{237} and using the definition of the quantum determinant \eqref{qdet497} results in \begin{align} &\mathcal{L}_{[n]}^{13}(\overline{x})\hspace{1pt} R_{nm}^{21}(\overline{y}e^{-hc}/\overline{x})\hspace{1pt} {\rm qdet}\hspace{1pt}\mathcal{L} (x)\hspace{1pt} \mathcal{L}_{[m]}^{23}(\overline{y})\hspace{1pt} R_{nm}^{21}(\overline{y} /\overline{x})^{-1}\Big|_{\substack{x_1 = ze^{v_1},\,\ldots,\, x_n = ze^{v_n},\,x=ze^u \\ y_1 = ze^{w_1},\,\ldots,\, y_m = ze^{w_m}}}\Big. ,\label{238} \end{align} where, due to the application of the trace, the tensor factors in \eqref{238} are now labeled in accordance with \eqref{231}.
As $ {\rm qdet}\hspace{1pt}\mathcal{L} (x)=1$ in $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{sl}}_N)$ we conclude by quantum current commutation relation \eqref{qcgen} that \eqref{238} is equal to \begin{align*} \mathcal{L}_{[n+m]} (\overline{x},\overline{y}) \Big|_{\substack{x_1 = ze^{v_1},\,\ldots,\, x_n = ze^{v_n} \\ y_1 = ze^{w_1},\,\ldots,\, y_m = ze^{w_m}}}\Big. =Y_W(T_{[n]}^{+13}(v) \hspace{1pt} T_{[m]}^{+23}(w)\mathop{\mathrm{\boldsymbol{1}}},z) . \end{align*} Therefore, the images of \eqref{231} and $T_{[n]}^{+13}(v) T_{[m]}^{+23}(w)\mathop{\mathrm{\boldsymbol{1}}}$ with respect to $Y_W(z)$ coincide, so we conclude that the $\mathbb{C}[[h]]$-module map $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{sl}_N)\otimes W \to W((z))[[h]]$ is well-defined by \eqref{formula}, as required. \end{prf} Let $(W,Y_W)$ be a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{sl}_N)$-module for some $c\in\mathbb{C}$, where $\phi(z_2,z_0) = z_2 e^{z_0}$. In order to prove that \eqref{moduleformula} defines a unique structure of restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{sl}}_N)$-module of level $c$ on $W$, we need the following identity. \begin{lem}\label{lemlemC} For any positive integer $n$ the identity \begin{align} Y_W\left(T_1^+((n-1)h)T_2^+((n-2)h)\ldots T_n^+(0)\mathop{\mathrm{\boldsymbol{1}}}, ze^{-(n-1)h}\right)&\nonumber\\ \qquad=\mathcal{L}_{[n]}(x_1,x_2,\ldots ,x_n)\big|_{x_1 = z , x_2=ze^{-h},\ldots, x_n = ze^{-(n-1)h}}\big. & \label{id460} \end{align} holds for operators on $W$, where the action of $\mathcal{L}_{[n]}(x_1,\ldots ,x_n)$ on $W$ is given by formula \eqref{Ln} with $\mathcal{L}(x)=Y_W(T^+(0)\mathop{\mathrm{\boldsymbol{1}}},x)$. \end{lem} \begin{prf} We derive \eqref{id460} using the weak associativity property. Let $k$ be a positive integer.
By \eqref{associativitymod0} and \eqref{associativitymod} there exists an integer $p\geqslant 0$ such that $$ (z_1 -z_2)^p\hspace{1pt} Y_W(T_1^+(0)\mathop{\mathrm{\boldsymbol{1}}},z_1)\hspace{1pt} Y_W(T_2^+(0)\mathop{\mathrm{\boldsymbol{1}}},z_2) =(z_1 -z_2)^p\hspace{1pt} \mathcal{L}_1(z_1)\hspace{1pt} \mathcal{L}_2(z_2) $$ belongs to $(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes 2}\otimes\mathop{\mathrm{Hom}} (W,W((z_1,z_2)))$ modulo $h^k$ and such that \begin{equation}\label{lab1} \left((z_1 -z_2)^p\hspace{1pt} \mathcal{L}_1(z_1)\hspace{1pt} \mathcal{L}_2(z_2)\right) \big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} h^k} \big. \quad\text{and}\quad z_2^p (e^{z_0}-1)^p\hspace{1pt} Y_W(Y(T_1^+(0)\mathop{\mathrm{\boldsymbol{1}}},z_0)T_2^+(0)\mathop{\mathrm{\boldsymbol{1}}},z_2) \end{equation} coincide modulo $h^k$. Using relation \eqref{rtt3} and then the first crossing symmetry property in \eqref{csym_equiv} we express the second term in \eqref{lab1} as \begin{equation}\label{ned4} z_2^p (e^{z_0}-1)^p\hspace{1pt} (D_1 R(e^{z_0+h(c+N)})D_1^{-1})\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}}\left(Y_W(T_1^+(z_0) T_2^+(0)\mathop{\mathrm{\boldsymbol{1}}},z_2)\hspace{1pt} R(e^{z_0})^{-1}\right). \end{equation} The first crossing symmetry property in \eqref{csym_equiv} and unitarity \eqref{uni} imply the identities $$ R_{21}(e^{-z_0-hc})\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}} (D_1 R(e^{z_0+h(c+N)})D_1^{-1}) =1 \quad\text{and}\quad R(e^{z_0})^{-1}R_{21}(e^{-z_0})^{-1} =1, $$ which enable us to move the $R$-matrices appearing in \eqref{ned4} from the second term in \eqref{lab1} to the first term in \eqref{lab1}. 
Hence we find that \begin{gather} \left(R_{21}(e^{-z_0-hc})\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}} \left((z_1 -z_2)^p\hspace{1pt} \mathcal{L}_1(z_1)\hspace{1pt} \mathcal{L}_2(z_2)\right)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} h^k} \big.\right) R_{21}(e^{-z_0})^{-1}\label{lab2} \\ \text{and}\quad z_2^p (e^{z_0}-1)^p\hspace{1pt} Y_W(T_1^+(z_0) T_2^+(0)\mathop{\mathrm{\boldsymbol{1}}},z_2) \label{lab3} \end{gather} coincide modulo $h^k$. Without loss of generality we can assume that the integer $p$ is sufficiently large, so that we conclude by Lemma \ref{poleslemma} that \eqref{lab2} is equal to \begin{align} &\left((z_1 -z_2)^p\hspace{1pt} \mathcal{L}_1(z_1)\hspace{1pt} R_{21}(z_2 e^{-hc}/z_1)\hspace{1pt} \mathcal{L}_2(z_2) \hspace{1pt} R_{21}(z_2/z_1)^{-1}\right)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} h^k} \big. .\label{mpim} \end{align} By employing \eqref{Ln} for $n=2$ and the relation $$\mathcal{L}_{[2]}(z_1,z_2)\in(\mathop{\mathrm{End}} \mathbb{C}^N)^{\otimes 2}\otimes \mathop{\mathrm{Hom}} (W,W((z_1,z_2))[[h]]),$$ which is verified by arguing as in the proof of Proposition \ref{restricted496}, we rewrite \eqref{mpim} as \begin{align} \left((z_1 -z_2)^p\hspace{1pt} \mathcal{L}_{[2]}(z_1,z_2) \right)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} h^k} \big. =z_2^p (e^{z_0}-1)^p \left( \mathcal{L}_{[2]}(z_1,z_2) \right)\big|_{z_1=z_2 e^{z_0}}^{ \,{\rm mod\,\,} h^k} \big. .\label{lab4} \end{align} Thus we proved that \eqref{lab3} and \eqref{lab4} coincide modulo $h^k$. Hence, multiplying \eqref{lab3} and \eqref{lab4} by $z_2^{-p} (e^{z_0}-1)^{-p}$ we find that $$ \mathcal{L}_{[2]}(z_1,z_2) \big|_{z_1=z_2 e^{z_0}} \quad\text{and}\quad Y_W(T_1^+(z_0) T_2^+(0)\mathop{\mathrm{\boldsymbol{1}}},z_2) $$ coincide modulo $h^k$. 
Moreover, by setting $z_0 =h$ and $z_2=ze^{-h}$ we conclude that $$ \mathcal{L}_{[2]}(z_1,z_2) \big|_{z_1=z,\, z_2=ze^{-h}} \quad\text{and}\quad Y_W(T_1^+(h) T_2^+(0)\mathop{\mathrm{\boldsymbol{1}}},ze^{-h}) $$ coincide modulo $h^k$. As the integer $k>0$ was arbitrary, this implies equality \eqref{id460} for $n=2$. The general case is proved by induction on $n$. \end{prf} The next two lemmas complete the proof of the \hyperref[mainthm1]{Main Theorem} for $\mathfrak{g}_N =\mathfrak{sl}_N$. The second lemma follows by the same arguments as for Lemma \ref{lemlem6}, so we omit its proof. \begin{lem}\label{lemlemD} Formula \eqref{moduleformula} defines a unique structure of restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{sl}}_N)$-module of level $c$ on $W$. \end{lem} \begin{prf} Due to the proof of Lemma \ref{lemlem5}, it is sufficient to verify the equality $ {\rm qdet}\hspace{1pt}\mathcal{L}(z)=1$ on $W$, where the action of $\mathcal{L}(z)$ on $W$ is given by \eqref{moduleformula}. By \eqref{qdet497} and \eqref{id460}, the action of the quantum determinant of $\mathcal{L}(z)$ on $W$ is given by \begin{align*} &\, {\rm tr}_{1,\ldots, N}\, A^{(N)}\hspace{1pt} Y_W\left(T_1^+((N-1)h)T_2^+((N-2)h)\ldots T_N^+(0)\mathop{\mathrm{\boldsymbol{1}}}, ze^{-(N-1)h}\right)\hspace{1pt} D_1\ldots D_N\\ =&\, Y_W\left( {\rm tr}_{1,\ldots ,N}\, A^{(N)}\hspace{1pt} T_1^+((N-1)h)T_2^+((N-2)h)\ldots T_N^+(0)\mathop{\mathrm{\boldsymbol{1}}} \hspace{1pt} D_1\ldots D_N, ze^{-(N-1)h}\right). \end{align*} By applying \eqref{qdetvoa} with $u=(N-1)h$ and \eqref{qdetvoa2} the given expression takes the form $$\hspace{1pt} Y_W\left( {\rm qdet}\hspace{1pt} T^+ ((N-1)h)\mathop{\mathrm{\boldsymbol{1}}} , ze^{-(N-1)h}\right)=Y_W\left(\mathop{\mathrm{\boldsymbol{1}}} , ze^{-(N-1)h}\right). $$ Finally, Definition \ref{phimod} implies $Y_W\left(\mathop{\mathrm{\boldsymbol{1}}} , ze^{-(N-1)h}\right)=1$, which completes the proof.
\end{prf} \begin{lem}\label{lemlemE} A topologically free $\mathbb{C}[[h]]$-submodule $W_1$ of $W$ is a $\phi$-coordinated $\overline{\mathcal{V}}_{\hspace{-1pt}c} (\mathfrak{sl}_N)$-submodule of $W$ if and only if $W_1$ is a $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{sl}}_N)$-submodule of $W$. \end{lem} \section{Image of the center of the quantum affine vertex algebra}\label{newsec02} In this section, we briefly discuss a connection between families of central elements for the quantum affine vertex algebra and the quantum affine algebra established by the $\phi$-coordinated module map from the \hyperref[mainthm1]{Main Theorem}. \subsection{Noncritical level}\label{newsec022} Following \cite{JKMY}, we define the {\em center} of the quantum vertex algebra $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N) $ at the level $c\in \mathbb{C}$ as the $\mathbb{C}[[h]]$-submodule $$ \mathfrak{z}(\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)) =\left\{ v\in \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)\,:\, Y(w,z)v\in \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)[[z]]\text{ for all }w\in\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N) \right\}. $$ For more details on the notion of the center of a quantum vertex algebra, see \cite[Thm. 1.4]{DGK} and \cite[Sect. 3.2]{JKMY}. Observe that \eqref{formula} implies the identity \begin{equation}\label{noncrit} Y_W( {\rm qdet}\hspace{1pt} T^+(0)\mathop{\mathrm{\boldsymbol{1}}}, z)= {\rm qdet}\hspace{1pt} \mathcal{L} (z) \end{equation} on any restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$-module $W$ of level $c\in\mathbb{C}$. By \cite[Prop. 3.10]{KM} the coefficients of the quantum determinant $ {\rm qdet}\hspace{1pt} T^+(u)$, as given by \eqref{detkoef}, belong to the center of the quantum vertex algebra $\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$ for any $c\in\mathbb{C}$.
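As a quick illustration of the definition, note that the vacuum vector always lies in the center; this is a standard observation, sketched here under the assumption that the creation property of the quantum vertex algebra holds in its usual form:

```latex
% A minimal example, assuming the usual creation property of the
% quantum vertex algebra, namely that Y(w,z) applied to the vacuum
% vector involves no negative powers of z:
Y(w,z)\mathop{\mathrm{\boldsymbol{1}}}
  \in \overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N)[[z]]
  \quad\text{for all } w\in\overline{\mathcal{V}}_{\hspace{-1pt}c}(\mathfrak{gl}_N),
% so the defining condition of the center holds with v = \boldsymbol{1},
% i.e. \boldsymbol{1} belongs to \mathfrak{z}(\overline{\mathcal{V}}_c(\mathfrak{gl}_N)).
```

The content of the statement above is thus that the quantum determinant produces central elements beyond this trivial one.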
The next proposition, which is well-known, provides a quantum affine algebra counterpart of this fact; cf. \cite{FJMR}. We formulate the proposition and outline its proof in terms of Ding's quantum current realization for completeness. \begin{pro}\label{qdetpro} For any $c\in\mathbb{C}$ all coefficients $d_r$ of the quantum determinant $ {\rm qdet}\hspace{1pt} \mathcal{L} (z)$, as given by \eqref{qdetc}, belong to the center of the quantum affine algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)_c$. \end{pro} \begin{prf} It is sufficient to prove the equality \begin{equation}\label{781} \mathcal{L} (y)\, {\rm qdet}\hspace{1pt}\mathcal{L}(x)= {\rm qdet}\hspace{1pt}\mathcal{L}(x)\, \mathcal{L} (y) \end{equation} in $\mathop{\mathrm{End}}\mathbb{C}^N\otimes {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)_c$. By \eqref{qdet497} the left hand side in \eqref{781} equals \begin{equation}\label{782} {\rm tr}_{1,\ldots ,N} \, \mathcal{L}_0 (y)\hspace{1pt} A^{(N)}\hspace{1pt}\mathcal{L}_{[N]}(x_{[N]}) \hspace{1pt} D_{[N]},\quad\text{where }D_{[N]}= D_1\ldots D_N \end{equation} and the coefficients of the expression under the trace belong to the tensor product $\mathop{\mathrm{End}}\mathbb{C}^N \otimes(\mathop{\mathrm{End}}\mathbb{C}^N)^{\otimes N}\otimes {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)_c$. The copies of $\mathop{\mathrm{End}}\mathbb{C}^N$ in \eqref{782} are labeled by $0,\ldots, N$. The matrix $\mathcal{L}(y)$ is applied on the tensor factor $0$ while the remaining terms, $A^{(N)}$, $\mathcal{L}_{[N]}(x_{[N]})$ and $D_{[N]}$ are applied on the tensor factors $1,\ldots ,N$. 
By $\mathcal{L}_0 (y) A^{(N)}= A^{(N)}\mathcal{L}_0 (y)$ and generalized quantum current commutation relation \eqref{qcgen} we rewrite \eqref{782} as \begin{equation}\label{498} {\rm tr}_{1,\ldots ,N} \, A^{(N)}\left( A\mathop{\hspace{-2pt}\underset{\text{RL}}{\cdot}\hspace{-2pt}}\left(\left(B\hspace{1pt}\mathcal{L}_{[N]}(x_{[N]})\hspace{1pt} C\hspace{1pt} \mathcal{L}_0 (y)\right)E\right)\right) D_{[N]}, \end{equation} where $$ \begin{aligned} &A=D_{[N]}^{-1}\hspace{1pt} R_{1N}^{21}(x_{[N]}e^{-(N+c)h}/y)^{-1}D_{[N]},\quad& &B=R_{1N}^{12}(y/x_{[N]})^{-1},\\ &C=R_{1N}^{12}(ye^{-hc}/x_{[N]}),& &E=R_{1N}^{21}(x_{[N]}/y). \end{aligned} $$ Note that the element $A$ is found via the second crossing symmetry property in \eqref{csym}; see also Remark \ref{csrem}. Next, by using \eqref{fqhqf} and \eqref{497d} one can verify the following equalities: \begin{equation}\label{497b} A^{(N)}\hspace{1pt} Z=\lambda_Z\hspace{1pt} A^{(N)} \text{ for } Z=A,B,C,E\text{ and } \lambda_A=\lambda_B=\lambda_C^{-1}=\lambda_E^{-1}= e^{(N-1)h/2}. \end{equation} Using \eqref{497c} and \eqref{497b} we move the anti-symmetrizer in \eqref{498} to the right, thus getting $$ {\rm tr}_{1,\ldots ,N} \, \cev{\mathcal{L}}_{[N]}(x_{[N]}) \hspace{1pt} \mathcal{L}_0 (y)\hspace{1pt} A^{(N)} D_{[N]} = {\rm tr}_{1,\ldots ,N} \, \cev{ \mathcal{L}}_{[N]}(x_{[N]}) \hspace{1pt} A^{(N)} D_{[N]} \hspace{1pt} \mathcal{L}_0 (y) . $$ Finally, we use \eqref{497c} to move the anti-symmetrizer $A^{(N)}$ to the left, thus getting the right hand side in \eqref{781}, as required. \end{prf} Following \cite[Sect. 3.3]{loop}, we define the {\em submodule of invariants} of the vacuum module $\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$ as the $\mathbb{C}[[h]]$-submodule $$ \mathfrak{z} (\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{gl}_N))=\left\{v\in \mathcal{V}_{\hspace{-1pt}c}(\mathfrak{gl}_N)\,:\, \mathcal{L}(z)v\in\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{gl}_N)[[z]] \right\}. $$ Recall Corollary \ref{maincor}. 
By setting $W=\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$ in \eqref{noncrit} and then applying the resulting equality on $1\in\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$ one recovers the invariants of the vacuum module; cf. \cite{FJMR}. \begin{kor} For any $c\in\mathbb{C}$ all coefficients of the series $$ \overline{\ell}_N (z)\coloneqq Y_{\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{gl}_N)} ( {\rm qdet}\hspace{1pt} T^+(0), z) 1 = {\rm qdet}\hspace{1pt} \mathcal{L} (z)1 \in \mathcal{V}_{\hspace{-1pt}c}(\mathfrak{gl}_N) [[z]] $$ belong to the submodule of invariants $\mathfrak{z} (\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{gl}_N) )$. \end{kor} \begin{prf} The corollary follows by applying identity \eqref{781} on $1\in\mathcal{V}_{\hspace{-1pt}c}(\mathfrak{gl}_N)$. \end{prf} \subsection{Critical level}\label{newsec021} Consider the quantum affine vertex algebra at the critical level $\overline{\mathcal{V}}_{\hspace{-1pt}\text{\rm cri}}(\gl_N)=\overline{\mathcal{V}}_{\hspace{-1pt}-N}(\gl_N)$. The following family of central elements for the quantum vertex algebra $\overline{\mathcal{V}}_{\hspace{-1pt}\text{\rm cri}}(\gl_N)$ was given by Molev and the author \cite[Prop. 3.5]{KM}. \begin{pro} All coefficients of the series $$ \phi_n(u)\coloneqq {\rm tr}_{1,\ldots ,n} \,A^{(n)}\hspace{1pt} T_{[n]}^+(u,u-h,\ldots ,u-(n-1)h) \hspace{1pt} D_1\ldots D_n \mathop{\mathrm{\boldsymbol{1}}} \in\overline{\mathcal{V}}_{\hspace{-1pt}\text{\rm cri}}(\gl_N)[[u]] $$ with $n=1,\ldots ,N$ belong to the center of the quantum vertex algebra $\overline{\mathcal{V}}_{\hspace{-1pt}\text{\rm cri}}(\gl_N) $. \end{pro} Now consider the quantum affine algebra at the critical level $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)_{\text{\rm cri}} = {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)_{-N}$. The next theorem goes back to Frappat, Jing, Molev and Ragoucy \cite[Thm. 3.2]{FJMR}. 
Although it is originally given in terms of the $RLL$ realization of the quantum affine algebra, we formulate the theorem using Ding's quantum current realization. The direct proof in terms of Ding's realization is carried out by arguing as in the proof of \cite[Thm. 2.14]{c11} and using Lemma \ref{profuzion}. \begin{thm} All coefficients of the series $$ \ell_n(z)\coloneqq {\rm tr}_{1,\ldots ,n} \,A^{(n)}\hspace{1pt}\mathcal{L}_{[n]}(z_1,\ldots ,z_n)\big|_{z_1 = z,\ldots, z_n = ze^{-(n-1)h}}\big. \hspace{1pt} D_1\ldots D_n \in {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)_{\text{\rm cri}}[[z^{\pm 1}]] $$ with $n=1,\ldots ,N$ belong to the center of the algebra $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)_{\text{\rm cri}} $. \end{thm} Finally, let $W$ be any restricted $ {\rm U}_{\hspace{-1pt}h\hspace{-1pt}} (\widehat{\mathfrak{gl}}_N)$-module of level $-N$. Then the identities \begin{equation}\label{cen1} Y_W(\phi_n(0), z)=\ell_n (z) \quad\text{for } n=1,\ldots ,N \end{equation} hold for operators on $W$, where the map $Y_W(z)$ is given by \eqref{formula}. Recall Corollary \ref{maincor}. By setting $W=\mathcal{V}_{\hspace{-1pt}\text{\rm cri}}(\mathfrak{gl}_N)$ in \eqref{cen1} and then applying the resulting equality on $1\in\mathcal{V}_{\hspace{-1pt}\text{\rm cri}}(\mathfrak{gl}_N)$ one recovers the invariants of the vacuum module; see \cite[Corollary 3.3]{FJMR}. \begin{kor} All coefficients of the series $$ \overline{\ell}_n (z)\coloneqq Y_{\mathcal{V}_{\hspace{-1pt}\text{\rm cri}}(\mathfrak{gl}_N)} (\phi_n(0), z) 1 = \ell_n (z)1 \in \mathcal{V}_{\hspace{-1pt}\text{\rm cri}}(\mathfrak{gl}_N) [[z]] $$ with $n=1,\ldots ,N$ belong to the submodule of invariants $\mathfrak{z} (\mathcal{V}_{\hspace{-1pt}\text{\rm cri}}(\mathfrak{gl}_N) )$. \end{kor} \section*{Acknowledgement} The author would like to thank Naihuan Jing and Mirko Primc for stimulating discussions. 
The research reported in this paper was finalized during the author's visit to the Max Planck Institute for Mathematics in Bonn. The author is grateful to the Institute for its hospitality and financial support. This work has been supported in part by the Croatian Science Foundation under project 8488.
\section{Introduction} \label{sec:intro} Radial expansion is one of the fundamental characteristics of coronal mass ejections (CMEs), \change{evident both from}{described in early work} using {\it in situ} measurements \citep[]{Klein:1982,Burlaga:1982,Suess:1988}. \add{It is also clearly occurring based on} the fact that CMEs are \add{remotely} imaged as being a fraction of a solar radius wide when they erupt and are on average \add{measured as being} 45 solar radii (0.21 au) when they reach Earth. Associated with this increase in radial size, the magnetic field strength inside the CME decreases \add{as the CME propagates to larger heliocentric distances} \citep[]{Bothmer:1998}. Most of what is known about the increase in radial size and decrease in magnetic field inside magnetic ejecta (MEs) is obtained from statistical studies of {\it in situ} measurements of different MEs at different heliocentric distances. Thus, based on measurements by Helios, ISEE-3, IMP-8, ACE, {\it Wind} and Voyager, using different boundaries and different subsets of CMEs, past studies have found that the radial size of an ME increases as $r^{0.6}$ to $r^{0.9}$ and the magnetic field scales as $r^{-1.4}$ to $r^{-1.9}$ \citep[]{Bothmer:1998,Liu:2005, Leitner:2007,Gulisano:2010}. This was revisited using STEREO, ACE and MESSENGER data for solar cycle 24, yielding a very similar index of radial dependence, $-1.95 \pm 0.19$ \citep[]{Winslow:2015}. This approach provides a measure of the average global expansion and assumes that there is a unique typical behavior of CMEs. Statistical methods would not work well if, for example, fast CMEs always expanded differently from slow ones. 
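The statistical approach described above amounts to estimating the exponent of a power law ($B \propto r^{\alpha}$) from measurements at different heliocentric distances. A minimal sketch of such a fit in Python, using synthetic numbers (the distances and field strengths below are illustrative and are not taken from the cited datasets):

```python
import numpy as np

def fit_power_law_index(r, B):
    """Least-squares estimate of alpha in B = B0 * r**alpha,
    obtained from a linear fit in log-log space."""
    log_r = np.log(np.asarray(r, dtype=float))
    log_B = np.log(np.asarray(B, dtype=float))
    alpha, log_B0 = np.polyfit(log_r, log_B, 1)  # slope is the exponent
    return alpha, np.exp(log_B0)

# Synthetic field strengths following B = 10 nT * (r / 1 au)**(-1.8)
r = np.array([0.35, 0.47, 0.72, 1.0])   # heliocentric distance [au]
B = 10.0 * r**(-1.8)                    # field strength [nT]
alpha, B0 = fit_power_law_index(r, B)
```

With noiseless input the fit recovers the exponent exactly; with real event-to-event scatter, the fitted index and the average of per-event indices can differ, which is the distinction noted later for the events of \citet{Good:2019}.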
A different measure of the global CME expansion can be obtained on a case-by-case basis by tracking CME radial size with heliospheric imagers up to distances of about 0.5~au \citep[]{Savani:2009,Nieves:2012,Lugaz:2012b}, which has revealed an expansion on the lower end of the range from statistical studies, as $r^{0.6}$ to $r^{0.8}$. In a recent work, \citet{AlHaddad:2019} compared the index of decrease of the magnetic field with the index of increase of the ME radial size for two different simulations, finding that the initiation mechanism \add{and the CME propagation speed} do not appear to have a large influence on the ME expansion in the innermost heliosphere. Another measure of CME expansion can be obtained from the direct analysis of {\it in situ} measurements at a given location, as the large majority of MEs have a decreasing speed profile. This is clearly a local measure. Figure~\ref{fig:sketch} shows schematic representations of the various measures of CME expansion. The expansion speed, defined as half the front-to-back speed difference, is found to vary from a few tens of km\,s$^{-1}$ to as much as 250 km\,s$^{-1}$ \citep[]{Burlaga:1982,Farrugia:1993}. \citet{Klein:1982} noted that the expansion speed is on the order of half the ambient Alfv{\'e}n speed, meaning that expansion occurs sub-Alfv{\'e}nically. \citet{Gosling:1994} and \citet{Reisenfeld:2003} presented the observations of several CMEs which were bounded by a forward-reverse shock pair. This shock pair was attributed to the CME expansion becoming super-fast due to high pressure inside the ME. This type of over-expanding structure has only been reported away from the ecliptic with Ulysses observations (at latitudes greater than 22$^\circ$). In a recent study, \citet{Lugaz:2017b} showed that slow CMEs may drive shocks because of their radial expansion in the ecliptic plane, although the expansion remains sub-Alfv{\'e}nic. 
Such shocks may form at distances of 0.2 au or greater, depending on the rate at which the CME expansion speed and Alfv{\'e}n speed decrease \citep[]{Poedts:2016,Lugaz:2017}. A difficulty with studying CME expansion is that the expansion speed is found to depend significantly on the CME size and propagation speed, with larger and faster CMEs having larger expansion speeds \citep[]{Owens:2005,Gulisano:2010}. To solve this problem, researchers have focused on a dimensionless expansion parameter, typically the ratio of the expansion to propagation speed. \citet{Demoulin:2009} and \citet{Gulisano:2010} developed a different formalism, in which a dimensionless expansion parameter, $\zeta$, is defined as follows: \begin{equation}\zeta = \frac{D}{V_c^2} \frac{\Delta V}{\Delta t} \sim \frac{D}{S} \frac{2 V_\mathrm{exp}}{V_c}.\end{equation} Here, $D$ is the heliospheric distance where the measurements are made, $S = V_c \Delta t$ is the CME size, $V_\mathrm{exp}$ and $V_c$ are the CME expansion and center speeds, respectively, and $\frac{\Delta V}{\Delta t}$ is the slope of the CME velocity time profile. This dimensionless parameter scales as $V_c^{-2}$, taking into consideration that faster and wider CMEs have higher expansion speed. Based on measurements in the inner heliosphere for several dozen isolated CMEs, the authors found that $\zeta$ clusters around 0.8 \citep[]{Demoulin:2010b}. From a theoretical analysis, \citet{Gulisano:2010} argued that this local measure should represent the global expansion of CMEs with the CME size growing as $r^\zeta$ and the magnetic field strength decreasing as $r^{-2\zeta}$. Note that the formula uses the slope of the velocity profile $\frac{\Delta V}{\Delta t}$, which is equivalent to using the expansion speed only for those cases where the velocity can be fitted linearly for the entire ME duration. 
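Equation (1) can be evaluated directly from a measured speed profile. A minimal sketch in Python (the profile below is synthetic; the sign convention is chosen so that $\zeta$ is positive for an expanding ME with a decreasing speed profile, consistent with $\Delta V = 2V_\mathrm{exp} > 0$ above):

```python
import numpy as np

AU_KM = 1.495978707e8  # km per astronomical unit

def zeta_fit(t_s, v_kms, D_au):
    """Dimensionless expansion parameter zeta = (D / Vc**2) * dV/dt,
    with dV/dt from a linear fit to the speed profile and the sign
    chosen so that zeta > 0 for a decreasing (expanding) profile."""
    slope = np.polyfit(t_s, v_kms, 1)[0]   # km/s^2, negative for expansion
    v_c = 0.5 * (v_kms[0] + v_kms[-1])     # center speed from front/back values
    return -slope * D_au * AU_KM / v_c**2

def zeta_mes(D_au, S_au, v_exp_kms, v_c_kms):
    """Same parameter from the measured expansion speed: (D/S) * 2 Vexp / Vc."""
    return (D_au / S_au) * 2.0 * v_exp_kms / v_c_kms

# Synthetic ME: speed falling linearly from 500 to 400 km/s over 20 h at 1 au
t = np.linspace(0.0, 20.0 * 3600.0, 200)
v = 500.0 - 100.0 * t / t[-1]
z = zeta_fit(t, v, 1.0)
```

For this synthetic profile the two forms agree, since the velocity is exactly linear over the whole duration; the distinction between $\zeta_{fit}$ and $\zeta_{mes}$ only matters when it is not.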
The physical cause of CME expansion is still a matter of debate, although it is generally agreed that it is associated with pressure balance or imbalance between the ME and the solar wind. It has been proposed that CME expansion is associated with the pressure imbalance between the high pressure of the magnetically dominated ME and the lower pressure in the solar wind \citep[]{Klein:1982}. In that sense, CME expansion is associated with over-pressure. A somewhat different explanation is that CME expansion is related to the pressure balance between the ME and the solar wind, {\it i.e.} between the ME magnetic pressure and the solar wind dynamic pressure. The fact that the solar wind pressure decreases with heliospheric distance then implies that CMEs keep on expanding as they propagate outward \citep[]{Demoulin:2009,Gulisano:2010}. Lastly, \citet{Suess:1988} argued that measurements of a decreasing speed profile inside MEs are associated with magnetic tension and the necessary plasma motion to maintain a force-free state of the ME. Very few studies have investigated CME size or expansion from multiple {\it in situ} measurements in near-conjunction for more than one CME event. The exceptions are the study of \citet{Leitner:2007}, which focused on 7 CMEs measured in conjunction (4 with measurements below 1~au), the recent study by \citet{Good:2019}, which focuses on 18 events, and the study by \citet{Vrsnak:2019}, which focuses on 11 events during the cruise phase of MESSENGER and VEX. In particular, \citet{Good:2019} found a significant difference between the power-law obtained from performing a fit of the maximum magnetic field with distance ($-1.76 \pm 0.04$) as compared to the average of the power-law indices of these 18 events ($-1.34 \pm 0.71$). The same type of result was obtained for the larger statistics of \citet{Salman:2020}. Here, we further dive into these datasets to investigate ME expansion. 
\add{We note that near-conjunction is often taken quite loosely, as has also been done here. Angular separations for spacecraft considered in near-conjunction in these studies typically range from 1--20$^\circ$, with a few cases up to 30$^\circ$; the average angular separation is $\sim 5^\circ$ in the study of} \citet{Good:2019} \add{and $\sim 16^\circ$ for the dataset of} \citet{Salman:2020}. To learn more about CME expansion, it is essential to compare its local measures (the $\zeta$ parameter, the expansion speed, etc.) with global ones (how much the CME size and magnetic field change with distance). For example, the use of the dimensionless index of \citet{Demoulin:2009} is meant to take into consideration the fact that fast and wide CMEs may have a large front-to-back speed difference without having a large expansion {\it per se}. However, this raises the question of the cause of the large size of these CMEs. Is it related to their expansion earlier on or to a large size near the Sun? Performing such a study has not been possible until now because it requires the investigation of CME expansion in both its global and local forms on a case-by-case basis for enough events to compare with past statistical studies. Here, we take advantage of the numerous CME events measured in conjunction between two spacecraft in the inner heliosphere as recently presented in \citet{Salman:2020} using data from MESSENGER, Venus Express (VEX), {\it Wind} and STEREO. In section~\ref{sec:data}, we quickly summarize our data and procedure. In section~\ref{sec:results}, we compare the different measures of CME expansion with each other and with other related CME properties. In section~\ref{sec:discuss}, we discuss and conclude. \begin{figure}[tb] \centering {\includegraphics*[width=.98\linewidth]{Figure1.png}} \caption{Schematic representation and definitions of the global and local measures of ME expansion. 
The idealized ME cross-section and associated magnetic field measurements are shown at two locations at different heliocentric distances. Comparing measurements from these two locations defines the global expansion. At the second spacecraft, measurements of the plasma velocity allow us to derive various measures of the local expansion.} \label{fig:sketch} \end{figure} \section{Data and Methods} \label{sec:data} \citet{Salman:2020} presented 47 two-spacecraft conjunction measurements of CMEs over the first half of solar cycle 24, from 2008 to 2014, for spacecraft longitudinal separations of less than 35$^\circ$, with 8 events measured at less than 5$^\circ$ separations and 20 at less than 15$^\circ$ separations. Five events were conjunctions between Venus Express (VEX) and MESSENGER, 18 conjunction events occurred between MESSENGER and a spacecraft near 1~au ({\it Wind}, STEREO-A or STEREO-B), and 24 between VEX and a spacecraft near 1~au. Since STEREO and {\it Wind} have plasma instruments, we have {\it in situ} measurements of the CME speed near 1~au for these 42 CMEs in addition to magnetic field measurements at two different distances. For the five conjunction events between MESSENGER and VEX, we do not have any plasma measurements. Our analysis thereafter focuses on the 42 events with plasma measurements near 1~au. Because Mercury's heliocentric distance (and therefore MESSENGER's) varies between 0.31 and 0.47~au, whereas Venus stays at 0.72--0.73~au, we have measurements over distances varying from a factor of 1.3 (Venus to STEREO-A) to a factor of 3.2 (Mercury at perihelion to STEREO-B). The magnetic field decrease with heliospheric distance for this dataset is presented by \citet{Salman:2020}, who found a decrease of the maximum field, $B_{\max}$, inside the ME with an index of $-1.91 \pm 0.25$. 
Although most events have gaps in measurements corresponding to the time when MESSENGER or VEX is inside its planetary magnetosphere, we can do the same study with the average magnetic field, $B_{av}$, for which we find an index of $-1.87 \pm 0.32$, excluding the 5 MESSENGER-VEX conjunctions. For each conjunction event, we also calculate the quantity $\alpha_B$ \citep[see for example][]{Dumbovic:2018}: $$\alpha_B = \frac{\log \left( {B_2/B_1} \right) }{\log \left( {r_2/r_1} \right) },$$ where indices 1 and 2 correspond to the first \add{(closer to the Sun)} and second \add{(further away from the Sun)} spacecraft, respectively. We do so for the maximum magnetic field ($\alpha_{Bmax}$) and the average magnetic field ($\alpha_{Bav}$), as well as for the maximum value of the tangential ($T$) and normal ($N$) magnetic field components inside the ME ($\alpha_{BT}$ and $\alpha_{BN}$). In addition, near 1~au, we derive local measures of the ME radial expansion: 1) the dimensionless expansion parameter $\zeta_{fit}$ using the procedure of \citet{Gulisano:2010}, {\it i.e.} by performing a linear fit on the velocity data to derive $\Delta V /\Delta t$. We also calculate 2) the expansion speed $V_\mathrm{exp}$, 3) the ratio of the expansion speed to the CME speed $V_\mathrm{exp}/V_\mathrm{center}$, and 4) $\zeta_{mes}$ using the measured value of $\Delta V = 2 V_\mathrm{exp}$ in equation (1) rather than the fit to the velocity data. We also use or derive associated CME properties: its initial speed from coronagraph observations \citep[as listed in][]{Salman:2020}, its size near 1~au (using the average CME speed), and the maximum and average magnetic field inside the ME. \begin{figure}[tb] \centering {\includegraphics*[width=.58\linewidth]{STEREO_A.png}} {\includegraphics*[width=.4\linewidth]{MESSENGER.png}} \caption{2013 May 1--4 CME measured at STEREO-A ({\it left}) and MESSENGER ({\it right}). A linear trend in the velocity can be found in about the first 30\% of the ME at STEREO-A. 
This is used to derive $\zeta_\mathrm{fit}$. The maximum and average of the magnetic field magnitude as well as the maximum and minimum of $B_T$ and $B_N$ are used to derive the various decrease exponents $\alpha$. The red line marks the shock arrival at STEREO-A and MESSENGER, the blue lines mark the ME boundaries, with dashed lines used when the boundary's location is not certain. For the MESSENGER data, the dashed red line shows a shock propagating inside the ME.} \label{fig:example} \end{figure} \section{Results: Specific Events} \label{sec:results_spec} Most of the best conjunction events have been studied in detail in previous work. Here, we present one additional event to illustrate our technique and summarize results for three previously published events. The four events we highlight are among the eight best conjunctions (separations of less than 5$^\circ$) with data near 1~au. The results of the analysis described below for these four events are listed in Table~\ref{tab:example}. \subsection{2013 May 1--4 CME: MESSENGER--STEREO-A conjunction} The May 1--4, 2013 CME event \citep[event 14-2013 in][]{Salman:2020} is a conjunction between MESSENGER (at 0.36~au) and STEREO-A (at 0.96~au) when the longitudinal separation between the two spacecraft was only $\sim 2.9^\circ$. The measurements at MESSENGER and STEREO-A are shown in Figure~\ref{fig:example}. This was a moderately fast event, with a coronagraphic speed of 700~km\,s$^{-1}$ and a maximum ME speed near 1~au of 570~km\,s$^{-1}$. \add{In most cases, solar wind plasma measurements are not available at Mercury with MESSENGER}. The maximum ME magnetic field of 132 nT at MESSENGER and 22 nT at STEREO-A results in a value of $\alpha_{Bmax} = -1.84$, which is relatively typical. The exponent for the average magnetic field is very similar at $\alpha_{Bav} = -1.89$. The speed profile at 1~au is complex, and we consider that a linear trend in the velocity can only be found for the front 30\% of the ME. 
Using this limited period, a linear fit to the velocity profile implies that $\zeta_{fit} = 1.7$. The expansion speed as measured from maximum to minimum is 62~km\,s$^{-1}$, and the ME center speed is about 485~km\,s$^{-1}$. Using the measured $V_\mathrm{exp}$, we can derive $\zeta_{mes} = 0.61$. The ratio of expansion to center speeds is $\sim 13$\%. From this event, we can already see a disagreement between the global decrease in the magnetic field strength and the local measure, especially using the procedure of \citet{Gulisano:2010}. However, one can also see that the peak in the magnetic field at MESSENGER occurs in what appears to be a discontinuity or fast forward shock near the back of the ME, whereas there is no such signature at STEREO-A. Excluding this period, we find that $B_{\max} = 102$~nT and $\alpha_B = -1.57$. The presence of fast-forward shocks at the back of MEs was discussed in \citet{Lugaz:2015a}. \add{Such a shock/discontinuity was not observed at STEREO-A. Based on past work, this raises two possibilities: i) the shock fully propagated through the ME before the ME impacted STEREO-A. In that case, the period of compression by the shock is expected to be followed by a period of over-expansion} \citep[]{Gulisano:2010,Lugaz:2012b}. \add{Depending on the timing of this exit, the ME global and local measures of expansion may be affected. ii) The shock dissipated as it propagated inside the ME} \citep[]{Farrugia:2004,Lugaz:2007} \add{and only the back half was affected}. \add{In both cases, the back half of the ME may have been compressed, resulting in the flat velocity profile in the back measured near 1~au. At MESSENGER, there is no clear driver for this shock as the magnetic field strength goes back to normal values a few hours after the shock. As such, it is unlikely that the ME measured near 1~au is the result of the merging of two CMEs}. 
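The exponents quoted for this event follow directly from the two-point definition of $\alpha_B$ given in Section~\ref{sec:data}; a short Python check using the field strengths and heliocentric distances quoted above:

```python
import math

def alpha_b(B1, B2, r1, r2):
    """Two-point power-law exponent: alpha_B = log(B2/B1) / log(r2/r1)."""
    return math.log(B2 / B1) / math.log(r2 / r1)

# 2013 May 1--4 event: B_max = 132 nT at MESSENGER (0.36 au),
# 22 nT at STEREO-A (0.96 au)
a_full = alpha_b(132.0, 22.0, 0.36, 0.96)   # about -1.83
# Excluding the shocked interval at MESSENGER (B_max = 102 nT)
a_trim = alpha_b(102.0, 22.0, 0.36, 0.96)   # about -1.56
```

The small offsets from the quoted $-1.84$ and $-1.57$ reflect rounding of the input values.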
The decreases of the tangential and normal magnetic field components for the front of the ME (positive values) are $\alpha_\mathrm{Tfront} = -1.40$ and $\alpha_\mathrm{Nfront} = -1.68$. The corresponding values for the back (negative values), for which the peak occurs after the shock at Mercury, are $\alpha_\mathrm{Tback} = -1.41$ and $\alpha_\mathrm{Nback} = -2.29$. This shows that the normal (north-south) component of the magnetic field decreased a bit faster than the tangential (east-west) component, but also highlights how this detailed analysis may be affected by the presence of shocks and ``data gaps'' in MESSENGER measurements associated with magnetospheric crossings. \subsection{Other Events} \citet{Good:2015} and \citet{Salman:2020} presented a different conjunction that occurred on November 4--8, 2011 (event 8-2011) with an initial speed of 750~km\,s$^{-1}$ and a maximum ME speed near 1~au of 440~km\,s$^{-1}$. Although a different section of the same event also impacted Venus, the best conjunction is between MESSENGER and STEREO-B ($\sim 4.8^\circ$ longitudinal separation). For this event, $\alpha_{Bmax} = -1.93$ and $\alpha_{Bav} = -1.80$, but there is a large expansion speed at 1~au of $\sim 85$~km\,s$^{-1}$ \begin{table}[h] \caption{Examples from past studies and Figure~\ref{fig:example}. Results with a $^*$ indicate cases for which the peak is likely to have occurred during a magnetospheric pass of MESSENGER and was therefore likely missed. 
Values in parentheses for $B_T$ or $B_N$ correspond to the decrease for that component of the magnetic field in the back half of the ME.} \centering \small \addtolength{\tabcolsep}{-2pt} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Event & Sep.\ & $V_\mathrm{init}$ & $\alpha_{Bmax}$ & $\alpha_{Bav}$ & $\alpha_{BT}$ & $\alpha_{BN}$ & $\zeta_\mathrm{fit}$ & $\zeta_\mathrm{mes}$ & $V_\mathrm{exp}/V_\mathrm{c}$\\ \hline 8-2011 & 4.8$^\circ$ & 950\,km\,s$^{-1}$& $-1.9$ & $-1.8$ & $-1.8$ ($-1.5$) & $-2.4$ ($-1.5$)& 1.5 & 0.95 & 0.18 \\ 9-2011 & 4.6$^\circ$ & 760\,km\,s$^{-1}$& $-2.0$ & $-1.6$ & $-2.0$ ($-2.1$) & $-1.8$ ($-2.2$)& 0.19 & 0.15 & 0.01 \\ 14-2013 & 2.9$^\circ$ & 700\,km\,s$^{-1}$& $-1.8$ & $-1.9$ & $-1.4$ ($-1.4$)& $-1.7$ ($-2.3$) & 1.7 & 0.61 & 0.13\\ 21-2013 & 3.1$^\circ$ & 700\,km\,s$^{-1}$& $-1.6$ & $-1.4$ & $-1.2^*$ & $-1.4$ ($-1.1$) & 0.67 & 0.51 & 0.10\\ \hline \end{tabular} \label{tab:example} \end{table} corresponding to $\zeta_{fit} = 1.5$, $\zeta_{mes} = 0.95$ and a ratio of the expansion to the center ME speed of 18\%. Once again, local and global measures of expansion disagree. \citet{Winslow:2016} presented a complex conjunction event (event 9-2011) between MESSENGER (at 0.42~au) and STEREO-A ($\sim 4.6^\circ$ longitudinal separation) on December 30, 2011 -- January 1, 2012 with an initial speed of 950~km\,s$^{-1}$ and a maximum ME speed near 1~au of 630~km\,s$^{-1}$. For this event, $\alpha_{Bmax} = -1.99$ and $\alpha_{Bav} = -1.63$, whereas measurements near 1~au show a nearly flat velocity profile with an expansion speed of only 10~km\,s$^{-1}$, $\zeta_{fit} = 0.19$ and $\zeta_{mes} = 0.15$. In this case, the decrease of the magnetic field inside the ME with distance is typical, but the bulk speed profile at 1 AU indicates a lack of expansion near 1 AU. This case is somewhat unusual because of the complex interaction with the heliospheric current sheet that is found to be engulfed inside the ME at 1 AU. 
The ratio of expansion to center speeds is of the order of 1\%. Another event (event 21-2013) was recently discussed in \citet{Lugaz:2020} for a conjunction between MESSENGER and L1 ($\sim 3.1^\circ$ longitudinal separation) on July 11--14, 2013 with an initial speed of 600~km\,s$^{-1}$ and a maximum ME speed near 1~au of 500~km\,s$^{-1}$. \begin{table}[h] \caption{Average values and 1-$\sigma$ standard deviations obtained in this study and comparison to past studies. The first four quantities are obtained in our study by measuring the magnetic field at two spacecraft in conjunction, while for the past studies, these are typically from fits to different MEs measured at different heliocentric distances. The seven other quantities are obtained from measurements near 1~au. F05: \citet{Farrugia:2005}, L07: \citet{Leitner:2007}, W15: \citet{Winslow:2015}, G19: \citet{Good:2019}, W05: \citet{WangC:2005}, L05: \citet{Liu:2005}, G10: \citet{Gulisano:2010}, D08: \citet{Demoulin:2008}, RC10: \citet{Richardson:2010}, J18: \citet{Jian:2018}, NC18: \citet{Nieves:2018} and L18: \citet{Lepping:2018}. The data source in the inner heliosphere is indicated in parentheses (H: Helios, P: Pioneer Venus Orbiter, M: MESSENGER, V: {\it Venus Express}). 
For $\alpha_{Bmax}$, we list two values for G19, the first one using the same procedure as done here but for 13 events, and the second one using a fitting procedure.} \centering \small \addtolength{\tabcolsep}{-2pt} \begin{tabular}{|c|c|c|c|} \hline Quantity & Average $\pm$ $\sigma$ & Past Results & Source\\ \hline $\alpha_{Bmax}$ & $-1.81$ $\pm$ 0.84 & -1.73, -1.64 $\pm$ 0.40, -1.89 $\pm$ 0.14(3-$\sigma$), & F05(H), L07(H,P), W15(M)\\ & & -1.34 $\pm$ 0.71, -1.76 $\pm$ 0.04(3-$\sigma$) & G19(M,V)\\ \hline $\alpha_{Bav}$ & $-1.91$ $\pm$ 0.85 & -1.38, -1.52, -1.4 $\pm$ 0.08, & F05(H), W05(H,P), L05(H)\\ & & -1.85 $\pm$ 0.07, -1.95 $\pm$ 0.19(3-$\sigma$) & G10(H), W15(M) \\ \hline $\alpha_{BT}$ & $-1.71$ $\pm$ 0.67 & & \\ \hline $\alpha_{BN}$ & $-1.76$ $\pm$ 0.65 & & \\ \hline \hline $\zeta_\mathrm{fit}$ & 0.95 $\pm$ 1.05 & 0.81 $\pm$ 0.19, 0.7 $\pm$ 0.61 & D08, G10(H) \\ \hline $\zeta_\mathrm{mes}$ & 0.43 $\pm$ 0.52 & 0.45 & RC10 \\ \hline $V_\mathrm{exp}$ (km\,s$^{-1}$) & 32 $\pm$ 42 & 31 $\pm$ 3, 62 $\pm$ 3, 28 & RC10, J18, NC18 \\ \hline $V_\mathrm{center}$ (km\,s$^{-1}$) & 449 $\pm$ 131 & 476 $\pm$ 6, 445, 434, 436 & RC10, J18, NC18, L18 \\ \hline $V_\mathrm{exp}/V_\mathrm{center}$ & 0.066 $\pm$ 0.085 & & \\ \hline $S_\mathrm{ME}$ (AU) & 0.29 $\pm$ 0.14 & 0.33 $\pm$ 0.01, 0.22 $\pm$ 0.11 & RC10, L18 \\ \hline $\langle B_\mathrm{ME}\rangle$ (nT) & 10.8 $\pm$ 4.4 & 10.1 $\pm$ 0.3, 11 & RC10, NC18 \\ \hline \end{tabular} \label{tab:stats} \end{table} This long-duration event is found to have $\alpha_{Bmax} = -1.58$, $\alpha_{Bav} = -1.38$ and $\zeta_{fit} = 0.67$ for an expansion speed of about 50~km\,s$^{-1}$, corresponding to $\zeta_{mes} = 0.51$ and $V_\mathrm{exp}/V_\mathrm{center} \sim 10$\%. With the exception of the case highlighted in \citet{Lugaz:2020}, these four examples highlight the following: even for conjunctions between two spacecraft with longitudinal separations of less than 5$^\circ$, the local and global measures of expansion do not necessarily agree. 
In the following section, we look at overall results for all CMEs and compare the various measures of expansion with each other. \section{Results: Statistics}\label{sec:results} \subsection{Average Values} Table~\ref{tab:stats} shows the statistics of the values of the different measures of CME expansion and other CME properties, as well as comparison to previous studies in the inner heliosphere (excluding other studies with Ulysses or Voyager data past 2~au). Throughout, we give the 1-$\sigma$ standard deviation as an error bar when quoting a value. For $\alpha_{B\max}$, the average is $-1.81 \pm 0.84$ for the full dataset but $-1.67 \pm 0.43$ for the conjunctions between MESSENGER and 1~au and $-1.93 \pm 1.06$ for the conjunctions between VEX and 1~au. In all cases, this is a similar average as compared to the results from statistical studies but with a much larger standard deviation; for example, \citet{Winslow:2015} found a 3-$\sigma$ value of $\pm 0.19$. Combining all past studies, a range for the decrease exponent of $B_{\max}$ can be obtained as $-1.75 \pm 0.4$. Only 20 out of the 42 events studied here are within this range. For $\alpha_{Bav}$, we find an average of $-1.96 \pm 0.90$. The average value for $\zeta$ is $0.86 \pm 0.83$, comparable to past studies. We note that we are able to identify a linear trend in the velocity profile for 82\% $\pm$ 22\% of the ME duration (for 31 events, a trend is identified for more than 60\% of the ME duration). In fact, the event highlighted in Figure~\ref{fig:example} is the one for which the linear trend is the least clear. The average expansion speed is 32 $\pm$ 44~km\,s$^{-1}$ (or 39 $\pm$ 35~km\,s$^{-1}$ if excluding three contracting events) and that of the dimensionless expansion is $0.067 \pm 0.09$ ($0.083 \pm 0.06$ if excluding these three events). The average size of the MEs near 1~au is $0.29 \pm 0.14$~au, comparable to that found from all {\it ACE} MEs by \citet{Richardson:2010}. 
That number is larger than the canonical 0.21~au from \citet{Lepping:2018}, but the latter is obtained from a force-free fit to the data, whereas our number and that by \citet{Richardson:2010} are simply derived by integrating the solar wind speed with time during the ME passage. For all quantities, the average values are within the typical ranges from past studies, highlighting that our dataset is not biased. \begin{figure}[tb] \centering {\includegraphics*[width=.48\linewidth]{alpha_zeta.png}} {\includegraphics*[width=.48\linewidth]{alpha_zeta_MES.png}} {\includegraphics*[width=.48\linewidth]{alpha_Vexp.png}} {\includegraphics*[width=.48\linewidth]{alpha_Vexp_divVc.png}} \caption{Global ($y$-axis) vs.\ local ($x$-axis) measures of CME expansion. The top panels show the index decrease of the magnetic field, $\alpha$, as compared to the dimensionless expansion parameter near 1~au, $\zeta$. The line shows the expected relation $\alpha = -2\zeta$. The bottom panels show $\alpha$ as compared to the ME expansion speed near 1~au (left) and the ratio of ME expansion to center speeds (right). The thin line shows the linear relation for the best fit: $-\alpha_{Bav} = 1.66 + 0.0075 V_\mathrm{exp}$. All data points are color-coded with the angular separation between the two spacecraft, with the scale in $^\circ$ given on the right-hand colorbar.} \label{fig:stats_zeta} \end{figure} \subsection{Comparison of Local and Global Measures of CME Expansion} In the top panels of Figure~\ref{fig:stats_zeta}, we show plots of $\alpha_{Bmax}$ and $\alpha_{Bav}$ as compared to $\zeta_\mathrm{fit}$ and $\zeta_\mathrm{mes}$, as well as the values when the first spacecraft is MESSENGER rather than VEX, since the $\alpha$ values have less variability when the former is the first spacecraft rather than the latter. The symbols are color-coded with the spacecraft angular separation and the top panels show the line $\alpha = -2\zeta$, which is the expected trend. 
The data are uncorrelated for $\zeta_\mathrm{fit}$ (obtained by fitting the slope of the velocity profile), while there is a very weak correlation with $\zeta_\mathrm{mes}$ (calculated using the measured expansion speed), with the highest correlation coefficient, $r = 0.3$, found for the average magnetic field. In the bottom panels of Figure~\ref{fig:stats_zeta}, $\alpha_{Bmax}$ and $\alpha_{Bav}$ are compared to $V_\mathrm{exp}$ and $V_\mathrm{exp}/V_\mathrm{center}$. This again compares global quantities of CME expansion (on the $y$-axis) to local quantities of CME expansion near 1~au (on the $x$-axis). The largest correlation coefficient, $r = 0.378$, is found between the $\alpha$ index for the average magnetic field and the expansion speed. Other correlation coefficients are below 0.25. From these plots and the correlation values, it is clear that, irrespective of the exact quantities being compared, local and global measures of CME expansion are at best weakly related. In particular, even for the smaller angular separations and large radial separations (the MESSENGER plot), small and large values of $\alpha$ are associated with typical values of $\zeta$ around its average of 0.7. \subsection{Correlation of Global Measures of CME Expansion with Other CME Properties} We extend the analysis of local and global measures of expansion to determine whether other CME properties are correlated with CME global expansion. We focus on the CME initial speed, obtained from the best-observing coronagraph, as explained in \citet{Salman:2020}, the CME final speed measured near 1~au, as well as the CME magnetic field strength as measured at various distances. Figures~\ref{fig:stats_V} and \ref{fig:stats_B} show the results for $\alpha$ as compared to the ME velocity and magnetic field, respectively, in the same format as Figure~\ref{fig:stats_zeta}.
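The $r$ values quoted here and below are standard Pearson correlation coefficients between paired, event-level quantities; a minimal sketch of the computation, using made-up values rather than the actual dataset, is:

```python
import numpy as np

# Made-up paired values of the global measure (alpha_Bav) and a local one
# (the expansion speed near 1 au, in km/s) for six illustrative events.
alpha_bav = np.array([-1.2, -1.8, -2.1, -1.5, -2.6, -1.9])
v_exp = np.array([20.0, 35.0, 60.0, 25.0, 70.0, 40.0])

# Pearson correlation coefficient between -alpha_Bav and V_exp.
r = np.corrcoef(-alpha_bav, v_exp)[0, 1]
print(round(r, 3))
```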
\begin{figure}[tb] \centering {\includegraphics*[width=.48\linewidth]{alpha_Vinit.png}} {\includegraphics*[width=.48\linewidth]{alpha_Vfinal.png}} \caption{Global measures of CME expansion ($y$-axis) vs.\ CME speed. The left panel shows the index decrease of the magnetic field, $\alpha$, as compared to the initial plane-of-sky coronagraphic speed. The thin line shows the linear relation for the best fit (in a log-linear plot): $-\alpha_{Bav} = 1.40 + 0.0007 V_\mathrm{init}$. The right panel shows $\alpha_{Bav}$ as compared to the final front and center ME speeds near 1~au. The thin line shows the linear relation for the best fit: $-\alpha_{Bav} = 0.81 + 0.0022 V_\mathrm{front}$. All data points are color-coded with the angular separation between the two spacecraft, with the scale in $^\circ$ given on the right-hand colorbar.} \label{fig:stats_V} \end{figure} The ME expansion is only weakly correlated with the CME initial speed, with faster CMEs expanding more rapidly in the inner heliosphere. This correlation remains present near 1~au for $\alpha_{Bav}$ as compared to the CME front and center speeds. It is only a weak correlation, but it reflects that faster CMEs do expand more strongly in a statistical sense. We note that the dimensionless analysis of \citet{Dasso:2009} and \citet{Gulisano:2010} results in $\zeta$ being approximately independent of the CME speed, but here we find a weak correlation between $\alpha_{Bav}$ and the CME speed. \add{It is possible that a stronger correlation would exist if the speed was measured at the first spacecraft or if the expansion was calculated for distances closer to the Sun, where expansion may be more related to the initial characteristics of the CME}.
\begin{figure}[tb] \centering {\includegraphics*[width=.48\linewidth]{alpha_Bfinal.png}} {\includegraphics*[width=.48\linewidth]{alpha_Binit.png}} {\includegraphics*[width=.48\linewidth]{alpha_B_VEX.png}} {\includegraphics*[width=.48\linewidth]{alpha_B_MES.png}} \caption{Global measures of CME expansion ($y$-axis) vs.\ ME magnetic fields measured or scaled to various distances. The top left panel shows the index decrease of the magnetic field, $\alpha$, as compared to the ME magnetic field strength measured near 1~au. The top right panel shows $\alpha$ as compared to the ME magnetic field strength measured by the spacecraft closest to the Sun (VEX or MESSENGER) and scaled to 0.308~au (see text for details). The bottom panels show $\alpha$ as compared to the ME magnetic field measured by VEX (left) and measured by MESSENGER and scaled to 0.308~au (right). The colorbars are the same as in Figures 3 and 4.} \label{fig:stats_B} \end{figure} We then direct our attention to the correlation between $\alpha$ and the magnetic field inside the ME at various distances. When we compare the $\alpha$ parameter with magnetic field measurements, we only compute the correlation of $\alpha_{Bav}$ with $B_\mathrm{max}$ and of $\alpha_{Bmax}$ with $B_\mathrm{av}$. This way, the values of the magnetic field used to calculate $\alpha$ are not compared with the same values measured at various locations. We note, however, that $B_\mathrm{max}$ and $B_\mathrm{av}$ are very well correlated (correlation coefficient $\sim 0.81$ near 0.72~au and near 1~au), so this may affect the results. As shown in the top left panel of Figure~\ref{fig:stats_B}, we find no correlation of $\alpha$ with the magnetic field (average or maximum) measured near 1~au, with a correlation coefficient below 0.15, whether or not it is corrected for the difference in heliocentric distance between the various spacecraft (see below for details).
However, we find a much stronger correlation with the magnetic field measured by Venus Express, with a correlation coefficient of 0.62--0.65 (bottom left panel). Correlating the magnetic field measured by MESSENGER with the CME expansion is not straightforward, because MESSENGER's heliocentric distance in our sample varies between 0.308 and 0.466~au. For a typical decrease of the magnetic field as $r^{-1.75}$, this means that the magnetic field would decrease by more than a factor of 2 between these two distances. In comparison, VEX is always between 0.72 and 0.73~au, and the variation in magnetic field strength between STEREO-A at 0.96~au and STEREO-B at 1.09~au is only a factor of 1.25. To correct for the variation in the heliocentric distance of MESSENGER, we scale all measurements to 0.308~au (the distance of the innermost measurement) using the $\alpha$ value obtained for each particular CME. The results show a strong correlation (bottom right panel of Figure~\ref{fig:stats_B}). It should be noted that we use (for example) the value of $\alpha_{Bav}$ obtained for a particular CME to scale the value of $B_\mathrm{av}$ measured for this CME by MESSENGER to 0.308~au and compare it with $\alpha_{Bmax}$. As such, we use fully separated measurements to determine the correlation. Lastly, we scale all VEX and MESSENGER measurements to 0.308~au and obtain very significant correlations between the scaled value of $B$ in the inner heliosphere and the expansion index $\alpha$ (top right panel of Figure~\ref{fig:stats_B}). We interpret these results as follows: in the innermost heliosphere, there is a clear positive correlation between the ME maximum magnetic field and the expansion index, {\it i.e.}, MEs with higher internal magnetic pressure in the innermost heliosphere expand more on their way to 1~au. However, near 1~au, there is no relation between the internal magnetic pressure and how much expansion occurred.
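The scaling step described above amounts to projecting each measurement along the event's own power law; a small sketch (with a hypothetical measurement, and assuming $B(r) \propto r^{\alpha}$) is:

```python
def scale_field(b_measured, r_measured, alpha, r_target=0.308):
    """Scale a field measurement to r_target (in au),
    assuming B(r) proportional to r**alpha."""
    return b_measured * (r_target / r_measured) ** alpha

# Hypothetical MESSENGER measurement: 45 nT at 0.43 au, with the event's
# own fitted index alpha = -1.8.
print(round(scale_field(45.0, 0.43, -1.8), 1))  # 82.0 nT at 0.308 au
```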
In addition, the range of ME average magnetic fields is narrower near 1~au than near 0.72~au (at VEX) and at MESSENGER. In our sample, the average ME magnetic field at 1~au is 10.3~nT~$\pm~33\%$ (with STEREO measurements scaled to 1~au); at VEX, it is 18.6~nT~$\pm~36\%$; and at MESSENGER, it is 87~nT~$\pm~42\%$ scaled to 0.308~au (62~nT~$\pm~53\%$ without scaling). The percentages indicate the standard deviation divided by the average. A similar reduction of the standard deviation was found by \citet{Janvier:2019}. For our sample, the reduction in the standard deviation still holds if we divide the larger sample of CMEs measured near 1~au into two subsamples (those in conjunction with VEX and those in conjunction with MESSENGER). As such, the faster expansion of more magnetized MEs on their way to 1~au has the effect of uniformizing (reducing the variance of) the ME magnetic field strength at 1~au. A 16~nT ME at 1~au may be the result of an ME with a maximum magnetic field of 177~nT at 0.308~au with a fast expansion, or of one with an 85~nT magnetic field at 0.308~au with a slower expansion. These correspond to events 22-2012 and 23-2014, both in February 2014. This is a situation similar to the drag experienced by fast CMEs being higher than that experienced by slower CMEs, which results in a uniformization of the speed at 1~au and a loss of information about the initial speed. While we expect the solar wind quantities to influence the CME expansion, the values measured at 1~au do not allow us to draw a conclusion regarding this. Overall, this points towards expansion being influenced by the magnetic pressure inside the ME, and therefore, some of the expansion being associated with an ``overpressure'' of the ME as compared to the background. We also find no correlation between the expansion index and the CME size near 1~au.
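The ``loss of information'' argument can be made concrete by solving for the index that connects each of the two quoted inner-heliosphere field strengths (177 and 85~nT at 0.308~au) to the same 16~nT value at 1~au:

```python
import math

def required_index(b_inner, b_outer, r_inner=0.308, r_outer=1.0):
    """Index alpha such that b_outer = b_inner * (r_outer / r_inner)**alpha."""
    return math.log(b_outer / b_inner) / math.log(r_outer / r_inner)

# Both MEs end up near 16 nT at 1 au despite very different initial fields.
fast = required_index(177.0, 16.0)  # strongly expanding event
slow = required_index(85.0, 16.0)   # weakly expanding event
print(round(fast, 2), round(slow, 2))  # -2.04 -1.42
```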
\begin{figure}[tb] \centering {\includegraphics*[width=.48\linewidth]{alpha_BT_BN.png}} {\includegraphics*[width=.48\linewidth]{alpha_front-back.png}} \caption{Left: Average of the expansion indices of the positive and negative $B_N$ components vs.\ average of the expansion indices of the positive and negative $B_T$ components. The lines show the 1-to-1, 1-to-2 and 2-to-1 values, as well as the expected $-1$ and $-2$ values for the indices. The colorbar is the same as in Figures 3 and 4. Right: Comparison of the expansion indices of $B_T$ and $B_N$ in the front and back of the MEs.} \label{fig:component1} \end{figure} \subsection{Evolution with Distance of Magnetic Field Components Inside MEs} \citet{Vrsnak:2019} investigated how the fitted magnetic field inside MEs and the radial size of MEs vary with distance and discussed the implications of their study for the self-similar expansion of MEs. They concluded that, for individual cases, reconnection between the ME and the solar wind and/or pancaking of the ME cross-section is necessary to understand the evolution of the ME size as compared to the evolution of the magnetic field. In a previous work, \citet{Leitner:2007} noted that the expected difference in the decrease rate with distance of the axial and azimuthal components of the magnetic field may create differences in the trend found for the inner and outer heliosphere. \citet{Good:2019} discussed the change in orientation of the 18 CMEs measured in conjunction that they studied, finding a tendency towards lower-inclined MEs at the outer spacecraft compared to the first spacecraft. This implies a (small) difference in the way different magnetic field components change with distance. We note that for a force-free field with self-similar expansion, the axial magnetic field is expected to vary with distance as $r^{-1}$, whereas the poloidal field should vary as $r^{-2}$.
In \citet{Lugaz:2020}, for the 2013 July 10--13 CME, we found that a uniform decrease of the magnetic field components as $r^{-1.6}$ was a better fit to the data than separate fits for the $y$ (axial) and $z$ (poloidal) components of the magnetic field. Here, we continue this analysis for the 42 CMEs measured in conjunction between two spacecraft. Because MESSENGER and VEX were planetary missions, there are significant ``data gaps'' in the IMF measurements corresponding to the times when the spacecraft were inside the planetary magnetosphere. In addition, the magnetic fields inside MEs have been reported to rotate significantly in the inner heliosphere in some cases \citep[e.g., see][]{Nieves:2012,Winslow:2016}, and it is unclear how this should be taken into account when comparing the axial or poloidal fields measured by these spacecraft with those measured near 1~au. As such, we compare the tangential and normal components of the magnetic field measured at the two spacecraft in the $RTN$ coordinate system. We focus on the extrema of the variation of the magnetic field components. For an ME that has a clear low (resp.\ high) inclination, the $B_T$ (resp.\ $B_N$) component typically keeps the same sign throughout the ME interval. In addition, the expansion of the front and back halves of the ejecta may occur at different rates. For example, in event 21-2013, discussed in \citet{Lugaz:2020}, $B_T$ is always positive, while $B_N$ varies from positive to negative \citep[a NWS ME following the classification of][]{Bothmer:1998}. For this event, as shown in Table~1, the positive $B_N$ component (at the front) decreases with an index of $-1.4$, whereas the negative $B_N$ component (at the back) decreases with an index of $-1.1$. We therefore calculate the average of the indices for the positive and negative extrema of one component, and compare these averages. For the 21-2013 event, this means comparing the index of $-1.2$ for the unipolar $B_T$ component with $-1.25$ for the average of the $B_N$ indices.
This shows that, although this is a low-inclined cloud, the axial and poloidal fields do not expand with a 1-to-2 ratio, but have approximately the same rate of expansion. We perform the same analysis for all MEs, and these averages of the index decrease of the $B_T$ and $B_N$ components inside the ME are plotted in the left panel of Figure~\ref{fig:component1}. This figure shows that there is no ME for which one component decreases as $r^{-1}$ while the other decreases as $r^{-2}$, which would be expected for the force-free expansion of a low- or high-inclined ME. There are only a few cases for which this is approximately true. In fact, for most MEs, the expansion indices of the normal and tangential components agree with each other. The averages of the expansion indices of $B_T$ and $B_N$ are nearly identical (see Table~\ref{tab:stats}), and the ratio of $\alpha_{BT}$ to $\alpha_{BN}$ is $1.09 \pm 0.63$. These results could occur if all MEs in our sample had an inclination close to $45^\circ$, which would imply that the normal and tangential components decrease similarly in a force-free model. This is highly unlikely; if nothing else, the four events described in Section~\ref{sec:results_spec} include MEs with a low inclination. In addition, such a situation should result in indices of both components around $-1.5$, whereas we find a cluster of MEs for which both components decrease approximately as $r^{-2}$. Lastly, we compare the expansion of the components in the front half of the MEs with that in the back half. The results are plotted in the right panel of Figure~\ref{fig:component1}. It shows a bias towards the expansion in the front of the ejecta being stronger than the expansion at the back. Note that we have reliable exponents only for 34 pairs (front and back) of magnetic field components, and that these are dominated by conjunctions involving MESSENGER data (28 cases vs.\ six for VEX data).
The ratio of the front-to-back expansion is $1.57 \pm 1.18$: for 15 events, the front expansion is at least 25\% larger than the back expansion, while only seven show the reverse (the other ten events are consistent with the same expansion in the front and the back). As this result is based on the extrema of $B_T$ and $B_N$, the exact position of the boundaries is not expected to influence it. This is somewhat consistent with the findings of \citet{Janvier:2019}, who showed that the profile of the magnetic field inside MEs is more peaked towards the front at MESSENGER and more symmetric at 1~au. This would result in the front half showing more expansion than the back half of MEs, as found here. This result may be associated with the presence of a sheath region in front of the ME that allows the front part of the ME to expand relatively freely. On the contrary, the expansion of the back part of the ME may be hindered by the presence of the ME wake, with a speed comparable to (or sometimes slightly higher than) that of the back of the ME. \add{The presence of fast solar wind streams behind MEs may also result in MEs being somewhat compressed in the back, and would thus still be consistent with these results. The presence of fast streams behind MEs near 1~au is a relatively frequent occurrence}. We note that this cannot be explained by aging, as the back part of the ME is older than the front when it passes over a spacecraft and is the section of the ME that has had the most time to expand. This finding, if confirmed, further complicates the notion of force-free and self-similar expansion of MEs, as there might not be magnetic field balance at all times throughout the ME propagation. \section{Discussion and Conclusions} \label{sec:discuss} \begin{figure}[tb] \centering {\includegraphics*[width=.98\linewidth]{final_figure.png}} \caption{Schematic representation of the expansion of two MEs.
In the inner heliosphere, one ME (orange) expands more slowly than the other (blue) until both reach total pressure balance with the solar wind (bottom row). Afterwards, they expand at the same rate, dictated by the solar wind expansion. Both MEs have $\alpha = -1.8$ between 0.8 and 1~au. Between 0.3 and 0.8~au, the rapidly expanding ME has $\alpha = -2.5$ and the slowly expanding ME has $\alpha = -1.2$. The combined $\alpha$ from 0.3 to 1~au are $-2.4$ and $-1.3$ for the rapidly and slowly expanding MEs, respectively. {\it In situ} measurements (top row) near 1~au do not reflect what happened in the innermost heliosphere, while measurements below $\sim 0.8$~au might reflect these conditions but are not available. Overall, the $\alpha$ of various MEs (middle row) tend towards the solar wind value (green curve) as the MEs approach 1~au, with the orange and blue curves representing the MEs shown below and other colors representing other potential behaviors. } \label{fig:final} \end{figure} In this work, we have used {\it in situ} measurements of 42 CMEs made in conjunction by two spacecraft among MESSENGER, Venus Express, {\it Wind}, STEREO-A and STEREO-B during solar cycle 24 to compare global and local measures of ME expansion. In terms of global measures, we have focused on the index of the decrease of the magnetic field with distance, $\alpha$. In terms of local measures, we have examined the expansion speed and various dimensionless parameters, primarily $\zeta$ from \citet{Demoulin:2009} as calculated near 1~au. We have also compared the global expansion with local properties of CMEs: their initial and final speeds and magnetic field strength. Our sample, in terms of average properties of the CMEs, appears typical when compared to the average properties from larger samples measured near 1~au \citep[]{Richardson:2010,Jian:2018}.
We have found that the global and local measures of CME expansion are, at best, only weakly correlated, indicating that measurements near 1~au do not reflect the expansion of CMEs between $\sim$ 0.3~au and 1~au. The only strong correlation has been found to occur between the ME magnetic field strength (average or maximum) at the innermost spacecraft and the index of decrease of the magnetic field. Overall, a picture of CME expansion in the inner heliosphere (from $\sim$ 0.3~au to 1~au) can be drawn from this work to explain the various measurements. A sketch of this scenario can be seen in Figure~\ref{fig:final}. MEs with strong internal magnetic pressure at 0.3~au expand rapidly until they reach total pressure balance with the solar wind, somewhere before 1~au (blue curves). MEs with weak internal magnetic pressure at 0.3~au expand slowly until they reach total pressure balance with the solar wind, also somewhere before 1~au (orange curves). In the heliosphere near Earth ($\sim$ 0.8 to 1.1~au and probably beyond), the ME expansion is controlled by the change in the solar wind pressure \citep[]{Demoulin:2009} and does not depend on the initial magnetic field strength. As such, both the final (near 1~au) magnetic field strength and the local measure of expansion ($\zeta$) do not reflect processes that occurred below $\sim$ 0.8~au. The power index of the decrease of the magnetic field with distance, $\alpha$, is dominated by what happens in the inner heliosphere. In the example in Figure~\ref{fig:final}, both MEs have $\alpha = -1.8$ between 0.8 and 1~au, corresponding to a decrease of the magnetic field by a factor of 1.5. Between 0.3 and 0.8~au, the rapidly expanding ME has $\alpha = -2.5$, corresponding to a decrease of the magnetic field by a factor of 11.6, and the slowly expanding ME has $\alpha = -1.2$, corresponding to a decrease by a factor of 3.2. The combined $\alpha$ from 0.3 to 1~au are $-2.4$ and $-1.3$, even though the ME magnetic field at 1~au is the same for both cases, with a value of 13.5~nT.
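The combined indices in this example follow from weighting each segment's index by the logarithm of its radial span. A small sketch verifying the quoted combined values of $-2.4$ and $-1.3$ (the latter implying an inner-segment index of about $-1.2$ for the slowly expanding ME):

```python
import math

def combined_index(segments):
    """Combine piecewise power-law indices over contiguous radial segments.
    segments: list of (r_start, r_end, alpha) tuples, in au, in order."""
    num = sum(a * math.log(r1 / r0) for r0, r1, a in segments)
    den = math.log(segments[-1][1] / segments[0][0])
    return num / den

fast = combined_index([(0.3, 0.8, -2.5), (0.8, 1.0, -1.8)])
slow = combined_index([(0.3, 0.8, -1.2), (0.8, 1.0, -1.8)])
print(round(fast, 1), round(slow, 1))  # -2.4 -1.3
```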
While this scenario fits the various findings of this work, fully testing it would require i) more conjunction events involving three or more spacecraft, and ii) plasma measurements, especially of the velocity, in the inner heliosphere (below 0.95~au) to test the prediction that the $\zeta$ parameter may be better correlated with $\alpha$ in the innermost heliosphere. In addition, we have found some evidence from the evolution of the tangential and normal components of the magnetic field inside MEs between the two spacecraft that MEs do not maintain force-free conditions while they expand. This conclusion has been obtained without performing fits to the magnetic field measurements, which would require making assumptions regarding the morphology of the magnetic field inside MEs. In addition, fitting methods have been found to often disagree regarding ME orientation \citep[]{AlHaddad:2013}. Lastly, we have found evidence that the front of the ME expands faster than the back. This might be consistent with the back half of the ME being overtaken by the solar wind behind it. Such a scenario would hinder the ME expansion in its back half. This finding is consistent with the fact that many {\it in situ} measurements within MEs, such as those presented in Figure~\ref{fig:example}, have a decreasing speed profile in the front part of the ME and a constant ME speed equal to the solar wind speed in the back of the ME. This indicates that the ME expansion in the ecliptic plane is not able to continue beyond the point where the ME back speed equals the solar wind speed. This is also consistent with the lack of reverse shocks measured at the back of MEs in the ecliptic plane, contrary to what occurs at the back of stream interaction regions or MEs at high latitudes \citep[]{Gosling:1998}. Some of these results could be further tested if we had multi-spacecraft measurements of CMEs made at approximately the same heliocentric distance.
This would allow us to compare different local measures of the CME expansion (expansion speed, $\zeta$, etc.) to determine how they vary through different crossings within the same ME. Such multi-spacecraft measurements will be possible when STEREO-A comes back to the proximity of the Sun--Earth line in 2023--2024, but this will only provide about 11 months of potential measurements within 10$^\circ$ of the Sun--Earth line. \citet{Lugaz:2018} highlighted differences between spacecraft measurements for an angular separation of $\sim 0.7^\circ$; however, the maximum magnetic field strength remained very consistent between the two spacecraft even when the components measured by the two spacecraft showed significant differences. If such differences within MEs are common, the results about the expansion of the various magnetic field components may be affected. This highlights the need for a dedicated mission providing multi-point measurements of MEs in the inner heliosphere. \acknowledgments This work has been made possible by the following grants: NASA NNX15AB87G, 80NSSC20K0700, 80NSSC17K0556 and 80NSSC20K0431 and NSF AGS1435785. RMW acknowledges support from NASA grant 80NSSC19K0914 and NSF grant AGS1622352. CJF acknowledges support from {\it Wind} grant 80NSSC19K1293. All the data analyzed in this study are publicly available. MESSENGER and Venus Express data are available on the Planetary Data System (\url{https://pds.jpl.nasa.gov}) while other data are available from CDAWeb (\url{https://cdaweb.sci.gsfc.nasa.gov/index.html/})
\section{Introduction}\label{sec:intro} Multiple hypothesis testing is both a classical and highly active research area, dating back (at least) to an initially unpublished 1953 manuscript by Tukey entitled ``The Problem of Multiple Comparisons''~\cite{tukey1953problem,tukey1994}. Given a large set of null hypotheses, the goal of multiple testing is to decide which subset to reject, while guaranteeing some notion of control on the number of false rejections. It is of practical importance to incorporate different forms of prior knowledge into existing multiple testing procedures; such prior knowledge can yield improvements in power and precision, and can also provide more interpretable results. Accordingly, we study methods that control the False Discovery Rate (FDR) or test the global null (GN) hypothesis while employing any number of the following strategies for incorporating prior knowledge: (a) the use of prior weights, (b) the use of penalty weights, (c) the partitioning of the hypotheses into groups, (d) the incorporation of knowledge of the dependence structure within the data, including options such as estimating and adapting to the unknown number of nulls under independence, or reshaping rejection thresholds to preserve error-control guarantees under arbitrary dependence. It is a challenge to incorporate all of these forms of structure while maintaining internal consistency (coherence and consonance) among the pattern of rejections and acceptances, and most existing work has managed to allow only one or two of the four strategies (a), (b), (c), (d) to be employed simultaneously. We present a general unified framework, called \texttt{p-filter}, for integrating these four strategies while performing a GN test or controlling the FDR. The framework is accompanied by an efficient algorithm, with code publicly available at \href{https://www.stat.uchicago.edu/~rina/pfilter.html}{https://www.stat.uchicago.edu/$\sim$rina/pfilter.html}.
This framework allows scientists to mix and match techniques, and use multiple different forms of prior knowledge simultaneously. As a by-product, our proofs often simplify and unify the analysis of existing procedures, and generalize the conditions under which they are known to work. \textbf{Organization}. The rest of this paper is organized as follows. In \secref{contributions}, we begin with an example to provide intuition, and we discuss the contributions of this paper. In \secref{pf+}, we describe the general \textnormal{\texttt{p-filter}}~framework, along with its associated theoretical guarantees; this section lays out the central contribution of our work. In \secref{lemmas}, we present three lemmas that provide valuable intuition and are central to the proof of \thmref{pf+}; see \secref{proof_main} for the proof itself. We prove the three aforementioned lemmas in \secref{lemmas-proofs}, and prove some related propositions in \secref{prop-proofs}. While directly related work is discussed immediately when referenced, we end by overviewing other related work in \secref{disc}. \section{An example, and our contributions}\label{sec:contributions} The various kinds of prior information considered in this paper have been studied in earlier works and repeatedly motivated in applied settings, and our focus is accordingly on the conceptual and mathematical aspects of multiple decision-making with prior knowledge. Before beginning our formal presentation, we consider a simple example, illustrated in Figure~\ref{fig:partitions}, in order to provide intuition. \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{graph.jpg} \caption{\small Consider a set of $n = 16$ hypotheses arrayed in a $4 \times 4$ grid, with four different partitions into groups: elementary, rows, columns and blocks. 
On the top row is the underlying truth, with the leftmost panel showing the hypothesis-level non-nulls, and the other three panels showing which groups in each partition are hence identified as non-null. On the bottom row is an example of a set of discoveries, with the leftmost panel showing the hypothesis-level rejections, and the other three panels showing which groups are correspondingly rejected (light-grey for correct rejections, black for false rejections). The false discovery proportions (FDP) in each partition are 0.2, 0, 0.33, 0.5 respectively. The true discovery proportions (empirical power) in each partition are 0.8, 0.66, 1, 0.5 respectively.} \label{fig:partitions} \end{figure} Consider a set of sixteen hypotheses arranged in a $4\times4$ grid, as displayed in the first panel of Figure~\ref{fig:partitions}. One may imagine that one coordinate refers to spatial locations, and the other to temporal locations, so that each square represents an elementary null hypothesis $H_{s,t}$, stating that there is nothing of interest occurring at spatial location $s$ at time $t$. As displayed in the leftmost panel of the first row, the non-nulls may be expected to have some spatio-temporal structure. In order to exploit this structure, the scientist may choose to group the hypotheses a priori in three ways: by spatial location, by temporal location, and by spatio-temporal blocks, as displayed by the other three panels in the first row. Each such group can be associated with a group null hypothesis, which states that there is nothing of interest occurring within that group. As displayed in the first row, the group-level non-nulls are simply implied by the elementary non-nulls. The second row of the figure displays the results of a hypothetical procedure that makes some elementary rejections (first panel), and hence some corresponding group level rejections (other three panels). 
The scientist may wish to not report too many false elementary rejections, but also not report discoveries at spurious locations, times or space-time blocks. One way to fulfill this wish is to enforce group FDR constraints, in addition to overall FDR control. This would correspond to controlling the ``spatial'' FDR, ``temporal'' FDR, and ``spatio-temporal'' FDR, in addition to the overall FDR. Requiring the rejected hypotheses to satisfy additional constraints may reduce power, but it may also increase interpretability, and result in higher precision (achieved FDP). For example, \citet{barber2016p} consider an example from neuroscience where each null hypothesis $H_{v,t}$ states that at $t$ seconds after the presentation of the stimulus, a chosen feature of the stimulus is unrelated to (a specific measure of) the brain activity at voxel $v$ in the brain. One may spatially group these hypotheses according to pre-defined regions of interest (ROIs) such as the visual cortices V1 to V5 and the left/right temporal cortices. This spatial partition allows us to capture the idea that either an ROI is unrelated to the stimulus, or many of its voxels $v$ will be related. Similarly, for a fixed voxel, one may temporally group the hypotheses, to capture the idea that either a voxel $v$ will remain unrelated to the stimulus at various delays, or it will be related at several consecutive delays (usually $t=4,6,8,10$ seconds after stimulus onset). The earlier \textnormal{\texttt{p-filter}}~algorithm \cite{barber2016p} simultaneously guarantees FDR control for multiple arbitrary and possibly non-hierarchical partitions, while ensuring that the elementary and group rejections are ``internally consistent.'' It is a multi-dimensional step-up procedure, one which reduces to the BH step-up procedure or Simes' GN test in special cases. The reader may refer to the original paper for numerical simulations and more details on the neuroscience example.
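To make the notion of per-partition FDR concrete: for each partition, the false discovery proportion is computed over that partition's rejected groups, a rejected group counting as false if it contains only null elementary hypotheses. A generic sketch of this bookkeeping (with a hypothetical rejection set) is:

```python
def group_fdp(rejected_groups, null_groups):
    """False discovery proportion over one partition's rejected groups.
    A rejected group is a false discovery if it is null, i.e., contains
    only null elementary hypotheses."""
    if not rejected_groups:
        return 0.0
    n_false = sum(1 for g in rejected_groups if g in null_groups)
    return n_false / len(rejected_groups)

# Hypothetical partition: groups 0-3 rejected, of which only group 3 is null.
print(group_fdp({0, 1, 2, 3}, null_groups={3, 5}))  # 0.25
```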
\subsection*{Our contributions} Consider a collection $\{H_1, \ldots, H_n \}$ of $n$ unordered hypotheses, along with associated $p$-values $\{P_1,\dots,P_n\}$. It is convenient to introduce the shorthand notation $[n] \ensuremath{:\, =} \{1,\dots,n\}$. Further, consider $M$ arbitrary unordered partitions of these hypotheses into groups, where the $m$-th partition (``layer'') contains an unordered set of $G^{(m)}$ groups: \begin{align*} \small A^{(m)}_1,\dots,A^{(m)}_{G^{(m)}}\subseteq [n] \text{ for $m=1,\dots,M$.} \end{align*} It may help the reader to imagine the first partition as being the elementary or finest partition, meaning that it contains $n$ groups of one hypothesis each, but it is important to note that this partition is entirely optional, and can be dropped if there is no desire to control the overall FDR. If $\mathcal{H}_0 \subseteq [n]$ are the true nulls, then we call a group $g$ null if $A^{(m)}_g \subseteq \mathcal{H}_0$. Let the set of null groups in partition $m$ be denoted by $\mathcal{H}_0^{(m)}$. Although we continue to use the name \textnormal{\texttt{p-filter}}~for the algorithm that we discuss in this paper, it goes significantly beyond the original algorithm; in particular, in our general setting of $M$ arbitrary partitions there are seven ways in which the new procedure developed here extends the original framework. \begin{enumerate} \item \textbf{Overlapping groups.} We allow the groups in any partition to overlap. An elementary hypothesis need not be part of just a single group \mbox{$g \in [G^{(m)}]$,} and we let $g^{(m)}(i)$ denote the set of groups in the $m$-th partition to which $P_i$ belongs---viz. \begin{align*} g^{(m)}(i) = \{g \in [G^{(m)}] : P_i \in A^{(m)}_g \}. \end{align*} For example, in the neuroscience example introduced earlier, if the scientist is unsure about the accuracy of the ROI borders, they may place boundary hypotheses into two or more ROIs to reflect this uncertainty.
\item \textbf{Incomplete partitions.} We allow partitions to be incomplete---let the $m$-th partition's leftover set $L^{(m)} \subset [n]$ represent all elementary hypotheses that do not belong to any group in the $m$-th partition: \begin{align*} \small L^{(m)} = [n] \backslash \bigcup_g A^{(m)}_g. \end{align*} This gives additional flexibility to the user, who may not want to assign some hypotheses to any group. Note that $L^{(m)}$ is not just another group; this set is not counted when calculating the group-level FDR in layer $m$, meaning that elementary discoveries within $L^{(m)}$ do not alter the group FDR at layer $m$. Hence, hypotheses in this leftover set have no internal consistency constraints imposed by layer $m$. For instance, in the neuroscience example, if it is determined (for example, due to brain damage or surgery) that some voxels may not naturally fit into any ROI, then they can be left out of that partition. \item \textbf{Internal consistency (IC)}. In order to maintain interpretability when dealing with overlapping groups, it is convenient to introduce two natural notions of internal consistency of the group rejections and elementary rejections: \begin{itemize} \item \textbf{Weak IC}. We reject $H_i$ if and only if, in every partition $m$, either there is at least one rejected group containing $i$, or $i\in L^{(m)}$. \item \textbf{Strong IC}. We reject $H_i$ if and only if, in every partition $m$, either every group that contains $i$ is rejected, or $i\in L^{(m)}$.
\end{itemize} These definitions\footnote{We remark that these are not the only two notions of internal consistency that can fit into our framework: any \emph{monotone} notion of IC can be handled, where \emph{monotone} means that decreasing the $p$-values can only possibly increase the number of rejections at all layers.} of IC are extensions to the multilayer setting of the notions of \emph{coherence} and \emph{consonance} as defined by \citet{gabriel1969simultaneous}, and explored in the FWER literature by Sonnemann and Finner~\cite{sonnemann1982allgemeine,sonnemann2008general,sonnemann1988vollstandigkeitssatze}, and \citet{romano2011consonance}. In the aforementioned neuroscience example, weak internal consistency may be more appropriate. \item \textbf{Weights.} The $m$-th partition can be associated with two sets of positive weights, one pair for each group $g$ in that partition: \begin{align*} \text{Penalties } \{u^{(m)}_g\} \text{ and priors } \{w^{(m)}_g\}, \text{ such that } \sum_{g=1}^{G^{(m)}} u^{(m)}_g w^{(m)}_g = G^{(m)}. \end{align*} This generalizes work on doubly-weighted procedures by \citet{blanchard2008two}, who considered a single partition. Their work in turn generalizes earlier work using prior weights \cite{genovese2006false} and penalty weights \cite{BH97} separately. Large prior weights indicate beliefs that the hypotheses are more likely to be non-null, and large penalties reflect which hypotheses are more scientifically important. For example, in the neuroscience example, weights can also be used to take differing ROI sizes into account, or prior knowledge of when and where effects are expected to be found. \item \textbf{Reshaping.} Reshaping functions $\beta$ are used to guard against possible dependence among the $p$-values by \emph{undercounting} the size (or weight) of rejected sets. Reshaping makes it possible to handle arbitrary dependence; on the other hand, this favorable robustness property is accompanied by a loss of power.
Following Blanchard and Roquain~\citep{blanchard2008two}, for any probability measure $\nu$ on $[0,\infty)$, we define the reshaping function \begin{align} \label{eqn:reshaping} \beta(k) \ensuremath{:\, =} \int_{0}^k x~ \mathsf{d}\nu(x) ~\leq~ k. \end{align} If the $p$-values within or across layers are arbitrarily dependent, we may use reshaping functions $\beta^{(m)}$ to reshape thresholds in layer $m$. In the special case that Simes $p$-values are used to form the group-level $p$-values $P^{(m)}_g$, we may use reshaping functions $\widetilde\beta^{(m)}_g$ to protect the Simes procedure from arbitrary dependence within the group. The original procedure of \citet{BY01} corresponds to choosing the reshaping function $\beta_{BY}(k) = \frac{k}{\sum_{i=1}^n \frac1i}$. Many other examples and their connections to other formulations of multiple testing methods can be found in the literature~\citep{blanchard2008two,sarkar2008methods,sarkar2008two}. In contrast to the discrete distributions which have been the focus of past work, in the current paper it is necessary to consider continuous measures, since the penalty weight of rejected hypotheses, unlike their count, can be fractional. \item \textbf{Adaptivity.} For any partition whose group $p$-values are known to be independent (i.e., independence \emph{between} groups, but not necessarily \emph{within} each group), we can incorporate ``null-proportion adaptivity'' for that partition \cite{hochberg1990more,benjamini2000adaptive}. For partition $m$, we fix a user-defined constant $\lambda^{(m)} \in (0,1)$, and define a weighted null-proportion estimator: \begin{small} \begin{align} \label{eqn:pihatm} \widehat{\pi}^{(m)} := \frac{|u^{(m)} \cdot w^{(m)}|_\infty + \sum_{g} u^{(m)}_g w^{(m)}_g \One{P^{(m)}_g > \lambda^{(m)}}}{G^{(m)} (1-\lambda^{(m)})} \;.
\end{align} \end{small} The use of null-proportion adaptivity in any \emph{one layer} may improve the power in \emph{all layers}, since more groups being discovered in one layer leads to more elementary discoveries, and hence more discovered groups in other layers. For a single group with no weights, our approach reduces to the original suggestion of Storey et al.~\cite{Storey02,Storey04}. \item \textbf{Arbitrary group $p$-values.} Our new $\textnormal{\texttt{p-filter}}$ algorithm is no longer tied to the use of Simes $p$-values at the group layers, unlike the original algorithm. In other words, each group-level $p$-value at each layer can be formed by combining the elementary $p$-values within that group~\cite{vovk2012combining,heard2018choosing}. When the $p$-values are independent, some options include Fisher's $-2\sum_i \ln P_i$ and Rosenthal's $\sum_i \Phi^{-1}(P_i)$, where $\Phi$ is the Gaussian CDF (originally proposed by \citet{stouffer1949american}). When there are very few non-nulls, the Bonferroni correction is known to be more powerful, and it also works under arbitrary dependence, as does R\"uschendorf's proposal of $2 \sum_i P_i / n$, and R\"uger's proposal of $P_{(k)} \cdot n/k$ for a fixed $k$. Alternately the group $p$-values can be constructed directly from raw data. Accordingly, we can appropriately use adaptivity or reshaping, as needed, depending on the induced dependence. \end{enumerate} Suppose a procedure rejects a subset $\widehat{\mathcal{S}}\subseteq[n]$ of hypotheses and a subset $\Sh^{(m)} \subseteq[G^{(m)}]$ of groups in partition $m$. Then, we may define the penalty-weighted FDR as \begin{align*} \textnormal{FDR}^{(m)}_u = \EE{\dotfrac{\sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g \One{g \in \Sh^{(m)}}}{\sum_{g \in [G^{(m)}]} u^{(m)}_g \One{g \in \Sh^{(m)}}}} \;. 
\end{align*} The rejections made by \textnormal{\texttt{p-filter}}~will be internally consistent, and satisfy \begin{align*} \textnormal{FDR}^{(m)}_u \leq \alpha^{(m)} \text{ simultaneously for all } m = 1,\dots,M. \end{align*} In order to handle the possibility of a ratio ``$\frac{0}{0}$'' in this definition, and in our later work, we adopt the ``dotfraction'' notation \begin{align} \label{eqn:define_dotfrac} \dotfrac{a}{b} & \ensuremath{:\, =} \begin{cases} 0,&\text{ if } a=0,\\ \frac{a}{b},&\text{ if }a \neq 0,b\neq 0,\\ \text{undefined } &\text{ if }a\neq 0, b=0. \end{cases} \end{align} Dotfractions behave like fractions whenever the denominator is nonzero. The use of dotfractions simplifies the presentation: note that $\dotfrac{a}{b} \neq \frac{a}{\max(b,1)}$ since $b$ may be fractional due to the use of penalty weights. We formally derive properties of dotfractions in the supplement (\appref{dotfrac}). When there is only one partition, and the weights equal one, the quantity $\textnormal{FDR}^{(m)}_u$ reduces to the usual FDR defined by \citet{BH95}, and $\textnormal{\texttt{p-filter}}$~reduces to the BH procedure\footnote{For a review of its history that involves Eklund and Seeger in the 1960s, and Simes, Hommel, Soric, Benjamini and Hochberg in the 1980s and 1990s, see \cite{seeger1968note,benjamini2000adaptive}.}. Many other procedures are recovered as special cases of $\textnormal{\texttt{p-filter}}$, as detailed after \thmref{pf+}. \section{\texorpdfstring{$\textnormal{FDR}$}{Lg} control and internal consistency for multiple partitions}\label{sec:pf+} Even in the case of just two partitions---one partition of groups and the elementary partition of individual hypotheses---it is non-trivial to provide a guarantee of internal consistency while controlling both group-level and individual-level FDR. 
For example, a sequential procedure that first rejects groups at a target FDR level $\alpha_1$, and then rejects individual hypotheses within rejected groups at level $\alpha_2$, may not control the elementary FDR (due to not accounting for selection bias), and may not be internally consistent (because there might be a group rejected in the first round with none of its elementary hypotheses rejected in the second round). Further, such a method is not easily generalized to non-hierarchical partitions. Similarly, a parallel procedure that independently runs FDR procedures on the groups and on the individuals may also fail to be internally consistent. Naively intersecting these rejections---that is, rejecting those hypotheses whose groups are rejected at every layer---may also fail to control the FDR (see \citet{barber2016p} for explicit examples). The \textnormal{\texttt{p-filter}}~algorithm is a multivariate extension of classical step-up procedures that is roughly based on the following sequence of steps: \begin{itemize} \item Select all hypotheses in each layer whose $p$-values are smaller than some initial layer-specific threshold. \item Reject an elementary hypothesis if it is contained in a selected group in every layer. \item In each layer, reject a group hypothesis if it contains a rejected elementary hypothesis. Then, estimate the group-FDP in each layer. \item Lower the initial thresholds at each layer, and repeat the steps above, until the group-FDP is below the desired level for all partitions. \end{itemize} Next, we discuss the necessary dependence assumptions and then derive the \textnormal{\texttt{p-filter}}~algorithm that implements the above scheme. \subsection{Marginal and joint distributional assumptions on $p$-values} We assume that the marginal distribution of each null $p$-value is stochastically larger than the uniform distribution, referred to as \emph{super-uniform} for brevity.
Formally, for any index $i \in \mathcal{H}_0$, we assume that \begin{align} \label{EqnSuperUniform} \PP{P_i \leq t} \leq t \quad\mbox{for all $t \in [0,1]$.} \end{align} Of course, uniformly-distributed $p$-values trivially satisfy this condition. We use the phrase \emph{under uniformity} to describe the situation in which the null $p$-values are marginally exactly uniform. If this phrase is not employed, it is understood that the null $p$-values are marginally super-uniform. Regarding assumptions on the joint distribution of $p$-values, three possible kinds of dependence will be considered in this paper: independence, positive dependence or arbitrary dependence. In the independent setting, null $p$-values are assumed to be mutually independent, and independent of non-nulls. In the arbitrary dependence setting, no joint dependence assumptions are made on the $p$-values. The last case is that of positive dependence, as formalized by the \emph{Positive Regression Dependence on a Subset} (PRDS) condition~\citep{lehmann1966some,sarkar1969some,BY01}. In order to understand its definition, it is helpful to introduce some basic notation. For a pair of vectors $x, y \in [0,1]^n$, we use the notation $x \preceq y$ to mean that $x \leq y$ in the orthant ordering, i.e., $x_i \leq y_i$ for all $i \in \{1, \dots, n\}$. A set $\ensuremath{\mathcal{D}} \subseteq [0,1]^n$ is said to be \emph{nondecreasing} if $x \in \ensuremath{\mathcal{D}}$ implies $y \in \ensuremath{\mathcal{D}}$ for all $y \succeq x$. We say that a function $f: [0,1]^n \mapsto \ensuremath{[0,\infty)}$ is \emph{nonincreasing}, if $x \preceq y$ implies $f(x) \geq f(y)$. \begin{definition}[positive dependence, PRDS] \label{ass:PRDS} We say that the vector $P$ satisfies PRDS if for any null index $i \in \mathcal{H}_0$ and nondecreasing set $ \ensuremath{\mathcal{D}} \subseteq [0,1]^n$, the function $t~\mapsto~\PPst{P\in \ensuremath{\mathcal{D}}}{P_i \leq t}$ is nondecreasing over $t\in(0,1]$. 
\end{definition} The original positive regression dependence assumption was introduced by~\citet{lehmann1966some} in the bivariate setting and by~\citet{sarkar1969some} in the multivariate setting, and extended to the PRDS assumption first made by~\citet{BY01}. These previous papers used the equality $P_i = t$ instead of the inequality $P_i \leq t$ in the definitions, but one can prove that both conditions are essentially equivalent. The PRDS condition holds trivially if the $p$-values are independent, but also allows for some amount of positive dependence. For intuition, suppose that \mbox{$Z = (Z_1,\ldots,Z_n)$} is a multivariate Gaussian vector with covariance matrix $\Sigma$; the null components correspond to Gaussian variables with zero mean. Letting $\Phi$ be the CDF of a standard Gaussian, the vector of $p$-values \mbox{$P=(\Phi(Z_1),\dots,\Phi(Z_n))$} is PRDS on $P_i$ for every index $i$ if and only if all entries of the covariance matrix $\Sigma$ are non-negative. See~\citet{BY01} for additional examples of this type. \subsection{Specifying the \texorpdfstring{$\textnormal{\texttt{p-filter}}$}{Lg} algorithm} In order to run the $\textnormal{\texttt{p-filter}}$ algorithm, we need to search for rejection thresholds for each layer. These thresholds will be parametrized by \emph{weighted} discovery counts $k^{(m)}\in[0,G^{(m)}]$ for each layer $m=1,\dots,M$. The reader is cautioned that each $k^{(m)}$ need not be an integer but instead should be viewed as a real number corresponding to the total rejected penalty weight. If the weights $u^{(m)}_g$ are all set equal to one, then $k^{(m)}$ corresponds to the number of groups in layer $m$ that are rejected. 
Given some prototypical vector $\vec{k} \ensuremath{:\, =} (k^{(1)},\dots,k^{(M)})$, we first perform an initial screening on each layer separately: \begin{align} \label{eqn:Shm} \Sh^{(m)}_{\textnormal{init}}(\vec{k}) = \left\{g\in[G^{(m)}]: P^{(m)}_g \leq \min \Big \{ \tfrac{ w^{(m)}_{g} \alpha^{(m)} \beta^{(m)}(k^{(m)})}{\widehat{\pi}^{(m)} G^{(m)}}, \lambda^{(m)} \Big \} \right\}. \end{align} If the groups in partition $m$ are independent, we replace $\beta^{(m)}(k^{(m)})$ by just $k^{(m)}$, and set $\widehat{\pi}^{(m)}$ using~\eqref{eqn:pihatm}; on the other hand, if they are arbitrarily dependent, we set $\widehat{\pi}^{(m)}=1$ and $\lambda^{(m)}=1$. This convention allows the same expressions to be used in all settings. For weak internal consistency, we define the elementary rejections as \begin{subequations} \begin{align} \label{eqn:hatS-defn} \widehat{\mathcal{S}}(\vec{k}) & = \widehat{\mathcal{S}}_{\textnormal{weak}}(\vec{k})= \bigcap_{m=1}^M \Biggl( \Big[\bigcup_{g\in \Sh^{(m)}_{\textnormal{init}}(\vec{k})} A^{(m)}_g \Big] \cup L^{(m)} \Biggr) \nonumber \\ &= \{ i\in[n] : \text{$\forall m$, either } i \in L^{(m)}, \text{ or $\exists \ g \in g^{(m)}(i)$, } g \in \Sh^{(m)}_{\textnormal{init}}(\vec{k}) \}. \end{align} Alternatively, for strong internal consistency, we may instead define \begin{align}\label{eqn:hatS-defn-strong} \widehat{\mathcal{S}}(\vec{k}) &= \widehat{\mathcal{S}}_{\textnormal{strong}}(\vec{k})= \bigcap_{m=1}^M \left([n]\backslash \bigcup_{g\in[G^{(m)}]\backslash \Sh^{(m)}_{\textnormal{init}}(\vec{k})} A^{(m)}_g\right) \nonumber \\ & = \{ i\in[n] : \text{$\forall m$, either } i \in L^{(m)}, \text{ or $\forall \ g \in g^{(m)}(i)$, } g \in \Sh^{(m)}_{\textnormal{init}}(\vec{k}) \}.
\end{align} \end{subequations} Finally, using either $\widehat{\mathcal{S}}(\vec{k})=\widehat{\mathcal{S}}_{\textnormal{weak}}(\vec{k})$ or $\widehat{\mathcal{S}}(\vec{k})=\widehat{\mathcal{S}}_{\textnormal{strong}}(\vec{k})$, we redefine the set of groups in layer $m$ which are rejected as: \begin{align} \label{eqn:hatSm-defn} \Sh^{(m)}(\vec{k}) = \left\{ g \in [G^{(m)}]: A^{(m)}_g\cap \widehat{\mathcal{S}}(\vec{k}) \neq \emptyset \text{ and } g \in \Sh^{(m)}_{\textnormal{init}}(\vec{k}) \right \}. \end{align} Examining these definitions, it may be verified that (weak or strong) internal consistency is satisfied by the rejections $\widehat{\mathcal{S}}(\vec{k})$ and $\Sh^{(m)}(\vec{k})$. Of course, these definitions depend on the initial choice of the vector $\vec{k}$. Since we would like to make a large number of discoveries, we seek a $\vec{k}$ that is as large as possible (coordinatewise), while at the same time controlling the layer-specific FDRs, which are the expectations of \begin{align*} \small \fdp^{(m)}_u(\vec{k}) \ensuremath{:\, =} \dotfrac{\sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g \One{g \in \Sh^{(m)}(\vec{k})} }{\sum_{g \in [G^{(m)}]} u^{(m)}_g \One{g \in \Sh^{(m)}(\vec{k})}} \;. \end{align*} \vspace{0.2in} \noindent Now, define the data-dependent set of feasible vectors $\vec{k}$ as \begin{align} \label{eqn:feasible-thresholds} \widehat{\mathcal{K}} = \left \{ \vec{k} \in [0,G^{(1)}] \times \dots \times [0,G^{(M)}] : \sum_{g \in \Sh^{(m)}(\vec{k})} u^{(m)}_g \geq k^{(m)} \textnormal{ for all } m \right \}, \end{align} where we suppress the implicit dependence of $\widehat{\mathcal{K}}$ on input parameters such as $\alpha^{(m)}, \lambda^{(m)}, \{w^{(m)}_g\},\{u^{(m)}_g\}$.
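For intuition, the weak-consistency intersection in~\eqref{eqn:hatS-defn} is straightforward to compute; the Python sketch below illustrates it on hypothetical toy layers (the group memberships, initial selections, and leftover sets are made-up inputs).

```python
# Sketch: the weak internal-consistency intersection.
# Reject hypothesis i iff, in every layer, i is either in that layer's
# leftover set or covered by at least one initially-selected group.
# The toy layers below are hypothetical inputs for illustration.

def weak_ic_rejections(n, layers):
    """Each layer is a dict with 'groups' (list of index sets),
    'selected' (set of initially-selected group indices), and
    'leftover' (set of indices outside all groups)."""
    rejected = set(range(n))
    for layer in layers:
        covered = set(layer["leftover"])
        for g in layer["selected"]:
            covered |= layer["groups"][g]
        rejected &= covered  # intersect the coverage across layers
    return rejected

layers = [
    {"groups": [{0, 1}, {2, 3}], "selected": {0}, "leftover": {4}},
    {"groups": [{0, 2}, {1, 3}], "selected": {0, 1}, "leftover": set()},
]
print(sorted(weak_ic_rejections(5, layers)))  # [0, 1]
```

Hypotheses 0 and 1 survive because they are covered in both layers; hypothesis 4 is leftover in the first layer but uncovered in the second.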
In particular, if the penalty weights are all equal to one, then the consistency condition defining the ``feasible'' $\vec{k}$'s is equivalent to requiring that $|\Sh^{(m)}(\vec{k})| \geq k^{(m)}$ for all $m=1,\dots,M$; i.e., the numbers of rejections in each layer at the vector $\vec{k}$ are elementwise $\geq \vec{k}$. This condition can be viewed as a generalization, to the multi-partition setting, of the ``self-consistency'' condition described by \citet{blanchard2008two}. It is also worth noting that the $\textnormal{\texttt{p-filter}}$ algorithm in \citet{barber2016p} was derived in terms of thresholds $\vec{t}$ instead of numbers of rejections $\vec{k}$, and there the corresponding feasibility condition was that $\widehat{\fdp}_m(\vec{t}) \leq \alpha^{(m)}$, where $\widehat{\fdp}_m(\vec{t})$ is an empirical-Bayes type estimate of the FDP. Indeed, if we omit $\widehat{\pi}^{(m)},\beta^{(m)},w^{(m)},u^{(m)}$ for simplicity, then associating $t^{(m)}$ with $\alpha^{(m)} \widehat{k}^{(m)} / G^{(m)}$ and comparing our derivation to that of \citet{barber2016p}, we can see that the ``self-consistency'' viewpoint and the ``empirical-Bayes'' viewpoint are equivalent and lead to the same algorithm. However, when dealing with reshaping under arbitrary dependence, the proofs are simpler in terms of $\vec{k}$ than in terms of $\vec{t}$, explaining our switch in notation. As with the BH and BY procedures, we then choose the largest feasible thresholds $k^{(m)}$, given by: \begin{align}\label{eqn:max-threshold-m} \widehat{k}^{(m)} = \max\left\{k^{(m)} : \exists k^{(1)},\dots,k^{(m-1)},k^{(m+1)},\dots,k^{(M)}\textnormal{ s.t. } \vec{k} \in\widehat{\mathcal{K}} \right\} \;.
\end{align} This choice defines our algorithm: the $\textnormal{\texttt{p-filter}}$ algorithm rejects the hypotheses $\widehat{\mathcal{S}}(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$, as defined in~\eqnref{hatS-defn} or~\eqnref{hatS-defn-strong}, with rejections at layer $m$ given by $\Sh^{(m)}(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$ as defined in~\eqnref{hatSm-defn}. Next, we present the theoretical guarantees associated with $\textnormal{\texttt{p-filter}}$. \subsection{Theoretical guarantees} The following proposition states that the set of feasible vectors $\widehat{\mathcal{K}}$ actually has a well-defined ``maximum'' corner. \begin{proposition} \label{prop:max} Let the set of feasible vectors $\widehat{\mathcal{K}}$ be defined as in equation~\eqref{eqn:feasible-thresholds}, and let the partition-specific maximum feasible vector $\widehat{k}^{(m)}$ be defined as in equation~\eqref{eqn:max-threshold-m}. Then we have \begin{align} \label{eqn:feasible-thresholds2} (\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})\in\widehat{\mathcal{K}} \;. \end{align} \end{proposition} \noindent The proof is provided in \secref{proof_max}; it is a generalization of the corresponding result for the original $\textnormal{\texttt{p-filter}}$ algorithm~\cite{barber2016p}. The vector $(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$ is not just feasible from the perspective of self-consistency as captured by $\widehat{\mathcal{K}}$, but it is also feasible from the perspective of FDR control. Specifically, the next theorem establishes that---assuming for now that we can find $(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$---selecting the set $\widehat{\mathcal{S}}(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$ guarantees simultaneous control of $\textnormal{FDR}_u^{(m)}$ for all $M$ partitions. In this theorem, the notation $\textnormal{Simes}_w(P_{A^{(m)}_g})$ refers to the weighted Simes' $p$-value (see \appref{simes} in the supplement for details). 
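For readers less familiar with Simes' test: in the unweighted case ($w\equiv1$), the Simes group $p$-value reduces to $\min_k \{n P_{(k)}/k\}$, sketched below; the weighted variant $\textnormal{Simes}_w$ used in the theorem additionally incorporates prior weights, as detailed in the supplement.

```python
# Sketch: the unweighted Simes p-value, min_k (n/k) * P_(k), for a
# group of p-values. The weighted variant Simes_w in the theorem
# additionally incorporates prior weights (see the supplement).

def simes_pvalue(pvals):
    n = len(pvals)
    return min(1.0, min(n * p / k
                        for k, p in enumerate(sorted(pvals), start=1)))

print(simes_pvalue([0.01, 0.02, 0.5]))  # min(0.03, 0.03, 0.5) ~= 0.03
```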
\begin{thm} \label{thm:pf+} Any procedure that computes the vector $(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$ according to definition~\eqref{eqn:max-threshold-m} satisfies the following properties, for all partitions $m = 1,\dots,M$ simultaneously: \begin{enumerate} \item[(a)] If the base $p$-values are independent, and all group $p$-values are given by $P^{(m)}_g = \textnormal{Simes}_w(P_{A^{(m)}_g})$, then employing adaptivity by defining $\widehat{\pi}^{(m)}$ as in~\eqref{eqn:pihatm} guarantees that $\textnormal{FDR}_u^{(m)}\leq\alpha^{(m)}$. \item[(b)] If base $p$-values are positively dependent (PRDS) and group $p$-values are given by $P^{(m)}_g = \textnormal{Simes}_w(P_{A^{(m)}_g})$, then without adaptivity or reshaping, we have that $\textnormal{FDR}_u^{(m)}\leq \alpha^{(m)} \frac{\sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g}{G^{(m)}} \leq \alpha^{(m)}$. \item[(c)] When all $p$-values are arbitrarily dependent, and are constructed arbitrarily (under the assumption that $P^{(m)}_g$ is super-uniform for any null group $g\in\mathcal{H}_0^{(m)}$, meaning it is a valid $p$-value), then using reshaping as in~\eqref{eqn:reshaping} guarantees that $\textnormal{FDR}_u^{(m)}\leq \alpha^{(m)}\frac{\sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g}{G^{(m)}} \leq \alpha^{(m)}$. \item[(d)] In the setting of part (c), if additionally the groups at layer $m$ are independent (that is, $P_{A^{(m)}_g}$ is independent from $P_{-A^{(m)}_g}$, for each $g\in[G^{(m)}]$), then using reshaping as in~\eqref{eqn:reshaping} and adaptivity for layer $m$ as in~\eqref{eqn:pihatm}, guarantees that $\textnormal{FDR}_u^{(m)}\leq\alpha^{(m)}$. \end{enumerate} \end{thm} The proof, given in \secref{proof_main}, uses three interpretable lemmas that we first discuss in \secref{lemmas}. It also introduces several new ideas to handle overlapping groups with dependent $p$-values. 
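For concreteness, the Benjamini--Yekutieli reshaping function $\beta_{BY}(k)=k/\sum_{i=1}^n \tfrac1i$ from~\eqref{eqn:reshaping}, one valid choice in parts (c) and (d), can be sketched as follows (the value $n=3$ in the example is an arbitrary illustration).

```python
# Sketch: the Benjamini-Yekutieli reshaping function
# beta_BY(k) = k / (sum_{i=1}^n 1/i), one valid reshaping choice for
# guarding against arbitrary dependence; note beta_BY(k) <= k.

def beta_by(k, n):
    harmonic = sum(1.0 / i for i in range(1, n + 1))
    return k / harmonic

# With n = 3 hypotheses, the harmonic sum is 1 + 1/2 + 1/3 = 11/6,
# so beta_BY(3) = 3 / (11/6) = 18/11.
print(beta_by(3, 3))  # ~= 1.636
```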
To clarify the difference between parts (c) and (d): together, these two results guarantee that if we use adaptivity for some set $\mathcal{M}_{\textnormal{adapt}}\subset[M]$ of layers, and do not use adaptivity (i.e.~set $\widehat{\pi}^{(m)}=1$) for the remaining layers, then FDR control is maintained across {\em all} layers as long as, for each $m\in\mathcal{M}_{\textnormal{adapt}}$, the layer-specific independence statement holds---$P_{A^{(m)}_g}$ is independent from $P_{-A^{(m)}_g}$, for each $g\in[G^{(m)}]$. If this condition fails for some $m\in\mathcal{M}_{\textnormal{adapt}}$, the FDR control in the other layers will not be affected. One application of statement (d) arises when the base $p$-values are independent, there are no overlapping groups, and group $p$-values are formed using a Fisher, Rosenthal, or other combination of the base $p$-values. Recently, \citet{katsevich17mkf} proved that in case (d), the FDR is controlled even without using reshaping, albeit at a constant factor larger than the target level. In practice, if we have accurate side information about group structures that the rejected hypotheses likely respect, then we may significantly improve our \textit{precision}, achieving a lower FDR than the theoretical bound, without affecting our power much. However, inaccurate side information may significantly lower our power, since each $p$-value would have additional misguided constraints to meet. These issues were explored in simulations by \citet{barber2016p}. \paragraph{Special cases} The setting with a single partition ($M=1$) recovers a wide variety of known algorithms.
Considering only the finest partition with $n$ groups containing one hypothesis each, the \textnormal{\texttt{p-filter}}~algorithm and associated \thmref{pf+} together recover known results about (a) the BH procedure of \citet{BH95} when weights, reshaping, and adaptivity are not used, (b) the BY procedure of \citet{BY01} when reshaping is used, (c) the prior-weighted BH procedure of \citet{genovese2006false} when only prior weights are used, (d) the penalty-weighted BH procedure of \citet{BH97} when only penalty weights are used, (e) the doubly-weighted BH procedure of \citet{blanchard2008two} when both sets of weights and reshaping are used, and (f) the Storey-BH procedure of Storey et al.~\cite{Storey02,Storey04} when only adaptivity is used. When we instantiate $\textnormal{\texttt{p-filter}}$ with the coarsest partition, with a single group containing all $n$ hypotheses, we recover exactly (g) the $\textnormal{Simes}$ test \cite{simes1986improved} without weights, and (h) a variant by \citet{HL94} if prior weights are used. We recover the results of (i) the \textnormal{\texttt{p-filter}}~by \citet{barber2016p} under positive dependence, when we do not use weights, adaptivity, reshaping, overlapping groups, or leftover sets, and restrict ourselves to Simes' $p$-values. We also recover a host of new procedures: for example, while the past literature has not yet shown how to use either prior or penalty weights together with adaptivity, \textnormal{\texttt{p-filter}}~reduces to (j) a doubly-weighted adaptive procedure for the finest partition under independence. Also, while the aforementioned procedures were each proved under one form of dependence or the other, we recover results for all three forms of dependence at one go, with a single unified proof technique.
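Two of these special cases are easy to write down explicitly. The sketch below implements the BH step-up of case (a) and the Storey null-proportion estimate underlying case (f); the example $p$-values, $\alpha=0.1$, and $\lambda=0.5$ are arbitrary illustrative choices.

```python
# Sketch: two single-partition special cases of p-filter.
# bh() is the BH step-up (no weights, adaptivity, or reshaping);
# storey_pi_hat() is the null-proportion estimate whose use at level
# alpha / pi_hat gives the Storey-BH adaptive procedure.

def bh(pvals, alpha):
    """Reject i iff P_i <= k_hat * alpha / n, where k_hat is the
    largest self-consistent number of rejections."""
    n = len(pvals)
    sorted_p = sorted(pvals)
    feasible = [k for k in range(1, n + 1)
                if sorted_p[k - 1] <= k * alpha / n]
    k_hat = max(feasible, default=0)
    return {i for i, p in enumerate(pvals) if p <= k_hat * alpha / n}

def storey_pi_hat(pvals, lam=0.5):
    """Null-proportion estimate (1 + #{P_i > lam}) / (n (1 - lam))."""
    n = len(pvals)
    return (1 + sum(p > lam for p in pvals)) / (n * (1 - lam))

pvals = [0.01, 0.02, 0.03, 0.2, 0.8]  # arbitrary example
print(sorted(bh(pvals, alpha=0.1)))   # rejects the three smallest: [0, 1, 2]
print(storey_pi_hat(pvals))           # (1 + 1) / (5 * 0.5) = 0.8
```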
\subsection{An efficient implementation} Although one can employ a brute-force grid search to find $(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$, the \texttt{p-filter} algorithm presented in Algorithm~\ref{alg:multi-layer_fdr} is able to find this vector efficiently using a coordinate-descent style procedure, and is a strict generalization of the algorithm by the same name in \citet{barber2016p}. \begin{algorithm}[h] \caption{The $\textnormal{\texttt{p-filter}}$ for multi-layer FDR control} \label{alg:multi-layer_fdr} \begin{algorithmic} \STATE \textbf{Input:} $M$ possibly incomplete partitions of possibly overlapping groups of indices $[n]$;\\\quad\quad\quad A vector of base $p$-values $P\in[0,1]^n$;\\ \quad\quad\quad Group $p$-values $P^{(m)}_g$ for each group $g=1,\dots,G^{(m)}$ in layers $m=1,\dots,M$;\\ \quad\quad\quad $M$ target FDR levels $\{\alpha^{(m)}\}$; \\ \quad\quad\quad $M$ sets of prior weights and/or penalty weights $\{w^{(m)}_g, u^{(m)}_g\}$;\\ \quad\quad\quad $M$ thresholds for adaptive null proportion estimation $\{\lambda^{(m)}\}$;\\ \quad\quad\quad $M$ reshaping functions $\{\beta^{(m)}\}$, if desired. \STATE \textbf{Initialize:} Set $k^{(m)}=G^{(m)}$, and $\widehat{\pi}^{(m)}$ as in definition~\eqref{eqn:pihatm}. \REPEAT \FOR{$m=1,\dots,M$} \STATE Update the $m$th coordinate: defining $\Sh^{(m)}(\vec{k})$ as in equation~\eqref{eqn:hatSm-defn} (using weak or strong internal consistency, as desired), let \begin{align*} k^{(m)} \leftarrow \max\left\{k'^{(m)} \in [0,G^{(m)}]: \sum\limits_{g \in \Sh^{(m)}(k^{(1)},\dots,k^{(m-1)},k'^{(m)},k^{(m+1)},\dots,k^{(M)})} u^{(m)}_g \geq k'^{(m)} \right\} \end{align*} \ENDFOR \UNTIL{the values $k^{(1)},\dots,k^{(M)}$ are all unchanged for one full cycle.} \STATE \textbf{Output:} Vector $\widehat{k}=(k^{(1)},\dots,k^{(M)})$, rejected hypotheses $\widehat{\mathcal{S}}(\widehat{k})$, and rejected groups $\Sh^{(m)}(\widehat{k})$ in each partition.
\end{algorithmic} \end{algorithm} The following proposition provides a correctness guarantee for Algorithm~\ref{alg:multi-layer_fdr}: \begin{proposition} \label{prop:alg} The output of Algorithm~\ref{alg:multi-layer_fdr} is the maximum feasible corner $(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$ defined in equations~\eqref{eqn:max-threshold-m} and~\eqref{eqn:feasible-thresholds2}. \end{proposition} \noindent This result was proved by \citet{barber2016p} in the setting of the original $\textnormal{\texttt{p-filter}}$ algorithm, where the $k^{(m)}$'s take only integer values; here, the algorithm is slightly more subtle, with real-valued $k^{(m)}$'s due to the presence of penalty weights $u^{(m)}_g$. The proof of the proposition for this more general setting is given in \secref{proof_alg}. \section{Three lemmas} \label{sec:lemmas} In this section, we present three lemmas that lie at the heart of the succinct proofs of the theorems in this paper. Our motivation for presenting these lemmas here is that they are interpretable, and provide valuable intuition for the proofs that follow. \subsection{A super-uniformity lemma for FDR control} In order to develop some intuition for the lemma that follows, we note that our super-uniformity assumption~\eqref{EqnSuperUniform} on the null $p$-values can be reformulated as: \begin{align} \label{eqn:PRDS-fixed} \text{For any $i \in \mathcal{H}_0$, ~}~ \EE{\dotfrac{{\One{P_i\leq t}}}{t}} & \leq 1 \text{ for any fixed $t \in [0,1]$.} \end{align} Of course, if $P_i$ is uniform then the above inequality holds with equality. The following lemma guarantees that property~\eqnref{PRDS-fixed} continues to hold for certain random thresholds $f(P)$. Recall that the term ``nonincreasing'' is interpreted coordinatewise, with respect to the orthant ordering. \begin{lemma}[Super-uniformity lemma] \label{lem:power} Let $i \in \mathcal{H}_0$ be a null hypothesis with $p$-value $P_i$, and let $P^{-i}$ denote the other $n-1$ $p$-values.
\begin{enumerate} \item[(a)] For any nonincreasing function $f:[0,1]^n\rightarrow[0,\infty)$, if $P_i$ is independent of $P^{-i}$, then we have \begin{align*} \EEst{\dotfrac{\One{P_i\leq f(P)}}{f(P)} }{ P^{-i}} \leq 1. \end{align*} If, in addition, $f$ has range $[0,1]$ and satisfies the LOOP condition (supplement, \appref{LOOP}), and $P_i$ is uniformly distributed, then the inequality becomes an equality: \begin{align*} \EEst{\dotfrac{\One{P_i\leq f(P)}}{f(P)} }{ P^{-i}} = 1. \end{align*} \item[(b)] For any nonincreasing function $f:[0,1]^n\rightarrow[0,\infty)$, if $P$ is PRDS with respect to $P_i$, then \begin{align*} \EE{\dotfrac{\One{P_i\leq f(P)}}{f(P)} } \leq 1. \end{align*} \item[(c)] For any constant $c\geq 0$, any function $f:[0,1]^n\rightarrow[0,\infty)$, and any reshaping function $\beta$, under arbitrary dependence of the $p$-values, \begin{align*} \small \EE{\dotfrac{\One{P_i\leq c\cdot \beta(f(P)) }}{c\cdot f(P)} } \leq 1. \end{align*} \item[(d)] For any constant $c\geq 0$, any functions $f_1,\dots,f_m:[0,1]^n\rightarrow[0,\infty)$, and any reshaping functions $\beta_1,\dots,\beta_m$, under arbitrary dependence of the $p$-values, \begin{align*} \small \EE{\dotfrac{\One{P_i\leq c\cdot \prod_{\ell=1}^m\beta_\ell(f_\ell(P)) }}{c\cdot \prod_{\ell=1}^m f_\ell(P)} } \leq 1 . \end{align*} \end{enumerate} \end{lemma} The proofs of statement (a) with equality, and of statement (d), are given in \secref{power-proof}. Statement (a) with inequality is recovered as a special case of statement (b), which was proved by \citet{blanchard2008two}, who also proved (c). The more general statement (d), with more than one reshaping function present in the bound, will be required in the proof of the group-level super-uniformity \lemref{power3} below.
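As an informal numerical illustration of statement (a) (a Monte Carlo sanity check, not part of the formal development), the following Python sketch estimates $\EE{\One{P_i\leq f(P)}/f(P)}$ under i.i.d.\ uniform $p$-values, taking $f$ to be the BH threshold function $f(P)=\alpha\widehat{k}/n$, which is nonincreasing, has range $[0,1]$, and satisfies the LOOP condition; the estimate should therefore concentrate near $1$. All names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def bh_threshold(p, alpha):
    """BH threshold f(P) = alpha * k_hat / n, where k_hat is the number of
    BH rejections; this f is coordinatewise nonincreasing in P and
    satisfies the leave-one-out (LOOP) property."""
    n = len(p)
    p_sorted = np.sort(p)
    below = np.nonzero(p_sorted <= alpha * np.arange(1, n + 1) / n)[0]
    k_hat = below[-1] + 1 if below.size else 0
    return alpha * k_hat / n

n, alpha, trials, i = 10, 0.5, 40000, 0
total = 0.0
for _ in range(trials):
    p = rng.uniform(size=n)     # all hypotheses null and independent
    f = bh_threshold(p, alpha)
    if f > 0 and p[i] <= f:     # dotted-fraction convention: 0/0 = 0
        total += 1.0 / f
estimate = total / trials
print(estimate)  # Monte Carlo estimate of E[1{P_i <= f(P)} / f(P)]
```

Under these assumptions the lemma gives the exact value $1$, so the printed estimate should be close to $1$ up to Monte Carlo error.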
\subsection{A group-level super-uniformity lemma} In analogy to the super-uniformity \lemref{power}, we present the following lemma, which provides analogous bounds in the setting of independent or positively dependent base $p$-values (in which case the group $p$-value is constructed as a Simes $p$-value), and in the setting of arbitrarily dependent base $p$-values (in which case the group $p$-value can be constructed by any method---reshaped Simes, Fisher, or others---as long as it is a valid $p$-value). \begin{lemma}[Group-level super-uniformity lemma] \label{lem:power3} Let $g \in \mathcal{H}_0^{\grp}$ be a null group, that is, $A_g\subseteq \mathcal{H}_0$. Let $P_{A_g}$ denote the $p$-values in this group, $P_{A_g}=(P_j)_{j\in A_g}$, and let $P_{-A_g}$ denote the remaining $p$-values, $P_{-A_g}=(P_j)_{j\not \in A_g}$. \begin{enumerate} \item[(a)] If $f:[0,1]^n\rightarrow[0,\infty)$ is a nonincreasing function, and the base $p$-values $P_1,\dots,P_n$ are independent, then \begin{align*} \EEst{\dotfrac{\One{\textnormal{Simes}_w(P_{A_g})\leq f(P)}}{f(P)} }{P_{-A_g}} \leq 1. \end{align*} \item[(b)] If $f:[0,1]^n\rightarrow[0,\infty)$ is a nonincreasing function, and the base $p$-values $P_1,\dots,P_n$ are positively dependent (PRDS), then \begin{align*} \EE{\dotfrac{\One{\textnormal{Simes}_w(P_{A_g})\leq f(P)}}{f(P)} } \leq 1. \end{align*} \item[(c)] If the base $p$-values $P_1,\dots,P_n$ are arbitrarily dependent, then for any constant $c>0$, any reshaping function $\beta$, and any function $f:[0,1]^n\rightarrow[0,\infty)$, we have \begin{align*} \EE{\dotfrac{\One{T(P_{A_g})\leq c \beta(f(P)) }}{cf(P)} } \leq 1, \end{align*} where $T:[0,1]^{|A_g|}\rightarrow[0,1]$ is any valid group $p$-value; i.e., any function with the property that $T(P_{A_g})$ is super-uniform whenever $g$ is null.
\item[(d)] Let $g_1,\dots,g_k$ be a set of $k$ possibly overlapping null groups, meaning $A_{g_1},\dots,A_{g_k} \subseteq \mathcal{H}_0$, and let $S_1,\dots,S_k$ denote the corresponding Simes' $p$-values. If $f:[0,1]^n\rightarrow[0,\infty)$ is a nonincreasing function, and the base $p$-values $P_1,\dots,P_n$ are positively dependent (PRDS), then \begin{align*} \EE{\dotfrac{\One{\textnormal{Simes}(S_1,\dots,S_k) \leq f(P)}}{f(P)} } \leq 1. \end{align*} \end{enumerate} \end{lemma} \noindent The proof of this lemma relies on \lemref{power}, and can be found in \secref{power2-proof}. We remark that statement (d) is different from statement (b) applied to the null group $g = \bigcup_{i=1}^k g_i$; indeed, in statement (d), the arguments to the Simes' procedure are themselves Simes' $p$-values, and not the original base $p$-values. If desired, statement (d) can be further bootstrapped to apply to the root of an entire tree of null groups, where each internal node stores the Simes' $p$-value calculated on its children. As an aside, one may wonder whether the Simes' $p$-values are themselves positively dependent (PRDS), given that they satisfy a super-uniformity lemma much like the PRDS base $p$-values. We have been able neither to prove nor to disprove this claim; settling it may be of independent interest. \subsection{An inverse binomial lemma for adaptivity with weights} The following lemma is required for the proof of adaptivity with weights; more specifically, we use it to bound the expected inverse of the doubly-weighted null-proportion estimate. \begin{lemma}[Inverse binomial lemma] \label{lem:power2} Given a vector $a \in [0,1]^{d}$, a constant $b \in [0,1]$, and Bernoulli variables $Z_i ~\stackrel{\textnormal{i.i.d.}}{\sim}~ \text{Bernoulli}(b)$, the weighted sum $Z \ensuremath{:\, =} 1 + \sum_{i=1}^{d} a_i Z_i$ satisfies \begin{align} \frac{1}{1 + b \sum_{i=1}^d a_i } \leq \EE{\frac{1}{Z}} & \leq \frac{1}{b (1 + \sum_{i=1}^d a_i)}.
\end{align} \end{lemma} \noindent Since $\EE{Z} = 1 + b \sum_{i=1}^d a_i $, the lower bound on $\EE{1/Z}$ follows by Jensen's inequality. We include this bound to provide context for the upper bound on $\EE{1/Z}$, whose proof can be found in~\secref{power3-proof}. When $a_i=1$ for all $i$, the claim follows by a standard property of binomial distributions, as described in \citet{benjamini2006adaptive}.

With these three lemmas in place, we now turn to the proof of the main theorem in the next section. \section{Proof of Theorem~\lowercase{\ref{thm:pf+}}} \label{sec:proof_main} \noindent In order to handle all four cases of the theorem at once, we define a function $\gamma^{(m)}$ to be the identity if we are not using reshaping (theorem statements (a,b)), or $\gamma^{(m)}=\beta^{(m)}$ if we are using reshaping (theorem statements (c,d)). We also let $\widehat{\pi}^{(m)}=1$ and $\lambda^{(m)}=1$ if we are not using adaptivity (theorem statements (b,c)), or let $\widehat{\pi}^{(m)}$ be defined as in equation~\eqref{eqn:pihatm} where adaptivity is used (theorem statements (a,d)). Fix any partition $m$. Since $\PP{P_i=0}=0$ for any $i\in\mathcal{H}_0$ by assumption, we assume that $P_i\neq 0$ for any $i\in\mathcal{H}_0$ without further mention; this assumption then implies that if $g\in\Sh^{(m)}(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$ for some null group $g\in\mathcal{H}_0^{(m)}$, we must have $\widehat{k}^{(m)}>0$.
We can then calculate \begin{align*} \fdp^{(m)}_u(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)}) &= \dotfrac{\sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g \One{g \in \Sh^{(m)}(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})}}{ \sum_{g \in [G^{(m)}]} u^{(m)}_g \One{g \in \Sh^{(m)}(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})} } \\[1.5em] & \leq \dotfrac{\sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g \One{g \in \Sh^{(m)}(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})}}{ \widehat{k}^{(m)} }, \\[1em] & \leq \dotfrac{\sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g \One{g \in \Sh^{(m)}_{\textnormal{init}}(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})}}{ \widehat{k}^{(m)} }, \\[1em] & = \dotfrac{\sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g \One{P^{(m)}_g \leq \min\{w^{(m)}_g \frac{\alpha^{(m)} \gamma^{(m)}(\widehat{k}^{(m)})}{\widehat{\pi}^{(m)} G^{(m)}}, \lambda^{(m)} \} } }{ \widehat{k}^{(m)} } , \end{align*} where the first inequality follows by definition~\eqref{eqn:feasible-thresholds} of the feasible set $\widehat{\mathcal{K}}$, the second follows since $\Sh^{(m)}(\vec{k})\subseteq\Sh^{(m)}_{\textnormal{init}}(\vec{k})$ for any $\vec{k}$ by definition, and the last step uses the definition of $\Sh^{(m)}_{\textnormal{init}}(\vec{k})$ in~\eqref{eqn:Shm} (without reshaping, for theorem statements (a,b), or with reshaping for theorem statements (c,d)). Multiplying the numerator and denominator of each term by $\frac{\alpha^{(m)} w^{(m)}_g}{G^{(m)}}$, and taking expectations on both sides, it follows that \begin{align} \label{eqn:maintheorem-midproof-new} \textstyle \textnormal{FDR}^{(m)}_u \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\dotfrac{ \One{P^{(m)}_g \leq \min\{ \frac{w^{(m)}_g \alpha^{(m)} \gamma^{(m)}(\widehat{k}^{(m)})}{\widehat{\pi}^{(m)} G^{(m)}}, \lambda^{(m)} \} }}{ \frac{w^{(m)}_g \alpha^{(m)} \widehat{k}^{(m)}}{ G^{(m)}} }}. \end{align} With these calculations in place, we now prove the four statements of the theorem. 
Given the suggestive form of the above expression, it is natural to anticipate the use of the two super-uniformity lemmas. \noindent \paragraph{Theorem statement (a)} Define the function $f^{(m)}_{g}$ that maps the vector $P$ to $\frac{w^{(m)}_g \alpha^{(m)} \widehat{k}^{(m)} }{\widehat{\pi}^{(m)} G^{(m)}}$. Note that $f^{(m)}_{g}$ is a nonincreasing function of $P$, since $\widehat{k}^{(m)}$ is a nonincreasing function of $P$ by definition of our procedure, while $\widehat{\pi}^{(m)}$ is a nondecreasing function of $P$. We also define the quantity \begin{align} \label{eqn:pihatm_g} \widehat{\pi}^{(m)}_{-g} ~\ensuremath{:\, =}~ \frac{|u^{(m)} w^{(m)}|_\infty + \sum_{h \neq g} u^{(m)}_h w^{(m)}_h \One{ P^{(m)}_h >\lambda^{(m)}}}{G^{(m)} (1-\lambda^{(m)})}. \end{align} Returning to expression~\eqref{eqn:maintheorem-midproof-new}, we may then deduce that \begin{align} \textnormal{FDR}^{(m)}_u \notag & \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\dotfrac{ \One{P^{(m)}_g \leq \min\{ \frac{w^{(m)}_g \alpha^{(m)} \widehat{k}^{(m)}}{\widehat{\pi}^{(m)} G^{(m)}}, \lambda^{(m)} \} }}{ \widehat{\pi}^{(m)} \frac{w^{(m)}_g \alpha^{(m)} \widehat{k}^{(m)}}{\widehat{\pi}^{(m)} G^{(m)}} }}\\ \notag&= \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\One{P^{(m)}_g \leq\lambda^{(m)}}\cdot \dotfrac{ \One{P^{(m)}_g \leq f^{(m)}_g(P)}}{ \widehat{\pi}^{(m)} f^{(m)}_g(P) }} \\ \label{eqn:maintheorem-midproof-new-a} & \stackrel{(i)}{\leq} \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\dotfrac{ \One{P^{(m)}_g \leq f^{(m)}_g(P)}}{ \widehat{\pi}^{(m)}_{-g} f^{(m)}_g(P) }}, \end{align} where inequality (i) holds because the event $P^{(m)}_g\leq \lambda^{(m)}$ implies $\widehat{\pi}^{(m)}=\widehat{\pi}^{(m)}_{-g}$. 
Conditioning on $P_{-A^{(m)}_g}$ for each group $g$ in expression~\eqref{eqn:maintheorem-midproof-new-a}, we get: \begin{align*} \textnormal{FDR}^{(m)}_u & \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\dotfrac{ \One{P^{(m)}_g \leq f^{(m)}_g(P)}}{ \widehat{\pi}^{(m)}_{-g} f^{(m)}_g(P) }}\\ &= \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\EEst{\dotfrac{ \One{P^{(m)}_g \leq f^{(m)}_g(P)}}{ \widehat{\pi}^{(m)}_{-g} f^{(m)}_g(P) }}{P_{-A^{(m)}_g}}} \\ & \stackrel{(ii)}{=} \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\frac{1}{\widehat{\pi}^{(m)}_{-g}}\EEst{\dotfrac{ \One{P^{(m)}_g \leq f^{(m)}_g(P)}}{ f^{(m)}_g(P) }}{P_{-A^{(m)}_g}}} \\ & \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\frac{1}{\widehat{\pi}^{(m)}_{-g}}}, \end{align*} where equality (ii) holds because $\widehat{\pi}^{(m)}_{-g}$ is a function of only the $p$-values outside of group $g$, i.e., of $P_{-A^{(m)}_g}$, while the last inequality holds by \lemref{power3}(a). Finally, observe that independence between the different groups of partition $m$ implies that the indicator variables $\One{P^{(m)}_h >\lambda^{(m)}}$ are independent Bernoullis with probabilities $\geq 1-\lambda^{(m)}$ of success. Thus, as a consequence of \lemref{power2}, we can prove that \begin{align} \label{eqn:storey-wbinomial-maintheorem} \EE{\frac{1}{\widehat{\pi}^{(m)}_{-g}}} ~\leq~ \frac{G^{(m)}}{\sum\limits_{h \in \mathcal{H}_0^{(m)}} u^{(m)}_h w^{(m)}_h}. \end{align} \noindent To establish property~\eqref{eqn:storey-wbinomial-maintheorem}, let $b \ensuremath{:\, =} (1-\lambda^{(m)}),~ d \ensuremath{:\, =} |\mathcal{H}_0^{(m)}|-1$, and define \begin{align*} Z \ensuremath{:\, =} 1+ \sum_{h \in \mathcal{H}_0^{(m)}, h \neq g} a_h \One{P^{(m)}_h > \lambda^{(m)}} \text{ where } a_h &= \frac{u^{(m)}_h w^{(m)}_h}{|u^{(m)}\cdot w^{(m)}|_\infty}. 
\end{align*} Since $Z \leq \frac{G^{(m)} (1-\lambda^{(m)})}{|u^{(m)} \cdot w^{(m)}|_\infty} \widehat{\pi}^{(m)}_{-g}$ as the right-hand side expression sums over more indices than the left, applying~\lemref{power2} guarantees that \begin{align*} \EE{\frac{|u^{(m)} \cdot w^{(m)}|_\infty}{G^{(m)} (1-\lambda^{(m)}) \widehat{\pi}^{(m)}_{-g}}} \leq \EE{\frac1{Z}} \leq \frac{|u^{(m)}\cdot w^{(m)}|_\infty}{(1-\lambda^{(m)})(|u^{(m)}\cdot w^{(m)}|_\infty+\sum_{h\in \mathcal{H}_0^{(m)}, h \neq g}u^{(m)}_h w^{(m)}_h)}. \end{align*} Some simple algebra then leads to property~\eqref{eqn:storey-wbinomial-maintheorem}. \noindent Plugging~\eqref{eqn:storey-wbinomial-maintheorem} back into our bounds on FDR, we finally obtain \begin{align*} \textnormal{FDR}^{(m)}_u & \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\frac{1}{\widehat{\pi}^{(m)}_{-g}}}\\ &\leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g\frac{G^{(m)}}{\sum\limits_{h \in \mathcal{H}_0^{(m)}} u^{(m)}_h w^{(m)}_h} \\ &\leq \alpha^{(m)}. \end{align*} \noindent \paragraph{Theorem statement (b)} The proof of statement (b) follows the same steps as for (a), but without the need to condition on $P_{-A^{(m)}_g}$, since we do not use adaptivity. Define the function $f^{(m)}_{g}(P) = \frac{w^{(m)}_g \alpha^{(m)} \widehat{k}^{(m)} }{ G^{(m)}}$. Then $f^{(m)}_{g}$ is a nonincreasing function of $P$, since $\widehat{k}^{(m)}$ is a nonincreasing function of $P$. Returning to~\eqref{eqn:maintheorem-midproof-new}, as in the proof of statement (a), we calculate \begin{align*} \textnormal{FDR}^{(m)}_u \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\dotfrac{ \One{P^{(m)}_g \leq f^{(m)}_g(P)}}{ f^{(m)}_g(P) }}. 
\end{align*} By \lemref{power3}(b), we know that $\EE{\dotfrac{ \One{P^{(m)}_g \leq f^{(m)}_g(P)}}{ f^{(m)}_g(P) }}\leq 1$, and therefore \begin{align*} \textnormal{FDR}^{(m)}_u \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g, \end{align*} as claimed. \noindent \paragraph{Theorem statement (c)} We now turn to proving FDR control under reshaping. Define $f^{(m)}_g(P) = \widehat{k}^{(m)}$, and define the constant $c^{(m)}_g = \frac{w^{(m)}_g \alpha^{(m)}}{G^{(m)}}$. Returning to~\eqref{eqn:maintheorem-midproof-new}, as before, we calculate \begin{align*} \textnormal{FDR}^{(m)}_u \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\dotfrac{ \One{P^{(m)}_g \leq c^{(m)}_g \cdot \beta^{(m)}\big(f^{(m)}_g(P)\big)}}{ c^{(m)}_g \cdot f^{(m)}_g(P) }}. \end{align*} By \lemref{power3}(c), we know that $\EE{\dotfrac{ \One{P^{(m)}_g \leq c^{(m)}_g \cdot \beta^{(m)}\big(f^{(m)}_g(P)\big)}}{ c^{(m)}_g \cdot f^{(m)}_g(P) }}\leq 1$ since $P^{(m)}_g$ is assumed to be super-uniform for any null group $g\in\mathcal{H}_0^{(m)}$. Therefore, \begin{align*} \textnormal{FDR}^{(m)}_u \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g. \end{align*} \noindent \paragraph{Theorem statement (d)} The proof of part (d) combines the calculations of part (a) (where adaptivity is used) with part (c) (where reshaping is used). Define $f^{(m)}_g(P) = \widehat{k}^{(m)}$ and $c^{(m)}_g = \frac{w^{(m)}_g \alpha^{(m)}}{\widehat{\pi}^{(m)}_{-g} G^{(m)}}$, where $\widehat{\pi}^{(m)}_{-g}$ is defined as in equation~\eqref{eqn:pihatm_g} from part (a).
Note that $c^{(m)}_g$ is no longer a constant, but nonetheless, proceeding as in part (a), we can calculate \begin{align*} \textnormal{FDR}^{(m)}_u &\leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\dotfrac{ \One{P^{(m)}_g \leq c^{(m)}_g\cdot \beta^{(m)}\big( f^{(m)}_g(P)\big)}}{ \widehat{\pi}^{(m)}_{-g}\cdot c^{(m)}_g \cdot f^{(m)}_g(P) }}. \end{align*} Next we condition on the $p$-values outside the group $A^{(m)}_g$: \begin{align*} \textnormal{FDR}^{(m)}_u &\leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\EEst{\dotfrac{ \One{P^{(m)}_g \leq c^{(m)}_g\cdot \beta^{(m)}\big( f^{(m)}_g(P)\big)}}{ \widehat{\pi}^{(m)}_{-g}\cdot c^{(m)}_g \cdot f^{(m)}_g(P) }}{P_{-A^{(m)}_g}}} \\ & = \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\frac{1}{\widehat{\pi}^{(m)}_{-g}}\cdot \EEst{\dotfrac{ \One{P^{(m)}_g \leq c^{(m)}_g\cdot \beta^{(m)}\big( f^{(m)}_g(P)\big)}}{ c^{(m)}_g \cdot f^{(m)}_g(P) }}{P_{-A^{(m)}_g}}}, \end{align*} where the last step holds since $\widehat{\pi}^{(m)}_{-g}$ is a function of $P_{-A^{(m)}_g}$. Finally, we apply \lemref{power3}(c) to show that each of these conditional expected values is $\leq 1$. Of course, the subtlety here is that we must condition on $P_{-A^{(m)}_g}$. To do so, note that, after fixing $P_{-A^{(m)}_g}$: the function $f^{(m)}_g(P)$ can be regarded as a function of only the remaining unknowns (i.e., of $P_{A^{(m)}_g}$), and is still nonincreasing; the value $c^{(m)}_g$ is now a constant; and $P^{(m)}_g = T^{(m)}_g(P_{A^{(m)}_g})$ is indeed super-uniform since, due to the independence of $P_{A^{(m)}_g}$ from $P_{-A^{(m)}_g}$, its distribution has not changed.
Therefore, we can apply \lemref{power3}(c) (with the random vector $P_{A^{(m)}_g}$ in place of $P$, while $P_{-A^{(m)}_g}$ is treated as constant), to see that $\EEst{\dotfrac{ \One{P^{(m)}_g \leq c^{(m)}_g\cdot \beta^{(m)}\big( f^{(m)}_g(P)\big)}}{ c^{(m)}_g \cdot f^{(m)}_g(P) }}{P_{-A^{(m)}_g}}~\leq~1$, and therefore, \begin{align*} \textnormal{FDR}^{(m)}_u \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \EE{\frac{1}{\widehat{\pi}^{(m)}_{-g}}}. \end{align*} Finally, we need to bound $\EE{1/\widehat{\pi}^{(m)}_{-g}}$. As in the proof of part (a), we see that the indicator variables $\One{P^{(m)}_h >\lambda^{(m)}}$ are independent, since $P^{(m)}_h=T^{(m)}_h(P_{A^{(m)}_h})$, and the sets of $p$-values $P_{A^{(m)}_h}$ are assumed to be independent from each other. Furthermore, since $T^{(m)}_h(P_{A^{(m)}_h})$ is assumed to be a valid $p$-value, i.e., super-uniform for any $h\in\mathcal{H}_0^{(m)}$, the variable $\One{P^{(m)}_h >\lambda^{(m)}}$ is Bernoulli with probability $\geq 1-\lambda^{(m)}$ of success. Therefore, the bound~\eqref{eqn:storey-wbinomial-maintheorem} calculated in the proof of part (a) holds here as well, and so \begin{align*} \textnormal{FDR}^{(m)}_u \leq \frac{\alpha^{(m)}}{G^{(m)}} \sum_{g \in \mathcal{H}_0^{(m)}} u^{(m)}_g w^{(m)}_g \frac{G^{(m)}}{\sum\limits_{h \in \mathcal{H}_0^{(m)}} u^{(m)}_h w^{(m)}_h} = \alpha^{(m)}. \end{align*} \noindent This concludes the proof of all four parts of \thmref{pf+}. \section{Proofs of supporting lemmas} \label{sec:lemmas-proofs} In this section, we collect the proofs of some supporting lemmas. \subsection{Proof of super-uniformity Lemma~\lowercase{\ref{lem:power}}} \label{sec:power-proof} \lemref{power} follows directly from earlier work \citep{blanchard2008two,barber2016p}.
Statement (a) with inequality (but not with equality) follows as a special case of (b), since independence is a special case of positive dependence, and the distribution of a null $P_i$ does not change upon conditioning on an independent set of $p$-values. Statement (c) was also proved by \citet{blanchard2008two}. We now prove statements (a) and (d). \paragraph{Statement (a)} We prove the first part of \lemref{power}, under the assumptions that the function $P \mapsto f(P)$ satisfies the leave-one-out property with respect to index $i$, and that $P_i$ is uniformly distributed and is independent of the remaining $p$-values. Since $\PP{P_i = 0}=0$, we ignore this possibility in the following calculations. Since $f$ satisfies the LOOP condition, we have \begin{align*} \dotfrac{\One{P_i\leq f(P)}}{f(P)} = \frac{\One{P_i\leq f(\ITOZERO{P})}}{f(\ITOZERO{P})}. \end{align*} This can be seen by separately considering what happens when the numerator on the left-hand side is zero or one. Since $P^{-i}$ determines $f(\ITOZERO{P})$, it immediately follows that \begin{align*} \EEst{\dotfrac{\One{P_i\leq f(P)}}{f(P)} }{P^{-i}} &= \EEst{ \frac{\One{P_i\leq f(\ITOZERO{P})}}{f(\ITOZERO{P})}}{P^{-i}}\\ &= \frac{\PPst{P_i\leq f(\ITOZERO{P})}{f(\ITOZERO{P})}}{f(\ITOZERO{P})} \\ &= 1, \end{align*} where the last step follows since $f$ has range $[0,1]$, and $P_i$ is uniformly distributed and is independent of $\ITOZERO{P}$; therefore, we may deduce that $\PPst{P_i\leq f(\ITOZERO{P})}{f(\ITOZERO{P})} = f(\ITOZERO{P})$. This concludes the proof of the super-uniformity lemma under independence and uniformity. \paragraph{Statement (d)} For each $\ell=1,\dots,m$, let $\nu_{\ell}$ be a probability measure on $[0,\infty)$ chosen such that $\beta_\ell(k) = \beta_{\nu_\ell}(k) = \int_{x=0}^k x \;\mathsf{d}\nu_\ell(x)$, as in the definition of a reshaping function.
Let $X_\ell\sim \nu_\ell$ be drawn independently for each $\ell=1,\dots,m$, and let $\nu$ be the probability measure on $[0,\infty)$ corresponding to the distribution of $Z = \prod_{\ell=1}^m X_\ell$. Then \begin{align*} c \cdot \prod_{\ell=1}^m\beta_\ell(f_\ell(P)) &=c \cdot \prod_{\ell=1}^m \left( \int_{x_\ell=0}^{f_\ell(P)} x_\ell\;\mathsf{d}\nu_\ell(x_\ell)\right)\\ &=c \cdot \int_{x_1=0}^{\infty} \dots \int_{x_m=0}^{\infty} \left( \prod_{\ell=1}^m x_{\ell} \cdot \One{x_\ell\leq f_\ell(P)}\right)\; \mathsf{d}\nu_m(x_m)\dots \mathsf{d}\nu_1(x_1)\\ &=c \cdot\EE{ \prod_{\ell=1}^m \left(X_\ell\cdot \One{X_\ell\leq f_\ell(P)}\right)}\\ &=c \cdot\EE{Z \cdot \One{X_1\leq f_1(P),\dots,X_m\leq f_m(P)}}\\ &\leq c \cdot\EE{Z \cdot \One{Z \leq \prod_{\ell=1}^m f_\ell(P)}}\\ &=c \cdot\int_{z=0}^{ \prod_{\ell=1}^m f_\ell(P)} z\;\mathsf{d}\nu(z) =c \cdot\beta_\nu\left( \prod_{\ell=1}^m f_\ell(P)\right). \end{align*} Therefore, \[ \EE{\dotfrac{\One{P_i \leq c\cdot \prod_{\ell=1}^m\beta_\ell(f_\ell(P))}}{c\cdot \prod_{\ell=1}^m f_\ell(P)}}\\ \leq \EE{\dotfrac{\One{P_i \leq c \cdot\beta_\nu\left( \prod_{\ell=1}^m f_\ell(P)\right)}}{c\cdot \prod_{\ell=1}^m f_\ell(P)}}\leq 1,\] where the last step holds by Lemma~\ref{lem:power}(c). \subsection{Proof of group super-uniformity Lemma~\lowercase{\ref{lem:power3}}} \label{sec:power2-proof} First, note that the proof of \lemref{power3}(c) is straightforward, by applying \lemref{power}(c). More precisely, define an augmented vector $P' = (P_1,\dots,P_n,T(P_{A_g}))\in[0,1]^{n+1}$, and define a function $f'(P') \ensuremath{:\, =} f(P_1,\dots,P_n) = f(P)$. Since $T(P_{A_g})$ is assumed to be super-uniform (since $g\in\mathcal{H}_0^{\grp}$ is a null group), this means that $P'_{n+1} = T(P_{A_g})$ is super-uniform, i.e., index $n+1$ is a null $p$-value, in the augmented vector of $p$-values $P'$. Then applying \lemref{power}(c), with $P'$ and $f'$ in place of $P$ and $f$, and with index $i=n+1$, yields the desired bound. 
\lemref{power3}(a) is simply a special case of \lemref{power3}(b), since independence is a special case of positive dependence, and conditioning on an independent set of $p$-values $P_{-A_g}$ does not change the distribution of $P_{A_g}$. For \lemref{power3}(b), our proof strategy is to reduce this statement to a form where \lemref{power}(b) becomes applicable. (Note that we cannot simply take the approach of our proof of \lemref{power3}(c), because if we define an augmented vector of $p$-values $P'=\big(P_1,\dots,P_n,\textnormal{Simes}_w(P_{A_g})\big)$, we do not know if this vector is positively dependent---specifically, whether $P'$ is PRDS on entry $P'_{n+1}=\textnormal{Simes}_w(P_{A_g})$.) With this aim in mind, let $\widehat{k}_g \in \{0,\dots,n_g\}$ be the number of discoveries made by the $\textnormal{BH}_w$ procedure when run on the $p$-values within group $g$ at level $f(P)$. Then, using the connection between the Simes test and the BH procedure, we may write \begin{align*} \One{P_g \leq f(P)} = \One{\widehat{k}_g > 0} = \dotfrac{\widehat{k}_g}{\widehat{k}_g} = \dotfrac{\sum_{i \in A_g} \One{P_i \leq \frac{w_i \widehat{k}_g f(P)}{n_g} }}{\widehat{k}_g} \end{align*} since for the $\textnormal{BH}_w$ procedure at level $f(P)$, the $i$th $p$-value $P_i$ will be rejected if and only if $P_i\leq \frac{w_i \widehat{k}_g f(P)}{n_g}$. Hence, we may conclude that \begin{align*} \dotfrac{\One{P_g \leq f(P)}}{f(P)} = \dotfrac{\sum_{i \in A_g} \One{P_i \leq \frac{w_i\widehat{k}_g f(P)}{n_g} }}{\widehat{k}_g f(P)} = \frac1{n_g} \sum_{i \in A_g} w_i \dotfrac{\One{P_i \leq \widetilde{f}_g(P)}}{\widetilde{f}_g(P)}, \end{align*} where we have defined $\widetilde{f}_g(P) \ensuremath{:\, =} \frac{w_i\widehat{k}_g f(P)}{n_g}$ (with a slight abuse of notation, since $\widetilde{f}_g$ depends on the index $i$ through the weight $w_i$). Taking expectations on both sides and applying \lemref{power}(b) to each summand immediately proves \lemref{power3}(b).
(Specifically, we know that $P \mapsto \widehat{k}_g$ is a nonincreasing function of $P$, and $P\mapsto f(P)$ is also assumed to be nonincreasing; therefore, $\widetilde{f}_g$ is also nonincreasing in $P$.) Given that \lemref{power3}(b) is proved, the proof of \lemref{power3}(d) follows exactly the same argument as above, except that in the very last equation, $P_i$ is replaced by $S_i$, and \lemref{power3}(b) is invoked in place of \lemref{power}(b). \subsection{Proof of inverse-binomial Lemma~\lowercase{\ref{lem:power2}}}\label{sec:power3-proof} The lower bound follows immediately from Jensen's inequality, since $\EE{Z} = 1 + b \sum_{i=1}^d a_i$. We split the argument for the upper bound into three cases. \paragraph{Case 1: integer weights} First, suppose that all the weights $a_i$ are integers, that is, $a_i\in\{0,1\}$ for all $i$. In this case, we have $Z\sim 1 + \textnormal{Binomial}(k,b)$, where $k$ is the number of weights $a_i$ that are equal to $1$. A simple calculation shows that \begin{align*} \EE{\frac{1}{1+\textnormal{Binomial}(k,b)}} &= \sum_{z=0}^k\frac1{1+z} \binom{k}{z} b^{z} (1-b)^{k-z} \\ &= \frac1{b(1+k)} \sum_{z=0}^k \binom{k+1}{z+1} b^{z+1} (1-b)^{(k+1)-(z+1)}\\ & = \frac1{b(1+k)} \cdot\PP{\textnormal{Binomial}(k+1,b)\geq 1}\\ &\leq \frac{1}{b(1+k)} = \frac{1}{b(1+\sum_i a_i)}. \end{align*} \paragraph{Case 2: one non-integer weight} Suppose that exactly one of the weights $a_i$ is a non-integer. Without loss of generality, we can take $a_1=\dots=a_k=1$, $a_{k+1}=c$, $a_{k+2}=\dots=a_d=0$, for some $k\in\{0,\dots,d-1\}$ and some $c\in(0,1)$. Let $A=Z_1 + \dots + Z_{k+1}\sim \textnormal{Binomial}(k+1,b)$, and $Y=Z_{k+1}\sim\textnormal{Bernoulli}(b)$. Note that $\PPst{Y=1}{A} = \frac{A}{1+k}$.
Then \begin{align*} \EE{\frac{1}{Z}} &= \EE{\frac{1}{1+A - (1-c)Y}}\\ &= \EE{\EEst{\frac{1}{1+A - (1-c)Y}}{A}}\\ &= \EE{\frac{1}{1+A} \cdot \PPst{Y=0}{A} + \frac{1}{c+A} \cdot\PPst{Y=1}{A}}\\ &= \EE{\frac{1}{1+A} + \left(\frac{1}{c+A} - \frac{1}{1+A}\right)\cdot\PPst{Y=1}{A}}\\ &= \EE{\frac{1}{1+A} + \frac{1-c}{(c+A)(1+A)}\cdot\frac{A}{1+k}}\\ &\leq \EE{\frac{1}{1+A} + \frac{1-c}{(c+1+k)(1+A)}}, \end{align*} where the inequality holds since $\frac{A}{c+A}\leq \frac{1+k}{1+k+c}$ because $0\leq A\leq k+1$. Simplifying, we get \begin{small} \begin{align*} \EE{\frac{1}{Z}} \leq \EE{\frac{1}{1+A}}\cdot \frac{2+k}{1+k+c} \leq \frac{1}{b(2+k)}\cdot\frac{2+k}{1+k+c} = \frac{1}{b(1+k+c)} = \frac{1}{b(1+\sum_i a_i)}, \end{align*} \end{small} where the second inequality uses the fact that $\EE{\frac{1}{1+\textnormal{Binomial}(k+1,b)}}\leq \frac{1}{b(2+k)}$, as calculated in Case 1. \paragraph{Case 3: general case} Now suppose that there are at least two non-integer weights, $0<a_i\leq a_j<1$. Let $C=\sum_{\ell\neq i,j}a_{\ell}Z_{\ell}$, so that $Z=1+C+a_iZ_i+a_jZ_j$, and let $\alpha = \min\{a_i,1-a_j\}>0$. Then \begin{small} \begin{multline*} \EEst{\frac{1}{Z}}{C} = b^{2}\cdot\frac{1}{1+C+a_i+a_j} + b(1-b)\cdot\frac{1}{1+C+a_i} + b(1-b)\cdot\frac{1}{1+C+a_j} + (1-b)^{2}\cdot \frac{1}{1+C}\\ \leq b^{2}\cdot\frac{1}{1+C+a_i+a_j} + b(1-b)\cdot\frac{1}{1+C+(a_i-\alpha)} + b(1-b)\cdot\frac{1}{1+C+(a_j+\alpha)} + (1-b)^{2}\cdot \frac{1}{1+C}, \end{multline*} \end{small} where the inequality follows from a simple calculation using the assumption that $\alpha \leq a_i\leq a_j \leq 1- \alpha$. Now, define a new vector of weights $\tilde{a}$ where $\tilde{a}_i = a_i - \alpha, \tilde{a}_j = a_j + \alpha$ and $\tilde{a}_\ell = a_\ell$ if $\ell \notin \{i,j\}$. Defining $\widetilde{Z} = 1+\sum_\ell \tilde{a}_\ell Z_\ell$, the above calculation proves that $\EE{\frac{1}{Z}}\leq \EE{\frac{1}{\widetilde{Z}}}$ (by marginalizing over $C$).
Note that $\sum_i a_i = \sum_i\tilde{a}_i$, but $\tilde{a}$ has (at least) one fewer non-integer weight. Repeating this process inductively, we reduce to the case where there is at most one non-integer weight (i.e., Case 1 or Case 2). This proves the lemma. \section{Proof of propositions about \textnormal{\texttt{p-filter}}}\label{sec:prop-proofs} \subsection{Proof of ``maximum-corner'' Proposition~\lowercase{\ref{prop:max}}} \label{sec:proof_max} For each $m$, by definition of $\widehat{k}^{(m)}$, there is some $k^{(m)}_1,\dots,k^{(m)}_{m-1},k^{(m)}_{m+1},\dots,k^{(m)}_M$ such that \begin{align} \label{eqn:in_T} (k^{(m)}_1,\dots,k^{(m)}_{m-1},\widehat{k}^{(m)},k^{(m)}_{m+1},\dots,k^{(m)}_M)\in\widehat{\mathcal{K}}\;. \end{align} Thus, for each $m'\neq m$, $\widehat{k}^{(m')}\geq k^{(m)}_{m'}$ by definition of $\widehat{k}^{(m')}$. Then \begin{align*} \widehat{\mathcal{S}}(k^{(m)}_1,\dots,k^{(m)}_{m-1},\widehat{k}^{(m)},k^{(m)}_{m+1},\dots,k^{(m)}_M) \subseteq \widehat{\mathcal{S}}(\widehat{k}^{(1)},\dots,\widehat{k}^{(m-1)},\widehat{k}^{(m)},\widehat{k}^{(m+1)},\dots,\widehat{k}^{(M)})\;, \end{align*} because $\widehat{\mathcal{S}}(k^{(1)},\dots,k^{(M)})$ is a nondecreasing function of $(k^{(1)},\dots,k^{(M)})$, and this immediately implies \[ \Sh^{(m)}(k^{(m)}_1,\dots,k^{(m)}_{m-1},\widehat{k}^{(m)},k^{(m)}_{m+1},\dots,k^{(m)}_M) \subseteq \Sh^{(m)}(\widehat{k}^{(1)},\dots,\widehat{k}^{(m-1)},\widehat{k}^{(m)},\widehat{k}^{(m+1)},\dots,\widehat{k}^{(M)}).\] Therefore, for each layer $m$, \begin{align*} \sum_{g \in \Sh^{(m)}(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})} u^{(m)}_g \geq \sum_{g \in \Sh^{(m)}(k^{(m)}_1,\dots,k^{(m)}_{m-1},\widehat{k}^{(m)},k^{(m)}_{m+1},\dots,k^{(m)}_M)} u^{(m)}_g \geq \widehat{k}^{(m)}, \end{align*} where the second inequality holds by observation~\eqnref{in_T}, and by definition of $\widehat{\mathcal{K}}$ as the set of feasible vectors.
Since this holds for all $m$, this proves that $(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$ is itself a feasible vector, and hence $(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)}) \in \widehat{\mathcal{K}}$. \subsection{Proof of ``halting'' Proposition~\lowercase{\ref{prop:alg}}}\label{sec:proof_alg} First we introduce some notation: let $(k^{(1)}_{(s)},\dots,k^{(M)}_{(s)})$ be the vector after the $s$th pass through the algorithm. We prove that $k^{(m)}_{(s)}\geq \widehat{k}^{(m)}$ for all $m,s$, by induction. At initialization, $k^{(m)}_{(0)} = G^{(m)} \geq \widehat{k}^{(m)}$ for all $m$. Now suppose that $k^{(m)}_{(s-1)}\geq \widehat{k}^{(m)}$ for all $m$; we now show that $k^{(m)}_{(s)}\geq \widehat{k}^{(m)}$ for all $m$. To do this, consider the $m$th layer of the $s$th pass through the algorithm. Before this stage, we have values $k^{(1)}_{(s)},\dots,k^{(m-1)}_{(s)},k^{(m)}_{(s-1)},k^{(m+1)}_{(s-1)},\dots,k^{(M)}_{(s-1)}$, and we now update $k^{(m)}_{(s)}$. Applying induction also to this inner loop, and assuming that $k^{(m')}_{(s)}\geq \widehat{k}^{(m')}$ for all $m'=1,\dots,m-1$, we can now prove that $k^{(m)}_{(s)}\geq\widehat{k}^{(m)}$. By definition of the algorithm, \begin{footnotesize} \begin{align}\label{eqn:kms} k^{(m)}_{(s)} = \max_{k'^{(m)} \in [0,G^{(m)}]}\left\{ k'^{(m)} : \sum\limits_{g \in \Sh^{(m)}(k^{(1)}_{(s)},\dots,k^{(m-1)}_{(s)},k'^{(m)},k^{(m+1)}_{(s-1)},\dots,k^{(M)}_{(s-1)})} u^{(m)}_g \geq k'^{(m)} \right\}.
\end{align} \end{footnotesize} Since $k^{(m')}_{(s)}\geq \widehat{k}^{(m')}$ for all $m'=1,\dots,m-1$, and $k^{(m')}_{(s-1)}\geq \widehat{k}^{(m')}$ for all $m'=m+1,\dots,M$, we have \begin{align*} \sum \limits_{g \in \Sh^{(m)}(k^{(1)}_{(s)},\dots,k^{(m-1)}_{(s)},\widehat{k}^{(m)},k^{(m+1)}_{(s-1)},\dots,k^{(M)}_{(s-1)})} u^{(m)}_g ~\geq~ \sum\limits_{g \in \Sh^{(m)}(\widehat{k}^{(1)},\dots,\widehat{k}^{(m-1)},\widehat{k}^{(m)},\widehat{k}^{(m+1)},\dots,\widehat{k}^{(M)})} u_g^{(m)}, \end{align*} since $\Sh^{(m)}(\vec{k})$ is a nondecreasing function of $\vec{k}$ by definition. The right-hand side of this expression is in turn $\geq \widehat{k}^{(m)}$ by definition of $(\widehat{k}^{(1)},\dots,\widehat{k}^{(M)})$ being feasible. Therefore, $\widehat{k}^{(m)}$ is in the feasible set for Eq.~\eqnref{kms}, and so we must have $k^{(m)}_{(s)}\geq \widehat{k}^{(m)}$. By induction, this is then true for all $s,m$, as desired. Now suppose that the algorithm stabilizes at $(k^{(1)}_{(s)},\dots,k^{(M)}_{(s)})$, after $s$ full passes. After completing the $m$th layer of the last pass through the algorithm, we had vectors $k^{(1)}_{(s)},\dots,k^{(m)}_{(s)},k^{(m+1)}_{(s-1)},\dots,k^{(M)}_{(s-1)}$; however, since the algorithm stops after the $s$th pass, this means that $k^{(m')}_{(s-1)}=k^{(m')}_{(s)}$ for all $m'$. Using this observation in the definition of $k^{(m)}_{(s)}$, we see that \begin{align*} \sum \limits_{g \in \Sh^{(m)}(k^{(1)}_{(s)},\dots,k^{(m-1)}_{(s)},k^{(m)}_{(s)},k^{(m+1)}_{(s)},\dots,k^{(M)}_{(s)})} u^{(m)}_g \geq k^{(m)}_{(s)}. \end{align*} This means that $(k_{(s)}^{(1)},\dots,k_{(s)}^{(M)})\in\widehat{\mathcal{K}}$, and so $k_{(s)}^{(m)}\leq \widehat{k}^{(m)}$ for all $m$ by the definition of $\widehat{k}^{(1)},\ldots,\widehat{k}^{(M)}$ and \propref{max}. But by the induction above, we also know that $k_{(s)}^{(m)}\geq \widehat{k}^{(m)}$ for all $m,s$, thus completing the proof.
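In the one-layer case $M=1$, the update analyzed above reduces to the classical BH step-up rule: find the largest $k$ such that at least $k$ of the $p$-values fall at or below $\alpha k/n$. The following Python sketch (illustrative only; it is not the \texttt{p-filter} implementation, and the function name is ours) makes the monotone downward scan explicit:

```python
def step_up_k(pvals, alpha):
    """Largest k in {0, ..., n} with #{i : p_i <= alpha*k/n} >= k.

    One-layer analogue of the feasibility condition defining the set of
    feasible vectors: because the count on the left is nondecreasing in k,
    the first feasible k found while scanning downward from n is the
    maximal one -- a one-dimensional 'maximum corner'.
    """
    n = len(pvals)
    for k in range(n, 0, -1):
        if sum(1 for p in pvals if p <= alpha * k / n) >= k:
            return k
    return 0

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.300, 0.900]
k = step_up_k(pvals, alpha=0.05)   # the k smallest p-values are rejected
```

With these eight $p$-values and $\alpha = 0.05$ the scan returns $k = 2$; in the multilayer algorithm the same monotonicity is what forces each pass to stop at the maximal feasible coordinate.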
\section{Discussion and extensions} \label{sec:disc} The procedures that we have analyzed and generalized cover only part of the large literature on FDR-controlling procedures. For example, \texttt{p-filter} is a generalized multi-dimensional step-up procedure, but much work has also been done on alternative styles of procedures, such as step-down, step-up-down, and multi-step methods. For instance, \citet{benjamini1999step} propose step-down procedures that control FDR under independence. Also, procedures by \citet{benjamini1999distribution} and \citet{romano2006stepdown} provably control FDR under arbitrary dependence, with \citet{gavrilov2009adaptive} extending them to adaptive control under independence. Two-step adaptive procedures have been analyzed in \citet{benjamini2006adaptive} under independence, and by \citet{blanchard2009adaptive} under dependence. Different methods of incorporating weights into such procedures have also been studied; cf.\ the different notion of the weighted Simes $p$-value proposed by~\citet{BH97}. The super-uniformity lemmas (Lemma~\ref{lem:power} and, in the grouped setting, Lemma~\ref{lem:power3}) can be used to quickly prove FDR control under dependence for many of the above procedures, and may be a useful tool for designing new multiple testing procedures in broader settings\footnote{An analog of the super-uniformity lemma has also been discovered in the online FDR setting and has proved useful for designing new algorithms \cite{javanmard2018online,RYWJ17,ramdas2018saffron}.}. For example, it has been used to derive a decentralized procedure for FDR control on sensor networks \cite{ramdas2017qute} and a sequential algorithm for FDR control on directed acyclic graphs \cite{ramdas2018dagger}.
This lemma was also used to derive a ``post-selection BH procedure'' \cite{brzyski2017controlling}: if a set $S \subseteq [n]$ of hypotheses was selected by the user in an arbitrary monotone data-dependent manner (see footnote~1), one way to find a subset $T \subseteq S$ that controls the FDR is to run BH on $S$ at level $\widetilde \alpha:= \alpha |S| / n$. Indeed, \[ \small \textnormal{FDR} = \EE{\dotfrac{\sum_{i \in S \cap \mathcal{H}_0} \One{P_i \leq \widetilde \alpha\frac{ | T|}{| S|} }}{| T|}} \leq \sum_{i \in \mathcal{H}_0} \frac{\alpha}{n} \cdot \EE{\dotfrac{ \One{P_i \leq \alpha\frac{ | T|}{n}} }{\alpha\frac{|T|}{n}}} \leq \alpha \frac{|\mathcal{H}_0|}{n}. \] Notice that the post-selection BH procedure reduces to BH in the absence of selection, that is, when $S = [n]$. As another particularly simple but striking example, consider the following novel ``structured BH procedure''\footnote{This procedure was independently discovered recently by \citet{katsevich2018controlling}, along with several other substantial extensions.}. Suppose we wish to insist that only certain subsets of $[n]$ are allowed to be rejected; let $\mathcal{K} \subseteq 2^{[n]}$ be the set of such allowed rejection sets (these could be determined by known logical constraints or structural requirements). Then, if the $p$-values are positively dependent, we may reject the largest set $T \in \mathcal{K}$ such that all its $p$-values are less than $\alpha|T|/n$. Completely equivalently, one can define $\widehat{\fdp}(S) = \frac{n \cdot \max_{i \in S} P_i}{|S|}$ and reject the largest set $T \in \mathcal{K}$ such that $\widehat{\fdp}(T) \leq~\alpha$.
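The structured BH selection rule just described can be sketched in a few lines of Python (illustrative only; the function names are ours, and the brute-force enumeration over $\mathcal{K}$ is only sensible for small candidate families):

```python
def fdp_hat(pvals, S):
    """Estimated FDP of an index set S: n * max_{i in S} P_i / |S|."""
    n = len(pvals)
    return n * max(pvals[i] for i in S) / len(S)

def structured_bh(pvals, allowed_sets, alpha):
    """Reject the largest allowed set T with fdp_hat(T) <= alpha.

    Requiring fdp_hat(T) <= alpha is the same as requiring every
    p-value in T to be at most alpha * |T| / n.
    """
    feasible = [S for S in allowed_sets if S and fdp_hat(pvals, S) <= alpha]
    return max(feasible, key=len, default=[])

# Toy example: four hypotheses, only certain nested sets may be rejected.
pvals = [0.01, 0.02, 0.20, 0.04]
allowed = [[0], [0, 1], [0, 1, 2], [0, 1, 3]]
rejected = structured_bh(pvals, allowed, alpha=0.05)   # [0, 1]
```

Any feasible $T$, not necessarily the largest, preserves the guarantee, so a heuristic search may replace the brute-force maximization in large problems.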
This procedure controls FDR under positive dependence due to a trivial one-line proof using Lemma~\ref{lem:power}: \[ \small \textnormal{FDR} = \EE{\dotfrac{\sum_{i \in T \cap \mathcal{H}_0} \One{P_i \leq \alpha \frac{ |T|}{n} }}{|T|}} \leq \sum_{i \in \mathcal{H}_0} \frac{\alpha}{n} \cdot \EE{\dotfrac{ \One{P_i \leq \alpha \frac{|T|}{n}} }{\alpha \frac{|T|}{n}}} \leq \alpha \frac{|\mathcal{H}_0|}{n}. \] Again, notice that the structured BH procedure reduces to BH in the absence of structural constraints, that is, when $\mathcal{K}=2^{[n]}$. Of course, except for special structured settings, the \emph{largest} set $T \in \mathcal{K}$ may not be efficiently computable in general. When it is infeasible, \emph{any} set $T\in \mathcal{K}$ such that $\widehat{\fdp}(T) \leq \alpha$ may be chosen and FDR control will be maintained; heuristics can be used to find large sets. (In both the above examples, one may instead use reshaping to control for arbitrary dependence.) While there exist works that can incorporate a single layer of groups \cite{hu2010false}, these often provide guarantees only for the finest partition. Alternative error metrics have been discussed by \citet{benjamini2014selective}, who devise a way to take a single partition of groups into account and control a \emph{selective} FDR. This idea has been extended by \citet{peterson2016many} and \citet{bogomolov2017testing} to partitions that form a hierarchy (i.e., a tree). However, none of these aforementioned papers have been extended to handle arbitrary non-hierarchical partitions, leftover or overlapping groups, both sets of weights, adaptivity or reshaping. Recently, \citet{katsevich17mkf} derived a knockoff \textnormal{\texttt{p-filter}}~that extended the work of \citet{barber2016p} in two ways: it allows the group $p$-values to be formed by procedures other than Simes' (like this paper), and it can use knockoff statistics instead of $p$-values.
In both settings it provides FDR control at a constant multiple (between 1 and 2) of the target FDR. While their work can handle arbitrary non-hierarchical partitions (since it uses the same \textnormal{\texttt{p-filter}}~framework) along with knockoff statistics, it also does not handle null-proportion adaptivity, both sets of weights, reshaping, leftover or overlapping groups, and so on. We believe that many of the algorithmic ideas and proof techniques (especially the lemmas) introduced here may generalize to these related works, and could be an avenue for future work. Finally, as a last very general extension, it was recently noted by \citet{katsevich2018towards} that the $\textnormal{\texttt{p-filter}}$~framework can arbitrarily ``stack'' together different layers, where each layer uses a different type of FDR-controlling algorithm (ordered testing, knockoffs, online algorithms, interactive algorithms, and so on), and the $\textnormal{\texttt{p-filter}}$~framework can be simply used as a wrapper to ensure internal consistency. \section*{Acknowledgments} We thank Wenyu Chen for helping implement the new $\textnormal{\texttt{p-filter}}$ algorithm. We thank Eugene Katsevich, Etienne Roquain, Aditya Guntuboyina and Fanny Yang for relevant discussions. The authors are also thankful to audience members at the Statistics departments of Stanford, Wharton, UC Davis, the St. Louis Workshop on Higher Order Asymptotics and Post-Selection Inference, and the NIPS Workshop on Adaptive Data Analysis, whose questions partly shaped this work. This work was supported in part by the Office of Naval Research under grant number W911NF-16-1-0368, the Air Force Office of Scientific Research under grant number AFOSR-FA9550-14-1-0016, by NSF award DMS-1654076, and by an Alfred P. Sloan fellowship.
\section{INTRODUCTION} Dark streaks have been observed propagating downhill on high-latitude dunes on Mars during local springtime. M\"{o}hlmann and Kereszturi (2010; hereafter MK2010) have attributed these streaks to the flow of thin sheets of water (or some other liquid), and derived a relation between the measured speed of such flows and the thickness of the liquid layer. The purpose of this note is to correct two conceptual errors in MK2010 which affect the derivation of the velocity profile of such flows, and interpretations of their observed speed, depth, and mass flux. (Note also that the last phrase of Section 2 of MK2010 should state that direct measurements of local increases in the surface temperature due to dune darkening are {\it not} available yet.) \section{MODEL} My model is fundamentally the same as that in MK2010 ({\it cf.} their Fig. 2): a sheet of liquid with uniform thickness $h$, constant density $\rho$, and dynamic viscosity $\eta$, is flowing down an inclined plane at a fixed angle $\alpha$ from the local horizontal, under the influence of the vertical acceleration of gravity $g$. As in MK2010, let $x$ be the downslope coordinate, $z$ the upward coordinate perpendicular to $x$, and $y \equiv z/h$. Henceforth $x$ can be ignored, and the speed of the flow can be written simply as $v(z)$ or $v(y)$. However, MK2010 make their first conceptual error in using the Navier-Stokes equation for incompressible flow of a fluid with constant viscosity; their Eq. (2) is \begin{equation} \eta \frac{d^2}{dz^2} v(z) = -\rho g \sin\alpha. \end{equation} Most liquids may safely be treated as incompressible, but MK2010 next assume that the viscosity of the fluid depends very strongly on the vertical coordinate, due to freezing at its upper and lower surfaces; their Eq. (3) is \begin{equation} \eta(y) = \frac{\eta_0}{y[1-y]}. 
\end{equation} Under these circumstances the usual Navier-Stokes equation (1) is inadequate, and a more general version is required: \begin{equation} \frac{d}{dz} [ \eta \frac{d}{dz} v(z) ] = \frac{d}{dy} [ \eta \frac{d}{dy} v(y) ]/h^2 = -\rho g \sin\alpha. \end{equation} Integrating Eq. (3) above gives the viscous stress \begin{equation} \eta \frac{d}{dy} v(y) = C -y h^2 \rho g \sin\alpha \end{equation} as a linear function of height $y$. Then equating this to zero at the top $y=1$ gives \begin{equation} C = h^2 \rho g \sin\alpha \end{equation} by the no-stress boundary condition at the free surface. Now by using Eq. (2), Eq. (4) can be rearranged as \begin{equation} \eta_0 \frac{d}{dy} v(y) = C y[1-y]^2 = C [y -2y^2 +y^3], \end{equation} and integrated as \begin{equation} v(y) = C[y^2/2 -2y^3/3 +y^4/4]/\eta_0 +C'. \end{equation} Here the second constant of integration $C'$ must vanish by the no-slip boundary condition $v(0)=0$ at the bottom. As their second conceptual error, MK2010 integrate Eq. (1) twice, but then apply a symmetry condition $dv/dy = 0$ at the mid-plane $y=1/2$. This is equivalent to imposing a no-slip condition at the free surface, rather than the correct no-stress condition. As a result, MK2010 obtain the spurious solution \begin{equation} v(y) = C[y/12 -y^3/6 +y^4/12]/\eta_0 ; \end{equation} note also that their Eq. (5) has the opposite sign to Eq. (8) above. \begin{figure} \centerline{\psfig{figure=profile.eps,width=6.5in,height=6.5in}} \caption{ Profiles of flow velocity $v$ and shear $dv/dy$, both normalized by $\frac{C}{12 \eta_0}$, as functions of dimensionless height $y$. Dashed curves refer to solution (8) after MK2010, while solid curves refer to my corrected solution (7). The horizontal dotted line denotes the midplane $y=1/2$, while the vertical one denotes $v=0$ or $dv/dy=0$. } \end{figure} \newpage \section{RESULTS} Figure 1 compares solution (8) above (dashed curves) with my corrected solution (7) (solid curves).
Note how $v$ is symmetric with respect to the midplane $y=1/2$ for solution (8), while its corresponding shear $dv/dy$ is antisymmetric. The viscous stress $\eta \; dv/dy$ is not shown, because it is infinite at $y=0$ and $y=1$ for solution (8). Solution (8) may be rewritten in symmetric form as \begin{equation} v(\zeta) = C[5/192 -\zeta^2/8 +\zeta^4/12]/\eta_0 , \end{equation} where $\zeta \equiv y -1/2$. Eq. (9) above makes it clear that this velocity profile is not a parabolic curve, as in channel flow, but rather a quartic curve. In fact, it is a biquadratic; that is, a quadratic in $\zeta^2$. The flow speed $v$ vanishes at both the top and bottom for solution (8), and peaks at $v_{max} = \frac{5 C}{192 \eta_0}$ at the midplane $y=1/2$ ($\zeta=0$); while the corresponding shear $dv/dy$ ranges from $\frac{C}{12 \eta_0}$ at the bottom, through zero at the midplane, to $\frac{-C}{12 \eta_0}$ at the top. In contrast, neither $v$ nor $dv/dy$ possesses any symmetry for my corrected solution (7). Note how $v$ vanishes at the bottom $y=0$, and peaks at $v_{max} = \frac{C}{12 \eta_0}$ at the top $y=1$; while the shear $dv/dy$ vanishes at both the top and bottom, and peaks at $\frac{4C}{27 \eta_0}$ at $y=1/3$. Note that the peak speed $v_{max}$ is 16/5 = 3.2 times greater for my solution (7) than for solution (8). MK2010 also assumed that $v_{max}$ is the observed propagation speed of the dark dune streaks, and used it to find their Eq. (7) for the thickness $h$ of the flow: \begin{equation} h \approx \sqrt{ \frac{192 \eta_0 v_{max}}{5 \rho g \sin\alpha} }. \end{equation} Using my solution (7) instead to estimate $h$ gives \begin{equation} h \approx \sqrt{ \frac{12 \eta_0 v_{max}}{\rho g \sin\alpha} }. \end{equation} Note that $h$ from formula (11) above is only $\sqrt{5/16} \approx$ 0.559 times as deep as from formula (10); for the example given by MK2010, Eq. (11) gives a layer of brine only 1.2 mm thick, rather than 2.2 mm from Eq. (10). 
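Since both profiles are polynomials, the quoted extrema and ratios can be verified with exact rational arithmetic. The following Python sketch (an independent check, not part of MK2010; function names are ours) works in units of $C/\eta_0$:

```python
from fractions import Fraction as F
import math

def v_corrected(y):
    # Eq. (7): v = C [y^2/2 - 2 y^3/3 + y^4/4] / eta_0
    return y**2 / 2 - 2 * y**3 / 3 + y**4 / 4

def v_mk2010(y):
    # Eq. (8): v = C [y/12 - y^3/6 + y^4/12] / eta_0
    return y / 12 - y**3 / 6 + y**4 / 12

def shear_corrected(y):
    # dv/dy for Eq. (7), from Eq. (6): y (1 - y)^2
    return y * (1 - y)**2

vmax_corrected = v_corrected(F(1))         # C/(12 eta_0), at the free surface y = 1
vmax_mk2010 = v_mk2010(F(1, 2))            # 5C/(192 eta_0), at the midplane y = 1/2
speed_ratio = vmax_corrected / vmax_mk2010 # 16/5 = 3.2
peak_shear = shear_corrected(F(1, 3))      # 4/27, at y = 1/3
depth_factor = math.sqrt(5 / 16)           # ~0.559, ratio of Eq. (11) to Eq. (10)
```

Evaluating with `Fraction` keeps every coefficient exact, so the 16/5 speed ratio and the $\sqrt{5/16}\approx 0.559$ depth factor come out with no rounding ambiguity.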
Furthermore, the mean speed of the flow may be defined as \begin{equation} \bar{v} \equiv \int_0^1 v(y) dy . \end{equation} Then the mean speed for my Eq. (7) is $\frac{C}{20 \eta_0}$, or 0.60 $v_{max}$. For comparison, the mean speed for Eq. (8) is only $\frac{C}{60 \eta_0}$, or 0.64 $v_{max}$. In either case, note that the flux (mass per unit length per unit time) is just $\rho h \bar{v}$. Then for a given $v_{max}$, the flux from my Eq. (7) is only 0.524 times as great as from Eq. (8). \newpage \section{DISCUSSION} MK2010 have attributed dark streaks observed propagating downhill on high-latitude dunes on Mars to the flow of thin sheets of water (or some other liquid), and derived relation (10) between the measured speed of such flows and the thickness of the liquid layer. However, MK2010 made two important conceptual errors in their derivation: First, they used the Navier-Stokes equation for flow of an incompressible fluid of constant viscosity; most liquids are nearly incompressible, but MK2010 also assumed a very non-uniform viscosity, rendering the constant-viscosity form of the Navier-Stokes equation inapplicable. Second, they assumed that the velocity profile of the flow is symmetric about its midplane; this is equivalent to imposing a no-slip boundary condition at the top of the liquid layer. A no-slip condition is appropriate at the bottom of the flow, but there is no justification for it at the top; rather, a no-stress condition is required there. Correcting both errors results in the revised relation (11) between the speed and thickness of the flow. Comparing Eqs. (10) and (11) then reveals that Eq. (10) over-estimates the thickness of the liquid layer and the corresponding mass flux by factors of almost two. However, the viscosity of some concentrated brines on the surface of Mars may vary with composition by several orders of magnitude ({\it cf.} Fig. 5 of MK2010).
Because the derived depth of the liquid layer is inversely proportional to the square root of the viscosity in both Eqs. (10) and (11), the resulting uncertainty in thickness may be dominated by uncertainty in composition, rather than by errors in the formula. \setlength{\parindent}{-0.2in} \begin{center} REFERENCE \end{center} M\"{o}hlmann, D., and A. Kereszturi (2010). Viscous liquid film flow on dune slopes of Mars. \\ {\it Icarus} {\bf 207}, 654--658. \end{document}
\section{Introduction} In this work, we provide some mathematical---specifically, operator theoretic---foundational underpinnings to positive operator-valued measures and functions that are integrable with respect to these objects. It is our desire to make connections between quantum information theory and operator theory and so we first introduce the quantum context and then the pure mathematical context below. A physical system in quantum mechanics is described mathematically by a complex separable Hilbert space $\mathcal{H}$. The observable properties of the physical system are represented by a positive, operator-valued measure (or the normalized version of such an object, called a quantum probability measure), which arises from the measurement theoretical analysis of quantum mechanics: If a quantum system undergoes a series of preparation procedures so that it is in state $\rho$, measurements are made, giving rise to a set $X$ of outcomes, with associated $\sigma$-algebra $\mathcal{O}(X)$ of Borel sets of $X$. One then considers a map $\nu$ acting on $\mathcal{O}(X)$ and taking values in the positive cone of bounded operators acting on $\mathcal H$, with the property that it is ultraweakly countably additive. The measurement outcome statistics associated to $\nu$ are given by the induced complex measure $\nu_\rho$, defined by $\nu_\rho(E) = \tr(\rho \nu(E))$ for all $E\in \mathcal O(X)$. See \cite{BuschOp} for a systematic presentation of the probabilistic structure of quantum mechanics. In more mathematical terms, we are concerned with positive operator-valued measures from the Borel sets of a locally compact Hausdorff space $X$ into $\mathcal B(\mathcal H)$ for a finite or separable Hilbert space $\mathcal H$. One considers the so-called quantum random variables with respect to $\nu$, that is, measurable functions $\psi: X\rightarrow \mathcal B(\mathcal H)$ and their integrals against $\nu$. 
The motivation behind this is stated in \cite{FK} as the desire for a notion of an operator-valued averaging, i.e., the quantum expected value of $\psi$. This theory has been developed in \cite{MPR} and \cite{PRLyapunov}. Other variants of this setup in the literature include \cite{pol} and \cite{chiri2007, chiri2010}, the latter pointing out the connection between regular operator-valued expectations and quantization maps in geometric quantization. One of the main goals of this paper is to bring a theorem of Komiya's \cite{Komiya} into this context. Komiya proves, for matrix majorization, that for $X, Y\in M_{m,n}(\mathbb C)$, $X \prec Y$ if and only if $\psi(X) \leq \psi(Y)$ for every real-valued, permutation-invariant, convex function $\psi$ on $M_{m,n}(\mathbb C)$. This is shown by using the fact that the bistochastic matrices are the convex hull of the permutation matrices. Our main theorem then is that one quantum random variable is majorized by another if and only if the evaluation of the first is always smaller than or equal to the second under every real-valued, permutation-invariant, convex function. This goal is the driving force of the paper, since to define majorization one needs bistochastic operators, which in turn need a robust $L^1$ function theory. However, while this $L^1$ theory can be established in general, we are only able to introduce bistochastic operators in a much more limited way, specifically for classical measures, $\mu I_\mathcal H$, where the bistochastic operators are inherited from the classical $L^1(X,\mu)$. Even so, this variation of Komiya's theorem takes some work to develop. The more general question of defining and characterizing majorization in arbitrary $L^1$ spaces of quantum random variables is not discussed. There is a rich area of study in non-commutative majorization. Besides Komiya's discussions of matrix majorization \cite{Komiya} there are many directions of majorization of operators discussed by Hiai \cite{Hiai}.
Of particular interest is the log-majorization (Araki-Lieb-Thirring inequality) of Kosaki \cite{Kosaki} on noncommutative $L^p$-spaces over arbitrary von Neumann algebras, developed by Haagerup \cite{Haagerup}. As far as the authors are aware, the $L^1$ space defined in this paper is unrelated to that of Haagerup. The paper is organized as follows: In Section \ref{sec:prelim} we review operator-valued measures and majorization. In Section \ref{sec:L1Norm}, we consider the span of POVM-integrable quantum random variables, showing that certain natural candidates for a norm are unsuitable before defining a seminorm that leads to a good choice of $L^1$ space of functions into $\mathcal B(\mathcal H)$, which we show has desirable properties. In Section \ref{sec:boundedmult}, we define a $\mathcal B(\mathcal H)$-valued bracket (a generalized inner product) between elements of our constructed $L^1$ space and the von Neumann algebra of essentially bounded quantum random variables, focusing on the issue of when multiplication of elements of these two sets yields a bounded operator. This leads to a weak topology. In Section \ref{sec:Bistoch}, prompted by classical majorization on vectors in $\mathbb{R}^n$ as well as majorization in the $L^1([0,1])$ setting, where majorization is equivalent to the existence of a stochastic or doubly stochastic matrix (depending on the context), we consider bistochastic operators on the aforementioned Banach space. In Section \ref{sec:MD}, we define majorization in this new context and prove the analogue of Komiya's theorem that was discussed above. \section{Preliminaries}\label{sec:prelim} \subsection{Positive operator-valued measures} Let $\mathcal B(\mathcal H)$ be the algebra of all bounded operators on $\mathcal H$ for a finite-dimensional or separable Hilbert space $\mathcal H$.
Define $\mathcal{T}(\mathcal{H})$ as the Banach space of all trace-class operators: all operators in $\mathcal B(\mathcal H)$ which have a finite trace under any orthonormal basis; the norm of this space is the trace norm $\|T\|_1=\tr\sqrt{T^*T}$. Note that the dual of $\mathcal T(\mathcal H)$ is $\mathcal B(\mathcal H)$: $\mathcal T(\mathcal H)^* = \mathcal{B}(\mathcal H)$, with the duality given by $\braket{T}{A}=\tr(TA)$ for all $T\in \mathcal T(\mathcal H)$ and all $A\in \mathcal B(\mathcal H)$. The set $\mathcal S(\mathcal H)$ of all positive, trace-one trace-class operators $\rho$ (called \emph{states} or density operators) is a convex subset of $\mathcal T(\mathcal H)$. It should be emphasized here to avoid confusion that in infinite dimensions there are many states (unital, positive, linear functionals) that do not arise in this way. However, it should be noted that $\mathcal S(\mathcal H)$ is still separating for $\mathcal B(\mathcal H)$. See \cite[Chapter 1]{Holevo} for further details of these fundamental mathematical underpinnings of quantum theory. Throughout, $X$ is a locally compact Hausdorff space and $\mathcal{O}(X)$ is the $\sigma$-algebra of Borel sets of $X$. \begin{definition}\cite{Larson, Paulsen, MPR} A map $\nu : \mathcal{O}(X) \to \mathcal{B}(\mathcal{H})$ is an \emph{operator-valued measure (OVM)} if it is ultraweakly countably additive: for every countable collection $\{E_k\}_{k \in \mathbb N} \subseteq \mathcal{O}(X)$ with $E_i \cap E_j = \emptyset$ for $i \neq j$ we have \[ \nu\left(\bigcup_{k\in \mathbb N} E_k \right) = \sum_{k \in \mathbb N} \nu(E_k)\,, \] where the convergence on the right side of the equation above is with respect to the ultraweak topology of $\mathcal{B}(\mathcal{H})$, that is, \[ \tr\left(s \sum_{k=1}^n \nu(E_k)\right) \rightarrow \tr\left(s \sum_{k=1}^\infty \nu(E_k)\right), \quad \forall s\in \mathcal S(\mathcal H). 
\] An OVM $\nu$ is \begin{enumerate}[(i)] \item \emph{bounded} if $\sup\{\|\nu(E)\| : E\in \mathcal O(X)\} < \infty$, \item \emph{positive} if $\nu(E) \in \mathcal{B}(\mathcal H)_+$, for all $E\in \mathcal O(X)$; such an OVM is called a \emph{positive operator-valued measure} (POVM), \item \emph{regular} if the induced complex measure $\tr(\rho\nu(\cdot))$ is regular for every $\rho\in \mathcal T(\mathcal H)$. \item a \emph{positive operator-valued probability measure} or \emph{quantum probability measure} if it is positive and $\nu(X) = I_{\mathcal{H}}$. \end{enumerate} \end{definition} Note: A POVM is necessarily bounded. Often the word \emph{observable} is used interchangeably with POVM \cite{BuschOp}, although occasionally it is used to refer to normalized POVMs \cite[Chapter 3]{Davies}. We also note that some authors consider normalization as part of the definition of a POVM \cite{BuschQT}. To avoid confusion, we use the terminology \emph{quantum probability measure} to refer to a normalized POVM, which is consistent with e.g.\ \cite{clean4, FK, FKP, MPR}. We adopt the notation $\povm{\mathcal H}{X}$ to refer to the set of all POVMs $\nu : \mathcal{O}(X) \to \mathcal{B}(\mathcal{H})_+$ and $\qpm{\mathcal H}{X}$ to refer to the set of all quantum probability measures $\nu : \mathcal{O}(X) \to \mathcal{B}(\mathcal{H})_+$. A (classical or operator-valued) measure $\omega_1$ is \emph{absolutely continuous} with respect to either a classical or operator-valued measure $\omega_2$, denoted $\omega_1\ll_{\rm ac}\omega_2$, if $\omega_1(E)=0$ whenever $\omega_2(E)=0$, where $E\in\mathcal{O}(X)$ (for classical measures, $\mathcal{O}(X)$ is typically denoted by $\Sigma$ in the literature) and 0 is interpreted as either the scalar zero or the zero operator, as applicable. Let $\nu\in \povm{\mathcal H}{X}$. 
For a fixed state $\rho\in \mathcal S(\mathcal H)$, the induced complex measure $\nu_\rho$ on $X$ is defined by $\nu_\rho(E) = \tr(\rho \nu(E))$ for all $ E\in \mathcal O(X)$. As discussed in \cite{MPR}, $\nu$ and $\nu_\rho$ are mutually absolutely continuous for any full-rank $\rho\in \mathcal S(\mathcal H)$. Let $\nu_{i,j}$ be the complex measure defined by $\nu_{i,j}(E) = \langle \nu(E)e_j,e_i\rangle, E\in \mathcal O(X)$, where $\{e_k\}$ form an orthonormal basis for $\mathcal H$. Let $\rho\in\mathcal S(\mathcal H)$ be full-rank, that is, injective. Then $\nu_{i,j} \ll_{\rm ac} \nu_\rho$ and so, by the classical Radon-Nikod\'ym theorem, there is a unique $\frac{d\nu_{i,j}}{d\nu_\rho} \in L_1(X, \nu_\rho)$ such that \[ \nu_{i,j}(E) = \int_E \frac{d\nu_{i,j}}{d\nu_\rho} d\nu_\rho, \ E\in \mathcal O(X). \] One can then define the {\em Radon-Nikod\'ym derivative} of $\nu$ with respect to $\nu_\rho$ to be \cite{MPR, FPS} \[ \frac{d\nu}{d\nu_\rho} = \sum_{i,j\geq 1} \frac{d\nu_{i,j}}{d\nu_\rho} \otimes e_{i,j}. \] An operator-valued function $f: X \rightarrow \mathcal{B}(\mathcal H)$ that is Borel measurable (that is, the associated complex-valued functions $x\to \tr(s f(x))$ are Borel measurable functions for every state $s \in \mathcal S(\mathcal H)$) is known as a {\em quantum random variable}. The Radon-Nikod\'ym derivative $\frac{d\nu}{d\nu_\rho}$ is said to exist if it is a quantum random variable; i.e.\ it takes every $x$ to a bounded operator. By \cite[Corollary 2.13]{MPR}, if $\frac{d\nu}{d\nu_{\rho_0}}$ exists for some full-rank $\rho_0\in \mathcal S(\mathcal H)$, then $\frac{d\nu}{d\nu_{\rho}}$ exists for all full-rank $\rho\in \mathcal S(\mathcal H)$, so there is no need to specify a particular full-rank $\rho_0$. Integrability of a quantum random variable with respect to a positive operator-valued measure is defined as follows. 
\begin{definition}\cite{FPS, MPR} Let $\nu : \mathcal O(X) \rightarrow \mathcal B(\mathcal H)$ be a POVM such that $\frac{d\nu}{d\nu_\rho}$ exists, for a full-rank $\rho \in \mathcal S(\mathcal H)$. A positive quantum random variable $f: X \rightarrow \mathcal{B}(\mathcal H)$ is {\em $\nu$-integrable} if the function \[ f_s(x) = \tr\left( s\left(\frac{d\nu}{d\nu_\rho}(x) \right)^{1/2} f(x) \left(\frac{d\nu}{d\nu_\rho}(x) \right)^{1/2} \right) \] is $\nu_\rho$-integrable for every state $s\in \mathcal S(\mathcal H)$. If $f$ is $\nu$-integrable then the integral of $f$ with respect to $\nu$, denoted $\int_X f d\nu$, is implicitly defined by the formula \[ \tr\left( s\int_X f d\nu\right) = \int_X f_s d\nu_\rho. \] \end{definition} If $\nu = \mu I_\mathcal H$ for a positive complex measure $\mu$ then we know that $\frac{d\nu}{d\nu_\rho} = I_\mathcal H$ and if $f = [f_{i,j}]$ is taken with respect to an orthonormal basis in $\mathcal H$ then integration is defined entrywise: \[ \int_X f d\nu = \left[ \int_X f_{i,j} d\mu \right]. \] The properties of the integral are explored in depth in \cite{MPR, PRLyapunov}. Note that any quantum random variable $f: X \rightarrow \mathcal{B}(\mathcal H)$ can be decomposed as a linear combination of four positive quantum random variables, for instance $({\rm Re} f)_+$, $({\rm Re} f)_-$, $({\rm Im} f)_+$, and $({\rm Im} f)_-$, though other choices are possible; the definition of $\nu$-integrable can thus be extended to arbitrary quantum random variables provided all four positive functions are $\nu$-integrable. \subsection{Majorization} Majorization is a preorder first defined on vectors in $\mathbb R^n$. Let $x, y\in \mathbb{R}^n$.
Then $x$ is \emph{majorized} by $y$, denoted $x\prec y$, if \begin{eqnarray*} \sum_{j=1}^{k}x^{\downarrow}_{j}\leq \sum_{j=1}^{k}y^{\downarrow}_{j}\quad \forall k\in \{1,\dots,n-1\} \end{eqnarray*} with equality when $k=n$, where $x$ has been reordered so that $x^{\downarrow}_{1}\geq x^{\downarrow}_{2}\geq \cdots \geq x^{\downarrow}_{n}$ (and similarly for $y$). Alternatively, $x\prec y$ if and only if there exists a doubly stochastic matrix $S$ such that $x=Sy$ (this is a well-known result of Hardy-Littlewood-P\'{o}lya \cite[Theorem 8]{HLP}). One can define continuous majorization in the context of functions in $L^1$: \begin{definition}\label{def:classicdecarrange} Let $(X, \mathcal O(X), \mu)$ be a finite positive measure space and $f\in L^1(X, \mu)$. The \emph{distribution function} of $f$ is $d_f : \mathbb R \rightarrow [0,\mu(X)]$ defined by \[ d_f(s) = \mu(\{x : f(x) > s\}) \] and the \emph{decreasing rearrangement} of $f$ is $f^\downarrow : [0,\mu(X)] \rightarrow \mathbb R$ defined by \[ f^\downarrow(t) = \sup\{ s : d_f(s) \geq t\}. \] \end{definition} \begin{definition}\label{def:cont} Let $(X_i, \mathcal{O}(X_i), \mu_i)$, $i=1,2$, be finite measure spaces for which $a=\mu_1(X_1)=\mu_2(X_2)$. Then $f\in L^1(X_1,\mu_1)$ is \emph{majorized} by $g\in L^1(X_2, \mu_2)$, denoted $f\prec g$, if \begin{eqnarray*} \int_0^t f^\downarrow d x&\leq & \int_0^t g^\downarrow d x\quad \forall\, 0\leq t\leq a\\ \textnormal{and }\int_0^a g^\downarrow d x&=& \int_0^a f^\downarrow d x, \end{eqnarray*} where integration is against Lebesgue measure. \end{definition} This is also called the \emph{strong spectral order} \cite{Chong}. As in the vector case, majorization is related to a certain class of operators. 
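The vector definition can be made concrete with a short Python sketch (illustrative only; the function names are ours), which also exhibits the Hardy-Littlewood-P\'{o}lya connection: applying a doubly stochastic matrix to $y$ produces a vector majorized by $y$.

```python
def majorized_by(x, y, tol=1e-9):
    """True if x is majorized by y: the partial sums of the decreasing
    rearrangements satisfy sum_{j<=k} x_j <= sum_{j<=k} y_j for all k,
    with equality of the total sums at k = n."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    px = py = 0.0
    for a, b in zip(xs, ys):
        px += a
        py += b
        if px > py + tol:
            return False
    return abs(px - py) <= tol   # total sums must agree

def apply_matrix(S, y):
    return [sum(S[i][j] * y[j] for j in range(len(y))) for i in range(len(S))]

y = [3.0, 1.0, 0.0]
S = [[1 / 3] * 3 for _ in range(3)]   # doubly stochastic: uniform averaging
x = apply_matrix(S, y)                # [4/3, 4/3, 4/3]
```

Here `majorized_by(x, y)` holds while `majorized_by(y, x)` fails; in the continuous setting recalled below, the doubly stochastic matrix is replaced by a bistochastic operator.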
In particular, an operator $B: L^1(X_1,\mu_1) \rightarrow L^1(X_2,\mu_2)$ between finite measure spaces where $\mu_1(X_1) = \mu_2(X_2)$ is called \textit{bistochastic, doubly stochastic}, or \textit{Markov}, if \begin{enumerate} \item $B$ is positive \item $\displaystyle{\int_{X_2} Bf d\mu_2 = \int_{X_1} f d\mu_1}$, \ and \item $B1 = 1$ \end{enumerate} where $1$ here refers to the constant function 1 in each of the spaces $L^1(X_i, \mu_i), i=1,2$. The following is a combination of the well-known result of Hardy-Littlewood-P\'{o}lya \cite[Theorem 10]{HLP} extended by Chong \cite[Theorem 2.5]{Chong} and that of Ryff \cite{Ryff1965} and Day \cite{Day}. \begin{theorem}\label{thm:continuousmajorization} Let $(X_i, \mathcal{O}(X_i), \mu_i)$, $i=1,2$, be finite measure spaces for which $\mu_1(X_1)=\mu_2(X_2)$. If $f\in L^1(X_1, \mu_1)$ and $g\in L^1(X_2, \mu_2)$ then the following are equivalent: \begin{itemize} \item $f \prec g$ \item $\displaystyle{\int_{X_1} \psi(f(x))d\mu_1 \leq \int_{X_2} \psi(g(x))d\mu_2}$ for all convex functions $\psi : \mathbb R\rightarrow \mathbb R$ \item There is a bistochastic operator $B$ such that $Bg = f$. \end{itemize} \end{theorem} \section{The L$^1$-norm}\label{sec:L1Norm} We wish to find a generalization of the L$^1$-norm in the POVM context. Recall that $X$ is a locally compact Hausdorff space and $\mathcal H$ is finite-dimensional or separable. First, we consider the following inequalities. \begin{lemma} Suppose $\nu \in \povm{\mathcal H}{X}$ such that $\frac{d\nu}{d\nu_\rho}$ exists and $f : X \rightarrow \mathcal B(\mathcal H)$ is $\nu$-integrable. Then \[ \left\| \int_X f(x) d\nu(x)\right\| \leq \int_X \|f(x)\|\left\|\frac{d\nu}{d\nu_\rho}(x)\right\| d\nu_\rho(x). \] Furthermore, if $\nu = \mu I$ where $\mu$ is a positive classical measure on $X$ then \[ \left\| \int_X f(x) d\nu(x)\right\| \leq \int_X \|f(x)\| d\mu(x).
\] \end{lemma} \begin{proof} Recall that the dual norm on $\mathcal B(\mathcal H)$ induced by the predual is the operator norm. One then calculates that \begin{align*} \left\| \int_X f(x) d\nu(x)\right\| & \ = \sup_{s\in \mathcal S(\mathcal H)} \left|\tr\left(s\int_X f(x) d\nu(x)\right)\right| \\ & \ = \sup_{s\in \mathcal S(\mathcal H)} \left|\int_X \tr\left( s \left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}f(x)\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2} \right)d\nu_\rho(x)\right| \\ & \ \leq \sup_{s\in\mathcal S(\mathcal H)} \int_X \left|\tr\left( s \left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}f(x)\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2} \right)\right|d\nu_\rho(x) \\ & \ \leq \int_X \left\| \left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}f(x)\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}\right\|d\nu_\rho(x) \\ & \ \leq \int_X \|f(x)\|\left\|\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}\right\|^2d\nu_\rho(x) \end{align*} which establishes the desired inequality. The second inequality in the statement of the lemma is immediate after observing that for any full-rank $\rho$ one has that $\nu = \mu I$ implies that $\frac{d\nu}{d\nu_\rho} = I$. \end{proof} In the case of self-adjoint quantum random variables we can say slightly more, but the following lemma is unlikely to be true in general. \begin{lemma} Suppose $\nu \in \povm{\mathcal H}{X}$ such that $\frac{d\nu}{d\nu_\rho}$ exists and $f : X \rightarrow \mathcal B(\mathcal H)$ is $\nu$-integrable and self-adjoint. Then \[ \left\| \int_X f(x) d\nu(x)\right\| \leq \left\| \int_X \|f(x)\| I_\mathcal H d\nu(x)\right\|. \] \end{lemma} \begin{proof} For all $x\in X$ we have that \[ -\|f(x)\|I_\mathcal H \leq f(x) \leq \|f(x)\|I_\mathcal H. \] Therefore, by the comparison theorem we have that \[ -\int_X \|f(x)\|I_\mathcal H d\nu \leq \int_X f(x)d\nu \leq \int_X\|f(x)\|I_\mathcal H d\nu \] and the conclusion follows. 
\end{proof} One may believe that $\left\|\int_X \|f(x)\|I_\mathcal H d\nu(x)\right\|$ would be a good candidate for an L$^1$-norm. Indeed, it is reminiscent of the Lebesgue-Bochner norm $\int_X \|f(x)\| d\mu$ on $L^1(X,\mu)\hat\otimes_\pi \mathcal B(\mathcal H)$, where $\hat\otimes_\pi$ is the projective tensor product. Moreover, in finite dimensions we can say more. \begin{lemma}\label{lemma:norm-estimate} Suppose $\nu \in \povm{{\mathbb C}^{n}}{X}$ and $f: X \rightarrow M_n$ is $\nu$-integrable. Then \[ \left\|\int_X \|f(x)\|I_n d\nu(x)\right\| \leq \left\|\int_X \sum_{1\leq i,j\leq n} |f_{i,j}(x)|I_n d\nu(x)\right\| \leq n^2\left\|\int_X \|f(x)\|I_n d\nu(x)\right\|. \] \end{lemma} \begin{proof} It is immediate after recalling that in $M_n$ we have that for all $x\in X$ \[ \|f(x)\| \leq \sum_{1\leq i,j\leq n} |f_{i,j}(x)| \leq n^2\|f(x)\|. \] \end{proof} This shows that in finite dimensions, $\nu$-integrability is equivalent to this proposed norm being finite. However, in general this quantity is too great an overestimate, and many well-behaved functions fail to have a finite value under it. \begin{example}\label{example:Linftynotnormdense} Let $X = [0,1]$, $\mathcal H$ be countably infinite dimensional, and $\nu = \mu I_\mathcal H$ where $\mu$ is Lebesgue measure. Consider $f(x) = \sum_{n\geq1} 2^{n}\chi_{(\frac{1}{2^{n}}, \frac{1}{2^{n-1}})}(x)e_{n,n}$. This results in $\int_X f(x) d\nu = I_{\mathcal H}$ but $\left\|\int_X \|f(x)\|I_\mathcal H d\nu\right\| = \infty$. \end{example} A second possibility for an L$^1$-norm by analogy seems to be $\left\| \int_X |f(x)| d\nu(x) \right\|$. However, this cannot be a norm as it does not satisfy the triangle inequality (cf.~\cite{BK} for many more oddities about the operator absolute value): \begin{example} Let $A=\left[\begin{matrix}1 & 0 \\ 0 &0\end{matrix}\right]$ and $B=\left[\begin{matrix}0 & 1 \\ 0 &0\end{matrix}\right]$.
Then $\|A+B\| = \sqrt 2$ but $|A| + |B| = I$, thus \[ |A+B| \nleq |A| + |B| \quad \textrm{and} \quad \|A+B\| \nleq \||A|+|B|\|. \] We can turn this into a counterexample to the above proposed norm by letting $X=\{0,1\}$, $\nu(0) = \nu(1) = I_2$, $f(0) = g(1) = A$ and $g(0) = f(1) = B$. Hence, \begin{align*} \||f(0) + g(0)| + |f(1) + g(1)|\| & = 2\|A+B\| \\ &\nleq 2\||A|+|B|\| \\ &= \||f(0)| + |f(1)|\| + \||g(0)| + |g(1)|\|. \end{align*} Therefore, $\left\| \int_X |f(x)| d\nu(x)\right\|$ does not satisfy the triangle inequality. \end{example} We now develop an L$^1$-norm that is better adapted to POVM-integrable quantum random variables. \begin{definition} Let $\nu \in \povm{\mathcal H}{X}$ and define $$\mathcal L^1_\mathcal H(X,\nu)= {\rm span}\{f:X\rightarrow \mathcal B(\mathcal H) : \nu\textrm{-integrable, positive quantum random variable}\}.$$ For every $f\in \mathcal L^1_\mathcal H(X,\nu)$ define \[ \|f\|_1 = \inf\left\{ \left\| \int_X \sum_{k=1}^4 f_k \ d\nu \right\| : f=f_1 - f_2 + i(f_3 - f_4), \ f_k\in \mathcal L^1_\mathcal H(X,\nu), \ f_k\geq 0, \ k=1,\dots, 4 \right\}. \] \end{definition} We may write $\|f\|_{1, \nu}$ to emphasize the POVM $\nu$ that $f$ is being integrated against. Notice that this is a similar idea to the previous non-norm $\left\| \int_X |f(x)| d\nu(x)\right\|$ but, as we will see, with the added benefit that it actually leads to a norm. \begin{proposition} Let $\nu \in \povm{\mathcal H}{X}$ such that $\frac{d\nu}{d\nu_\rho}$ exists. Then $\|\cdot\|_1$ is a semi-norm on $\mathcal L^1_\mathcal H(X,\nu)$ such that $\|f^*\|_1 = \|f\|_1$. \end{proposition} \begin{proof} Suppose $f,g\in \mathcal L^1_\mathcal H(X,\nu)$. For every $f_k, g_k\in \mathcal L^1_\mathcal H(X,\nu)$, $1\leq k\leq 4$, such that $f_k,g_k\geq 0$, $f = f_1 - f_2 + i(f_3 - f_4)$ and $g = g_1 - g_2 + i(g_3 - g_4)$ we have that \begin{align*} \|f+g\|_1 & \leq \left\| \int_X \sum_{k=1}^4 f_k+g_k \ d\nu\right\| \\ & \leq \left\| \int_X \sum_{k=1}^4 f_k \ d\nu\right\| + \left\| \int_X \sum_{k=1}^4 g_k \ d\nu\right\|.
\end{align*} Therefore, by taking infimums on the right, we obtain $\|f+g\|_1 \leq \|f\|_1 + \|g\|_1$. Lastly, we have \begin{align*} \|f\|_1 & = \inf\left\{ \left\| \int_X \sum_{k=1}^4 f_k \ d\nu \right\| : f=f_1 - f_2 + i(f_3 - f_4), \ f_k\in \mathcal L^1_\mathcal H(X,\nu), \ f_k\geq 0, \ 1\leq k \leq 4 \right\} \\ & = \inf\left\{ \left\| \int_X \sum_{k=1}^4 f_k \ d\nu \right\| : f^*=f_1 - f_2 + i(f_4 - f_3), \ f_k\in \mathcal L^1_\mathcal H(X,\nu), \ f_k\geq 0, \ 1\leq k\leq 4 \right\} \\ & = \|f^*\|_1. \end{align*} \end{proof} \begin{lemma}\label{lemma:equivalencetoclassical} Let $\nu \in \povm{\mathcal H}{X}$ such that $\frac{d\nu}{d\nu_\rho}$ exists. For all $\nu$-integrable quantum random variables $f:X\rightarrow \mathcal B(\mathcal H)$ \[ \int_X f d\nu = \int_X\left( \frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2}\right) d\nu_\rho I_\mathcal H \] and so \[ \|f\|_{1,\nu} \geq \left\| \frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2}\right\|_{1,\nu_\rho I_\mathcal H}. \] Furthermore, this is an equality if $\frac{d\nu}{d\nu_\rho}(x) \in \mathcal B(\mathcal H)^{-1}$ for all $x\in X$. \end{lemma} \begin{proof} For all $s\in \mathcal S(\mathcal H)$ one has that \begin{align*} \tr\left( s\int_X f d\nu \right) & = \int_X \tr\left(s\frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2} \right)d\nu_\rho \\ & = \tr\left(s\int_X \frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2} d\nu_\rho I_\mathcal H\right). \end{align*} Since the states are separating (that is, if $f\in \mathcal L^1_\mathcal H(X,\nu)$ is such that $\|f\|_1 \neq 0$ then there exists a state $s\in \mathcal S(\mathcal H)$ with $f_s \neq 0$ in $L^1(X, \nu_\rho)$), the two integrals are equal.
As for the norm inequality, if $f_k\geq 0 \in \mathcal L^1_\mathcal H(X,\nu), 1\leq k\leq 4$, such that $f = f_1 - f_2 + i(f_3 - f_4)$ then by the equality above \begin{align*} \left\|\int_X \sum_{k=1}^4 f_k d\nu \right\| & = \left\| \int_X \frac{d\nu}{d\nu_\rho}^{1/2}\left(\sum_{k=1}^4 f_k\right)\frac{d\nu}{d\nu_\rho}^{1/2} d\nu_\rho I_\mathcal H \right\| \\ & = \left\| \int_X \sum_{k=1}^4 \frac{d\nu}{d\nu_\rho}^{1/2} f_k\frac{d\nu}{d\nu_\rho}^{1/2} d\nu_\rho I_\mathcal H \right\| \\ & \geq \left\| \frac{d\nu}{d\nu_\rho}^{1/2}f_1\frac{d\nu}{d\nu_\rho}^{1/2} - \frac{d\nu}{d\nu_\rho}^{1/2}f_2\frac{d\nu}{d\nu_\rho}^{1/2} \right. \\ & \left.\quad \quad \quad \quad +\ i\left(\frac{d\nu}{d\nu_\rho}^{1/2}f_3\frac{d\nu}{d\nu_\rho}^{1/2} - \frac{d\nu}{d\nu_\rho}^{1/2}f_4\frac{d\nu}{d\nu_\rho}^{1/2}\right) \right\|_{1,\nu_\rho I_\mathcal H} \\ & = \left\| \frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2}\right\|_{1,\nu_\rho I_\mathcal H}. \end{align*} Taking the infimum over all possible $f_k$ we get that \[ \|f\|_{1,\nu} \geq \left\| \frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2}\right\|_{1,\nu_\rho I_\mathcal H}. \] Now suppose that $g_k\in \mathcal L^1_\mathcal H(X,\nu_\rho I_\mathcal H), 1\leq k\leq 4$, such that $\frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2} = g_1 - g_2 + i(g_3 - g_4)$. If $\frac{d\nu}{d\nu_\rho}(x) \in \mathcal B(\mathcal H)^{-1}$ for all $x\in X$ then define $f_k = \frac{d\nu}{d\nu_\rho}^{-1/2}g_k\frac{d\nu}{d\nu_\rho}^{-1/2} \geq 0$ for $1\leq k\leq 4$. These are in $\mathcal L^1_\mathcal H(X,\nu)$ since \begin{align*} \|f_k\|_{1,\nu} & = \left\| \int_X \frac{d\nu}{d\nu_\rho}^{-1/2}g_k\frac{d\nu}{d\nu_\rho}^{-1/2} d\nu \right\| \\ & = \left\| \int_X \frac{d\nu}{d\nu_\rho}^{1/2}\left(\frac{d\nu}{d\nu_\rho}^{-1/2}g_k\frac{d\nu}{d\nu_\rho}^{-1/2}\right) \frac{d\nu}{d\nu_\rho}^{1/2} d\nu_\rho I_\mathcal H\right\| \\ & = \left\| \int_X g_k d\nu_\rho I_\mathcal H \right\| \\ & = \|g_k\|_{1,\nu_\rho I_\mathcal H} < \infty. 
\end{align*} Moreover, \begin{align*} f_1 - f_2 + i(f_3 - f_4) & = \frac{d\nu}{d\nu_\rho}^{-1/2}(g_1 - g_2 + i(g_3 - g_4))\frac{d\nu}{d\nu_\rho}^{-1/2} \\ & = \frac{d\nu}{d\nu_\rho}^{-1/2}\left(\frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2}\right)\frac{d\nu}{d\nu_\rho}^{-1/2} \\ & = f. \end{align*} Using the same calculations as earlier in this proof, we have \[ \|f\|_{1,\nu} \leq \left\| \sum_{k=1}^4 f_k \right\|_{1,\nu} = \left\|\sum_{k=1}^4 g_k\right\|_{1,\nu_\rho I_\mathcal H} \] and taking the infimum over all $g_k$, we obtain \[ \|f\|_{1,\nu} \leq \left\| \frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2}\right\|_{1,\nu_\rho I_\mathcal H}. \] \end{proof} To further illustrate how this semi-norm behaves consider the following example which arises in \cite{BK}. \begin{example} Let $X=\{0,1\}$ and $\nu(i) = I_2, i=0,1$. Consider the function $f: \{0,1\} \rightarrow M_2$ given by \[ f(0) = \left[\begin{matrix} 4 & 4\\ 4& 4\end{matrix}\right]\quad \textrm{and} \quad f(1) = \left[\begin{matrix}3 & 0 \\ 0&-3\end{matrix}\right]. \] Then \[ \left\| \int_X f(x) d\nu\right\| = \|f(0) + f(1)\| = \left\| \left[\begin{matrix} 7&4\\ 4&1\end{matrix}\right] \right\| = 9, \quad \textrm{and} \] \[ \left\| \int_X |f(x)| d\nu\right\| = \|f(0) + |f(1)|\| = \left\| \left[\begin{matrix} 7&4\\ 4&7\end{matrix}\right] \right\| = 11. \] However, consider $f_1,f_2:\{0,1\} \rightarrow M_2$ given by \[ f_1(0) = f(0), \ f_1(1) = \left[\begin{matrix} 4& -2\\-2& 1\end{matrix}\right], \ f_2(0) = 0_2, \ \textrm{and} \ f_2(1) = \left[\begin{matrix} 1&-2 \\ -2&4\end{matrix}\right]. \] This gives that $f_1,f_2 \geq 0$ and $f = f_1 - f_2$ and so \[ \|f\|_1 \leq \left\|\int_X f_1(x) + f_2(x) d\nu\right\| = \left\| f(0) + f_1(1) + f_2(1) \right\| = \left\| \left[\begin{matrix} 9&0\\ 0&9\end{matrix}\right] \right\| = 9. \] \end{example} Throughout this paper an important dynamic is how this proposed 1-topology compares against various forms of the ultraweak topology. 
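In fact, the estimate $\|f\|_1 \leq 9$ in the preceding example is sharp. The following supplementary computation (not needed later; we fix the reference state $\rho = \frac{1}{2}I_2$, so that $\nu_\rho(\{0\}) = \nu_\rho(\{1\}) = 1$ and $\frac{d\nu}{d\nu_\rho} = I_2$) obtains the matching lower bound from Lemma \ref{lemma:statetopologyweaker} below.

```latex
Let $s$ be the rank-one projection onto the unit vector $u = (\cos t, \sin t)$,
where $t$ is chosen so that $\cos 2t = \frac{3}{5}$ and $\sin 2t = \frac{4}{5}$.
Then
\[
f_s(0) = \tr\left( s f(0) \right) = 4(\cos t + \sin t)^2 = 4(1 + \sin 2t)
= \frac{36}{5},
\qquad
f_s(1) = \tr\left( s f(1) \right) = 3\cos 2t = \frac{9}{5},
\]
and Lemma \ref{lemma:statetopologyweaker} yields
\[
\|f\|_1 \ \geq \ \int_X |f_s| \, d\nu_\rho \ = \ \frac{36}{5} + \frac{9}{5}
\ = \ 9.
\]
Together with the decomposition exhibited in the example, $\|f\|_1 = 9$.
```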
\begin{lemma}\label{lemma:statetopologyweaker} Let $\nu \in \povm{\mathcal H}{X}$ such that $\frac{d\nu}{d\nu_\rho}$ exists. If $f\in \mathcal L^1_\mathcal H(X,\nu)$ and $s\in \mathcal S(\mathcal H)$ then \[ \int_X |f_s|d\nu_\rho \ \leq \ \|f\|_1. \] \end{lemma} \begin{proof} Let $f_i\in \mathcal L^1_\mathcal H(X,\nu), f_i\geq 0, 1\leq i\leq 4$ such that $f= f_1-f_2 + i(f_3 - f_4)$. We have that \[ -f_1 - f_2 \leq f_1 - f_2 \leq f_1 + f_2 \quad \textrm{and} \quad -f_3 -f_4 \leq f_3 - f_4 \leq f_3+f_4. \] Recalling that $g_s = \tr\left(s\left(\frac{d\nu}{d\nu_\rho}\right)^{1/2}g\left(\frac{d\nu}{d\nu_\rho}\right)^{1/2}\right)$, this implies that \[ -(f_1 + f_2)_s \leq (f_1 - f_2)_s \leq (f_1+f_2)_s \quad \textrm{and} \quad -(f_3 + f_4)_s \leq (f_3 - f_4)_s \leq (f_3 + f_4)_s \] and so $|(f_1-f_2)_s| \leq (f_1+f_2)_s$ and $|(f_3 -f_4)_s| \leq (f_3+f_4)_s$. Therefore, \begin{align*} \int_X |f_s| d\nu_\rho & \leq \int_X |(f_1-f_2)_s| + |(f_3 - f_4)_s|\ d\nu_\rho \\ & \leq \int_X (f_1 + f_2)_s + (f_3+f_4)_s\ d\nu_\rho \\ & = \int_X (f_1+f_2+f_3+f_4)_s\ d\nu_\rho \\ & = \tr\left( s\int_X f_1 + f_2 + f_3 + f_4 \ d\nu \right) \\ & \leq \left\|\int_X f_1 + f_2 + f_3 + f_4 \ d\nu \right\| \end{align*} and the conclusion follows by taking the infimum over all such decompositions. \end{proof} In finite dimensions, with some conditions on the Radon-Nikod\'ym derivative, we get that the two semi-norms developed in this section are equivalent. In \cite{PRLyapunov} the authors introduced the von Neumann algebra of essentially bounded quantum random variables \begin{align*} L^\infty_\mathcal H(X,\nu) & = \{ h:X\rightarrow \mathcal B(\mathcal H) \ \textrm{qrv} \ : \exists M\geq 0, \|h(x)\| \leq M \ \textrm{a.e wrt} \ \nu \} \\ & = L^\infty(X,\nu_\rho) \ \bar\otimes\ \mathcal B(\mathcal H) \end{align*} which is needed in the following proposition and throughout the rest of the paper. 
Note that the norm this comes with is defined as \[ \|f\|_\infty := \Big\| \, \|f(\cdot)\| \, \Big\|_{L^\infty(X,\nu_\rho)} \] since the function $x\mapsto \|f(x)\|$ belongs to $L^\infty(X,\nu_\rho)$. \begin{proposition}\label{prop:finitecomparable} Suppose $\mathcal H= \mathbb C^n$, $\nu\in\povm{\mathcal H}{X}$ such that $\frac{d\nu}{d\nu_\rho} \in M_n$ is invertible almost everywhere ($\frac{d\nu}{d\nu_\rho} \in M_n^{-1}$ a.e.), and $\frac{d\nu}{d\nu_\rho}, \frac{d\nu}{d\nu_\rho}^{-1} \in L^\infty_\mathcal H(X,\nu)$. For $f\in \mathcal L^1_\mathcal H(X,\nu)$ self-adjoint we have \[ \|f\|_1 \leq \left\| \int_X |f(x)| d\nu\right\| \leq \left\| \int_X \|f(x)\| I_n d\nu\right\| \leq n \left\|\frac{d\nu}{d\nu_\rho}\right\|_\infty \left\|\frac{d\nu}{d\nu_\rho}^{-1}\right\|_\infty \|f\|_1. \] \end{proposition} \begin{proof} The first two inequalities are true in general without the finite-dimensional or boundedness conditions. This is because $f$ can be written as the difference of its positive and negative parts, $f = f_+ - f_-$, and so \[ \|f\|_1 \leq \|f_+ + f_-\|_1 = \||f|\|_1 = \left\| \int_X |f(x)| d\nu\right\|. \] The second inequality follows easily since $|f(x)| \leq \|f(x)\|I_\mathcal H$. Towards the last inequality, first consider \begin{align*} \left\| \int_X \|f(x)\| I_n d\nu\right\| & = \sup_{s\in \mathcal S(\mathcal H)} \tr\left( s \int_X \|f(x)\| I_n d\nu\right) \\ & = \sup_{s\in \mathcal S(\mathcal H)} \int_X \tr\left(s \left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}\|f(x)\|I_n \left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2} \right) d\nu_\rho \\ & = \sup_{s\in \mathcal S(\mathcal H)} \int_X \|f(x)\| \tr\left(s \frac{d\nu}{d\nu_\rho}(x)\right)d\nu_\rho \\ & \leq \left\| \frac{d\nu}{d\nu_\rho}\right\|_\infty \int_X \|f(x)\| d\nu_\rho. \end{align*} Here we are using the fact that $\nu$ and $\nu_\rho$ are mutually absolutely continuous, meaning $L^\infty_\mathcal H(X,\nu) = L^\infty_\mathcal H(X,\nu_\rho)$.
We will need the following nice fact about positive operators: if $A,B \geq 0$ then $-B \leq A-B \leq A$ and so $\|A-B\| \leq \max\{\|A\|,\|B\|\} \leq \|A+B\|$. Now for each $\epsilon > 0$ there exists $f_1, f_2 \geq 0 \in L^1_\mathcal H(X,\nu)$ such that $f = f_1 - f_2$ and $\|f_1 + f_2\|_1 < \|f\|_1 + \epsilon$. For all $x\in X$ we have that \begin{align*} & \|f(x)\| \\ & = \|f_1(x) - f_2(x)\| \\ & \leq \|f_1(x) + f_2(x)\| \\ & = \left\| \left(\frac{d\nu}{d\nu_\rho}(x)\right)^{-1/2}\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}(f_1(x) + f_2(x))\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{-1/2} \right\| \\ & \leq \left\|\frac{d\nu}{d\nu_\rho}^{-1} \right\|_\infty\left\|\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}(f_1(x) + f_2(x))\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}\right\| \\ & \leq \left\|\frac{d\nu}{d\nu_\rho}^{-1} \right\|_\infty \tr\left(\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}(f_1(x) + f_2(x))\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2} \right) \\ & = \left\|\frac{d\nu}{d\nu_\rho}^{-1} \right\|_\infty\sum_{i=1}^n\tr\left(e_{i,i}\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}(f_1(x) + f_2(x))\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}\right) \\ & = \left\|\frac{d\nu}{d\nu_\rho}^{-1} \right\|_\infty\sum_{i=1}^n (f_1 + f_2)_{e_{i,i}}(x). \end{align*} Therefore, by the comparison theorem, an earlier calculation and Lemma \ref{lemma:statetopologyweaker} we get that \begin{align*} \left\|\int_X \|f(x)\|I_\mathcal H d\nu\right\| & \leq \left\| \frac{d\nu}{d\nu_\rho}\right\|_\infty \int_X \|f(x)\| d\nu_\rho \\ & \leq \left\| \frac{d\nu}{d\nu_\rho}\right\|_\infty\left\|\frac{d\nu}{d\nu_\rho}^{-1} \right\|_\infty \sum_{i=1}^n \int_X (f_1 + f_2)_{e_{i,i}} d\nu_\rho \\ & \leq n \left\| \frac{d\nu}{d\nu_\rho}\right\|_\infty\left\|\frac{d\nu}{d\nu_\rho}^{-1} \right\|_\infty \|f_1 + f_2\|_1 \\ & < n \left\| \frac{d\nu}{d\nu_\rho}\right\|_\infty\left\|\frac{d\nu}{d\nu_\rho}^{-1} \right\|_\infty (\|f\|_1 + \epsilon). 
\end{align*} Since $\epsilon > 0$ was arbitrary, the last inequality follows. \end{proof} Define $\mathcal I = \{f\in \mathcal L^1_\mathcal H(X,\nu) : \|f\|_1 = 0\}$ and let $L^1_\mathcal H(X,\nu) = \mathcal L^1_\mathcal H(X,\nu)/\mathcal I$. Lemma \ref{lemma:statetopologyweaker} implies that the 1-topology on $L^1_\mathcal H(X,\nu)$ is stronger than the topology given by $(f_n)_s \rightarrow f_s$ for all $s\in \mathcal S(\mathcal H)$. \begin{theorem} Let $\nu\in \povm{\mathcal H}{X}$ be such that $\frac{d\nu}{d\nu_\rho}$ exists. Then $L^1_\mathcal H(X,\nu)$ is a Banach space; that is, it is complete in the 1-norm. \end{theorem} \begin{proof} Let $\{f^{(n)}\}$ be a Cauchy sequence in $L^1_\mathcal H(X,\nu)$. There exists an increasing sequence of numbers $\{k_n\}_{n\in \mathbb N}$ such that \[ \|f^{(l)} - f^{(m)}\|_1 < \frac{1}{2^{n+1}}, \quad \forall l,m \geq k_n. \] Since $f^{(k_1)} \in L^1_\mathcal H(X,\nu)$ there exist $f_{0,i} \geq 0$, $1\leq i\leq 4$, with $f^{(k_1)} = f_{0,1} - f_{0,2} + i(f_{0,3} - f_{0,4})$ and \[ \left\| \int_X f_{0,1} + f_{0,2} + f_{0,3} + f_{0,4} d\nu\right\| < \|f^{(k_1)}\|_1 + 1. \] Similarly, $f^{(k_{n+1})} - f^{(k_n)} \in L^1_\mathcal H(X,\nu)$ and so there exist $f_{n,i} \geq 0$, $1\leq i\leq 4$, with $f^{(k_{n+1})} - f^{(k_n)} = f_{n,1} - f_{n,2} + i(f_{n,3} - f_{n,4})$ and \[ \left\| \int_X f_{n,1} + f_{n,2} + f_{n,3} + f_{n,4} d\nu\right\| < \|f^{(k_{n+1})} - f^{(k_n)}\|_1 + \frac{1}{2^{n+1}} < \frac{1}{2^n}. \] Hence, by the triangle inequality \[ \left\| \int_X \sum_{n=0}^\infty f_{n,1} + f_{n,2} + f_{n,3} + f_{n,4} d\nu\right\| < \|f^{(k_1)}\|_1 + \sum_{n=0}^\infty \frac{1}{2^n} = \|f^{(k_1)}\|_1 + 2. \] Thus, $f_i := \sum_{n=0}^\infty f_{n,i}\geq 0$ is $\nu$-integrable, $1\leq i\leq 4$, and so $f := f_1 - f_2 + i(f_3 - f_4) \in \mathcal L^1_\mathcal H(X,\nu)$.
Consider now that for each $m\geq 1$, by telescoping, we have that \begin{align*} \|f - f^{(k_m)}\|_1 & = \left\|f^{(k_1)} + \sum_{n=1}^\infty (f^{(k_{n+1})} - f^{(k_n)}) - f^{(k_m)}\right\|_1 \\ & = \left\|f^{(k_m)} + \sum_{n=m}^\infty (f^{(k_{n+1})} - f^{(k_n)}) - f^{(k_m)}\right\|_1 \\ & = \left\| \sum_{n=m}^\infty (f^{(k_{n+1})} - f^{(k_n)}) \right\|_1 \\ & < \sum_{n=m}^\infty \frac{1}{2^n} \\ & = \frac{1}{2^{m-1}}. \end{align*} Therefore, $f^{(n)} \rightarrow f$ in $\|\cdot\|_1$ and the conclusion follows. \end{proof} Finally in this section we relate $L^\infty_\mathcal H(X,\nu)$ and $L^1_\mathcal H(X,\nu)$. \begin{proposition}\label{prop:denseinstatetopology} Suppose $\frac{d\nu}{d\nu_\rho}(x) \in \mathcal B(\mathcal H)^{-1}$ for all $x\in X$ and $\frac{d\nu}{d\nu_\rho}, \frac{d\nu}{d\nu_\rho}^{-1} \in L^\infty_\mathcal H(X,\nu)$. There is a natural inclusion of $L^\infty_\mathcal H(X,\nu)$ in $L^1_\mathcal H(X,\nu)$ with \[ \|g\|_1 \leq 2\|g\|_\infty \|\nu(X)\|, \quad \forall g\in L^\infty_\mathcal H(X,\nu). \] Moreover, $L^\infty_\mathcal H(X,\nu)$ is dense in $L^1_\mathcal H(X,\nu)$ in the state topology, $(f_n)_s \rightarrow f_s$ for all $s\in \mathcal S(\mathcal H)$. \end{proposition} \begin{proof} If $g\in L^\infty_\mathcal H(X,\nu)$ then it is easy to see that $(Re\: g)_\pm, (Im\: g)_\pm \in L^\infty_\mathcal H(X,\nu)$ as well. Now since $\nu\in \povm{\mathcal H}{X}$ is necessarily a finite measure we have \begin{align*} \|g\|_1 & \leq \|(Re\: g)_+ - (Re\: g)_-\|_1 + \|(Im\: g)_+ - (Im\: g)_-\|_1 \\ & \leq \left\| \int_X (Re\: g)_+ + (Re\: g)_- d\nu \right\| + \left\| \int_X (Im\: g)_+ + (Im\: g)_- d\nu\right\| \\ & \leq \left\| \int_X \|(Re\: g)_+ + (Re\: g)_-\| I_\mathcal H d\nu \right\| + \left\| \int_X \|(Im\: g)_+ + (Im\: g)_-\| I_\mathcal H d\nu\right\| \\ & \leq \|Re\: g\|_\infty \left\| \int_X I_\mathcal H d\nu\right\| + \|Im\: g\|_\infty \left\| \int_X I_\mathcal H d\nu\right\| \\ & \leq 2\|g\|_\infty \|\nu(X)\|. 
\end{align*} Additionally, for $g\neq 0$ we have, by the boundedness of the Radon-Nikod\'ym derivative, that \[ 0\neq \tilde g = \frac{d\nu}{d\nu_\rho}^{1/2}g\frac{d\nu}{d\nu_\rho}^{1/2} \in L^\infty_\mathcal H(X,\nu_\rho I_\mathcal H), \] which implies that $g_s \in L^\infty(X,\nu_\rho)$ for every $s\in \mathcal S(\mathcal H)$. If $g_s = 0$ for every $s$ then $g$ would need to be $0$, and so there exists an $s\in\mathcal S(\mathcal H)$ such that $g_s \neq 0$ in $L^\infty(X,\nu_\rho)$. Hence, \[ 0 < \int_X |g_s| d\nu_\rho \leq \|g\|_1 \] by Lemma \ref{lemma:statetopologyweaker}. Therefore, $L^\infty_\mathcal H(X,\nu)$ sits inside $L^1_\mathcal H(X,\nu)$. Now suppose $f\in L^1_\mathcal H(X,\nu)$ with $f\geq 0$. Since any class representative of $f$ is a quantum random variable from $X$ into $\mathcal B(\mathcal H)$, one can find a sequence of measurable sets $\{E_n\}_{n\in\mathbb N}$ such that \[ \|\chi_{E_n}f\|_\infty \leq n, \quad E_n \subseteq E_{n+1}, \quad \textrm{and} \quad X = \cup_{n=1}^\infty E_n. \] Thus, for all $s\in \mathcal S(\mathcal H)$ \begin{align*} (\chi_{E_n}f)_s & = \chi_{E_n}f_s \end{align*} which converges to $f_s$ in $L^1(X,\nu_\rho)$ by the monotone convergence theorem. Therefore, since every $L^1_\mathcal H(X,\nu)$ function is a linear combination of four positive functions, the conclusion is reached. \end{proof} This proposition implies that if $\mathcal H = \mathbb C^n$ then $L^1_\mathcal H(X,\nu) = \overline{L^\infty_\mathcal H(X,\nu)}^{\|\cdot\|_1}$. In infinite dimensions this will not be the case; for instance, the function in Example \ref{example:Linftynotnormdense} cannot be approximated by essentially bounded functions in the 1-norm. \section{Bounded multipliers}\label{sec:boundedmult} Although $L^\infty_\mathcal H(X,\nu)$ is not the dual space of $L^1_\mathcal H(X, \nu)$, we can think of it as a generalization of the dual space.
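For orientation, recall the classical scalar picture that the construction below generalizes (a standard fact, recorded here only as a sketch).

```latex
When $\mathcal H = \mathbb C$ and $\nu = \mu$ is a finite positive measure, the
dual space of $L^1(X,\mu)$ is $L^\infty(X,\mu)$ under the pairing
\[
(f, g) \ \longmapsto \ \int_X f g \, d\mu,
\qquad f \in L^1(X,\mu), \ g \in L^\infty(X,\mu),
\]
where H\"older's inequality
$\left| \int_X fg \, d\mu \right| \leq \|f\|_1 \|g\|_\infty$
shows that each such functional is bounded.
```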
Consider the following ``natural pairing'' or ``bracket'' \[ \langle \cdot,\cdot\rangle: L^1_\mathcal H(X, \nu)\times L^\infty_\mathcal H(X,\nu) \rightarrow \mathcal B(\mathcal H) \] given by \[ \langle f,g\rangle = \int_X fg \ d\nu. \] The main trouble with this is that $fg$ may fail to be in $L^1_\mathcal H(X,\nu)$, or to put it another way, multiplication by $g \in L^\infty_\mathcal H(X,\nu)$ could be an unbounded operator on $L^1_\mathcal H(X,\nu)$. As the following example shows, this is a problem in infinite dimensions even when $\nu = \mu I_\mathcal H$. \begin{example} Let $X=[0,1]$, $\mu$ Lebesgue measure on $X$, $\mathcal H = \ell^2(\mathbb N)$, and $\nu = \mu I_\mathcal H$. Let \[ f(x) = \sum_{i=1}^\infty 2^{i}\chi_{(\frac{1}{2^{i}}, \frac{1}{2^{i-1}}]}(x) e_{i,i} \] and so \[ \|f\|_1 = \left\|\int_0^1 f(x) d\nu \right\| = \|I_\mathcal H\| = 1 \] which gives that $f\in L^1_\mathcal H([0,1],\mu I_\mathcal H)$. Now consider \[ g(x) = \sum_{i=1}^\infty \chi_{(\frac{1}{2^{i}}, \frac{1}{2^{i-1}}]}(x) e_{i,1} \in L^\infty_\mathcal H([0,1],\mu I_\mathcal H), \] since $\|g\|_\infty =1$. However, \[ \left\| \int_0^1 f(x)g(x) d\nu \right\| = \left\| \int_0^1 \sum_{i=1}^\infty 2^i \chi_{(\frac{1}{2^{i}}, \frac{1}{2^{i-1}}]}(x) e_{i,1}\, d\nu\right\| = \infty. \] In other words, $g$ is not a bounded right multiplier on $L^1_\mathcal H([0,1],\mu I_\mathcal H)$. \end{example} In general, without putting conditions on the dimension or measure, all we can say is the following: \begin{lemma} For all $f\in L^1_\mathcal H(X,\nu)$ and $g\in L^\infty(X,\nu_\rho)$ one has \[ \|f\cdot gI_\mathcal H\|_1 = \|gI_\mathcal H \cdot f\|_1 \leq 2\|f\|_1\|g\|_\infty. \] \end{lemma} \begin{proof} For every $\epsilon > 0$ there exist $f_i\in L^1_\mathcal H(X,\nu)$ with $f_i\geq 0$, $1\leq i\leq 4$, such that $f = f_1 - f_2 + i(f_3 -f_4)$ and $\|f\|_1 \leq \|f_1 +f_2+f_3+f_4\|_1 < \|f\|_1 + \epsilon$.
Now, for $g= g_1+ig_2$ where $g_1$ and $g_2$ are real-valued, we have \begin{align*} \|f\cdot g I_\mathcal H\|_1 &\leq \sum_{i=1}^2\|g_if_1 - g_if_2 + i(g_if_3 - g_if_4)\|_1 \\ & \leq \sum_{i=1}^2\left\| \int_X |g_i|f_1 + |g_i|f_2 + |g_i|f_3 + |g_i|f_4 d\nu \right\| \\ & \leq \sum_{i=1}^2\left\| \int_X \|g\|_\infty (f_1+f_2+f_3+f_4)d\nu \right\| \\ & < 2\|g\|_\infty (\|f\|_1 + \epsilon). \end{align*} \end{proof} Now we can prove a version of the Cauchy-Schwarz inequality in this context. \begin{lemma}\label{lemma:CauchySchwarz} If $f\in L^1_\mathcal H(X,\nu)$ and $g\in L^\infty(X,\nu_\rho)$ then \[ \|\langle f,gI_\mathcal H\rangle\| \leq 4\|f\|_1\|g\|_\infty. \] \end{lemma} \begin{proof} Suppose $f_1,f_2\in L^1_\mathcal H(X,\nu)$ such that $f_1,f_2\geq 0$. Then \[ -f_1 - f_2 \leq f_1 - f_2 \leq f_1+f_2 \] which implies that \[ - \int_X f_1 + f_2\: d\nu \leq \int_X f_1 -f_2\: d\nu \leq \int_X f_1 + f_2\: d\nu \] and so \[ \left\|\int_X f_1 -f_2\: d\nu\right\| \leq \left\|\int_X f_1 + f_2\: d\nu\right\|. \] Hence, for every $f_i\in L^1_\mathcal H(X,\nu), f_i\geq 0, 1\leq i\leq 4$ such that $f=f_1-f_2 + i(f_3 -f_4)$ we have \begin{align*} \left\|\int_X f\: d\nu \right\| & \leq \left\|\int_X f_1 - f_2\: d\nu \right\| + \left\|\int_X f_3 - f_4\: d\nu \right\| \\ & \leq \left\|\int_X f_1 + f_2\: d\nu \right\| + \left\|\int_X f_3 + f_4\: d\nu \right\| \\ & \leq 2\left\|\int_X f_1 + f_2 + f_3 + f_4\: d\nu \right\|. \end{align*} So \[ \left\| \int_X f\: d\nu\right\| \leq 2\|f\|_1. \] Therefore, by the last lemma, we have \[ \|\langle f,gI_\mathcal H\rangle\| = \left\| \int_X fgI_\mathcal H\: d\nu\right\| \leq 2\|fgI_\mathcal H\|_1 \leq 4\|f\|_1\|g\|_\infty. \] \end{proof} \begin{proposition} Suppose $\nu = \mu I_\mathcal H \in \povm{\mathcal H}{X}$ where $\mu$ is a positive, finite measure on $X$. 
If $f\in L^1_\mathcal H(X,\nu)$ and $A\in \mathcal B(\mathcal H)$ then $Af$ and $fA$ are in $L^1_\mathcal H(X,\nu)$ with \[ \|Af\|_1 = \|fA\|_1 \leq 4\left(1+\|A\|^2\right)\|f\|_1. \] \end{proposition} \begin{proof} First suppose that $f\geq 0$. For each $\lambda \in \{1,-1, i, -i\}$ we have that \[ \left[\begin{matrix}f & \bar\lambda fA^* \\ \lambda Af & AfA^*\end{matrix}\right] = \left[\begin{matrix}I_\mathcal H &0\\ 0 &\lambda A\end{matrix}\right]\left[\begin{matrix}f & f \\ f & f\end{matrix}\right]\left[\begin{matrix}I_\mathcal H& 0\\ 0 & \lambda A\end{matrix}\right]^* \geq 0. \] Applying this positive block operator to vectors of the form $\left[\begin{matrix} x \\ x\end{matrix}\right]$ gives \[ 0 \leq \left\langle \left[\begin{matrix}f & \bar\lambda fA^* \\ \lambda Af & AfA^*\end{matrix}\right] \left[\begin{matrix} x \\ x\end{matrix}\right], \left[\begin{matrix} x \\ x\end{matrix}\right] \right\rangle = \langle (f + \lambda Af + \bar\lambda fA^* + AfA^*)x,x \rangle \] for all $x\in \mathcal H$ and so $f + \lambda Af + \bar\lambda fA^* + AfA^* \geq 0$. Now \begin{align*} Re(Af) & = \frac{1}{2}(Af + fA^*) \\ & = \frac{1}{4}(f + Af + fA^* + AfA^*) - \frac{1}{4}(f - Af - fA^* + AfA^*) \end{align*} and \begin{align*} Im(Af) & = \frac{1}{2}(-iAf + ifA^*) \\ & = \frac{1}{4}(f -iAf + ifA^* + AfA^*) - \frac{1}{4}(f +iAf - ifA^* + AfA^*), \end{align*} both differences of positive operators.
Hence, \begin{align*} \|Af\|_1 & \leq \left\|\sum_{\lambda = \{1, -1, i, -i\}} \frac{1}{4}(f + \lambda Af + \bar\lambda fA^* + AfA^*)\right\|_1 \\ & = \|f+AfA^*\|_1 \\ & \leq \|f\|_1 + \|AfA^*\|_1 \\ & = \|f\|_1 + \sup_{s\in \mathcal S(\mathcal H)} \tr\left(s \int_X AfA^* d\nu \right) \\ & = \|f\|_1 + \sup_{s\in \mathcal S(\mathcal H)} \int_X \tr(sAf(x)A^*)d\mu \\ & = \|f\|_1 + \sup_{s\in \mathcal S(\mathcal H)} \int_X \tr(A^*sA f(x))d\mu \\ & = \|f\|_1 + \sup_{s\in \mathcal S(\mathcal H)} \tr\left(A^*sA \int_X f d\nu \right) \\ & = \|f\|_1 + \sup_{s\in \mathcal S(\mathcal H), \tr(A^*sA)\neq 0} \tr(A^*sA) \tr\left(\frac{1}{\tr(A^*sA)} A^*sA \int_X f d\nu\right) \\ & \leq \|f\|_1 + \|A^*A\|\|f\|_1 \\ & = (1 + \|A\|^2)\|f\|_1. \end{align*} The general case follows easily. In particular, letting $\epsilon > 0$, there exists $f_i \geq 0 \in L^1_\mathcal H(X,\nu), 1\leq i\leq 4$, such that $f = f_1-f_2 + i(f_3-f_4)$ and $\|\sum_{i=1}^4 f_i\|_1 < \|f\|_1 + \epsilon$. By the above argument we have \begin{align*} \|Af\|_1 & \leq \|Af_1 - Af_2 + i(Af_3 - Af_4)\|_1 \\ & \leq \sum_{i=1}^4 \|Af_i\|_1 \\ & \leq \sum_{i=1}^4 (1+\|A\|^2)\|f_i\|_1 \\ & \leq 4(1+\|A\|^2)\left\|\sum_{i=1}^4 f_i\right\|_1 \\ & < 4(1+\|A\|^2)(\|f\|_1 + \epsilon). \end{align*} \end{proof} \begin{corollary}\label{cor:Aderivmultiplier} Suppose $\nu\in\povm{\mathcal H}{X}$ such that $\frac{d\nu}{d\nu_\rho}(x) \in \mathcal B(\mathcal H)^{-1}$ for all $x\in X$ and $\frac{d\nu}{d\nu_\rho}, \frac{d\nu}{d\nu_\rho}^{-1} \in L^\infty_\mathcal H(X,\nu)$. If $f\in L^1_\mathcal H(X,\nu)$ and $A\in \mathcal B(\mathcal H)$ then \[ \left\|\frac{d\nu}{d\nu_\rho}^{-1/2}A\frac{d\nu}{d\nu_\rho}^{1/2}f\right\|_1 = \left\|f\frac{d\nu}{d\nu_\rho}^{1/2}A\frac{d\nu}{d\nu_\rho}^{-1/2}\right\|_1 \leq 4\left(1+\|A\|^2\right)\|f\|_1. 
\] \end{corollary} \begin{proof} By the previous proposition and repeated uses of Lemma \ref{lemma:equivalencetoclassical} we have \begin{align*} \left\| \frac{d\nu}{d\nu_\rho}^{-1/2}A\frac{d\nu}{d\nu_\rho}^{1/2}f \right\|_{1, \nu} & = \left\|\frac{d\nu}{d\nu_\rho}^{1/2} \left(\frac{d\nu}{d\nu_\rho}^{-1/2}A\frac{d\nu}{d\nu_\rho}^{1/2}f\right)\frac{d\nu}{d\nu_\rho}^{1/2} \right\|_{1, \nu_\rho I_\mathcal H} \\ & = \left\|A\frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2} \right\|_{1, \nu_\rho I_\mathcal H} \\ & \leq 4(1+\|A\|^2)\left\|\frac{d\nu}{d\nu_\rho}^{1/2}f\frac{d\nu}{d\nu_\rho}^{1/2} \right\|_{1, \nu_\rho I_\mathcal H} \\ & = 4(1+\|A\|^2)\|f\|_{1,\nu}. \end{align*} \end{proof} In finite dimensions, every multiplication operator is bounded, assuming some conditions on the Radon-Nikod\'ym derivative. \begin{proposition}\label{prop:boundedthenallaremultipliersinfd} Suppose $\mathcal H= \mathbb C^n$, $\nu\in\povm{\mathcal H}{X}$ such that $\frac{d\nu}{d\nu_\rho}(x) \in M_n^{-1}$ for all $x\in X$ and $\frac{d\nu}{d\nu_\rho}, \frac{d\nu}{d\nu_\rho}^{-1} \in L^\infty_\mathcal H(X,\nu)$. If $f\in L^1_\mathcal H(X,\nu)$ and $g\in L^\infty_\mathcal H(X,\nu)$ then $fg\in L^1_\mathcal H(X,\nu)$ with \[ \|fg\|_1 \leq n \left\|\frac{d\nu}{d\nu_\rho}\right\|_\infty \left\| \frac{d\nu}{d\nu_\rho}^{-1}\right\|_\infty \|f\|_1 \|g\|_\infty. \] \end{proposition} \begin{proof} By Proposition \ref{prop:finitecomparable} we have that \begin{align*} \|fg\|_1 & \leq \left\| \int_X \|f(x)g(x)\|I_n d\nu \right\| \\ & \leq \left\| \int_X \|f(x)\|\|g\|_\infty I_n d\nu \right\| \\ & = \|g\|_\infty \left\|\int_X \|f(x)\|I_n d\nu\right\| \\ & \leq \|g\|_\infty n \left\|\frac{d\nu}{d\nu_\rho}\right\|_\infty \left\| \frac{d\nu}{d\nu_\rho}^{-1}\right\|_\infty \|f\|_1 < \infty. \end{align*} \end{proof} However, if the boundedness condition is dropped, multipliers can become unbounded even in finite dimensions. 
\begin{example} Let $\mathcal H = \mathbb C^2$, $X=[0,1]$, $\mu$ be Lebesgue measure and \[ \nu = \left[ \begin{matrix} \mu & 0 \\ 0 & \sum_{i=1}^\infty 2^{i/2}\chi_{(\frac{1}{2^i},\frac{1}{2^{i-1}}]} \mu \end{matrix} \right]. \] Now, $\nu$ is a POVM because it is positive and finite: \[ \int_0^1 \sum_{i=1}^\infty 2^{i/2}\chi_{(\frac{1}{2^i},\frac{1}{2^{i-1}}]} d\mu = \sum_{i=1}^\infty \frac{1}{{\sqrt 2}^i} = \frac{\frac{1}{\sqrt 2}}{1-\frac{1}{\sqrt 2}} = \frac{1}{\sqrt 2 - 1} < \infty. \] Let $f(x) = \sum_{i=1}^\infty 2^{i/2}\chi_{(\frac{1}{2^i},\frac{1}{2^{i-1}}]}(x) e_{1,1}$, which by the above calculation gives that $f\in L^1_\mathcal H([0,1], \nu)$. However, for $U = \left[\begin{matrix} 0& 1 \\ 1 & 0 \end{matrix}\right] \in L^\infty_\mathcal H([0,1],\nu)$ one has \begin{align*} \|U^*fU\|_1 & = \left\| \sum_{i=1}^\infty 2^{i/2}\chi_{(\frac{1}{2^i},\frac{1}{2^{i-1}}]}(x) e_{2,2} \right\|_1 \\ & = \left| \int_0^1 \sum_{i=1}^\infty 2^i\chi_{(\frac{1}{2^i},\frac{1}{2^{i-1}}]}(x) d\mu \right| \\ & = \infty. \end{align*} Therefore, multiplication by $U$ is not bounded on $L^1_\mathcal H([0,1], \nu)$ even though $\mathcal H = \mathbb C^2$. \end{example} Even though the set of bounded multipliers may be difficult to characterize, we can move forward using only those arising from $L^\infty(X,\nu_\rho)$. To this end, define the following subspace of linear functionals on $L^1_\mathcal H(X,\nu)$ \[ \mathcal F(X,\nu) = \operatorname{span}\{\tr(s\langle \cdot, g I_\mathcal H\rangle) : s\in \mathcal S(\mathcal H), g\in L^\infty(X,\nu_\rho)\}. \] \begin{proposition}\label{prop:fsep} The family $\{\langle \cdot, g I_\mathcal H\rangle : g\in L^\infty(X,\nu_\rho)\}$ is separating and $\mathcal F(X,\nu)$ is a separating subspace of linear functionals on $L^1_\mathcal H(X,\nu)$. \end{proposition} \begin{proof} Suppose $f\in L^1_\mathcal H(X,\nu)$ such that $\|f\|_1 \neq 0$. There must exist $s\in \mathcal S(\mathcal H)$ such that $f_s \neq 0$ in $L^1(X, \nu_\rho)$.
Otherwise, $\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}f(x)\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2} = 0$ for almost all $x\in X$ with respect to $\nu$ (equally $\nu_\rho$) and so $f\equiv 0$ in $L^1_\mathcal H(X,\nu)$. This implies that there exists a $g\in L^\infty(X,\nu_\rho)$ such that \begin{align*} 0\neq \int_X f_sg d\nu_\rho & = \int_X \tr\left(s \left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2}f(x)g(x)I_\mathcal H\left(\frac{d\nu}{d\nu_\rho}(x)\right)^{1/2} \right) d\nu_\rho \\ &= \tr\left(s \int_X f(x) g(x)I_{\mathcal H} d\nu \right) \\ &= \tr(s \langle f, gI_\mathcal H\rangle ). \end{align*} Therefore, the conclusion is reached. \end{proof} One needs to be quite careful here as it seems unlikely that this family recovers the 1-norm on $L^1_\mathcal H(X,\nu)$. However, we can show that it detects positivity. \begin{lemma}\label{lemma:positivedetect} Suppose $f\in L^1_\mathcal H(X,\nu)$, then $f\geq 0$ if and only if $\langle f,gI_\mathcal H\rangle \geq 0$ for all $g\in L^\infty(X,\nu_\rho)$ such that $g\geq 0$. \end{lemma} \begin{proof} By the proof of the previous proposition it is easy to see that \begin{align*} f\geq 0 &\quad \Leftrightarrow \quad f_s \geq 0, & \forall s\in \mathcal S(\mathcal H) \\ &\quad \Leftrightarrow \quad f_sg\geq 0, & \forall s\in \mathcal S(\mathcal H), g\geq 0 \in L^\infty(X,\nu_\rho) \\ &\quad \Leftrightarrow \quad gf \geq 0, & \forall g\geq 0 \in L^\infty(X,\nu_\rho) \\ &\quad \Leftrightarrow \quad \langle f,gI_\mathcal H\rangle \geq 0, & \forall g\geq 0 \in L^\infty(X,\nu_\rho). \end{align*} \end{proof} We say a sequence $\{f_i\}_{i\geq 1}$ in $L^1_\mathcal H(X,\nu)$ is \textit{weakly converging} to $f\in L^1_\mathcal H(X,\nu)$ if it is converging weakly with respect to the family $\mathcal F(X,\nu)$. This is the same as \[ \langle f_i, gI_\mathcal H\rangle \rightarrow \langle f,gI_\mathcal H\rangle, \quad \forall g\in L^\infty(X,\nu_\rho) \] with convergence in the ultraweak topology of $\mathcal B(\mathcal H)$. 
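The dichotomy in the unbounded-multiplier example above comes down to two scalar series; a quick numerical sanity check (our own sketch in plain Python; the truncation level $N$ is an arbitrary choice) confirms that the mass of the $(2,2)$ entry of $\nu$ is a convergent geometric series, while after conjugation by the flip $U$ the surviving entry integrates to a series whose partial sums grow linearly:

```python
from math import sqrt

N = 60  # arbitrary truncation level

# mass of the (2,2) entry of nu: sum_{i>=1} 2^{i/2} * mu((1/2^i, 1/2^{i-1}]);
# each interval has Lebesgue measure 2^{-i}, so the terms are (1/sqrt(2))^i
finite_part = sum(2 ** (i / 2) * 2 ** (-i) for i in range(1, N + 1))
geometric_limit = (1 / sqrt(2)) / (1 - 1 / sqrt(2))  # closed form of the series
print(finite_part, geometric_limit)

# after conjugating by U, the (2,2) entry integrates to sum_{i>=1} 2^i * 2^{-i},
# whose partial sums equal N, so the full series diverges
divergent_part = sum(2 ** i * 2 ** (-i) for i in range(1, N + 1))
print(divergent_part)
```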
\section{Bistochastic operators}\label{sec:Bistoch} Throughout this section, as before, $\nu\in \povm{\mathcal H}{X}$. The following definition echoes that of the classical bistochastic operator. \begin{definition} A linear operator $B$ is called a {\em bistochastic operator} on $L^1_\mathcal H(X, \nu)$ if \begin{enumerate} \item $B$ is positive, \item $\int_X Bf d\nu = \int_X f d\nu , \quad \forall f\in L^1_\mathcal H(X,\nu)$, \item $BI_\mathcal H = I_\mathcal H$, \end{enumerate} where $I_\mathcal H$ above refers to the constant function $I_\mathcal H$ in $L^1_\mathcal H(X,\nu)$. The set of all bistochastic operators on $L^1_\mathcal H(X, \nu)$ is denoted by $\mathfrak{B}(X,\nu)$. \end{definition} \begin{lemma}\label{lem:bistochasticselfadjoint} Every bistochastic operator $B$ is self-adjoint, meaning that for every $f\in L^1_\mathcal H(X,\nu)$ we have $B(f^*) = B(f)^*$. \end{lemma} \begin{proof} Let $f\in L^1_\mathcal H(X,\nu)$. There exist $f_i\in L^1_\mathcal H(X,\nu), f_i\geq 0$, $i=1, \dots, 4$, such that $f = f_1 - f_2 + i(f_3 - f_4)$. Hence, \begin{align*} B(f^*) & = B(f_1 - f_2 - i(f_3 - f_4)) \\ & = B(f_1) - B(f_2) - i(B(f_3) - B(f_4)) \\ & = (B(f_1) - B(f_2) + i(B(f_3) - B(f_4)))^* \\ & = (B(f))^*. \end{align*} \end{proof} \begin{lemma}\label{lem:bistoinfinitycontractive} Every bistochastic operator takes $L^\infty_\mathcal H(X,\nu)$ to itself and is bounded in the $\infty$-norm. Furthermore, it is contractive on all self-adjoint functions. \end{lemma} \begin{proof} Let $B\in \mathfrak{B}(X,\nu)$ and let $f\in L^\infty_\mathcal H(X, \nu)$ be self-adjoint. Then $\|f\|_\infty I_\mathcal H \pm f \geq 0$ almost everywhere, and hence \[ \|f\|_\infty I_\mathcal H \pm Bf = B(\|f\|_\infty I_\mathcal H \pm f) \geq 0. \] Therefore, $Bf$ is essentially bounded by $\|f\|_\infty$. For general $f\in L^\infty_\mathcal H(X,\nu)$, splitting $f$ into its real and imaginary self-adjoint parts and using the linearity of $B$ gives boundedness in the $\infty$-norm. \end{proof} \begin{proposition} Every bistochastic operator is contractive with respect to the $\|\cdot\|_1$-norm.
\end{proposition} \begin{proof} Let $f\in L^1_\mathcal H(X,\nu)$ and $B\in \mathfrak B(X,\nu)$. For every $f_k\in L^1_\mathcal H(X,\nu), f_k\geq 0, 1\leq k\leq 4$ such that $f = f_1 -f_2 + i(f_3 - f_4)$ we have that \begin{align*} \|Bf\|_1 & = \|Bf_1 - Bf_2 + i(Bf_3 - Bf_4)\|_1 \\ & \leq \left\| \int_X \sum_{k=1}^4 Bf_k \ d\nu \right\| \\ & = \left\| \int_X \sum_{k=1}^4 f_k \ d\nu \right\|. \end{align*} Taking the infimum over all such combinations implies the conclusion. \end{proof} The easiest class of bistochastic operators to study is the one arising from classical bistochastic theory. Consider $\nu = \mu I_\mathcal H$ for some finite, positive measure $\mu$, which gives that $\frac{d\nu}{d\nu_\rho}(x) = I_\mathcal H$ and $\nu_\rho = \mu$. For this, we use $L^1(X,\mu)$ but with a compatible 1-norm $\int_X |f|_1 d\mu$ where \[ |f(x)|_1 = |Re\: f(x)| + |Im\: f(x)|. \] This implies that $L^1(X,\mu) I_\mathcal H \subset L^1(X,\nu)$. Most classical sources seem to consider only real-valued functions, in which case this is no different from the usual norm. Regardless, this new norm and the usual norm on $L^1$ are comparable since $|f(x)| \leq |f(x)|_1 \leq 2|f(x)|$. Thus, with this choice of norm we have that $L^1(X,\mu)$ is norm closed. The set of bistochastic operators on the classical $L^1(X,\mu)$ is denoted $\mathfrak B(L^1(X,\mu))$. \begin{theorem} If $\nu = \mu I_\mathcal H$ for some finite, positive measure $\mu$, then every $B\in \mathfrak B(L^1(X,\mu))$ extends to a bistochastic operator in $\mathfrak B(X,\nu)$ by the formula \[ B\left(fA\right) = B(f)A, \quad \forall f\in L^1(X,\mu), \ A\in \mathcal B(\mathcal H). \] \end{theorem} \begin{proof} Since $\mathcal H$ is separable, we can view every $f\in L^1_\mathcal H(X,\mu I_\mathcal H)$ as \[ f(x) = [f_{i,j}(x)] \in \mathcal B(\mathcal H) \] where $f_{i,j} \in L^1(X,\mu), i,j\geq 1$. Define $B$ on $L^1_\mathcal H(X,\mu I_\mathcal H)$ entrywise by \[ B(f) = [B(f_{i,j})].
\] Linearity is automatic, $B(I) = I$, and positivity follows from the fact that the action of $B$ on $f$ is akin to $B\otimes \operatorname{id}$ with $B$ positive. Now for every $f\in L^1_\mathcal H(X,\mu I_\mathcal H), f\geq 0$ we have that \begin{align*} \|B(f)\|_1 & = \sup_{s\in\mathcal S(\mathcal H)} \tr\left(s\int_X B(f) d\mu I_\mathcal H\right) \\ & = \sup_{s\in\mathcal S(\mathcal H)} \int_X \tr(sB(f)) d\mu \\ & = \sup_{s\in\mathcal S(\mathcal H)} \int_X B(\tr(sf)) d\mu \\ & = \sup_{s\in\mathcal S(\mathcal H)} \int_X \tr(sf) d\mu \\ & = \sup_{s\in\mathcal S(\mathcal H)} \tr\left(s\int_X f d\mu I_\mathcal H\right) \\ & = \|f\|_1. \end{align*} Therefore, by the triangle inequality, $B(f) \in L^1_\mathcal H(X,\mu I_\mathcal H)$. An important step above, which comes automatically with the definition of $B$ and will be used again in Section~\ref{sec:MD}, is the identity $(Bf)_s = B(f_s)$. \end{proof} We will denote the extension developed in the previous theorem by $B$ as well, and continue to write $\mathfrak B(L^1(X,\mu))$ for the set of such extended bistochastic operators. We have no example of a bistochastic operator on $L^1_\mathcal H(X,\mu I_\mathcal H)$ that does not arise in this way. \begin{corollary}\label{cor:bracketmodularity} If $\nu = \mu I_\mathcal H$ for some finite, positive measure $\mu$, then for every $B\in \mathfrak B(L^1(X,\mu))$, $f\in L^1(X,\mu), A\in \mathcal B(\mathcal H)$ and $g\in L^\infty(X,\mu)$ \[ \langle B(fA), g I_\mathcal H\rangle = \langle B(f),g\rangle A. \] \end{corollary} \begin{proof} This is a straightforward calculation: \begin{align*} \langle B(fA),g I_\mathcal H\rangle & = \langle B(f)A, gI_\mathcal H\rangle \\ & = \int_X B(f)AgI_\mathcal H \: d\mu I_\mathcal H \\ & = \left(\int_X B(f)g d\mu\right)A \\ & = \langle B(f),g\rangle A. \end{align*} \end{proof} Lastly, we turn to topology again. Suppose $B_i, B\in \mathfrak B(X,\nu), i\geq 1$.
We say that $B_i$ is \textit{WOT-convergent} to $B$ if $B_if$ weakly converges to $Bf$ for all $f\in L^1_\mathcal H(X,\nu)$, that is \[ \langle B_i(f), gI_\mathcal H\rangle \rightarrow \langle B(f), gI_\mathcal H\rangle, \quad \forall f\in L^1_\mathcal H(X,\nu), g\in L^\infty(X,\nu_\rho) \] in the ultraweak topology on $\mathcal B(\mathcal H)$. \section{Majorization of quantum random variables}\label{sec:MD} Recall that if $f\in L^1_\mathcal H(X,\mu I)$ and $s\in \mathcal T(\mathcal H)$ then we define $f_s\in L^1(X,\mu)$ by \[ f_s(x) = \tr(sf(x)) \in L^1(X,\mu). \] We now introduce several possible majorization partial orders which relate to multivariate majorization \cite[Chapter 15]{marshallolkin} and \cite{JoeVerducci}. \begin{definition}\label{def:multi} Suppose $f,g\in L^1_\mathcal H(X,\mu I)$ are self-adjoint, where $\mu$ is a finite, positive measure. We say that \begin{enumerate} \item $f\prec g$ if there exists a bistochastic operator $B\in \mathfrak B(L^1(X,\mu))$ such that $Bg=f$, \item $f\prec_T g$ if $f_t \prec g_t$ for all $t\in \mathcal T(\mathcal H)_{sa}$, and \item $f\prec_S g$ if $f_s \prec g_s$ for all $s\in \mathcal S(\mathcal H)$. \end{enumerate} \end{definition} \begin{proposition} For $f,g\in L^1_\mathcal H([0,1],\mu I)$ self-adjoint we have that \[ f \prec g \ \ \Rightarrow \ \ f\prec_T g \ \ \Rightarrow \ \ f\prec_S g. \] \end{proposition} \begin{proof} The second implication is automatic. For the first implication assume that there is a bistochastic operator $B\in \mathfrak B(L^1(X,\mu))$ such that $Bg=f$. By the definition of the extension of $B$ to $L^1_\mathcal H(X,\mu I_\mathcal H)$, for every $t\in \mathcal T(\mathcal H)_{sa}$ we have that $(Bg)_t = B(g_t)$. Therefore, by \cite{Ryff63}, $f_t = B(g_t) \prec g_t$ for all self-adjoint $t$ and the conclusion follows. \end{proof} If $\mathcal H = \mathbb C$ then the converse is true by the majorization theory of $L^1$, namely Theorem \ref{thm:continuousmajorization}.
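For step functions taking constant values on finitely many intervals of equal measure, as in the examples below, the order $\prec$ reduces to classical vector majorization of the value tuples, which can be tested by comparing decreasing partial sums (the Hardy--Littlewood--P\'olya condition; cf.~\cite{Ryff63}). A minimal sketch of that test, written as a hypothetical helper of ours in plain Python:

```python
def majorizes(w, v, tol=1e-12):
    """Classical test: v is majorized by w iff the sums agree and every
    partial sum of the decreasing rearrangement of v is dominated by
    the corresponding partial sum for w."""
    if len(v) != len(w) or abs(sum(v) - sum(w)) > tol:
        return False
    vs, ws = sorted(v, reverse=True), sorted(w, reverse=True)
    pv = pw = 0.0
    for x, y in zip(vs, ws):
        pv, pw = pv + x, pw + y
        if pv > pw + tol:
            return False
    return True

print(majorizes([3.0, 1.0], [2.0, 2.0]))  # True: averaging (3,1) can give (2,2)
print(majorizes([2.0, 2.0], [3.0, 1.0]))  # False: the reverse fails
```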
However, these partial orders are distinct in higher dimensions. \begin{example} Arising from an example of Joe and Verducci \cite{JoeVerducci}, define $f,g\in L^1_{\mathbb C^2}([0,1],\mu I)$ by \[ f = \left[\begin{smallmatrix}1 \\ & 4\end{smallmatrix}\right]\chi_{[0,\frac{1}{2}]} + \left[\begin{smallmatrix}3 \\ & 2\end{smallmatrix}\right]\chi_{(\frac{1}{2},1]} \quad \textrm{and} \quad g = \left[\begin{smallmatrix}1 \\ & 2\end{smallmatrix}\right]\chi_{[0,\frac{1}{2}]} + \left[\begin{smallmatrix}3 \\ & 4\end{smallmatrix}\right]\chi_{(\frac{1}{2},1]}. \] For every state $s\in \mathcal S(\mathbb C^2)$, whose diagonal will be non-negative numbers $a,b$, we have \begin{align*} f_s & = (a+4b)\chi_{[0,\frac{1}{2}]} + (3a +2b)\chi_{(\frac{1}{2},1]}, \quad \textrm{and}\\ g_s & = (a+2b)\chi_{[0,\frac{1}{2}]} + (3a +4b)\chi_{(\frac{1}{2},1]}. \end{align*} \cite{JoeVerducci} proves that $(a+4b, 3a+2b) \prec (a+2b, 3a+4b)$ as vectors, for all $a,b\geq0$, and so $f\prec_S g$. However, the same paper also points out that $(1-4, 3-2) = (-3,1)$ is not majorized by $(1-2,3-4) = (-1,-1)$ and thus for $t=\left[\begin{smallmatrix}1 \\ & -1\end{smallmatrix}\right]$ we have that $f_t \nprec g_t$. Therefore, $f\prec_S g$ but $f \nprec_T g$. 
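Both claims are easy to confirm numerically; the following spot check (our own sketch, not part of the original argument, using the classical partial-sum test for vector majorization on a grid of non-negative $a,b$):

```python
def majorizes(w, v, tol=1e-9):
    """v is majorized by w in the classical Hardy-Littlewood-Polya sense."""
    if abs(sum(v) - sum(w)) > tol:
        return False
    vs, ws = sorted(v, reverse=True), sorted(w, reverse=True)
    pv = pw = 0.0
    for x, y in zip(vs, ws):
        pv, pw = pv + x, pw + y
        if pv > pw + tol:
            return False
    return True

# (a+4b, 3a+2b) is majorized by (a+2b, 3a+4b) for all sampled a, b >= 0 ...
grid = [(a / 7, b / 7) for a in range(8) for b in range(8)]
assert all(majorizes([a + 2*b, 3*a + 4*b], [a + 4*b, 3*a + 2*b]) for a, b in grid)

# ... but (-3, 1) is not majorized by (-1, -1)
assert not majorizes([-1.0, -1.0], [-3.0, 1.0])
```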
\end{example} \begin{example} Arising from an example of Malamud \cite{Malamud}, define $f,g\in L^1_{\mathbb C^2}([0,1],\mu I)$ by \begin{align*} f & = \left[\begin{smallmatrix}12 \\ & 12\end{smallmatrix}\right]\chi_{[0,\frac{1}{4}]} + \left[\begin{smallmatrix}12 \\ & 12\end{smallmatrix}\right]\chi_{(\frac{1}{4}, \frac{1}{2}]} + \left[\begin{smallmatrix}5 \\ & 3\end{smallmatrix}\right]\chi_{(\frac{1}{2}, \frac{3}{4}]} + \left[\begin{smallmatrix}3 \\ & 5\end{smallmatrix}\right]\chi_{(\frac{3}{4}, 1]}, \quad \textrm{and} \\ g & = \left[\begin{smallmatrix}8 \\ & 16\end{smallmatrix}\right]\chi_{[0,\frac{1}{4}]} + \left[\begin{smallmatrix}16 \\ & 8\end{smallmatrix}\right]\chi_{(\frac{1}{4}, \frac{1}{2}]} + \left[\begin{smallmatrix}0 \\ & 0\end{smallmatrix}\right]\chi_{(\frac{1}{2}, \frac{3}{4}]} + \left[\begin{smallmatrix}8 \\ & 8\end{smallmatrix}\right]\chi_{(\frac{3}{4}, 1]}. \end{align*} Using the same methodology as in the previous example, the results of \cite{Malamud} imply that $f_t \prec g_t$ for every $t\in \mathcal T(\mathbb C^2)_{sa}$. However, they also prove that there can be no bistochastic operator taking $g$ to $f$. Therefore, $f \prec_T g$ but $f\nprec g$. \end{example} Now to the main theory of this section. Recall from the introduction that Komiya \cite{Komiya} proves for $X, Y\in M_{m,n}(\mathbb C)$ that $X \prec Y$ if and only if $\psi(X) \leq \psi(Y)$ for every real-valued, permutation-invariant, convex function $\psi$ on $M_{m,n}(\mathbb C)$. The permutation matrices are significant here because the convex hull of the permutation matrices is the set of bistochastic matrices. For the measure space $(X,\mu)$ we define $\mathcal P_{\operatorname{inv}}$ to be the set of all invertible measure-preserving transformations. In particular, this is the set of all measurable functions $\phi : X\rightarrow X$, with a measurable inverse, that satisfy the measure-preservation property: \[ \mu(\phi^{-1}(E)) = \mu(E), \quad \forall E\in \mathcal O(X).
\] We use the notation $C_\phi$ to denote the right-composition operator: $C_\phi(f)=f\circ\phi$. If $\phi\in \mathcal P_{\operatorname{inv}}$ then $C_\phi$ is a bistochastic operator. Brown \cite{Brown} has proved a similar convexity result for bistochastic operators on $L^1$ (though the paper is in the Markov operator context) under some conditions on the measure space. Namely, the convex hull $\operatorname{conv}(C_\phi:\phi\in \mathcal P_{\operatorname{inv}})$ of the composition operators of invertible measure-preserving maps is dense in the bistochastic operators in the weak operator topology arising from $L^p$ for every $1 < p < \infty$. \begin{proposition}\label{prop:brown} Suppose $X$ is a product of unit intervals and $\mu$ is the corresponding product of Lebesgue measures. If $B$ is a bistochastic operator in $\mathfrak B(L^1(X,\mu))$ then there exists a sequence of bistochastic operators $B_i\in \operatorname{conv}(C_\phi:\phi\in \mathcal P_{\operatorname{inv}})$ such that $B_i$ is WOT-convergent to $B$. Moreover, $\mathfrak B(L^1(X,\mu))$ is WOT-compact and convex. \end{proposition} \begin{proof} Let $B$ be a bistochastic operator on $L^1(X,\mu)$. As mentioned above, Brown \cite[Theorem 1]{Brown} proves that $\operatorname{conv}(C_\phi:\phi\in \mathcal P_{\operatorname{inv}})$ is dense in the bistochastic operators in the weak operator topology, meaning that there is a sequence $B_i\in \operatorname{conv}(C_\phi:\phi\in \mathcal P_{\operatorname{inv}})$ such that \[ \langle B_i h, g \rangle \rightarrow \langle Bh,g\rangle, \quad \forall h,g\in L^\infty(X,\mu). \] Now let $f\in L^1_\mathcal H(X,\mu I_\mathcal H), s\in \mathcal S(\mathcal H), 0\neq g\in L^\infty(X,\mu)$ and $\epsilon > 0$.
By Proposition \ref{prop:denseinstatetopology}, which says that $L^\infty_\mathcal H(X,\mu I_\mathcal H) \simeq L^\infty(X,\mu)\ \bar\otimes\ \mathcal B(\mathcal H)$ is dense in $L^1_\mathcal H(X,\mu I_\mathcal H)$ in the state topology, there exists $h = \sum_{j=1}^m h_j A_j$ with $h_j\in L^\infty(X,\mu), 0\neq A_j\in\mathcal B(\mathcal H), 1\leq j\leq m$ such that \[ \|(f - h)_s\|_1 < \frac{\epsilon}{9\|g\|_\infty}. \] By the above we know that there exists an $N\in \mathbb N$ such that for all $i\geq N$ and $1\leq j\leq m$ we have that \[ \left|\left\langle \big(B_i-B\big)(h_j), g\right\rangle\right| < \frac{\epsilon}{9m\|A_j\|}. \] Hence, for all $i\geq N$, using Lemma \ref{lemma:CauchySchwarz} and Corollary \ref{cor:bracketmodularity}, \begin{align*} &\left|\tr\left(s\left\langle \big(B_i - B\big)(f),g I_\mathcal H \right\rangle\right)\right| \\ & \quad \quad \leq \left|\tr\left(s\left\langle \big(B_i - B\big)(f-h), g I_\mathcal H\right\rangle\right)\right| +\left|\tr\left(s\left\langle \big(B_i - B\big)(h), gI_\mathcal H \right\rangle\right)\right| \\ & \quad \quad\leq 4\left\|(B_i - B)((f-h)_s)\right\|_1\|g\|_\infty + \left\|\sum_{j=1}^m\langle (B_i - B)(h_j)A_j, gI_\mathcal H\rangle \right\| \\ & \quad \quad\leq 8\|(f-h)_s\|_1\|g\|_\infty + \sum_{j=1}^m \left\|\langle (B_i - B)(h_j), gI_\mathcal H\rangle A_j\right\| \\ & \quad \quad < \frac{8\epsilon}{9} + \sum_{j=1}^m \frac{\epsilon}{9m} = \epsilon. \end{align*} Therefore, $\langle B_i(f), gI_\mathcal H\rangle$ converges to $\langle B(f),gI_\mathcal H\rangle$ in the ultraweak topology of $\mathcal B(\mathcal H)$ for all $f\in L^1_\mathcal H(X,\mu I_\mathcal H)$ and $g\in L^\infty(X,\mu)$. The last statement of the proposition easily follows by Brown's theorem \cite[Theorem 1]{Brown}, which proves that $\mathfrak B(L^1(X,\mu))$ is convex and compact in the weak topology. This is accomplished by the fact that $\mathfrak B(L^1(X,\mu))$ is a WOT-closed subset of the unit ball.
Additionally, the closure of a convex hull will be convex. \end{proof} \begin{definition} A real-valued convex function $\psi : L^1_\mathcal H(X, \mu I) \rightarrow \mathbb R$ is said to be {\em permutation-invariant} if for every $ \sigma \in \mathcal P_{\operatorname{inv}}$ we have \[ \psi(f\circ\sigma) = \psi(f) \quad \forall f\in L^1_\mathcal H(X, \mu I). \] \end{definition} \begin{theorem} Suppose $X$ is a product of unit intervals and $\mu$ is the corresponding product of Lebesgue measures. Let $\tilde f, f\in L^1_\mathcal H(X, \mu I)$. Then $\tilde f \prec f$ if and only if $\psi(\tilde f) \leq \psi(f)$ for every real-valued, weakly-continuous, permutation-invariant, convex function on $L^1_\mathcal H(X, \mu I)$. \end{theorem} \begin{proof} The following proof is Komiya's argument \cite{Komiya} adapted to our context. Let $B\in \operatorname{conv}(C_\phi:\phi\in \mathcal P_{\operatorname{inv}})$. Namely, there exist $\sigma_1,\dots, \sigma_n\in \mathcal P_{\operatorname{inv}}$ and $\lambda_1,\dots,\lambda_n$ positive numbers where $\sum_{i=1}^n \lambda_i = 1$ such that $B = \sum_{i=1}^n \lambda_i C_{\sigma_i}$. For any real-valued, weakly-continuous, permutation-invariant, convex function $\psi$ on $L^1_\mathcal H(X, \mu I)$ we have \begin{align*} \psi(B(f)) &= \psi\left(\sum_{i=1}^n \lambda_i C_{\sigma_i}(f)\right) \\ & \leq \sum_{i=1}^n \lambda_i\psi(C_{\sigma_i}(f)) \\ & = \sum_{i=1}^n \lambda_i \psi(f) \\ & = \psi(f). \end{align*} Now suppose that $B$ is an arbitrary bistochastic operator in $\mathfrak B(L^1(X,\mu))$. By Proposition \ref{prop:brown} there exists a sequence $\{B_i\}$ of bistochastic operators in the convex hull described above such that $B_i$ WOT-converges to $B$. Because $\psi(B_i f) \leq \psi(f)$ for every $i$ and $\psi$ is weakly continuous, it follows that $\psi(Bf) \leq \psi(f)$. For the converse, assume that $\psi(\tilde f) \leq \psi(f)$ for all real-valued, weakly-continuous, permutation-invariant, convex functions $\psi$.
For $s_1,\dots, s_m \in \mathcal S(\mathcal H)$ and $g_1,\dots, g_m \in L^\infty(X,\mu)$, a general element of $\mathcal F(X,\mu I_\mathcal H)$ has the form $\varphi(\cdot) = \sum_{i=1}^m \tr(s_i\langle \cdot,g_i I_\mathcal H\rangle)$. Now consider the function \[ \psi_{\varphi}(\cdot) = \sup\{Re(\varphi\circ B(\cdot)) : B \in \mathfrak{B}(L^1(X,\mu))\}. \] By Lemma \ref{lemma:CauchySchwarz} \begin{align*} |Re(\varphi\circ B(h))| & \leq \left|\sum_{i=1}^m \tr(s_i\langle B(h),g_i I_\mathcal H\rangle) \right| \\ & \leq \sum_{i=1}^m |\tr(s_i\langle B(h),g_iI_\mathcal H\rangle)| \\ & \leq \sum_{i=1}^m \|\langle B(h), g_i I_\mathcal H\rangle\| \\ & \leq \sum_{i=1}^m 4\|B(h)\|_1\|g_i\|_\infty \\ & \leq \|h\|_1\left(4\sum_{i=1}^m \|g_i\|_\infty\right) \end{align*} and so $\psi_{\varphi}(h)$ exists for every $h\in L^1_\mathcal H(X,\mu I_\mathcal H)$ since it is the supremum of a bounded set of real numbers. Now suppose $h_i\in L^1_\mathcal H(X,\mu I_\mathcal H)$ weakly converges to $h$. Since $B\in \mathfrak B(L^1(X,\mu))$ is norm continuous, it is also weak-weak continuous in the $\mathcal F(X,\mu I_\mathcal H)$-topology. Thus, $B(h_i)$ weakly converges to $B(h)$ and $\psi_{\varphi}$ is then weakly continuous. For any $\sigma\in \mathcal P_{\operatorname{inv}}$ we have that the map $B\mapsto BC_\sigma$ is a bijection on $\mathfrak B(L^1(X,\mu))$. Hence, \begin{align*} \psi_{\varphi}(C_\sigma(h)) & = \sup\{Re(\varphi\circ B(C_\sigma(h))) : B \in \mathfrak{B}(L^1(X,\mu))\} \\ & = \psi_{\varphi}(h) \end{align*} and $\psi_{\varphi}$ is permutation invariant. Lastly, $\psi_\varphi$, being the supremum of a family of linear functions, is sublinear and so is convex. Thus, by assumption we have that $\psi_{\varphi}(\tilde f) \leq \psi_{\varphi}(f)$ for every $\varphi \in \mathcal F(X,\mu I_\mathcal H)$. Toward a contradiction, assume that $\tilde f\neq B(f)$ for every choice of $B\in \mathfrak{B}(L^1(X,\mu))$.
From the last proposition we know that $\{B(f) : B\in \mathfrak B(L^1(X,\mu))\}$ is weakly compact in the $\mathcal F(X,\mu I_\mathcal H)$-topology as well as convex. Hence, the Hahn-Banach Separation Theorem implies that there exist $\varphi \in \mathcal F(X,\mu I_\mathcal H)$ and $t\in \mathbb R$ such that \[ Re(\varphi(\tilde f)) > t > Re(\varphi(B(f))), \ \forall B\in \mathfrak{B}(L^1(X,\mu)). \] Therefore, \[ \psi_{\varphi}(\tilde f) \geq Re(\varphi(\tilde f)) > \psi_{\varphi}(f), \] a contradiction. \end{proof} \section*{Acknowledgements} S.P.\ was supported by NSERC Discovery Grant number 1174582, the Canada Foundation for Innovation (CFI) grant number 35711, and the Canada Research Chairs (CRC) Program grant number 231250. C.R. was supported by NSERC Discovery Grant 2019-05430. The authors would like to thank the anonymous reviewer for their helpful comments and suggestions. \begin{bibdiv} \begin{biblist} \bib{BK}{article}{ author={Bhatia, Rajendra}, author={Kittaneh, Fuad}, title={The matrix arithmetic-geometric mean inequality revisited}, date={2008}, journal={Linear Algebra and Its Applications}, volume={428}, number={8-9}, pages={2177\ndash 2191}, review={\MR{2401646}}, doi={10.1016/j.laa.2007.11.030}, } \bib{Brown}{article}{ author={Brown, James}, title={Approximation theorems for Markov operators}, date={1966}, journal={Pacific Journal of Mathematics}, volume={16}, number={1}, pages={13\ndash 23}, review={\MR{192552}}, } \bib{BuschOp}{book}{ author={Busch, Paul}, author={Grabowski, Marian}, author={Lahti, Pekka~J}, title={Operational quantum physics}, publisher={Springer Science \& Business Media}, date={1997}, volume={31}, review={\MR{1356220}}, } \bib{BuschQT}{book}{ author={Busch, Paul}, author={Lahti, Pekka~J}, author={Mittelstaedt, Peter}, title={The quantum theory of measurement}, publisher={Springer Science \& Business Media}, date={1996}, volume={2}, review={\MR{1419313}}, } \bib{chiri2007}{article}{ author={Chiribella, Giulio}, author={D'Ariano,
Giacomo~Mauro}, author={Schlingemann, Dirk}, title={How continuous quantum measurements in finite dimensions are actually discrete}, date={2007}, journal={Physical review letters}, volume={98}, number={19}, pages={190403}, review={\MR{2308924}}, doi={10.1103/PhysRevLett.98.190403}, } \bib{chiri2010}{article}{ author={Chiribella, Giulio}, author={D'Ariano, Giacomo~Mauro}, author={Schlingemann, Dirk}, title={Barycentric decomposition of quantum measurements in finite dimensions}, date={2010}, journal={Journal of mathematical physics}, volume={51}, number={2}, pages={022111}, review={\MR{2605024}}, doi={10.1063/1.3298681}, } \bib{Chong}{article}{ author={Chong, Kong~Ming}, title={Some extensions of a theorem of {H}ardy, {L}ittlewood and {P}{\'o}lya and their applications}, date={1974}, journal={Canad. J. Math}, volume={26}, pages={1321\ndash 1340}, review={\MR{352377}}, doi={10.4153/CJM-1974-126-1}, } \bib{Davies}{book}{ author={Davies, Edward~Brian}, title={Quantum theory of open systems}, publisher={Academic Press London}, date={1976}, review={\MR{0489429}}, } \bib{Day}{article}{ author={Day, Peter~W}, title={Decreasing rearrangements and doubly stochastic operators}, date={1973}, journal={Transactions of the American Mathematical Society}, volume={178}, pages={383\ndash 392}, review={\MR{318962}}, doi={10.2307/1996707}, } \bib{clean4}{article}{ author={Farenick, Douglas}, author={Floricel, Remus}, author={Plosker, Sarah}, title={Approximately clean quantum probability measures}, date={2013}, journal={Journal of Mathematical Physics}, volume={54}, number={5}, pages={052201}, review={\MR{3098922}}, doi={10.1063/1.4803682}, } \bib{FK}{article}{ author={Farenick, Douglas}, author={Kozdron, Michael~J}, title={Conditional expectation and {B}ayes' rule for quantum random variables and positive operator valued measures}, date={2012}, journal={Journal of Mathematical Physics}, volume={53}, number={4}, pages={042201}, review={\MR{2953266}}, doi={10.1063/1.3703069}, } 
\bib{FKP}{article}{ author={Farenick, Douglas}, author={Kozdron, Michael~J}, author={Plosker, Sarah}, title={Spectra and variance of quantum random variables}, date={2016}, journal={Journal of Mathematical Analysis and Applications}, volume={434}, number={2}, pages={1106\ndash 1122}, review={\MR{3415711}}, doi={10.1016/j.jmaa.2015.09.055}, } \bib{FPS}{article}{ author={Farenick, Douglas}, author={Plosker, Sarah}, author={Smith, Jerrod}, title={Classical and nonclassical randomness in quantum measurements}, date={2011}, journal={Journal of Mathematical Physics}, volume={52}, number={12}, pages={122204}, review={\MR{2907638}}, doi={10.1063/1.3668081}, } \bib{Larson}{article}{ author={Han, Deguang}, author={Larson, David}, author={Liu, Bei}, author={Liu, Rui}, title={Operator-valued measures, dilations, and the theory of frames}, date={2014}, ISSN={0065-9266}, journal={Mem. Amer. Math. Soc.}, volume={229}, number={1075}, pages={viii+84}, review={\MR{3186831}}, } \bib{Haagerup}{article}{ author={Haagerup, Uffe}, title={$L\sp{p}$-spaces associated with an arbitrary von Neumann algebra}, language={English, with French summary}, conference={ title={Alg\`ebres d'op\'{e}rateurs et leurs applications en physique math\'{e}matique}, address={Proc. Colloq., Marseille}, date={1977}, }, book={ series={Colloq. Internat. CNRS}, volume={274}, publisher={CNRS, Paris}, }, date={1979}, pages={175--184}, review={\MR{560633}}, } \bib{HLP}{article}{ author={Hardy, Godfrey~H}, author={Littlewood, John~E}, author={P{\'o}lya, Gyorgy}, title={Some simple inequalities satisfied by convex functions}, date={1929}, journal={Messenger Math}, volume={58}, number={145-152}, pages={310}, } \bib{Hiai}{article}{ author={Hiai, Fumio}, title={Log-majorizations and norm inequalities for exponential operators}, conference={ title={Linear operators}, address={Warsaw}, date={1994}, }, book={ series={Banach Center Publ.}, volume={38}, publisher={Polish Acad. Sci. Inst. 
Math., Warsaw}, }, date={1997}, pages={119--181}, review={\MR{1457004}}, } \bib{Holevo}{book}{ author={Holevo, Alexander~S}, title={Statistical structure of quantum theory}, publisher={Springer Science \& Business Media}, date={2003}, volume={67}, review={\MR{1889193}}, doi={10.1007/3-540-44998-1}, } \bib{JoeVerducci}{article}{ author={Joe, Harry}, author={Verducci, Joseph}, title={Multivariate majorization by positive combinations}, date={1992}, journal={Lecture Notes-Monograph Series}, pages={159\ndash 181}, review={\MR{1228063}}, doi={10.1214/lnms/1215461950}, } \bib{Komiya}{article}{ author={Komiya, Hidetoshi}, title={Necessary and sufficient conditions for multivariate majorization}, date={1983}, journal={Linear algebra and its applications}, volume={55}, pages={147\ndash 154}, review={\MR{719870}}, doi={10.1016/0024-3795(83)90173-8}, } \bib{Kosaki}{article}{ author={Kosaki, Hideki}, title={An inequality of Araki-Lieb-Thirring (von Neumann algebra case)}, journal={Proc. Amer. Math. Soc.}, volume={114}, date={1992}, number={2}, pages={477--481}, issn={0002-9939}, review={\MR{1065951}}, doi={10.2307/2159671}, } \bib{Malamud}{article}{ author={Malamud, S.~M.}, title={Inverse spectral problem for normal matrices and the {G}auss-{L}ucas theorem}, date={2005}, ISSN={0002-9947}, journal={Trans. Amer. Math. Soc.}, volume={357}, number={10}, pages={4043\ndash 4064}, review={\MR{2159699}}, doi={10.1090/S0002-9947-04-03649-9}, } \bib{marshallolkin}{book}{ author={Marshall, AW}, author={Olkin, I}, author={Arnold, BC}, title={Inequalities: Theory of majorization and its applications}, publisher={Springer-Verlag, New York}, date={2011}, review={\MR{2759813}}, doi={10.1007/978-0-387-68276-1}, } \bib{MPR}{article}{ author={McLaren, Darian}, author={Plosker, Sarah}, author={Ramsey, Christopher}, title={On operator valued measures}, journal={Houston J. 
Math.}, volume={46}, date={2020}, number={1}, pages={201--226}, review={\MR{4137283}}, } \bib{Paulsen}{book}{ author={Paulsen, Vern}, title={Completely bounded maps and operator algebras}, series={Cambridge Studies in Advanced Mathematics}, publisher={Cambridge University Press, Cambridge}, date={2002}, volume={78}, review={\MR{1976867}}, } \bib{PRLyapunov}{article}{ author={Plosker, Sarah}, author={Ramsey, Christopher}, title={An operator-valued {L}yapunov theorem}, date={2019}, journal={Journal of Mathematical Analysis and Applications}, volume={469}, number={1}, pages={117\ndash 125}, review={\MR{3857513}}, doi={10.1016/j.jmaa.2018.09.003}, } \bib{pol}{article}{ author={Polterovich, Leonid}, title={Symplectic geometry of quantum noise}, date={2014}, journal={Communications in Mathematical Physics}, volume={327}, number={2}, pages={481\ndash 519}, review={\MR{3183407}}, doi={10.1007/s00220-014-1937-9}, } \bib{Ryff63}{article}{ author={Ryff, John}, title={On the representation of doubly stochastic operators}, date={1963}, journal={Pacific Journal of Mathematics}, volume={13}, number={4}, pages={1379\ndash 1386}, review={\MR{163171}}, } \bib{Ryff1965}{article}{ author={Ryff, John~V}, title={Orbits of $l^1$-functions under doubly stochastic transformation}, date={1965}, journal={Transactions of the American Mathematical Society}, volume={117}, pages={92\ndash 100}, review={\MR{209866}}, doi={10.2307/1994198}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction}\label{sec:intro} With the observation of gravitational waves from merging black holes~\cite{Abbott:2016blz} and the first direct picture of a black hole shadow in the center of the galaxy M87~\cite{Akiyama:2019cqa}, the possibilities to observe the behavior of gravity in the strong-field regime have increased enormously. The newly obtained data is the perfect basis to understand the viability range of general relativity and possible modified gravity theories suggested as its generalizations. Here we derive the influence of a teleparallel modification of general relativity on the photon sphere of black holes and on the perihelion shift of elliptic orbits in spherical symmetry. This is a step towards investigating the influence of teleparallel gravity on more realistic spinning black holes with axial symmetry. Teleparallel theories of gravity are formulated in terms of a tetrad of a spacetime metric and a spin connection, instead of in terms of a spacetime metric and its Levi-Civita connection~\cite{Aldrovandi:2013wha}. This structure allows for the construction of a huge variety of theories of gravity beyond general relativity, among them the most famous model, the so-called $f(T)$ gravity~\cite{Cai:2015emx,Ferraro:2006jd}. In this theory, the Lagrangian is given by an arbitrary function $f$ of the torsion scalar $T$, which defines the teleparallel equivalent formulation of general relativity (TEGR). Numerous viability criteria for $f(T)$ gravity have been derived in the context of cosmology~\cite{Bengochea:2008gz,Cai:2011tc,Bamba:2010wb,Dent:2011zz}. However, not much work has been done in spherical and axial symmetry, mostly due to the lack of analytic solutions of the field equations. To solve the $f(T)$ gravity field equations in spherical symmetry in all generality for arbitrary $f$ is a difficult task. The main challenge is to find a tetrad to which one can consistently associate a vanishing spin-connection.
This tetrad not only has to satisfy the symmetric part and anti-symmetric part (the spin-connection part) of the field equations~\cite{Tamanini:2012hg}, but must also yield a torsion scalar which vanishes in the Minkowski spacetime limit. A further subtlety is that some solutions of the $f(T)$ field equations yield a constant torsion scalar $T$. In this case $f(T)$ gravity is identical to TEGR plus a cosmological constant and nothing new is obtained. The latter feature is for example present in the first study which tried to find spherically symmetric solutions~\cite{Ferraro:2011ks} and also in a later study which used the Noether symmetry approach to find solutions~\cite{Paliathanasis:2014iva}. There are only a few publications deriving exact solutions with the correct field equations (see for example~\cite{Daouda:2012nj}). In addition, some regular black hole solutions (both perturbative and exact) have been correctly derived in~\cite{Aftergood:2014wla,Boehmer:2019uxv}. When one considers matter, there are some works which have studied the possibility of constructing stars or wormhole solutions in different teleparallel theories of gravity~\cite{Bohmer:2011si,Jamil:2012ti,Bahamonde:2016jqq,Jawad:2015uea,Horvat:2014xwa,Boehmer:2011gw,Ilijic:2018ulf,Pace:2017dpu,Pace:2017aon}. Overall, the issue of finding exact spherically symmetric solutions in $f(T)$-gravity is still an open problem. Instead of looking for full analytical solutions, an alternative way to study the astrophysical effects of modified theories of gravity is to employ perturbation theory. The influence of deviations from TEGR can be investigated by setting $f(T) = T + (1/2)\epsilon \alpha T^p$. This model contains TEGR (and GR) in the limit where the perturbation parameter $\epsilon$ or the coupling constant $\alpha$ goes to zero. It is assumed that the deviation from TEGR is small ($\epsilon \ll 1$), and hence only first order terms in $\epsilon$ are relevant in all calculations.
In this paper, we consider perturbations around two different background geometries: Minkowski spacetime and Schwarzschild spacetime. For the first case we keep the exponent $p >1/2$ and solve the spherically symmetric perturbative $f(T)$ field equations in vacuum. We find that, to first order, the teleparallel perturbation of general relativity has no influence at all. This finding is in conflict with results found earlier in \cite{Ruggiero:2015oka}. A problem with these earlier derivations is that the $f(T)$ field equations presented in \cite{Ruggiero:2015oka}, Eqs.~(8) through (10), do not have Schwarzschild geometry as a solution for $f(T) = T$ and vanishing matter. Moreover, the tetrad used there does not yield a torsion scalar which vanishes in the Minkowski spacetime limit. These shortcomings made us redo the calculations, while paying particular attention to the consistency of the perturbation theory. For the second case, the first order field equations are more involved and cannot be solved for general $p$. To find the astrophysical impact of the parameter $p$, we derive perturbative solutions for $p=2$ to $10$, from which we calculate the circular particle orbits for massless particles and the perihelion shift for nearly circular massive particle orbits. The circular orbits of massless particles define the photon sphere, which is of particular interest since it defines the edge of the shadow of the black hole. For $p=2$ the perihelion shift has already been studied in the literature and we recover the results from \cite{DeBenedictis:2016aze}. Comparing our calculation with the previous one demonstrates explicitly that the covariant formulation of teleparallel gravity works well. We employ a vanishing spin connection and a non-diagonal tetrad, while in \cite{DeBenedictis:2016aze} a non-vanishing spin connection and a diagonal tetrad were used.
The main aim of this paper is to present a careful derivation of the first order $f(T)$ field equations in the single tetrad framework for the models mentioned above and to derive the perturbative solutions around Minkowski and Schwarzschild geometry. Eventually this procedure gives insights into the phenomenological consequences of the teleparallel corrections to general relativity. This work prepares a more general study in which we will derive the phenomenological consequences of teleparallel perturbations of Kerr geometry, with and without cosmological constant. The article is structured as follows: In Sec.~\ref{sec:covtelegrav} we give an overview of the covariant formulation of $f(T)$ gravity and then find the corresponding field equations for any spherically symmetric spacetime. Sec.~\ref{sec:weak} is devoted to studying the weak power-law $f(T)$ model for perturbations around Minkowski and Schwarzschild geometries to find the correct metric coefficients which solve the first order field equations. The particle motion phenomenology for the squared power-law $f(T)$ case for the Schwarzschild background is studied in Sec.~\ref{sec:pheno}, deriving the deviation from TEGR (or GR) of the photon sphere and the perihelion shift. We summarize our main results in Sec.~\ref{sec:conclusion}. Throughout the paper we denote the tetrad and its inverse by $h^{a}{}_{\mu}$ and $h_{a}{}^{\mu}$, respectively, where Latin indices refer to tangent space indices and Greek indices to spacetime indices. Our signature convention is $(+,-,-,-)$ and we work in units where $G=c=1$. \section{Covariant formulation of $f(T)$ gravity in spherical symmetry}\label{sec:covtelegrav} Throughout this paper we employ the covariant formulation of teleparallel gravity \cite{Krssak:2015oua} in the Weitzenb\"ock gauge, also called the pure tetrad formalism. That means we consider a tetrad, its torsion, and a vanishing spin connection.
All degrees of freedom are encoded in the tetrad which, in the end, solves the symmetric and the antisymmetric part of the $f(T)$ field equations, and yields a vanishing torsion scalar in the Minkowski spacetime limit. We would like to stress that this is equivalent to considering a non-vanishing spin connection and another tetrad, which together solve the symmetric and antisymmetric parts of the field equations \cite{Golovnev:2017dox, Hohmann:2017duq, Krssak:2015oua}. \subsection{Covariant teleparallel gravity} The fundamental variables in teleparallel theories of gravity are the tetrad of a Lorentzian metric $g = \eta_{ab} \theta^a \otimes \theta^b$, which can be expressed in local coordinates as \begin{align} \theta^a = h^a{}_{\mu}\mathrm{d} x^\mu, \quad e_a = h_a{}^\mu \partial_\mu,\quad \theta^a(e_b) = \delta^a_b \quad \Rightarrow \quad g_{\mu\nu} = \eta_{ab}h^a{}_\mu h^b{}_\nu \,, \end{align} and a flat, metric compatible spin connection that is generated by local Lorentz matrices $\Lambda^a{}_b$ \begin{align} \omega^a{}_{b\mu} = \omega^a{}_{b\mu}(\Lambda) = \Lambda^a{}_c \partial_\mu (\Lambda^{-1})^c{}_{b},\quad \eta_{ab}\Lambda^a{}_c\Lambda^b{}_d = \eta_{cd}\,, \end{align} which give rise to the torsion tensor \begin{align} T^a{}_{\mu\nu} = 2 \left(\partial_{[\mu}h^a{}_{\nu]} + \omega^a{}_{b[\mu} h^b{}_{\nu]}\right)\,. \end{align} In the Weitzenb\"ock gauge, the spin connection is set to zero ($\omega^a{}_{b\mu}=0$) and so the torsion tensor reduces to $T^a{}_{\mu\nu} = 2\partial_{[\mu}h^a{}_{\nu]}$. From here on we will work in the Weitzenb\"ock gauge. This is equivalent to having a non-vanishing spin connection and a tetrad which together solve the anti-symmetric part of the field equations. A detailed discussion of this equivalence can be found in Refs.~\cite{Golovnev:2017dox,Hohmann:2017duq,Krssak:2015oua}.
The teleparallel equivalent of General Relativity (TEGR) is constructed from the action \begin{align} S_{\rm TEGR}= \int \mathrm{d}^4x\ |h| \left( \frac{1}{2\kappa^2} T + \mathcal{L}_{\rm m}(g,\Psi) \right)\,,\label{TEGR} \end{align} where $\kappa^2=8\pi$, $|h|=\det (h^a{}_\mu)=\sqrt{-g}$ is the determinant of the tetrad, $\mathcal{L}_{\rm m}(g,\Psi)$ is the Lagrangian of matter minimally coupled to gravity via the metric generated by the tetrads, and the so-called torsion scalar $T$ reads as follows \begin{align} T = T^a{}_{\mu\nu}S_a{}^{\mu\nu} =\frac{1}{2} \left(h_a{}^\sigma g^{\rho \mu} h_b{}^\nu + 2 h_b{}^\rho g^{\sigma \mu} h_a{}^\nu + \frac{1}{2} \eta_{ab} g^{\mu\rho} g^{\nu\sigma} \right) T^a{}_{\mu\nu} T^b{}_{\rho\sigma}\,. \end{align} The superpotential $S_a{}^{\mu\nu}$ is given by $S_a{}^{\mu\nu} = \frac{1}{2}(K^{\mu\nu}{}_a - h_a{}^\mu T_\lambda{}^{\lambda\nu} + h_a{}^\nu T_\lambda{}^{\lambda\mu})$ in terms of the contortion tensor $K^{\mu\nu}{}_a = \frac{1}{2}(T^{\nu\mu}{}_a + T_a{}^{\mu\nu} - T^{\mu\nu}{}_a)$, and the appearing components of the metric are understood as functions of the tetrad. The modified teleparallel theory of gravity we are investigating is $f(T)$ gravity, which is a straightforward generalisation of the action~\eqref{TEGR} as follows: \begin{align} S_{f(T)} = \int \mathrm{d}^4x\ |h| \left( \frac{1}{2\kappa^2} f(T) + \mathcal{L}_{\rm m}(g,\Psi) \right)\,. \end{align} The function $f$ is an arbitrary function of the torsion scalar. Variation with respect to the tetrad $h^a{}_\mu$ yields the field equations~\cite{Krssak:2015oua} \begin{align}\label{eq:fT} \frac{1}{4}f(T) h_a{}^\mu + f_T\ \left( T^b{}_{\nu a} S_b{}^{\mu \nu } + \frac{1}{h}\partial_{\nu}(h S_a{}^{\mu \nu }) \right) + f_{TT}\ S_a{}^{\mu\nu} \partial_\nu T &= \frac{1}{2}\kappa^2 \Theta_a{}^\mu\,, \end{align} with $\Theta_a{}^\mu$ being the energy-momentum tensor of the matter field, $f_T=\partial f/\partial T$ and $f_{TT}=\partial^2 f/\partial T^2$.
They can be rewritten purely in terms of spacetime indices by contraction with $g_{\mu\rho}$ and $h^a{}_\sigma$ to take the form \begin{align} H_{\sigma\rho} = \frac{1}{2}\kappa^2 \Theta_{\sigma\rho}\,. \end{align} Their symmetric part is sourced by the energy-momentum tensor, while their anti-symmetric part is a vacuum constraint for the matter models we consider. The latter is equal to the variation of the action with respect to the flat spin-connection components \cite{Golovnev:2017dox, Hohmann:2017duq}, \begin{align} H_{(\sigma\rho)} = \frac{1}{2}\kappa^2 \Theta_{(\sigma\rho)}, \quad H_{[\sigma\rho]} =0\,. \end{align} The explicit form of these equations can be found, for example, in Eqs.~(26) and (30) in \cite{Hohmann:2018rwf} by setting the scalar field $\phi$ to zero. We do not display them here since we will derive the spherically symmetric field equations directly from \eqref{eq:fT}. \subsection{Spherical Symmetry in $f(T)$ gravity}\label{ref:ssecSphfT} In this section, the $f(T)$ field equations for a spherically symmetric spacetime will be derived. Let us start with the following spherically symmetric metric in standard spherical coordinates $(t,r,\theta, \phi)$: \begin{equation} ds^2=A \,dt^2-B\, dr^2-r^2(d\theta^2+\sin^2\theta d\phi^2)\,,\label{metric} \end{equation} where $A=A(r)$ and $B=B(r)$ are positive functions of the radial coordinate. This means we consider the region outside possible black hole horizons. To calculate the field equations, we employ the following off-diagonal tetrad~\cite{Bohmer:2011si} \begin{equation} h^a{}_{\nu}=\left( \begin{array}{cccc} \sqrt{A} & 0 & 0 & 0 \\ 0 & \sqrt{B} \cos (\phi ) \sin (\theta ) & r \cos (\phi ) \cos (\theta ) & -r \sin (\phi ) \sin (\theta ) \\ 0 & \sqrt{B} \sin (\phi ) \sin (\theta ) & r \sin (\phi ) \cos (\theta ) & r \cos (\phi ) \sin (\theta ) \\ 0 & \sqrt{B} \cos (\theta ) & -r \sin (\theta ) & 0 \\ \end{array} \right)\label{tetrad}\,.
\end{equation} This tetrad together with a vanishing spin connection consistently defines a spherically symmetric teleparallel geometry. Hence, it is consistent to derive the $f(T)$ field equations \eqref{eq:fT} from this tetrad with vanishing spin connection. Equivalently, one could choose a diagonal tetrad with a non-vanishing spin connection~\cite{Hohmann:2019nat}, as was done in \cite{DeBenedictis:2016aze}. For this setup the torsion scalar becomes \begin{align}\label{scaletorsion} T= -\frac{2 \left(\sqrt{B(r)}-1\right) \left(r A'(r)-A(r) \sqrt{B(r)}+A(r)\right)}{r^2 A(r) B(r)}\,. \end{align} Clearly, if $A\rightarrow 1$ and $B\rightarrow 1$ (Minkowski limit), the torsion scalar vanishes. Calculating the field equations~\eqref{eq:fT} contracted with $h^a{}_\sigma$ for an anisotropic fluid energy-momentum tensor, defined by the energy density $\rho = \Theta^0{}_0$ and the radial and lateral pressures $- p_{r} = \Theta^1{}_1 $ and $- p_{l} = \Theta^2{}_2 $, respectively, we find that the non-vanishing independent spherically symmetric $f(T)$ field equations are the diagonal components $H^\mu{}_\mu$ (no sum taken), \begin{eqnarray} \frac{1}{2} \kappa ^2 \rho&=&\frac{r B(\sqrt{B}-1) A'+A (r B'+2 B^{3/2}-2 B)}{2 r^2 A B^2}f_T+\frac{(\sqrt{B}-1) }{r B}T' f_{TT}+\frac{1}{4} f \,,\label{Eq1}\\ \frac{1}{2} \kappa ^2 p_{r}&=& - \frac{r (\sqrt{B}-2) A'+2 A (\sqrt{B}-1) }{2 r^2 A B}f_{T} - \frac{1}{4} f\,,\label{Eq2}\\ \frac{1}{2} \kappa ^2 p_{l}&=&\frac{-r^2 B A'^2+r A \left(-r A' B'-4 B^{3/2} A'+2 B\left(r A''+3 A'\right)\right)+A^2 \left(-2 r B'-8 B^{3/2}+4 B^2+4 B\right)}{8 r^2 A^2 B^2}f_{T}\nonumber\\ &&+\frac{r A'-2 A (\sqrt{B}-1) }{4 r A B}T'f_{TT}-\frac{1}{4} f\,,\label{Eq3} \end{eqnarray} where primes denote derivatives with respect to the radial coordinate. There are only three independent equations since $H^\phi{}_\phi \sim H^\theta{}_\theta$.
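The torsion scalar \eqref{scaletorsion} can be spot-checked against a brute-force computation directly from the tetrad \eqref{tetrad} in the Weitzenb\"ock gauge. The following \texttt{sympy} sketch (ours, not part of the original derivation; the sample profiles $A=1+r^2$, $B=1+r$ and the evaluation point are arbitrary choices) builds the torsion tensor, contracts it into the torsion scalar, and compares with \eqref{scaletorsion} numerically:

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi', positive=True)
x = [t, r, th, ph]
eta = sp.diag(1, -1, -1, -1)

# sample profile functions; the check can be repeated for any positive A(r), B(r)
A, B = 1 + r**2, 1 + r
sA, sB = sp.sqrt(A), sp.sqrt(B)

# off-diagonal tetrad of Eq. (tetrad); rows = a, columns = mu
h = sp.Matrix([
    [sA, 0, 0, 0],
    [0, sB*sp.cos(ph)*sp.sin(th), r*sp.cos(ph)*sp.cos(th), -r*sp.sin(ph)*sp.sin(th)],
    [0, sB*sp.sin(ph)*sp.sin(th), r*sp.sin(ph)*sp.cos(th),  r*sp.cos(ph)*sp.sin(th)],
    [0, sB*sp.cos(th), -r*sp.sin(th), 0]])

g = (h.T*eta*h).applyfunc(sp.trigsimp)        # g_{mu nu} = eta_ab h^a_mu h^b_nu
ginv = g.inv()
hinv = (ginv*h.T*eta).applyfunc(sp.trigsimp)  # hinv[mu, a] = h_a^mu

# Weitzenboeck torsion with spacetime indices: T^rho_{mu nu}
Tor = [[[sum(hinv[rho, a]*(sp.diff(h[a, n], x[m]) - sp.diff(h[a, m], x[n]))
             for a in range(4))
         for n in range(4)] for m in range(4)] for rho in range(4)]

# evaluate at a generic point and contract numerically
pt = {r: 1.3, th: 0.8, ph: 0.6}
Tn = [[[float(Tor[s][m][n].subs(pt)) for n in range(4)] for m in range(4)] for s in range(4)]
gn = [[float(g[m, n].subs(pt)) for n in range(4)] for m in range(4)]
gi = [[float(ginv[m, n].subs(pt)) for n in range(4)] for m in range(4)]

R4 = range(4)
# T = (1/4) T^{rho mu nu} T_{rho mu nu} + (1/2) T^{rho mu nu} T_{nu mu rho} - T^rho_{mu rho} T^{nu mu}_nu
term1 = sum(Tn[s][m][n]*Tn[s2][m2][n2]*gn[s][s2]*gi[m][m2]*gi[n][n2]
            for s in R4 for m in R4 for n in R4 for s2 in R4 for m2 in R4 for n2 in R4)/4
term2 = sum(Tn[s][m][n]*gi[m][a]*Tn[n][a][s] for s in R4 for m in R4 for n in R4 for a in R4)/2
v = [sum(Tn[l][m][l] for l in R4) for m in R4]
term3 = -sum(v[m]*gi[m][n]*v[n] for m in R4 for n in R4)
T_tetrad = term1 + term2 + term3

T_paper = float((-2*(sB - 1)*(r*sp.diff(A, r) - A*sB + A)/(r**2*A*B)).subs(pt))
assert abs(T_tetrad - T_paper) < 1e-6
```

For $A=B=1$ the spatial rows of \eqref{tetrad} are the Jacobian of the Cartesian coordinates, so the tetrad is holonomic and the same computation returns a vanishing torsion scalar, as required in the Minkowski limit.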
This also demonstrates that the tetrad~\eqref{tetrad} is a so-called good tetrad, i.e., it solves the anti-symmetric field equations $H_{[\mu\nu]}=0$ with vanishing spin connection. We would like to remark here that our choice of tetrad is not the only good tetrad in this sense. The tetrad presented in \cite{Tamanini:2012hg} could be chosen equally well from this point of view. However, not all of these good tetrads, in the sense of the field equations, consistently yield a torsion scalar that vanishes in the Minkowski spacetime limit. A consistency check of the field equations is to set $f(T) = T$, hence $f_{T} = 1$ and $f_{TT}=0$, and to see if, for $\rho=p_r=p_l=0$, Schwarzschild geometry ($A(r) = B(r)^{-1} = 1 - 2M/r$) solves the field equations, which is the case. The last remark leads us to the point that the equations we derived above differ from the spherically symmetric $f(T)$ field equations in \cite{Ruggiero:2015oka}. First, their field equations~(8)-(10) are not solved by Schwarzschild geometry for $f(T)=T$ and $\rho=p_r=p_l=0$, as is the case for our equations. Second, the off-diagonal tetrad chosen there has the problem that its torsion scalar does not vanish in the Minkowski spacetime limit, which means that it is not the correct tetrad to which one associates a vanishing spin connection. We will now solve the $f(T)$ field equations for a specific power-law choice for $f$ to first order in a perturbation around Minkowski and Schwarzschild spacetime geometry. In each instance we will compare our results with the ones obtained in \cite{Ruggiero:2015oka} and \cite{DeBenedictis:2016aze}.
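The Schwarzschild consistency check is easily automated. A short \texttt{sympy} sketch (ours; $\sqrt{B}$ is written as $A^{-1/2}$, valid for $r>2M$, and only the $tt$ and $rr$ equations \eqref{Eq1} and \eqref{Eq2} are displayed, the lateral one works analogously) reads:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
A = 1 - 2*M/r            # Schwarzschild lapse
sB = 1/sp.sqrt(A)        # sqrt(B) for B = 1/A, valid outside the horizon
B = sB**2

# torsion scalar of Eq. (scaletorsion); equals -2(mu-1)^2/(mu r^2) with mu = sqrt(A)
T = -2*(sB - 1)*(r*sp.diff(A, r) - A*sB + A)/(r**2*A*B)
mu = sp.sqrt(A)
assert sp.simplify(T + 2*(mu - 1)**2/(mu*r**2)) == 0

# vacuum equations (Eq1), (Eq2) for f(T) = T, i.e. f_T = 1, f_TT = 0
eq1 = ((r*B*(sB - 1)*sp.diff(A, r) + A*(r*sp.diff(B, r) + 2*B*sB - 2*B))/(2*r**2*A*B**2)
       + sp.Rational(1, 4)*T)
eq2 = -(r*(sB - 2)*sp.diff(A, r) + 2*A*(sB - 1))/(2*r**2*A*B) - sp.Rational(1, 4)*T
assert sp.simplify(eq1) == 0
assert sp.simplify(eq2) == 0
```

Both field equation residuals simplify to zero, confirming that Schwarzschild geometry solves the TEGR limit of the equations derived here.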
\section{Weak power-law $f(T)$ gravity} \label{sec:weak} In this section we turn our focus to a power-law $f(T)$ model that reads \begin{equation}\label{eq:weakfT} f(T)=T+\frac{1}{2}\epsilon\,\alpha T^p\,, \end{equation} where $\alpha$ and $p$ are constants and $\epsilon\ll 1$ is a small tracking parameter that, similarly to what was done in~\cite{DeBenedictis:2016aze}, is used to make the series expansion in a coherent way. It allows us to easily track quantities that are considered to be small throughout our calculations. Since we are interested in perturbations of a Schwarzschild background, the ansatz we employ for the metric coefficients is \begin{align} A(r)&=1-\frac{2M}{r}+\epsilon \, a(r)\,,\\ B(r)&= \left(1-\frac{2M}{r} \right)^{-1}+\epsilon \, b(r)\,,\label{expansion} \end{align} where $a(r)$ and $b(r)$ are functions of the radial coordinate. If one uses the above metric coefficients in the $f(T)$ power-law spherically symmetric field equations~\eqref{Eq1}--\eqref{Eq3} and then expands up to first order in $\epsilon$, the equations become \begin{eqnarray} \frac{1}{2} \kappa ^2\epsilon \rho&=& \epsilon \left( \alpha \frac{(-1)^{p+1}\ 2^{p-3}\ (p-1)}{(r^2 \mu)^p}\ (\mu-1)^{2p-1}\left(\mu-1+p(1+\mu(2+5\mu)) \right) - \frac{\mu^2 (\mu^2 - 2) b(r)}{2 r^2} + \frac{\mu^4 b'(r)}{2 r}\right)\label{Eq1b}\,,\\ \frac{1}{2}\kappa^2 \epsilon p_{r}&=&\epsilon \left( - \alpha \frac{(-1)^{p+1}\ 2^{p-3}\ (p-1)}{(r^2 \mu)^p}\ (\mu-1)^{2p} + \frac{ a'(r)}{2 r} + \frac{(\mu ^2-1) a(r)}{2 \mu ^2 r^2} - \frac{\mu ^2 b(r)}{2 r^2} \right)\,,\label{Eq2b}\\ \frac{1}{2}\kappa^2 \epsilon p_{l} &=& \epsilon \left( - \alpha \frac{(-1)^{p+1}\ 2^{p-5}\ (p-1)}{\mu (r^2 \mu)^{p}}\ (\mu-1)^{2p} (p+2(2+p)\mu+5p \mu^2)\right.
\nonumber \\ &+&\left.\frac{1}{4} a''(r) + \frac{\left(3 \mu ^2-1\right)a'(r)}{8 \mu ^2 r} - \frac{\left(\mu ^4-1\right) a(r)}{8 \mu ^4 r^2} - \frac{\mu^2 \left(\mu ^2+1\right) b'(r)}{8 r} + \frac{\left(\mu ^4-1\right) b(r)}{8 r^2} \right)\,,\label{Eq3b} \end{eqnarray} where $\mu=(1-2M/r)^{1/2}$ was introduced for simplicity and we assumed that the energy-momentum tensor is zero to zeroth order in $\epsilon$. The latter assumption is necessary in order to have Schwarzschild geometry as a consistent zeroth order solution. As usual in perturbation theory, at this stage the small parameter $\epsilon$ drops out of the equations and they can be solved for the first order perturbations $a(r)$ and $b(r)$. Eq.~\eqref{Eq2b} is an algebraic equation for $b(r)$ that can be easily solved, yielding \begin{eqnarray} b(r)&=& \alpha \frac{(-1)^p\ 2^{p-2}\ (p-1) r^2 (\mu-1)^{2p}}{\mu^2 (r^2 \mu)^p} + \frac{(\mu ^2-1) a(r)}{\mu ^4} + \frac{r a'(r)}{\mu^2} - \frac{\kappa ^2 r^2 }{\mu ^2 }p_r \label{bis}\,. \end{eqnarray} Inserting this result for $b(r)$ in \eqref{Eq1b} and~\eqref{Eq3b}, while setting $p_r = p_l$, we obtain one remaining ordinary differential equation, which can be solved for $a(r)$ \begin{equation} a''+\frac{2 a'}{r}-\frac{\alpha 2^{p-3} \left(4 (\mu -1) \mu ^2+\left(5 \mu ^3+7 \mu ^2+3 \mu +1\right) p^2-\left(9 \mu ^3+3 \mu ^2+3 \mu +1\right) p\right) r^{-3 p} \left(-\frac{(\mu -1)^2 r}{\mu }\right)^p}{(\mu -1) \mu ^2}=0\,. \end{equation} In order to continue solving the equations, we will separate the study into two branches: A) $M=0$, and $p_r=p_{l}=-\rho=-\Lambda$ as it was studied in Ref.~\cite{Ruggiero:2015oka}, with the additional constraint $p > 1/2$ to guarantee well-defined field equations. We redo the calculations of \cite{Ruggiero:2015oka}, since we find a completely different result for the $M=0$ case. B) $M\neq0$, $\rho=p_r=p_l=0$ and $p=2$ to $10$. For $p=2$ we reproduce the result from \cite{DeBenedictis:2016aze}.
\subsection{Minkowski background ($M=0$)} If one assumes $p>1/2$, perturbations around a Minkowski background can be studied by setting $M=0$ ($\mu=1$). It can immediately be seen from Eqs.~\eqref{Eq1b}-\eqref{Eq3b} that the influence of the teleparallel perturbation (i.e.\ the terms proportional to $\alpha$) drops out. As a consequence one obtains the usual first order (A)dS Schwarzschild spacetime geometries as solutions of the perturbed field equations with a cosmological constant as first order matter source, i.e., $p_r=p_{l}=-\rho=-\Lambda$. The perturbation functions $a(r)$ and $b(r)$ are easily determined from \eqref{Eq1b}--\eqref{Eq3b}. The metric coefficients $A(r)$ and $B(r)$ become \begin{align} A(r)&=1 + \epsilon \left(C_2 - \frac{C_1}{r} - \frac{1}{3}\kappa ^2 \Lambda r^2 \right)\,,\label{A}\\ B(r)&=1 + \epsilon \left(\frac{C_1}{r} + \frac{1}{3}\kappa ^2 \Lambda r^2 \right)\,,\label{B} \end{align} for all $p > 1/2$. Here $C_1$ and $C_2$ are integration constants labelling the linearised Schwarzschild solution of general relativity and a constant shift of the Minkowski metric, respectively. Usually they are chosen to be $C_1 = 2M$ and $C_2 = 0$. The $\Lambda$ term appears due to the non-vanishing cosmological constant we assumed as a first order matter source. The solutions we find are completely different from the ones presented in \cite{Ruggiero:2015oka}. The source of this discrepancy lies in our choice of the tetrad \eqref{tetrad}, to which we associate a vanishing spin connection. The tetrad chosen in the previous work had the drawback that its torsion scalar, see Eq.~(9) in \cite{Ruggiero:2015oka}, does not vanish in the Minkowski spacetime limit $A\to 1$, $B\to 1$, but gives $8/r^2$. In turn this leads to an infinite action for Minkowski spacetime. As we mentioned earlier, our tetrad avoids this complication by having a vanishing torsion scalar in the Minkowski spacetime limit.
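That \eqref{A} and \eqref{B} solve the first order equations with the cosmological constant as the only source can be confirmed in a few lines. The following \texttt{sympy} sketch (ours) types in the $\mu=1$ form of \eqref{Eq1b}--\eqref{Eq3b}, in which all $\alpha$-terms have already dropped out:

```python
import sympy as sp

r, C1, C2, kap, Lam = sp.symbols('r C_1 C_2 kappa Lambda')

# first order perturbations of Eqs. (A) and (B)
a = C2 - C1/r - sp.Rational(1, 3)*kap**2*Lam*r**2
b = C1/r + sp.Rational(1, 3)*kap**2*Lam*r**2

# Eqs. (Eq1b)-(Eq3b) at mu = 1, solved for the matter sources
rho = (2/kap**2)*(b/(2*r**2) + sp.diff(b, r)/(2*r))
p_r = (2/kap**2)*(sp.diff(a, r)/(2*r) - b/(2*r**2))
p_l = (2/kap**2)*(sp.diff(a, r, 2)/4 + sp.diff(a, r)/(4*r) - sp.diff(b, r)/(4*r))

assert sp.simplify(rho - Lam) == 0   # rho = Lambda
assert sp.simplify(p_r + Lam) == 0   # p_r = -Lambda
assert sp.simplify(p_l + Lam) == 0   # p_l = -Lambda
```

All three equations reduce to the constant source $p_r=p_l=-\rho=-\Lambda$ for arbitrary $C_1$ and $C_2$, as stated above.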
\subsection{Schwarzschild background $M\neq0$}\label{ssec:schw} In this section, we will focus our study on perturbations of Schwarzschild geometry ($M\neq0$) induced by weak power-law $f(T)$ gravity \begin{eqnarray} f(T)=T+\frac{1}{2}\alpha \epsilon\, T^p\,, \end{eqnarray} exemplarily for $p=2$ to $p=4$, since a solution for general $p$ cannot be obtained. For higher integer values of $p$ the solutions have a similar form, but it is not further insightful to display the long expressions. The general structure of the vacuum solutions, i.e.\ $\rho=p_r=p_l=0$, for all $p$ is \begin{align} A(r)&= 1 - \frac{2M}{r} +\epsilon\left(-\frac{C_1}{r}+C_2 + \alpha \bar a(r)\right)\,,\label{astruct}\\ B(r)&= \frac{1}{1 - \frac{2M}{r}} +\epsilon\left(\frac{\left(\frac{C_1}{r}-\frac{2 C_2 M}{r}\right)}{\left(1-\frac{2 M}{r}\right)^{2}} + \alpha \bar b(r)\right)\,.\label{bstruct} \end{align} The integration constants $C_1$ and $C_2$ are determined in a power series expansion in $\frac{1}{r}$, such that the zeroth and first order terms in this expansion vanish. Physically this means we use the integration constants to avoid an influence of the teleparallel perturbation in the weak-field (large-distance) limit.
After the integration constants have been found, the solutions take the form \begin{align} A(r)&= 1 - \frac{2M}{r} +\epsilon\alpha \hat a(r)\,,\label{astruct2}\\ B(r)&= \frac{1}{1 - \frac{2M}{r}} +\epsilon\alpha \hat b(r)\,.\label{bstruct2} \end{align} For $p=2$ we reproduce the solutions found in \cite{DeBenedictis:2016aze} \begin{eqnarray} A(r)&=& 1- \frac{2M}{r}+\epsilon\left(-\frac{C_1}{r}+C_2-\alpha\left[\frac{M^2+6 M r+r^2}{M r^3}-\frac{16 \left(1-\frac{2 M}{r}\right)^{3/2}}{3 M^2}+\frac{(1-\frac{3 M}{r})}{2 M^2} \ln \left(1-\frac{2M}{r}\right)\right]\right)\,,\label{afin}\\ B(r)&=& \frac{1}{1- \frac{2M}{r}} + \epsilon\left(\frac{\left(\frac{C_1}{r}-\frac{2 C_2 M}{r}\right)}{\left(1-\frac{2 M}{r}\right)^{2}} -\alpha\left[-\frac{8 (3 M^2-7 M r+2 r^2)}{3 M r^3 (1-\frac{2 M}{r})^{3/2}} +\frac{25 M-23 r}{r^3 (1-\frac{2 M}{r})^2}+\frac{\ln\left(1-\frac{2M}{r}\right)}{2Mr(1-\frac{2M}{r})^{2}}\right]\right)\,,\label{bfin} \end{eqnarray} which can be expressed conveniently in terms of the variable $\mu=(1-2M/r)^{1/2}$, giving \begin{eqnarray} A(r)&=& \mu^2 +\epsilon\left(- \frac{C_1}{r}+C_2 - \alpha \frac{51-93 \mu^2 -128 \mu^3 + 45 \mu^4 - 3 \mu^6 - 12 (1-3 \mu^2) \ln(\mu)}{6 r^2 (1 - \mu^2)^2}\right)\,,\label{afinX}\\ B(r)&=& \frac{1}{\mu^2}+ \epsilon\left(\frac{\left(\frac{C_1}{r} - (1 - \mu^2)C_2\right)}{\mu^4} + \alpha \frac{63 -24 \mu + 12 \mu^2 +64 \mu^3 - 75 \mu^4 + 24 \mu^5 - 12 \ln(\mu)}{6 r^2 \mu^4 (1 - \mu^2)}\right)\,.\label{bfinX} \end{eqnarray} Determining the integration constants from the $\frac{1}{r}$ expansions \begin{align} A(r) \sim \left(\frac{16 \alpha}{3 M^2} + C_2 \right) - \left(\frac{16 \alpha}{M}+ C_1\right)\frac{1}{r} + \mathcal{O}\Big(\frac{1}{r^2}\Big),\quad B(r) \sim \left(\frac{16 \alpha}{3M} + C_1 - 2 M C_2\right)\frac{1}{r} + \mathcal{O}\Big(\frac{1}{r^2}\Big) \end{align} yields $C_2 = -16\alpha/(3 M^2)$ and $C_1 = -16 \alpha/M$.
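The displayed $\frac{1}{r}$ expansion, and with it the choice of $C_1$ and $C_2$, can be reproduced symbolically. The following \texttt{sympy} sketch (ours) expands the first order piece of $A(r)$ in \eqref{afinX}; the check for $B(r)$ proceeds in the same way:

```python
import sympy as sp

r, M, alpha, C1, C2 = sp.symbols('r M alpha C_1 C_2', positive=True)
mu = sp.sqrt(1 - 2*M/r)

# first order piece of A(r) in Eq. (afinX), with the overall factor epsilon stripped
dA = (-C1/r + C2
      - alpha*(51 - 93*mu**2 - 128*mu**3 + 45*mu**4 - 3*mu**6
               - 12*(1 - 3*mu**2)*sp.log(mu))/(6*r**2*(1 - mu**2)**2))

ser = sp.series(dA, r, sp.oo, 2).removeO()
c0 = ser.coeff(r, 0)    # constant term of the 1/r expansion
c1 = ser.coeff(r, -1)   # coefficient of 1/r

assert sp.simplify(c0 - (16*alpha/(3*M**2) + C2)) == 0
assert sp.simplify(c1 + (16*alpha/M + C1)) == 0
```

Setting both coefficients to zero reproduces $C_2 = -16\alpha/(3M^2)$ and $C_1 = -16\alpha/M$.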
For easy comparison with previous approaches we display the solutions \eqref{afinX} and \eqref{bfinX} once more with this choice of integration constants \begin{align} A(r)&= \mu^2 + \epsilon\alpha \frac{ 13 - 99 \mu^2 + 128 \mu^3 - 45 \mu^4 + 3 \mu^6 + 12(1-3\mu^2) \ln(\mu)}{6 r^2 (1 - \mu^2)^2}\,,\label{afinX2}\\ B(r)&= \frac{1}{\mu^2} - \epsilon \alpha \frac{1 + 24 \mu - 12 \mu^2 - 64 \mu^3 + 75 \mu^4 - 24 \mu^5 + 12 \ln(\mu)}{6 r^2 \mu^4 (1-\mu^2)}\,.\label{bfinX2} \end{align} The leading order terms of the torsion scalar \eqref{scaletorsion} for the weak squared power-law case in a Schwarzschild background behave as \begin{align}\label{eq:Tscalarsol} T = -\frac{2 (\mu -1)^2}{\mu r^2} + \epsilon \alpha \frac{13 - 36 \mu + 108 \mu^2 - 184 \mu^3 + 135 \mu^4 - 36 \mu^5 + 12 \ln{(\mu)}}{6 r^4 \mu^3}\,. \end{align} For this specific choice of $C_1$ and $C_2$ we find that in the $M\to 0$ limit $A(r) \to 1$, $B(r)\to 1$, and $T\to 0$. This result coincides with the one presented in \cite{DeBenedictis:2016aze}. For $p=3$ and $p=4$ we find similar solutions that can be expressed as \begin{align} A(r) = \mu^2 + \epsilon \,\alpha\, \hat a(r)\,,\quad B(r) = \frac{1}{\mu^2} + \epsilon \,\alpha\, \hat b(r)\,,\quad T(r) = -\frac{2 (\mu -1)^2}{\mu r^2} + \epsilon\, \alpha\, \hat T(r)\,.
\end{align} \begin{itemize} \item $p=3$ \begin{subequations}\label{eq:p3} \begin{align} \hat a(r) &= \bigg[ -280 \mu^{12} + 945 \mu^{11} + 1120 \mu^{10} - 8295 \mu^9 + 6984 \mu^8 + 18060 \mu^7 - 37632 \mu^6 + 1260 \mu^5 + 86520 \mu^4\nonumber\\ & - 62909 \mu^3 - 10080 \mu^2 - 7560 \left(7 \mu^2-1\right) \mu \log (\mu) + 178\mu + 2520 \bigg] \left(315 r^4 \mu \left(\mu^2-1\right)^4\right)^{-1}\,,\\ \hat b(r) &= \bigg[ 3780 \mu^{11}-19040 \mu^{10}+27405 \mu^9+16560 \mu^8-81480 \mu^7+56448 \mu^6+44100 \mu^5-77280 \mu^4+23940 \mu^3\nonumber\\ &+10080 \mu^2-9553 \mu+7560 \mu \log (\mu)+5040\bigg] \left(315 r^4 \mu^5 \left(\mu^2-1\right)^3\right)^{-1} \,,\\ \hat T(r) &= \bigg[7(\mu-1) \big(6300 \mu^{10}-30380 \mu^9+44905 \mu^8+8545 \mu^7-88055 \mu^6+74233 \mu^5+12493 \mu^4-49667 \mu^3\nonumber\\ &+27193 \mu^2-10607 \mu-2520\big)+7560 \mu \log (\mu)\bigg]\left(315 r^6 \mu^4\left(\mu^2-1\right)^2\right)^{-1}\,. \end{align} \end{subequations} \item $p=4$ \begin{subequations}\label{eq:p4} \begin{align} \hat{a}(r) &= -4 \bigg[-6435 \mu^{18}+36960 \mu^{17}-32175 \mu^{16}-221760 \mu^{15} + 552552 \mu^{14} +145600 \mu^{13}-1963962 \mu^{12}\nonumber\\ & +1693120 \mu^{11} + 2642640 \mu^{10}-5436288 \mu^9+330330 \mu^8 + 7495488 \mu^7-6846840 \mu^6-4804800 \mu^5\nonumber\\ &+3912986 \mu^4 +2882880 \mu^3 +55139 \mu^2 + 720720\left(11 \mu^2-1\right) \mu^2 \log (\mu)-480480 \mu\nonumber\\ &+45045\bigg] \left( 15015 r^6 \mu^2 \left(\mu^2-1\right)^6\right)^{-1}\,,\\ \hat{b}(r) &= - 4 \bigg[120120 \mu^{17}-842985 \mu^{16}+1940400 \mu^{15}-60060 \mu^{14}-7141680 \mu^{13}+10198188 \mu^{12}+3443440 \mu^{11}\nonumber\\ &-20810790 \mu^{10}+13590720 \mu^9+12222210 \mu^8-20612592 \mu^7+4504500 \mu^6+8888880 \mu^5-6666660 \mu^4\nonumber\\ &+720720 \mu^3+1451534 \mu^2-720720 \mu^2 \log (\mu)-1081080 \mu+135135\bigg] \left(15015 r^6 \mu^6 \left(\mu^2-1\right)^5\right)^{-1}\,,\\ \hat{T}(r) &= 2 \bigg[1441440 \mu^2 \log (\mu)-2 (\mu-1) \Big(210210 \mu^{16}-1443585 \mu^{15}+3379695 \mu^{14}-1004685 
\mu^{13}-9227445 \mu^{12} \nonumber\\ &+15024783 \mu^{11}-270497 \mu^{10}-21081287 \mu^9+18275173 \mu^8+4731643 \mu^7-15880949 \mu^6+7362271 \mu^5\nonumber\\ & + 2197111\mu^4-3388469 \mu^3+1656571 \mu^2+225225 \mu-45045\Big)\bigg]\left(15015 r^8 \mu^5 \left(\mu^2-1\right)^4\right)^{-1}\,. \end{align} \end{subequations} \end{itemize} For higher values of $p$ the calculation follows the same scheme and the solutions are similar to the ones above. Since they are lengthy and not insightful, we will not present them explicitly. \section{Particle motion Phenomenology}\label{sec:pheno} To relate the influence of a teleparallel modification of general relativity to observables, we study the motion of test particles in the background solution defined by the metric coefficients \eqref{afin} and \eqref{bfin}. We explicitly derive the photon sphere around the black hole and the perihelion shift of nearly circular orbits. Nowadays the photon sphere is of particular interest since it defines the edge of the shadow of a black hole. The perihelion shift was already derived in \cite{DeBenedictis:2016aze}, but on the basis of an erroneous solution, as we discussed above. \subsection{Geodesic equation and effective potential} The worldline $q(\tau)$ of test particles in a curved spacetime is determined by the Euler-Lagrange (EL) equations \begin{equation}\label{4} \frac{d}{d\tau}\left(\frac{\partial \mathcal{L}}{\partial \dot{q}^\mu}\right)-\frac{\partial \mathcal{L}}{\partial q^\mu}=0\,, \end{equation} of the Lagrangian \begin{equation}\label{7} 2\mathcal{L}=g_{\mu\nu}\dot{q}^{\mu}\dot{q}^{\nu}=\left(1-\frac{2 M}{r}+ \epsilon\, a(r)\right)\dot{t}^2-\left(\frac{1}{1-\frac{2 M}{r}}+ \epsilon\, b(r)\right)\dot{r}^2-r^2\dot{\theta}^2-r^2\sin^2{\theta}\dot{\phi}^2\,, \end{equation} where $q^{\mu}(\tau)=(t(\tau), r(\tau), \theta(\tau), \phi(\tau))$, and $\dot{q}^\mu$ denotes the derivative of $q^\mu$ with respect to the affine parameter $\tau$.
The perturbation functions $a(r)$ and $b(r)$ under consideration can be read off from \eqref{afin} and \eqref{bfin}. To solve the EL equations we employ the usual scheme for spherically symmetric spacetimes: we restrict ourselves to motion in the equatorial plane by setting $\theta=\pi/2$, and we derive the usual conserved quantities, the energy $k$ and the angular momentum $h$, \begin{align} k &= \frac{\partial \mathcal{L}}{\partial \dot t} = \left(1-\frac{2 M}{r}+ \epsilon\, a(r)\right)\dot{t}\,,\\ h &= \frac{\partial \mathcal{L}}{\partial \dot \phi} = r^2 \dot \phi\,. \end{align} We obtain the effective potential to first order in $\epsilon$ from the constancy of the Lagrangian, expressed in terms of the conserved quantities \begin{align}\label{17} \left(1 - \epsilon \frac{a(r)}{1-\frac{2 M}{r}}\right)\frac{k^2}{1-\frac{2 M}{r}}-\left(\frac{1}{1-\frac{2 M}{r}}+ \epsilon b(r)\right)\dot{r}^2 - \frac{h^2}{r^2} + \mathcal{O}(\epsilon^2) = \sigma\,, \end{align} where $\sigma = 0$ for massless particles and $\sigma = 1$ for massive particles. For further calculations we rearrange Eq.~\eqref{17} as \begin{align} 0 &= \frac{1}{2}\dot r^2 - \frac{1}{2} k^2 + \frac{1}{2} \frac{h^2}{r^2}\left(1-\frac{2M}{r}\right) + \frac{1}{2} \sigma \left(1-\frac{2M}{r}\right) \nonumber\\ &+ \frac{\epsilon}{2} \left[ k^2 \left( \frac{a(r)}{1-\frac{2M}{r}} + b(r)\left(1-\frac{2M}{r}\right)\right) - b(r) \frac{h^2}{r^2}\left(1-\frac{2M}{r}\right)^2 - \sigma b(r) \left(1-\frac{2M}{r}\right)^2\right]\,, \end{align} so we can read off the effective potential to first order in $\epsilon$ \begin{align}\label{eq:pot} V(r) &= - \frac{1}{2} k^2 + \frac{1}{2} \left(1-\frac{2M}{r}\right) \left( \frac{h^2}{r^2} + \sigma \right) \nonumber\\ &+ \frac{\epsilon}{2} \left[ k^2 \left( \frac{a(r)}{1-\frac{2M}{r}} + b(r)\left(1-\frac{2M}{r}\right)\right) - b(r)\left( \sigma + \frac{h^2}{r^2} \right)\left(1-\frac{2M}{r}\right)^2\right] \end{align} from \begin{align} \frac{1}{2}\dot r^2 + V(r)=0\,.
\end{align} For the analysis of the perihelion shift, it is necessary to reparametrise $r(\tau)$ as $r(\phi)$, which amounts to the equation \begin{align}\label{eq:rphi} \frac{1}{2}\frac{\dot r^2}{\dot \phi ^2} + \frac{1}{\dot \phi^2}V(r) = \frac{1}{2}\left(\frac{dr}{d\phi}\right)^2 + \frac{r^4}{h^2}V(r)=0\,. \end{align} \subsection{Photon sphere and perihelion shift}\label{ssec:phsph} For circular orbits (i.e.\ $r=\textrm{const.},\ \dot r=0$) the effective potential and its derivative have to vanish. We perturb the radial coordinate of the circular orbit $r_c = r_0+\epsilon\, r_1$, the angular momentum $h = h_0 + \epsilon \,h_1$, and the energy $k = k_0 + \epsilon\, k_1$ and solve both equations $V = 0$ and $V' = 0$ order by order. For circular photon orbits, $\sigma = 0$, solving the zeroth order equations yields \begin{align} r_0 = 3M,\quad h_{0\pm} = \pm 3 \sqrt{3} k_0 M \end{align} and the first order perturbation gives, for the different values of $p$, the following numerical values: \begin{align} (p=2) \quad r_1 &\approx 14133.8000 \cdot 10^{-6} \frac{\alpha}{M}\,, && (p=3) \quad r_1 \approx -1362.5400 \cdot 10^{-6} \frac{\alpha}{M^3}\,, \\ (p=4) \quad r_1 &\approx 121.3220 \cdot 10^{-6} \frac{\alpha}{M^5}\,, && (p=5) \quad r_1 \approx -10.2582 \cdot 10^{-6} \frac{\alpha}{M^7}\,, \\ (p=6) \quad r_1 &\approx 8.3757 \cdot 10^{-6} \frac{\alpha}{M^9}\,, && (p=7) \quad r_1 \approx -0.6670 \cdot 10^{-6} \frac{\alpha}{M^{11}}\,, \\ (p=8) \quad r_1 &\approx 0.0521 \cdot 10^{-6} \frac{\alpha}{M^{13}}\,, && (p=9) \quad r_1 \approx -0.0040 \cdot 10^{-6} \frac{\alpha}{M^{15}}\,, \\ (p=10)\quad r_1 &\approx 0.0003 \cdot 10^{-6} \frac{\alpha}{M^{17}}\,. \end{align} We clearly see that for positive $\alpha$ and even $p$ the teleparallel perturbation of general relativity yields a larger photon sphere around a spherically symmetric black hole and thus predicts a larger black hole shadow. For odd $p$ a smaller shadow is predicted.
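The zeroth order photon sphere data quoted above follow from \eqref{eq:pot} with a few lines of computer algebra (a \texttt{sympy} sketch of ours; the positive branch of $h_0$ is selected by the symbol assumptions):

```python
import sympy as sp

r, M, k, h = sp.symbols('r M k h', positive=True)

# zeroth order effective potential of Eq. (pot) for photons (sigma = 0)
V0 = -k**2/2 + (1 - 2*M/r)*h**2/(2*r**2)

# circular photon orbits: V0 = 0 and V0' = 0
sol = sp.solve([V0, sp.diff(V0, r)], [r, h], dict=True)
assert any(sp.simplify(s[r] - 3*M) == 0 and
           sp.simplify(s[h] - 3*sp.sqrt(3)*k*M) == 0 for s in sol)
```

This recovers $r_0 = 3M$ and $h_0 = 3\sqrt{3}k_0 M$; the first order values $r_1$ are then obtained by expanding the same two conditions to linear order in $\epsilon$.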
Moreover, it is evident that the larger $p$, the smaller the first-order influence of the teleparallel perturbation. The relation between the photon sphere and teleparallel perturbations of general relativity is investigated here for the first time. For circular timelike orbits, $\sigma = 1$, it is also possible to solve the equations $V = 0$ and $V' = 0$. However, the resulting expressions are not very illuminating. The important observation is that teleparallely perturbed general relativity, not surprisingly, allows for circular orbits, which is the starting point for the derivation of the perihelion shift. We consider a perturbation around a circular orbit $r_c$, plug the ansatz $r(\phi)= r_{c} + r_\phi(\phi)$ into \eqref{eq:rphi}, and obtain \begin{align} \left(\frac{d r_\phi}{d\phi}\right)^2 = - 2 \frac{(r_c + r_\phi)^4}{h^2} V(r_c + r_\phi)\,. \end{align} Assuming that the ratio $r_\phi/r_c$ is small, the right-hand side can be expanded in powers of this parameter to second order, \begin{align} \left(\frac{d r_\phi}{d\phi}\right)^2 = - \frac{r_c^4}{h^2} V''(r_c)r_\phi^2 + \mathcal{O}\left(\tfrac{r_\phi^3}{r_c^3}\right) \,, \end{align} where we used that for circular orbits $V(r_c) = 0$ and $V'(r_c)=0$, as discussed above. The solution $r_\phi$ thus oscillates with the wave number $K = \sqrt{\frac{r_c^4}{h^2} V''(r_c)}$ and the perihelion shift is given by \begin{align} \Delta \phi =2\pi\Big(\frac{1}{K}-1\Big) =2\pi \left(\frac{h}{r_c^2\sqrt{V''(r_c)}}-1\right)\,. \end{align} To derive the explicit expression for the perihelion shift for massive objects, we consider the potential $V$ with $\sigma=1$, see \eqref{eq:pot}, together with its first derivative $V'$ and its second derivative $V''$. We evaluate the equations $V(r_c) = 0$ and $V'(r_c) = 0$ with $h= h_0 + \epsilon\, h_1$ and $k = k_0 +\epsilon\, k_1$.
The zeroth order of these equations determines $h_0(r_c)$ and $k_0(r_c)$ as \begin{align} h_{0\pm} = \pm \frac{\sqrt{M}r_c}{\sqrt{r_c-3M}},\quad k_{0\pm} = \pm \frac{2M-r_c}{\sqrt{r_c(r_c-3 M)}}\,. \end{align} The first order determines $h_1(r_c)$ and $k_1(r_c)$. Depending on the choice of the sign of $h_0$, we obtain two different solutions for $h_1$ (the sign of $k_0$ is irrelevant here), \begin{align} h_{1\pm} = \mp \frac{r_c^2(2 M a(r_c) - r_c (r_c - 2M) a'(r_c))}{4 \sqrt{M}\sqrt{r_c - 3M}^3}\,. \end{align} The sign labeling $h_{1\pm}$ refers to the sign of the zeroth-order $h_{0\pm}$ used to calculate it. There is no need to derive $k_1$ explicitly, since it does not enter the perihelion shift equation. Having obtained the constants of motion for the circular orbit, we can derive the perihelion shift by inserting these values into $V''(r, k_0, h_0, h_1)$ and evaluating at $r = r_c$. Due to the different solutions for the constants of motion, there exist two options to derive the perihelion shift, \begin{align} \Delta\phi(h_{0+},h_{1+})\,, \quad \Delta\phi(h_{0-},h_{1-})\,, \end{align} which are related to each other through \begin{align} \Delta\phi(h_{0-},h_{1-}) &= - 4\pi - \Delta\phi(h_{0+},h_{1+})\,. \end{align} Expanding the perihelion shift into a power series in the variable $q = \frac{M}{r_c}$ yields \begin{align}\label{eqn:perihelionshift} \Delta\phi(h_{0+},h_{1+}) &= 6 \pi q + 27 \pi q^2 + \mathcal{O}(q^3) + \epsilon\,\pi \hat\Delta\phi_{p} + \mathcal{O}(\epsilon^2)\,, \end{align} where $\hat\Delta\phi_{p}$ is the leading-order teleparallel perturbation of the usual GR result.
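The zeroth-order constants of motion and the GR part of the expansion \eqref{eqn:perihelionshift} can be cross-checked numerically at $\epsilon=0$: using the standard Schwarzschild circular-orbit values $h_0^2 = M r_c^2/(r_c-3M)$ and $k_0^2 = (r_c-2M)^2/(r_c(r_c-3M))$, the unperturbed potential satisfies $V(r_c)=0$, and $\Delta\phi = 2\pi\big(h_0/(r_c^2\sqrt{V''(r_c)})-1\big)$ reproduces $6\pi q + 27\pi q^2$ for $r_c \gg M$. A minimal sketch:

```python
import math

M = 1.0
rc = 1.0e4  # circular-orbit radius, rc >> M

# Standard Schwarzschild circular-orbit constants of motion (sigma = 1, eps = 0)
h0 = math.sqrt(M) * rc / math.sqrt(rc - 3 * M)
k0 = (rc - 2 * M) / math.sqrt(rc * (rc - 3 * M))

def V(r):
    # Unperturbed massive effective potential from (eq:pot)
    return -0.5 * k0**2 + 0.5 * (1 - 2 * M / r) * (h0**2 / r**2 + 1)

def d2V(r):
    # Analytic second derivative of V, via product rule on f(r) * u(r) / 2
    f, df, d2f = 1 - 2 * M / r, 2 * M / r**2, -4 * M / r**3
    u, du, d2u = h0**2 / r**2 + 1, -2 * h0**2 / r**3, 6 * h0**2 / r**4
    return 0.5 * (d2f * u + 2 * df * du + f * d2u)

assert abs(V(rc)) < 1e-12  # circular-orbit condition V(rc) = 0

# Perihelion shift and its leading-order expansion in q = M/rc
dphi = 2 * math.pi * (h0 / (rc**2 * math.sqrt(d2V(rc))) - 1)
q = M / rc
assert abs(dphi - (6 * math.pi * q + 27 * math.pi * q**2)) < 1e-9
```

The last assertion agrees with the $6\pi q + 27\pi q^2$ terms of \eqref{eqn:perihelionshift} up to the neglected $\mathcal{O}(q^3)$ contribution.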
For the different values of $p$ it is given by \begin{align}\label{eqn:contribution} \hat\Delta\phi_{p=2}= & \frac{8 q^2}{r_c^2} \,, && \hat\Delta\phi_{p=3}= - \frac{48 q^4}{r_c^4}\,, && \hat\Delta\phi_{p=4}= \frac{192 q^6}{r_c^6}\,, && \hat\Delta\phi_{p=5}= -\frac{640 q^8}{r_c^8}\,, && \hat\Delta\phi_{p=6}= \frac{1920 q^{10}}{r_c^{10}}\,, \nonumber\\ \hat\Delta\phi_{p=7}= &-\frac{5376 q^{12}}{r_c^{12}}\,, && \hat\Delta\phi_{p=8}= \frac{14336 q^{14}}{r_c^{14}}\,, && \hat\Delta\phi_{p=9}= -\frac{36864 q^{16}}{r_c^{16}}\,, && \hat\Delta\phi_{p=10}= \frac{92160 q^{18}}{r_c^{18}}\,. \end{align} The qualitative behaviour of the perihelion shift is always the same; only the numerical values differ. As for the photon sphere, the higher $p$, the smaller the influence of the teleparallel perturbation, and corrections to the perihelion shift appear only at higher orders in $q$. Since the influence of the teleparallel perturbation decreases with higher power $p$, the strictest bound on $\alpha$ is obtained for $p=2$; it is the one derived in \cite{DeBenedictis:2016aze}. We expect to be able to find stronger bounds from the upcoming study on teleparallel perturbations of rotating black holes. \section{Conclusion}\label{sec:conclusion} In this paper we presented the first-order influence of a teleparallel power law $f(T)$ gravity perturbation of general relativity in spherical symmetry. The central results of this article are as follows: \begin{itemize} \item To first order, a power law perturbation of the type $f(T) = T + \frac{\alpha}{2}T^p$ yields no teleparallel correction to Minkowski spacetime for $p>1/2$. \item The explicit derivation of the first-order teleparallel $f(T) = T + \frac{\alpha}{2}T^p$ corrections to Schwarzschild geometry for $p=2$ to $p=10$, displayed in Eqs.~\eqref{afinX} and \eqref{bfinX} for $p=2$ and in \eqref{eq:p3} and \eqref{eq:p4} for $p=3$ and $p=4$, respectively.
The perturbed solutions for higher power-law parameter $p$ have the same structure, but they are lengthy and for this reason we do not present them. \end{itemize} The latter allowed us to calculate the teleparallel modifications of the photon sphere and the perihelion shift: two observables that are experimentally accessible and can be used to check the viability of $f(T)$ models. For both observables we find that the larger $p$, the smaller the influence of the teleparallel modification. Thus, the $p=2$ model is most strongly constrained by the perihelion shift of Mercury, which yields $|\alpha|_{\rm max}=2.20\cdot 10^{20} \,\mathrm{km}^{2}$ according to \cite{DeBenedictis:2016aze}. We expect to find further, stronger constraints for the different models by studying rotating black holes and their phenomenology. The results we presented improve and extend existing results on first-order power law $f(T)$-models, which were presented in~\cite{Ruggiero:2015oka} and \cite{DeBenedictis:2016aze}. In the first of these articles, the tetrad chosen was not compatible with a vanishing spin connection, and the field equations were incorrect. During our derivations, we paid particular attention to presenting all necessary steps in the perturbation theory, so that our results are easily reproducible. An important opportunity offered by our approach is to investigate the connection between the vanishing of the first-order teleparallel corrections around Minkowski spacetime and their non-vanishing around Schwarzschild geometry on the one hand, and the degrees of freedom of $f(T)$ gravity on the other; the latter are debated in the literature~\cite{Golovnev:2018wbh,Ferraro:2018axk,Ferraro:2018tpu}.
Here we considered static perturbations; in the future, non-static spherically and axially symmetric gravitational waves from weak power law $f(T)$ gravity will be investigated and complement the gravitational wave analysis of $f(T)$ gravity around Minkowski and FLRW geometry~\cite{Farrugia:2018gyz,Golovnev:2018wbh,Nunes:2018evm} and also at the astrophysical level with compact binary coalescences~\cite{Nunes:2019bjq}. This paper is a first step towards a complete phenomenological catalogue of observables, which shall be derived in weak power law $f(T)$-gravity to systematically check its consistency with observations. The next step in this program is to consider axially symmetric perturbations around Kerr spacetime, to derive the change in the photon regions, which will have an imprint on the predictions of the shape of the black hole's shadow. \begin{acknowledgments} The authors would like to thank Jackson Levi Said, Gabriel Farrugia, Andrew DeBenedictis and Sasa Ilijic for very fruitful discussions. CP was supported by the Estonian Research Council and the European Regional Development Fund through the Center of Excellence TK133 ``The Dark Side of the Universe''. KF gratefully acknowledges support by the DFG, within the Research Training Group \textit{Models of Gravity} and mobility funding from the European Regional Development Fund through \textit{Dora Plus}. SB is supported by Mobilitas Pluss N$^\circ$ MOBJD423 by the Estonian government. \end{acknowledgments}
\section{Introduction} In this paper, we study the behavior of solutions to the Cauchy problem \begin{equation} \label{THEWAVEEQ0} \begin{cases} \Box w=\mathcal{N}(\partial w), \\ w(0,x)=\Phi_0(x)+\epsilon \phi_0(x), \quad \partial_t w(0,x)= \Phi_1(x)+\epsilon \phi_1(x) \end{cases} \end{equation} in Minkowski space with initial data $\Phi_i(x)$, $\phi_i(x)\in C_0^{\infty}(\mathbb{R}^3)$. The nonlinearity $\mathcal{N}(\partial w)$ is assumed to satisfy the null condition, that is, $\mathcal{N}(0)=D\mathcal{N}(0)=0$ and the quadratic part of $\mathcal{N}(\partial w)$ is $A^{\alpha\beta}\partial_\alpha w\partial_\beta w$ with constant coefficients $A^{\alpha\beta}$ such that $A^{\alpha\beta}\xi_\alpha \xi_\beta=0$ whenever $\xi_0^2=\xi_1^2+\xi_2^2+\xi_3^2$. \bigskip In \cite{alinhac-sls}, S. Alinhac studied the stability of large solutions to the quasilinear wave equations \begin{equation*} \begin{cases} \Box w+g^{\alpha\beta\gamma}\partial_{\gamma}w\cdot \partial_{\alpha\beta}w=0,\\ w(0,x)=\Phi_0(x), \quad \partial_t w(0,x)= \Phi_1(x) \end{cases} \end{equation*} in Minkowski space, where $g^{\alpha\beta\gamma}$ are constants satisfying the null condition (see \cite{klnull}). More specifically, starting with a global solution $\Phi(t, x)\in C^\infty(\mathbb{R}^{3+1})$, consider the Cauchy problem with perturbed initial data $(\Phi(0, x)+\epsilon\phi_0, \partial_t\Phi(0, x)+\epsilon\phi_1)$. He showed that if $\Phi$ satisfies the condition \begin{equation} \label{alinhaccond} |g^{ij\gamma}\partial_{\gamma}\Phi\cdot\xi_i\xi_j|\leq \alpha_0 \sum\limits_{i=1}^{3}|\xi_i|^2,\quad \sum\limits_{|k|\leq 7}|\Gamma^k\partial \Phi|\leq C_0 (1+t)^{-1}(1+|r-t|)^{-\frac 1 2} \end{equation} for some positive constants $\alpha_0<1$ and $C_0$, then the solution exists globally and is close to $\Phi$. Here $\Gamma$ denotes the collection of Lorentz vector fields; see \cite{klinvar}.
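To illustrate the null condition stated above: the classical null form $Q_0$ corresponds to $A^{00}=1$, $A^{ii}=-1$ and all other components zero, so $A^{\alpha\beta}\xi_\alpha\xi_\beta=\xi_0^2-\xi_1^2-\xi_2^2-\xi_3^2$ vanishes identically on the null cone. A minimal numerical check on randomly sampled null covectors:

```python
import math
import random

# Null form Q0: A = diag(1, -1, -1, -1); A^{ab} xi_a xi_b vanishes
# whenever xi_0^2 = xi_1^2 + xi_2^2 + xi_3^2.
A = [[0.0] * 4 for _ in range(4)]
A[0][0] = 1.0
for i in (1, 2, 3):
    A[i][i] = -1.0

random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(3)]
    xi = [math.sqrt(sum(c * c for c in x))] + x  # xi_0^2 = |x|^2, so xi is null
    q = sum(A[a][b] * xi[a] * xi[b] for a in range(4) for b in range(4))
    assert abs(q) < 1e-12
```

A generic quadratic form, e.g. $A^{00}=1$ with all other components zero (the nonlinearity $(\partial_t w)^2$), fails this test, consistent with the blow-up phenomena discussed below.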
The problem of global stability of $\Phi$ can be reduced to the following small data Cauchy problem \begin{equation*} \begin{cases} \Box \phi+g^{\alpha\beta\gamma}\partial_{\gamma}\phi\cdot \partial_{\alpha\beta}\phi+g^{\alpha\beta\gamma}\partial_\alpha \Phi\partial_{\beta\gamma}\phi+g^{\alpha\beta\gamma} \partial_{\beta\gamma}\Phi\partial_\alpha\phi=0,\\ \phi(0,x)=\epsilon \phi_0(x), \quad \partial_t \phi(0,x)= \epsilon \phi_1(x) \end{cases} \end{equation*} with a given function $\Phi$ satisfying condition \eqref{alinhaccond}. The approach in \cite{alinhac-sls} relies on the vector field method. In particular, S. Alinhac used the scaling vector field $S=t\partial_t +r\partial_r$ with weights growing in $t$ as commutators. The use of such weighted vector fields requires one to make the rather strong assumption that the given solution $\Phi(t, x)$ decays uniformly in time $t$ as in \eqref{alinhaccond}. \bigskip In this paper, we use the approach developed in \cite{newapp}, \cite{yang1} to treat the problem of global stability of solutions to nonlinear wave equations. We use a new method for proving decay for the linear problem, developed by M. Dafermos and I. Rodnianski in \cite{newapp}. This new method avoids the use of vector fields containing positive weights in $t$, e.g., $S=t\partial_t+r\partial_r$, $L_i=x_i\partial_t+t\partial_i$. Traditionally, the vector fields from the set $\Gamma$, including $S$, $L_i$, are used as multipliers or commutators. The new approach only commutes the equation with $\partial_t$, $x_i\partial_j-x_j\partial_i$ and allows us to obtain the stability results under conditions on $\Phi$ weaker than those imposed by inequalities \eqref{alinhaccond}. We now describe the assumptions and the main results. \bigskip We denote $(\partial_t, \partial_{x_1}, \partial_{x_2}, \partial_{x_3})$ by $\partial$ and $(\partial_{x_1}, \partial_{x_2}, \partial_{x_3})$ by $\nabla$ in the coordinates $(t, x_1, x_2, x_3)$.
We also use the null coordinates $u=\frac{t-r}{2}$, $v=\frac{t+r}{2}$ defined by the standard polar coordinates $(t, r, \omega)$ in Minkowski space. The vector fields used as commutators are \[ Z=\{ \Omega_{ij}, \partial_t=T\},\quad \Omega_{ij}=x_i\partial_j-x_j\partial_i, \] where Greek indices run from 0 to 3 while the Latin indices run from 1 to 3. \begin{Def} \label{weakwave} We call $\Phi\in C^{\infty}(\mathbb{R}^{3+1})$ a $(\delta, \alpha, t_0, R_1, C_0)$-\textbf{weak wave} if \begin{enumerate}[(i):] \item $|\partial\Phi(t, x)|\leq C_0, \quad t\leq t_0$, \item $|\partial\Phi(t, x)|\leq C_0(1+r)^{-\frac 1 2}(1+(t-|x|)_+)^{-\frac 1 2-4\alpha}, \quad |x|\geq R_1,\quad t\geq t_0$, \item $|\partial_v \Phi(t, x)|\leq C_0 (1+r)^{-1-3\alpha}, \quad |x|\geq R_1 , \quad t\geq t_0 $, \item $|\partial \Phi(t, x)|\leq \delta\alpha(1+r)^{-1-\alpha}, \quad |x|\leq R_1 , \quad t\geq t_0$ \end{enumerate} for some positive constants $\delta, \alpha, t_0, R_1, C_0$, where $\partial_v=\partial_t+\partial_r$. Here $(t-|x|)_+=\max\{0, t-|x|\}$. Without loss of generality, we assume $\alpha\leq \frac{1}{4}$ and $R_1\leq t_0$. \end{Def} \begin{remark} A solution of the free wave equation in Minkowski space $\Box \Phi=0$ with compactly supported initial data decays uniformly in time $t$ and is always a $(\delta, \alpha, t_0, R_1, C_0)$-weak wave for some constants $\delta, \alpha, t_0, R_1, C_0$. We remark here that a weak wave does not have to decay uniformly in time $t$ in the cylinder $\{(t, x)||x|\leq R_1\}$. \end{remark} In our argument, we estimate the decay of the solution with respect to the foliation $\Sigma_{\tau}$, defined as \begin{align*} &S_\tau:=\{u=u_\tau, v\geq v_\tau\},\\ &\Sigma_\tau:=\{t=\tau, r\leq R\}\cup S_\tau, \end{align*} where $u_\tau=\frac{\tau-R}{2}$, $v_\tau=\frac{\tau+R}{2}$. The radius $R$ is a constant to be fixed later.
The corresponding energy flux is $$ E[\phi](\tau):=\int_{r\leq R}|\partial\phi|^2dx + \int_{S_\tau}\left(|\partial_v\phi|^2+|\mbox{$\nabla \mkern-13mu /$\,}\phi|^2\right)r^2dvd\omega, $$ where $\mbox{$\nabla \mkern-13mu /$\,}$ denotes the induced covariant derivative on the sphere of radius $r$. We denote \[ E_{0}=\sum\limits_{|k|\leq 4}\int_{\mathbb{R}^{3}}|\partial Z^k\tilde\phi(0, x)|^2dx, \] where $\tilde\phi(0, x)=\phi_0(x)$, $\partial_t\tilde\phi(0, x)=\phi_1(x)$. Here $k$ stands for a multi-index: if $k=(k_1, k_2)$, then $Z^k=\Omega^{k_1}T^{k_2}$, $\Omega= \Omega_{ij}$, and $k\leq \tilde{k}$ means $k_1\leq \tilde{k}_1$, $k_2\leq \tilde{k}_2$. In addition to the assumption that the nonlinearity $\mathcal{N}(\partial w)$ satisfies the null condition, we assume $\mathcal{N}$ is smooth and \[ \mathcal{N}(\partial\Phi+\partial\phi)=\mathcal{N}(\partial\Phi)+A^{\mu\nu}\partial_\mu\Phi\partial_\nu\phi+\mathcal{N}^{\mu}(\partial\Phi)\partial_\mu\phi +\mathcal{N}^{\mu\nu}(\partial\Phi)\partial_\mu\phi\partial_\nu\phi+O(|\partial\phi|^3), \] when $\partial\phi$ is small. The coefficients $\mathcal{N}^\mu(\partial\Phi)$, $\mathcal{N}^{\mu\nu}(\partial\Phi)$ satisfy \begin{equation} \label{HH} \begin{split} & |Z^\beta\mathcal{N}^{\mu}(\partial\Phi)|\leq C(\partial\Phi)\sum\limits_{|\beta'|\leq |\beta|}|Z^{\beta'}\partial\Phi|^{2+\alpha_0},\quad \forall |\beta|\leq 4,\\ &|Z^\beta\mathcal{N}^{\mu\nu}(\partial\Phi)|\leq C(\partial\Phi)\sum\limits_{|\beta'|\leq |\beta|}|Z^{\beta'}\partial\Phi|^{\alpha_0},\quad\forall |\beta|\leq 4 \end{split} \end{equation} for some positive constant $\alpha_0$. The constant $C(\partial\Phi)$ depends only on $\sum\limits_{|\beta|\leq4}\|Z^\beta \partial\Phi\|_{C^0}$. We now state our main results. \begin{thm} \label{maintheorem} Suppose the nonlinearity $\mathcal{N}(\partial w)$ satisfies the null condition and condition \eqref{HH}.
Let $\Phi\in C^{\infty}(\mathbb{R}^{3+1})$ be a solution of \eqref{THEWAVEEQ0} when $\epsilon=0$. Assume $Z^k\Phi$ is a $(\delta, \alpha, t_0, R_1, C_0)$-weak wave for all $|k|\leq 4$. Suppose the initial data $\phi_0(x), \phi_1(x)$ are smooth and supported in $\{|x|\leq R_0\}$. Then there exists $\delta_0>0$, depending only on the constants $A^{\alpha\beta}$, and $\epsilon_0>0$, depending on $E_{0}$, $R_0$, $A^{\alpha\beta}$, $\alpha_0$, $\alpha$, $t_0$, $R_1$, $C_0$, such that for any $\delta<\delta_0$, $\epsilon<\epsilon_0$, there exists a unique global smooth solution $w$ of equation \eqref{THEWAVEEQ0} with the property that there exists a positive constant $R$, depending on $t_0$, $\alpha$, $\alpha_0$, $R_1$, $C_0$, $R_0$, such that for the foliation $\Sigma_\tau$ with radius $R$, the difference $\phi=w-\Phi$ satisfies \begin{itemize} \item[(1)] Energy decay $$E[\phi](\tau)\leq C E_0\epsilon^2 (1+\tau)^{-1-\frac 1 2\alpha'},\quad \alpha'=\min\{\frac{\alpha_0}{6}, \alpha\}. $$ \item[(2)] Pointwise decay: \begin{align*} &\qquad\quad|\phi|\leq C\sqrt{E_0}\epsilon (1+r)^{-1},\\ &\sum\limits_{|\beta|\leq 2}|\partial^\beta\phi|\leq C \sqrt{E_0}\epsilon (1+r)^{-\frac 1 2}(1+|t-r+R|)^{-\frac 1 2-\frac{\alpha'}{4}},\quad \alpha'=\min\{\frac{\alpha_0}{6}, \alpha\}, \end{align*} \end{itemize} where $C$ depends on $R$, $\alpha_0$, $\alpha$, $t_0$, $R_1$, $C_0$. \end{thm} \begin{remark} The weak decay of $\partial\Phi$ in the spatial direction ($(1+|x|)^{-\frac 1 2}$) excludes general cubic nonlinearities of $\mathcal{N}(\partial\Phi)$ (cubic nonlinearities satisfying the null condition are allowed).
However, if condition $(ii)$ in the definition of \textbf{weak wave} $\Phi$ is improved to \begin{equation*} \sum\limits_{|\beta|\leq 4}|\partial Z^{\beta}\Phi|\leq C_0(1+r)^{-\frac 1 2-\alpha}(1+(t-|x|)_+)^{-\frac 1 2-4\alpha}, \end{equation*} then it is sufficient to assume \[ |Z^\beta\mathcal{N}^{\mu}(\partial\Phi)|\leq C(\partial\Phi)\sum\limits_{|\beta'|\leq |\beta|}|Z^{\beta'}\partial\Phi|^{2},\quad \forall |\beta|\leq 4. \] This allows any cubic (or higher) nonlinearity of $\mathcal{N}(\partial w)$. \end{remark} \bigskip Since $\Phi(t, x)$ solves \eqref{THEWAVEEQ0} for $\epsilon=0$, the problem of global stability of $\Phi$ is then reduced to the following small data Cauchy problem \begin{equation} \label{THEWAVEEQ} \begin{cases} \Box\phi+N(\Phi, \phi)+L(\partial\phi)=F(\partial\phi),\\ \phi(0, x)= \epsilon\phi_0(x), \quad \phi_t(0, x)=\epsilon\phi_1(x), \end{cases} \end{equation} where $N(\Phi, \phi)=B^{\alpha\beta}\partial_\alpha\Phi\cdot\partial_\beta\phi$, $L(\partial\phi)=L^\mu(t, x)\partial_\mu\phi$. The nonlinearity $F(\partial\phi)$ is of the form \begin{align*} &F(\partial\phi)=A^{\mu\nu}\partial_\mu \phi\partial_\nu\phi+Q(\partial\phi)+\textnormal{cubic and higher order terms of } \partial\phi,\\ &Q(\partial\phi)=h^{\mu\nu}(t,x)\partial_\mu\phi\partial_\nu\phi. \end{align*} Here $A^{\mu\nu}$, $B^{\mu\nu}$ are constants satisfying the null condition \cite{klnull} and $\Phi(t, x)$, $L^\mu(t, x)$, $h^{\mu\nu}(t,x)$ are given functions. For the stability problem, we have $B^{\mu\nu}=-2A^{\mu\nu}$ and $\phi=w-\Phi$.
However, it is of independent interest to consider the above small data Cauchy problem with linear terms $N(\Phi, \phi)$, $L^{\mu}\partial_\mu\phi$ and quadratic terms $h^{\mu\nu}(t, x)\partial_\mu\phi\partial_\nu\phi$ where the functions $\Phi(t, x)$, $L^\mu(t, x)$, $h^{\mu\nu}(t, x)$ decay rather weakly, given as follows: For positive constants $\delta$, $\alpha$, $t_0$, $R_1$, $C_0$, we assume $Z^k\Phi$ is $(\delta, \alpha, t_0, R_1, C_0)$-weak wave, $\forall |k|\leq 4$ and \[ |\partial^2 Z^\beta \Phi|\leq C_0,\quad \forall |\beta|\leq 2. \] Similarly, we assume \[ |\partial Z^\beta L^\mu|\leq C_0, \quad \forall |\beta|\leq 2. \] For $t\leq t_0$, we assume $Z^\beta L^\mu(t, x)$, $Z^\beta h^{\mu\nu}(t,x)$ are bounded, that is, \[ |Z^\beta L^\mu(t, x)|+|Z^\beta h^{\mu\nu}(t, x)|\leq C_0,\quad \forall |\beta|\leq 4, \quad \forall t\leq t_0. \] For $t\geq t_0$, we assume \[ |Z^{\beta}h^{\mu\nu}(t, x)|\leq C_0 (1+|x|)^{-\frac{3}{2}\alpha},\quad \forall |\beta|\leq 4, \quad \forall t\geq t_0 \] and $L^\mu(t, x)$ satisfies \textbf{one} of the following two conditions \begin{equation} \label{Lcond1} |Z^{\beta}L^\mu (t, x)|\leq \delta \alpha (1+|x|)^{-1-3\alpha},\quad \forall |\beta|\leq 4,\quad \forall t\geq t_0 \end{equation} or \begin{equation} \label{Lcond2} |Z^{\beta}L^\mu (t, x)|\leq C_0 (1+|x|)^{-1-3\alpha}(1+(t-|x|)_{+})^{-\alpha},\quad \forall |\beta|\leq 4,\quad \forall t\geq t_0. \end{equation} Theorem \ref{maintheorem} follows from: \begin{thm} \label{maintheorem2} Let $\Phi(t, x)$, $L^\mu(t, x)$, $h^{\mu\nu}(t,x )$ be given smooth functions satisfying the above conditions. $A^{\mu\nu}$, $B^{\mu\nu}$ are constants satisfying the null condition. Assume the initial data $\phi_0(x)$, $\phi_1(x)$ are smooth and supported in $\{|x|\leq R_0\}$. 
Then there exists $\delta_0>0$, depending only on the constants $B^{\mu\nu}$, and $\epsilon_0>0$, depending on $E_{0}$, $R_0$, $A^{\mu\nu}$, $B^{\mu\nu}$, $\alpha, t_0, R_1, C_0$, such that for any $\delta<\delta_0$, $\epsilon<\epsilon_0$, there exists a unique global smooth solution $\phi$ of the equation \eqref{THEWAVEEQ} with the property that there exists a positive constant $R$, depending on $t_0, \alpha, R_1, C_0, R_0$, such that for the foliation $\Sigma_\tau$ with radius $R$, the solution $\phi$ satisfies \begin{itemize} \item[(1)] Energy decay $$E[\phi](\tau)\leq C E_0\epsilon^2 (1+\tau)^{-1-\frac 1 2\alpha}. $$ \item[(2)] Pointwise decay \begin{align*} &\qquad\quad|\phi|\leq C\sqrt{E_0}\epsilon (1+r)^{-1},\\ &\sum\limits_{|\beta|\leq 2}|\partial^\beta\phi|\leq C \sqrt{E_0}\epsilon (1+r)^{-\frac 1 2}(1+|t-r+R|)^{-\frac 1 2-\frac{\alpha}{4}}, \end{align*} \end{itemize} where $C$ depends on $R$, $\alpha_0$, $\alpha, t_0, R_1, C_0$. \end{thm} \bigskip \begin{remark} Notice that $\alpha$ can be arbitrarily small. The decay assumptions on $L^\mu(t, x)$ (condition \eqref{Lcond1}) and $h^{\mu\nu}(t, x)$ are sharp in the sense that there exists a soliton solution (independent of time $t$) to the linear wave equation if $L^\mu(t, x)$ behaves like $(1+|x|)^{-1}$, and any nontrivial $C^3$ solution of the equation $$\Box \phi=\phi_t^2$$ with compactly supported initial data blows up in finite time \cite{johnblowup}. \end{remark} \begin{remark} We can also consider equation \eqref{THEWAVEEQ} with a zeroth-order linear term $L_0(t, x) \phi$, leading to the same conclusion provided that $L_0(t, x)$ decays like $(1+|x|)^{-3-a}$. Hence for the stability problem of large solutions (Theorem \ref{maintheorem}), specific dependence on $w$ of the nonlinearity $\mathcal{N}(w)$ is also allowed. \end{remark} \begin{remark} For simplicity, we consider the equations in Minkowski space.
However, as in \cite{yang1}, the same conclusion holds on a curved background $(\mathbb{R}^{3+1}, g)$ with metric $g$ merely $C^1$ close to the Minkowski metric and coinciding with the Minkowski metric outside the cylinder $\{(t, x)||x|\leq R\}$. \end{remark} \begin{remark} It is not necessary to require that the initial data have compact support. The general assumption on the initial data can be that the following quantity \[ \sum\limits_{|k|\leq 4}\int_{\mathbb{R}^{3}}r^{1+\alpha}|\partial Z^k\tilde\phi(0, x)|^2dx, \quad \tilde\phi(0, x)=\phi_0(x), \quad\partial_t\tilde\phi(0, x)=\phi_1(x), \] is finite. \end{remark} The small data global existence result for nonlinear wave equations satisfying the null condition in Minkowski space was first obtained by D. Christodoulou \cite{ChDNull} and S. Klainerman \cite{klnull}. The approach of \cite{klnull} used the vector field method, introduced by S. Klainerman in \cite{klinvar}. Various applications of the vector field method to nonlinear wave equations can be found in ~\cite{klmulti}, \cite{gx-lindblad2}, ~\cite{sogge-metcalfe-nakamura}, ~\cite{sogge-metcalfe}, ~\cite{sideris-multispeed}, ~\cite{soggemulti}. In particular, the celebrated global nonlinear stability of Minkowski space has been proven by Christodoulou-Klainerman \cite{kcg} and later by Lindblad-Rodnianski \cite{SMigor}. \bigskip The main difficulty of considering the nonlinear wave equation \eqref{THEWAVEEQ} with linear terms $B^{\mu\nu}\partial_\mu\Phi(t, x)\partial_\nu\phi$, $L^\mu(t, x)\partial_\mu\phi$ and quadratic terms $h^{\mu\nu}(t, x)\partial_\mu\phi\partial_\nu\phi$ is the rather weak decay of the functions $\Phi(t, x)$, $L^\mu(t, x)$, $h^{\mu\nu}(t, x)$. Previous works have relied on the fact that these functions decay to zero uniformly in time $t$, which is not necessary in this context. In fact, we even allow these functions to stay static (independent of $t$) in the cylinder $\{(t, x)||x|\leq R_1\}$.
Although we require $\delta$ to be sufficiently small, which is the only smallness assumption here, $\delta$ depends only on the constants $B^{\mu\nu}$. \bigskip Our argument here is similar to that in ~\cite{yang1}, which relies on a new approach, developed by M. Dafermos and I. Rodnianski in ~\cite{newapp}. This new approach is a combination of an integrated local energy inequality and a p-weighted energy inequality in a neighborhood of the null infinity. However, due to the weak decay of $\partial\Phi$, we are not able to obtain the integrated local energy inequality and the p-weighted energy inequalities separately as in ~\cite{yang1}. We thus consider these two inequalities together; see Proposition \ref{mainprop} in Section 2 for details. \bigskip The plan of this paper is as follows: we establish an integrated energy inequality in the whole spacetime and two p-weighted energy inequalities in Section 2. In Section 3, we use Proposition ~\ref{mainprop} to obtain the decay of the energy as well as the pointwise decay of the solution under appropriate bootstrap assumptions; in the last two sections, we close our bootstrap argument and conclude our main theorems. \textbf{Acknowledgements} The author is deeply indebted to his advisor Igor Rodnianski for suggesting this problem. He thanks Igor Rodnianski for sharing numerous valuable thoughts as well as many helpful comments on the manuscript. \section{Notations and Preliminaries} In Minkowski space, we recall the energy-momentum tensor \[ {\mathbb T}_{\mu\nu}[\phi]=\partial_\mu\phi\partial_\nu\phi-\frac12 g_{\mu\nu}\partial^{\gamma}\phi\partial_{\gamma}\phi. \] Given a vector field $X$, we define the currents \[ J^X_\mu[\phi]= {\mathbb T}_{\mu\nu}[\phi]X^\nu, \qquad K^X[\phi]= {\mathbb T}^{\mu\nu}[\phi]\pi^X_{\mu\nu}, \] where $\pi^X_{\mu\nu}=\frac12 \mathcal{L}_Xg_{\mu\nu}$ is the deformation tensor of the vector field $X$. Recall that \[ D^\mu J^X_\mu[\phi] = X(\phi)\Box\phi+K^X[\phi].
\] Let $n$ be the unit normal vector field to hypersurfaces, $d\sigma$ the induced surface measure. We denote by $d\textnormal{vol}$ the volume form in Minkowski space. In null coordinates, we define the null infinity from $\tau_1$ to $\tau_2$ as follows \begin{equation*} \mathcal I_{\tau_1}^{\tau_2} :=\{(u,v,\omega)|u_{\tau_1}\leq u \leq u_{\tau_2}, v=\infty\}. \end{equation*} The corresponding energy flux is $$I[\phi]_{\tau_1}^{\tau_2}:=\left.\int_{\mathcal I_{\tau_1}^{\tau_2}} \left((\partial_u\phi)^2+|\mbox{$\nabla \mkern-13mu /$\,}\phi|^2\right) r^2dud\omega\right|_{v=\infty}, $$ which is interpreted as a limit when $v\rightarrow\infty$. Denote $$\tilde{E}[\phi](\tau)=E[\phi](\tau)+I[\phi]_{0}^{\tau}. $$ Here $E[\phi](\tau)$ is defined in the introduction with the foliation $\Sigma_\tau$. \bigskip Taking a vector field $$X=f(r)\partial_r,$$ for some function $f(r)$, consider the region bounded by the hypersurfaces $\Sigma_{\tau_1}$ and $\Sigma_{\tau_2}$. By Stokes' formula, we have the identity \begin{align} \notag&\quad\int_{{\Sigma}_{\tau_1}}J^X_\mu[\phi]n^\mu d\sigma - \int_{{\Sigma}_{\tau_2}}J^X_\mu[\phi]n^\mu d\sigma-\int_{\mathcal I_{\tau_1}^{\tau_2}}J^X_\mu[\phi]n^\mu d\sigma\\ &=\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}D^\mu J^X_{\mu}[\phi]d\textnormal{vol}=\int_{\tau_1}^{\tau_2} \int_{\Sigma_\tau}\Box\phi \cdot X(\phi) + K^X[\phi] d\textnormal{vol}, \label{energyeq} \end{align} where \begin{align*} &K^X[\phi]=\mathbb{T}^{\mu\nu}[\phi]\pi^X_{\mu\nu}=(\frac 1 2 f'+r^{-1}f) (\partial_t\phi)^2+(\frac 1 2 f'-r^{-1}f)(\partial_r\phi)^2-\frac 1 2 f'|\mbox{$\nabla \mkern-13mu /$\,}\phi|^2. \end{align*} We use $f'$ to denote $\partial_r f$ throughout this paper. Choosing another function $\chi$ of $r$, we have the identity \begin{align*} -\chi\partial^{\gamma}\phi\partial_{\gamma}\phi + \frac 1 2\Box\chi\cdot\phi^2 =\frac 1 2 D^{\mu}\left(\partial_{\mu}\chi\cdot \phi^2 - \chi\partial_{\mu}\phi^2\right) + \chi\phi\Box\phi.
\end{align*} Add the above identity to both sides of \eqref{energyeq}. Define the current \begin{equation*} \label{mcurent} \tilde{J}_{\mu}^X[\phi]=J_{\mu}^X[\phi] - \frac 1 2\partial_{\mu}\chi \cdot\phi^2 + \frac 1 2 \chi\partial_{\mu}\phi^2. \end{equation*} Then we get \begin{align} \label{menergyeq} &\int_{{\Sigma}_{\tau_1}}\tilde{J}^X_\mu[\phi]n^\mu d\sigma - \int_{{\Sigma}_{\tau_2}}\tilde{J}^X_\mu[\phi]n^\mu d\sigma-\int_{\mathcal I_{\tau_1}^{\tau_2}}\tilde{J}^X_\mu[\phi]n^\mu d\sigma\\ \notag&=\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau} (f\partial_r\phi+\phi\chi)\Box \phi+ (r^{-1}f + \frac 1 2 f' - \chi)(\partial_t\phi)^2\\ \notag&\qquad +(\chi-r^{-1}f+\frac 1 2 f')(\partial_r\phi)^2+(\chi-\frac 1 2 f')|\mbox{$\nabla \mkern-13mu /$\,}\phi|^2- \frac 1 2\Box\chi\cdot\phi^2. \end{align} For $X=T=\partial_t$ in ~\eqref{energyeq}, we have the energy inequality \begin{equation} \label{energyeqT} \tilde{E}[\phi](\tau_2)\leq \tilde{E}[\phi](\tau_1)+2\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}|\Box \phi||\partial_t\phi|d\textnormal{vol}. \end{equation} We now fix the radius $R$ of the foliation $\Sigma_\tau$. First, under the assumptions in Theorem \ref{maintheorem2}, we choose new positive constants $\alpha'$, $t_0'$, $R_1'$ such that \begin{align} \label{impsmall} &|\partial_v Z^k\Phi|\leq \delta\alpha' (1+|x|)^{-1-3\alpha'}, \quad t\geq t'_0,\quad |x|\geq R_1',\quad |k|\leq 4,\\ \label{impcond4} &|\partial Z^k\Phi|\leq 2\delta \alpha'(1+|x|)^{-1-\alpha'}, \quad t\geq t_0', \quad |x|\leq R_1',\quad |k|\leq 4,\\ \label{t0} & (1+t_0')^{\alpha'} \delta \alpha'\geq C_0. \end{align} In fact, since $Z^k\Phi$ is a $(\delta, \alpha, t_0, R_1, C_0)$-weak wave for all $|k|\leq 4$, choose $\alpha'=\frac 1 2 \alpha$ and $R_1'$ large enough such that \eqref{impsmall} holds. Then \eqref{impcond4} and \eqref{t0} are satisfied if $t_0'$ is sufficiently large. The other conditions are also satisfied for these new constants $\alpha'$, $t_0'$, $R_1'$.
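The pointwise identity used to define the modified current $\tilde J^X_\mu$, namely $-\chi\,\partial^{\gamma}\phi\partial_{\gamma}\phi + \frac 1 2\Box\chi\cdot\phi^2 = \frac 1 2 D^{\mu}\left(\partial_{\mu}\chi\cdot \phi^2 - \chi\partial_{\mu}\phi^2\right) + \chi\phi\Box\phi$, holds for arbitrary smooth $\phi$ and $\chi$ (in the paper $\chi$ depends only on $r$, but this is not needed). A symbolic check, assuming sympy is available and using the mostly-minus signature $(+,-,-,-)$:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
phi = sp.Function('phi')(t, x, y, z)
chi = sp.Function('chi')(t, x, y, z)

coords = [t, x, y, z]
eta = [1, -1, -1, -1]  # Minkowski metric, signature (+,-,-,-)

def box(f):
    # wave operator: partial^mu partial_mu f
    return sum(eta[m] * sp.diff(f, coords[m], 2) for m in range(4))

def contract(f, g):
    # partial^mu f partial_mu g
    return sum(eta[m] * sp.diff(f, coords[m]) * sp.diff(g, coords[m])
               for m in range(4))

lhs = -chi * contract(phi, phi) + sp.Rational(1, 2) * box(chi) * phi**2
div = sum(eta[m] * sp.diff(sp.diff(chi, coords[m]) * phi**2
                           - chi * sp.diff(phi**2, coords[m]), coords[m])
          for m in range(4))
rhs = sp.Rational(1, 2) * div + chi * phi * box(phi)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

In flat Cartesian coordinates the covariant divergence $D^\mu$ reduces to the ordinary one, so this confirms the cancellation of the $\chi\phi\Box\phi$ and first-derivative cross terms behind \eqref{menergyeq}.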
Then the radius $R$ can be fixed as \[ R=t_0'+R_0, \] where $R_0$ is the radius of the support of the initial data. To avoid too many constants, we still use the constants $\alpha$, $R_1$, $t_0$ to denote $\alpha'$, $R_1'$, $t_0'$ respectively in the sequel. The following several lemmas, which have been proven in \cite{yang1}, will be used later on. \begin{lem} \label{lem1} On $S_\tau\cup\mathcal I_{0}^{\tau}$ \begin{equation*} \label{sphereinbd} r\int_{\omega}|\phi|^2 d\omega\leq \tilde{E}[\phi](\tau). \end{equation*} \end{lem} \begin{lem} \label{lem2} If $\phi$ is smooth, then \begin{equation*} \label{phiboundH} \int_{r\leq R}\left(\frac{\phi}{1+r}\right)^2dx+\int_{S_{\tau}}\left(\frac{\phi}{1+r}\right)^2r^2dvd\omega\leq 6\tilde{E}[\phi](\tau). \end{equation*} \end{lem} \begin{cor} \label{cor1} In the exterior region $r\geq R$ \begin{equation*} \label{phipsieq} \left|\int_{S_\tau}|\partial_v(r\phi)|^2dvd\omega-\int_{S_\tau}|\partial_v\phi|^2\, r^2dvd\omega \right|\leq 2\tilde{E}[\phi](\tau). \end{equation*} \end{cor} \begin{lem} \label{prop1} Suppose $f$ and $\chi$ satisfy $$|f|\leq C_1, \quad|\chi|\leq \frac{C_1}{1+r}, \quad|\chi'|\leq \frac{C_1}{(1+r)^{2}}$$ for some constant $C_1$, then \begin{equation*} \label{bdrycontrol} \left|\int_{\Sigma_\tau}\tilde{J}_{\mu}^X[\phi]n^{\mu} d\sigma\right|\leq 6C_1\tilde{E}[\phi](\tau). \end{equation*} \end{lem} \begin{remark} If $\tilde{E}[\phi](\tau)$ is finite, all the above statements are also valid if we replace $\tilde{E}[\phi](\tau)$ with $E[\phi](\tau)$.
\end{remark} \bigskip Finally, we denote \begin{equation*} \begin{split} &\bar\partial\phi:=(\partial\phi, \frac{\phi}{1+r}), \qquad \overline{\partial_v}\phi:=(\partial_v\phi, \mbox{$\nabla \mkern-13mu /$\,}\phi),\\ & g(p, \tau):=\int_{S_\tau}r^{p}|\partial_v\psi|^2dvd\omega, \qquad \bar g(p, \tau):= \int_{S_\tau}r^{p}|\overline{\partial_v}\psi|^2dvd\omega,\\ &G[\beta, p]_{\tau_1}^{\tau_2}:=\int_{\tau_1}^{\tau_2}(1+\tau)^{-\beta}g(p, \tau)d\tau, \qquad \bar G[\beta, p]_{\tau_1}^{\tau_2}:=\int_{\tau_1}^{\tau_2}(1+\tau)^{-\beta}\bar g(p, \tau)d\tau,\\ &D^\beta[F]_{\tau_1}^{\tau_2}:=\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}|F|^2(1+r)^{\beta+1}d\textnormal{vol} \end{split} \end{equation*} for all $p\geq0$ and $\beta\in \mathbb{R}$. Here $\psi=r\phi$, $\partial_v=\partial_t+\partial_r$ and $\bar{\partial}\phi$ is a four dimensional vector with norm $|\bar{\partial}\phi|^2=|\partial\phi|^2+\frac{\phi^2}{(1+r)^2}$, and similarly for $\overline{\partial_v}\phi$. Throughout this paper, we use the notation $A\lesssim B$ for the inequality $A\leq C B$ with some constant $C$ depending on $R$, $A^{\mu\nu}$, $B^{\mu\nu}$, $\alpha$, $t_0$, $R_1$, $C_0$. \section{Weighted Energy Inequalities} In this section, we use the multiplier method to establish an integrated local energy inequality and two p-weighted energy inequalities. The integrated local energy inequality was first proven by C. S. Morawetz in \cite{mora2}. We follow the method in \cite{dr3} to obtain the integrated local energy inequality here. In \cite{newapp}, M. Dafermos and I. Rodnianski introduced the p-weighted energy inequalities in a neighborhood of null infinity. These two estimates, on which the decay of the energy flux $E[\phi](\tau)$ relies, were shown separately in \cite{yang1}. Due to the weak decay of the functions $\Phi(t, x)$, $L^\mu(t, x)$, we are not able to show these two estimates separately. We hence consider them together.
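\bigskip Before turning to the estimates, we record for the reader's convenience the elementary identity behind Corollary \ref{cor1}. Since $\partial_v r=1$, we have $\partial_v\psi=r\partial_v\phi+\phi$, and hence
\begin{equation*}
|\partial_v\psi|^2=r^2|\partial_v\phi|^2+2r\phi\,\partial_v\phi+\phi^2=r^2|\partial_v\phi|^2+\partial_v(r\phi^2).
\end{equation*}
Integrating over $S_\tau$, the last term contributes only the values of $\int_{\omega}r\phi^2d\omega$ at the two ends of $S_\tau$, each of which is bounded by $\tilde{E}[\phi](\tau)$ in view of Lemma \ref{lem1}; this gives Corollary \ref{cor1}. We also note the obvious comparison $g(p, \tau)\leq \bar g(p, \tau)$ for all $p\geq 0$.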
\bigskip Consider the following linear wave equation \begin{equation} \label{LWAVEEQ} \begin{cases} \Box\phi+N(\Phi, \phi)+L(\partial\phi)=F,\\ \phi(0, x)=\epsilon \phi_0(x), \quad\partial_t\phi(0, x)=\epsilon\phi_1(x), \end{cases} \end{equation} where $ N(\Phi, \phi)=B^{\mu\nu}\partial_\mu\Phi\cdot\partial_\nu\phi$, $L(\partial\phi)=L^\mu(t,x) \partial_\mu\phi$. We have the following key estimates. \begin{prop} \label{mainprop} Suppose that $\Phi$ is a $(\delta, \alpha, t_0, R_1, C_0)$-weak wave for positive constants $\delta, \alpha, t_0, R_1, C_0$. Assume the given functions $L^\mu(t,x)$ satisfy \begin{align*} |L^\mu(t, x)|\leq C_0,\quad t\leq t_0 \end{align*} and \textbf{one} of the following two conditions \[ |L^\mu(t, x)|\leq \delta \alpha(1+|x|)^{-1-3\alpha},\quad t\geq t_0, \] or \[ |L^\mu (t, x)|\leq C_0 (1+|x|)^{-1-3\alpha}(1+(t-|x|)_{+})^{-\alpha},\quad t\geq t_0. \] Suppose the constants $\alpha, t_0, \delta,C_0$ obey the relation \eqref{t0}. Then there exists $\delta_0>0$, depending only on the constants $B^{\mu\nu}$, such that for all $ \delta<\delta_0$, the solution $\phi$ of equation \eqref{LWAVEEQ} has the following properties: \begin{itemize} \item[(1)] Integrated local energy estimate \begin{equation} \label{ILE0} \begin{split} \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{\alpha+1}}dxd\tau &\lesssim E[\phi](\tau_1)+D^{\alpha}[F]_{\tau_1}^{\tau_2}+(1+\tau_1)^{-\alpha}D^{2\alpha}[F]_{\tau_1}^{\tau_2}\\ &+(1+\tau_1)^{-1-2\alpha}\left(g(1+2\alpha, \tau_2)+\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau\right). \end{split} \end{equation} \item[(2)] Energy bound \begin{equation} \label{eb} \begin{split} E[\phi](\tau_2)&\lesssim E[\phi](\tau_1)+D^{\alpha}[F]_{\tau_1}^{\tau_2}+(1+\tau_1)^{-\alpha}D^{2\alpha}[F]_{\tau_1}^{\tau_2}\\ &+(1+\tau_1)^{-1-2\alpha}\left(g(1+2\alpha, \tau_2)+\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau\right).
\end{split} \end{equation} \item[(3)] p-weighted energy inequalities in a neighborhood of null infinity \begin{align} \notag g(1, \tau_2)+ \int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau&\lesssim g(1, \tau_1)+\tau_1^{1-\alpha} D^{2\alpha}[F]_{\tau_1}^{\tau_2}+\tau_1^{1-\alpha}E[\phi](\tau_1)+\int_{\tau_1}^{\tau_2}(1+\tau)^{-\alpha}D^{2\alpha} [F]_{\tau_1}^{\tau_2}\\ \label{pWE1} &+(1+\tau_1)^{-2\alpha}\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau+(1+\tau_1)^{-2\alpha}g(1+2\alpha, \tau_1),\\ \notag g(1+2\alpha, \tau_2)+ \bar G[0, 2\alpha]_{\tau_1}^{\tau_2}&\lesssim g(1+2\alpha, \tau_1)+(1+\tau_1)^{1-\alpha}E[\phi](\tau_1) \\ \label{pWE1a} &+\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau+(1+\tau_1)^{1+\frac 1 2\alpha} D^{2\alpha}[F]_{\tau_1}^{\tau_2}. \end{align} \end{itemize} \end{prop} \begin{remark} We mention here that variants and generalizations of estimate \eqref{ILE0} can also be found in \cite{sogge-metcalfe2}, \cite{sogge-metcalfe}. \end{remark} The following corollary will be used to derive the energy decay estimates when commuting the equation with the vector fields $Z$. \begin{cor} \label{D2aQp} Assume the given functions $\Phi(t, x)$, $L^\mu(t, x)$ and the constant $\delta$ satisfy the conditions in the above proposition. Then for the solution $\phi$ of \eqref{LWAVEEQ}, we have the following estimate for $N=B^{\mu\nu}\partial_\mu\Phi\cdot\partial_\nu\phi$ and $L=L^\mu\partial_\mu\phi$: \begin{equation*} \label{D2aQ} \begin{split} D^{2\alpha}[N]_{\tau_1}^{\tau_2}+D^{2\alpha}[L]_{\tau_1}^{\tau_2}&\lesssim E[\phi](\tau_1)+D^{\alpha}[F]_{\tau_1}^{\tau_2}+(1+\tau_1)^{-\alpha}D^{2\alpha}[F]_{\tau_1}^{\tau_2}\\ &\qquad+(1+\tau_1)^{-1-2\alpha}\left(g(1+2\alpha, \tau_2)+\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau\right).
\end{split} \end{equation*} \end{cor} Under appropriate bootstrap assumptions on the nonlinearity $F$, the above inequalities lead to decay of the energy flux $E[\phi](\tau)$. We discuss the integrated local energy inequality and the p-weighted energy inequalities separately, and then combine them to prove the above proposition. The following two lemmas will be used frequently. First define \[ A=10\sup\limits_{\mu, \nu} \{|B^{\mu\nu}|\}. \] \begin{lem} \label{lnullQ} Let $N=B^{\mu\nu}\partial_\mu\Phi\partial_\nu\phi$. Then \begin{equation*} \label{nullQ} |rN|\leq A\left(|\partial\Phi||\overline{\partial_v}\psi|+|\partial_v\Phi||\partial\psi|+|\partial\Phi||\phi|\right),\quad \psi=r\phi. \end{equation*} \end{lem} \begin{proof} By our notation \[ r N=rB^{\mu\nu}\partial_\mu\Phi\cdot\partial_\nu\phi=B^{\mu\nu}\partial_\mu\Phi\cdot\partial_\nu\psi-B^{\mu\nu}\partial_\mu\Phi\partial_\nu r\cdot\phi. \] The lemma then follows from the fact that $B^{\mu\nu}$ satisfies the null condition and the inequality $|\partial r|\leq 1$. \end{proof} \begin{lem}[Gronwall's Inequality] \label{lGronwall} Suppose $A(\tau)$, $E(\tau)$ are nonnegative functions on $[\tau_1, \tau_2]$. Assume that $E(\tau)$ is nondecreasing on this interval and $\beta$ is a positive number. If \[ A(\tau)\leq E(\tau)+C\int_{\tau_1}^{\tau}(1+s)^{-1-\beta}A(s)ds, \qquad \forall\tau\in[\tau_1, \tau_2], \] then \begin{equation*} \label{Gronwall} A(\tau)\leq \exp\left(C{\beta}^{-1}(1+\tau_1)^{-\beta}\right)E(\tau), \qquad \forall\tau\in[\tau_1, \tau_2]. \end{equation*} \end{lem} \begin{proof} See ~\cite{sogge}. \end{proof} \subsection{Integrated Local Energy Inequality} We follow the idea used in ~\cite{dr3} by choosing appropriate functions $f$ and $\chi$ such that the coefficients on the right hand side of \eqref{menergyeq} are positive. The left hand side can be controlled by the energy flux $\tilde{E}[\phi]$ by Lemma \ref{prop1}.
We thus end up with an integrated energy inequality in the whole spacetime. We now discuss this in detail. Take $$f=\beta-\frac{\beta}{(1+r)^{\alpha}},\quad \chi=r^{-1}f,\quad \beta=\frac{2}{\alpha}.$$ Notice that \[ \frac{(1+r)^\alpha-1}{r}\geq \frac{\alpha}{1+r}, \] which follows from the mean value theorem together with the bound $(1+\xi)^{\alpha-1}\geq (1+r)^{-1}$ for $0\leq\xi\leq r$. We have \begin{align*} &r^{-1}f+\frac 1 2 f'-\chi=\chi-r^{-1}f+\frac 1 2 f'=\frac{1}{(1+r)^{1+\alpha}},\\ &\chi-\frac 1 2 f'=\frac{\beta\left((1+r)^\alpha-1\right)}{r(1+r)^{\alpha}}-\frac{1}{(1+r)^{1+\alpha}}\geq\frac{1}{(1+r)^{\alpha+1}},\\ &-\frac 1 2\Box \chi=\frac{\alpha+1}{r(1+r)^{2+\alpha}}. \end{align*} Hence the energy inequalities \eqref{menergyeq}, \eqref{energyeqT} together with Lemma \ref{prop1} imply that \begin{equation} \label{ILEQ} \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}\leq 12\beta\tilde{E}[\phi](\tau_1) +13\beta\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}|F-N-L||\bar\partial\phi|d\textnormal{vol}. \end{equation} To proceed, we have to estimate the linear terms $N(\Phi, \phi)$, $L(\partial\phi)$. We first consider the case $\tau_2\geq\tau_1\geq t_0$. For $\tau\geq t_0$, notice that on $\Sigma_\tau$ \[ C_0(1+|x|)^{-\alpha}(1+(t-|x|)_{+})^{-\alpha}\leq C_0(1+\tau)^{-\alpha}\leq C_0(1+t_0)^{-\alpha}\leq\delta \alpha \] by the inequality \eqref{t0} (we have assumed this inequality in Proposition \ref{mainprop}). Hence under the conditions on the functions $L^{\mu}(t, x)$ in Proposition \ref{mainprop}, we always have \[ \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}|L(\partial\phi)||\bar\partial\phi|d\textnormal{vol}\leq \delta\alpha\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}(1+r)^{-1-\alpha}|\bar\partial\phi|^2d\textnormal{vol}. \] For $N(\Phi,\phi)$, we consider it inside and outside the cylinder $\{|x|\leq R_1\}$ separately. When $r=|x|\leq R_1$, the null structure of $N(\Phi, \phi)$ is not needed; we instead rely on the smallness of $\delta$.
Since $\Phi$ is a weak wave, condition $(iv)$ of Definition \ref{weakwave} implies that \begin{equation*} \label{Qin} |N|=|B^{\alpha\beta}\partial_\alpha\Phi\partial_\beta\phi|\leq A \delta\alpha (1+r)^{-1-\alpha}|\partial\phi|. \end{equation*} For $r\geq R_1$, the null structure of $N(\Phi, \phi)$ is of particular importance. By Lemma \ref{lnullQ}, it suffices to estimate the three terms $r^{-1}|\partial \Phi||\overline{\partial_v}\psi||\bar\partial\phi|$, $r^{-1}|\partial_v\Phi||\partial \psi||\bar\partial\phi|$, $r^{-1}|\partial\Phi||\phi||\bar\partial\phi|$. Without loss of generality, assume $R_1\geq 1$. For the second term, inequality \eqref{impsmall} shows that \begin{equation*} \label{pavPhipsi} |\partial_v\Phi||r^{-1}\partial\psi||\bar\partial\phi|\leq 2\delta \alpha(1+r)^{-1-\alpha}|\bar\partial\phi|^2. \end{equation*} On $\Sigma_\tau\cap \{|x|\geq R_1\geq 1\}$, for the first term, we have \begin{align*} \notag r^{-1}|\partial\Phi||\overline{\partial_v}\psi||\bar\partial\phi|&\leq C(1+r)^{-\frac{3}{2}}(1+\tau)^{-\frac 1 2-4\alpha}|\overline{\partial_v}\psi||\bar\partial\phi|\\ \label{paPhipavpsi} &\leq \delta\alpha(1+r)^{-1-\alpha}|\bar\partial\phi|^2+C(1+\tau)^{-1-8\alpha}r^{-2+\alpha}|\overline{\partial_v} \psi|^2. \end{align*} Here $C$ denotes a constant depending on $\alpha$, $R=t_0+R_0$, $A^{\alpha\beta}$, $B^{\alpha\beta}$, $C_0$, $\delta$. Similarly for the third term, we have \begin{equation*} \label{paPhiphi} r^{-1}|\partial\Phi||\phi||\bar\partial\phi|\leq \delta\alpha(1+r)^{-1-\alpha}|\bar\partial\phi|^2+C(1+\tau)^{-1-8\alpha}r^{-2+\alpha}|\phi|^2. \end{equation*} It remains to control $r^{-2+\alpha}|\phi|^2$. We use Hardy's inequality outside the cylinder $\{|x|\leq R\}$.
By Lemma~\ref{lem1}, we have \begin{equation} \label{phi2bd} \begin{split} \int_{\omega}|\psi|^2(\tau,v,\omega)d\omega&\leq C\int_{\omega}|\psi|^2(\tau, v_\tau, \omega)d\omega +C\left(\int_{v_\tau}^v\int_{\omega}|\partial_v\psi|d\omega dv\right)^2\\ &\leq C\tilde{E}[\phi](\tau) + C\int_{v_\tau}^v\int_{\omega}r^{1+2\alpha}|\partial_v\psi|^2d\omega dv\int_{v_\tau}^v r^{-1-2\alpha}dv\\ &\leq C\tilde{E}[\phi](\tau)+ C g(1+2\alpha, \tau),\quad (\tau, v, \omega)\in S_\tau, \end{split} \end{equation} where $v_\tau=\frac{R+\tau}{2}$, $v=\frac{r+t}{2}$. Hence for all $p\leq 1+2\alpha$ \begin{equation} \label{phi21a} \int_{S_\tau}r^{p-3\alpha}\phi^2dvd\omega=\int_{v_\tau}^{\infty}r^{p-2-3\alpha}\int_{\omega}|\psi|^2d\omega dv \leq C\tilde{E}[\phi](\tau)+ C g(1+2\alpha, \tau). \end{equation} On the other hand, Lemma ~\ref{lem2} shows that \begin{equation*} \int_{S_\tau}\phi^2 dvd\omega \leq C \tilde{E}[\phi](\tau). \end{equation*} Interpolating with \eqref{phi21a} for $p=1+2\alpha$, we derive \begin{equation*} \label{phi2a} \begin{split} \int_{S_\tau}r^\alpha\phi^2dvd\omega&\leq C \tilde{E}[\phi](\tau)^{1-\gamma}\left(\tilde{E}[\phi](\tau)+g(1+2\alpha, \tau)\right)^{\gamma}\\ &\leq C\tilde{E}[\phi](\tau)+C\tilde{E}[\phi](\tau)^{1-\gamma}g(1+2\alpha, \tau)^{\gamma}, \end{split} \end{equation*} where $\gamma=\frac{\alpha}{1-\alpha}$. This gives estimates for $\phi^2$ outside the cylinder $\{|x|\leq R \}$. In the region $R_1\leq r\leq R$, using Sobolev embedding and Lemma \ref{lem1}, we get \begin{equation*} \label{phi2bdin} \int_{\omega}\phi^2 d\omega\leq C \left.\int_{\omega}\phi^2d\omega\right|_{r=R}+C\int_{r\leq R}|\partial_r\phi|^2dx\leq C\tilde{E}[\phi](\tau).
\end{equation*} Therefore we can estimate $r^{-2+\alpha}|\phi|^2$ outside the cylinder $\{|x|\leq R_1\}$ as follows \begin{equation*} \begin{split} \int_{\{r\geq R_1\}\cap \Sigma_\tau}r^{-2+\alpha}|\phi|^2d\sigma&=\int_{R_1\leq r\leq R}r^{-2+\alpha}\phi^2dx+\int_{S_\tau}r^\alpha \phi^2dvd\omega\\ &\leq C\tilde{E}[\phi](\tau)+C\tilde{E}[\phi](\tau)^{1-\gamma}g(1+2\alpha, \tau)^{\gamma}. \end{split} \end{equation*} Inside the cylinder $\{|x|\leq R_1\}$, we use the assumption that $\partial\Phi$ is small. Hence combining the above estimates, we can bound the linear term $(|N(\Phi, \phi)|+|L(\partial\phi)|)|\bar\partial \phi|$ in \eqref{ILEQ} as follows \begin{align*} \beta \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}(|N|+|L|)|\bar\partial\phi|d\textnormal{vol}&\leq A\delta\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}+C\int_{\tau_1}^{\tau_2}\frac{\tilde{E}[\phi](\tau)}{(1+\tau)^{1+8\alpha}}d\tau+ C \bar G[1+8\alpha, \alpha]_{\tau_1}^{\tau_2}\\ &+C\left(\int_{\tau_1}^{\tau_2}(1+\tau)^{-1-2\alpha}\tilde{E}[\phi](\tau)d\tau\right)^{1-\gamma}\left(\int_{\tau_1}^{\tau_2}(1+\tau)^{-7+4\alpha}g(1+2\alpha,\tau) d\tau\right)^{\gamma}\\ &\leq A\delta\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}+C\int_{\tau_1}^{\tau_2}\frac{\tilde{E}[\phi](\tau)}{(1+\tau)^{1+2\alpha}}d\tau\\ &\qquad+ C G[2+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+C \bar G[1+2\alpha, \alpha]_{\tau_1}^{\tau_2}, \end{align*} where we used H\"older's inequality and Jensen's inequality \[ a^{1-\gamma}b^{\gamma}\leq (1-\gamma)a +\gamma b,\quad \forall a, b>0.
\] For the inhomogeneous term $|F||\bar \partial \phi|$ in \eqref{ILEQ}, we have \begin{equation*} \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}|F||\bar\partial\phi|d\textnormal{vol}\leq \delta\alpha \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}+ C D^\alpha[F]_{\tau_1}^{\tau_2}. \end{equation*} If we choose $$\delta_0=\frac{1}{100A},$$ then for all $\delta<\delta_0$, inequality \eqref{ILEQ} implies that \begin{equation*} \begin{split} \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}&\lesssim \tilde{E}[\phi](\tau_1)+ D^\alpha[F]_{\tau_1}^{\tau_2}+\int_{\tau_1}^{\tau_2}\frac{\tilde{E}[\phi](\tau)}{(1+\tau)^{1+2\alpha}}d\tau\\ &+ G[2+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+ \bar G[1+2\alpha, \alpha]_{\tau_1}^{\tau_2}. \end{split} \end{equation*} Similarly, the energy inequality ~\eqref{energyeqT} shows that \begin{equation*} \begin{split} \tilde{E}[\phi](\tau_2)&\lesssim \tilde{E}[\phi](\tau_1)+\int_{\tau_1}^{\tau_2} \frac{\tilde{E}[\phi](\tau)}{(1+\tau)^{1+2\alpha}}d\tau+ G[2+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+ \bar G[1+2\alpha, \alpha]_{\tau_1}^{\tau_2}+D^\alpha[F]_{\tau_1}^{\tau_2}. \end{split} \end{equation*} We use Gronwall's inequality to control the second term on the right hand side of the above inequality. We thus have \begin{equation} \label{EING} \tilde{E}[\phi](\tau_2)\lesssim \tilde{E}[\phi](\tau_1)+ G[2+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+ \bar G[1+2\alpha, \alpha]_{\tau_1}^{\tau_2}+D^\alpha[F]_{\tau_1}^{\tau_2}. \end{equation} Then the above integrated local energy inequality is improved to \begin{equation} \label{ILEG} \begin{split} \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}&\lesssim \tilde{E}[\phi](\tau_1)+ D^\alpha[F]_{\tau_1}^{\tau_2}+ G[2+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+ \bar G[1+2\alpha, \alpha]_{\tau_1}^{\tau_2}.
\end{split} \end{equation} We have shown \eqref{EING}, \eqref{ILEG} for all $\tau_2\geq \tau_1\geq t_0$. We claim that these two inequalities hold for all $\tau_2\geq \tau_1\geq 0$. In fact, when $\tau_1\leq \tau_2\leq t_0$, the finite speed of propagation for the wave equation ~\cite{sogge} shows that $\phi$ vanishes when $r\geq R=t_0+R_0$. Hence we can show \[ \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}(|N|+|L|)|\bar\partial\phi|dxd\tau\lesssim \int_{\tau_1}^{\tau_2} \int_{r\leq R}|\bar\partial\phi|^2dxd\tau\lesssim \int_{\tau_1}^{\tau_2}\tilde{E}[\phi](\tau)d\tau. \] When considering the energy inequality \eqref{energyeqT}, $\int_{\tau_1}^{\tau_2}\tilde{E}[\phi](\tau)d\tau$ can be absorbed by using Gronwall's inequality since $\tau_2\leq t_0$. Hence we can conclude \eqref{EING}, \eqref{ILEG} for all $0\leq\tau_1\leq \tau_2\leq t_0$. For the case $\tau_1\leq t_0\leq \tau_2$, split the interval $[\tau_1, \tau_2]$ into $[\tau_1, t_0]$ and $[t_0, \tau_2]$, on which we have two separate inequalities. Combining them, we get \eqref{EING}, \eqref{ILEG}. Therefore \eqref{EING}, \eqref{ILEG} hold for all $0\leq \tau_1\leq \tau_2$. \bigskip We end this section by making a remark. We have used the modified energy flux $\tilde{E}[\phi](\tau)$ instead of $E[\phi](\tau)$ to make the above argument rigorous. We claim that the inequalities \eqref{EING}, \eqref{ILEG} hold if we replace $\tilde{E}[\phi](\tau)$ with $E[\phi](\tau)$. In fact, it is sufficient to consider the case when \[ \tilde{E}[\phi](\tau_1)+D^\alpha[F]_{\tau_1}^{\tau_2}+ G[2+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+ \bar G[1+2\alpha, \alpha]_{\tau_1}^{\tau_2} \] is finite. By \eqref{EING}, this shows that $\tilde{E}[\phi](\tau)$ is finite for all $\tau\in[\tau_1, \tau_2]$. Thus Remark 1 shows that all the above statements hold if we replace $\tilde{E}[\phi](\tau)$ with $E[\phi](\tau)$ for $\tau\in[\tau_1, \tau_2]$.
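\bigskip For completeness, we include a sketch of the proof of Gronwall's inequality (Lemma \ref{lGronwall}) used above, under the simplifying assumption that $E(\tau)$ is absolutely continuous; the general case follows by approximation (see \cite{sogge}). Let
\[
h(\tau)=E(\tau)+C\int_{\tau_1}^{\tau}(1+s)^{-1-\beta}A(s)ds,
\]
so that $A(\tau)\leq h(\tau)$, $h(\tau_1)=E(\tau_1)$ and
\[
h'(\tau)\leq E'(\tau)+C(1+\tau)^{-1-\beta}h(\tau).
\]
Multiplying by the integrating factor $I(\tau)=\exp\left(-C\int_{\tau_1}^{\tau}(1+s)^{-1-\beta}ds\right)\leq 1$ and using $E'\geq 0$, we find $(Ih)'\leq I E'\leq E'$. Integrating from $\tau_1$ to $\tau$ gives $I(\tau)h(\tau)\leq E(\tau)$. Since
\[
C\int_{\tau_1}^{\tau}(1+s)^{-1-\beta}ds\leq C\beta^{-1}(1+\tau_1)^{-\beta},
\]
we conclude that $A(\tau)\leq h(\tau)\leq \exp\left(C\beta^{-1}(1+\tau_1)^{-\beta}\right)E(\tau)$.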
In the sequel, we no longer use the modified energy flux $\tilde{E}[\phi](\tau)$ for the reasons argued above. \subsection{p-weighted Energy inequality} We revisit the p-weighted energy inequalities developed by M. Dafermos and I. Rodnianski in ~\cite{newapp}. Rewrite the equation \eqref{LWAVEEQ} in null coordinates \begin{equation} \label{waveqpsi} -\partial_u \partial_v \psi+\mbox{$\Delta \mkern-13mu /$\,} \psi=r(F-N-L),\quad \psi:=r\phi, \end{equation} where $\mbox{$\Delta \mkern-13mu /$\,}$ denotes the Laplacian on the sphere with radius $r$. Multiplying the equation by $r^p \partial_v\psi$ and integrating by parts in the region bounded by the two null hypersurfaces $S_{\tau_1}, S_{\tau_2}$ and the hypersurface $\{r=R\}$, we obtain \begin{align} \notag &\int_{S_{\tau_2}} r^p (\partial_v\psi)^2 dvd\omega +2\int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{p+1}(F-N-L)\partial_v\psi dvd\tau d\omega\\ \notag & +\int_{\tau_1}^{\tau_2}\int_{S_\tau} r^{p-1} \left (p(\partial_v\psi)^2 + (2-p) |\mbox{$\nabla \mkern-13mu /$\,}\psi|^2\right)dvd\tau d\omega +\int_{\mathcal I_{\tau_1}^{\tau_2}} r^p |\mbox{$\nabla \mkern-13mu /$\,}\psi|^2 du d\omega\\ \label{pWE} =& \int_{S_{\tau_1}}r^p (\partial_v\psi)^2 dvd\omega +\int_{\tau_1}^{\tau_2} r^p \left (|\mbox{$\nabla \mkern-13mu /$\,}\psi|^2- (\partial_v\psi)^2\right)d\omega d\tau |_{r=R}. \end{align} We claim that we can estimate the boundary terms on $r=R$ as follows \begin{equation} \begin{split} &\left|\int_{\tau_1}^{\tau_2} r^p \left (|\mbox{$\nabla \mkern-13mu /$\,}\psi|^2- (\partial_v\psi)^2\right)d\omega d\tau \right|_{r=R}\\ &\lesssim E[\phi](\tau_1)+ G[2+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+ \bar G[1+2\alpha, \alpha]_{\tau_1}^{\tau_2}+D^\alpha[F]_{\tau_1}^{\tau_2}. \end{split} \label{bdestR} \end{equation} Since $R$ is a fixed constant, it suffices to show \eqref{bdestR} for $p=0$. Thus take $p=0$ in the identity \eqref{pWE}.
The energy terms on the null hypersurfaces $S_{\tau_1}$, $S_{\tau_2}$, $\mathcal{I}_{\tau_1}^{\tau_2}$ can be bounded by $\tilde{E}[\phi](\tau_2)+\tilde{E}[\phi](\tau_1)$, which can be estimated by using the energy inequality \eqref{EING}. We use the improved integrated local energy estimates for $\mbox{$\nabla \mkern-13mu /$\,}\phi$ to bound the third term in \eqref{pWE}. Recall that when $r\geq R$, we in fact have the improved lower bound \[ \frac{1}{r}\lesssim \frac{\beta\left((1+r)^\alpha-1\right)}{r(1+r)^{\alpha}}-\frac{1}{(1+r)^{1+\alpha}}=\chi-\frac 1 2 f' \] instead of the bound $\frac{1}{(1+r)^{1+\alpha}}$ used in \eqref{menergyeq} to obtain \eqref{ILEQ}. Thus we can actually show that \[ \int_{\tau_1}^{\tau_2}\int_{S_\tau} r^{-1} |\mbox{$\nabla \mkern-13mu /$\,}\psi|^2dvd\omega d\tau=\int_{\tau_1}^{\tau_2}\int_{S_\tau}\frac{|\mbox{$\nabla \mkern-13mu /$\,}\phi|^2}{r}d\textnormal{vol}\lesssim E[\phi](\tau_1)+\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}|F-N-L||\bar{\partial}\phi|d\textnormal{vol}. \] For the inhomogeneous term, notice that \[ \int_{\tau_1}^{\tau_2}\int_{S_\tau}|r(F-N-L)\partial_v\psi|dvd\omega d\tau\lesssim \int_{\tau_1}^{\tau_2}\int_{S_\tau} |F-N-L||\bar\partial\phi|d\textnormal{vol}. \] We have already shown in the previous subsection that this term can be bounded by the right hand side of \eqref{bdestR}. Thus the inequality \eqref{bdestR} follows. \bigskip Now, to make use of the identity \eqref{pWE}, we need to control the inhomogeneous term $r^{p+1}(F-N-L)\partial_v\psi$ as all the other terms have a positive sign or are bounded. Due to the different structures of $F$, $N$, $L$, we discuss them separately. The most difficult term is the linear term $r^{p+1}N(\Phi, \phi)\partial_v\psi$ satisfying the null condition.
For this term, by Lemma ~\ref{lnullQ}, it suffices to estimate the following three terms \[ r^p|\partial\Phi||\overline{\partial_v}\psi||\partial_v\psi|,\quad r^p|\partial\Phi||\phi||\partial_v\psi|, \quad r^p|\partial_v\Phi||\partial\psi||\partial_v\psi|. \] In application, $p\in (0, 2)$. In particular, the coefficients $p$, $2-p$ in \eqref{pWE} are positive. From the decay assumptions on $\Phi$ (see Definition \ref{weakwave}), we estimate the first term as follows \begin{equation} \label{Phibarpsi} \begin{split} 2r^p|\partial \Phi||\overline{\partial_v}\psi||\partial_v\psi|&\leq2 r^{p-\frac 1 2}(1+(t-|x|)_+)^{-\frac 1 2-4\alpha}|\overline{\partial_v}\psi||\partial_v\psi|\\ &\leq\epsilon_1 r^{p-1}|\overline{\partial_v}\psi|^2+\frac{C}{\epsilon_1}r^p(1+\tau)^{-1-8\alpha}|\partial_v\psi|^2,\quad \forall \epsilon_1>0. \end{split} \end{equation} The first term will be absorbed if $p>\epsilon_1$, $2-p>\epsilon_1$, while the second term will be controlled by using Gronwall's inequality. Similarly for the second term $r^p|\partial\Phi||\phi||\partial_v\psi|$, we can show \begin{equation*} 2r^p|\partial \Phi||\phi||\partial_v\psi|\lesssim r^{p-1+3\alpha}(1+\tau)^{-6\alpha}|\partial_v\psi|^2+r^{p-3\alpha}(1+\tau)^{-1-2\alpha}|\phi|^2. \end{equation*} We use interpolation to further bound the first term on the right hand side of the above inequality. Notice that \[ p\cdot \frac{5\alpha}{1+2\alpha}\geq p-1+3\alpha,\quad p\leq 1+2\alpha. \] Using H\"older's inequality and Jensen's inequality, we have \[ r^{p-1+3\alpha}(1+\tau)^{-6\alpha}\leq \left(r^p(1+\tau)^{-1-\alpha}\right)^{\frac{5\alpha}{1+2\alpha}}\cdot \left((1+\tau)^{-\alpha}\right)^{1-\frac{5\alpha}{1+2\alpha}}\leq \tau^{-\alpha}+r^p(1+\tau)^{-1-\alpha}. \] We use estimate \eqref{phi21a} to bound $r^{p-3\alpha}\phi^2$.
Summarizing, we can show that for $p\leq 1+2\alpha$ \begin{equation} \label{Phiphip} \begin{split} &\int_{\tau_1}^{\tau_2}\int_{S_\tau}r^p|\partial \Phi||\phi||\partial_v\psi|dvd\omega d\tau\\ &\lesssim \tau_1^{-\alpha}G[0, 0]_{\tau_1}^{\tau_2}+G[1+\alpha, p]_{\tau_1}^{\tau_2}+ G[1+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+\int_{\tau_1}^{\tau_2}\frac{\tilde{E}[\phi](\tau)}{(1+\tau)^{1+2\alpha}}d\tau. \end{split} \end{equation} It remains to handle the third term $r^p|\partial_v\Phi||\partial\psi||\partial_v\psi|$. We estimate this term and the linear term $r^{p+1}L(\partial\phi) \partial_v\psi$ together due to the similar assumptions on $\partial_v\Phi$, $L^\mu(t, x)$. The difficulty in estimating these two terms is that we cannot use the Cauchy-Schwarz inequality as we did previously. However, notice that the integrated local energy is expected to decay in $\tau$ (like $(1+\tau)^{-1-\alpha}$). We can put some positive weights of $\tau$ in the integrated local energy such that it is still bounded. To start with, observe that when $|x|\geq R\geq 1$, we have \[ r^p|\partial_v\Phi||\partial\psi|+r^{p+1}|L(\partial\phi)|\lesssim r^{p-1-3\alpha}|\partial\psi|+r^{p+1-1-3\alpha}|\partial\phi|\lesssim r^{p-3\alpha}|\bar\partial \phi|.
\] Thus we can bound \begin{equation*} \begin{split} &\int_{\tau_1}^{\tau_2}\int_{S_\tau}r^p|\partial_v \Phi||\partial\psi||\partial_v\psi|+r^{p+1}|L(\partial\phi)||\partial_v\psi|dvd\omega d\tau \lesssim\int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{p-3\alpha}|\bar\partial\phi||\partial_v\psi|dvd\omega d\tau\\ &\lesssim\left(\int_{\tau_1}^{\tau_2}\tau^{1-\alpha}\int_{S_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}\right)^\frac 1 2\left(G[1-\alpha, 2p-1-5\alpha]_{\tau_1}^{\tau_2}\right)^\frac 1 2\\ &\lesssim \frac{1}{\epsilon_2}\int_{\tau_1}^{\tau_2}\tau^{1-\alpha}\int_{S_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}+\epsilon_2 \left(G[0, 0]_{\tau_1}^{\tau_2}\right)^{\frac{2\alpha}{1+\alpha}} \left(G[1+\alpha, p]_{\tau_1}^{\tau_2}\right)^{\frac{1-\alpha}{1+\alpha}}\\ &\lesssim \frac{1}{\epsilon_2}\int_{\tau_1}^{\tau_2}\tau^{1-\alpha}\int_{S_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}+\epsilon_2 \left(G[0, 0]_{\tau_1}^{\tau_2} +G[1+\alpha, p]_{\tau_1}^{\tau_2}\right) \end{split} \end{equation*} for every positive number $\epsilon_2\leq 1$. Here we have used the fact \[ 2p-1-5\alpha-\frac{1-\alpha}{1+\alpha}p\leq 0, \quad p\leq 1+2\alpha. \] We now have to show that the first term on the right hand side is bounded. We rely on the following lemma. \begin{lem} \label{lweightILE} Suppose $f(\tau)$ is smooth. Then for any $\beta\neq 0$, we have the identity \begin{equation*} \int_{\tau_1}^{\tau_2}s^\beta f(s)ds=\beta\int_{\tau_1}^{\tau_2}\tau^{\beta-1}\int_{\tau}^{\tau_2}f(s)ds d\tau+\tau_1^{\beta}\int_{\tau_1}^{\tau_2}f(s)ds. \end{equation*} \end{lem} \begin{proof} Let \[ F(\tau)=\int_{\tau}^{\tau_2}f(s)ds. \] Then $F'(\tau)=-f(\tau)$, and integration by parts gives the lemma. \end{proof} Apply the lemma to $\beta=1-\alpha$, $f(\tau)=\int_{S_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\sigma$.
Then the integrated local energy inequality \eqref{ILEG} implies that \begin{equation*} \begin{split} \int_{\tau_1}^{\tau_2}\tau^{1-\alpha}\int_{S_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}&\lesssim \tau_1^{-\alpha}\int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau+\tau_1^{1-\alpha}E[\phi](\tau_1)+\int_{\tau_1}^{\tau_2}\tau^{-\alpha}D^\alpha[F]_{\tau}^{\tau_2}d\tau\\ &\qquad+G[1+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+\bar G[2\alpha, \alpha]_{\tau_1}^{\tau_2}+\tau_1^{1-\alpha} D^\alpha[F]_{\tau_1}^{\tau_2}. \end{split} \end{equation*} Since in application only the two p-weighted energy inequalities associated with $p=1$ and $p=1+2\alpha$ are considered, we use interpolation to bound $\bar G[2\alpha, \alpha]$ \[ \bar G[2\alpha, \alpha]_{\tau_1}^{\tau_2}\lesssim \epsilon_2\epsilon_3 \bar G[2\alpha, 2\alpha]_{\tau_1}^{\tau_2}+\frac{1}{\epsilon_2\epsilon_3}\tau_1^{-2\alpha}\bar G[0, 0]_{\tau_1}^{\tau_2} \] for all positive $\epsilon_3$, where $\epsilon_2$ is the constant that appeared before. Our ultimate goal is to derive the decay of the energy flux $E[\phi](\tau)$ on $\Sigma_\tau$. The almost energy flux $\bar{g}(0, \tau)$ on $S_\tau$ is related to $E[\phi](\tau)$ by the following lemma. \begin{lem} \label{lgbarE} \begin{equation*} E[\phi](\tau)\lesssim \bar g(0, \tau)+2\int_{r\leq R}|\partial\phi|^2+\phi^2 dx\lesssim \tilde{E}[\phi](\tau). \end{equation*} \end{lem} \begin{proof} In fact note that \begin{equation*} \begin{split} \bar g(0, \tau)+2\int_{r\leq R}|\partial\phi|^2+\phi^2 dx&=\int_{S_\tau}r^2(\partial_v\phi)^2+\partial_v(r\phi^2) +r^2|\mbox{$\nabla \mkern-13mu /$\,}\phi|^2dvd\omega +2\int_{r\leq R}|\partial\phi|^2+\phi^2 dx\\ &=E[\phi](\tau)+\left.\int_{\omega}r\phi^2d\omega\right|_{v_\tau}^{\infty}+\int_{r\leq R}|\partial\phi|^2+2\phi^2dx. \end{split} \end{equation*} Lemma~\ref{lem1} and Lemma~\ref{lem2} imply that \[ \bar g(0, \tau)+2\int_{r\leq R}|\partial\phi|^2+\phi^2 dx\lesssim \tilde{E}[\phi](\tau).
\] To prove the other side of the inequality, it suffices to show that \[ \left.\int_{\omega}r\phi^2(\tau, R, \omega)d\omega\right|_{r=R}\leq \int_{r\leq R}|\partial_r\phi|^2+2\phi^2 dx. \] Without loss of generality, assume $R\geq 2$. Notice that \begin{equation*} R^3\int_{\omega}\phi^2(\tau, R, \omega)d\omega=\int_{0}^{R}\int_{\omega}\partial_r(r^3\phi^2)d\omega dr\leq 3\int_{r\leq R} \phi^2dx + R\int_{r\leq R}|\partial_r\phi|^2+\phi^2dx. \end{equation*} Hence \[ R\int_{\omega}\phi^2(\tau, R, \omega)d\omega\leq \int_{r\leq R}|\partial_r\phi|^2+2\phi^2 dx\leq \int_{r\leq R}|\partial\phi|^2+2\phi^2 dx. \] Thus the lemma holds. \end{proof} Since $G[0, 0]_{\tau_1}^{\tau_2}\leq \bar G[0, 0]_{\tau_1}^{\tau_2}$, using Lemma \ref{lgbarE}, we can control $\bar{G}[0, 0]$ in terms of the energy flux \[ G[0, 0]_{\tau_1}^{\tau_2}\leq\bar G[0, 0]_{\tau_1}^{\tau_2}\leq \int_{\tau_1}^{\tau_2}\tilde{E}[\phi](\tau)d\tau. \] Summarizing, we can show that \begin{equation} \begin{split} &\int_{\tau_1}^{\tau_2}\int_{S_\tau}r^p|\partial_v \Phi||\partial\psi||\partial_v\psi|+r^{p+1}|L(\partial\phi)||\partial_v\psi|dvd\omega d\tau\\ &\lesssim \left(\epsilon_2+\frac{\tau_1^{-\alpha}}{\epsilon_2}+\frac{\tau_1^{-2\alpha}}{\epsilon_2^2\epsilon_3}\right)\int_{\tau_1}^{\tau_2} E[\phi](\tau)d\tau+\tau_1^{1-\alpha}E[\phi](\tau_1)+\int_{\tau_1}^{\tau_2}\tau^{-\alpha}D^\alpha[F]_{\tau}^{\tau_2}d\tau\\ &\qquad+\tau_1^{1-\alpha} D^\alpha[F]_{\tau_1}^{\tau_2}+\frac{1}{\epsilon_2}G[1+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+ \epsilon_3\bar G[2\alpha, 2\alpha]_{\tau_1}^{\tau_2}+G[1+\alpha, p]_{\tau_1}^{\tau_2}. \end{split} \label{ppavPhi} \end{equation} Here we used the argument at the end of the previous subsection to replace $\tilde{E}[\phi](\tau)$ with $E[\phi](\tau)$. We must remark here that the implicit constants before the other terms on the right hand side of \eqref{ppavPhi} may also depend on $\epsilon_i$.
However, since $\epsilon_i$ will be chosen to depend only on $R$, $\alpha$, $B^{\alpha\beta}$, $C_0$, the omitted dependence will not affect the argument in the sequel. \bigskip Finally, we treat the inhomogeneous term $r^{p+1}F\cdot \partial_v \psi$ in \eqref{pWE}. Since $D^{2\alpha}[F]_{\tau_1}^{\tau_2}$ is expected to decay in $\tau$, we put some positive weights of $\tau$ in $D^{2\alpha}[F]_{\tau}^{\tau_2}$ and estimate it by using Lemma \ref{lweightILE} applied to $\beta=p-\frac{3}{2}\alpha$, $f(\tau)=\int_{S_\tau}r^{1+2\alpha}|F|^2d\sigma$. We can show that \begin{equation} \label{pWEFi} \begin{split} &\int_{\tau_1}^{\tau_2}\int_{S_\tau}2r^{p+1}F\cdot\partial_v\psi dvd\tau d\omega\\ &\leq \epsilon_4 G[p-\frac{3}{2}\alpha, 2p-1-2\alpha]_{\tau_1}^{\tau_2} + \frac{1}{\epsilon_4}\int_{\tau_1}^{\tau_2}(1+\tau)^{p-\frac{3}{2} \alpha}\int_{S_\tau}|F|^2 r^{1+2\alpha}d\textnormal{vol}\\ &\lesssim \epsilon_4 G[p-\frac{3}{2}\alpha, 2p-1-2\alpha]_{\tau_1}^{\tau_2}+\frac{1}{\epsilon_4}\int_{\tau_1}^{\tau_2}\tau^{p-1-\frac{3}{2}\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau +\frac{1}{\epsilon_4}\tau_1^{p-\frac{3}{2}\alpha}D^{2\alpha}[F]_{\tau_1}^{\tau_2} \end{split} \end{equation} for any $0<\epsilon_4\leq 1$ and $\tau_2\geq\tau_1\geq t_0$. \subsection{Proof of Proposition \ref{mainprop}} Having controlled $\int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{p+1}(F-N-L)\partial_v\psi dvd\omega d\tau$, we are now able to prove Proposition \ref{mainprop}. First let $$\epsilon_1= \frac{1-2\alpha}{2A},$$ with $A$ the constant defined above. Hence for $p=1$ or $1+2\alpha$, the third term in \eqref{pWE} dominates the first term on the right hand side of \eqref{Phibarpsi}. Set $p=1+2\alpha$ in \eqref{pWE} and $\epsilon_4=1$ in \eqref{pWEFi}.
Combining the estimates ~\eqref{bdestR}, ~\eqref{Phibarpsi}, ~\eqref{Phiphip}, ~\eqref{ppavPhi}, we infer that \begin{equation*} \begin{split} g(1+2\alpha, \tau_2)+ \bar G[0, 2\alpha]_{\tau_1}^{\tau_2}&\lesssim g(1+2\alpha, \tau_1)+\left(\epsilon_2+\frac{\tau_1^{-\alpha}}{\epsilon_2}+\frac{\tau_1^{-2\alpha}}{\epsilon_2^2\epsilon_3}\right)\int_{\tau_1}^{\tau_2} E[\phi](\tau)d\tau+\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau\\ &+\tau_1^{1-\alpha}E[\phi](\tau_1)+\tau_1^{1+\frac 1 2\alpha} D^{2\alpha}[F]_{\tau_1}^{\tau_2}+ \epsilon_3\bar G[0, 2\alpha]_{\tau_1}^{\tau_2} +\frac{1}{\epsilon_2}G[1+\frac 1 2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}. \end{split} \end{equation*} Now suppose that the implicit constant before $\epsilon_3 \bar G[0, 2\alpha]$ is $C_1$, which is independent of $\epsilon_2$, $\epsilon_3$. Take $$\epsilon_3=\frac{1}{2C_1}.$$ We remark here that we may choose different $\epsilon_i$ for different values of $p$. In particular, we conclude that $\epsilon_3 \bar G[0, 2\alpha]$ can be absorbed by the left hand side. Then we apply Gronwall's inequality (Lemma \ref{lGronwall}). This allows us to control the last term $\frac{1}{\epsilon_2}G[1+\frac 1 2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}$ and conclude that \begin{equation} \label{pWE1abarG} \begin{split} g(1+2\alpha, \tau_2)+ \bar G[0, 2\alpha]_{\tau_1}^{\tau_2}&\lesssim g(1+2\alpha, \tau_1) +\tau_1^{1+\frac 1 2\alpha} D^{2\alpha}[F]_{\tau_1}^{\tau_2}+\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau\\ &+\exp\left(\frac{2\tau_1^{-\frac 1 2\alpha}}{\alpha\epsilon_2}\right)\left(\epsilon_2+\frac{\tau_1^{-\alpha}}{\epsilon_2}+\frac{\tau_1^{-2\alpha}}{\epsilon_2^2}\right)\int_{\tau_1}^{\tau_2} E[\phi](\tau)d\tau+\tau_1^{1-\alpha}E[\phi](\tau_1). \end{split} \end{equation} The integral of the energy on the right hand side can be estimated once we combine \eqref{pWE1abarG} with the p-weighted energy inequality for $p=1$. \bigskip Now take $p=1$ in \eqref{pWE}. 
First, we use interpolation to estimate the first term $G[1-\frac{3}{2}\alpha, 1-2\alpha]_{\tau_1}^{\tau_2}$ on the right hand side of ~\eqref{pWEFi} \[ G[1-\frac{3}{2}\alpha, 1-2\alpha]_{\tau_1}^{\tau_2}\leq\left(G[1+\frac 1 2 \alpha, 1]_{\tau_1}^{\tau_2}\right)^{1-2\alpha} \left(G[0, 0]_{\tau_1}^{\tau_2}\right)^{2\alpha}\leq G[1+\frac 1 2 \alpha, 1]_{\tau_1}^{\tau_2}+G[0, 0]_{\tau_1}^{\tau_2}. \] To retrieve the full energy $E[\phi](\tau)$ from $\bar{g}(0, \tau)$, by Lemma \ref{lgbarE}, add \[ 2\int_{\tau_1}^{\tau_2}\int_{r\leq R} |\partial\phi|^2+\phi^2dxd\tau \] to both sides of ~\eqref{pWE}. Then the integrated local energy estimate ~\eqref{ILEG} restricted to the region $r\leq R$ and Gronwall's inequality imply that \begin{align*} g(1, \tau_2)+ \int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau&\lesssim g(1, \tau_1)+\left(\epsilon_2+\frac{\tau_1^{-\alpha}}{\epsilon_2}+\epsilon_4+\frac{\tau_1^{-2\alpha}}{\epsilon_2^2}\right)\int_{\tau_1}^{\tau_2} E[\phi](\tau)d\tau+\int_{\tau_1}^{\tau_2}\tau^{-\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau\\ &+\tau_1^{1-\alpha}E[\phi](\tau_1)+\tau_1^{1-\alpha} D^{2\alpha}[F]_{\tau_1}^{\tau_2}+\tau_1^{-2\alpha} \bar G[0, 2\alpha]_{\tau_1}^{\tau_2}+\frac{1}{\epsilon_2}G[1+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}, \end{align*} where we choose $\epsilon_3=1$. Assume the implicit constant before $\int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau$ in the above inequality is $C_3$, which is independent of $\epsilon_2$ and $\epsilon_4$. 
Then take \[\epsilon_4=\frac{1}{2C_3}.\] We get \begin{align} \notag &g(1, \tau_2)+ \int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau\lesssim g(1, \tau_1)+\left(\epsilon_2+\frac{\tau_1^{-\alpha}}{\epsilon_2}+\frac{\tau_1^{-2\alpha}}{\epsilon_2^2}\right)\int_{\tau_1}^{\tau_2} E[\phi](\tau)d\tau+\int_{\tau_1}^{\tau_2}\tau^{-\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau\\ \label{pWE1barG} &\qquad\qquad\qquad+\tau_1^{1-\alpha}E[\phi](\tau_1)+\tau_1^{1-\alpha} D^{2\alpha}[F]_{\tau_1}^{\tau_2}+\tau_1^{-2\alpha} \bar G[0, 2\alpha]_{\tau_1}^{\tau_2}+\frac{1}{\epsilon_2}G[1+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}. \end{align} Now let $C_4$ be the implicit constant before $\int_{\tau_1}^{\tau_2} E[\phi](\tau)d\tau$ in both ~\eqref{pWE1abarG} and ~\eqref{pWE1barG}, which is independent of $\epsilon_2$. Then let \[ \epsilon_2=\frac{1}{4C_4} \] and choose a constant $T_0\geq t_0$ such that \[ T_0^{-\frac 1 2\alpha}\leq\frac{\alpha}{2}\epsilon_2. \] In particular, for $\tau_1\geq T_0$, we have \[ C_4\left(\epsilon_2+\frac{\tau_1^{-\alpha}}{\epsilon_2}+\frac{\tau_1^{-2\alpha}}{\epsilon_2^2}\right) \leq C_4\left(\epsilon_2+\frac{T_0^{-\alpha}}{\epsilon_2}+\frac{T_0^{-2\alpha}}{\epsilon_2^2}\right)\leq \frac 1 2. \] We now combine ~\eqref{pWE1abarG} and ~\eqref{pWE1barG} to control $\int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau$. For $\tau_2\geq \tau_1\geq T_0$, we first estimate $\bar G[0, 2\alpha]_{\tau_1}^{\tau_2}$, $G[1+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}$ in ~\eqref{pWE1barG} by using ~\eqref{pWE1abarG}. Then, combining them all, we can show that the coefficient of $\int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau$ on the right hand side can be bounded by \[ \frac 1 2+C_4T_0^{-2\alpha} \frac{e}{2}+\frac{C_4}{\epsilon_2}\frac{T_0^{-2\alpha}}{2\alpha}\frac{e}{2}<\frac{3}{4}. 
\] Thus $\int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau$ can be absorbed and we can conclude that \begin{align*} g(1, \tau_2)+ \int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau&\lesssim g(1, \tau_1)+\tau_1^{1-\alpha} D^{2\alpha}[F]_{\tau_1}^{\tau_2}+\tau_1^{1-\alpha}E[\phi](\tau_1)+\int_{\tau_1}^{\tau_2}\tau^{-\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau\\ &\qquad+\tau_1^{-2\alpha}\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau+\tau_1^{-2\alpha}g(1+2\alpha, \tau_1), \end{align*} which, in turn, improves ~\eqref{pWE1abarG} to \begin{equation*} \begin{split} g(1+2\alpha, \tau_2)+ \bar G[0, 2\alpha]_{\tau_1}^{\tau_2}&\lesssim g(1+2\alpha, \tau_1)+\tau_1^{1-\alpha}E[\phi](\tau_1)+ \tau_1^{1+\frac 1 2\alpha} D^{2\alpha}[F]_{\tau_1}^{\tau_2}+\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau. \end{split} \end{equation*} This proves ~\eqref{pWE1} and ~\eqref{pWE1a} for all $\tau_2\geq\tau_1\geq T_0$. \bigskip For $t_0\leq \tau_1\leq \tau_2\leq T_0$, we make use of the boundedness of $\tau$. Let $\epsilon_2=1$. Inequality ~\eqref{pWE1abarG} shows that \begin{equation*} g(1+2\alpha, \tau_2)+\bar G[0, 2\alpha]_{\tau_1}^{\tau_2}\lesssim g(1+2\alpha, \tau_1)+\int_{\tau_1}^{\tau_2} E[\phi] (\tau)d\tau+ D^{2\alpha}[F]_{\tau_1}^{\tau_2}+E[\phi](\tau_1). \end{equation*} By Lemma ~\ref{lgbarE}, we have \begin{equation*} \bar G[1+2\alpha, \alpha]_{\tau_1}^{\tau_2}\lesssim \bar G[0, 2\alpha]_{\tau_1}^{\tau_2}+ \bar G[0,0]_{\tau_1}^{\tau_2} \lesssim g(1+2\alpha, \tau_1)+\int_{\tau_1}^{\tau_2} E[\phi](\tau)d\tau+ D^{2\alpha}[F]_{\tau_1}^{\tau_2}+E[\phi](\tau_1). \end{equation*} Combining with the energy inequality ~\eqref{EING}, we obtain \[ E[\phi](\tau_2)\lesssim g(1+2\alpha, \tau_1)+\int_{\tau_1}^{\tau_2} E[\phi](\tau)d\tau+ D^{2\alpha}[F]_{\tau_1}^{\tau_2}+E[\phi](\tau_1). 
\] Thus Gronwall's inequality indicates that \begin{equation*} \int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau\lesssim g(1+2\alpha, \tau_1)+ D^{2\alpha}[F]_{\tau_1}^{\tau_2}+E[\phi](\tau_1) \end{equation*} for $\tau_1\leq \tau_2\leq T_0$. Hence \eqref{pWE1} and \eqref{pWE1a} follow from ~\eqref{pWE1abarG}, ~\eqref{pWE1barG}. \bigskip For $\tau_1\leq \tau_2\leq t_0$, the finite speed of propagation for the wave equation ~\cite{sogge} shows that $g(p, \tau)$ vanishes. Thus ~\eqref{pWE1}, ~\eqref{pWE1a} hold. For general $\tau_2\geq \tau_1\geq 0$, divide the interval $[\tau_1, \tau_2]$ into three (possibly two) such intervals: $[\tau_1, t_0]$, $[t_0, T_0]$ and $[T_0, \tau_2]$. Then ~\eqref{pWE1}, ~\eqref{pWE1a} follow by combining those three (or two) inequalities. This completes the proof for ~\eqref{pWE1}, ~\eqref{pWE1a}. \bigskip Having proven ~\eqref{pWE1} and ~\eqref{pWE1a}, we can improve the energy inequality ~\eqref{EING} and the integrated local energy inequality ~\eqref{ILEG} as follows: Integrate \eqref{pWE1a} from $\tau_1$ to $\tau_2$. We obtain \begin{align*} &G[2+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+\bar G[1+2\alpha, \alpha]_{\tau_1}^{\tau_2}\leq G[2+2\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}+(1+\tau_1)^{-1-2\alpha} \bar G[0, 2\alpha]_{\tau_1}^{\tau_2}\\ &\lesssim E[\phi](\tau_1)+\tau_1^{-\alpha}D^{2\alpha}[F]_{\tau_1}^{\tau_2}+(1+\tau_1)^{-1-2\alpha} \left(g(1+2\alpha, \tau_2)+\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha}D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau\right), \end{align*} which, together with \eqref{EING}, ~\eqref{ILEG}, implies ~\eqref{ILE0}, ~\eqref{eb}. This finishes the proof of Proposition \ref{mainprop}. \bigskip To show Corollary ~\ref{D2aQp}, take $p=1+2\alpha$ in ~\eqref{phi21a}. Interpolation shows that \begin{equation*} \int_{S_\tau}r^{2\alpha}\phi^2dvd\omega\lesssim E[\phi](\tau)+E[\phi](\tau)^{1-\frac{2\alpha}{1-\alpha}}g(1+2\alpha, \tau)^{\frac{2\alpha}{1-\alpha}}. 
\end{equation*} Using Jensen's inequality, we have \begin{align*} \int_{\tau_1}^{\tau_2}\int_{S_\tau}\frac{r^{2\alpha}\phi^2}{(1+\tau)^{1+8\alpha}}dvd\omega d\tau& \lesssim \int_{\tau_1}^{\tau_2}\frac{E[\phi](\tau)}{(1+\tau)^{1+2\alpha}}d\tau+G[4-\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}. \end{align*} Therefore for $\tau_2\geq \tau_1\geq t_0$, Lemma ~\ref{nullQ} and Proposition ~\ref{mainprop} imply that \begin{align*} D^{2\alpha}[N]_{\tau_1}^{\tau_2}&=\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}|B^{\alpha\beta}\partial_\alpha\Phi\cdot\partial_\beta\phi|^2(1+r)^{1+2\alpha}d\textnormal{vol}\\ &\lesssim\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}+\int_{\tau_1}^{\tau_2}\int_{S_\tau}|rN|^2 r^{1+2\alpha}dvd\omega d\tau\\ &\lesssim \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}+\int_{\tau_1}^{\tau_2}\int_{S_\tau}\frac{r^{2\alpha}|\overline{\partial_v}\psi|^2}{(1+\tau)^{1+8\alpha}}+\frac{|\partial\psi|^2}{(1+r)^{1+4\alpha}}+\frac{r^{2\alpha}\phi^2}{(1+\tau)^{1+8\alpha}} dvd\omega d\tau\\ &\lesssim \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}+\bar G[1+8\alpha, 2\alpha]_{\tau_1}^{\tau_2}+\int_{\tau_1}^{\tau_2}\frac{E[\phi](\tau)}{(1+\tau)^{1+2\alpha}}d\tau+G[2+\alpha, 1+2\alpha]_{\tau_1}^{\tau_2}\\ &\lesssim E[\phi](\tau_1)+D^{\alpha}[F]_{\tau_1}^{\tau_2}+(1+\tau_1)^{-\alpha}D^{2\alpha}[F]_{\tau_1}^{\tau_2}\\ &\qquad+(1+\tau_1)^{-1-2\alpha}\left(g(1+2\alpha, \tau_2)+\int_{\tau_1}^{\tau_2}\tau^{\frac 1 2\alpha} D^{2\alpha}[F]_{\tau}^{\tau_2}d\tau\right). \end{align*} For $\tau_1\leq \tau_2\leq t_0$, notice that \[ D^{2\alpha}[N]_{\tau_1}^{\tau_2}\lesssim\int_{\tau_1}^{\tau_2}E[\phi](\tau)d\tau. 
\] For the linear terms $L(\partial\phi)$, we can show \begin{align*} D^{2\alpha}[L]_{\tau_1}^{\tau_2}=\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}|L(\partial\phi)|^2(1+r)^{1+2\alpha}d\textnormal{vol}&\lesssim \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau} (1+r)^{-2-6\alpha}|\partial\phi|^2(1+r)^{1+2\alpha}d\textnormal{vol}\\ &\lesssim \int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\partial\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}. \end{align*} The corollary then follows from \eqref{ILE0} and ~\eqref{eb}. \section{Decay of the Solution} Under appropriate assumptions on the inhomogeneous term $F$, Proposition \ref{mainprop} leads to the decay of the energy flux $E[\phi](\tau)$. After commuting the equation with the vector fields $Z$, we obtain the pointwise decay of the solution outside the cylinder $\{(t, x)| |x|\leq R\}$ by using Sobolev embedding and inside the cylinder by using elliptic estimates. \begin{prop} \label{energydecay} Suppose there is a constant $C_1$ such that \[ D^{2\alpha}[F]_{\tau_1}^{\tau_2}\leq C_1(1+\tau_1)^{-1-\alpha}, \qquad \forall \tau_2\geq \tau_1\geq 0. \] Then for the solution $\phi$ of the linear wave equation \eqref{LWAVEEQ}, we have the energy flux decay \[ E[\phi](\tau)\lesssim (\epsilon^2 E_0+C_1)(1+\tau)^{-1-\alpha}. \] \end{prop} \begin{proof} Since the initial data are supported in the region $\{|x|\leq R_0\leq R\}$, the finite speed of propagation shows that $g(1+2\alpha, 0)$ vanishes. Take $\tau_1=0$ in ~\eqref{pWE1a}. We get \begin{equation} \label{pwe1a} g(1+2\alpha, \tau)=\int_{S_\tau}r^{1+2\alpha}(\partial_v\psi)^2dvd\omega\lesssim C_1+\epsilon^2E_0 \end{equation} and \begin{equation} \label{pwe1ai} \int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{2\alpha} (\partial_v\psi)^2dvd\omega d\tau\leq\bar G[0, 2\alpha]_{0}^{\tau_2}\lesssim C_1+\epsilon^2 E_0. 
\end{equation} We claim that we can choose a dyadic sequence $\{\tau_n\rightarrow\infty\}$ such that \begin{equation} \label{dyadicseq} \int_{S_{\tau_n}}r^{2\alpha}(\partial_v\psi)^2dvd\omega\leq(1+\tau_n)^{-1}\left(C_1+\epsilon^2 E_0\right), \end{equation} where $\tau_n$ satisfies the inequality $\gamma^{-2}\tau_n\leq\tau_{n-1}\leq\gamma^2\tau_n$ for some large constant $\gamma$. In fact, there exists $\tau_n\in[\gamma^{n}, \gamma^{n+1}]$ such that \eqref{dyadicseq} holds. Otherwise $$\int_{\gamma^{n}}^{\gamma^{n+1}}\int_{S_\tau}r^{2\alpha}(\partial_v\psi)^2dvd\omega d\tau \geq \ln\gamma \left(\epsilon^2 E_0+ C_1\right),$$ which contradicts ~\eqref{pwe1ai} if $\gamma$ is large enough. Taking $\tau=\tau_n$ in ~\eqref{pwe1a} and interpolating with ~\eqref{dyadicseq}, we obtain \begin{equation*} \int_{S_{\tau_n}}r(\partial_v\psi)^2dvd\omega\lesssim (1+\tau_n)^{-2\alpha}\left(\epsilon^2 E_0 +C_1\right). \end{equation*} Then the inequality ~\eqref{pWE1} implies that for $\tau\geq\tau_n$ \begin{equation} \label{pwe10} \begin{split} \int_{S_\tau}r(\partial_v\psi)^2dvd\omega+\int^{\tau}_{\tau_{n}}E[\phi](s)ds&\lesssim (1+\tau_{n})^{-2\alpha}\left(\epsilon^2E_0+C_1\right)+\tau_{n}^{1-\alpha}E[\phi](\tau_{n}). \end{split} \end{equation} On the other hand, the energy inequality \eqref{eb} shows that for all $s\leq \tau$ \[ E[\phi](\tau)\lesssim E[\phi](s)+(1+s)^{-1-\alpha}\left(\epsilon^2 E_0 +C_1\right). \] In particular \[ E[\phi](\tau)\lesssim E[\phi](0)+\epsilon^2 E_0 +C_1\lesssim \epsilon^2 E_0 +C_1. \] By ~\eqref{pwe10}, we have \begin{equation} \label{tautaun} (\tau-\tau_n)E[\phi](\tau)-\int^{\tau}_{\tau_{n}}(1+s)^{-1-\alpha}\left(\epsilon^2 E_0 +C_1\right)ds \lesssim(1+ \tau_n)^{-2\alpha}\left(\epsilon^2 E_0+C_1\right)+\tau_n^{1-\alpha}E[\phi](\tau_n). \end{equation} In particular for $n=1$ \begin{equation*} E[\phi](\tau)\lesssim (1+\tau)^{-1}\left(\epsilon^2 E_0+C_1\right). \end{equation*} Let $\tau=\tau_{n+1}$ in ~\eqref{tautaun}. 
We obtain \begin{equation*} (\tau_{n+1}-\tau_n)E[\phi](\tau_{n+1})\lesssim (1+\tau_{n})^{-\alpha}\left(\epsilon^2 E_0+C_1\right). \end{equation*} Since the $\tau_n$ are dyadic, we have \begin{equation*} E[\phi](\tau_{n})\lesssim \tau_{n}^{-1-\alpha}\left(\epsilon^2 E_0+C_1\right),\quad \forall n. \end{equation*} Finally, for $\tau\in[\tau_n, \tau_{n+1}]$, we can show \[ E[\phi](\tau)\lesssim E[\phi](\tau_n)+(1+\tau_n)^{-1-\alpha}\left(\epsilon^2 E_0+C_1\right)\lesssim (1+\tau_n)^{-1-\alpha} \left(\epsilon^2 E_0+C_1\right)\lesssim (1+\tau)^{-1-\alpha}\left(\epsilon^2 E_0+C_1\right). \] \end{proof} With the energy flux decay, we can obtain the decay of the spherical average of the solution. \begin{cor} \label{ptdcoutc} Assume that there is a constant $C_1$ such that \[ D^{2\alpha}[F]_{\tau_1}^{\tau_2}\leq C_1(1+\tau_1)^{-1-\alpha}, \qquad \forall \tau_2\geq \tau_1\geq 0. \] Then on the hypersurface $S_\tau$, we have \begin{align*} &\int_{\omega}|r\phi|^2d\omega\lesssim\epsilon^2 E_0 + C_1, \qquad\qquad \qquad \qquad r\geq R, \\ &\int_{\omega}r|\phi|^2d\omega\lesssim(1+\tau)^{-1-\alpha}\left(\epsilon^2 E_0 + C_1\right), \qquad r\geq R. \end{align*} \end{cor} \begin{proof} By Proposition \ref{energydecay}, the first inequality follows from ~\eqref{phi2bd} and ~\eqref{pwe1a}. The second one follows from Lemma ~\ref{lem1}. \end{proof} In order to obtain the pointwise decay of the solution, which is usually a consequence of Sobolev embedding, we need energy estimates for the derivatives of the solution. For this purpose, we commute the equation with the vector fields $\Omega$ and $T$. Under appropriate assumptions on the inhomogeneous term $F$, we hope to derive the same energy decay for $\Omega^k T^j\phi$. Denote \[ N(\phi_1, \phi_2)=B^{\mu\nu}\partial_\mu\phi_1\cdot\partial_\nu\phi_2, \qquad \forall \phi_1, \phi_2 \in C^{\infty}(\mathbb{R}^{3+1}), \] where we recall that the constants $B^{\alpha\beta}$ satisfy the null condition. 
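For orientation, we recall the prototype of a bilinear form satisfying the null condition (a standard illustrative example, not necessarily the particular $B^{\mu\nu}$ of the equation at hand): \[ N_0(\phi_1, \phi_2)=m^{\mu\nu}\partial_\mu\phi_1\cdot\partial_\nu\phi_2=-\partial_t\phi_1\partial_t\phi_2+\nabla\phi_1\cdot\nabla\phi_2, \qquad m^{\mu\nu}=\textnormal{diag}(-1, 1, 1, 1), \] which satisfies the condition since $m^{\mu\nu}\xi_\mu\xi_\nu=-\xi_0^2+\xi_1^2+\xi_2^2+\xi_3^2=0$ for every null covector $\xi$. The effect of the condition is that, when $N$ is expanded in a null frame, no product $\partial_u\phi_1\cdot\partial_u\phi_2$ consisting purely of the bad derivatives can occur.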
\begin{lem} \label{nullstructure} Let $Z$ be $\Omega$ or $T$. Then \[ Z^\beta N(\phi_1, \phi_2)=\sum\limits_{\beta_1+\beta_2=\beta} N(Z^{\beta_1}\phi_1, Z^{\beta_2}\phi_2). \] \end{lem} \begin{proof} Notice that $[\Omega, \partial_r]=[\Omega, \partial_t]=[\Omega, \mbox{$\nabla \mkern-13mu /$\,}]=0$. The lemma then follows from the fact that $B^{\alpha\beta}$ satisfy the null condition. \end{proof} Based on Corollary ~\ref{D2aQp}, we are able to prove the decay of the energy flux of $Z^\beta\phi$ after commuting the linear equation \eqref{LWAVEEQ} with $Z^\beta$. \begin{prop} \label{energydecaycom} Assume that there is a constant $C_1$ such that the inhomogeneous term $F$ in ~\eqref{LWAVEEQ} satisfies the following condition \[ D^{2\alpha}[Z^\beta F]_{\tau_1}^{\tau_2}\leq C_1(1+\tau_1)^{-1-\alpha}, \quad \forall \tau_2\geq \tau_1\geq 0, \quad \forall \beta\leq \beta_0 \] for some multi-index $\beta_0$ with $|\beta_0|\leq 4$. Assume $\Phi$, $L^\mu(t, x)$ satisfy the conditions in Theorem \ref{maintheorem2}. Then we have \begin{align} \label{inductionE} &E[Z^\beta\phi](\tau)\lesssim \left(C_1+\epsilon^2 E_0\right)(1+\tau)^{-1-\alpha},\\ \label{inductionD} &D^{2\alpha}[N(Z^{\beta_1}\Phi, Z^{\beta}\phi)]_{\tau_1}^{\tau_2}+D^{2\alpha}[Z^{\beta_1}L^\mu\cdot Z^{\beta}\partial_\mu\phi]_{\tau_1}^{\tau_2}\lesssim \left(C_1+\epsilon^2 E_0\right)(1+\tau_1)^{-1-\alpha} \end{align} for all $\beta\leq \beta_0$ and $|\beta_1|\leq 4$. \end{prop} \begin{proof} We prove the proposition by induction. When $\beta=0$, ~\eqref{inductionE} follows from Proposition ~\ref{energydecay}. Since $Z^{\beta_1}\Phi$ is $(\delta, \alpha, t_0, R_1, C_0)$-weak wave for all $|\beta_1|\leq 4$, Corollary ~\ref{D2aQp} and inequality ~\eqref{pwe1a} imply that \[ D^{2\alpha}[N(Z^{\beta_1}\Phi, \phi)]_{\tau_1}^{\tau_2}+D^{2\alpha}[Z^{\beta_1}L^\mu\cdot\partial_\mu\phi]_{\tau_1}^{\tau_2} \lesssim (C_1+\epsilon^2 E_0)(1+\tau_1)^{-1-\alpha},\quad \forall |\beta_1|\leq 4. 
\] Assume that ~\eqref{inductionE}, ~\eqref{inductionD} hold for all $\beta'<\beta$. Commute the equation \eqref{LWAVEEQ} with $Z^{\beta}$. Using Lemma ~\ref{nullstructure}, we have the equation for $Z^\beta\phi$ \begin{equation} \label{waveeqomktj} \Box(Z^\beta\phi)+N(\Phi,Z^\beta \phi)+L(Z^\beta\phi)=Z^\beta F-\sum\limits_{\beta_1+\beta_2\leq\beta, \beta_2<\beta} \left(N(Z^{\beta_1}\Phi, Z^{\beta_2}\phi)+Z^{\beta_1}L^\mu\cdot Z^{\beta_2}\partial_\mu\phi\right). \end{equation} Since $\beta_2<\beta$, by the induction assumptions, we get \begin{equation*} D^{2\alpha}\left[Z^\beta F-\sum\limits_{\beta_2<\beta} \left(N(Z^{\beta_1}\Phi, Z^{\beta_2}\phi)+Z^{\beta_1}L^\mu\cdot Z^{\beta_2}\partial_\mu\phi\right)\right]_{\tau_1}^{\tau_2}\lesssim\left(C_1+\epsilon^2 E_0\right)(1+\tau_1)^{-1-\alpha}. \end{equation*} Hence for $Z^\beta\phi$, inequality ~\eqref{inductionE} follows from Proposition ~\ref{energydecay} and inequality \eqref{inductionD} follows from Corollary ~\ref{D2aQp} and Proposition ~\ref{mainprop}. \end{proof} \bigskip Since the angular momentum $\Omega$ vanishes at $r=0$, we are not able to obtain the pointwise bound of the solution in the cylinder $\{|x|\leq R\}$ by commuting the equation with $\Omega$. We instead rely on elliptic estimates and the vector field $T=\partial_t$ as commutators. \begin{lem} \label{H2phil} Assume that there is a constant $C_1$ such that \[ D^{2\alpha}[F]_{\tau_1}^{\tau_2}+ D^{2\alpha}[\partial_t F]_{\tau_1}^{\tau_2}\leq C_1(1+\tau_1)^{-1-\alpha}, \qquad \forall \tau_2\geq \tau_1\geq 0. \] Then for the solution of the linear wave equation ~\eqref{LWAVEEQ}, we have \begin{equation*} \int_{r\leq R}|\partial^2\phi|^2dx=\sum\limits_{\mu, \nu=0}^{3}\int_{r\leq R}|\partial_{\mu\nu}\phi|^2dx\lesssim \left(E_0\epsilon^2+C_1\right)(1+\tau)^{-1-\alpha}. \end{equation*} \end{lem} \begin{proof} We first assume that $\tau\geq R$. Take $\beta_0=(0, 1)$ in Proposition ~\ref{energydecaycom}. 
We have \[ E[T^j\phi](\tau_1)+\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau}\frac{|\bar\partial T^j\phi|^2}{(1+r)^{1+\alpha}}dxd\tau+ D^{2\alpha}[T^j N(\Phi, \phi)+T^j L(\partial\phi)]_{\tau_1}^{\tau_2}\lesssim \left(E_0\epsilon^2+C_1\right)(1+\tau_1)^{-1-\alpha} \] for all $j\leq 1$. Using elliptic estimates ~\cite{elliptic}, we can show that \begin{align} \notag &\int_{r\leq R}|\partial^2\phi|^2dx\leq\int_{r\leq R}\sum\limits_{i, j=1}^{3}|\partial_{ij}\phi|^2dx+2\sum\limits_{\alpha=0}^{3}\int_{r\leq R}|\partial_\alpha\partial_t\phi|^2dx\\ \notag &\qquad\lesssim E[\partial_t\phi](\tau)+\int_{r\leq 2R}|\Delta \phi|^2+|\phi|^2dx\\ \label{H2phi2R} &\qquad\lesssim E[\partial_t\phi](\tau)+\int_{r\leq 2R}|\partial_{tt}\phi+F-N(\Phi, \phi)-L(\partial\phi)|^2+\phi^2dx\\ \notag &\qquad\lesssim E[\partial_t\phi](\tau)+\int_{r\leq 2R}|\partial_{tt}\phi|^2dx+ \sum\limits_{j\leq 1}\int_{\tau}^{\tau+R}\int_{r\leq 2R}|T^j F|^2+|T^j\phi|^2+|T^jN|^2+|T^j L|^2dxdt. \end{align} Consider the region bounded by $\Sigma_{\tau-R}$ and $t=\tau$. Take $X=T$ in ~\eqref{energyeq}. Note that the vector field $T$ is Killing, that is, $K^T[\partial_t\phi]=0$. We can conclude that \begin{equation*} \label{energyineq2R} \begin{split} \int_{r\leq 2R}J^T_\mu[\partial_t\phi]n^\mu d\sigma&=\int_{\Sigma_{\tau-R}\cap \{t\leq \tau\}}J^T_\mu[\partial_t\phi]n^\mu d\sigma +\int_{\tau-R}^{\tau}\int_{r\leq R +t-\tau}(TN+TL-\partial_t F)\partial_{tt}\phi d\textnormal{vol}. \end{split} \end{equation*} Apply the Cauchy--Schwarz inequality to the last term. 
We obtain \begin{equation*} \begin{split} & \qquad\int_{\tau-R}^{\tau}\int_{r\leq R +t-\tau}|\partial_t F-TN-TL||\partial_{tt}\phi|d\textnormal{vol}\\ &\lesssim\int_{\tau-R}^{\tau}\int_{\Sigma_t}|\partial_t F-TN-TL|^2(1+r)^{\alpha+1}+\frac{|\partial\pa_t\phi|^2}{(1+r)^{\alpha+1}}dxdt\\ &\lesssim \int_{\tau-R}^{\tau}\int_{\Sigma_t}\frac{|\partial\pa_t\phi|^2}{(1+r)^{1+\alpha}}dxdt+ D^{2\alpha}[\partial_t F]_{\tau-R}^{\tau}+D^{2\alpha}[TN]_{\tau-R}^{\tau}+D^{2\alpha}[TL]_{\tau-R}^{\tau}\\ &\lesssim\left(E_0\epsilon^2+C_1\right)(1+\tau)^{-1-\alpha}. \end{split} \end{equation*} Hence we can estimate \begin{equation*} \begin{split} \int_{r\leq 2R}|\partial_{tt}\phi|^2dx\leq 2\int_{r\leq 2R}J^T_\mu[\partial_t\phi]n^\mu d\sigma &\lesssim E[\partial_t\phi](\tau-R)+\int_{\tau-R}^{\tau}\int_{r\leq R +t-\tau}|\partial_t F-TN-TL||\partial_{tt}\phi|d\textnormal{vol}\\ &\lesssim\left(E_0\epsilon^2+C_1\right)(1+\tau)^{-1-\alpha}. \end{split} \end{equation*} Then from ~\eqref{H2phi2R}, we get \begin{align*} \int_{r\leq R}|\partial^2\phi|^2dx&\lesssim\left(E_0\epsilon^2+C_1\right)(1+\tau)^{-1-\alpha}+ \sum\limits_{j\leq 1}\int_{\tau-R}^{\tau+R}\int_{\Sigma_t}\frac{|\bar\partial\phi|^2}{(1+r)^{1+\alpha}} +|T^j F|^2+|T^jN|^2+|T^jL|^2\\ &\lesssim \left(E_0\epsilon^2+C_1\right)(1+\tau)^{-1-\alpha}+ \sum\limits_{j\leq 1}D^{\alpha}[T^j F]_{\tau-R}^{\tau+R}+D^{2\alpha}[T^j N]_{\tau-R}^{\tau+R}+D^{2\alpha}[T^j L]_{\tau-R}^{\tau+R}\\ &\lesssim \left(E_0\epsilon^2+C_1\right)(1+\tau)^{-1-\alpha}. \end{align*} Thus we have proven the lemma for $\tau\geq R$. When $\tau\leq R$, the finite speed of propagation shows that the solution of ~\eqref{LWAVEEQ} vanishes when $|x|\geq t+R_0$. Thus we can replace $\tau-R$ with $0$ in the above argument, and the lemma still holds. \end{proof} A corollary of the above lemma is the following pointwise decay of the solution in the cylinder $\{r\leq R\}$. 
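For the reader's convenience, we record the form of the interior elliptic estimate used repeatedly above (see ~\cite{elliptic}): for any $u\in H^2(B_{2R})$, \[ \sum\limits_{i, j=1}^{3}\int_{B_R}|\partial_{ij}u|^2dx\lesssim \int_{B_{2R}}|\Delta u|^2+|u|^2dx. \] In our application $u=\phi(\tau, \cdot)$, and $\Delta\phi$ is rewritten by means of the equation as $\partial_{tt}\phi+F-N(\Phi, \phi)-L(\partial\phi)$, which is how the inhomogeneous terms enter the estimate.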
\begin{cor} \label{ptdcinc} Assume that there is a constant $C_1$ such that \[ D^{2\alpha}[F]_{\tau_1}^{\tau_2}+ D^{2\alpha}[\partial_t F]_{\tau_1}^{\tau_2}\leq C_1(1+\tau_1)^{-1-\alpha}, \qquad \forall \tau_2\geq \tau_1\geq 0. \] Then for the solution $\phi$ of ~\eqref{LWAVEEQ}, we have \begin{equation*} |\phi|^2\lesssim \left(C_1+\epsilon^2E_0\right)(1+\tau)^{-1-\alpha}, \qquad r\leq R. \end{equation*} \end{cor} \begin{proof} Using Sobolev embedding and Lemma ~\ref{H2phil}, when $|x|\leq R$, we can estimate \begin{align*} \phi^2&\lesssim\int_{r\leq R}\sum\limits_{i, j=1}^{3}|\partial_{ij}\phi|^2+\phi^2dx\\ &\lesssim \int_{r\leq R}|\partial^2\phi|^2dx+\int_{\tau}^{\tau+R}\int_{r\leq R}|\partial_t\phi|^2+|\phi|^2dxdt\\ &\lesssim \left(C_1+\epsilon^2E_0\right)(1+\tau)^{-1-\alpha}, \end{align*} where the last step follows from the integrated local energy inequality ~\eqref{ILE0} restricted to the region $r\leq R$. \end{proof} \section{Bootstrap Argument} To solve our nonlinear problem \eqref{THEWAVEEQ}, we use the standard Picard iteration process. We prove, by a bootstrap argument, that the nonlinear term $D^{2\alpha}[F]_{\tau_1}^{\tau_2}$ decays, which leads to the decay of the solution $\phi$. We still denote the quadratic nonlinearity $A^{\alpha\beta}\partial_\alpha\phi\partial_\beta\phi$ of $F$ in ~\eqref{THEWAVEEQ} as $N(\phi, \phi)=A^{\mu\nu}\partial_\mu\phi\partial_\nu\phi$, in which the constants $A^{\mu\nu}$ satisfy the null condition. \begin{prop} \label{propboostrap} Suppose $Z^k\Phi$ is $(\delta, \alpha, t_0, R_1, C_1)$-weak wave for all $|k|\leq 3$. Assume $$|\partial^2 Z^\beta\Phi|\leq C_1,\quad \forall |\beta|\leq 1.$$ Assume the functions $L^\mu(t, x)$, $h^{\mu\nu}(t, x)$ satisfy the conditions in Theorem \ref{maintheorem2}. 
If the nonlinearity $F$ in ~\eqref{THEWAVEEQ} satisfies \begin{align*} &D^{2\alpha}[Z^\beta F]_{\tau_1}^{\tau_2}\leq 2E_0\epsilon^2 (1+\tau_1)^{-1-\alpha}, \quad \forall |\beta|\leq 3, \quad \forall \tau_2\geq \tau_1\geq0,\\ &\int_{r\leq R}|\nabla Z^\beta F|^2dx\leq 2E_0 \epsilon^2(1+\tau)^{-1-\alpha}, \quad \forall |\beta|\leq 1, \quad \forall \tau\geq 0, \end{align*} then \begin{align} \label{nullbd} &D^{2\alpha}[Z^\beta F]_{\tau_1}^{\tau_2}\lesssim E_0^2 \epsilon^4 (1+\tau_1)^{-1-\alpha}, \quad \forall |\beta|\leq 3, \quad \forall \tau_2\geq \tau_1\geq 0,\\ \label{nullbdnabla} &\int_{r\leq R}|\nabla Z^\beta F|^2dx\lesssim E_0^2 \epsilon^4(1+\tau)^{-1-\alpha}, \quad \forall |\beta|\leq 1, \quad \forall \tau\geq 0. \end{align} \end{prop} \begin{remark} \label{remarkpropb} If the given function $\Phi$ is assumed as in Theorem \ref{maintheorem2}, that is, $Z^\beta\Phi$ is $(\delta, \alpha, t_0, R_1, C_1)$-weak wave for all $|\beta|\leq 4$ and $|\partial^2Z^\beta \Phi|\leq C_1$ for all $|\beta|\leq 2$, then the above proposition holds if we replace $|\beta|\leq 3$, $|\beta|\leq 1$ with $|\beta|\leq 4$, $|\beta|\leq 2$ respectively. The reason that we formulate the proposition as above is that three derivatives are the minimum to close the bootstrap argument. Four derivatives are needed to obtain a $C^2$ solution of the equation \eqref{THEWAVEEQ}. \end{remark} The proof of Proposition \ref{propboostrap} is quite similar to that in \cite{yang1}. For completeness, we repeat it here. Since higher order nonlinearities decay much better, we only consider the quadratic nonlinearities $N(\phi, \phi)$ and $Q(\partial\phi)=h^{\mu\nu}(t, x)\partial_\mu\phi\partial_\nu\phi$. 
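Before carrying out the estimates, let us indicate schematically why bounds of this form close the bootstrap. Denote by $C$ the implicit constant in ~\eqref{nullbd} (an abbreviation introduced only for this remark). Then the assumed bound $2E_0\epsilon^2(1+\tau_1)^{-1-\alpha}$ is improved to \[ CE_0^2\epsilon^4(1+\tau_1)^{-1-\alpha}\leq E_0\epsilon^2(1+\tau_1)^{-1-\alpha} \] as soon as $\epsilon^2\leq \frac{1}{CE_0}$, so for $\epsilon$ small the constant $2$ in the bootstrap assumption is never saturated and a standard continuity argument applies.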
First, Lemma ~\ref{nullstructure} and the assumptions on $h^{\mu\nu}(t, x)$ \[ |Z^\beta h^{\mu\nu}|\lesssim (1+r)^{-\frac{3}{2}\alpha},\quad \forall |\beta|\leq 4 \] imply that \begin{equation} \label{nullbdsplit} \begin{split} D^{2\alpha}[Z^\beta F]_{\tau_1}^{\tau_2}&\lesssim\sum\limits_{\beta_1+\beta_2\leq\beta}\int_{\tau_1}^{\tau_2}\int_{\Sigma_\tau} |N(Z^{\beta_1}\phi, Z^{\beta_2}\phi)|^2(1+r)^{1+2\alpha}+|\partial Z^{\beta_1}\phi|^2|\partial Z^{\beta_2}\phi|^2(1+r)^{1-\alpha}dxd\tau\\ &\lesssim \sum\limits_{\beta_1+\beta_2\leq\beta}\int_{\tau_1}^{\tau_2}\int_{r\leq R}|\partial\phi_1|^2|\partial\phi_2|^2dxd\tau +\sum\limits_{\beta_1+\beta_2\leq\beta}\int_{\tau_1}^{\tau_2}\int_{S_\tau}|N(\phi_1, \phi_2)|^2 r^{1+2\alpha}dxd\tau\\ &\qquad+\sum\limits_{\beta_1+\beta_2\leq\beta}\int_{\tau_1}^{\tau_2}\int_{S_\tau}|\partial\phi_1|^2|\partial\phi_2|^2(1+r)^{1-\alpha}dxd\tau, \end{split} \end{equation} where we denote $\phi_1=Z^{\beta_1}\phi$, $\phi_2=Z^{\beta_2}\phi$. We estimate the three integrals on the right hand side of ~\eqref{nullbdsplit} separately. We use elliptic estimates as well as the extra bootstrap assumption ~\eqref{nullbd} to bound the first integral. The estimate of the second integral relies on the null structure of $N(\phi_1, \phi_2)$. The third integral is handled by the integrated local energy inequality \eqref{ILE0}. \subsection{Proof of ~\eqref{nullbd} in the region $r\leq R$} When $r\leq R$, we use elliptic estimates to obtain the pointwise bound of the solution. However, since elliptic estimates are only available on a smaller region, we divide the region $r\leq R$ into two parts: $r\leq \frac{R}{2}$ and $r\geq \frac{R}{2}$. In the smaller region $r\leq \frac{R}{2}$, we use elliptic estimates, while in the region $r\geq \frac{R}{2}$, we use Sobolev embedding. \bigskip Recall that $\phi_1=Z^{\beta_1}\phi$, $\phi_2=Z^{\beta_2}\phi$, $|\beta_1|+|\beta_2|\leq 3$. Without loss of generality, assume $|\beta_1|\leq |\beta_2|$. 
In particular we have $|\beta_1|\leq 1$. For $r\leq \frac{R}{2}$, we claim that \begin{equation} \label{ptbdpaphi} |\partial\phi_1|^2\lesssim E_0\epsilon^2(1+\tau)^{-1-\alpha},\quad r\leq \frac{R}{2}. \end{equation} We first verify ~\eqref{ptbdpaphi} for $\partial_t\phi_1$. By ~\eqref{waveeqomktj}, $\partial_t\phi_1$ satisfies the following equation \[ \Box (\partial_t\phi_1)+N(\Phi, \partial_t\phi_1)+L(\partial_t \phi_1)=F_1. \] Since $|\beta_1+(0, 2)|\leq 3$, estimates ~\eqref{inductionD} imply that \[ D^{2\alpha}[F_1]_{\tau_1}^{\tau_2}+ D^{2\alpha}[\partial_t F_1]_{\tau_1}^{\tau_2}\lesssim E_0\epsilon^2 (1+\tau_1)^{-1-\alpha}. \] Thus by Corollary ~\ref{ptdcinc}, we have \[ |\partial_t\phi_1|^2\lesssim E_0\epsilon^2 (1+\tau)^{-1-\alpha}, \qquad r\leq \frac{R}{2}. \] For $\nabla \phi_1$, notice that $\phi_1=Z^{\beta_1}\phi$, $|\beta_1|\leq 1$ and $|\partial^2 Z^{\beta_1}\Phi|\lesssim 1$, $|\partial Z^{\beta_1}L^\mu|\lesssim 1$. Using elliptic estimates and Lemma \ref{H2phil}, we have for $|x|\leq \frac{R}{2}$ \begin{equation} \label{Caest} \begin{split} \|\nabla\phi_1\|_{C^{\frac{1}{2}}(B_{\frac{1}{2}R})}^2&\lesssim \int_{r\leq \frac{R}{2}}\sum\limits_{i, j=1}^{3}|\partial_{ij}\nabla\phi_1|^2+|\nabla\phi_1|^2dx\\ &\lesssim \int_{r\leq R}|\nabla\Delta\phi_1|^2+ |\nabla\phi_1|^2dx\\ &\lesssim \int_{r\leq R}|\nabla\left(\partial_{tt}\phi_1+Z^{\beta_1}F-Z^{\beta_1}N(\Phi, \phi)-Z^{\beta_1}L(\partial\phi)\right)|^2+ |\nabla\phi_1|^2dx\\ &\lesssim E[\partial_{tt}\phi_1](\tau)+E[\phi_1](\tau)+\int_{r\leq R}|\nabla Z^{\beta_1}F|^2+|\nabla^2 \phi_1|^2+|\nabla^2\phi|^2+|\nabla \phi_1|^2+|\nabla\phi|^2dx\\ &\lesssim E_0\epsilon^2(1+\tau)^{-1-\alpha}, \end{split} \end{equation} where $B_{\frac 1 2 R}$ denotes the ball $\{r\leq \frac 1 2 R\}$ in $\mathbb{R}^3$. 
Hence we have proven ~\eqref{ptbdpaphi}, which implies that \begin{align*} \int_{\tau_1}^{\tau_2}\int_{r\leq \frac{R}{2}}|\partial\phi_1|^2|\partial\phi_2|^2dxd\tau &\lesssim\int_{\tau_1}^{\tau_2}(1+\tau)^{-1-\alpha}E_0 \epsilon^2 \int_{r\leq\frac{R}{2}}|\partial\phi_2|^2dxd\tau\\ &\lesssim E_0 \epsilon^2\int_{\tau_1}^{\tau_2}(1+\tau)^{-1-\alpha}E[\phi_2](\tau)d\tau\\ &\lesssim E_0^2 \epsilon^4(1+\tau_1)^{-1-\alpha}. \end{align*} \bigskip In the region $\frac{R}{2}\leq r\leq R$, we use the angular momentum $\Omega$. By Sobolev embedding on the unit sphere, we have \begin{equation} \label{SemSphere} \int_{\omega}|\partial\phi_1|^2\cdot|\partial\phi_2|^2d\omega\lesssim\sum\limits_{1'}\int_{\omega}|\partial\phi_{1'}|^2 d\omega\cdot\int_{\omega}|\partial\phi_{2}|^2d\omega, \end{equation} where we still denote $\phi_{1'}=Z^{\beta_{1'}}\phi$ for $\beta_{1'}\leq \beta_1+(2, 0)$. Notice that $|\beta_{1'}|+|\beta_2|\leq 3+2=5$. Without loss of generality, we assume $|\beta_2|\leq 2$. Thus by Lemma ~\ref{H2phil}, we have \begin{align} \label{SemrR} &\int_{\omega}|\partial\phi_{2}|^2d\omega\lesssim \int\limits_{\frac{R}{2}\leq r\leq R}|\partial\phi_{2}|^2+ |\partial_r\partial\phi_{2}|^2dx\lesssim E[\phi_{2}](\tau)+\int_{r\leq R}|\partial^2\phi_{2}|^2dx\lesssim (1+\tau)^{-1-\alpha}E_0 \epsilon^2, \end{align} where we have used ~\eqref{inductionD}, ~\eqref{waveeqomktj} and the assumption $|\beta_2|\leq 2$ to verify the conditions in Lemma ~\ref{H2phil}. 
Since $|\beta_{1'}|\leq 3$, we can show that \begin{align*} \int_{\tau_1}^{\tau_2}\int_{\frac 1 2 R\leq r\leq R}|\partial\phi_1|^2|\partial\phi_2|^2dxd\tau &\lesssim\int_{\tau_1}^{\tau_2}\int_{\frac {R}{2}}^{R}\int_{\omega}|\partial\phi_1|^2|\partial\phi_2|^2d\omega \quad r^2drd\tau\\ &\lesssim\sum\limits_{1'} \int_{\tau_1}^{\tau_2}\int_{\frac {R}{2}}^{R}\int_{\omega}|\partial\phi_{1'}|^2d\omega\int_{\omega}|\partial\phi_{2}|^2d\omega\quad r^2drd\tau\\ &\lesssim \sum\limits_{1'}\int_{\tau_1}^{\tau_2}(1+\tau)^{-1-\alpha}E_0 \epsilon^2 \int_{r\leq R}|\partial\phi_{1'}|^2dxd\tau\\ &\lesssim E_0^2 \epsilon^4(1+\tau_1)^{-1-\alpha}. \end{align*} Summarizing, we have shown \[ \sum\limits_{\beta_1+\beta_2\leq\beta}\int_{\tau_1}^{\tau_2}\int_{r\leq R}|\partial\phi_1|^2|\partial\phi_2|^2dxd\tau\lesssim E_0^2 \epsilon^4(1+\tau_1)^{-1-\alpha}. \] \begin{remark} We remark here that ~\eqref{SemrR} is only true when $r$ is bigger than a constant. That is why we need to distinguish the two cases $r\leq \frac 1 2 R$ and $r\geq \frac 1 2 R$. \end{remark} \subsection{Proof of ~\eqref{nullbdnabla}} Note that when $|\beta|\leq 1$ and $\beta_1+\beta_2=\beta$, we have $\beta_1=0$ or $\beta_2=0$. By~\eqref{SemSphere}, we have \begin{equation*} \begin{split} \int_{\omega}|\nabla Z^\beta F|^2d\omega &\lesssim\int_{\omega}|\partial Z^\beta\phi|^2\cdot|\partial^2\phi|^2+|\partial\phi|^2|\partial^2 Z^\beta\phi|^2d\omega\\ &\lesssim\sum\limits_{|\beta'|\leq 2}\int_{\omega}|\partial Z^\beta\phi|^2d\omega\cdot\int_{\omega}|\partial^2 Z^{\beta'}\phi|^2d\omega+\int_{\omega}|\partial Z^{\beta'}\phi|^2d\omega\cdot\int_{\omega}|\partial^2 Z^{\beta}\phi|^2d\omega, \end{split} \end{equation*} where as pointed out previously, we only have to consider the quadratic nonlinearities $N(\phi, \phi)$, $Q(\partial\phi)$. For $r\leq \frac 1 2 R$, the inequality ~\eqref{ptbdpaphi} shows that \[ |\partial Z^\beta\phi|^2\lesssim \epsilon^2 E_0(1+\tau)^{-1-\alpha}, \quad \forall |\beta|\leq 1. 
\] For $\frac 1 2 R\leq r\leq R$, the inequality ~\eqref{SemrR} implies that $$\int_{\omega}|\partial Z^{\beta'}\phi|^2d\omega\lesssim \epsilon^2 E_0(1+\tau)^{-1-\alpha}, \quad \forall |\beta'|\leq 2. $$ On the other hand, using Lemma ~\ref{H2phil}, we obtain $$ \int_{r\leq R}|\partial^2 Z^{\beta'}\phi|^2dx\lesssim \epsilon^2 E_0(1+\tau)^{-1-\alpha},\quad \forall|\beta'|\leq 2. $$ Therefore, for all $|\beta|\leq 1$, we can estimate \begin{align*} \int_{r\leq R}|\nabla Z^\beta F|^2dx&\lesssim\epsilon^2 E_0(1+\tau)^{-1-\alpha}\sum\limits_{|\beta'|\leq 2} \int_{r\leq R}|\partial^2 Z^{\beta'}\phi|^2dx\lesssim E_0^2\epsilon^4(1+\tau)^{-1-\alpha}. \end{align*} Hence we have proven ~\eqref{nullbdnabla}. \subsection{Proof of ~\eqref{nullbd} in the region $r\geq R$} We first consider the quadratic term $N(\phi_1, \phi_2)$. The p-weighted energy inequality is about $\psi=r\phi$ instead of $\phi$. For this reason, we expand $N(\phi_1, \phi_2)$ in terms of $\psi$. \begin{lem} \label{lemnullform} Suppose $N(\phi_1, \phi_2)=A^{\alpha\beta}\partial_\alpha\phi_1\partial_\beta\phi_2$ with constants $A^{\alpha\beta}$ satisfying the null condition. Then \begin{equation} \label{nullform} r^4|N(\phi_1, \phi_2)|^2\lesssim\phi_1^2\phi_2^2+\phi_1^2\cdot r^2|\partial_r\phi_2|^2+|\mbox{$\nabla \mkern-13mu /$\,}\psi_1|^2|\mbox{$\nabla \mkern-13mu /$\,}\psi_2|^2+|\partial_v\psi_1|^2|\partial_u\psi_2|^2+|\partial_u\psi_1|^2|\partial_v\psi_2|^2, \end{equation} where $v=\frac{t+r}{2}, u=\frac{t-r}{2}$. \end{lem} \begin{proof} In fact, notice that $$r^2 N(\phi_1, \phi_2)=\phi_1\phi_2+r(\phi_1\phi_2)_r+N(\psi_1, \psi_2) $$ and $$|N(\psi_1, \psi_2)|\lesssim |\partial_v\psi_1|\cdot|\partial_u\psi_2|+|\partial_u\psi_1|\cdot|\partial_v\psi_2|+|\mbox{$\nabla \mkern-13mu /$\,}\psi_1|\cdot|\mbox{$\nabla \mkern-13mu /$\,}\psi_2|. $$ Hence the lemma holds. \end{proof} To estimate the second term in ~\eqref{nullbdsplit}, it suffices to handle the terms on the right hand side of ~\eqref{nullform}. 
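The identity $r^2 N(\phi_1, \phi_2)=\phi_1\phi_2+r(\phi_1\phi_2)_r+N(\psi_1, \psi_2)$ used in the proof above can be sanity-checked symbolically. The following sketch (ours, not part of the argument) verifies it for the model null form $Q_0(\phi_1,\phi_2)=\partial_t\phi_1\partial_t\phi_2-\nabla\phi_1\cdot\nabla\phi_2$ restricted to radial functions, written in the null coordinates $u$, $v$ of the lemma:

```python
# Check r^2 Q0(phi1, phi2) = phi1*phi2 + r*(phi1*phi2)_r + Q0(psi1, psi2)
# for radial functions, with v = (t+r)/2, u = (t-r)/2, so r = v - u and
# d/dr (at fixed t) = (d/dv - d/du)/2.
import sympy as sp

u, v = sp.symbols('u v')
r = v - u
phi1 = sp.Function('phi1')(u, v)
phi2 = sp.Function('phi2')(u, v)

def Q0(f, g):
    # model null form on radial functions: f_t g_t - f_r g_r = (f_v g_u + f_u g_v)/2
    return (sp.diff(f, v)*sp.diff(g, u) + sp.diff(f, u)*sp.diff(g, v))/2

def d_r(f):
    # radial derivative at fixed t
    return (sp.diff(f, v) - sp.diff(f, u))/2

psi1, psi2 = r*phi1, r*phi2
lhs = r**2 * Q0(phi1, phi2)
rhs = phi1*phi2 + r*d_r(phi1*phi2) + Q0(psi1, psi2)
assert sp.expand(lhs - rhs) == 0
```

The general case, with constants $A^{\alpha\beta}$ satisfying the null condition and with the angular terms included, follows the same computation together with $\mbox{$\nabla \mkern-13mu /$\,}\psi_i=r\mbox{$\nabla \mkern-13mu /$\,}\phi_i$.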
We estimate the first three terms in a uniform way. Let $\Phi_1$ be $\phi_1$ or $\mbox{$\nabla \mkern-13mu /$\,}\psi_1$; $\Phi_2$ be $\phi_2$, $r\partial_r\phi_2$ and $\mbox{$\nabla \mkern-13mu /$\,}\psi_2$ respectively. Recall that $\phi_1=Z^{\beta_1} \phi$, $\phi_2=Z^{\beta_2}\phi$, $|\beta_1|+|\beta_2|\leq 3$. Using Sobolev embedding on the unit sphere, we have \begin{equation*} \int_{\omega}|\Phi_1|^2|\Phi_2|^2d\omega\lesssim\sum\limits_{1', 2'}\int_{\omega}|\Phi_{1'}|^2d\omega\cdot\int_{\omega}|\Phi_{2'}|^2d\omega, \end{equation*} where we let \begin{equation} \label{bs} \begin{cases} \beta_{1'}\leq \beta_1+(2, 0), \quad\beta_{2'}=\beta_2,\quad \textnormal{if } |\beta_1|\leq 1,\\ \beta_{1'}=\beta_1, \quad \beta_{2'}\leq \beta_2+(2, 0),\quad \textnormal{if } |\beta_2|\leq 1. \end{cases} \end{equation} In particular $|\beta_{1'}|+|\beta_{2'}|\leq 5$. For the third case when $\Phi_1=\mbox{$\nabla \mkern-13mu /$\,}\psi_1$, $\Phi_2=\mbox{$\nabla \mkern-13mu /$\,}\psi_2$, without loss of generality, we assume $|\beta_{1'}|\leq 2$. Since $\mbox{$\nabla \mkern-13mu /$\,}\psi_1=\Omega\phi_1$, $\Phi_{1'}$ can always be written as $Z^\beta\phi$ for some $|\beta|\leq 3$. Thus by Corollary ~\ref{ptdcoutc}, we have $$ r^2\int_{\omega}|\Phi_{1'}|^2d\omega\lesssim\epsilon^2 E_0,\qquad r\geq R. $$ Recall that $\Phi_{2'}=\phi_{2'}$, $r\partial_r\phi_{2'}$ or $\mbox{$\nabla \mkern-13mu /$\,}\psi_{2'}$. We always have \[ \frac{|\Phi_{2'}|^2}{(1+r)^{3+\alpha}}\lesssim \frac{|\bar\partial\phi_{2'}|^2}{(1+r)^{1+\alpha}}. 
\] Then the integrated energy inequality ~\eqref{ILE0} implies that \begin{equation*} \begin{split} \int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{2\alpha-3}\Phi_{1}^2\Phi_{2}^2 d\textnormal{vol}&=\int_{\tau_1}^{\tau_2}\int_{v_\tau}^{\infty}\int_{\omega}r^{2\alpha-1}\Phi_{1}^2\Phi_{2}^2 dvd\omega d\tau\\ &\lesssim\sum\limits_{1', 2'} \int_{\tau_1}^{\tau_2}\int_{v_\tau}^{\infty} r^{2\alpha-3}r^2 \int_{\omega}|\Phi_{1'}|^2d\omega\int_{\omega}|\Phi_{2'}|^2d\omega dvd\tau\\ &\lesssim\epsilon^2 E_0\sum\limits_{1', 2'}\int_{\tau_1}^{\tau_2}\int_{S_\tau}\frac{|\Phi_{2'}|^2}{r^{3-2\alpha}}dvd\omega d\tau\\ &\lesssim\epsilon^2 E_0 \sum\limits_{1', 2'}\int_{\tau_1}^{\tau_2}\int_{S_\tau}\frac{|\Phi_{2'}|^2}{(1+r)^{3+\alpha}}d\textnormal{vol}\\ &\lesssim \epsilon^4 E_0^2(1+\tau_1)^{-1-\alpha}, \end{split} \end{equation*} where we recall that $\alpha\leq \frac{1}{4}$. We hence have estimated the first three terms in \eqref{nullform}. \bigskip It remains to handle the last two terms $|\partial_v\psi_1|^2|\partial_u\psi_2|^2$, $|\partial_u\psi_1|^2|\partial_v\psi_2|^2$. Since they are symmetric, it suffices to consider $|\partial_v\psi_1|^2|\partial_u\psi_2|^2$. Recall that $\psi_1=rZ^{\beta_1}\phi$, $\psi_2=rZ^{\beta_2}\phi$, $|\beta_1|+|\beta_2|\leq 3$. Define $\beta_{1'}, \beta_{2'}$ as in ~\eqref{bs}. In particular $|\beta_{1'}|+|\beta_{2'}|\leq 5$. We have two cases according to $\beta_{i'}$, $i=1, 2$. \bigskip We first consider the case when $|\beta_{1'}|\leq 2$. The idea is that we bound $|\partial_v\psi_1|$ uniformly and then control $|\partial_u\psi_2|^2$ by the energy flux through the null hypersurface $v=\textnormal{const}$. The following lemma shows that the energy flux through $v=\textnormal{const}$ is bounded. \begin{lem} \label{crossnullen} Consider the region $D=[u_1, u_2]\times[v_1, \infty)\subset S_\tau \times [\tau_1, \tau_2]$. 
Under the conditions of Proposition~\ref{propboostrap}, we have the energy flux estimate through the hypersurface $v=\textnormal{const}$ $$\int_{u_1}^{u_2}\int_{\omega}(\partial_u\psi_2)^2dud\omega \lesssim \epsilon^2 E_0(1+\tau_1)^{-1-\alpha}, $$ where $\psi_2=r\phi_2=rZ^{\beta_2}\phi$. \end{lem} \begin{proof} Back to the energy equation ~\eqref{energyeq}, take $X=T$ on the region $D$. We have \begin{align*} \int_{u_1}^{u_2}J_\mu^T[\phi_2]n^{\mu}d\sigma + \iint\limits_{v\geq v_1, u=u_1 }J_\mu^T[\phi_2]n^{\mu}d\sigma&=\iint\limits_{v\geq v_1,u=u_2 }J_\mu^T[\phi_2]n^{\mu}d\sigma+\int_{I_{\tau_1}^{\tau_2}}J_\mu^T[\phi_2]n^{\mu}d\sigma\\ &\quad+ \int_D \Box\phi_2\cdot \partial_t\phi_2d\textnormal{vol}. \end{align*} Using the estimates ~\eqref{inductionD}, we conclude that \[ D^{2\alpha}[\Box\phi_2]_{\tau_1}^{\tau_2}\lesssim E_0\epsilon^2(1+\tau_1)^{-1-\alpha}. \] Then by the integrated local energy inequality \eqref{ILE0} and the energy inequality ~\eqref{eb}, we can show that \begin{align*} \int_{u_1}^{u_2}\int_{\omega}r^2(\partial_u\phi_2)^2d\omega du &\leq 2\int_{u_1}^{u_2}J_\mu^T[\phi_2]n^{\mu}d\sigma\lesssim E_0\epsilon^2 (1+\tau_1)^{-1-\alpha}, \end{align*} where notice that $D\subset S_\tau\times[\tau_1, \tau_2]$. Thus by Corollary ~\ref{ptdcoutc}, we get \begin{align*} \int_{u_1}^{u_2}\int_{\omega}(\partial_u\psi_2)^2d\omega du &= \int_{u_1}^{u_2}\int_{\omega}r^2(\partial_u\phi_2)^2dud\omega + \left.\int_{\omega}r\phi_2^2d\omega \right|_{u_1}^{u_2}\lesssim E_0\epsilon^2(1+\tau_1)^{-1-\alpha}. \end{align*} \end{proof} \bigskip We continue our proof of \eqref{nullbd} when $|\beta_{1'}|\leq 2$. 
Lemma \ref{crossnullen} and Sobolev embedding on the unit sphere show that \begin{equation*} \begin{split} &\int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{2\alpha-3}|\partial_v\psi_1|^2|\partial_u\psi_2|^2 d\textnormal{vol}\\ &=\int_{v_{\tau_1}}^{\infty}\int_{u_{\tau_1}}^{u(v)}r^{2\alpha-1}\int_{\omega}|\partial_u\psi_2|^2|\partial_v\psi_1|^2d\omega du dv\\ &\lesssim \sum\limits_{1', 2'}\int_{v_{\tau_1}}^{\infty}\int_{u_{\tau_1}}^{u(v)}\int_{\omega}|\partial_u\psi_{2'}|^2 d\omega du\cdot \sup\limits_{u}\int_{\omega}r^{2\alpha-1}|\partial_v\psi_{1'}|^2d\omega\quad dv\\ & \lesssim \epsilon^2 E_0(1+\tau_1)^{-1-\alpha}\sum\limits_{1'}\int_{v_{\tau_1}}^{\infty}\sup\limits_{u}\int_{\omega}r^{2\alpha-1}|\partial_v\psi_{1'}|^2d\omega dv\\ &\lesssim \epsilon^2 E_0(1+\tau_1)^{-1-\alpha}\sum\limits_{1'}\int_{v_{\tau_1}}^{\infty}\sup\limits_{u}\int_{\omega}r^{2\alpha}|\partial_v\psi_{1'}|^2d\omega dv, \end{split} \end{equation*} where $\beta_{1'}, \beta_{2'}$ are defined as in ~\eqref{bs}. Now, for all $ u\in [u_{\tau_1}, u(v)]$, we have \begin{align*} r^{2\alpha}(\partial_v\psi_{1'})^2&\lesssim\left.r^{2\alpha}(\partial_v\psi_{1'})^2\right|_{u=u_1} + \int_{u_1}^{u_2}r^{2\alpha}(\partial_v\psi_{1'})^2du \\ &\quad\quad+ \int_{u_1}^{u_2}r^{2\alpha}(\partial_u\partial_v\psi_{1'})^2du + \int_{u_1}^{u_2}r^{2\alpha-1}(\partial_v\psi_{1'})^2du\\ &\lesssim \left.r^{2\alpha}(\partial_v\psi_{1'})^2\right|_{u=u_1} + \int_{u_1}^{u_2}r^{2\alpha}(\partial_v\psi_{1'})^2du\\ & \quad\quad+ \int_{u_1}^{u_2}r^{2\alpha}(\mbox{$\Delta \mkern-13mu /$\,}\psi_{1'})^2+ r^{2\alpha+2}|Z^{\beta_{1'}}(F-N-L)|^2du, \end{align*} where we use the wave equation ~\eqref{waveqpsi} in the last step and $u_1=u_\tau, u_2=u(v)$. Integrate on the unit sphere. 
We obtain \begin{align*} &\int_{v_{\tau_1}}^{\infty}\sup\limits_{u}\int_{\omega}r^{2\alpha}|\partial_v\psi_{1'}|^2 d\omega dv \lesssim \int_{S_{\tau_{1}}}r^{2\alpha}(\partial_v\psi_{1'})^2dvd\omega\\ & +\int_{\tau_{1}}^{\tau_2}\int_{S_\tau}r^{2\alpha}(\partial_v\psi_{1'})^2 + (\mbox{$\nabla \mkern-13mu /$\,}\Omega\phi_{1'})^2r^{2\alpha} +r^{2\alpha+2}|Z^{\beta_{1'}}(F-N-L)|^2 dvd\omega d\tau, \end{align*} where note that $\mbox{$\nabla \mkern-13mu /$\,}=\frac{\Omega}{r}$. We claim that the right hand side above can be bounded by $\epsilon^2 E_0$ (up to a constant). In fact, the first term can be bounded by $\epsilon^2 E_0$ by ~\eqref{pwe1a}; the second term can be bounded by $\epsilon^2 E_0$ by ~\eqref{pwe1ai}; the third term can be controlled by $(1+\tau_1)^{-1-\alpha}\epsilon^2 E_0$ by the integrated local energy inequality ~\eqref{ILE0} (notice that $|\beta_{1'}|+1\leq3$ and $2\alpha\leq 1-\alpha$ as $\alpha\leq \frac{1}{4}$); the last term can be estimated as \[ D^{2\alpha}[Z^{\beta_{1'}}F]_{\tau_1}^{\tau_2}+D^{2\alpha}[Z^{\beta_{1'}}N]_{\tau_1}^{\tau_2}+D^{2\alpha} [Z^{\beta_{1'}}L]_{\tau_1}^{\tau_2}\lesssim E_0\epsilon^2(1+\tau_1)^{-1-\alpha}, \] where we use the inequality ~\eqref{inductionD}. Summarizing, we have shown $$\int_{v_{\tau_1}}^{\infty}\sup\limits_{u}\int_{\omega}r^{2\alpha}|\partial_v\psi_1|^2 d\omega dv\lesssim \epsilon^2 E_0. $$ In particular, for fixed $r\geq R$, we have \begin{equation} \label{pavphipt} \int_{t_1}^{t_2}\int_{\omega}|\partial_v\psi_1|^2(t, r, \omega)d\omega dt\lesssim \epsilon^2 E_0,\quad \psi_1=r Z^\beta\phi, \quad |\beta|\leq 2. \end{equation} Therefore \begin{equation*} \int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{2\alpha-3}|\partial_v\psi_1|^2|\partial_u\psi_2|^2 d\textnormal{vol}\lesssim E_0^2\epsilon^4(1+\tau_1)^{-1-\alpha}. \end{equation*} \bigskip \textbf{When} $|\beta_{2'}|\leq 2$, we control $|\partial_v\psi_1|^2$ by the energy and bound $\partial_u\psi_2$ uniformly. 
Similarly, using ~\eqref{pwe1a}, ~\eqref{pwe1ai} and Sobolev embedding, we obtain \begin{align} \notag &\int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{2\alpha-1}|\partial_v\psi_1|^2|\partial_u\psi_2|^2 dvd\omega d\tau\\ \label{pro3sup1} &\lesssim \int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{2\alpha-1}|\partial_v\psi_{1'}|^2 \cdot\int_{\omega}|\partial_u\psi_{2'}|^2d\omega \quad dvd\omega d\tau\\ \notag &\lesssim \int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{3\alpha-1}|\partial_v\psi_{1'}|^2 \cdot r^{-\alpha}\int_{\omega}|\partial_u\psi_{2'}|^2d\omega \quad dvd\omega d\tau\\ \notag &\lesssim \epsilon^2 E_0\int_{\tau_1}^{\tau_2} \sup\limits_{v}r^{-\alpha}\int_{\omega}|\partial_u\psi_{2'}|^2d\omega d\tau, \end{align} where $\beta_{1'}, \beta_{2'}$ are defined as in ~\eqref{bs}. For all $v$, we have \begin{align*} r^{-\alpha}(\partial_u\psi_{2'})^2&\lesssim \left.r^{-\alpha}(\partial_u\psi_{2'})^2\right|_{v=v_{\tau_2}}+\left|\int_{v}^{v_{\tau_2}}r^{-1-\alpha}|\partial_u\psi_{2'}|^2dv\right|\\ &\quad\quad +2\left|\int_{v}^{v_{\tau_2}}r^{-\alpha}|\partial_u\psi_{2'}\cdot\partial_v\partial_u\psi_{2'}|dv\right|\\ &\lesssim\left.r^{-\alpha}(\partial_u\psi_{2'})^2\right|_{v=v_{\tau_2}}+\int_{v_\tau}^{\infty}r^{-1-\alpha}|\partial_u\psi_{2'}|^2dv\\ &\quad\quad + \int_{v_\tau}^{\infty}r^{-1-\alpha}(\partial_u\psi_{2'})^2dv + \int_{v_\tau}^{\infty}r^{1-\alpha}(\partial_v\partial_u\psi_{2'})^2dv\\ &\lesssim\left.r^{-\alpha}(\partial_u\psi_{2'})^2\right|_{v=v_{\tau_2}}+ \int_{v_\tau}^{\infty}\frac{(\partial_u\psi_{2'})^2}{r^{1+\alpha}}dv\\ &\quad\quad +\int_{v_\tau}^{\infty}r^{1-\alpha}(\mbox{$\Delta \mkern-13mu /$\,}\psi_{2'})^2dv+ \int_{v_\tau}^{\infty}r^{3-\alpha}|Z^{\beta_{2'}}(F-N-L)|^2dv, \end{align*} where $v_{\tau_2}=\frac{R+\tau_2}{2}$. Integrate on the unit sphere. 
We get \begin{align*} &\int_{\tau_1}^{\tau_2} \sup\limits_{v}r^{-\alpha}\int_{\omega}|\partial_u\psi_{2'}|^2d\omega d\tau\lesssim \int_{\tau_1}^{\tau_2}\int_{\omega}\left.r^{-\alpha}(\partial_u\psi_{2'})^2\right|_{v=v_{\tau_2}}d\tau \\ &\quad\quad+\int_{\tau_1}^{\tau_2}\int_{S_\tau}\frac{(\partial_u\psi_{2'})^2}{r^{1+\alpha}}+r^{1-\alpha} (\mbox{$\nabla \mkern-13mu /$\,}\Omega\phi_{2'})^2+r^{3-\alpha}|Z^{\beta_{2'}}(F-N-L)|^2dvd\omega d\tau. \end{align*} We claim that it can be bounded by $(1+\tau_1)^{-1-\alpha}\epsilon^2 E_0$ up to a constant. In fact, the first term can be estimated as follows $$\int_{\tau_1}^{\tau_2}\int_{\omega}\left.r^{-\alpha}(\partial_u\psi_{2'})^2\right|_{v=v_{\tau_2}}d\tau= \int_{u_{\tau_1}}^{u_{\tau_2}}\int_{\omega}r^{-\alpha}|\partial_u\psi_{2'}|^2d\omega du\leq\int_{u_{\tau_1}}^{u_{\tau_2}}\int_{\omega}|\partial_u\psi_{2'}|^2d\omega du \lesssim(1+\tau_1)^{-1-\alpha}\epsilon^2 E_0 $$ by Lemma ~\ref{crossnullen}; the second and third terms can also be controlled by $(1+\tau_1)^{-1-\alpha}\epsilon^2 E_0$ by the integrated local energy estimates ~\eqref{ILE0} (notice that $|\beta_{2'}|\leq 2$); the last term can be estimated as follows \[ D^{2\alpha}[Z^{\beta_{2'}}F]_{\tau_1}^{\tau_2} +D^{2\alpha}[Z^{\beta_{2'}}N]_{\tau_1}^{\tau_2}+ D^{2\alpha}[Z^{\beta_{2'}}L]_{\tau_1}^{\tau_2}\lesssim E_0\epsilon^2(1+\tau_1)^{-1-\alpha} \] by the inequality ~\eqref{inductionD}. Hence \begin{equation} \label{dupsi} \int_{\tau_1}^{\tau_2} \sup\limits_{v}r^{-\alpha}\int_{\omega}|\partial_u\psi_{2'}|^2d\omega d\tau\lesssim E_0\epsilon^2(1+\tau_1)^{-1-\alpha}. \end{equation} Plug this into ~\eqref{pro3sup1}. We obtain $$\int_{\tau_1}^{\tau_2}\int_{S_\tau}r^{2\alpha-3}|\partial_v\psi_1|^2|\partial_u\psi_2|^2 d\textnormal{vol}\lesssim \epsilon^4 E_0^2(1+\tau_1)^{-1-\alpha}. 
$$ Therefore using Lemma~\ref{lemnullform}, we have shown that \[ \int_{\tau_1}^{\tau_2}\int_{S_\tau}|N(\phi_1, \phi_2)|^2 r^{1+2\alpha}d\textnormal{vol}\lesssim \epsilon^4 E_0^2(1+\tau_1)^{-1-\alpha}. \] \bigskip To show \eqref{nullbd}, it remains to estimate the quadratic term $Z^\beta Q(\partial\phi)$. By \eqref{nullbdsplit}, it suffices to consider the third integral in \eqref{nullbdsplit}. Notice that \[ |\partial\phi_1||\partial\phi_2|\lesssim r^{-2}|\partial_v\psi_1||\partial_u\psi_2|+\sum\limits_{\substack{|\beta_{1'}|+|\beta_{2'}|\leq 4\\|\beta_{1'}|, |\beta_{2'}|\leq 3}}|Z^{\beta_{1'}}\phi||\partial Z^{\beta_{2'}}\phi|+(|Z\phi|+|\partial_v\phi|)|\partial Z^3\phi|. \] The first term has already been estimated considering that $r^{1-\alpha}\leq r^{1+2\alpha}$ (on $S_\tau$, $r\geq R\geq 1$). For the second term, if $|\beta_{1'}|\leq 1$ or $|\beta_{2'}|\leq 1$, then using the Sobolev inequality on the unit sphere, we have \[ \int_{\omega}|\phi_{1'}|^2|\partial \phi_{2'}|^2d\omega\lesssim\sum\limits_{|\beta_{1''}|, |\beta_{2''}|\leq 3}\int_{\omega}|\phi_{1''}|^2d\omega\cdot \int_{\omega}|\partial\phi_{2''}|^2d\omega\lesssim r^{-2}\epsilon^2E_0\sum\limits_{ |\beta_{2''}|\leq 3}\int_{\omega}|\partial \phi_{2''}|^2d\omega \] by \eqref{phi2bd} and \eqref{pwe1a}. Then using the integrated local energy inequality \eqref{ILE0}, we obtain \begin{align*} \int_{\tau_1}^{\tau_2}\int_{S_\tau}|\partial\phi_1|^2|\partial\phi_2|^2 (1+r)^{1-\alpha}d\textnormal{vol}\lesssim \epsilon^2 E_0\int_{\tau_1}^{\tau_2}\int_{S_\tau}\frac{|\partial\phi_{2''}|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}\lesssim \epsilon^4 E_0^2(1+\tau_1)^{-1-\alpha}. \end{align*} If both $|\beta_{1'}|\geq 2$, $|\beta_{2'}|\geq 2$, recall that $|\beta_{1'}|+|\beta_{2'}|\leq|\beta|+1\leq 4$. 
We conclude that $|\beta_{1'}|=|\beta_{2'}|=2$, for which we use the embedding \[ \int_{\omega}|Z^2\phi|^2|\partial Z^2\phi|^2d\omega\lesssim \|Z^2\phi\|_{H^1(S^2)}^2\|\partial Z^2\phi\|_{H^1(S^2)}^2\lesssim r^{-2}\epsilon^2 E_0 \|\partial Z^2\phi\|_{H^1(S^2)}^2. \] Hence \begin{align*} \int_{\tau_1}^{\tau_2}\int_{S_\tau}|\partial\phi_1|^2|\partial\phi_2|^2 (1+r)^{1-\alpha}d\textnormal{vol}\lesssim \sum\limits_{|\beta|\leq 1}\epsilon^2 E_0\int_{\tau_1}^{\tau_2}\int_{S_\tau}\frac{|\partial Z^2 \Omega^\beta\phi|^2}{(1+r)^{1+\alpha}}d\textnormal{vol}\lesssim \epsilon^4 E_0^2(1+\tau_1)^{-1-\alpha}. \end{align*} For the third term $(|Z\phi|+|\partial_v\phi|)|\partial Z^3\phi|$, using the integrated local energy inequality, it suffices to show that $$(|Z\phi|+|\partial_v\phi|)^2(1+r)^{2}\lesssim \epsilon^2 E_0.$$ In fact, by Corollary \ref{ptdcoutc} and Corollary \ref{ptdcinc}, we have \[ (1+r)^2|Z\phi|^2\lesssim \sum\limits_{|\beta|\leq 3}\int_{\omega}|Z^\beta \phi|^2d\omega\lesssim \epsilon ^2 E_0. \] To bound $|\partial_v\phi|$, notice that $r\geq R$. Inequality \eqref{pavphipt} implies that \[ \sum\limits_{|\beta|\leq 2}\int_{t_1}^{t_2}\int_{\omega}|Z^\beta\partial_v\psi|^2d\omega dt\lesssim \epsilon^2 E_0,\quad \psi=r \phi. \] Then using Sobolev embedding on $[t_1, t_2]\times S^2$, we obtain \[ r^2|\partial_v\phi|^2\leq |\partial_v\psi|^2+|\phi|^2\lesssim\epsilon^2 E_0. \] Therefore \begin{equation*} (1+r)^2|\partial \phi|^2\lesssim (1+r)^2(|Z\phi|^2+|\partial_v\phi|^2)\lesssim \epsilon^2 E_0. \end{equation*} In sum, ~\eqref{nullbd} follows from ~\eqref{nullbdsplit}, and we have proven Proposition~\ref{propboostrap}. \begin{remark} We in fact can show that \[ D^{1+\alpha}[Z^\beta F]_{\tau_1}^{\tau_2}\lesssim E_0^2\epsilon^4(1+\tau_1)^{-1-\alpha}. \] However, it is sufficient to consider $D^{2\alpha}[Z^\beta F]_{\tau_1}^{\tau_2}$ in order to close the bootstrap argument. 
\end{remark} \section{Proof of the Main Theorems} We used the foliation $\Sigma_\tau$, part of which is null, in the previous argument. However, we do not have a local existence result with respect to the foliation $\Sigma_\tau$. To solve the nonlinear equation \eqref{THEWAVEEQ}, we use the standard Picard iteration process. Take $\phi_{-1}(t, x)=0$. We solve the following linear wave equation recursively \begin{equation} \label{iteration} \begin{cases} \Box\phi_{n+1}+N(\Phi,\phi_{n+1})+L(\partial\phi_{n+1})=F(\partial\phi_n), \\ \phi_{n+1}(0,x)=\epsilon \phi_0(x), \quad \partial_t\phi_{n+1}(0,x)=\epsilon \phi_1(x). \end{cases} \end{equation} Now suppose the implicit constant in Proposition ~\ref{propboostrap} is $C_1$, which, according to our notation, depends only on $R$, $\alpha$, $t_0$, $C_0$, $A^{\alpha\beta}$, $B^{\alpha\beta}$. Set $$\epsilon_0=\frac{1}{\sqrt{C_1 E_0}}. $$ Then for all $\epsilon\leq \epsilon_0$, we have \[ C_1\epsilon^4 E_0^2\leq \epsilon^2 E_0. \] Thus, by induction on $n$, the nonlinear term $F$ satisfies \begin{equation*} D^{2\alpha}[Z^{\beta} F(\partial\phi_n)]_{\tau_1}^{\tau_2}\leq C_1E_0^2 \epsilon^4 (1+\tau_1)^{-1-\alpha}\leq E_0\epsilon^2(1+\tau_1)^{-1-\alpha}, \quad \forall |\beta|\leq 4, \quad \forall \tau_2\geq \tau_1\geq 0. \end{equation*} Therefore, Proposition ~\ref{energydecay} implies that \begin{equation*} E[Z^\beta\phi_n](\tau)\lesssim E_0 \epsilon^2 (1+\tau)^{-1-\alpha},\quad \forall n,\quad \forall |\beta|\leq 4. \end{equation*} After using Sobolev embedding on the unit sphere, Corollary ~\ref{ptdcoutc} and Corollary \ref{ptdcinc} indicate that \begin{equation*} \begin{split} &|Z^\beta\phi_n|\lesssim\sqrt{E_0}\epsilon (1+r)^{-\frac 1 2}(1+|t-r+R|)^{-\frac 1 2-\frac{1}{2}\alpha},\quad \forall |\beta|\leq 2,\\ & |\phi_n|\lesssim\sqrt{E_0}\epsilon (1+r)^{-1}. \end{split} \end{equation*} We also need to show that $\phi_n$ is uniformly bounded in $C^2$. 
We first show that $\phi_n$ is bounded in $C^1$. When $r\leq \frac 1 2 R$, the estimate \eqref{ptbdpaphi} implies that \[ |\partial Z^k\phi_n|^2\lesssim E_0\epsilon^2(1+\tau)^{-1-\alpha},\quad \forall k\leq 2. \] Here we have to point out that although there $|\beta_1|\leq 1$ (due to the assumption $|\beta_1|+|\beta_2|\leq 3$), the estimate holds for $|\beta_1|\leq 2$ if we assume $|\beta_1|+|\beta_2|\leq 4$, see Remark \ref{remarkpropb}. When $\frac 1 2 R\leq r\leq R$, using \eqref{SemrR} and Sobolev embedding on the unit sphere, we obtain the same estimates as above. For $r\geq R$, the inequality \eqref{dupsi} implies that $$\int_{\tau_1}^{\tau_2} \sup\limits_{v}r^{-\alpha}\int_{\omega}|\partial_u\psi_n|^2d\omega d\tau \lesssim(1+\tau_1)^{-1-\alpha}\epsilon^2E_0,\quad \psi_n=rZ^\beta\phi_n,\quad |\beta|\leq 3. $$ Using Sobolev embedding on $S^2\times[\tau_1, \tau_2]$, we obtain $$|r\partial_u Z^\beta\phi_n|^2\lesssim \phi_n^2+ r^{\alpha}(1+\tau)^{-1-\alpha}\epsilon^2E_0,\quad \forall |\beta|\leq 1. $$ Recall that $\partial_u=\partial_t-\partial_r$ and $|\phi_n|^2, |\partial_t\phi_n|^2\lesssim (1+r)^{-1}(1+\tau)^{-1-\alpha}\epsilon^2E_0$. We can estimate \begin{equation*} |\partial_r Z^\beta\phi_n|\lesssim (1+r)^{-\frac 1 2}(1+\tau)^{-\frac 1 2-\frac 1 2\alpha}\sqrt{E_0}\epsilon,\quad \forall |\beta|\leq 1. \end{equation*} Since $\mbox{$\nabla \mkern-13mu /$\,}=\frac{\Omega}{r}$, we have shown that outside the cylinder $\{r\leq R\}$ \[ |\partial Z^\beta\phi_n|\lesssim (1+r)^{-\frac 1 2}(1+\tau)^{-\frac 1 2-\frac 1 2\alpha}\sqrt{E_0}\epsilon,\quad \forall |\beta|\leq 1. \] It remains to control the $C^2$ estimates of $\phi_n$. Outside the smaller cylinder $\{r\leq \frac{1}{4}R\}$, we use the equation \eqref{iteration}. In fact, we can write $$\partial_{rr}\phi_{n+1}=F(\partial\phi_n)+\partial_{tt}\phi_{n+1}-\frac{2}{r}\partial_r\phi_{n+1}- \mbox{$\Delta \mkern-13mu /$\,} \phi_{n+1}-N(\Phi, \phi_{n+1})-L(\partial\phi_{n+1}). 
$$ Since we have already shown that \begin{align*} &|\partial\phi_n|,\quad|\partial_{tt}\phi_{n}|\lesssim \sqrt{E_0}\epsilon(1+r)^{-\frac 1 2}(1+\tau)^{-\frac 1 2-\frac 1 2\alpha},\\ & |\mbox{$\Delta \mkern-13mu /$\,}\phi_n|\leq r^{-2}|\Omega^2\phi_n|\lesssim \sqrt{E_0}\epsilon(1+r)^{-\frac 1 2}(1+\tau)^{-\frac 1 2-\frac 1 2\alpha}, \end{align*} we conclude that \[ |\partial_{rr}\phi_n|\lesssim \sqrt{E_0}\epsilon (1+r)^{-\frac 1 2}(1+\tau)^{-\frac 1 2-\frac 1 2\alpha}. \] Hence when $r\geq \frac{R}{4}$, we can show that \[ |\partial^2\phi_n|\leq |\partial_{rr}\phi_n|+|\partial Z\phi_n|\lesssim\sqrt{E_0}\epsilon (1+r)^{-\frac 1 2}(1+\tau)^{-\frac 1 2-\frac 1 2\alpha}. \] For the case when $r\leq \frac{R}{4}$, we rely on elliptic theory. First we have the elliptic equation for $\phi_{n+1}$ \[ \Delta \phi_{n+1}=F(\partial\phi_n)+\partial_{tt}\phi_{n+1}-N(\Phi, \phi_{n+1})-L(\partial\phi_{n+1}). \] Since we have shown from \eqref{Caest} that \[ \|\partial Z^\beta\phi_n\|^2_{C^\frac 1 2(B_{\frac 1 2 R})}\leq \|\nabla Z^\beta\phi_n\|_{C^\frac 1 2(B_{\frac 1 2 R})}^2+\|Z^{\beta+1}\phi_n\|_{C^\frac 1 2(B_{\frac 1 2 R})}^2 \lesssim E_0\epsilon^2(1+\tau)^{-1-\alpha},\quad \forall |\beta|\leq 1, \] we can show that the right hand side of the above elliptic equation is uniformly bounded in $C^{\frac 1 2}(B_{\frac 1 2 R})$. Then Schauder estimates \cite{elliptic} imply that \[ \|\phi_n\|_{C^{2, \frac 1 2}(B_{\frac{1}{4}R})}^2\lesssim E_0\epsilon^2(1+\tau)^{-1-\alpha}. \] In particular, we have shown that \[ \sum\limits_{|\beta|\leq 2}|\partial^{\beta}\phi_n|\lesssim \sqrt{E_0}\epsilon (1+r)^{-\frac 1 2}(1+\tau)^{-\frac 1 2-\frac 1 2\alpha}. \] Now the classical local existence theory ~\cite{johndelay} shows that there exists a time $t^*>0$ and a unique smooth solution $\phi(t, x)\in C^\infty([0, t^*)\times \mathbb{R}^{3})$ of equation ~\eqref{THEWAVEEQ}. Moreover \[ \phi_{n}(t, x)\rightarrow \phi(t, x) \] in $C^{\infty}([0, t^*)\times \mathbb{R}^3)$. 
Therefore \[ \sum\limits_{|\beta|\leq 2}|\partial^{\beta}\phi|\lesssim \sqrt{E_0}\epsilon (1+r)^{-\frac 1 2}(1+\tau)^{-\frac 1 2-\frac 1 2\alpha},\quad\forall (t, x)\in [0, t^*)\times \mathbb{R}^3. \] By a theorem of H\"{o}rmander~\cite{hormander}, as long as the solution is bounded up to the second order derivatives, the solution exists globally. That is, there exists a unique global solution $\phi(t,x)\in C^\infty(\mathbb{R}^{3+1})$ which solves ~\eqref{THEWAVEEQ}. In addition, since \[ \phi_{n}(t, x)\rightarrow \phi(t, x),\quad (t, x)\in \mathbb{R}^{3+1}, \] $\phi$ obeys all the estimates of $\phi_n$ obtained above. We have thus finished the proof of Theorem \ref{maintheorem2}. \bigskip To prove Theorem \ref{maintheorem}, it suffices to check that the functions $\Phi$, $\mathcal{N}^\mu(\partial\Phi)$, $\mathcal{N}^{\mu\nu}(\partial\Phi)$ satisfy the conditions in Theorem \ref{maintheorem2} that $\Phi$, $L^\mu$, $h^{\mu\nu}$ satisfy correspondingly. In fact, notice that \[ |\partial Z^\beta \mathcal{N}^{\mu}(\partial\Phi)|\leq C(\mathcal{N})|\partial^2 Z^\beta\Phi|,\quad \forall |\beta|\leq 2. \] The boundedness of $\partial^2 Z^\beta \Phi$, for all $|\beta|\leq 2$, follows from the equation for $Z^\beta\Phi$ (similarly, express the only unknown term $\partial_{rr}Z^\beta\Phi$ as a combination of terms with known $L^\infty$ norm). Hence $|\partial Z^\beta\mathcal{N}(\partial\Phi)|+|\partial^2 Z^\beta \Phi|$ is bounded by a constant depending on $C_0$ and the nonlinearity $\mathcal{N}$. For the other conditions, when $t\leq t_0$, we have \[ |Z^\beta \mathcal{N}^\mu(\partial\Phi)|+|Z^\beta\mathcal{N}^{\mu\nu}(\partial\Phi)|\leq C(\mathcal{N}, C_0),\quad \forall |\beta|\leq 4. \] When $t\geq t_0$, we can show \begin{align*} &|Z^\beta\mathcal{N}^{\mu}|\leq C(\mathcal{N}, C_0)(1+|x|)^{-1-\frac 1 2\alpha_0}(1+(t-|x|)_{+})^{-1},\\ &|Z^\beta\mathcal{N}^{\mu\nu}(\Phi)|\leq C(\mathcal{N}, C_0)(1+|x|)^{-\frac 1 2 \alpha_0},\quad |\beta|\leq 4. 
\end{align*} Replace $\alpha$ with $\min\{\frac{\alpha_0}{6}, \alpha\}$. Then the functions $\mathcal{N}^{\mu}(\partial\Phi)$, $\mathcal{N}^{\mu\nu}(\partial\Phi)$ satisfy the conditions in Theorem \ref{maintheorem2}. We thus have the stability result of Theorem \ref{maintheorem}.
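The arithmetic behind the choice $\epsilon_0=\frac{1}{\sqrt{C_1 E_0}}$ in the iteration above is elementary; a minimal symbolic check (the symbol names below are ours):

```python
# Check that for eps <= eps0 = 1/sqrt(C1*E0) the bootstrap output C1*eps^4*E0^2
# is dominated by the bootstrap input eps^2*E0 (with equality at the threshold).
import sympy as sp

C1, E0, eps = sp.symbols('C1 E0 epsilon', positive=True)
eps0 = 1/sp.sqrt(C1*E0)
ratio = sp.simplify(C1*eps**4*E0**2 / (eps**2*E0))   # reduces to C1*E0*eps^2
assert sp.simplify(ratio.subs(eps, eps0)) == 1
# the ratio is increasing in eps, so it stays <= 1 for all eps <= eps0
assert sp.diff(ratio, eps).is_positive
```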
\section{Introduction}\label{sec:intro} The aim of this paper is to prove the existence of solutions of \begin{equation}\label{S2E3} \left( \frac{d^{3}H}{d\xi^{3}} + \xi^{2} + a\right) H^{3}=1\,, \quad a\in\mathbb{R}\,, \end{equation} that satisfy the following behaviour \begin{equation}\label{S2E7} H\sim\frac{1}{|\xi|^{\frac{2}{3}}}\quad\mbox{as}\quad|\xi| \to\infty\,. \end{equation} This equation has been deduced in \cite{CV3} (see also \cite{CV4}). It arises in a two dimensional model describing the steady coating of a bumpy surface in the thin-film approximation. Equation (\ref{S2E3}) corresponds to the particular case in which the motion of the fluid is driven by a balance of capillarity and gravity effects. In some regions the curvature of the substrate induces capillary forces of the same order of magnitude as the gravitational ones. In the steady regime the model describing such flows has the form (cf. \cite{CV3}): \begin{equation}\label{A1} \frac{\partial}{\partial s} \left(\left(Q(s)+\varepsilon\frac{\partial^{3}h}{\partial s^{3}}\right)h^{3}\right)=0\,, \quad\varepsilon > 0\,, \end{equation} where we have neglected some non-relevant terms. The variable $s$ stands for the arc-length that parametrises the substrate, and $h$ is the height of the fluid over this surface. The parameter $\varepsilon$ is the ratio of the characteristic height of the fluid and the characteristic radius of curvature of the substrate. The function $Q(s)$ describes the balance between the gravitational and the capillary forces induced by the geometry of the substrate; it measures the tendency of the fluid to move in the direction tangential to the substrate as a result of the aforementioned forces. If the function $Q(s)$ has a constant sign, the motion of the fluid always takes place in the same direction. In that case, (\ref{A1}) can be approximated by its leading order term \begin{equation}\label{A2} \frac{\partial}{\partial s}(Q(s) h^{3}) =0\,. 
\end{equation} However, this approximation breaks down and cannot be uniformly valid for arbitrary values of $s$ if $Q(s)$ changes sign. In such cases (\ref{A2}) predicts the onset of regions, where $Q(s)$ is close to zero, with infinite height $h$, i.e. the fluid accumulates in those regions. As a consequence, the approximation (\ref{A2}) must be replaced by the model (\ref{A1}). In the particular case in which $Q(s)$ is positive in most of the substrate, but there exists a sufficiently small region (of size $\varepsilon^{\frac{3}{17}}$ to be precise) where $Q(s)=0$, a boundary layer analysis shows that, under suitable non-degeneracy conditions, the height of the fluid can be approximated by means of (\ref{S2E3}), the height of the fluid becoming of order $\varepsilon^{-\frac{2}{17}}$. This asymptotic analysis also shows that the solutions of (\ref{S2E3}) describing the stationary flows in those regions must satisfy (\ref{S2E7}). Equations similar to (\ref{A1}), where the main driving terms are the gravity and the curvature of the substrate, have been obtained, in a slightly different context, in \cite{MyersSolidII} and \cite{MyersSolidI}. This model can be obtained also as a particular case of the ones considered in \cite{Royetal} for specific choices of the parameters. See also \cite{Howell} for a model that neglects gravity. Similar problems have been investigated in relation with industrial applications, such as the drainage of (metal) foams (e.g. \cite{StockerHosoi}) and the manufacture of lenses (e.g. \cite{Howell} and \cite{JChiniK}), although in the latter case the effect of gravity can be neglected. These works offer numerical as well as formal results (using perturbation methods). It is our aim to study the solutions of (\ref{S2E3})-(\ref{S2E7}) rigorously. The main result of the paper is the following: \begin{theorem}\label{Main} For any $a\in\mathbb{R}$ there exists a solution of (\ref{S2E3}) satisfying (\ref{S2E7}). 
\end{theorem} We sketch the main ideas in the proof of Theorem~\ref{Main}. We first observe that the terms $\xi^{2}H^{3}$ and $1$ in (\ref{S2E3}) give the natural scaling $H\sim|\xi|^{-\frac{2}{3}}\Phi$ with $\Phi\sim1$ (cf. (\ref{S2E7})), which gives the leading order behaviour of (\ref{S2E3}), namely, \[ \left( |\xi|^{-\frac{8}{3}} \frac{d^{3}\Phi}{d\xi^{3}}+1+a\,|\xi |^{-2}\right) \Phi^{3}=1\,, \quad|\xi|\gg1\,. \] A change of variables with behaviour $\tau\sim\frac{9}{17}\xi|\xi|^{\frac{8}{9}}$ as $|\xi|\to\infty$ gives the dominant balance problem \[ \left( \frac{d^{3}\Phi}{d\tau^{3}}+1\right) \Phi^{3}\sim1 \quad \mbox{as}\quad|\xi|\to+\infty\,,\quad \Phi\to 1\quad\mbox{as}\quad |\tau|\to\infty\,. \] In the variables $\Phi$ and $\tau$, (\ref{S2E3}) becomes an autonomous dynamical system of the form \begin{eqnarray*} \frac{d^3\Phi}{d\tau^3}+1 = \frac{1}{\Phi^{3}}-(a-1)(\cos\theta)^{2}-F(\theta)\,,\quad \frac{d\theta}{d\tau} = (\cos\theta)^{\frac{26}{9}} \,, \end{eqnarray*} that we shall denote by $(D)$, for the unknown $(\Phi,d\Phi/d\tau,d^{2}\Phi/d\tau^{2},\theta) \in\mathbb{R}^{+}\times\mathbb{R}^{2}\times[-\pi/2,\pi/2]$. Here $\theta$ is defined by $\xi=\tan\theta$ and the function $F$ is a linear combination of $\Phi$ and its derivatives with coefficients that depend only on $\theta$ and that vanish at $\theta=\pm\pi/2$. Thus, this system has the property that the three dimensional subspaces $\{\theta=\pm\pi/2 \}$ are invariant and the flow on them is described by the ODE \begin{equation} \label{ODEtapas} \frac{d^{3}\Phi}{d\tau^{3}}+1=\frac{1}{\Phi^{3}}\,. \end{equation} The system associated to (\ref{ODEtapas}) for the unknown $(\Phi,d\Phi/d\tau,d^{2}\Phi/d\tau^{2})$ was studied in \cite{CV}. It has a single critical point, $P_{s}=(1,0,0)$, and therefore $(D)$ has two critical points, $p_{-}=(1,0,0,-\pi/2)$ and $p_{+}=(1,0,0,\pi/2)$. 
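Two of the computations above can be verified symbolically: the normalisation of the change of variables $\tau=\frac{9}{17}\xi|\xi|^{\frac 8 9}$ (here written for $\xi>0$), and the linearisation of (\ref{ODEtapas}) at the critical point $P_{s}=(1,0,0)$, whose characteristic equation is $\lambda^{3}=-3$. A sketch (ours, included only as a consistency check):

```python
# 1) tau = (9/17) xi^{17/9} satisfies tau'(xi) = xi^{8/9}, so in the chain rule
#    d^3/dxi^3 = (tau')^3 d^3/dtau^3 + lower order, the factor (tau')^3 = xi^{8/3}
#    cancels the |xi|^{-8/3} in the leading order equation as xi -> infinity.
import sympy as sp

xi = sp.symbols('xi', positive=True)
tau = sp.Rational(9, 17)*xi**sp.Rational(17, 9)
tp, tpp, tppp = sp.diff(tau, xi), sp.diff(tau, xi, 2), sp.diff(tau, xi, 3)
assert sp.simplify(tp - xi**sp.Rational(8, 9)) == 0
assert sp.simplify(tp**3 - xi**sp.Rational(8, 3)) == 0
# the remaining chain rule coefficients, 3*tau'*tau'' and tau''', are of lower
# order than (tau')^3 as xi -> infinity:
assert sp.limit(3*tp*tpp/tp**3, xi, sp.oo) == 0
assert sp.limit(tppp/tp**3, xi, sp.oo) == 0

# 2) linearising Phi''' + 1 = Phi^{-3} at Phi = 1 (phi = Phi - 1) gives
#    phi''' = -3 phi, i.e. lambda^3 + 3 = 0: one root with negative real part
#    and two with positive real part, so P_s is hyperbolic.
lam = sp.symbols('lam')
roots = sp.Poly(lam**3 + 3, lam).nroots()
assert sum(1 for z in roots if sp.re(z) < 0) == 1
assert sum(1 for z in roots if sp.re(z) > 0) == 2
```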
Then, the solutions of (\ref{S2E3}) satisfying the matching conditions (\ref{S2E7}) correspond to solutions of $(D)$ contained in the trajectories that connect the critical point $p_{-}$ as $\tau\to-\infty$ to $p_{+}$ as $\tau\to\infty$; equivalently, they are contained in heteroclinic orbits connecting these two critical points. The existence of a heteroclinic orbit for $(D)$ is proved by means of a shooting argument in the direction of decreasing $\tau$. The shooting starts close to the invariant manifold $\{\theta=\pi/2\}$ and the final argument will require information on the flow on the invariant manifold $\{\theta=-\pi/2\}$. For that reason we shall need the following information on (\ref{ODEtapas}). First, that the critical point $P_{s}$ is hyperbolic and has a one-dimensional stable manifold and a two-dimensional unstable manifold. Secondly, we proved in \cite{CV} that the only possible asymptotic behaviours of solutions on the stable manifold correspond to either \begin{equation}\label{B1} \lim_{\tau\to-\infty}\Phi(\tau)=\infty \end{equation} or to \begin{equation}\label{B2} \lim_{\tau\to(\tau_{*})^{+}}\Phi(\tau)= 0\quad\mbox{with} \quad\tau_{*}>-\infty\,. \end{equation} We shall also recall later that (\ref{ODEtapas}) has an increasing Lyapunov function, and that this in particular guarantees the non-existence of periodic orbits. To start the shooting we first prove that there exists an invariant two-dimensional centre-stable manifold $\mathcal{V}_{+}$ locally defined near the point $p_{+}$. All the trajectories associated to $(D)$ whose initial data are contained in $\mathcal{V}_{+}$ converge to $p_{+}$ as $\tau\to\infty$. We can parametrise the set of trajectories in $\mathcal{V}_{+}$ by means of one real parameter $\nu$ taking values in some large interval. The behaviours (\ref{B1}) and (\ref{B2}) define two sets of values of $\nu$. We prove that for very large values of $\nu$ the corresponding trajectory satisfies (\ref{B1}).
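The dimension counts just mentioned follow from an elementary linearisation of (\ref{ODEtapas}) at $P_{s}$, which we sketch here for convenience. Writing $\Phi=1+\varphi$ and keeping only the terms that are linear in $\varphi$ gives
\[
\frac{d^{3}\varphi}{d\tau^{3}}=-3\varphi\,,
\]
so the characteristic equation is $\lambda^{3}=-3$, with roots
\[
\lambda_{1}=-3^{\frac{1}{3}}\,,\qquad
\lambda_{2,3}=\frac{3^{\frac{1}{3}}}{2}\left(1\pm i\sqrt{3}\right)\,.
\]
The single negative root yields the one-dimensional stable manifold, while the complex conjugate pair with positive real part yields the two-dimensional unstable manifold (cf. Proposition~\ref{Tapas1} below).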
On the contrary, if $\nu$ is very negative we show that there exists a $\tau_{\ast}=\tau_{\ast}(\nu)$ such that (\ref{B2}) holds. It turns out that the sets of values $\nu$ such that the corresponding trajectories satisfy either (\ref{B1}) or (\ref{B2}) are disjoint open sets. This implies the existence of values of $\nu$ for which the corresponding trajectory satisfies neither (\ref{B1}) nor (\ref{B2}). The final step is to show that the trajectories associated to such $\nu$ are globally defined for $\tau\in\mathbb{R}$ and that they satisfy \begin{equation}\label{B3} \left( \frac{1}{\Phi}+\Phi\right) +\left| \frac{d\Phi}{d\tau}\right| +\left| \frac{d^{2}\Phi}{d\tau^{2}}\right| \leq C\,,\quad\mbox{for any}\quad \tau\in\mathbb{R} \end{equation} for some $C>0$, and that \begin{equation}\label{B4} \lim_{\tau\to-\infty}\theta(\tau)=-\frac{\pi}{2} \,. \end{equation} The idea is that if (\ref{B3}) and (\ref{B4}) are satisfied we can use the fact that the dynamics of $(D)$ become close to the ones associated to the trajectories contained in the unstable manifold of $P_{s}$ for the system associated to (\ref{ODEtapas}), and the trajectories have no alternative but to approach $p_{-}$ as $\tau\to-\infty$. The most technical part of the paper is the proofs of (\ref{B3}) and of (\ref{B4}). These require showing that oscillatory behaviours with large amplitude for the solutions of $(D)$ as $\tau\to-\infty$ must have a decreasing amplitude for decreasing $\tau$ if neither (\ref{B1}) nor (\ref{B2}) take place. The key point is that the structure of oscillatory solutions can be identified by looking at the several asymptotic regimes of (\ref{S2E3}). There are, in particular, two very distinctive ones. For instance, the regime in which $|\xi|^{2/3}H$ is very large for very negative $\xi$ will be relevant in our analysis. In this case the behaviour of solutions is described by \begin{equation}\label{ecuacionminima1} \frac{d^{3} H}{d\xi^{3}}= - \xi^{2} \,.
\end{equation} This equation can be integrated explicitly, giving that, in such regions, $H$ behaves like a fifth order polynomial. The solutions of (\ref{ecuacionminima1}) are in fact a two-parameter family of polynomials, as we shall see. On the other hand, if $|\xi|^{2/3}H \sim0$ on bounded intervals, the dominant balance there is given by the equation \begin{equation}\label{ecuacionminima2} \frac{d^{3} H}{d\xi^{3}} = \frac{1}{H^{3}} \,. \end{equation} The analysis of (\ref{ecuacionminima2}) plays a crucial role in our proofs and was already studied in \cite{CV}. The possibility of alternating regions where either (\ref{ecuacionminima1}) or (\ref{ecuacionminima2}) dominates builds up a scenario where solutions with large oscillations exist: the bouncing regions of the oscillations are described by (\ref{ecuacionminima2}) and the maximum amplitude regions are close to solutions of (\ref{ecuacionminima1}). This phenomenon has already been observed for (\ref{ODEtapas}) in \cite{WilsonJones} and explored rigorously in \cite{CV}. In order to prove (\ref{B3}) and (\ref{B4}) we exploit this mechanism of oscillation. We argue by contradiction and assume first that (\ref{B3}) does not hold. This gives (after a number of technical lemmas) that there exists a sequence $\{\tau_{n}^{\ast}\}$ with $\lim_{n\to\infty}\tau_{n}^{\ast}=-\infty$ such that $\Phi(\tau_{n}^{\ast})$ is a local maximum and $\lim_{n\to\infty}\Phi(\tau_{n}^{\ast})=\infty$. We use that the oscillatory solutions with very large amplitude for very negative values of $\tau$ can be approximated, after a suitable rescaling, by a sequence of functions $|\xi|^{2/3}\mathcal{H}_{n}(\xi)$ where each $\mathcal{H}_{n}$ solves (\ref{ecuacionminima1}) in intervals $[\xi(\tau_{n+1}^{min}),\xi(\tau_{n}^{min})]$, the values $\tau_{n}^{min}$ being such that $\Phi(\tau_{n}^{min})$ is the minimum in $(\tau_{n}^{\ast},\tau_{n-1}^{\ast})$.
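For later reference we note that (\ref{ecuacionminima1}) can be integrated explicitly: three integrations give
\[
H(\xi)=-\frac{\xi^{5}}{60}+P(\xi)\,,
\]
with $P$ an arbitrary polynomial of degree at most two, that is, a quintic with fixed leading coefficient $-\frac{1}{60}$. The families of such polynomials that are relevant for the matching are identified in Appendix~\ref{sec:polynomials}.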
In particular, in such intervals $\mathcal{H}_{n}(\xi)$ are close to a fifth order polynomial solving (\ref{ecuacionminima1}). The matching between two consecutive such functions is done in the inner region where $\Phi$ and $\mathcal{H}_{n}$ become close to $0$; as it turns out, this inner region lies around $\tau_{n}^{min}$. As we have mentioned, the dynamics in such bouncing regions are dominated by (\ref{ecuacionminima2}) and the rigorous matching can be adapted from that performed for (\ref{ODEtapas}) (see \cite{CV}): the study of (\ref{ecuacionminima2}) reduces to a phase-plane analysis in which the bouncing can be encoded into the behaviour of a separatrix. This object attracts trajectories for increasing $\xi$, implying that its behaviour is generic. Transferring this behaviour to the functions $\mathcal{H}_{n}$ implies that in the outer region they behave as a polynomial with a double zero near $\xi(\tau_{n+1}^{min})$. This in particular reduces the family of polynomials that give the outer region around each $\tau_{n}^{\ast}$ to a one-parameter family. Moreover, this analysis allows us to obtain information on the relative size of consecutive maxima and minima, namely that the sequence of the maximum values decreases and that the sequence of minimum values increases (as $n \to \infty$), and these facts contradict the assumption that (\ref{B3}) does not hold. The paper is organised as follows. Section~\ref{sec:preliminaries} is divided into three preliminary parts. First, in Section~\ref{sec:summary} we give some results concerning (\ref{ODEtapas}), most of which are proved in \cite{CV}. In Section~\ref{sec:DS} we reformulate (\ref{S2E3}) as a four-dimensional dynamical system and reformulate Theorem~\ref{Main} in this setting. The third part is Section~\ref{sec:loc-analysis}, where we prove the existence of the centre-stable manifold around $p_{+}$.
Section~\ref{sec:3} is devoted to the analysis of the behaviours (\ref{B1}) and (\ref{B2}) for $(D)$; in Section~\ref{sec:stability} we show stability under small perturbations of solutions that satisfy either of these properties, and in Section~\ref{sec:classes} we give necessary conditions on solutions of (\ref{S2E3}) to satisfy either (\ref{B1}) or (\ref{B2}). With the analysis carried out up to here we can then prove in Section~\ref{sec:4} that there exist solutions on $\mathcal{V}_{+}$ that satisfy neither (\ref{B1}) nor (\ref{B2}). We continue by proving that these trajectories of $\mathcal{V}_{+}$ do satisfy (\ref{B3}) and (\ref{B4}). In order to do that, we first show in Section~\ref{sec:dynamics} that if (\ref{B3}) is not satisfied then the sequences $\{\tau_{n}^{\ast}\}$ and $\{\tau_{n}^{min}\}$, described above, are well defined. Second, in Section~\ref{sec:proof:osci} we obtain the contradictory results that $\{\Phi(\tau_{n}^{\ast})\}$ is decreasing and that $\{\Phi(\tau_{n}^{min})\}$ is increasing. This part is very technical and requires several steps by itself. Thus, in Section~\ref{sec:aux} we identify the scales of the outer region and the approximating polynomials near local maxima. This is based on the analysis of the solutions of (\ref{ecuacionminima1}) that is carried out in Appendix~\ref{sec:polynomials}. In Section~\ref{sec:rescale:H} we perform the appropriate scaling of the solutions under consideration and identify the range in which they are approximated by the polynomials. In this section we also prove that the approximating polynomials must have a double zero. This step requires the analysis of (\ref{ecuacionminima2}) given in Appendix~\ref{sec:summary:II} as well as the {\it matching lemma} given in Appendix~\ref{sec:control} (a result that has been adapted from \cite{CV}).
In Section~\ref{sec:formal}, with detailed information on the matching regions, we derive an (iterative) expression that relates the elements of the sequence of local maxima and another that relates the local minima; these expressions contradict the assumption that (\ref{B3}) is not satisfied. Finally, in Section~\ref{sec:6} we finish the proof of Theorem~\ref{Main}. To conclude this introduction, we recall that equations similar to (\ref{ODEtapas}) have been studied intensively; see \cite{CrasterMatar}, \cite{Eggers}, \cite{EggersII}, \cite{Hoc01}, \cite{MyersRev}, \cite{MyersSolidII}, \cite{MyersSolidI}, \cite{ODB97} and \cite{wilson}, to mention a few, where similar equations arise in several related physical situations. Rigorous results concerning such equations can also be found in \cite{BerettaHP} and, concerning travelling wave solutions, in \cite{BS00}, \cite{Boatto}, \cite{KH75}, \cite{Mich88} and \cite{Ren96}. It is interesting to note that many of these models yield higher order ODEs describing oscillatory fluid interfaces. We refer to \cite{CV}, where this aspect and related works are put into context. \section{Preliminaries}\label{sec:preliminaries} \subsection{A summary of results for (\ref{ODEtapas})}\label{sec:summary} We now summarise some properties of (\ref{ODEtapas}), most of which have been proved in \cite{CV} and will be used later in the proof of Theorem~\ref{Main}. It is convenient to rewrite (\ref{ODEtapas}) in the equivalent form \begin{equation}\label{S3E1} \frac{d\Phi}{d\tau}=W\,,\ \frac{dW}{d\tau}=\Psi\,,\ \frac{d\Psi}{d\tau}=\frac{1}{\Phi^{3}}-1\,. \end{equation} We then have the following result. \begin{proposition}\label{Tapas1} \begin{enumerate} \item There is a unique critical point for (\ref{S3E1}) in the domain $\{\Phi>0\,, \ W\in\mathbb{R}\,, \ \Psi\in\mathbb{R}\}$ given by \[ P_{s}=(\Phi,W,\Psi)=(1,0,0) \,. \] \item The point $P_{s}$ is hyperbolic.
The stable manifold of (\ref{ODEtapas}) at the point $P_{s}$ is tangent to the vector \[ v_{1}:=\left( \begin{array}[c]{c} 3^{-\frac{2}{3}}\\ -3^{-\frac{1}{3}}\\ 1 \end{array} \right) \] and the corresponding eigenvalue is $\lambda_{1}:=-3^{\frac{1}{3}}$. \item At $P_{s}$ there is a two-dimensional unstable manifold locally spanned by the eigenvectors $v_{2}:=(-3^{\frac{1}{6}}/6\,,\,3^{\frac{2}{3}}/6\,,\,1)^T$ and $v_{3}:=(3^{\frac{5}{6}}/6\,,\,3^{\frac{1}{6}}/2\,,\,0)^T$. The eigenvalues associated to the plane spanned by $\{v_{2},v_{3}\}$ are $\lambda_{2}:=3^{\frac{1}{3}}(1+i\,3^{\frac{1}{2}})/2$ and $\lambda_{3}=\overline{\lambda_{2}}$. \item The trajectories associated to (\ref{S3E1}) that are contained in the stable manifold and satisfy $(\Phi,W,\Psi) \not\equiv P_{s}$ behave in one of the following two ways for decreasing $\tau$: either they are defined for all $\tau\in\mathbb{R}$ and satisfy \begin{equation}\label{P1} \lim_{\tau\to-\infty}(\Phi,W,\Psi)=(\infty,-\infty,\infty) \end{equation} or, alternatively, there exists a $\tau_{\ast}>-\infty$ such that \begin{equation}\label{P2} \lim_{\tau\to(\tau_{\ast})^{+}}\Phi(\tau) =0 \,. \end{equation} Moreover, the points of the stable manifold associated to $P_{s}$ with $\Phi>1$ satisfy (\ref{P1}) and those with $\Phi<1$ satisfy (\ref{P2}). \item If there exist $\tau_{0}\in \mathbb{R}$ and $C_{0}>1$ such that \[ (\Phi(\tau),W(\tau),\Psi(\tau)) \in \left\{(\Phi,W,\Psi)\in\mathbb{R}^{3}: \ \frac{1}{C_{0}} \leq\Phi\leq C_{0},\ -C_{0}\leq W\leq C_{0}\ ,\ -C_{0}\leq\Psi\leq C_{0}\right\} \] for all $\tau\leq \tau_{0}$, then \begin{equation}\label{S3E3b} \lim_{\tau\to-\infty}(\Phi(\tau),W(\tau),\Psi(\tau)) =P_{s} \end{equation} and the corresponding trajectory is contained in the unstable manifold of $P_{s}$. \end{enumerate} \end{proposition} \begin{proof}[Proof] All the statements of this proposition have been already proved in \cite{CV} except for \textit{(v)}.
In order to prove this, we use an argument similar to the one used to prove Lemma~2.4 in \cite{CV}. We first recall that there exists an increasing Lyapunov functional $E$ associated to (\ref{ODEtapas}): \[ E:=\Psi W + \frac{1}{2\Phi^{2}} + \Phi\,,\quad\frac{dE}{d\tau}=\Psi^{2}\geq0\,. \] This and the assumptions made imply that \begin{equation}\label{finite:integral} \int_{-\infty}^{\tau_{0}}\Psi^{2}(s) ds<\infty\,. \end{equation} Using (\ref{S3E1}), it then follows that $\lim_{\tau\to-\infty}\Psi(\tau)=0$. Indeed, arguing by contradiction, one can construct an $\varepsilon_{0}>0$ and a sequence $\tau_{n}\to-\infty$ such that either $\Psi(\tau_{n})\geq \varepsilon_{0}$ or $\Psi(\tau_{n})\leq -\varepsilon_{0}$. Then (\ref{S3E1}) implies that $d\Psi/d\tau\geq -1$, so either $\Psi(\tau)\geq\varepsilon_{0}+(\tau_{n}-\tau)$ for $\tau>\tau_{n}$ or $\Psi(\tau)\leq -\varepsilon_{0}+(\tau_{n}-\tau)$ for $\tau<\tau_{n}$. But this contradicts (\ref{finite:integral}) since either $\int_{\tau_{n}}^{\tau_{n}+\varepsilon_{0}/2}(\Psi(\tau))^{2}d\tau\geq \varepsilon_{0}^{3}/8$ or $\int_{\tau_{n}-\varepsilon_{0}/2}^{\tau_{n}}(\Psi(\tau))^{2}d\tau\geq \varepsilon_{0}^{3}/8$ for all $n$. Now the second equation in (\ref{S3E1}) implies that $W$ remains approximately constant as $\tau\to-\infty$ in any finite interval of arbitrary fixed length $L$. Therefore, if there is a subsequence $\{\tau_{n}\}$ with $\lim_{n\to\infty}\tau_{n}=-\infty$ satisfying $\lim_{n\to\infty}W(\tau_{n})\neq 0$, we obtain that $\inf_{\tau\in\left[ \tau_{n},\tau_{n}+L\right] }|W(\tau)| \geq\varepsilon_{0}>0$ for $n$ sufficiently large.
It then follows from the first equation in (\ref{S3E1}) that the condition $\frac{1}{C_{0}}\leq\Phi\leq C_{0}$ fails if $L$ is assumed to be sufficiently large (integration on the interval $(\tau_{n},\tau_{n}+L)$ for sufficiently large $n$ gives that $|\Phi(\tau_{n}+L)-\Phi(\tau_{n})|>\varepsilon_{0} L>0$, but $|\Phi(\tau_{n}+L)-\Phi(\tau_{n})|<(C_{0}^{2}-1)/C_{0}$ for all $n$ and $L$). Therefore $\lim_{\tau\to-\infty}W(\tau) =0$. Using the last equation in (\ref{S3E1}) as well as the fact that $\lim_{\tau\to-\infty}\Psi(\tau)=0$ it then follows in a similar way that $\lim_{\tau\to-\infty}\Phi(\tau)=1$. This gives (\ref{S3E3b}) and the result follows. \end{proof} The next lemma gives the detailed asymptotic behaviour in both cases (\ref{P1}) and (\ref{P2}): \begin{lemma}\label{Tapas2} The trajectories associated to (\ref{S3E1}) that are contained in the stable manifold and satisfy $(\Phi,W,\Psi) \neq P_{s}$ satisfy that either they are defined for all $\tau\in\mathbb{R}$ and (\ref{P1}) holds with \begin{equation} \lim_{\tau\to-\infty}\frac{\Phi(\tau)}{\tau^{3}}=-\frac{1}{6}\,, \label{P1:prime} \end{equation} or, alternatively, there exists a $\tau_{\ast}>-\infty$ such that (\ref{P2}) holds with \begin{equation} \lim_{\tau\to\tau_{\ast}^{+}}\frac{\Phi(\tau) }{(\tau-\tau_{\ast})^{\frac{3}{4}}} =\left( \frac{64}{15}\right)^{\frac{1}{4}}\,. \label{P2:prime} \end{equation} \end{lemma} \begin{proof}[Proof] That either (\ref{P1}) or (\ref{P2}) holds is just the statement \textit{(iv)} of Proposition~\ref{Tapas1}. Then, (\ref{P1:prime}) follows from (\ref{P1}) by integrating the equation. Indeed, for all $\Phi_{0}>0$ there exists a $\tau_{0}$ with $|\tau_{0}|$ large enough such that for all $\tau<\tau_{0}$ \[ \Phi(\tau)>\Phi_{0}>0\,, \ \frac{d\Phi(\tau_{0})}{d\tau}<-\Phi_{0}<0\,, \ \frac{d^{2}\Phi(\tau_{0})}{d\tau^{2}}>\Phi_{0}>0\,. \] Thus \[ -1<\frac{d^{3}\Phi}{d\tau^{3}}<\frac{1}{\Phi_{0}^{3}}-1\,.
\] and integrating this expression for $\tau<\tau_{0}<0$ we obtain \[ \left( \frac{1}{\Phi_{0}^{3}}-1\right) \frac{(\tau-\tau_{0})^{3}}{6} \, < \, \Phi(\tau) \, < \, \Phi(\tau_{0}) - \frac{d\Phi(\tau_{0})}{d\tau} (\tau_{0}-\tau) +\frac{d^{2}\Phi(\tau_{0})}{d\tau^{2}} \frac{(\tau-\tau_{0})^{2}}{2} -\frac{(\tau-\tau_{0})^{3}}{6}\,, \] for $|\tau_{0}|$ large enough. Then dividing by $-\tau^{3}/6$ and taking the limit $\tau\to-\infty$ implies (\ref{P1:prime}), since $\tau_{0}$ can be made arbitrarily negative and $\Phi_{0}$ arbitrarily large. In order to prove (\ref{P2:prime}) we use the phase-plane analysis of Appendix~\ref{sec:summary:II}. We employ the transformation (\ref{SEphi0-trans}) with $\zeta$ replaced by $\tau$ for (\ref{S3E1}) (see also \cite{CV}), which gives the system \begin{equation} \label{Phi-z-system}\frac{d\Phi}{dz}=u\,\Phi\, , \quad\frac{du}{dz} =v+\frac{1}{3}u^{2}\,, \quad\frac{dv}{dz}=1+\frac{5}{3}u\,v-\Phi^{3}\,, \end{equation} which corresponds to (\ref{phi0system}) with $\Phi=0$ in the last equation. For further reference, the flow field of the phase plane of (\ref{phi0system}) is also depicted in Figure~\ref{phase-plane} where, in particular, the direction of the field, the critical point $(u_{e},v_{e})$ and the attractive separatrix $v=\bar{v}(u)$ are shown. Using Lemma~\ref{tozero-bwds} and the behaviour of trajectories of the system (\ref{phi0system}) entering the only critical point $(u_{e},v_{e})$ as $z\to -\infty$, it is easy to show that if \begin{equation} \label{Hyp1} \lim_{z\to -\infty}\Phi(z)= 0\,, \ \mbox{and} \ \|(u,v)\| \ \mbox{is uniformly bounded as} \ z\to -\infty\,, \end{equation} then, by a bootstrap argument, the trajectory $(u,v)$ remains close to $(u_{e},v_{e})$; in particular, the estimate (\ref{W3E6e}) holds for $-z$ large enough. That (\ref{Hyp1}) holds and that $\tau_{\ast}=\lim_{z\to-\infty}\tau(z)$ are consequences of the proof of (\ref{P2}) in \cite{CV} and the transformation (\ref{SEphi0-trans}).
Using the first equation in (\ref{Phi-z-system}) one obtains that there exist positive constants $C_{1}$ and $C_{2}$ such that \begin{equation}\label{phi:0:esti} e^{C_{2}e^{\lambda z}} e^{u_{e}(z-z_{0})}\Phi(z_{0}) <\Phi(z)<e^{C_{1}e^{\lambda z}} e^{u_{e}(z-z_{0})}\Phi(z_{0})\,, \end{equation} with $\lambda=\mbox{Re}(\lambda_{+})=\frac{7}{2}\,15^{-\frac{1}{3}}>0$, for all $z<z_{0}$, where $z_{0}<0$ and $|z_{0}|$ is sufficiently large. This in particular implies that \begin{equation}\label{phi:0:limit} \lim_{z\to-\infty}\Phi(z) e^{-u_{e}z} =\Phi(z_{0})e^{-u_{e}z_{0}}\,. \end{equation} Using now the last equation of (\ref{SEphi0-trans}) and (\ref{phi:0:esti}) one can infer that, considering $\tau$ as a function of $z$, \[ \lim_{z\to-\infty} (\tau(z)-\tau_{\ast})^{\frac{3}{4}} e^{-u_{e}z} =\left( \frac{15}{64}\right)^{\frac{1}{4}}\Phi(z_{0})e^{-u_{e}z_{0}}\,, \] (where we use $u_{e}=(5/9)^{\frac{1}{3}}$ to compute the explicit coefficient). Finally, this and (\ref{phi:0:limit}) imply (\ref{P2:prime}). \end{proof} \subsection{A dynamical systems approach}\label{sec:DS} As anticipated in the Introduction, in this section we reformulate the main result in terms of a dynamical systems approach. We first transform (\ref{S2E3}) into a suitable system of four autonomous ODEs, and (\ref{S2E7}) into its corresponding boundary conditions. Since we are interested in solutions for which $|\xi|^{\frac{2}{3}}H$ remains bounded for all $\xi$, it is convenient to introduce the following change of variables \begin{equation} H(\xi) =\frac{1}{(\xi^{2}+1)^{\frac{1}{3}}}\Phi(\tau) \label{S2E8a} \end{equation} where the variable $\tau$ is defined by means of \begin{equation} (\xi^{2} + 1)^{\frac{4}{9}}d\xi= d\tau\,,\quad\tau = \int_{0}^{\xi}(\eta^{2} +1)^{\frac{4}{9}}d\eta\,.
\label{S2E8} \end{equation} With this transformation, we have that \begin{align} \frac{dH}{d\xi} & = -\frac{2}{3} \frac{\xi}{(\xi^{2}+1)^{\frac{4}{3}}}\Phi\, +\,(\xi^{2}+1)^{\frac{1}{9}}\frac{d\Phi}{d\tau}\,, \label{1st:dH}\\ \frac{d^{2}H}{d\xi^{2}} & = -\frac{2}{3} \frac{1-\frac{5}{3}\xi^{2}}{(\xi^{2}+1)^{\frac{7}{3}}}\Phi -\frac{4}{9}\frac{\xi}{(\xi^{2}+1)^{\frac{8}{9}}}\frac{d\Phi}{d\tau} \, +\, (\xi^{2}+1)^{\frac{5}{9}}\frac{d^{2}\Phi}{d\tau^{2}}\,, \label{2nd:dH} \end{align} and (\ref{S2E3}) becomes \begin{equation}\label{S2E9} \frac{d^{3}\Phi}{d\tau^{3}} =\frac{1}{\Phi^{3}}-\frac{\xi(\tau)^{2}+ a}{\xi(\tau)^{2}+1}-F(\tau)\,, \end{equation} with \begin{equation}\label{S2E9bis} F(\tau) =\frac{16}{3}\frac{\xi}{(\xi^{2}+1)^{\frac{10}{3}}} \left(1-\frac{14}{9}\frac{\xi^{2}}{ \xi^{2}+1} \right) \Phi + \frac{1}{(\xi^{2}+1)^{\frac{17}{9}}}\left( \frac{208}{81}\frac{\xi^{2}}{\xi^{2}+1} -\frac{10}{9}\right) \frac{d\Phi}{d\tau} +\frac{2}{3}\frac{\xi}{(\xi^{2}+1)^{\frac{13}{9}}} \frac{d^{2}\Phi}{d\tau^{2}}\,, \end{equation} where $\xi$ is given as a function of $\tau$ by means of (\ref{S2E8}). In other words, we use $\tau$ as independent variable, while $\xi$ becomes a dependent one, making the system autonomous. It is convenient to transform $\xi$ further into a new variable that takes values in a compact set, namely, we define the variable $\theta$ by \begin{equation} \xi=\tan\theta\,,\quad\theta\in\left[ -\frac{\pi}{2},\frac{\pi}{2}\right] \,. \label{xi} \end{equation} Finally, we can reformulate (\ref{S2E8}) and (\ref{S2E9})-(\ref{S2E9bis}) as \begin{align} \frac{d\Phi}{d\tau} & =W\,,\label{compact1}\\ \frac{dW}{d\tau} & =\Psi\,,\label{compact2}\\ \frac{d\Psi}{d\tau} & =\frac{1}{\Phi^{3}}-1-(a-1)(\cos\theta)^{2} -\left[ \left( \frac{16}{3}\sin\theta-\frac{224}{27}(\sin\theta)^{3}\right) (\cos\theta)^{\frac{17}{3}}\,\Phi\right. \nonumber\\ & \left.
+\left( \frac{208}{81}(\sin\theta)^{2}-\frac{10}{9}\right) (\cos\theta)^{\frac{34}{9}}\,W+\frac{2}{3}\sin\theta(\cos\theta)^{\frac{17}{9}}\,\Psi \right] \,,\label{compact3}\\ \frac{d\theta}{d\tau} & =(\cos\theta)^{\frac{26}{9}} \,, \label{compact4} \end{align} that has critical points \begin{equation}\label{stst} p_{-}:=\left( 1,0,0,-\frac{\pi}{2}\right) \quad\mbox{and} \quad p_{+}:=\left( 1,0,0, \frac{\pi}{2}\right)\,. \end{equation} We aim to prove the following theorem: \begin{theorem}\label{hetcon} There exists a heteroclinic connection of the system (\ref{compact1})-(\ref{compact4}) between the points $p_{-}$ and $p_{+}$ given in (\ref{stst}). \end{theorem} We notice that Theorem~\ref{Main} is just a corollary of Theorem~\ref{hetcon}; this is implied by (\ref{S2E7}) and (\ref{S2E8a}). We point out that the system (\ref{compact1})-(\ref{compact4}) reduces to (\ref{S3E1}) on the subspaces $\theta=-\pi/2$ and $\theta=+\pi/2$. We shall take advantage of this fact in some of the arguments that follow. \subsection{Existence of the centre-stable manifold}\label{sec:loc-analysis} We now proceed to describe in detail the construction of a centre-stable manifold at $p_{+}$ that we denote by $\mathcal{V}_{+}$. Let us first define a set of transformations $F_{\tau}(x),\ \tau\in\mathbb{R}$ for any given $x\in\mathbb{R}^{+}\times\mathbb{R}^{2}\times\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ by means of \begin{equation}\label{C1} (\Phi(\tau),W(\tau),\Psi(\tau),\theta(\tau))=F_{\tau}(x) \end{equation} where $(\Phi,W,\Psi,\theta)$ solves (\ref{compact1})-(\ref{compact4}) with $(\Phi(0),W(0),\Psi(0),\theta(0))=x$. Classical ODE theory ensures that the family of transformations $F_{\tau}(\cdot)$ is well defined in some suitable interval $\tau\in(\tau_{1}(x),\tau_{2}(x))$. We have the following result.
\begin{proposition}\label{set:shooting} There exists a two-dimensional $C^{1}$ manifold $\mathcal{V}_{+}$ contained in the ball $B_{\delta}(p_{+})\cap(\mathbb{R}^{3}\times[-\pi/2,\pi/2])$ for some $\delta>0$ sufficiently small, tangent to the subspace spanned by the vectors \begin{equation} \tilde{v}_{1}=\left( \begin{array}[c]{c} 3^{-\frac{2}{3}}\\ -3^{-\frac{1}{3}}\\ 1\\ 0 \end{array} \right) \,,\quad\tilde{v}_{4}=\left( \begin{array}[c]{c} 0\\ 0\\ 0\\ 1 \end{array} \right) \, . \label{stable:eigenvec} \end{equation} If $x\in\mathcal{V}_{+}$, the flow $F_{\tau}$ defined in (\ref{C1}) is defined for any $\tau>0$ and \begin{equation} F_{\tau}(x) \in\mathcal{V}_{+}\quad\mbox{for any} \quad\tau\geq0 \label{mansta} \end{equation} with \begin{equation} \lim_{\tau\to\infty}F_{\tau}(x) =p_{+}\,. \label{limman} \end{equation} \end{proposition} \begin{proof}[Proof] In order to apply standard results it is convenient to extend the range of values of $\theta$ in which the system (\ref{compact1})-(\ref{compact4}) is defined, replacing $\cos\theta$ by $|\cos\theta|$. The resulting system can be defined in a neighbourhood of $p_{+}$ and the right hand side of (\ref{compact1})-(\ref{compact4}) is in $C^{\frac{17}{9}}(\mathbb{R}^{3}\times(-\pi/2,\pi/2))$. Since $\frac{17}{9}>1$ we can apply the results in \cite{Gallay}. In our setting, this means the existence of a two-dimensional manifold $\mathcal{V}_{+}$ of class $C^{\frac{17}{9}}$, tangential to the plane spanned by $\{\tilde{v}_{1}, \tilde{v}_{4}\}$ at $p_{+}$, that remains invariant under the flow $F_{\tau}$ if one can prove that the corresponding trajectories on this manifold remain inside a ball $B_{\delta}(p_{+})$ for some small $\delta>0$. Let us show that $\mathcal{V}_{+}$ is invariant. Consider a four-dimensional cube $\mathcal{Q}=[1-\delta/2,1+\delta/2]\times[-\delta/2,\delta/2]^{2}\times [\pi/2-\delta/2,\pi/2]$ contained in a ball $B_{\delta}(p_{+})$.
The cube has four pairs of parallel three-dimensional sides. One pair has normal direction $\tilde{v}_{1}$, another pair has normal direction $\tilde{v}_{4}$, and each side of the other two pairs contains a plane parallel to the one spanned by $\tilde{v}_{1}$ and $\tilde{v}_{4}$. The set $\mathcal{Q}\cap\mathcal{V}_{+}$ gives four $C^{1}$ curves and, due to the tangency of $\mathcal{V}_{+}$ to the plane spanned by $\tilde{v}_{1}$ and $\tilde{v}_{4}$, two of them are contained in the parallel sides of the cube that are orthogonal to $\tilde{v}_{1}$, and the other two are contained in parallel subspaces orthogonal to $\tilde{v}_{4}$. More specifically, one of the latter is contained in the subspace $\mathbb{R}^{3}\times\{\theta=\pi/2\}$. Notice that $\mathcal{V}_{+}\cap(\mathbb{R}^{3}\times\{\theta=\pi/2\})$ gives a portion of the stable manifold associated to (\ref{S3E1}) for $\delta$ small enough. Therefore, if $x\in\mathcal{V}_{+}\cap(\mathbb{R}^{3}\times\{\theta=\pi/2\})$, $F_{\tau}(x)\in B_{\delta}(p_{+})$ for arbitrary values of $\tau>0$. On the other hand, for the curve contained in a subspace with constant $\theta<\pi/2$ and orthogonal to $\tilde{v}_{4}$, we use the fact that $\theta$ is increasing; thus trajectories could only escape the cube through the other boundaries that intersect $\mathcal{V}_{+}$. But the points $x$ on the other two boundary curves satisfy $\tilde{v}_{1}\cdot(x-p_{+})=\pm c\delta$ for some $c>0$ (small or at most of order one). We then use that \[ \left. \frac{d}{d\tau} \left( \frac{\langle\tilde{v}_{1}\cdot(F_{\tau}(x)-p_{+})\rangle^{2}}{2} \right) \right|_{\tau=0} =\langle\tilde{v}_{1}\cdot(x-p_{+})\rangle\left\langle \tilde{v}_{1}\cdot\left. \frac{dF_{\tau}(x) }{d\tau} \right|_{\tau=0}\right\rangle \,.
\] Since the manifold $\mathcal{V}_{+}$ is tangent to the plane spanned by $\tilde{v}_{1}$ and $\tilde{v}_{4}$, it follows, using \textit{(ii)} in Proposition~\ref{Tapas1} as well as (\ref{stable:eigenvec}), that \begin{equation}\label{C2} \left. \frac{d}{d\tau} \left( \frac{\langle\tilde{v}_{1}\cdot(F_{\tau}(x)-p_{+})\rangle^{2}}{2} \right) \right|_{\tau=0} =-3^{\frac{1}{3}}\langle\tilde{v}_{1}\cdot(x-p_{+})\rangle^{2} + o\left( \langle\tilde{v}_{1}\cdot(x-p_{+}) \rangle^{2}\right) \,. \end{equation} Therefore, if $\delta$ is sufficiently small this quantity is negative, the trajectories in $\mathcal{V}_{+}$ remain always in the ball $B_{\delta}(p_{+})$, and (\ref{mansta}) follows. It only remains to show (\ref{limman}). To this end, we observe that (\ref{compact4}) implies $\lim_{\tau\to\infty}\theta(\tau)=\pi/2$. Using (\ref{C2}) we then obtain (\ref{limman}). \end{proof} For further reference, let us denote by $\Pi\subset\mathbb{R}^{4}$ the affine plane spanned by the stable eigenvectors at $p_{+}$, namely, \begin{equation}\label{parPlane} \Pi=\{ w\in\mathbb{R}^{4}:\ w-p_{+}= \nu\tilde{v}_{1}+\sigma\tilde{v}_{4}\,,\ \nu, \sigma\in\mathbb{R} \}\,, \end{equation} with $\tilde{v}_{1}$ and $\tilde{v}_{4}$ as in (\ref{stable:eigenvec}). Every $w\in\Pi$ can be identified by its coordinates; thus we write $w=(\nu,\sigma)$ and $p_{+}=(0,0)$ with this set of coordinates. Since $\mathcal{V}_{+}$ is tangent to $\Pi$ at $p_{+}$, there exists a local differentiable parametrisation of $\mathcal{V}_{+}$. \begin{lemma}[Local parametrisation of $\mathcal{V}_{+}$]\label{trivia} Let $\Pi$ be given by (\ref{parPlane}). There exist a $\delta_{0}>0$ and a differentiable mapping $\Lambda:\Pi\to\mathbb{R}^{4}$ that maps a neighbourhood of $(0,0)$ in $\Pi$ into $\mathcal{V}_{+}\cap B_{\delta_{0}}(p_{+})\cap\{\theta\leq\pi/2\}$. Moreover, $\partial_{\nu}\Lambda(0,0)$, $\partial_{\sigma}\Lambda(0,0)\in\Pi$.
\end{lemma} \section{Analysis of the behaviours (\ref{B1}) and (\ref{B2})}\label{sec:3} \subsection{Stability}\label{sec:stability} We now prove that both asymptotic behaviours (\ref{B1}) and (\ref{B2}) represent two disjoint open sets of solutions of (\ref{compact1})-(\ref{compact4}). More precisely, we have the following results: \begin{lemma}\label{continfinity} Suppose that $F_{\tau}(x)$ is a solution of (\ref{compact1})-(\ref{compact4}) with $x\in\mathbb{R}^{+}\times\mathbb{R}^{2} \times(-\pi/2,\pi/2)$. Let us also assume that for such a solution $\lim_{\tau\to-\infty}\Phi(\tau)=\infty$. Then, there exists a $\delta=\delta(x)>0$ sufficiently small such that for any $y\in B_{\delta}(x) \cap(\mathbb{R}^{+}\times\mathbb{R}^{2}\times(-\pi/2,\pi/2))$, $F_{\tau}(y) = (\tilde{\Phi}(\tau), \tilde{W}(\tau),\tilde{\Psi}(\tau),\tilde{\theta}(\tau))$ satisfies \begin{equation}\label{B5} \lim_{\tau\to-\infty}\tilde{\Phi}(\tau) =\infty\,. \end{equation} \end{lemma} \begin{proof}[Proof] In order to prove the result it is convenient to use the original equation (\ref{S2E3}), which is equivalent in the set $\mathbb{R}^{+}\times\mathbb{R}^{2}\times(-\pi/2,\pi/2)$ to the system (\ref{compact1})-(\ref{compact4}) by means of the change of variables (\ref{S2E8a}), (\ref{S2E8}). We first recall that (\ref{compact4}) implies that $\theta\to-\pi/2$ as $\tau\to-\infty$. Therefore, by (\ref{xi}), $\lim_{\tau\to-\infty}\xi=-\infty$. On the other hand, our hypothesis on $\Phi$ as well as (\ref{S2E8a}) and (\ref{S2E8}) imply that \begin{equation}\label{B7} \lim_{\xi\to-\infty}|\xi|^{\frac{2}{3}} H(\xi) = \infty\,, \end{equation} and this and (\ref{S2E3}) yield the existence of a $\xi_{0}=\xi_{0}(x)<0$ with $|\xi_{0}|$ large enough such that \[ \frac{d^{3}H}{d\xi^{3}}\leq-\frac{\xi^{2}}{2} \quad \mbox{for all}\quad \xi\leq\xi_{0}\,. \] Integrating this expression gives \begin{align*} \frac{d^{2}H(\xi)}{d\xi^{2}} & \geq-\frac{\xi^{3}}{6} + \frac{\xi_{0}^{3}}{6} +\frac{d^{2}H(\xi_{0})}{d\xi^{2}}\,,\\ \frac{dH(\xi)}{d\xi} & \leq-\frac{\xi^{4}}{24} + \frac{\xi_{0}^{4}}{24} + \left( \frac{d^{2}H(\xi_{0})}{d\xi^{2}} +\xi_{0}^{3}\right) (\xi-\xi_{0}) + \frac{dH(\xi_{0})}{d\xi}\,. \end{align*} Therefore, there exists a $\xi_{1}=\xi_{1}(x) <0$ with $|\xi_{1}|$ large enough such that \begin{equation}\label{B7a} \frac{d^{2}H(\xi)}{d\xi^{2}} >0\,,\quad \frac{dH(\xi)}{d\xi}<0\quad\mbox{for}\quad \xi\leq\xi_{1} \,. \end{equation} We assume, without loss of generality, that $\xi_{1}^{2}+a>0$ by taking $|\xi_{1}|$ even larger if necessary. Now for $F_{\tau}(y)=(\tilde{\Phi}(\tau),\tilde{W}(\tau),\tilde{\Psi}(\tau), \tilde{\theta}(\tau))$ with $y\in B_{\delta}(x)\cap(\mathbb{R}^{+}\times\mathbb{R}^{2}\times(-\pi/2,\pi/2))$ we define $\tilde{H}(\xi)=(\xi^{2}+1)^{-1/3}\tilde{\Phi}(\tau)$. Since the changes of variables (\ref{S2E8a}) and (\ref{S2E8}) are smooth, it follows, using (\ref{B7}) and (\ref{B7a}) and standard continuous dependence arguments for ODEs, that \begin{equation}\label{B8} \tilde{H}(\xi_{1}) \geq\frac{2^{\frac{1}{3}}}{(\xi_{1}^{2}+a)^{\frac{1}{3}}}\,, \quad\frac{d^{2}\tilde{H}(\xi_{1})}{d\xi^{2}} >0\,, \quad\frac{d\tilde{H}(\xi_{1})}{d\xi} <0\,. \end{equation} Integration of (\ref{S2E3}) for the unknown $\tilde{H}$ and (\ref{B8}) imply that for all $\xi\leq \xi_{1}$ \begin{equation}\label{B9} \tilde{H}(\xi)\geq\frac{2^{\frac{1}{3}}}{(\xi_{1}^{2}+a)^{\frac{1}{3}}} +\int_{\xi}^{\xi_{1}}\int_{s_{1}}^{\xi_{1}}\int_{s_{2}}^{\xi_{1}} (s_{3}^{2}+a) ds_{3} ds_{2}ds_{1} -\int_{\xi}^{\xi_{1}} \int_{s_{1}}^{\xi_{1}}\int_{s_{2}}^{\xi_{1}} \frac{ds_{3}}{(\tilde{H}(s_{3}))^{3}} ds_{2} ds_{1} \,. \end{equation} Suppose that \begin{equation}\label{F1} \tilde{H}(s) \geq\frac{2^{\frac{1}{3}}}{(s^{2}+a)^{\frac{1}{3}}} \quad\mbox{for}\quad\xi\leq s\leq\xi_{1}\,.
\end{equation} Then it would follow from (\ref{B9}) that \begin{equation}\label{F2} \tilde{H}(\xi) \geq\frac{ 2^{\frac{1}{3}}}{(\xi^{2}+a)^{\frac{1}{3}}}+ \frac{1}{2}\int_{\xi}^{\xi_{1}}\int_{s_{1}}^{\xi_{1}}\int_{s_{2}}^{\xi_{1}} (s_{3}^{2}+a)ds_{3}ds_{2}ds_{1} \end{equation} where we use that $\xi^{2}\geq\xi_{1}^{2}$. A continuation argument then extends the inequality (\ref{F1}), and with it (\ref{F2}), to all $\xi\leq\xi_{1}$. Since the integral term on the right-hand side of (\ref{F2}) tends to infinity as $\xi\to-\infty$, we obtain (\ref{B5}) as well. \end{proof} \begin{lemma}\label{contzero} Suppose that $F_{\tau}(x)$ is a solution of (\ref{compact1})-(\ref{compact4}) with $x\in\mathbb{R}^{+}\times\mathbb{R}^{2}\times(-\pi/2,\pi/2)$. Let us also assume that there exists a $\tau_{\ast}>-\infty$ such that $\lim_{\tau\to(\tau_{\ast})^{+}}\Phi(\tau)=0$. Then, there exists a $\delta=\delta(x)>0$ sufficiently small such that for any $y\in B_{\delta}(x)\cap(\mathbb{R}^{+}\times\mathbb{R}^{2}\times(-\pi/2,\pi/2))$ there exists a $\tilde{\tau}_{\ast}>-\infty$ such that $F_{\tau}(y)=(\tilde{\Phi}(\tau),\tilde{W}(\tau),\tilde{\Psi}(\tau),\tilde{\theta}(\tau))$ satisfies \[ \lim_{\tau\to(\tilde{\tau}_{\ast})^{+}}\tilde{\Phi}(\tau)=0\,. \] \end{lemma} \begin{proof}[Proof] As in the previous proof, it is more convenient to use the original formulation (\ref{S2E3}). We again use the smooth transformations (\ref{S2E8a}) and (\ref{S2E8}) to translate the results between either formulation. Thus let $H$ be the solution of (\ref{S2E3}) associated to $F_{\tau}(x)$. Let also $\xi_{\ast}$ be defined by $\tau_{\ast}=\int_{0}^{\xi_{\ast}}(\eta^{2}+1)^{\frac{4}{9}}d\eta$. We observe that $\tau_{\ast}>-\infty$ implies that $\xi_{\ast}>-\infty$, and the hypothesis on $\Phi$ becomes \begin{equation}\label{H:2:zero} \lim_{\xi\to(\xi_{\ast})^{+}} H(\xi)=0\,.
\end{equation} Thus in regions close to $\xi_{\ast}$ we expect the solutions to be described by (\ref{ecuacionminima2}), and we employ the change of variable (cf. Appendix~\ref{sec:summary:II}, (\ref{ecuacionminima}) and (\ref{SEphi0-trans})), namely, \begin{equation}\label{V1E1} \frac{dH}{d\xi} = H^{-\frac{1}{3}}\,u \,, \quad \frac{d^{2}H}{d\xi^{2}}=H^{-\frac{5}{3}}\,v \,, \quad H(z)= H(\xi)\,,\quad \xi=\Omega(z) \end{equation} where $\Omega(z)$ is defined by means of \begin{equation} z=-\int_{\Omega(z)}^{0}\frac{ds}{(H(s))^{\frac{4}{3}}} \label{V1E2}\,. \end{equation} Then, $u(z)$ and $v(z)$ are defined for any $z>z_{\ast}$, where $z_{\ast}$ is given by $\xi_\ast=\Omega(z_\ast)$. Notice that $|z_{\ast}|$ may or may not be finite. Moreover, $(H,u,v)$ satisfy \begin{equation} \label{V1E3} \frac{dH}{dz}=u\,H\,,\quad \frac{du}{dz}=v+\frac{u^{2}}{3}\,, \quad \frac{dv}{dz}=1 +\frac{5}{3}u\,v -(\Omega^{2}+a) H^{3} \end{equation} where all functions, including $\Omega$, are functions of $z$. The hypothesis on $\Phi$ translates into \[ \lim_{z\to(z_{\ast})^{+}}((\Omega(z))^{2}+a)(H(z))^{3}=0\,. \] The phase-plane analysis associated to (\ref{V1E3}) with $H(z)\equiv 0$ is included in Appendix~\ref{sec:summary:II}. Relevant to the current analysis are Lemma~\ref{separa} (where $v=\bar{v}(u)$ is defined) and Lemma~\ref{tozero-bwds}, which describes the overall flow. We claim that there is a sequence $\{z_{n}\}$ such that $z_{n}\to(z_{\ast})^{+}$ as $n\to\infty$ and that $(u(z_{n}),v(z_{n}))\in \{(u,v):\ u>0\,,\ v<0\}$ for all $n$ large enough. Before we prove this we note that for any sequence $\{z_{n}\}$ such that $z_{n}\to(z_{\ast})^{+}$ as $n\to\infty$, the trajectory $(u(z),v(z))$ must enter the half-plane $\{(u,v):\ u>0\}$ at some $z<z_{n}$ if $n$ is large enough. Indeed, otherwise the first equation in (\ref{V1E3}) implies that $dH(\xi)/d\xi\leq 0$ for all $\xi$ near $\xi_{\ast}$ and this contradicts (\ref{H:2:zero}).
Let us now prove that we can select such a sequence and that it also satisfies $v(z_{n})<0$ for all $n$ large enough. Let $\{z_{n}\}$ be such that $z_{n}\to(z_{\ast})^{+}$ as $n\to\infty$ and suppose that $v(z_{n}) = 0$. Then, the third equation in (\ref{V1E3}) implies that \[ \frac{dv}{dz}(z_{n})= 1-((\Omega(z_{n}))^{2}+a)(H(z_{n}))^{3} \] and since the last term converges to zero as $n\to\infty$, it follows that $v(z)$ becomes negative for some $z<z_{n}$ close to $z_{n}$ for $n$ large enough. Thus we can construct another sequence $\{\hat{z}_{n}\}$ with $\hat{z}_{n}<z_{n}$, $\hat{z}_{n}\to(z_{\ast})^{+}$ as $n\to\infty$ and such that $v(\hat{z}_{n}) < 0$ for $n$ large enough. Suppose now that $v(z_{n})>0$ for large enough $n$. Then, the second equation in (\ref{V1E3}) implies that $(u(z),v(z))$ reaches the half-line $\{u=0\,,\ v>0\}$ at some $\bar{z}_{n}< z_{n}$. Otherwise, the last equation in (\ref{V1E3}) implies that $(u(z),v(z))$ crosses the line $\{v=0\}$, and the argument of the previous case applies. Therefore, there exists a sequence $\{\hat{z}_{n}\}$ with $\hat{z}_{n}\to z_{\ast}$ as $n\to\infty$ such that one of the following possibilities takes place: \begin{align} \lim_{n\to\infty}v(\hat{z}_{n}) & >\bar{v}(0)\,,\label{Cs1}\\ \lim_{n\to\infty}v(\hat{z}_{n}) & <\bar{v}(0)\,,\label{Cs2}\\ \lim_{n\to\infty}v(\hat{z}_{n}) & =\bar{v}(0)\,. \label{Cs3a} \end{align} In the case (\ref{Cs1}), we can approximate the evolution of $(u(z),v(z))$ in intervals of the form $z\in[\bar{z}_{n}-L,\bar{z}_{n}]$ by the system (\ref{phi0system}) using standard continuous dependence results; Lemma~\ref{tozero-bwds} then implies that $(u(z),v(z))$ enters $\{(u,v):\ u>0,\ v<0\}$ at some $z<\bar{z}_{n}$ for $n$ large enough, and the claim follows. Suppose now that (\ref{Cs2}) takes place. Using again continuous dependence we obtain that $(u(z),v(z))\in\{(u,v):\ 1+ 5uv/3<0\,, \ u<0\,,\ v>0\}=R_{5}$ for some $z<\bar{z}_{n}$ and $n$ large enough.
In this region, with $z$ close to $z_{\ast}$, $v$ increases as $z$ decreases. Therefore, $d^{2}H(\xi)/d\xi^{2}$ remains positive and $dH(\xi)/d\xi$ remains negative as long as $(u(z),v(z))$ stays in $R_{5}$. Moreover, due to the second equation in (\ref{V1E3}), $|u(z)|$ increases as $z$ decreases. This implies that the inequality $1+5 uv/3<0$ remains valid during all the evolution until $z=z_{\ast}$, and thus the inequalities $d^{2}H(\xi)/d\xi^{2}>0$, $dH(\xi)/d\xi<0$ remain valid as well. However, this contradicts (\ref{H:2:zero}), so (\ref{Cs2}) cannot hold. It remains to study the case (\ref{Cs3a}). In this case there exists a small $L>0$ such that for $z\in (\bar{z}_{n}-L,\bar{z}_{n})$ the trajectory $(u(z),v(z))$ remains close to the separatrix $v=\bar{v}(u)$. On the other hand, $(u(z),v(z))$ must return to $\{(u,v):\ u>0\}$ infinitely often as $z_{n}\to z_{\ast}$. Thus the trajectory must remain close to $\bar{v}$ for $z$ close to $z_{\ast}$, since otherwise it either enters $R_{5}$, giving a contradiction as before, or it enters the region $\{(u,v):\ v<-\frac{u^{2}}{3}\}$, which contradicts (\ref{Cs3a}). Then Lemma~\ref{separa} \textit{(i)} implies that $d^{2}H(\xi)/d\xi^{2}>0$ and $dH(\xi)/d\xi<0$ remain valid during all the evolution for decreasing $z<\hat{z}_{n}$ for $n$ large enough, and this contradicts (\ref{H:2:zero}). As in the proof of Lemma~\ref{continfinity}, for $F_{\tau}(y)=(\tilde{\Phi}(\tau),\tilde{W}(\tau),\tilde{\Psi}(\tau), \tilde{\theta}(\tau))$ with $y\in B_{\delta}(x)\cap(\mathbb{R}^{+}\times\mathbb{R}^{2}\times(-\pi/2,\pi/2))$ we define $\tilde{H}(\xi)=(\xi^{2}+1)^{-1/3}\tilde{\Phi}(\tau)$ and the transformed functions $(\tilde{u}(\tilde{z}),\tilde{v}(\tilde{z}))$ by means of the transformations (\ref{V1E1}) and (\ref{V1E2}) with the obvious changes of notation.
We then notice that, by continuous dependence of solutions on the initial data, if $\delta>0$ is chosen sufficiently small then $(\tilde{u}(\bar{z}),\tilde{v}(\bar{z}))$ enters the region $\{ (u,v):\ u>0\,,\ v<0\}$ for some $\bar{z}$ close to $z_{\ast}$, and therefore $d\tilde{H}(\bar{\xi})/d\xi>0$ and $d^{2}\tilde{H}(\bar{\xi})/d\xi^{2}<0$ for some $\bar{\xi}$ close to $\xi_{\ast}$ with $\bar{\xi}>\xi_{\ast}$. We have that $\tilde{H}(\bar{\xi})$ is small and, by (\ref{S2E3}), $d^{3}\tilde{H}/d\xi^{3}>0$ as long as $\tilde{H}$ remains small. Integrating this inequality for $\xi<\bar{\xi}$ we obtain that $\frac{d\tilde{H}}{d\xi}(\xi)>0$, $\frac{d^{2}\tilde{H}}{d\xi^{2}}(\xi)<0$ and $\tilde{H}(\xi)$ remains small for $\xi<\bar{\xi}$ as long as $\tilde{H}$ is defined. Then, $\tilde{H}(\xi)$ vanishes at some $\tilde{\xi}_{\ast}>-\infty$, so the lemma follows. \end{proof} \subsection{Characterisation}\label{sec:classes} We now give sufficient conditions for the solutions of (\ref{S2E3}) to satisfy either \begin{equation}\label{B1:H} \lim_{\xi\to-\infty}H(\xi)=+\infty \end{equation} or \begin{equation}\label{B2:H} \lim_{\xi\to(\xi_{\ast})^{+}}H(\xi)=0 \quad\mbox{for some} \quad\xi_{\ast}>-\infty\,. \end{equation} Observe that these behaviours imply (\ref{B1}) and (\ref{B2}) respectively, for the corresponding function $\Phi(\tau)$ given by (\ref{S2E8a}) and (\ref{S2E8}). We start by giving sufficient conditions for (\ref{B1:H}), but first we need the following auxiliary calculus result. \begin{lemma}\label{polyn} Given the polynomials \[ P_{1}(Y) =-\frac{Y^{5}}{60}+\frac{Y^{4}}{12}-\frac{Y^{3}}{6}\quad \mbox{and}\quad P_{2}(Y) =-\frac{Y^{3}}{6}\,, \] they are strictly decreasing and positive for $Y<0$. Moreover, if $\lambda\in \mathbb{R}$ satisfies $1+2\lambda>0$, then \begin{equation}\label{In2} P_{1}(Y)+\lambda P_{2}(Y) \geq\frac{1}{2}P_{1}(Y) \,, \quad \mbox{for}\quad Y<0\,.
\end{equation} If $1+2\lambda\leq 0$ then \begin{equation}\label{In3} P_{1}(Y)+2\lambda P_{2}(Y) > - \frac{4}{5}3^{\frac{3}{2}}\max\{|1+2\lambda|^{\frac{5}{2}} ,1 \} \end{equation} for $Y<0$, but $P_{1}(Y)+2\lambda P_{2}(Y)\geq 0$ if $Y\leq 5/2-\sqrt{25-40(1+2\lambda)}/2$. \end{lemma} \begin{proof}[Proof] The monotonicity properties of $P_{1}$ and $P_{2}$ are just an elementary calculus exercise. The inequality (\ref{In2}) is a consequence of the fact that the polynomial $-\frac{Y^{5}}{60}+\frac{Y^{4}}{12}-c\frac{Y^{3}}{6}=P_{1}(Y)+(c-1)P_{2}(Y)$ is non-negative and decreasing for $Y<0$ if $c\geq 0$; in particular \[ \frac{1}{2}P_{1}(Y)+\lambda P_{2}(Y) =\frac{1}{2}\left[ -\frac{Y^{5}}{60} +\frac{Y^{4}}{12}-(1+2\lambda)\frac{Y^{3}}{6}\right] \] is non-negative if $(1+2\lambda)\geq 0$, thus (\ref{In2}) holds. If $c<0$ (i.e. $1+2\lambda<0$) then $P_{1}(Y)+(c-1)P_{2}(Y)<0$ in $Y\in((5-\sqrt{25-40c})/2,0)$. But there the polynomial is larger than or equal to its minimum value in $Y<0$, namely, \[ P_{1}(Y)+(c-1)P_{2}(Y) \geq \left(\frac{1}{30}-\frac{(4-6c)^{\frac{1}{2}}}{60}-\frac{c}{15}\right) (2-(4-6c)^{\frac{1}{2}})^{3} > -\frac{4(1-2c)^{\frac{5}{2}}}{15} \] and (\ref{In3}) follows. \end{proof} We now give sufficient conditions for (\ref{B1:H}) to hold. \begin{proposition}\label{Linf} Let us assume that there exists a positive constant $c_{1}=c_{1}(a)>0$ and some $\xi_{0}\in\mathbb{R}$ with \begin{equation}\label{bigness1} c_{1}>\left(\frac{24}{5}\right)^{3}|a|^{5}(1+3|a|) \quad \mbox{if} \quad |\xi_{0}|^{2}< -2a \quad (a<0) \end{equation} and \begin{equation}\label{bigness2} c_{1}>16(2+|a|) \quad \mbox{if} \quad |\xi_{0}|^{2}> -2a\,, \end{equation} such that a solution of (\ref{S2E3}) satisfies $((\xi_{0})^{2}+1+|a|)(H(\xi_{0}))^{3}\geq c_{1}$, $dH(\xi_{0})/d\xi<0$ and $d^{2}H(\xi_{0})/d\xi^{2}>0$. Then (\ref{B1:H}) holds.
\end{proposition} \begin{proof}[Proof] Integrating (\ref{S2E3}) three times for $\xi<\xi_{0}$ we obtain: \begin{equation}\label{In1} H(\xi)\geq H(\xi_{0}) + \int_{\xi}^{\xi_{0}}\int_{s_{1}}^{\xi_{0}}\int_{s_{2}}^{\xi_{0}} \left((s_{3}^{2}+a) - \frac{1}{(H(s_{3}))^{3}}\right) ds_{3}ds_{2}ds_{1}\,. \end{equation} Given the polynomials defined in Lemma~\ref{polyn} and letting, for every $\xi<\xi_{0}$, \[ Y=\left\{ \begin{array}{l} \frac{\xi}{|\xi_{0}|} - 1 \ \mbox{if} \ \xi_{0} > 0 \\ \frac{\xi}{|\xi_{0}|} + 1 \ \mbox{if} \ \xi_{0} < 0 \end{array} \right. \] we can write \begin{equation}\label{xi0:int} \int_{\xi}^{\xi_{0}}\int_{s_{1}}^{\xi_{0}}\int_{s_{2}}^{\xi_{0}} (s_{3}^{2}+a)ds_{3}ds_{2}ds_{1} = |\xi_{0}|^{5}\left[ P_{1}(Y) + \frac{a}{|\xi_{0}|^{2}} P_{2}(Y)\right]\,. \end{equation} Clearly $\xi/|\xi_{0}|<-1$ if $\xi<\xi_{0}$ and $\xi_{0}<0$, and $\xi/|\xi_{0}|<1$ if $\xi<\xi_{0}$ and $\xi_{0}>0$; thus in either case $Y<0$ and the polynomials are in the range of values considered in Lemma~\ref{polyn}. We can now distinguish two cases. Suppose first that $\xi_{0}^{2}<-2a$. Then a Gronwall type of argument shows that $H(\xi)>1$ for any $\xi<\xi_{0}$ with $\xi\in[-\sqrt{2|a|},\sqrt{2|a|}]$. Indeed, as long as $H(\xi)>1$, (\ref{In1}) can be estimated from below by \begin{equation}\label{In4} H(\xi)\geq \frac{c_{1}^{\frac{1}{3}}}{(3|a|+1)^{\frac{1}{3}}} + |\xi_{0}|^{5}\left[ P_{1}(Y) +\left(\frac{a}{|\xi_{0}|^{2}} -1\right) P_{2}(Y)\right] \end{equation} (using (\ref{xi0:int})). Then we can apply (\ref{In3}) with $2\lambda=a/|\xi_{0}|^{2}-1\ (\leq -1)$, hence \[ P_{1}(Y)+\left(\frac{a}{|\xi_{0}|^{2}} -1\right) P_{2}(Y) > - \frac{4}{5}3^{\frac{3}{2}}\max\left\{\frac{|a|^{\frac{5}{2}}}{|\xi_{0}|^{5}} ,1 \right\}\,. \] Using this in (\ref{In4}) yields \[ H(\xi)>\frac{c_{1}^{\frac{1}{3}}}{(3|a|+1)^{\frac{1}{3}}} - \frac{4}{5}3^{\frac{3}{2}}|a|^{\frac{5}{2}} \] and (\ref{bigness1}) implies the claim by a continuity argument.
Let us assume now that $\xi_{0}^{2}>-2a$. Using (\ref{xi0:int}) we obtain \[ \int_{\xi}^{\xi_{0}}\int_{s_{1}}^{\xi_{0}}\int_{s_{2}}^{\xi_{0}} (s_{3}^{2}+a)ds_{3}ds_{2}ds_{1}>\frac{|\xi_{0}|^{5}}{2}P_{1}(Y) \] by Lemma~\ref{polyn}. Applying now this inequality to (\ref{In1}) we obtain the following estimate for $\xi_{0}^{2}>-2a$: \begin{equation}\label{G1} H(\xi)\geq H(\xi_{0}) +\frac{|\xi_{0}|^{5}}{2}P_{1}(Y) -\int_{\xi}^{\xi_{0}} \int_{s_{1}}^{\xi_{0}} \int_{s_{2}}^{\xi_{0}} \frac{ds_{3}}{(H(s_{3}))^{3}}ds_{2}ds_{1} \,, \quad\xi\leq\xi_{0}\,. \end{equation} Now, we can use a Gronwall type of argument to prove that if $c_{1}$ satisfies (\ref{bigness2}) then (\ref{G1}) implies \begin{equation}\label{G2} (H(\xi))^{3}\geq\frac{c_{1}}{2( |\xi_{0}|^{2}+1+|a|) } \quad \mbox{for} \quad \xi\leq\xi_{0} \,. \end{equation} We observe that (\ref{G2}) holds at $\xi_{0}$ by hypothesis and that it also holds for $\xi$ close to $\xi_{0}$ by continuity. Then, as long as (\ref{G2}) is satisfied, (\ref{G1}) implies that \begin{equation}\label{G3} H(\xi)\geq\left( \frac{c_{1}}{|\xi_{0}|^{2}+1+|a|}\right)^{\frac{1}{3}} +\frac{|\xi_{0}|^{5}}{2}P_{1}(Y) -\frac{2(|\xi_{0}|^{2}+1+|a|) }{c_{1}}|\xi_{0}|^{3}P_{2}(Y) \,. \end{equation} We can apply Lemma~\ref{polyn}, and this implies that the last term in (\ref{G3}) can be absorbed into the previous one for any $\xi<\xi_{0}$ if $c_{1}$, $a$ and $\xi_{0}$ satisfy \begin{equation}\label{cond:c1:2} c_{1}>8(|\xi_{0}|^{2}+1+|a|)/|\xi_{0}|^{2}\,, \end{equation} and then (\ref{G2}) follows. Let us then prove (\ref{cond:c1:2}). If $|\xi_{0}| \geq 1$, (\ref{bigness2}) implies (\ref{cond:c1:2}). If $|\xi_{0}| < 1$ we consider two further cases. For $| \xi-\xi_{0}|\leq 2$ we obtain that the last two terms in (\ref{G3}) can be bounded from below by $-8(2+|a|)/(3c_{1})$. But this quantity can be absorbed by the first term if $c_{1}>(40)^{\frac{3}{4}}(2+|a|)$, which holds if (\ref{bigness2}) is satisfied. Therefore (\ref{G2}) holds for this range of values.
On the other hand, if $|\xi- \xi_{0}| >2$, then the second term of (\ref{G3}) can be estimated from below by $(24/15)|\xi- \xi_{0}|^{3}$, while the last term in (\ref{G3}) can be estimated by $-(2+|a|)|\xi-\xi_{0}|^{3}/(3c_{1})$. Then, we can absorb the last term in (\ref{G3}) into the second one if $c_{1}>5(2+|a|)/24$, which is guaranteed by (\ref{bigness2}). Thus, the inequality (\ref{G3}) holds for arbitrary values of $\xi\leq\xi_{0}$, and this implies (\ref{B1:H}) by taking the limit $\xi\to-\infty$. \end{proof} We end this section by giving sufficient conditions for (\ref{B2:H}) to hold. \begin{proposition}\label{Lext} Let us assume that there exist positive constants $c_{2}$ and $c_{3}$, depending on $a$, and some $\xi_{0}\in\mathbb{R}$ with \begin{equation}\label{smallness} \frac{c_{2}^{\frac{1}{3}}}{c_{3}} (|\xi_{0}|^{2}+1+|a|)^{\frac{4}{3}}< \frac{1}{10}\,, \end{equation} such that a solution of (\ref{S2E3}) satisfies $0<((\xi_{0})^{2}+1+|a|)(H(\xi_{0}))^{3}\leq c_{2}$, $(|\xi_{0}|^{2}+1+|a|)^{\frac{5}{3}}dH(\xi_{0})/d\xi>c_{3}$ and $d^{2}H(\xi_{0})/d\xi^{2}<0$. Then there exists $\xi_{\ast}\in(-\infty,\xi_{0})$ such that (\ref{B2:H}) holds. \end{proposition} \begin{proof}[Proof] Suppose that $c_{2}$ is sufficiently small. Then, as long as $0<(\xi^{2}+1+|a|)(H(\xi))^{3}\leq2 c_{2}$ we obtain from (\ref{S2E3}) that $d^{3}H(\xi)/d\xi^{3}>0$. Integrating this inequality over $(\xi,\xi_{0})$ once we obtain that, as long as $(\xi^{2}+1+|a|)(H(\xi))^{3}\leq 2 c_{2}$ is satisfied for $\xi<\xi_{0}$, then $d^{2}H(\xi)/d\xi^{2}<0$ and, integrating a second time, also that $dH(\xi)/d\xi>c_{3}(|\xi_{0}|^{2}+1+|a|)^{-\frac{5}{3}}$. Then this, together with the concavity, implies that $H(\xi)$ vanishes at some $\xi=\xi^{*}$.
But a third integration implies that \[ H(\xi) \leq H(\xi_{0})-\frac{dH(\xi_{0})}{d\xi} (\xi_{0}-\xi)\leq \left( 2^{\frac{1}{3}} - \frac{c_{3} (\xi_{0}-\xi) }{ c_{2}^{\frac{1}{3}}(|\xi_{0}|^{2}+1+|a|)^{\frac{4}{3}}} \right) \left( \frac{c_{2}}{|\xi_{0}|^{2}+1+|a|} \right)^{\frac{1}{3}}\,, \] thus $\xi^{*}\geq\xi_{0}-\frac{c_{2}^{\frac{1}{3}}}{c_{3}}(|\xi_{0}|^{2}+1+|a|)^{\frac{4}{3}}$. Finally, the condition (\ref{smallness}) implies that we can replace $\xi_{0}$ by any $\xi\in (\xi^{*},\xi_{0})$; thus $(\xi^{2}+1+|a|)(H(\xi))^{3}\leq 2 c_{2}$ holds in this interval and the result follows by a classical continuation argument. \end{proof} \section{Shooting argument}\label{sec:4} In this section we apply a standard shooting argument to prove the existence of solutions of (\ref{compact1})-(\ref{compact4}) such that (\ref{B4}) holds, and such that $\Phi$ remains positive and bounded for all $\tau\in \mathbb{R}$. Specifically, the main result of this section is: \begin{proposition}\label{existence:1} There exists a solution of (\ref{compact1})-(\ref{compact4}) $(\Phi(\tau),W(\tau),\Psi(\tau),\theta(\tau))$ defined for all $\tau\in(-\infty,\infty)$ such that $\lim_{\tau\to\infty}(\Phi (\tau),W(\tau),\Psi(\tau),\theta(\tau))=(1,0,0,\pi/2)$ and satisfying $\Phi(\tau)>0$ for all $\tau\in \mathbb{R}$, (\ref{B4}) and \begin{equation}\label{F4E3} \liminf_{\tau\to-\infty}\Phi(\tau) <\infty\,. \end{equation} \end{proposition} The proof of Proposition~\ref{existence:1} is divided into several steps. First we prove that points on the curve $\mathcal{V}_{+}\cap(\mathbb{R}^{3}\times\{\theta=\frac{\pi}{2}-\varepsilon\})$ with $\varepsilon>0$ sufficiently small yield solutions of the equation (\ref{S2E3}) satisfying the hypotheses of Proposition~\ref{Linf} if $\nu>0$ and those of Proposition~\ref{Lext} if $\nu<0$: \begin{lemma}\label{start:shooting} Let $\delta_{0}$ and $\Lambda(\nu,\sigma)$ be as in Lemma~\ref{trivia}.
Then there exist $\nu_{0}>0$ and $\varepsilon>0$ such that for $w=(\nu,\sigma)\in\Pi$ with $\nu_{0}\leq\nu\leq \delta_{0}/4$ and $\sigma=-\varepsilon$ the trajectory associated to (\ref{compact1})-(\ref{compact4}) starting at $\Lambda(\nu,\sigma)$ satisfies (\ref{B1}). Moreover, if $-\delta_{0}/4\leq\nu\leq-\nu_{0}$ and $\sigma=-\varepsilon$ the corresponding trajectory of (\ref{compact1})-(\ref{compact4}) satisfies (\ref{B2}). \end{lemma} \begin{proof}[Proof] The dynamics induced by the system (\ref{compact1})-(\ref{compact4}) on the invariant subspace $\mathbb{R}^{3}\times\{\theta=\pi/2\}$ have been summarised in Proposition~\ref{Tapas1} and Lemma~\ref{Tapas2}. In particular, the trajectory starting at $\Lambda(\nu,0)$ with $\nu>0$ sufficiently small satisfies (\ref{P1:prime}) and, as can be easily deduced, also \[ \lim_{\tau\to-\infty}\frac{W(\tau)}{\tau^{2}}=-\frac{1}{2}\quad \mbox{and}\quad \lim_{\tau\to-\infty}\frac{\Psi(\tau)}{\tau}=-1\,. \] Then, classical continuous dependence results for ODEs imply that for any $\rho_{0}>0$ arbitrarily small and $\nu_{0}>0$ small enough there exists $\varepsilon$ sufficiently small such that, for $\nu_{0}\leq\nu\leq\delta_{0}/4$, the trajectory starting at $\Lambda(\nu,-\varepsilon)$ at $\tau=0$ satisfies \begin{equation}\label{G4} \left| \Phi(\tau_{0})+\frac{\tau_{0}^{3}}{6}\right| \leq\rho_{0}|\tau_{0}|^{3} \,, \quad\left| W(\tau_{0})+\frac{\tau_{0}^{2}}{2}\right| \leq\rho_{0}\left| \tau_{0}\right| ^{2}\,, \quad\left| \Psi(\tau_{0}) + \tau_{0}\right| \leq\rho_{0}|\tau_{0}| \end{equation} for some $\tau_{0}<0$.
Using (\ref{G4}) and (\ref{S2E8a})-(\ref{2nd:dH}) to compute $H$, $dH/d\xi$ and $d^{2}H/d\xi^{2}$ at the value $\xi_{0}$ (given by (\ref{S2E8})), we obtain \[ H(\xi_{0}) \geq c_{1}|\xi_{0}|^{-\frac{2}{3}} \,, \quad\frac{dH(\xi_{0})}{d\xi} <0 \,,\quad\frac{d^{2}H(\xi_{0})}{d\xi^{2}} >0 \] where $c_{1}>0$ can be made arbitrarily large by choosing $\varepsilon$ sufficiently small and $|\tau_{0}|$ sufficiently large, so as to guarantee that (\ref{bigness2}) is satisfied. Then we apply Proposition~\ref{Linf} to obtain (\ref{B1:H}) and hence (\ref{B1}) follows. On the other hand, the trajectories starting at $\Lambda(\tilde{\nu},0)$ with $\tilde{\nu}<0$ satisfy $\lim_{\tau\to(\tau_{\ast})^{+}}\Phi(\tau)=0$ for some $\tau_{\ast}>-\infty$. Moreover, (\ref{P2:prime}) is satisfied, as well as \[ \lim_{\tau\to(\tau_{\ast})^{+}}(\tau-\tau_{\ast})^{\frac{1}{4}}W(\tau) =\frac{3}{4}\left(\frac{64}{15}\right)^{\frac{1}{4}} \quad \mbox{and} \quad \lim_{\tau\to(\tau_{\ast})^{+}}(\tau-\tau_{\ast})^{\frac{5}{4}}\Psi(\tau)= -\frac{3}{16}\left( \frac{64}{15}\right)^{\frac{1}{4}}\,. \] Suppose now that $-\delta_{0}/4\leq\nu\leq-\nu_{0}$, $\sigma =-\varepsilon$. Assuming again that $\varepsilon$ is sufficiently small we obtain that the numbers \[ \frac{\Phi(\tau_{0}) }{(\tau_{0}-\tau_{\ast})^{\frac{3}{4}}} -\left( \frac{64}{15}\right) ^{\frac{1}{4}}\,, \quad(\tau_{0}-\tau_{\ast})^{\frac {1}{4}}W(\tau_{0}) -\frac{3}{4}\left( \frac{64}{15}\right) ^{\frac{1}{4}}\,, \quad(\tau_{0}-\tau_{\ast})^{\frac{5}{4}}\Psi(\tau_{0}) +\frac{3}{16}\left( \frac{64}{15}\right) ^{\frac{1}{4}} \] can be made arbitrarily small for $\tau_{0}$ close to $\tau_{\ast}$, $\tau_{0}>\tau_{\ast}$.
We can use this approximation to obtain that \[ H(\xi_{0}) \leq c_{2}| \xi_{0}|^{-\frac{2}{3}} \,,\quad \frac{dH(\xi_{0})}{d\xi}>c_{3} \,, \quad \frac{d^{2}H(\xi_{0})}{d\xi^{2}} <0 \] where $c_{2}\propto(\tau_{0}-\tau_{\ast})^{1/4}$ and $c_{3}\propto(\tau_{0}-\tau_{\ast})^{-1/4}$, thus they can be chosen to satisfy (\ref{smallness}) by taking $\varepsilon>0$ sufficiently small. We can now apply Proposition~\ref{Lext} to conclude the proof of the result. \end{proof} Next we prove that if $\liminf_{\tau\to(\tau_{\ast})^{+}}\Phi(\tau) =0$ for some $\tau_{\ast}>-\infty$, then $\lim_{\tau\to(\tau_{\ast})^{+}}\Phi(\tau)=0$. Therefore, we will be in the situation stated in Lemma \ref{contzero} and it will be possible to prove continuity of this behaviour for small changes of the initial values. \begin{lemma}\label{L1} Let $\Phi(\tau)$ be a solution of (\ref{S2E9})-(\ref{S2E9bis}) defined in some interval $(\tau_{\ast},\tau^{\ast})$ with $\tau_{\ast}>-\infty$, $\tau^{\ast}\leq\infty$, $\Phi(\tau)>0$ for $\tau>\tau_{\ast}$. Then \begin{equation}\label{F1E1:2} \liminf_{\tau\to(\tau_{\ast})^{+}}\Phi(\tau) =0 \end{equation} implies (\ref{B2}) for this value $\tau=\tau_{\ast}$. \end{lemma} \begin{proof}[Proof] It is easier to work with the original equation (\ref{S2E3}); observe that (\ref{F1E1:2}) is then equivalent to \begin{equation}\label{F1E1} \liminf_{\xi\to(\xi_{\ast})^{+}}H(\xi) =0 \end{equation} for $\xi_{\ast}$ given by $\tau_{\ast}=\int_{0}^{\xi_{\ast}}(\eta^{2} +1)^{\frac{4}{9}}d\eta$. Let us then prove that (\ref{F1E1}) implies (\ref{B2:H}), and therefore (\ref{B2}) will follow. We argue by contradiction: we assume that (\ref{F1E1}) is satisfied but (\ref{B2:H}) does not hold; this means that also \begin{equation}\label{F1E3} \limsup_{\xi\to(\xi_{\ast})^{+}}H(\xi)>0 \,.
\end{equation} On the one hand, (\ref{F1E1}) gives the existence of a decreasing sequence $\{\bar{\xi}_{n}\}$ such that $\bar{\xi}_{n}\to\xi_{\ast}$ as $n\to\infty$, $H(\bar{\xi}_{n+1})<H(\bar{\xi}_{n})$ and $\lim_{n\to\infty}H(\bar{\xi}_{n}) =0$. On the other hand, (\ref{F1E3}) implies the existence of a sequence with elements $\tilde{\xi}_{n}\in(\xi_{\ast},\bar{\xi}_{n+1})$ such that $H(\tilde{\xi}_{n})=H(\bar{\xi}_{n})$. Then there exists another sequence $\{\xi_{n}\}$ with $\xi_{n}\in(\tilde{\xi}_{n},\bar{\xi}_{n})$ and $\lim_{n\to\infty}\xi_{n}=\xi_{\ast}$, where local minima are attained, i.e. satisfying \begin{equation}\label{F3E4:I} H(\xi_{n}) =\min_{\xi\in(\tilde{\xi}_{n},\bar{\xi}_{n})}H(\xi)\,, \quad \lim_{n\to\infty}H(\xi_{n}) =0 \,,\quad \frac{dH(\xi_{n})}{d\xi} =0 \,, \quad \frac{d^{2}H(\xi_{n})}{d\xi^{2}} \geq0\,. \end{equation} Let also $\{\hat{\xi}_{n}\}$ be the sequence where local maxima are attained, such that $\xi_{n+1}<\hat{\xi}_{n}<\xi_{n}$ and satisfying \begin{equation}\label{F3E4} H(\hat{\xi}_{n}) =\max_{\xi\in(\xi_{n+1},\xi_{n})}H(\xi)\,, \quad \limsup_{n\to\infty}H(\hat{\xi}_{n}) >0\,, \quad \frac{dH}{d\xi}(\hat{\xi}_{n})=0\,,\quad\frac{d^{2}H}{d\xi^{2}}(\hat{\xi}_{n}) \leq 0\,. \end{equation} Let us now show that \begin{equation}\label{F3E5} \liminf_{n\to\infty} \left(\left| \frac{d^{2}H(\hat{\xi}_{n})}{d\xi^{2}} \right| \frac{1}{H(\hat{\xi}_{n}) }\right) >0\,. \end{equation} Indeed, from (\ref{S2E3}) we obtain that $d^{3}H/d\xi^{3}\geq-C_{1}$ for some $C_{1}>0$ if $\xi \in\left[ \xi_{n+1},\xi_{n}\right]$ and, integrating this inequality, we also obtain \[ H(\xi_{n}) \geq H(\hat{\xi}_{n}) +\frac{d^{2}H(\hat{\xi}_{n})}{d\xi^{2}} \frac{(\xi_{n}-\hat{\xi}_{n})^{2}}{2} -C_{1}\frac{( \xi_{n}-\hat{\xi}_{n})^{3}}{6}\,. \] Now, if (\ref{F3E5}) fails, it follows that $H(\xi_{n}) >\frac{H(\hat{\xi}_{n})}{2}$ for some subsequence, and this contradicts (\ref{F3E4:I}) and (\ref{F3E4}). Thus (\ref{F3E5}) holds.
We now claim that (\ref{F3E5}) implies that $H(\xi)$ vanishes for some $\xi \in[\xi_{n+1},\hat{\xi}_{n}]$. Indeed, since $d^{3}H/d\xi^{3}\geq-C_{1}$ we then have that, for $n$ large enough, \[ \frac{d^{2}H(\xi)}{d\xi^{2}} \leq\frac{d^{2}H(\hat{\xi}_{n})}{d\xi^{2}} + C_{1}(\hat{\xi}_{n}-\xi_{n+1}) \leq\frac{1}{2}\frac{d^{2}H(\hat{\xi}_{n})}{d\xi^{2}} <0\,,\quad\xi\in[\xi_{n+1},\hat{\xi}_{n}]\,. \] This implies that $dH(\xi)/d\xi>0$ for $n$ large enough with $\xi\in[\xi_{n+1},\hat{\xi}_{n}]$, but this contradicts the definition of $\xi_{n+1}$, and so for $n$ large enough there is a first value $\xi\in[\xi_{n+1},\hat{\xi}_{n}]$ such that $H(\xi)=0$, i.e. (\ref{B2:H}) holds. \end{proof} \begin{remark}\label{positivity} Notice that a classical Gronwall argument implies that any solution $\Phi(\tau)$ can be extended to arbitrarily negative values of $\tau$ as long as $\Phi(\tau)$ remains away from zero. More precisely, if $\liminf_{\tau\to\tau_{0}^{+}}\Phi(\tau)>0$ for any $\tau_{0}\geq\tau_{\ast}>-\infty$, it is possible to extend $\Phi(\tau)$ as a solution of (\ref{compact1})-(\ref{compact4}) for times $\tau>\tau_{\ast}-\delta$ and some $\delta>0$. Conversely, due to Lemma \ref{L1}, the maximal existence time is finite and given by $\tau_{\ast}>-\infty$ if $\liminf_{\tau\to(\tau_{\ast})^{+}}\Phi(\tau)=0$. \end{remark} We are now ready to prove Proposition~\ref{existence:1}. \begin{proof}[Proof of Proposition~\ref{existence:1}] We consider the one-dimensional family of solutions of (\ref{compact1})-(\ref{compact4}) obtained by choosing in Lemma~\ref{start:shooting} the parameters $\sigma=-\varepsilon$ with $\varepsilon>0$ small enough and $\nu\in(-\delta_{0}/4,\delta_{0}/4)$. We define as $\mathcal{U}_{+}$ the set of values of $\nu$ such that the corresponding solution of (\ref{compact1})-(\ref{compact4}) satisfies (\ref{B1}).
On the other hand, we denote by $\mathcal{U}_{-}$ the set of values of $\nu$ such that the corresponding solution of (\ref{compact1})-(\ref{compact4}) satisfies (\ref{B2}) for some $\tau_{\ast}>-\infty$. Due to Lemma~\ref{start:shooting} we have that $\mathcal{U}_{+}\neq\varnothing$ and $\mathcal{U}_{-}\neq\varnothing$. Moreover, by definition $\mathcal{U}_{+}\cap\mathcal{U}_{-}=\varnothing$. Due to Lemmas \ref{continfinity} and \ref{contzero} the sets $\mathcal{U}_{+}$ and $\mathcal{U}_{-}$ are open. Therefore, there exists $\bar{\nu}\in\left( -\frac{\delta_{0}}{4},\frac{\delta_{0}}{4}\right) $ such that $\bar{\nu}\notin\mathcal{U}_{+}\cup\mathcal{U}_{-}$. The corresponding solution of (\ref{compact1})-(\ref{compact4}) associated to the parameter $\bar{\nu}$ has the property that, for any $\tau_{0}>-\infty$, we have $\inf_{\tau\in(\tau_{0},\infty)}\Phi(\tau) \geq C_{-}(\tau_{0}) >0$, since otherwise $\bar{\nu}\in\mathcal{U}_{-}$ due to Lemma~\ref{L1}. This implies also that $\sup_{\tau\in(\tau_{0},\infty)}\Phi(\tau) \leq C_{+}(\tau_{0}) <\infty$ because the right-hand side of (\ref{compact1})-(\ref{compact4}) is bounded in compact sets if $\Phi(\tau)\geq C_{-}(\tau_{0})$. Therefore, this solution is globally defined for $\tau\in(-\infty,\infty)$. Moreover, (\ref{F4E3}) holds, since otherwise $\bar{\nu}\in\mathcal{U}_{+}$, and the result follows. \end{proof} \section{Oscillatory solutions}\label{sec:dynamics} We recall that the final aim is to prove that the solutions found in Proposition~\ref{existence:1} have no alternative but to approach the invariant subspace $\theta=-\pi/2$ as $\tau\to-\infty$, and that they remain uniformly bounded while $\Phi$ stays positive (see (\ref{B3})-(\ref{B4})). The argument is by contradiction, and in this section we prove the following proposition, which is the first step in the argument.
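As an aside, the dichotomy between the behaviours (\ref{B1:H}) and (\ref{B2:H}) on which the shooting argument of Section~\ref{sec:4} relies can also be observed numerically. The following Python sketch is purely illustrative and not part of the proofs: the value $a=-1$, the initial data $H(0)=1$, $dH(0)/d\xi=0$, and the classification thresholds are ad hoc assumptions. It integrates $H'''=1/H^{3}-(\xi^{2}+a)$ backwards in $\xi$ with a classical Runge--Kutta scheme and bisects on the shooting parameter $d^{2}H(0)/d\xi^{2}$.

```python
# Illustrative numerical sketch (not part of the proofs): the equation
# H''' = 1/H^3 - (xi^2 + a) is integrated backwards in xi.  Trajectories
# are classified as (B1:H) when H grows large and as (B2:H) when H
# approaches zero; the thresholds, the value a = -1 and the initial data
# H(0) = 1, H'(0) = 0 are ad hoc choices made only for the illustration.

def classify(h2, a=-1.0, h0=1.0, h1=0.0, xi_span=15.0, dxi=2e-3):
    """Integrate (H, H', H'') backwards from xi = 0 with H''(0) = h2 and
    return 'B1' (H blows up) or 'B2' (H touches down at finite xi)."""
    def f(xi, y):
        return (y[1], y[2], 1.0 / y[0] ** 3 - (xi * xi + a))

    y, xi, h = (h0, h1, h2), 0.0, -dxi
    while xi > -xi_span:
        # one classical Runge-Kutta step of size h < 0
        k1 = f(xi, y)
        k2 = f(xi + h / 2, tuple(y[i] + h / 2 * k1[i] for i in range(3)))
        k3 = f(xi + h / 2, tuple(y[i] + h / 2 * k2[i] for i in range(3)))
        k4 = f(xi + h, tuple(y[i] + h * k3[i] for i in range(3)))
        y = tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
        xi += h
        if y[0] < 5e-2:          # heuristic touchdown threshold: (B2:H)
            return "B2"
        if y[0] > 1e3:           # heuristic blow-up threshold: (B1:H)
            return "B1"
    return "B1" if y[0] > h0 else "B2"

def shoot(lo, hi, n=16):
    """Bisect on H''(0) between a (B2:H) datum lo and a (B1:H) datum hi."""
    assert classify(lo) == "B2" and classify(hi) == "B1"
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if classify(mid) == "B1":
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Since both behaviours are open in the shooting parameter, the bisection brackets a datum whose trajectory exhibits neither behaviour within the integration window, mimicking the choice of $\bar{\nu}\notin\mathcal{U}_{+}\cup\mathcal{U}_{-}$ in the proof of Proposition~\ref{existence:1}.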
\begin{proposition}\label{char:max} Suppose that $(\Phi(\tau),W(\tau),\Psi(\tau),\theta(\tau))$ is a solution of (\ref{compact1})-(\ref{compact4}) defined for all $\tau\in(-\infty,\infty)$ and satisfying \begin{equation}\label{F5E1} \liminf_{\tau\to-\infty}\Phi(\tau) <\infty\, \end{equation} and that \begin{equation}\label{G1E6} \limsup_{\tau\to-\infty}\left( \Phi(\tau) +\left| \frac{d\Phi(\tau)}{d\tau}\right| + \left| \frac{d^{2}\Phi(\tau)}{d\tau^{2}} \right| \right) =\infty\,. \end{equation} Then, there exists a decreasing sequence $\{\tau_{n}^{\ast}\}$ with $\lim_{n\to\infty}\tau_{n}^{\ast}=-\infty$ and a sequence $\{\varepsilon_{n}\}$ with $\varepsilon_{n}>0$ small enough such that \begin{equation}\label{def:max} \Phi(\tau_{n}^{\ast}) =\max_{\tau\in[\tau_{n}^{\ast}-\varepsilon_{n},\tau_{n}^{\ast}+\varepsilon_{n}]}\Phi(\tau) \end{equation} and that \begin{equation}\label{G5E3} \limsup_{n\to\infty}\Phi( \tau_{n}^{\ast})=\infty\,. \end{equation} \end{proposition} Before proving this result we prove three auxiliary lemmas. First we show that there exists a decreasing sequence of local minima attained at certain $\tau=\tau_{n}$ with $\lim_{n\to\infty}\tau_{n}=-\infty$. \begin{lemma}\label{L2} Let $( \Phi(\tau),W(\tau),\Psi(\tau),\theta(\tau))$ satisfy the assumptions of Proposition~\ref{char:max}. Then, there exists a decreasing sequence $\{\tau_{n}\}$ such that $\lim_{n\to\infty}\tau_{n}=-\infty$ and that \begin{equation}\label{G1E7} \lim_{n\to\infty}\Phi(\tau_{n}) \leq 1 \quad\mbox{and}\quad\frac{d\Phi(\tau_{n})}{d\tau}=0\,, \quad\frac{d^{2}\Phi(\tau_{n})}{d\tau^{2}} \geq 0 \quad\mbox{for all}\quad n\,. \end{equation} \end{lemma} \begin{proof}[Proof] First, we claim that \begin{equation}\label{G1E3} \liminf_{\tau\to-\infty}\Phi(\tau) \leq 1\,, \quad \limsup_{\tau\to-\infty} \Phi(\tau) \geq 1\,. \end{equation} Indeed, suppose first that $\liminf_{\tau\to-\infty}\Phi(\tau) >1$.
Then, there exist $\varepsilon_{0}>0$ and $\tau_{0}$ sufficiently negative such that $\Phi(\tau)\geq 1 + 2\varepsilon_{0}$ for $\tau\leq\tau_{0}$. Then, (\ref{S2E8a}) and (\ref{S2E8}) imply $H(\xi)\geq(1+\varepsilon_{0})|\xi|^{-2/3}$ for $\xi\leq\xi_{0}$, where $\xi_{0}$ is related to $\tau_{0}$ by means of (\ref{S2E8}). This inequality applied to (\ref{S2E3}) gives \begin{equation}\label{G1E4} \frac{d^{3}H}{d\xi^{3}}=\frac{1}{H^{3}}-(\xi^{2}+a) \leq-\varepsilon_{1}\xi^{2} \end{equation} for $\xi\leq\xi_{0}$ and some $\varepsilon_{1}>0$ (by taking a more negative $\tau_{0}$ if necessary). Integrating (\ref{G1E4}) three times for $\xi\leq\xi_{0}$ gives $H(\xi)>\varepsilon_{1}|\xi_{0}|^{5}P_{1}(1+\xi/|\xi_{0}|)$ (where $P_{1}$ is as in Lemma~\ref{polyn}). Thus $\lim_{\xi\to-\infty}H(\xi)=\infty$, but this contradicts (\ref{F5E1}). We now prove the second inequality in (\ref{G1E3}). Suppose on the contrary that $\limsup_{\tau\to-\infty}\Phi(\tau)<1$; then $\Phi(\tau) \leq1 -\varepsilon_{0}$ for some $\varepsilon_{0}>0$ and for $\tau\leq\tau_{0}$ if $\tau_{0}<0$ with $|\tau_{0}|$ large enough. The transformation (\ref{S2E8})-(\ref{S2E8a}), with the obvious correspondence in notation, implies that $H(\xi) \leq(1-\varepsilon_{0})|\xi|^{-\frac{2}{3}}$ for $\xi\leq\xi_{0}$, whence (\ref{S2E3}) yields \begin{equation}\label{G1E5} \frac{d^{3}H}{d\xi^{3}}=\frac{1}{H^{3}}-(\xi^{2}+a) \geq\varepsilon_{2}\xi^{2} \end{equation} for some $\varepsilon_{2}>0$ and $\xi\leq\xi_{0}$. Integrating (\ref{G1E5}) for $\xi\leq\xi_{0}$ we obtain $H(\xi)<-\varepsilon_{2}|\xi_{0}|^{5}P_{1}(1+\xi/|\xi_{0}|)$. This implies the existence of a $\xi_{\ast}>-\infty$ such that $\lim_{\xi\to(\xi_{\ast})^{+}}H(\xi)=0$. This contradicts the assumption that $\xi_{\ast}=-\infty$ (see Remark~\ref{positivity}) and (\ref{G1E3}) follows. Suppose now that \begin{equation}\label{lim:1} \liminf_{\tau\to-\infty}\Phi(\tau)= \limsup_{\tau\to-\infty}\Phi(\tau) =\lim_{\tau\to-\infty}\Phi(\tau)=1\,.
\end{equation} We define a sequence of functions $\{\Phi_{n}(s)\}$ with $s\in[-1,0]$ as follows. For every $n\in\mathbb{N}$ the variable $\xi_{n}(s)$ is given by, cf. (\ref{S2E8a}), \begin{equation}\label{G1E9} s = -\int_{\xi_{n}(s) }^{-n}( 1+\eta^{2})^{\frac{4}{9}}d\eta\,,\quad n\in\mathbb{N} \,, \end{equation} then, each $\Phi_{n}(s)$ is defined by, cf. (\ref{S2E8}), \[ \Phi_{n}(s) =\left(1+|\xi_{n}(s)|^{2}\right)^{\frac{1}{3}}H(\xi_{n}(s)) \,,\quad s\leq0\,,\quad n\in\mathbb{N} \,. \] We observe that then \[ \Phi_{n}(s) =\Phi(s-s_{n}) \,,\quad\mbox{where} \quad s_{n} = \int_{-n}^{0}(1+\eta^{2})^{\frac{4}{9}}d\eta\,, \] where $\Phi$ solves (\ref{S2E9})-(\ref{S2E9bis}). Also, for every $s\in[-1,0]$ the corresponding sequence $\tau_{n}=s-s_{n}$ converges to $-\infty$ as $n\to\infty$, since $\lim_{n\to\infty}(s_{n}) =\infty$. On the other hand, the functions $\Phi_{n}(s)$ solve (cf. (\ref{S2E9})-(\ref{S2E9bis})) \begin{equation} \label{G2E1} \frac{d^{3}\Phi_{n}}{ds^{3}}=\frac{1}{\Phi_{n}^{3}} -\frac{\left( \xi_{n}^{2}+a\right) }{\xi_{n}^{2}+1} - F_{n}(s) \,, \quad n\in\mathbb{N}\,,\quad s\in[-1,0] \end{equation} where $F_{n}(s)$ is given by the expression of $F$ in (\ref{S2E9bis}) with $\Phi$ and $\xi$ replaced by $\Phi_{n}$ and $\xi_{n}$, respectively. Consider now the result of integrating (\ref{G2E1}): \begin{equation}\label{phi:n:inte} \begin{array}[c]{ll} \Phi_{n}(s) & = \displaystyle{ \Phi_{n}(0) +\frac{d\Phi_{n}(0)}{ds}s +\frac{d^{2}\Phi_{n}(0)}{ds^{2}}\frac{s^{2}}{2} }\\ & \\ & \displaystyle{ -\int_{s}^{0}\int_{s_{1}}^{0}\int_{s_{2}}^{0} \left[ \frac{1}{(\Phi_{n}(s_{3}))^{3}}-\frac{|\xi_{n}(s_{3})|^{2}+a}{|\xi_{n}(s_{3})|^{2}+1} \right] ds_{3}ds_{2}ds_{1} }\\ & \\ & \displaystyle{ -\int_{s}^{0}\int_{s_{1}}^{0}\int_{s_{2}}^{0}F_{n}(s_{3})ds_{3}ds_{2}ds_{1} }\,. \end{array} \end{equation} We now pass to the limit in the integral terms. Observe that the assumption (\ref{lim:1}) implies that $\lim_{n\to\infty}\Phi_{n}(s)=1$ uniformly on $[-1,0]$.
Moreover, (\ref{G1E9}) yields $\lim_{n\to\infty}\xi_{n}(s) =-\infty$ uniformly on $[-1,0]$. The first integral term in (\ref{phi:n:inte}) can be seen to converge to zero, since its integrand tends to $1-1=0$ by the limit properties of $\Phi_{n}(s)$ and $\xi_{n}(s)$. In the last term we integrate by parts where necessary in order to get integrands with $\Phi_{n}(s)$ as a coefficient (this gives boundary terms with a double or single integral, but these are estimated similarly, because $s\in[-1,0]$). The resulting integrands have $\Phi_{n}(s)$ multiplied by a function of $\xi_{n}(s)$ and its derivatives, which can be computed using (\ref{G1E9}): $d\xi_{n}(s)/ds = (|\xi_{n}(s)|^{2}+1)^{-4/9}$ and $d^{2}\xi_{n}(s)/ds^{2} = -\frac{8}{9}\,\xi_{n}(s)(|\xi_{n}(s)|^{2}+1)^{-17/9}$. Then, one can conclude that the last term in (\ref{phi:n:inte}) also tends to zero as $n\to\infty$, and we are left with \[ \lim_{n\to\infty}\left(\left|\frac{d\Phi_{n}(0)}{ds}\right| + \left|\frac{d^{2}\Phi_{n}(0)}{ds^{2}}\right|\right)=0\,, \] but this contradicts (\ref{G1E6}); hence (\ref{lim:1}) cannot hold. Then $\lim\inf_{\tau\to-\infty}\Phi(\tau)<\lim\sup_{\tau\to-\infty}\Phi(\tau)$. We can now construct a sequence that satisfies (\ref{G1E7}). We first introduce the quantity \[ \alpha:=\frac{1}{2} \left( \lim\sup_{\tau\to-\infty}\Phi(\tau) + \lim\inf_{\tau\to-\infty}\Phi(\tau)\right) \] (which might be infinite if $\lim\sup_{\tau\to-\infty}\Phi(\tau)=\infty$). Due to the continuity of $\Phi$, there exist decreasing sequences $\{\tilde{\tau}_{n}\}$ and $\{ \hat{\tau}_{n}\} $ such that $\lim_{n\to\infty}\tilde{\tau}_{n} =\lim_{n\to\infty}\hat{\tau}_{n}=-\infty$, $\tilde{\tau}_{n}<\hat{\tau}_{n}$ and that $\max_{\tau\in(\tilde{\tau}_{n},\hat{\tau}_{n}) }\Phi(\tau)\leq\alpha$. We define another sequence $\{\tau_{n}\}$ by \[ \Phi(\tau_{n}) =\min_{\tau\in[ \tilde{\tau}_{n},\hat{\tau}_{n}] }\Phi(\tau)\,, \] and this sequence satisfies (\ref{G1E7}).
\end{proof} We continue with another consequence of assuming (\ref{G1E6}), namely: \begin{lemma}\label{L3} Suppose that $(\Phi(\tau),W(\tau),\Psi(\tau),\theta(\tau))$ satisfies the assumptions of Proposition~\ref{char:max}. Then, at least one of the following identities holds: \[ \lim\inf_{\tau\to-\infty}\Phi(\tau) =0\,, \quad \lim\sup_{\tau\to-\infty}\Phi( \tau) =\infty\,. \] \end{lemma} \begin{proof}[Proof] We argue by contradiction. Suppose that \[ \lim\inf_{\tau\to-\infty}\Phi(\tau) >0 \quad\mbox{and} \quad \lim\sup_{\tau\to-\infty}\Phi(\tau) <\infty\,; \] then, there exist $C_{0}>0$ and $\tau_{0}$ sufficiently negative such that \begin{equation}\label{G2E2a} \frac{1}{C_{0}}\leq\Phi(\tau) \leq C_{0}\,,\quad\tau\leq\tau_{0}\,. \end{equation} Then, (\ref{G1E6}) implies the existence of a sequence $\{ \hat{\tau}_{n}\}$ with $\hat{\tau}_{n}\to-\infty$ and such that \begin{equation}\label{G2E2} \lim_{n\to\infty} \left( \left| \frac{d\Phi( \hat{\tau}_{n})}{d\tau}\right| +\left| \frac{d^{2}\Phi(\hat{\tau}_{n})}{d\tau^{2}}\right| \right) =\infty\,. \end{equation} We now define a sequence of functions $\Phi_{n}(z)$ by means of $ \Phi_{n}(z) =\Phi(z+\hat{\tau}_{n})$ and observe that they solve (\ref{S2E9})-(\ref{S2E9bis}) with the obvious changes in notation and with $\xi_{n}(z)$ defined by \[ z+\hat{\tau}_{n}=-\int_{\xi_{n}(z)}^{0}(1+\eta^{2})^{\frac{4}{9}}d\eta\,, \quad n\in\mathbb{N}\,. \] They also satisfy, due to (\ref{G2E2}), that \[ \lim_{n\to\infty} \left( \left| \frac{d\Phi_{n}( 0) }{dz}\right| +\left|\frac{d^{2}\Phi_{n}( 0) }{dz^{2}}\right| \right) = \infty\,. \] This allows us to introduce, for every $n$, the length scale \[ \gamma_{n}=\left( \left| \frac{d\Phi_{n}(0)}{dz} \right| + \sqrt{\left|\frac{d^{2}\Phi_{n}(0)}{dz^{2}}\right| } \right)^{-1}\,, \] which clearly satisfies $\lim_{n\to\infty} \gamma_{n}=0$.
We now set $z=\gamma_{n}\bar{z}$, $\bar{\xi}_{n}(\bar{z})=\xi_{n}(\gamma_{n}\bar{z})$, $\bar{\Phi}_{n}(\bar{z})=\Phi_{n}(\gamma_{n}\bar{z})$ and $\bar{F}_{n}(\bar{z})=F_{n}(\gamma_{n}\bar{z})$, to obtain that $\bar{\Phi}_{n}$ satisfies \[ \frac{d^{3}\bar{\Phi}_{n}}{d\bar{z}^{3}} = (\gamma_{n})^{3} \left[ \frac{1}{\bar{\Phi}_{n}^{3}} - \frac{\bar{\xi}_{n}^{2} + a }{\bar{\xi}_{n}^{2} +1}\right] + (\gamma_{n})^{3} \bar{F}_{n}(\bar{z}) \,,\quad n\in\mathbb{N}\,. \] It is clear that there exists a $\bar{C}>0$ such that for $n$ large enough \begin{equation}\label{init:bdd} \left| \left( \bar{\Phi}_{n}(0),\frac{d\bar{\Phi}_{n}(0)}{d\bar{z}}, \frac{d^{2}\bar{\Phi}_{n}(0)}{d\bar{z}^{2}}\right) \right| \leq \bar{C} \quad \mbox{and}\quad \left| \left( \frac{d\bar{\Phi}_{n}(0)}{d\bar{z}}, \frac{d^{2}\bar{\Phi}_{n}(0)}{d\bar{z}^{2}} \right) \right| \geq\frac{1}{\bar{C}}\,. \end{equation} We can now use classical continuous dependence results for ODEs. Let $\bar{\Phi}_{\infty}$ denote a function that solves the limiting problem $d^{3}\bar{\Phi}_{\infty}/d\bar{z}^{3}=0$ with initial conditions close to $(\bar{\Phi}_{n}(0),d\bar{\Phi}_{n}(0)/d\bar{z}, d^{2}\bar{\Phi}_{n}(0)/d\bar{z}^{2})$ for $n$ large enough, and thus also satisfying (\ref{init:bdd}) (with $n=\infty$); then \[ \bar{\Phi}_{n}(\bar{z}) \to\bar{\Phi}_{\infty}(\bar{z}) \quad\mbox{as}\ n\to\infty\,, \] as well as its derivatives, uniformly on compact sets of $\bar{z}$. Since $\bar{\Phi}_{\infty}$ is a polynomial of degree at most two that is not identically constant, there exist values of $\bar{z}$ such that either $\bar{\Phi}_{\infty}(\bar{z}) =0$ or $\bar{\Phi}_{\infty}(\bar{z})\geq 2C_{0}$; since $\bar{\Phi}_{n}$ converges to $\bar{\Phi}_{\infty}$ uniformly on compact sets, this contradicts (\ref{G2E2a}). \end{proof} The third auxiliary lemma is the following: \begin{lemma}\label{L4} Suppose that $(\Phi(\tau),W(\tau),\Psi(\tau),\theta(\tau))$ satisfies the assumptions of Proposition~\ref{char:max}. Then, \[ \lim\sup_{\tau\to-\infty}\Phi(\tau) =\infty\,. \]
\end{lemma} \begin{proof}[Proof] Suppose that \begin{equation}\label{G6E2} \lim\sup_{\tau\to-\infty}\Phi(\tau) <\infty\,. \end{equation} Then, due to Lemma~\ref{L3}, we have $\lim\inf_{\tau\to-\infty}\Phi( \tau) =0$, and there exists a sequence of points $\{\tau_{n}\}$ such that (\ref{G1E7}) holds and $\lim_{n\to\infty}\Phi(\tau_{n}) =0$. For every $n$ we introduce the changes of variables \begin{equation}\label{G4E2a} \varepsilon_{n}:=\Phi(\tau_{n}) \,,\quad s= \varepsilon_{n}^{-\frac{4}{3}}( \tau-\tau_{n}) \quad\mbox{and}\quad\Phi(\tau) =\varepsilon_{n} \varphi(s) \,. \end{equation} Observe that $\xi_{n}$ is related to $\tau_{n}$ by means of (\ref{S2E8}); in particular, if $|\tau-\tau_{n}|$ remains bounded as $n \to\infty$, then $|\xi|\to\infty$ as $n\to\infty$, a fact that we shall apply below. We write (\ref{S2E9}) in the new variables; then $\varphi$ solves \begin{equation}\label{G4E2} \frac{d^{3}\varphi}{ds^{3}} = \frac{1}{\varphi^{3}} - \varepsilon_{n}^{3} \frac{\xi^{2}+a}{\xi^{2}+1} - \varepsilon_{n}^{\frac{4}{3}} F_{n}(s) \end{equation} with (cf. (\ref{S2E9bis})) \begin{eqnarray*} F_{n}(s) &=& \varepsilon_{n}^{\frac{8}{3}} \frac{16}{3}\frac{\xi}{(\xi^{2}+1)^{\frac{10}{3}}} \left[1-\frac{14}{9}\frac{\xi^{2}}{ \xi^{2}+1} \right] \varphi(s) \\ & + & \varepsilon_{n}^{\frac{4}{3}} \frac{1}{(\xi^{2}+1)^{\frac{17}{9}}}\left[ \frac{208}{81}\frac{\xi^{2}}{\xi^{2}+1} -\frac{10}{9}\right] \frac{d\varphi(s)}{ds} + \frac{2}{3}\frac{\xi}{(\xi^{2}+1)^{\frac{13}{9}}} \frac{d^{2}\varphi(s)}{ds^{2}}\,, \end{eqnarray*} and subject to \begin{equation}\label{G4E3} \varphi(0) =1\,,\quad\frac{d\varphi(0)}{ds}=0\,,\quad\frac{d^{2}\varphi(0) }{ds^{2}}\geq0\,. \end{equation} The coefficients involving $\xi$ are functions of $s$, since $\xi=\xi(\tau)=\xi(\tau_{n}+\varepsilon_{n}^{\frac{4}{3}}s)$.
Then, since $\varepsilon_{n}\to0$, we can use classical continuous dependence results for ODEs to approximate the solutions of (\ref{G4E2})-(\ref{G4E3}) by the solutions $\bar{\varphi}$ of the limiting problem (with $\varepsilon_{n}=0$) \begin{align*} \frac{d^{3}\bar{\varphi}}{ds^{3}} - \frac{1}{\bar{\varphi}^{3}} & =0\\ \bar{\varphi}(0) & =1\,,\quad\frac{d\bar{\varphi}(0) }{ds}=0 \,, \quad\frac{d^{2}\bar{\varphi}(0) }{ds^{2}}=\frac{d^{2}\varphi(0)}{ds^{2}} \geq0 \end{align*} on compact sets of $s$. Clearly, $d^{2}\bar{\varphi}(s)/ds^{2}$ is increasing for $s>0$, and there exists a $\delta>0$ such that $d^{2}\bar{\varphi}(s)/ds^{2} \geq \delta>0$ for all $s\geq 1$. Thus, $d\bar{\varphi}(s)/ds$ increases at least linearly for $s$ large enough and one can find a value $s_{0}>0$ such that $d\bar{\varphi}(s_{0})/ds \geq 2$. Continuous dependence results then imply that $d\varphi(s_{0})/ds \geq 1$ if $n$ is sufficiently large (see (\ref{G4E2})). We now return to the original variables and get estimates on $\Phi(\tau)$ for every $n$ in an interval around the local minimum. Using (\ref{G4E2a}), (\ref{G6E2}) and the notation \[ \bar{\tau}_{n}=\tau_{n}+\varepsilon_{n}^{\frac{4}{3}} s_{0}\,, \] we obtain that \[ \lim_{n\to\infty} \Phi(\bar{\tau}_{n})=0\,,\quad \lim_{n\to\infty} \frac{d\Phi(\bar{\tau}_{n})}{d\tau} =\infty\,,\quad\lim_{n\to\infty}\frac{d^{2}\Phi(\bar{\tau}_{n})}{d\tau^{2}} =\infty\,. \] For $n$ large enough and $\tau$ such that $|\tau-\bar{\tau}_{n}|\leq1$, there exists a positive sequence $\{B_{n}\}$ with $\lim_{n\to\infty}B_{n}=\infty$ and such that \[ B_{n}<\frac{1}{\Phi(\tau)^{3}}-\frac{\xi(\tau)^{2}+a}{\xi(\tau)^{2}+1}<B_{n+1}\,
\] Let us denote by \[ G_{n}(\tau)= \int_{\bar{\tau}_{n}}^{\tau}\int_{\bar{\tau}_{n}}^{\tau_{1}} \int_{\bar{\tau}_{n}}^{\tau_{2}} F(\tau_{3}) d\tau_{3}d\tau_{2}d\tau_{1}\,, \] where the function $F(\tau)$ is given by (\ref{S2E9bis}), and let also \begin{align*} p_{n}(\tau)= \Phi(\bar{\tau}_{n})+\frac{d\Phi(\bar{\tau}_{n})}{d\tau} (\tau-\bar{\tau}_{n}) +\frac{d^{2}\Phi(\bar{\tau}_{n})}{d\tau^{2}}\frac{(\tau-\bar{\tau}_{n})^{2}}{2}\,. \end{align*} Then, integrating (\ref{S2E9}) we can write for $\tau\in[\bar{\tau}_{n}-1,\bar{\tau}_{n}+1]$ that \begin{equation}\label{G5E1} \frac{B_{n}}{3}\frac{d^{k}(\tau-\bar{\tau}_{n})^{3}}{d\tau^{k}}<\frac{d^{k}(\Phi-p_{n}-G_{n})(\tau)}{d\tau^{k}} < \frac{B_{n+1}}{3}\frac{d^{k}(\tau-\bar{\tau}_{n})^{3}}{d\tau^{k}} \,, \quad k=0,1,2\,. \end{equation} Then, taking $n$ large enough and combining the inequalities (\ref{G5E1}) we obtain that \begin{align} |\Phi(\tau)| < \Phi(\bar{\tau}_{n}) + 2 \frac{d\Phi(\bar{\tau}_{n})}{d\tau} |\tau-\bar{\tau}_{n} | + 2 \frac{d^{2}\Phi(\bar{\tau}_{n})}{d\tau^{2}} |\tau-\bar{\tau}_{n} |^{2} + |G_{n}(\tau)|\,,\nonumber\\ \left| \frac{d\Phi(\tau)}{d\tau} \right| < 2 \frac{d\Phi(\bar{\tau}_{n})}{d\tau} + 2 \frac{d^{2}\Phi(\bar{\tau}_{n})}{d\tau^{2}} |\tau-\bar{\tau}_{n} | + \left| \frac{dG_{n}}{d\tau} \right| \,, \label{G5E1tris} \\ \left| \frac{d^{2}\Phi(\tau)}{d\tau^{2}}\right| < 2\frac{d^{2}\Phi(\bar{\tau}_{n})}{d\tau^{2}} + \left|\frac{d^{2}G_{n}}{d\tau^{2}} \right| \,, \nonumber \end{align} and that \begin{equation}\label{low:bdd} \Phi(\tau)\geq \Phi(\bar{\tau}_{n}) + \frac{d\Phi(\bar{\tau}_{n})}{d\tau} (\tau-\bar{\tau}_{n}) + \frac{d^{2}\Phi(\bar{\tau}_{n})}{d\tau^{2}} \frac{(\tau-\bar{\tau}_{n})^{2}}{2} + \bar{B}_{n}(\tau-\bar{\tau}_{n})^{3} + G_{n}(\tau) \end{equation} for some sequence $\bar{B}_{n}>0$ with $\lim_{n\to\infty} \bar{B}_{n}=\infty$ and for all $\tau\in[\bar{\tau}_{n}-1, \bar{\tau}_{n}+1]$.
We now observe that $F$ has the form $F(\tau) = f_{1}(\tau) \Phi(\tau) +f_{2}(\tau) d \Phi(\tau)/d\tau+f_{3}(\tau) d^{2}\Phi(\tau)/d\tau^{2}$, where the functions $f_{1}(\tau)$, $f_{2}(\tau)$ and $f_{3}(\tau)$ converge to zero as $n\to\infty$, uniformly on the sets $\left| \tau-\bar{\tau}_{n}\right|\leq1$ (cf. (\ref{S2E8}) and (\ref{S2E9bis})). This allows us to get estimates on the integral terms (those involving $G_{n}$) as follows: \[ |G_{n}(\tau)| \,, \ \left| \frac{dG_{n}(\tau)}{d\tau}\right|\,,\ \left| \frac{d^{2} G_{n}(\tau)}{d\tau^{2}} \right| \leq \tilde{\varepsilon}_{n} \left( \sup_{|\tau-\bar{\tau}_{n}|\leq1}\Phi(\tau) + \sup_{|\tau-\bar{\tau}_{n}|\leq1}\left| \frac{d\Phi(\tau)}{d\tau}\right| + \sup_{|\tau-\bar{\tau}_{n}|\leq1}\left| \frac{d^{2}\Phi(\tau)}{d\tau^{2}}\right| \right) \] for all $n$ and all $\tau$ such that $|\tau-\bar{\tau}_{n}|\leq 1$, where $\tilde{\varepsilon}_{n}\to 0^{+}$ as $n\to\infty$. Applying this to (\ref{G5E1tris}) and to (\ref{low:bdd}) we obtain, taking $n$ sufficiently large, that \[ \Phi(\tau) \geq\frac{d\Phi(\bar{\tau}_{n})}{d\tau}\frac{(\tau-\bar{\tau}_{n})}{2} +\frac{d^{2}\Phi(\bar{\tau}_{n})}{d\tau^{2}}\frac{(\tau-\bar{\tau}_{n})^{2}}{4}+\bar{B}_{n}(\tau-\bar{\tau}_{n})^{3} \] for $\left| \tau-\bar{\tau}_{n}\right| \leq1$. Choosing, say, $\tau-\bar{\tau}_{n}=1$, we obtain that $\lim_{n\to\infty}\Phi(\bar{\tau}_{n}+1)=\infty$, but this contradicts (\ref{G6E2}), whence the lemma follows. \end{proof} We are now ready to prove Proposition~\ref{char:max}. \begin{proof}[Proof of Proposition~\ref{char:max}] The assumptions imply that we can use the statements of lemmas~\ref{L2} and \ref{L4}. In particular, by Rolle's theorem, we can guarantee the existence of local maxima in each interval $(\tau_{n+1},\tau_{n})$, where $\{\tau_{n}\}$ is the sequence of minima defined in Lemma~\ref{L2}. We then observe that the regularity of a solution $\Phi$ of (\ref{S2E9}) guarantees that the points at which $\Phi$ attains local maxima or minima are isolated.
Otherwise $\Phi$ would take constant values on closed intervals, but constants are not solutions of (\ref{S2E9}). Hence, we can define the sequence $\{\tau_{n}^{\ast}\}$ so that (\ref{def:max}) holds. On the other hand, Lemma~\ref{L4} implies (\ref{G5E3}). \end{proof} \section{Properties of oscillatory solutions}\label{sec:proof:osci} In this section we prove the following proposition: \begin{proposition}\label{max:increase} Let the assumptions of Proposition~\ref{char:max} hold and let $\{\tau_{n}^{\ast}\}$ be the sequence found in this proposition. Then there exists $n_{0}\in\mathbb{N}$ sufficiently large such that for all $n>n_{0}$ \begin{equation}\label{G5E4} \Phi( \tau_{n-1}^{\ast}) > \Phi( \tau_{n}^{\ast}) \,. \end{equation} \end{proposition} We observe that this result is in contradiction with (\ref{G5E3}). We now define a sequence $\{\tau_{n}^{min}\}$ as follows: \begin{equation}\label{def:min} \Phi(\tau_{n}^{min})=\min_{\tau\in(\tau_{n}^{\ast},\tau_{n-1}^{\ast})} \Phi(\tau)\,, \quad \frac{d\Phi(\tau_{n}^{min})}{d\tau}=0\,, \quad \frac{d^{2}\Phi(\tau_{n}^{min})}{d\tau^{2}}\geq 0\,, \end{equation} i.e. $\Phi$ attains its minimum over the interval $(\tau_{n}^{\ast},\tau_{n-1}^{\ast})$ at $\tau=\tau_{n}^{min}$. As part of the construction necessary to prove (\ref{G5E4}), it will also follow that: \begin{proposition}\label{min:decrease} Let the assumptions of Proposition~\ref{char:max} hold. Then the sequence $\{\tau_{n}^{min}\}$ given in (\ref{def:min}) is well-defined, $\lim_{n\to\infty}\tau_{n}^{min}=-\infty$ and there exists $n_{0}$ such that for all $n>n_{0}$ \begin{equation}\label{G5E5} \Phi(\tau_{n-1}^{min})<\Phi(\tau_{n}^{min}) \,. \end{equation} \end{proposition} The proofs of these propositions are divided into several steps that we outline below for clarity. We first identify a two-parameter family of polynomials that approximate $H$ near a large maximum of $\Phi$.
Most of this analysis is carried out in Appendix~\ref{sec:polynomials}, where we identify and give some properties of the polynomials that solve (\ref{ecuacionminima1}) (with the reverse sign). Then, for each $n$ and around $\tau_{n}^{\ast}$, we identify a length scale that transforms these polynomials into polynomials of order one. We then translate the properties obtained to the rescaled polynomials. We also rescale accordingly the function $H$ near each $\xi_{n}^{\ast}$, defined by (\ref{S2E8a}) for $\tau=\tau_{n}^{\ast}$, and state the approximation lemma, which in particular implies that $H$ approaches $0$ in a linearly decreasing way. Next we adapt the {\it matching lemma}, Lemma~\ref{MatchingLemma} in Appendix~\ref{sec:control}, which gives the behaviour of the solutions in the inner regions near each $\xi_{n}^{min}=\xi(\tau_{n}^{min})$. From this result we can conclude that the approximating polynomial in the outer region must have a double zero in order to match. This, in particular, reduces the class of approximating polynomials to a one-parameter family. We finally derive an iterative relation between the elements of the sequence $\{\Phi(\tau_{n}^{\ast})\}$ for $n$ large enough, which implies Proposition~\ref{max:increase}, as well as a relation for the elements of $\{\Phi(\tau_{n}^{min})\}$ that implies Proposition~\ref{min:decrease}.
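The growth mechanism used in the proof of Lemma~\ref{L4} can be illustrated numerically: for the limiting problem $d^{3}\bar{\varphi}/ds^{3}=\bar{\varphi}^{-3}$ with minimum-type data, $d^{2}\bar{\varphi}/ds^{2}$ never decreases, so $d\bar{\varphi}/ds$ eventually exceeds any fixed threshold. The following self-contained Python sketch (an illustration only, not part of the proofs; the step size and the interval $[0,5]$ are arbitrary choices) integrates this problem with a classical fourth-order Runge-Kutta scheme:

```python
def rhs(y):
    # y = (phi, phi', phi''); limiting equation phi''' = 1 / phi^3
    phi, dphi, d2phi = y
    return (dphi, d2phi, 1.0 / phi ** 3)

def rk4_step(y, h):
    # one classical RK4 step for the first-order system y' = rhs(y)
    k1 = rhs(y)
    k2 = rhs(tuple(y[i] + 0.5 * h * k1[i] for i in range(3)))
    k3 = rhs(tuple(y[i] + 0.5 * h * k2[i] for i in range(3)))
    k4 = rhs(tuple(y[i] + h * k3[i] for i in range(3)))
    return tuple(y[i] + h * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]) / 6.0
                 for i in range(3))

# minimum-type data: phi(0) = 1, phi'(0) = 0, phi''(0) = 0
y = (1.0, 0.0, 0.0)
h = 1.0e-3
trace = []
for _ in range(int(5.0 / h)):      # integrate on [0, 5]
    y = rk4_step(y, h)
    trace.append(y)

phi, dphi, d2phi = y
# phi'' never decreases (since phi''' > 0), and phi' exceeds the
# threshold 2 used to select s_0 in the proof of Lemma L4
assert all(b[2] >= a[2] for a, b in zip(trace, trace[1:]))
assert dphi >= 2.0
```

With these data $d\bar{\varphi}/ds$ passes the value $2$ well inside $[0,5]$, consistent with the choice of $s_{0}$ in the proof.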
\subsection{The outer variables and the auxiliary polynomials}\label{sec:aux} Given the sequence $\{\tau_{n}^{\ast}\}$ found in Proposition~\ref{char:max}, see (\ref{def:max}), and the sequence of local minima $\{\tau_{n}^{min}\}$ defined in (\ref{def:min}), we define the sequences $\{\xi_{n}^{\ast}\}$, $\{\xi_{n}^{min}\}$, $\{M_{n}\}$ and $\{\beta_{n}\}$ by means of (see (\ref{H6}) and (\ref{S4E3})): \begin{equation}\label{S5E2} \tau_{n}^{\ast}=\int_{0}^{\xi_{n}^{\ast}}(\eta^{2}+1)^{\frac{4}{9}}d\eta\,, \quad \tau_{n}^{min}=\int_{0}^{\xi_{n}^{min}}(\eta^{2}+1)^{\frac{4}{9}}d\eta\,, \end{equation} \begin{equation}\label{S5E3} M_{n}=\frac{1}{|\xi_{n}^{\ast}|^{\frac{17}{3}}}\,\Phi(\tau_{n}^{\ast})\,,\quad \beta_{n}=\frac{((\xi_{n}^{\ast})^{2}+1)^{\frac{8}{9}}}{|\xi_{n}^{\ast}|^{\frac{11}{3}}}\, \frac{d^{2}\Phi(\tau_{n}^{\ast})}{d\tau^{2}}\,. \end{equation} Observe that the definition of $\tau_{n}^{\ast}$ implies that $\beta_{n}<0$, that $\lim_{n\to\infty}\xi_{n}^{\ast}=-\infty$ and that $\lim_{n\to\infty}\xi_{n}^{min}=-\infty$. Observe that $M_{n}$ is the value of the maximum of $\Phi$ at each $\tau_{n}^{\ast}$, rescaled appropriately with the position of the maximum in the variable $\xi$, $\xi_{n}^{\ast}$ (this scaling near a maximum reflects the facts that $\Phi\sim|\xi|^{\frac{2}{3}}H$ there and that $H$ is approximated by a fifth order polynomial, cf. (\ref{P:Z:eq}) and (\ref{S4E3}) in Appendix~\ref{sec:polynomials}). The definition of $\beta_{n}$ results from similar considerations, but it is a parameter that captures the value of the second derivative of $\Phi$ at $\tau_{n}^{\ast}$. Following Appendix~\ref{sec:polynomials}, associated to every $\tau_{n}^{\ast}$ we construct the two-parameter family of polynomials \[ P(Z_{n};M_{n},\beta_{n}) \quad\mbox{with}\quad Z_{n}=\frac{\xi}{\xi_{n}^{\ast}} \] that solve (\ref{P:Z:eq}) with $Z$ replaced by $Z_{n}$, and that are given by (\ref{S4E4}). We recall that they satisfy $P(1;M_{n},\beta_{n})=M_{n}$.
We shall see later that these polynomials are close to $-(\xi_{n}^{\ast})^{-5}H(\xi)$ for $\xi$ close to $\xi_{n}^{\ast}$ in an interval contained in $[\xi_{n+1}^{min},\xi_{n}^{min}]$. Thus we have to consider $Z_{n}$ in some interval containing $Z_{n}=1$ where $P$ stays positive. Moreover, since $\xi_{n}^{\ast}\to -\infty$, the approximation will be applicable for $Z_{n}>0$ only. In this regard, for each $n$, we have derived a number of properties that are outlined in lemmas~\ref{Pol2}, \ref{Pol4}, \ref{Pol5} and \ref{new:pol}. These give, in particular, that the largest root of $P(Z_{n};M_{n},\beta_{n})$ in $Z_{n}<1$ is located at a value $Z_{n}=Z_{0}(M_{n},\beta_{n})$ for every $M_{n}>0$ and every $\beta_{n}<0$. It is also shown that for every $M_{n}>0$ there exists a unique value $\beta_{n}=\beta_{\ast}(M_{n})$ such that $P(Z_{n};M_{n},\beta_{\ast}(M_{n}))$ has a double zero at some $Z_{n}=Z_{\ast}(M_{n})>1$. In these lemmas the asymptotic behaviour as $M_{n}\to 0$ and as $M_{n}\to\infty$ of $\beta_{\ast}(M_{n})$, $Z_{\ast}(M_{n})$ and $Z_{0}(M_{n})=Z_{0}(M_{n},\beta_{\ast}(M_{n}))$ is also given. But, as we shall see later (and assume from now on), the sequence $\{ M_{n}\}$ is bounded. Taking these considerations into account, we now introduce a rescaling of $P(Z_{n};M_{n},\beta_{n})$ for every $n$ in order to have values of order one in the relevant range of parameters. Namely, we set \begin{equation} \label{S6E1} \bar{P}(\zeta_{n};M_{n},\beta_{n}) =\frac{P(Z_{n};M_{n},\beta_{n})}{M_{n}}\,,\quad \zeta_{n}=-\frac{1}{M_{n}^{\frac{1}{3}}}\left( \frac{\xi}{\xi_{n}^{\ast}}-1\right)\,. \end{equation} Observe that now the variable $\zeta_{n}$ is meaningful in an interval around $\zeta_{n}=0$ and with $\zeta_{n}<1/M_{n}^{1/3}$. We note that $M_{n}^{\frac{1}{3}}$ is a characteristic length scale which measures the distance between $Z_{n}=1$ and $Z_{n}=Z_{\ast}(M_{n})$, relevant if $M_{n}$ is very small, see Lemma~\ref{Pol4}.
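The statements of Lemma~\ref{AppPolynomials} below can also be checked directly from the explicit expression (\ref{S5E6}). The following self-contained Python sketch (an illustration only, not part of the argument; the sample values of $M$ and $\beta$ are arbitrary choices) encodes $\bar{P}(\cdot\,;M,\beta)$ as a coefficient list and verifies the limiting equation (\ref{S6E6}) and the initial data (\ref{S6E62}) up to rounding errors:

```python
def poly_add(p, q):
    # add two coefficient lists (index i holds the coefficient of zeta**i)
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def poly_diff(p):
    # coefficient list of the derivative of sum_i p[i] * zeta**i
    return [i * p[i] for i in range(1, len(p))]

def check(M, beta):
    m = M ** (1.0 / 3.0)
    # coefficients of \bar P(zeta; M, beta), cf. (S5E6):
    #   -(zeta^3/60)(m^2 zeta^2 - 5 m zeta + 10)
    #   + (5/9) m^2 zeta^2 + (2/3) m zeta + 1 + (beta/m) zeta^2 / 2
    P = poly_add([1.0, 2.0 * m / 3.0, 5.0 * m * m / 9.0 + beta / (2.0 * m)],
                 [0.0, 0.0, 0.0, -10.0 / 60.0, 5.0 * m / 60.0, -m * m / 60.0])
    # limiting equation (S6E6): P''' + (1 - m zeta)^2 = 0
    P3 = poly_diff(poly_diff(poly_diff(P)))
    target = [-1.0, 2.0 * m, -m * m]     # coefficients of -(1 - m zeta)^2
    assert all(abs(a - b) < 1e-12 for a, b in zip(P3, target))
    # initial data (S6E62): P(0) = 1, P'(0) = 2m/3, P''(0) = beta/m + 10 m^2/9
    assert abs(P[0] - 1.0) < 1e-12
    assert abs(poly_diff(P)[0] - 2.0 * m / 3.0) < 1e-12
    assert abs(poly_diff(poly_diff(P))[0]
               - (beta / m + 10.0 * m * m / 9.0)) < 1e-12

for M, beta in [(0.1, -0.5), (1.0, -1.0), (4.0, -0.2)]:
    check(M, beta)
```

Since the third derivative of a quintic is computed termwise on the coefficient list, the check is exact up to floating-point rounding.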
We also observe that the polynomials $\bar{P}(\zeta_{n};M_{n},\beta_{n})$ are explicitly given by \begin{equation}\label{S5E6} \bar{P}(\zeta_{n};M_{n},\beta_{n}) = -\frac{\zeta_{n}^{3}}{60} \left( M_{n}^{\frac{2}{3}}\zeta_{n}^{2} -5 M_{n}^{\frac{1}{3}}\zeta_{n}+10 \right) + \left( \frac{5}{9}M_{n}^{\frac{2}{3}}\zeta_{n}^{2} + \frac{2}{3} M_{n}^{\frac{1}{3}}\zeta_{n} +1\right) + \frac{\beta_{n}}{M_{n}^{\frac{1}{3}}}\frac{\zeta_{n}^{2}}{2}\,. \end{equation} It is natural to define the following values of $\zeta_{n}$: \begin{equation} \label{min:zeta:root} \zeta_{0}(M_{n},\beta_{n})= -\frac{Z_{0}(M_{n},\beta_{n})-1}{M_{n}^{\frac{1}{3}}}\,, \end{equation} (see Lemma~\ref{Pol5}, (\ref{max:Zroot})), thus clearly $\zeta_{0}(M_{n},\beta_{n})=\min\{\zeta_{n}>0:\ \bar{P}(\zeta_{n};M_{n},\beta_{n})=0\}$. And if $\beta_{n}=\beta_{\ast}(M_{n})$ we define also \begin{equation}\label{S5E8} \zeta_{\ast}(M_{n})=-\frac{Z_{\ast}(M_{n})-1}{M_{n}^{\frac{1}{3}}}\,, \end{equation} therefore $\bar{P}$ has a double zero at this value (see Lemma~\ref{Pol2}). When $\beta_{n}=\beta_{\ast}(M_{n})$ and for simplicity of notation, we shall write \[ \bar{P}(\zeta_{n};M_{n})=\bar{P}(\zeta_{n};M_{n},\beta_{\ast}(M_{n}) ) \] and \[ \zeta_{0}(M_{n})=\zeta_{0}(M_{n},\beta_{\ast}(M_{n}))\,. \] The following result follows easily: \begin{lemma}\label{AppPolynomials} For every $n$ and $M_{n}$ the polynomial $\bar{P}(\zeta_{n};M_{n},\beta_{n})$ solves \begin{equation}\label{S6E6} \frac{d^{3}\bar{P}(\zeta;M_{n},\beta_{n})}{d\zeta^{3}}+(1-M_{n}^{\frac{1}{3}}\zeta)^{2}=0 \end{equation} with initial conditions \begin{equation}\label{S6E62} \bar{P}(0;M_{n},\beta_{n})=1\,,\quad\frac{d\bar{P}(0;M_{n},\beta_{n})}{d\zeta} =\frac{2M_{n}^{\frac{1}{3}}}{3}\,, \quad\frac{d^{2}\bar{P}(0;M_{n},\beta_{n})}{d\zeta^{2}} =\frac{\beta_{n}}{M_{n}^{\frac{1}{3}}}+\frac{10M_{n}^{\frac{2}{3}}}{9}\,. 
\end{equation} \end{lemma} We now reformulate the results of Appendix~\ref{sec:polynomials} for these approximating functions: \begin{lemma}\label{scaled:pols} Let $\bar{P}(\zeta_{n};M_{n},\beta_{n})$ be given by (\ref{S5E6}). These polynomials satisfy the following: if $\beta_{n}>\beta_{\ast}(M_{n})$, then $\bar{P}(\zeta_{n};M_{n},\beta_{n})>0$ in $\zeta_{n}<0$, and if $\beta_{n}<\beta_{\ast}(M_{n})$, then there are two zeros of $\bar{P}(\zeta_{n};M_{n},\beta_{n})$ in $\zeta_{n}<0$. The derivative of $\bar{P}(\zeta_{n};M_{n},\beta_{n})$ with respect to $\zeta_{n}$ is positive at the largest root in $\zeta_{n}<0$. If $\beta_{n}=\beta_{\ast}(M_{n})$ there is only one double zero in $\zeta_{n}<0$, and it is located at $\zeta_{n}=\zeta_{\ast}(M_{n})$. Moreover, \begin{equation}\label{zeta:gamma:M0} \zeta_{\ast}(M_{n})\sim -12^{\frac{1}{3}}(1+M_{n}^{\frac{1}{3}})\,,\quad \beta_{\ast}(M_{n})\sim -\left(\frac{3M_{n}}{2}\right)^{\frac{1}{3}} \quad\mbox{as}\quad M_{n} \to 0 \end{equation} and \begin{equation}\label{zeta:gamma:M0II} \frac{\partial^{2}\bar{P}(\zeta_{\ast}(M_{n});M_{n})}{\partial\zeta_{n}^{2}} \sim\left( \frac{3}{2}\right)^{\frac{1}{3}}\quad \mbox{as}\quad M_{n}\to 0\,. \end{equation} Also, the value (\ref{min:zeta:root}) is well-defined for every $n$ and $M_{n}$, and if $\beta_{n}=\beta_{\ast}(M_{n})$, then \begin{equation}\label{zeta:plus:M0} \zeta_{0}(M_{n})\sim\left(\frac{3}{2}\right)^{\frac{1}{3}}\,,\quad \frac{\partial\bar{P}(\zeta_{0}(M_{n});M_{n})}{\partial\zeta_{n}} \sim-\left( \frac{3}{2}\right)^{\frac{5}{3}}\quad\mbox{as}\quad M_{n}\to 0\,. \end{equation} \end{lemma} Finally, we also have: \begin{lemma}\label{new:pol:rescale} The value $\zeta_{0}(M_{n},\beta_{n})$ is the unique root of $\bar{P}(\zeta_{n};M_{n},\beta_{n})$ in $\zeta_{n} \geq 0$.
Moreover, if $\zeta_{0}(M_{n},\beta_{n}) \leq 2/M_{n}^{\frac{1}{3}}$, there exists a positive constant $c_{0}$ independent of $M_{n}$ and $\beta_{n}$ such that \[ \frac{d\bar{P}(\zeta_{0}(M_{n},\beta_{n});M_{n},\beta_{n}) }{d\zeta}\leq - c_{0}\max\{1,M_{n}^{\frac{1}{3}}\} \,. \] \end{lemma} \begin{remark} We point out that the case $M_{n} \to 0$ corresponds to the approximating polynomials obtained for (\ref{ODEtapas}) in \cite{CV}. The asymptotics (\ref{zeta:plus:M0}) are in agreement with this observation. \end{remark} \subsection{The sequence of rescaled $H(\xi)$ near each $\xi_{n}^{\ast}$}\label{sec:rescale:H} In order to compare $H$ with a polynomial $\bar{P}(\zeta_{n};M_{n},\beta_{n})$ we need to apply the scaling (\ref{S6E1}) to $H$ around $\xi=\xi_{n}^{\ast}$. We then obtain: \begin{lemma}\label{H:rescaling} Let us assume that $\Phi$ satisfies the assumptions of Proposition~\ref{char:max}, so that the sequence (\ref{def:max}) is well-defined. Let $H(\xi)$ be the solution of (\ref{S2E3}) related to $\Phi$ by means of (\ref{S2E8a}) and (\ref{S2E8}).
Let the sequence of functions $\{\mathcal{H}_{n}(\zeta_{n})\,, \ \zeta_{n}\in\mathbb{R}\}$ be defined by \begin{equation}\label{S6E2} H(\xi)=|\xi_{n}^{\ast}|^{5}M_{n}\mathcal{H}_{n}(\zeta_{n})\,,\quad \zeta_{n}=-\frac{1}{M_{n}^{\frac{1}{3}}}\left( \frac{\xi}{\xi_{n}^{\ast}}-1\right) \,, \end{equation} then, for each $n$, $\mathcal{H}_{n}$ solves \begin{equation} \label{S6E2a} \frac{d^{3}\mathcal{H}_{n}}{d\zeta_{n}^{3}}+R_{n}(\zeta_{n};M_{n}) =\frac{\delta_{n}}{(\mathcal{H}_{n})^{3}}\,, \end{equation} where \begin{equation} \label{parameters} \delta_{n}=\frac{1}{|\xi_{n}^{\ast}|^{17}M_{n}^{3}}\,,\quad R_{n}(\zeta_{n};M_{n}) =\left( (1-M_{n}^{\frac{1}{3}}\zeta_{n})^{2}+\frac{a}{(\xi_{n}^{\ast})^{2}}\right) \,, \end{equation} with initial conditions \begin{equation}\label{S6E4} \mathcal{H}_{n}(0) =\frac{|\xi_{n}^{\ast}|^{\frac{2}{3}}}{(1+|\xi_{n}^{\ast}|^{2})^{\frac{1}{3}}}\,, \quad\frac{d\mathcal{H}_{n}(0)}{d\zeta_{n}} =\frac{2}{3}M_{n}^{\frac{1}{3}}\frac{|\xi_{n}^{\ast}|^{\frac{8}{3}}}{(|\xi_{n}^{\ast}|^{2}+1)^{\frac{4}{3}}} \end{equation} and \begin{equation} \label{S6E5} \frac{d^{2}\mathcal{H}_{n}(0)}{d\zeta_{n}^{2}} =\frac{\beta_{n}}{M_{n}^{\frac{1}{3}}}\frac{|\xi_{n}^{\ast}|^{\frac{2}{3}}}{(|\xi_{n}^{\ast}|^{2}+1)^{\frac{1}{3}}} +\frac{2M_{n}^{\frac{2}{3}}}{3} \left( \frac{\left( \frac{5}{3}|\xi_{n}^{\ast}|^{2}-1\right) |\xi_{n}^{\ast}|^{\frac{8}{3}}}{(|\xi_{n}^{\ast}|^{2}+1)^{\frac{7}{3}}}\right) \,. \end{equation} \end{lemma} \begin{proof}[Proof] That each $\mathcal{H}_{n}$ solves (\ref{S6E2a}) follows by changing variables in (\ref{S2E3}). The initial conditions follow from (\ref{def:max}) and (\ref{S5E3}). \end{proof} The following lemma will be used below to approximate the functions $\mathcal{H}_{n}(\cdot)$ by polynomials $\bar{P}(\cdot;M_{n},\beta_{n})$. \begin{lemma}\label{ApproximationLemma} For each $n$ let $\mathcal{H}_{n}(\zeta_{n})$ solve (\ref{S6E2a})-(\ref{S6E5}).
Then, for every $\varepsilon>0$ there exists $n_{0}\in\mathbb{N}$ such that for all $n>n_{0}$ the estimates \begin{equation}\label{W1E9} \mathcal{H}_{n}(\zeta_{n})\geq \frac{ \zeta_{0}(M_{n},\beta_{n})-\zeta_{n}}{2\zeta_{0}(M_{n},\beta_{n})} \,, \end{equation} \begin{align*} \left| \mathcal{H}_{n}(\zeta_{n})-\bar{P}(\zeta_{n};M_{n},\beta_{n})\right| & \leq \varepsilon\left| \bar{P}(\zeta_{n};M_{n},\beta_{n})\right| \\ \left|\frac{d\mathcal{H}_{n}(\zeta_{n}) }{d\zeta_{n}} -\frac{d\bar{P}(\zeta_{n};M_{n},\beta_{n})}{d\zeta_{n}}\right| & \leq\varepsilon\left(\left| \frac{d\bar{P}(\zeta_{n};M_{n},\beta_{n})}{d\zeta_{n}}\right|+1\right) \\ \left| \frac{d^{2}\mathcal{H}_{n}(\zeta_{n})}{d\zeta_{n}^{2}} -\frac{d^{2}\bar{P}(\zeta_{n};M_{n},\beta_{n})}{d\zeta_{n}^{2}}\right| & \leq\varepsilon\left(\left| \frac{d^{2}\bar{P}(\zeta_{n};M_{n},\beta_{n})}{d\zeta_{n}^{2}}\right| +1\right) \end{align*} hold in $\zeta_{n}>0$ as long as \[ \frac{\zeta_{0}(M_{n},\beta_{n})-\zeta_{n}}{2\zeta_{0}(M_{n},\beta_{n})} \geq \varepsilon\,. \] \end{lemma} \begin{proof}[Proof] The proof of this result is a standard bootstrap argument similar to those that have been used repeatedly above. The idea is that the initial conditions (\ref{S6E4}) and (\ref{S6E5}) tend to the ones for $\bar{P}$ as $n\to \infty$, see (\ref{S6E62}). Also, the term $\delta_{n}/(\mathcal{H}_{n})^{3}$ in (\ref{S6E2a}) is negligible if $n$ is sufficiently large, since $\delta_{n}\to 0$ as $n\to\infty$ (observe that $(\Phi(\tau_{n}^{\ast}))^{3}=\delta_{n}^{-1}$). On the other hand, the term $R_{n}(\zeta_{n};M_{n})$ can be approximated by $(1-M_{n}^{\frac{1}{3}}\zeta_{n})^{2}$ as $n\to\infty$. The resulting limiting equation is then (\ref{S6E6}) and the values of $\mathcal{H}_{n}$ and of its derivatives can be approximated at $\zeta_{n}=0$ by those of $\bar{P}(\cdot;M_{n},\beta_{n})$ and its derivatives there.
The difference between $\mathcal{H}_{n}(\zeta_{n})$ and $\bar{P}(\zeta_{n};M_{n},\beta_{n})$ can then be estimated arguing as in, for example, Lemma~\ref{continfinity}, as well as in Lemma~4.3 of \cite{CV}. We observe that, as in \cite{CV}, (\ref{W1E9}) implies, upon integration of (\ref{S6E2a}), that a condition on $\zeta_{0}(M_{n},\beta_{n})-\zeta_{n}$ of the form \[ \frac{\zeta_{0}(M_{n},\beta_{n})-\zeta_{n}}{2\zeta_{0}(M_{n},\beta_{n})} \geq \delta_{n} \left|\log\left(\frac{\zeta_{0}(M_{n},\beta_{n})-\zeta_{n}}{2\zeta_{0}(M_{n},\beta_{n})} \right) \right| + \frac{a}{|\xi_{n}^{\ast}|^{2}} \] must be satisfied for $n$ large enough. Then, for every $\varepsilon$, we can choose $n_{0}$ large enough so that $\varepsilon$ is larger than the solution $x$ of $\delta_{n_{0}} |\log(x)|+ \zeta_{0}a/|\xi_{n_{0}}^{\ast}|^{2}= x/2$ and so that the initial data are close enough to those of $\bar{P}$. \end{proof} We now prove that $\beta_{n}\sim\beta_{\ast}(M_{n})$ as $n\to\infty$. The idea is to use the fact that the derivative of the approximating polynomial $\bar{P}(\zeta_{n};M_{n},\beta_{n})$ at $\zeta_{0}(M_{n},\beta_{n})$ is of order one (and negative) by Lemma~\ref{new:pol:rescale}. Then, we can use Lemma~\ref{MatchingLemma} of Appendix~\ref{sec:control}, which gives the behaviour in the boundary layer where $\mathcal{H}_{n}$ becomes small (near $\xi=\xi_{n+1}^{min}$), to conclude that the next polynomial in the outer region is close to one having a double zero in the matching region near $\xi=\xi_{n+1}^{min}$. \begin{lemma}\label{LemmaDoubleZero} Suppose that $\Phi$ and its derivatives satisfy the assumptions of Proposition~\ref{char:max} and let $\{\tau_{n}^{\ast}\}$ be the sequence found there. Let the sequences $\{\xi_{n}^{\ast}\}$, $\{M_{n}\}$ and $\{\beta_{n}\}$ be defined by means of (\ref{S5E2}) and (\ref{S5E3}), let the function $\beta_{\ast}(M_{n})$ be as in (\ref{S5E8}), and let the sequence of functions $\mathcal{H}_{n}$ be given by (\ref{S6E2}).
Then, for any $\varepsilon>0$, there exist $L=L(\varepsilon)>0$ and $n_{0}$ large enough such that if $\Phi(\tau_{n}^{\ast})\geq L$ and $n\geq n_{0}$, then \begin{equation}\label{J3E7} \left|\beta_{n}-\beta_{\ast}(M_{n})\right| \leq \varepsilon\left|\beta_{\ast}(M_{n})\right| \,. \end{equation} Also, for all $n>n_{0}$, \begin{equation}\label{linear:entry} \mathcal{H}_{n}(\zeta_{n})\simeq -K_{n}(\zeta_{n} - \zeta_{0}(M_{n})) \quad\mbox{as}\quad\zeta_{n} \to (\zeta_{0}(M_{n}))^{-} \end{equation} \begin{equation}\label{linear:entry2} \quad\mbox{with}\quad K_{n} =-\frac{d\bar{P}(\zeta_{0}(M_{n}))}{d\zeta_{n}}>0 \end{equation} and \begin{equation} \label{quadratic:exit} \mathcal{H}_{n}(\zeta_{n})\simeq \frac{D_{n}}{\delta_{n}}(\zeta_{n}-\zeta_{0}(M_{n}))^{2} \quad\mbox{as}\quad\zeta_{n} \to (\zeta_{0}(M_{n}))^{+} \,, \end{equation} where $D_{n}$ is proportional to $K_{n}^{5}$ with a constant of order one, and there exist $\alpha_{1}$ and $\alpha_{2}\in\mathbb{R}$, independent of $M_{n}$, such that $0<\alpha_{1}\leq K_{n}\leq\alpha_{2}$. Moreover, for all $n>n_{0}$ there exist $\varepsilon_{0}>0$ small enough and $\xi_{crit,n}\in ( \xi_{n}^{min}-\varepsilon_{0}, \xi_{n}^{min}+\varepsilon_{0})$ such that \begin{equation}\label{J1E2} \zeta_{0}(M_{n})=-\frac{1}{M_{n}^{\frac{1}{3}}}\left( \frac{\xi_{crit,n}}{\xi_{n}^{\ast}}-1\right) \quad \mbox{and}\quad \zeta_{\ast}(M_{n-1})=-\frac{1}{M_{n-1}^{\frac{1}{3}}}\left( \frac {\xi_{crit,n}}{\xi_{n-1}^{\ast}}-1\right) \end{equation} and \begin{equation}\label{quadratic:exit3} \mathcal{H}_{n-1}(\zeta_{n-1})\simeq \Gamma_{n-1}(\zeta_{n-1}-\zeta_{\ast}(M_{n-1}))^{2} \quad\mbox{as}\quad\zeta_{n-1} \to (\zeta_{\ast}(M_{n-1}))^{+} \end{equation} \begin{equation} \label{quadratic:exit2} \quad \mbox{with} \quad \Gamma_{n-1}=\frac{1}{2}\frac{d^{2}\bar{P}(\zeta_{\ast}(M_{n-1});M_{n-1})}{d\zeta_{n-1}^{2}}>0\,. \end{equation} \end{lemma} \begin{proof}[Proof] Suppose that $n$ is very large.
We apply Lemma~\ref{ApproximationLemma} for $n$, thus starting at $\zeta_{n}=0$, that is, at $\xi=\xi_{n}^{\ast}$. It then follows that we can approximate $\mathcal{H}_{n}$ by the polynomial $\bar{P}(\zeta_{n};M_{n},\beta_{n})$ in intervals of the form $\zeta_{n}\in\left[0,\zeta_{0}(M_{n},\beta_{n})-\varepsilon_{1}\right]$ with $\varepsilon_{1}>0$ small but fixed and $n$ large enough. This in particular implies (\ref{linear:entry}) and (\ref{linear:entry2}). The fact that $K_{n}$ is bounded from above and below follows from Lemma~\ref{new:pol:rescale}. Using then Lemma~\ref{MatchingLemma}, we obtain that $\mathcal{H}_{n}(\zeta_{n})$ can be approximated by a quadratic polynomial for $\zeta_{n}=\zeta_{0}(M_{n},\beta_{n})+ \varepsilon_{1}$. This implies \[ \left| \mathcal{H}_{n}(\zeta_{n})-\frac{A K_{n}^{5}}{\delta_{n}} (\zeta_{n}-\zeta_{0}(M_{n}) )^{2} \right| \leq\frac{\varepsilon_{2}K^{5}_{n}}{\delta_{n}}(\zeta_{n}-\zeta_{0}(M_{n}))^{2}\,, \] for some $\varepsilon_{2}>0$ small enough and $A$ of order one, and thus (\ref{quadratic:exit}) follows. We then replace the variables $\zeta_{n}$ by $\zeta_{n-1}$ and $\mathcal{H}_{n}$ by $\mathcal{H}_{n-1}$ using (\ref{S6E2}); applying Lemma~\ref{ApproximationLemma} again, we can approximate the function $\mathcal{H}_{n-1}(\zeta_{n-1})$ by a polynomial which has a double root at the value of $\zeta_{n-1}$ corresponding to $\zeta_{n}=\zeta_{0}(M_{n},\beta_{n})+\varepsilon_{1}$. From this, (\ref{J3E7}) follows by the definition of $\beta_{\ast}$, and therefore (\ref{J1E2}), (\ref{quadratic:exit3}) and (\ref{quadratic:exit2}) follow as well. \end{proof} \subsection{Proof of Propositions \ref{max:increase} and \ref{min:decrease}} \label{sec:formal} In order to prove the propositions we derive information from Lemma~\ref{LemmaDoubleZero} in the matching region around $\xi_{n}^{min}$ for $n$ large enough.
Let us then assume that $\Phi$ satisfies the assumptions of Proposition~\ref{char:max}, so that the sequence (\ref{def:max}) is well-defined. Let $H(\xi)$ be defined by means of (\ref{S2E8a}) and (\ref{S2E8}); it then satisfies (\ref{S2E3}). Let the sequence of functions $\mathcal{H}_{n}$ be defined by (\ref{S6E2}), so that, by Lemma~\ref{H:rescaling}, each such function satisfies (\ref{S6E2a}) with initial conditions (\ref{S6E4}) and (\ref{S6E5}). We further assume in the following that the approximating polynomials have $\beta_{n}=\beta_{\ast}(M_{n})$, so that they are as described in Lemma~\ref{scaled:pols}, and we drop the dependency on $\beta_{n}$ in the notation. Equation (\ref{J1E2}) of Lemma~\ref{LemmaDoubleZero} and the definition of the variables $Z_{n}$ give \begin{equation}\label{J1E1} Z_{0}(M_{n}) \xi_{n}^{\ast} = Z_{\ast}(M_{n-1})\xi_{n-1}^{\ast} \,. \end{equation} \begin{remark} We can now argue that $\{M_{n}\}$ is a bounded sequence. Indeed, if $M_{n}$ is very large, (\ref{Z0:Minfty}) implies that $Z_{0}(M_{n})$ is very negative. This would imply, using (\ref{J1E1}), that $\xi_{n-1}^{\ast}>0$ (notice that $Z_{\ast}(M_{n-1})>0$ by definition). However, we cannot have $\xi_{n-1}^{\ast}>0$ for large $n$, because $\xi_{n}^{\ast}\to-\infty$. \end{remark} Using now the definition of $\zeta_{n}$ (see (\ref{S6E2})) and (\ref{J1E2}), we can compute $(\zeta_{n}-\zeta_{0}(M_{n}))$ and $(\zeta_{n-1}-\zeta_{\ast}(M_{n-1}))$ to get \begin{equation}\label{J1E3} M_{n}^{\frac{1}{3}}\xi_{n}^{\ast}(\zeta_{n}-\zeta_{0}(M_{n})) =M_{n-1}^{\frac{1}{3}}\xi_{n-1}^{\ast}(\zeta_{n-1}-\zeta_{\ast}(M_{n-1}))\,. \end{equation} On the other hand, the definition of the sequence $\mathcal{H}_{n}$ (in (\ref{S6E2})) gives \begin{equation}\label{J1E4} |\xi_{n}^{\ast}|^{5}M_{n}\mathcal{H}_{n}(\zeta_{n}) =|\xi_{n-1}^{\ast}|^{5}M_{n-1}\mathcal{H}_{n-1}(\zeta_{n-1}) \,.
\end{equation} We then change variables according to (\ref{J1E3}) and (\ref{J1E4}) in (\ref{quadratic:exit}) in order to write it in terms of the variables $\zeta_{n-1}$ and $\mathcal{H}_{n-1}$. This gives the asymptotic formula, for $n$ large enough, \[ \mathcal{H}_{n-1}(\zeta_{n-1}) \sim\frac{D_{n}}{\delta_{n}}\frac{|\xi_{n}^{\ast}|^{3}M_{n}}{|\xi_{n-1}^{\ast}|^{3}M_{n-1}} \frac{M_{n-1}^{\frac{2}{3}}}{M_{n}^{\frac{2}{3}}} (\zeta_{n-1}-\zeta_{\ast}(M_{n-1}))^{2}\,, \quad\zeta_{n-1}\to (\zeta_{\ast}(M_{n-1}))^{+}\,. \] Comparing this to (\ref{quadratic:exit3}) implies that the approximation \[ \Gamma_{n-1}M_{n-1}^{\frac{1}{3}}|\xi_{n-1}^{\ast}|^{3} =D_{n}|\xi_{n}^{\ast}|^{20} M_{n}^{\frac{10}{3}} \] is valid for $n$ large enough (here we have also used (\ref{parameters})). Using the fact that $M_{n}$ is bounded, and also that $D_{n}$ can be estimated from above and below by a constant independent of $n$, we obtain \[ C_{1}|\xi_{n}^{\ast}|^{20} M_{n}^{\frac{10}{3}}\leq \Gamma_{n-1}M_{n-1}^{\frac{1}{3}}|\xi_{n-1}^{\ast}|^{3}\leq C_{2}|\xi_{n}^{\ast}|^{20}M_{n}^{\frac{10}{3}} \] for some $0<C_{1}\leq C_{2}$. Using (\ref{S5E3}), we then obtain \[ C_{1}\frac{|\xi_{n}^{\ast}|^{\frac{10}{9}}}{|\xi_{n-1}^{\ast}|^{\frac{10}{9}}}(\Phi(\xi_{n}^{\ast}))^{\frac{10}{3}} \leq\Gamma_{n-1}(\Phi(\xi_{n-1}^{\ast}))^{\frac{1}{3}} \leq C_{2}\frac{|\xi_{n}^{\ast}|^{\frac{10}{9}}}{|\xi_{n-1}^{\ast}|^{\frac{10}{9}}}(\Phi(\xi_{n}^{\ast}))^{\frac{10}{3}}\,. \] Using now (\ref{J1E1}), as well as the fact that $Z_{0}(M_{n})$ and $Z_{\ast}(M_{n-1})$ are bounded from above and below for $M_{n}$ and $M_{n-1}$ bounded (cf. (\ref{S5E4}), (\ref{Z0:Mzero})), we obtain, with different constants $C_{1}$ and $C_{2}$ if necessary, that \[ C_{1}(\Phi(\xi_{n}^{\ast}))^{\frac{10}{3}} \leq\Gamma_{n-1}(\Phi(\xi_{n-1}^{\ast}))^{\frac{1}{3}} \leq C_{2}(\Phi(\xi_{n}^{\ast}))^{\frac{10}{3}}\,. \] Using now (\ref{zeta:gamma:M0II}) for $n-1$, we can estimate $\Gamma_{n-1}$ from above and below by positive constants independent of $n$.
Then \begin{equation}\label{J3E8} \Phi(\xi_{n-1}^{\ast})\simeq C(\Phi(\xi_{n}^{\ast}))^{10} \end{equation} for some $C>0$. We are now in a position to prove Proposition~\ref{max:increase}. \begin{proof}[Proof of Proposition~\ref{max:increase}] Due to Lemma~\ref{LemmaDoubleZero} we can assume that (\ref{J3E7}) holds for $n$ large. In a similar fashion to \cite{CV}, we can make rigorous the argument outlined above by combining Lemmas~\ref{MatchingLemma} and \ref{ApproximationLemma} and prove that (\ref{J3E8}) indeed holds. Since by hypothesis $\Phi(\xi_{n}^{\ast}) \to\infty$ as $n\to\infty$, it then follows that $\Phi(\xi_{n}^{\ast})<\Phi(\xi_{n-1}^{\ast})$, and this gives Proposition~\ref{max:increase}. \end{proof} For each $n$, let now $\tau_{n}^{min}$ be the value of $\tau$ at which $\Phi$ reaches its minimum in the interval $(\tau_{n}^{\ast},\tau_{n-1}^{\ast})$, as defined in (\ref{def:min}). We can now prove Proposition~\ref{min:decrease}: \begin{proof}[Proof of Proposition~\ref{min:decrease}] As before, we only give the formal steps of the proof and refer to \cite{CV} for details. Let $\xi_{n}^{min}=\xi(\tau_{n}^{min})$ be defined by means of (\ref{S2E8a}).
Then, by Lemma~\ref{ApproximationLemma}, the fact that $\delta_{n}\ll 1$ and Lemma~\ref{MatchingLemma}, we can write, to leading order for $n$ large enough, \[ \xi_{n}^{min}=\xi_{n}^{\ast} Z_{0}(M_{n}) \] and \[ \Phi(\tau_{n}^{min}) = |\xi_{n}^{min}|^{\frac{2}{3}} |\xi_{n}^{\ast}|^{5}M_{n}\mathcal{H}_{n}(\zeta_{0}(M_{n}))\,. \] By Lemma~\ref{ApproximationLemma}, it is clear that $\Phi(\tau_{n}^{min})$ approaches $0$ for $n$ large enough. Employing the scaling \[ \mathcal{H}_{n}=\delta_{n}h_{n}\,,\quad\zeta_{n}-\zeta_{0}(M_{n}) =\delta_{n}\,s_{n} \] (which is analogous to the one used in the proof of Lemma~\ref{MatchingLemma}) gives that the following is a valid approximation for $n$ large enough: \[ \Phi(\tau_{n}^{min}) = \left|\frac{\xi_{n}^{min}}{\xi_{n}^{\ast}}\right|^{\frac{2}{3}} \frac{1}{(\Phi(\tau_{n}^{\ast}))^{2}} h_{n}(0)\,. \] Here we also use the definition of $M_{n}$ (see (\ref{S5E3})) and that $\delta_{n}=(\Phi(\tau_{n}^{\ast}))^{-3}$ (see (\ref{parameters})). Thus, for $n$ large enough, one also has, by (\ref{J3E8}), that \[ \Phi(\tau_{n-1}^{min}) \simeq \left|\frac{\xi_{n-1}^{min}}{\xi_{n-1}^{\ast}}\right|^{\frac{2}{3}} \frac{C}{(\Phi(\tau_{n}^{\ast}))^{20}} h_{n-1}(0) \] for some order one constant $C>0$. We finally observe that the values $h_{n}(0)$ are of order one if $n$ is large enough, by Lemma~\ref{MatchingLemma}. Also, we can approximately write the quotients as $|\xi_{n}^{min}|/|\xi_{n}^{\ast}|=Z_{0}(M_{n})$. But each $Z_{0}(M_{n})$ is an order one constant, since the sequence $M_{n}$ is uniformly bounded. Then, since $\Phi(\tau_{n}^{\ast})\to\infty$, we have that $\Phi(\tau_{n}^{min})\to 0$. Moreover, there exists a constant $C>0$ (different from the one above) such that, for $n$ large enough, \[ \Phi(\tau_{n-1}^{min}) \simeq C (\Phi(\tau_{n}^{min}))^{10}\,, \] and thus Proposition~\ref{min:decrease} follows. \end{proof} \section{Convergence to the equilibrium point $p_{-}$}\label{sec:6} In this section we finish the proof of Theorem~\ref{hetcon}.
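Before turning to the proof, we record a purely formal consequence of the recursive estimate (\ref{J3E8}): iterating it $k$ times, and ignoring the accumulation of the errors hidden in the symbol $\simeq$, gives \[ \Phi(\xi_{n-k}^{\ast}) \simeq C^{1+10+\dots+10^{k-1}}\,\bigl(\Phi(\xi_{n}^{\ast})\bigr)^{10^{k}} = C^{\frac{10^{k}-1}{9}}\,\bigl(\Phi(\xi_{n}^{\ast})\bigr)^{10^{k}}\,, \] so the maxima would grow super-exponentially along any sequence with $\Phi(\xi_{n}^{\ast})\to\infty$; this is the growth that the contradiction argument below exploits.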
First we prove the following \begin{proposition}\label{ControlOscillations} Suppose that $(\Phi(\tau),W(\tau),\Psi(\tau),\theta(\tau))$ is a solution of (\ref{compact1})-(\ref{compact4}) as found in Proposition~\ref{existence:1} and satisfying (\ref{F5E1}). Then, there exist positive constants $C_{1}$ and $C_{2}$ depending only on $a$ such that \begin{align} \limsup_{\tau\to-\infty}\left( \Phi(\tau) +\left| \frac{d\Phi(\tau)}{d\tau} \right| +\left| \frac{d^{2}\Phi(\tau)}{d\tau^{2}}\right| \right) \leq C_{1} \,,\label{F5E2}\\ \nonumber\\ \mbox{and} \quad\liminf_{\tau\to-\infty}\Phi(\tau) \geq C_{2}>0 \,. \label{F5E3} \end{align} \end{proposition} \begin{proof}[Proof] We recall that the solutions found in Proposition~\ref{existence:1} are defined for all $\tau\in(-\infty,\infty)$ and satisfy $\lim_{\tau\to\infty}(\Phi(\tau),W(\tau),\Psi(\tau),\theta(\tau))=(1,0,0,\pi/2)$. Moreover, $\lim_{\tau\to-\infty}\theta(\tau)=-\frac{\pi}{2}$, $\Phi(\tau)>0$ for any $\tau\in(-\infty,\infty)$ and $\liminf_{\tau\to-\infty}\Phi(\tau)<\infty$. We now claim that (\ref{F5E2}) holds for some $C_{1}>0$. Indeed, otherwise, due to Proposition~\ref{char:max} and Proposition~\ref{max:increase}, there would exist a sequence $\{\tau_{n}^{\ast}\}$ such that $\lim_{n\to\infty}\tau_{n}^{\ast}=-\infty$ and $\lim_{n\to\infty}\Phi(\tau_{n}^{\ast})=\infty$, but such that there exists $n_{0}$ with $\Phi(\tau_{n-1}^{\ast})>\Phi(\tau_{n}^{\ast})$ for all $n>n_{0}$. Then, since $\limsup_{n\to\infty}\Phi(\tau_{n}^{\ast})=\infty$ (cf. (\ref{G5E3})), it would follow that $\Phi(\tau_{n_{0}}^{\ast})=\infty$; this yields a contradiction and, therefore, (\ref{F5E2}) is satisfied. Suppose now that (\ref{F5E3}) is not satisfied. Then, possibly after taking a subsequence, $\lim_{n\to \infty} \Phi(\tau_{n}^{min})=0$, but this contradicts (\ref{G5E5}) of Proposition~\ref{min:decrease}. \end{proof} We can now finish the proof of the main result.
\begin{proof}[Proof of Theorem \ref{hetcon}] Due to Proposition~\ref{existence:1} there exists a solution of (\ref{compact1})-(\ref{compact4}) defined for all $\tau\in(-\infty,\infty)$ such that $\lim_{\tau\to\infty}(\Phi(\tau),W(\tau),\Psi(\tau),\theta(\tau))=(1,0,0,\pi/2)$. Moreover, $\lim_{\tau\to-\infty}\theta(\tau)=-\frac{\pi}{2}$, $\Phi(\tau)>0$ for any $\tau\in(-\infty,\infty)$ and $\liminf_{\tau\to-\infty}\Phi(\tau)<\infty$. Then Proposition~\ref{ControlOscillations} gives that (\ref{F5E2}) and (\ref{F5E3}) hold. We now define a sequence of functions \[ \Phi_{n}(\tau)=\Phi(\tau-n)\,, \quad W_{n}(\tau)=W(\tau-n)\,,\quad \Psi_{n}(\tau)=\Psi(\tau-n)\,,\quad n=1,2,3,\dots \] Using (\ref{compact1})-(\ref{compact4}), (\ref{F5E2}), (\ref{F5E3}), standard compactness arguments and the fact that $\lim_{\tau\to-\infty}\theta(\tau)=-\frac{\pi}{2}$, we can show that there exists a subsequence $\{n_{j}\}$ satisfying $\lim_{j\to\infty}n_{j}=\infty$ and such that $\{( \Phi_{n_{j}}(\tau),W_{n_{j}}(\tau),\Psi_{n_{j}}(\tau))\}$ converges uniformly on compact sets of $\tau$ to a bounded solution of (\ref{S3E1}), say $(\Phi_{\infty}(\tau),W_{\infty}(\tau),\Psi_{\infty}(\tau))$. Moreover, we have $\Phi_{\infty}(\tau)\geq C_{2}>0$ for all $\tau\in(-\infty ,\infty)$. Due to Proposition~\ref{Tapas1} it follows that $(\Phi_{\infty}(\tau),W_{\infty}(\tau),\Psi_{\infty}(\tau))$ is close to $P_{s}$ if $\tau<0$ and $|\tau|$ is large enough. Using the Stable Manifold Theorem it then follows that $\left(\Phi_{\infty}(\tau),W_{\infty}(\tau),\Psi_{\infty}(\tau)\right)$ is contained in the unstable manifold of $P_{s}$. However, due to Proposition~\ref{Tapas1}{\it (v)} it follows that the only bounded trajectory contained in the unstable manifold of $P_{s}$ is the critical point itself, thus $(\Phi_{\infty}(\tau),W_{\infty}(\tau),\Psi_{\infty}(\tau))\equiv P_{s}$.
This implies that the sequence $P_{j}=(\Phi_{n_{j}}(0),W_{n_{j}}(0),\Psi_{n_{j}}(0),\theta_{n_{j}}(0))$ converges to the equilibrium $p_{-}$ as $j\to\infty$. Therefore the points $P_{j}$ are contained in the centre-unstable manifold of $p_{-}$, whence $\lim_{\tau\to-\infty}(\Phi(\tau),W(\tau),\Psi(\tau),\theta(\tau))=p_{-}$ and the result follows. \end{proof} {\bf Acknowledgements} This work was supported by the Hausdorff Center of the University of Bonn. C.M. Cuesta was also partially supported by the DGES Grant MTM2011-24-109. \section*{Appendix}
\section{Introduction} Much of the work in NLP strives to develop systems that are able to perform linguistic tasks similarly to humans. To achieve this goal, one typically provides NLP systems with \emph{human knowledge} about the task at hand. This knowledge can come in the form of linguistic annotations, hand-crafted rules and access to linguistic databases, as well as various model design choices. In this work, we study the possibility of providing the model with an inductive bias by using \emph{human behavioral signals} based on eye movements in reading as an additional source of information which can guide NLP models to adequately process linguistic input and solve linguistic tasks. As a case study, we examine reading comprehension, a task of central importance for probing both human and machine understanding of text. To enable this study, we collect eye movement data from 269 participants who engage in a reading comprehension task using the materials of OneStopQA \cite{berzak2020starc}. We argue that reading comprehension is a particularly well-suited task for linking human eye movement information to NLP modelling due to the substantial correspondence between reading times and the relevance of the text segment for answering the question. Hahn and Keller \shortcite{hahn2018} have shown this correspondence by establishing increased reading times on the correct answer in a question answering task where answers are named entities. Our study generalizes this result to an arbitrary QA setting, and demonstrates longer reading times for portions of the text which are most pertinent for answering the question correctly. Building on this observation, we develop a new approach to machine reading comprehension in which the model is directed to \emph{mimic human fixation times} over the text, given the question. The idea behind this approach is to encourage the model to focus on question-relevant information. 
Specifically, we introduce a multi-task reading comprehension architecture in which a state-of-the-art transformer model jointly performs question-answering and prediction of the human reading time distribution over the text. Our modelling framework is \emph{behavioral}, treating the reading comprehension model itself as a black-box. This leads to both theoretical and practical advantages. From a theoretical perspective, this approach is appealing as it creates a direct parallel to human reading, in which eye movements are an external behavior. Practically, our approach has the advantage of being modular, allowing swapping our model with other reading comprehension models, and the task with other NLP tasks. Our experiments demonstrate that our approach leads to consistent gains in question-answering performance across different training regimes, model variants, and on both in- and out-of-domain evaluations. In particular, our model outperforms baseline models with gaze from human reading without exposure to the question. It also performs better than using manual annotations of the textual span critical for answering the question. To summarize, we present three contributions: \begin{enumerate} \setlength\itemsep{0em} \item We collect an eye-tracking dataset with a large number of participants engaging in free-form multiple choice question answering. \item We show that human gaze behavior during question answering is strongly task-conditioned. \item We demonstrate that human gaze can improve the performance of a state-of-the-art reading comprehension model. \end{enumerate} While this work is a proof of concept and uses a relatively costly data collection procedure, as eye-tracking technology continues to become more ubiquitous and affordable, it will be feasible to perform large scale data collection and deployment of similar approaches for QA and other NLP tasks. \section{Related Work} Our work contributes to two areas of research. 
The first is how human gaze is conditioned on the reading task. This question was previously investigated in the domain of question answering by Hahn and Keller \shortcite{hahn2018}, who collected eye-tracking data in an experimental setup similar to ours for materials from the CNN and Daily Mail corpus \cite{hermann2015}. They demonstrate that reading times on the named entity which is the correct answer to the question are longer if participants are shown the question before reading the passage as compared to ordinary reading. Our work builds on this result, introducing a more general QA setup which is not restricted to questions whose answer is a named entity. Crucially, we further leverage this information for improving machine question answering. The second research area to which our work contributes is augmenting NLP models with gaze data. In this area, gaze during reading has been used for tasks such as syntactic annotation \cite{barrett2015pos,barrett2015functions,barrett2016,strzyz2019}, text compression \cite{klerke2016}, text readability \cite{gonzalez2017}, Named Entity Recognition \cite{hollenstein2019-entity}, and sentiment classification \cite{mishra2016,mishra2017,mishra2018}. Work on the first four tasks used task-independent eye-tracking corpora, primarily the Dundee corpus \cite{dundee} and GECO \cite{geco2017}. For the task of sentiment classification, the authors used task-specific eye-tracking corpora in which the participants were asked to perform sentiment classification. Our study differs from this literature in several aspects. First, we address the previously unexplored task of reading comprehension, which has established theoretical and empirical connections to eye movements in reading \citep[][{\it among others}]{just1980,reichle2010eye,rayner2016,hahn2018}.
Also differently from these studies, we cover and directly compare both a task-specific reading condition (Hunting) and a task-independent condition (Gathering), as well as both external (Dundee) and corpus-specific (OneStopQA) eye-tracking data. Our QA task can be viewed as a generalization of the work in Mishra et al. \shortcite{mishra2016,mishra2017,mishra2018}, where instead of being asked about the sentiment of a paragraph, subjects are presented with arbitrary questions. Our multitask approach for jointly performing the QA task and predicting gaze is similar to Klerke et al. \shortcite{klerke2016}, Barrett et al. \shortcite{barrett2018} and Mishra et al. \shortcite{mishra2018}. In particular, in Equation \ref{loss_equation} we use the same loss term as Barrett et al. \shortcite{barrett2018}, which consists of a linear combination of an NLP task loss and a gaze prediction loss. Our approach differs from Barrett et al. \shortcite{barrett2018} in that their model uses the gaze predictions as input attention weights for the NLP task, while our model treats gaze only as an output. Our approach provides a parallel to human reading, in which eye movements are an external behavior rather than an input to language processing tasks. Our work differs from Mishra et al. \shortcite{mishra2018} in the model and the use of a single auxiliary objective based on gaze. Finally, we note that in Vajjala et al. \shortcite{vajjala2016} eye-tracking data from ESL learners was collected for 4 articles from OneStopEnglish \cite{vajjala2018onestop}, the same article source used here, and utilized to study the influence of text difficulty level on fixation measures and reading comprehension. Our work focuses on a different task and a different population of readers. A large body of work exists on QA, including span prediction (e.g. BiDAF \cite{bidaf2017}), cloze (e.g. \cite{hermann2015}), and multiple choice QA (e.g. Stanford Attentive Reader \cite{chen2016}).
Here, we focus on multiple choice QA due to its prevalence in human evaluations of reading comprehension, and use RoBERTa due to its state-of-the-art performance on this task. Further, neural models for QA deploy various notions of \emph{internal attention}. The study of NLP model internal attention has drawn much interest in recent years \citep[][{\it among others}]{adi2016fine,clark2019,serrano2019,kovaleva2019,hoover2019exbert}. In this work we abstract away from model internal dynamics due to their complexity, and in light of the theoretical justification for treating gaze as an external behavior rather than an internal model property. Examination of internal model attention and its relation to human gaze is, however, an intriguing research direction that we intend to pursue in future work. \section{Data} \subsection{Reading Comprehension Data} \label{rc_data} We use two reading comprehension resources, OneStopQA \cite{berzak2020starc} and RACE \cite{lai2017race}. \\ \textbf{OneStopQA} is a reading comprehension dataset containing paragraph-level multiple choice reading comprehension questions for 30 Guardian articles (162 paragraphs) taken from the OneStopEnglish dataset \cite{vajjala2018onestop}. Each article is available in three parallel text difficulty levels: the original Advanced text and two simplified versions, Intermediate and Elementary. Each paragraph has three multiple choice reading comprehension questions. All the questions are answerable based on any of the text level versions of the paragraph. We use the Advanced and Elementary text versions, corresponding to 972 question--paragraph pairs. The answers for each OneStopQA question are structured as follows. \\ \textbf{A} is the correct answer. Answering a question correctly requires information from a textual span in the paragraph called the \emph{critical span}.
Importantly, the critical span does not contain the answer in verbatim form.\\ \textbf{B} is a distractor which represents a plausible miscomprehension of the critical span. \\ \textbf{C} is a distractor which is anchored in an additional span in the paragraph, called the distractor span. \\ \textbf{D} is a distractor which has no support in the text. Both the critical span and the distractor span are annotated manually in the text. \textbf{RACE} is the standard dataset in NLP for training and evaluation of multiple choice reading comprehension. It comprises reading comprehension examination materials for middle school and high school students in China. Similarly to OneStopQA, RACE questions are multiple choice, with four possible answers for each question. As opposed to OneStopQA, the questions are based on an entire article rather than a specific paragraph and the answers have no systematic structure with respect to the text. Although RACE has been widely used in NLP, it was recently shown that it has substantial quality assurance drawbacks; 47\% of its questions are guessable by RoBERTa without the passage, and 18\% do not have a unique correct answer \cite{berzak2020starc}. We therefore treat RACE as a secondary evaluation benchmark. Statistics on the reading comprehension materials are presented in Table \ref{tab:dataset-stats}. \begin{table}[ht!] \small \begin{center} \begin{tabular}{l|ll|ll} \hline & \multicolumn{2}{c}{\bf OneStopQA}& \multicolumn{2}{|c}{\bf RACE} \\ \cline{2-5} & Ele & Adv & Mid & High\\ \hline Words per text & 112.3 & 138.6 & 232.1 & 354.1 \\ \# Passages & 162 & 162 & 6,409 & 18,728 \\ \# Questions & 486 & 486 & 25,421 & 62,445\\ \hline \end{tabular} \end{center} \caption{\label{tab:dataset-stats} Statistics for OneStopQA and the RACE training set.
The term \emph{passage} refers to a single paragraph in OneStopQA and an article in RACE.} \end{table} \subsection{OneStopQA Eye-Tracking Data} \label{subsec:eyetracking-experiment} We collected a dataset of eye movements for the 30 OneStopQA articles. The articles are divided into three 10-article batches with 54 paragraphs in each batch. Each participant read a single 10-article batch. Following the experimental setup of \cite{hahn2018}, a given batch is presented in one of two possible between-subject conditions: \emph{Hunting} and \emph{Gathering}. In the Hunting condition participants are presented with the question prior to reading the text, while in the Gathering condition the question is provided only after the participant has completed reading the text. A single experiment trial consists of reading a paragraph and answering one reading comprehension question about it. In the Hunting condition, a trial consists of 5 pages, with the screen showing one page at a time. In the first page, the participant reads the question (henceforth \emph{question preview} page). In the second page, they read the paragraph. In the third page, they read the question again. The fourth page retains the question, and also displays the four answers. After choosing one of the answers, the fifth page informs the participant of whether they answered the question correctly. The Gathering condition is identical to the Hunting condition, except that participants are not presented with the question preview page. Consequently, subjects in this condition have to be prepared for any question. Each trial was randomly assigned to one of six conditions in a Latin square design, where each condition is a combination of one of the three questions and one of the two paragraph levels. The presentation order of the articles and the assignment of answers to A -- D letters was randomized. Eye movements were recorded using an EyeLink 1000 Plus eye tracker (SR Research) at a sampling rate of 1000Hz.
The experiment duration was typically 1--1.5 hours. Further details on the eye-tracking experiment are provided in Appendix \ref{appendix-eyetracking}. We collected data from 269 participants, with an average of 7.5 participants per trial (question--paragraph level pair). We excluded trials in which participants did not answer the question correctly, remaining with an average of 6.3 participants per trial. The overall question answering accuracy rate was 86.9\% in the Hunting condition and 81.9\% in the Gathering condition, which is lower ($p<10^{-4}$).\footnote{Satterthwaite's method on a mixed-effects model: $\text{correct}\sim\text{preview}+(\text{preview}||\text{subject})+(\text{preview}||\text{example})$.} \begin{figure}[ht!] \begin{center} \includegraphics[width=\columnwidth]{pics/gaze.pdf} \caption{Mean Total Fixation Duration inside and outside the critical span in the Hunting (with question preview) and Gathering (without question preview) conditions. Error bars correspond to a 95\% confidence interval from a mixed-effects model that accounts for variation of fixation durations across subjects and questions. } \label{fig:span_reading_times} \end{center} \end{figure} \section{Question Conditioned Gaze in Human Reading Comprehension} \label{sec:reading-times} \begin{figure*}[ht!] \centering \begin{subfigure}[b]{0.95\textwidth} \includegraphics[trim={1.5cm 2cm 0 0},clip,width=\textwidth]{pics/preview.pdf} \caption{Hunting condition (with question preview)} \label{fig:gull} \end{subfigure} \par\medskip \begin{subfigure}[b]{0.95\textwidth} \includegraphics[trim={1.5cm 2cm 0 2.5cm},clip,width=\textwidth]{pics/no_preview.pdf} \caption{Gathering condition (without question preview)} \label{fig:tiger} \end{subfigure} \caption{Example of gaze distributions in the Hunting and Gathering conditions for an Elementary level paragraph. The color of each word corresponds to its Total Fixation Duration divided by the overall passage reading time, averaged across participants.
The critical span appears in bold red. The distractor span appears in purple italics.} \label{fig:examples} \end{figure*} We motivate our approach by demonstrating that human gaze distributions are strongly conditioned on the reading comprehension task. This conditioning has been previously established for the case of named entities \cite{hahn2018}, and we examine it here in a more general QA setting. Specifically, we consider speed-normalized Total Fixation Duration; for each subject, we take the Total Fixation Duration (i.e.\ sum of all the fixation times) on a word and normalize it by the subject's total reading time for the passage. Consider the example in Figure \ref{fig:examples}, where we visualize the speed-normalized gaze on each word averaged across subjects for the same question -- paragraph pair in the Hunting (with question preview) and Gathering (without question preview) conditions. As can be seen from the heatmaps, the gaze distributions are fundamentally different between these conditions. In particular, in the Hunting condition we observe a noticeable correspondence between gaze and the annotated critical span. Although the degree of correspondence between gaze and the critical span in the Hunting condition depends on the specifics of the question and the text, the presented example is representative of a large portion of our items. To further substantiate this observation, in Figure \ref{fig:span_reading_times} we compare the average Total Fixation Duration within versus outside the critical span in both the Hunting and Gathering conditions. We observe that in the Hunting condition, reading times are significantly longer within the critical span compared to outside of the critical span ($p<10^{-15}$),\footnote{This and subsequent tests are calculated using Satterthwaite's method applied to a mixed-effects model that treats subjects and questions as crossed random effects. 
Using R formula notation, the model is $\text{gaze} \sim \text{span}*\text{condition} + (\text{span}||\text{subject}) + (\text{condition*span}||\text{example})$. Tests were performed with the lme4 and lmerTest R packages.} while in the Gathering condition they are slightly shorter within the critical span ($p<10^{-4}$). The within-span versus outside-of-span difference in reading times also differs significantly between the Hunting and Gathering conditions ($p<10^{-15}$). We further note that the total reading time for the passage is shorter in the Hunting condition ($p<10^{-4}$), consistent with more targeted reading as compared to the Gathering condition. While our analysis provides evidence for an increased concentration of gaze time around text that is critical for answering the question, the potential utility of human gaze is not limited to this aspect alone. Human gaze can be viewed as a soft form of text annotation that relates the entire text to cognitive load during processing. In particular, it can in principle provide valuable fine-grained information within the critical span. \section{Method: Joint Question Answering and Human Gaze Prediction} To test the effectiveness of utilizing human gaze data for enhancing the performance of a reading comprehension system, we trained a reading comprehension model to perform the same multiple choice task as the human subjects. We then conducted a series of controlled experiments to assess how the accuracy of the model is affected by providing it with human eye movement information. \subsection{Model} \begin{figure*} \begin{center} \includegraphics[width=0.78\textwidth]{pics/model_diagram.pdf} \caption{Model diagram.
The model uses the standard transformer architecture for multiple choice QA, augmented to simultaneously predict human reading times over the passage.} \label{fig:model_diagram} \end{center} \end{figure*} We utilize the RoBERTa transformer architecture, which has shown state-of-the-art performance on the multiple choice reading comprehension task \cite{roberta2019}. We experiment with both the Base and the Large variants of this model. To allow RoBERTa to benefit from the gaze data, we use multi-task learning with hard parameter sharing \cite{caruana1993}, and modify RoBERTa to jointly predict the answer to each question and the human gaze times allocated to each passage word. Each multiple-choice example is composed of the passage $d$, the question $Q$, and the four possible answers $\{y_{1},y_{2},y_{3},y_{4}\}$. We follow the standard procedure for using transformer architectures for multiple-choice tasks, concatenating the passage, question, and answer $[CLS,d,SEP,Q,y]$ for each possible answer $y$. The resulting string is encoded through RoBERTa. We then take the final embedding of the CLS token for each answer and run it through a logistic layer to return the probability of each answer being correct. This probability is used to calculate a cross-entropy QA loss term $L_{QA}$ for each example. \begin{equation} L_{QA} = -\log{p(y_{c})} \end{equation} where $y_{c}$ is the correct answer for the question. We additionally calculate an auxiliary loss based on gaze information. As in Figure \ref{fig:examples}, our reference metric $RT(w)$ is speed-normalized Total Fixation Duration ($TF$). Specifically, for each passage word $w$ and subject $s$, we consider the subject's Total Fixation Duration on the word $TF_{s}(w)$ normalized by the sum of all their fixation durations over the passage, and then average this quantity across all subjects who read the passage.
\begin{equation} RT(w) = \frac{1}{S}\sum_{s}{\frac{TF_{s}(w)}{\sum_{w'}{TF_{s}(w')}}} \end{equation} where $S$ is the number of subjects who read the passage. In cases where RoBERTa's byte pair tokenizer \cite{sennrich2016} splits a single word into multiple tokens, we evenly split the gaze time associated with the word among the resulting tokens. We take the encoding of each passage word at the last layer of RoBERTa for each candidate answer $y$ and add a linear layer parameterized by a weight vector $v \in \mathbb{R}^d$ shared across all passage word positions, where $d$ is the RoBERTa embedding dimension. For each passage word $w$, this layer maps from the $d$-dimensional word embedding to a scalar gaze value. These values are put through a softmax layer, obtaining predictions $RT_{pred_{y}}(w)$ which are guaranteed to be between 0 and 1 for each word and sum to 1 for each passage, making them comparable to our normalized human gaze measurements $RT$. These predictions are then averaged across the four possible answers to obtain reading time predictions for each passage word $RT_{pred}(w)$. Finally, we compute the cross-entropy loss between the gaze predictions and observed human gaze. \begin{equation} L_{gaze} = -\sum_{w}{RT(w)\log{RT_{pred}(w)}} \end{equation} The final loss term is a convex combination of the gaze loss term and the answer prediction loss, where a hyperparameter $\alpha$ is the relative weight assigned to the gaze loss term: \begin{equation} \label{loss_equation} L = (1-\alpha)L_{QA} + \alpha L_{gaze} \end{equation} Figure \ref{fig:model_diagram} presents a diagram of our model. Our modelling approach is fundamentally behavioral, as it attempts to mimic human eye movements as an external behavior. It treats the model itself largely as a black-box, relying only on the model's final query-conditioned representations of the passage words. It is therefore also modular -- the RoBERTa model can be substituted with any QA model which provides passage word representations.
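The loss computation described above can be sketched in a few lines of NumPy. This is only an illustrative stand-in for the actual implementation: the function names and toy inputs are ours, and the answer logits and per-word gaze scores stand in for the outputs of the CLS-based logistic layer and the linear gaze head.

```python
import numpy as np

def normalized_gaze(tf):
    """Speed-normalized Total Fixation Duration RT(w): `tf` is a
    (subjects x words) array of raw fixation times; each subject's row is
    normalized by that subject's total, then averaged across subjects."""
    per_subject = tf / tf.sum(axis=1, keepdims=True)
    return per_subject.mean(axis=0)

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def combined_loss(answer_logits, correct_idx, gaze_scores, rt, alpha):
    """Convex combination of the QA loss and the gaze loss.
    `answer_logits`: scores of the 4 candidate answers;
    `gaze_scores`: per-word outputs of the linear gaze head, already
    averaged over the four answer copies (both are hypothetical inputs)."""
    p_answers = softmax(answer_logits)
    l_qa = -np.log(p_answers[correct_idx])   # cross-entropy QA loss
    rt_pred = softmax(gaze_scores)           # distribution over passage words
    l_gaze = -np.sum(rt * np.log(rt_pred))   # cross-entropy against human gaze
    return (1 - alpha) * l_qa + alpha * l_gaze
```

Setting $\alpha=0$ recovers the pure QA objective, while $\alpha=1$ trains on gaze alone, matching the sweep over $\alpha$ described in the training procedure.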
Furthermore, our framework is compatible not only with the multiple choice variant of the QA task, but also with other answer output formats. We also note that the standard multiple choice QA transformer architecture requires a copy of the passage and the question for each answer, and thus the reading time predictions are generated for each copy and averaged. In QA models where the query and passage are encoded only once, such averaging would not be required. Further, other architectures are conceivable for joint multiple choice QA and gaze prediction. In particular, one may consider architectures which do not include the answers for gaze prediction; for example, through soft parameter-sharing multi-task approaches. We chose hard parameter sharing as it enables predicting gaze with only a minimal architecture change and a small number of additional parameters relative to the standard multiple choice QA transformer model. \subsection{Training Procedure} Each experiment consists of a training set of QA examples from OneStopQA accompanied by gaze data, a development set, and a test set. For each experiment, we fine-tune an initial model for 15 epochs for each $\alpha \in [0, .2, .4, .6, .8, 1.0]$. We pick the epoch and $\alpha$ that have the highest question-answering accuracy on the development set and report accuracy on the test set. For experiments on OneStopQA, we perform five-fold cross validation where each fold has 18 training articles, 6 development articles and 6 test articles. Each article appears three times in train, once in dev and once in test across the 5 folds. \subsection{Conditions} We test two initial models: \begin{enumerate} \setlength\itemsep{0.2em} \item \textbf{No RACE fine-tuning} using RoBERTa that has not been fine-tuned for QA on RACE. This experiment shows the value of incorporating eye-tracking data in data-scarce scenarios where only a small amount of data is available for fine-tuning on the given task.
\item \textbf{With RACE fine-tuning} using RoBERTa that has been fine-tuned on RACE to perform multiple choice question answering, following the procedure in \cite{roberta2019}. \end{enumerate} For each fine-tuning regime, we test the model for two levels of generalization: \begin{enumerate} \setlength\itemsep{0.2em} \item \textbf{Within-domain} where we use our five-fold cross validation setup to train and test on OneStopQA. \item \textbf{Out-of-domain} where we train on all 30 OneStopQA articles and use the RACE dev and test sets for development and testing. \end{enumerate} We note that in addition to the quality assurance issues with RACE mentioned in Section \ref{rc_data}, the out-of-domain RACE evaluations are particularly challenging due to substantial differences in the genres and question types between OneStopQA and RACE, and the small size of OneStopQA as compared to RACE. \begin{table*}[ht!] \begin{center} \begin{tabular}{cl|c|c|c|c|c|c|c|c} & \multicolumn{1}{c}{}& \multicolumn{4}{c}{\bf OneStopQA} & \multicolumn{4}{c}{\bf RACE}\\ \cline{3-10} & & \multicolumn{2}{c|}{No RACE} & \multicolumn{2}{c|}{With RACE} & \multicolumn{2}{c|}{No RACE} & \multicolumn{2}{c}{With RACE} \\ & & \multicolumn{2}{c|}{Fine-tuning} & \multicolumn{2}{c|}{Fine-tuning} & \multicolumn{2}{c|}{Fine-tuning} & \multicolumn{2}{c}{Fine-tuning} \\ \cline{3-10} & & Base & Large & Base & Large& Base & Large& Base & Large \\ \hline &No OneStopQA Fine-tuning & 23.5 & 23.1 & 68.5 & 85.8 & 24.2 & 24.3 & 68.4 &82.9 \\ &With OneStopQA Fine-tuning & 48.9 & 62.9 & 68.3 & 85.9 & 36.8 & 42.9 & 68.4 & 82.9\\ \hline \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{+ Aux.
Loss}}} & Question--Passage Similarity &54.7 & 78.9 & 72.3 & 87.3 &\bf 41.2 & 52.6 & 68.2& 82.9\\ &Gaze Gathering Dundee & 55.9 & 78.2 & 72.5 & 86.6 & 39.3 & 52.5 & 68.4 & 82.9\\ &Gaze Gathering OneStopQA & 54.7 & 80.2 & 71.5 & 87.0 & 39.6 & 48.4 & 68.4 & \bf 83.0\\ &Critical Spans OneStopQA & 54.7 & \bf 80.7 & 70.1 & 86.5 & 40.4 & 51.6 & 68.4 & 82.9 \\ \cline{2-10} &Gaze Hunting OneStopQA & \bf57.1 & 80.5 & \bf 73.1 & \bf 88.0 & 41.1 &\bf 53.0 & \bf 68.5 & \bf 83.0\\ \hline \end{tabular} \end{center} \caption{Question Answering accuracy for RoBERTa Base and Large on OneStopQA and RACE. RACE Fine-tuning denotes whether the model has been first fine-tuned for QA on RACE. The first two rows are baselines without an auxiliary loss, without and with QA fine-tuning on OneStopQA. The following four rows are baselines fine-tuned for QA on OneStopQA with auxiliary loss, using four different alternatives for measuring the importance of each passage word. The last row is our primary model variant, which uses gaze in the Hunting condition. All the results with OneStopQA fine-tunings are averaged over three runs of the model.} \label{table:results} \end{table*} \subsection{Baselines} We compare our model with two baselines which do not use auxiliary loss. We further introduce four auxiliary loss models, which replace Hunting condition gaze with alternative information sources for measuring the importance of each passage word. \subsubsection*{No Auxiliary Loss} These two baselines do not utilize the auxiliary loss during model fine-tuning. \begin{enumerate} \setlength\itemsep{0em} \item \textbf{No OneStopQA Fine-tuning} The model is not fine-tuned for QA on OneStopQA. \item \textbf{With OneStopQA Fine-tuning} The model is fine-tuned for QA on OneStopQA. 
\end{enumerate} \subsubsection*{With Auxiliary Loss} These four models are fine-tuned for QA on OneStopQA, and use an auxiliary loss where gaze in the Hunting condition is replaced with other ways for weighting each word in the passage. \begin{enumerate} \setlength\itemsep{0em} \item \textbf{Question--Passage Similarity} In this baseline, the auxiliary information is based on the similarity between the question and each passage word. We encode the question and the passage separately with an off-the-shelf encoder (here, RoBERTa that has not been fine-tuned for question-answering) and compute the dot-product between each encoded passage word and the final encoding of the question's CLS token. These values are then normalized by applying a softmax function. \item \textbf{Gaze Gathering Dundee} Here, we utilize gaze data from the Dundee corpus \cite{dundee}, allowing us to examine the benefit of predicting gaze on the same texts used for QA, versus unrelated texts. We split each Dundee article into passages of size equal to the average OneStopQA passage (125 words), yielding 453 passages. We then normalize the average Total Fixation Duration across Dundee's 10 subjects as for OneStopQA. In each training step, we predict answers on one batch of OneStopQA questions and gaze distributions on one batch of Dundee paragraphs chosen at random, and perform a step of gradient descent. This interleaved procedure is similar to that used by Barrett et al. \shortcite{barrett2018}, and is analogous to the other baselines, where we predict answers on one batch of OneStopQA examples and gaze distribution on those same examples for each gradient descent step. \item \textbf{Gaze Gathering OneStopQA} In this method, we use gaze data from the Gathering variant of the OneStopQA reading experiment where subjects do not see the question before seeing the paragraph they will later be questioned about, and hence their gaze is necessarily not question-dependent. 
\item \textbf{Critical Span Annotations OneStopQA} In OneStopQA, each question includes a manual annotation which indicates the span in the passage which is critical for answering the question. We assign a gaze value of $1$ to the tokens within the span and $0$ to those outside it, and normalize with softmax as before. This corresponds to a theoretical subject who looks equally at each word within the critical span and not anywhere else in the passage. \end{enumerate} We note that the last two baselines are new methods for improving machine QA using human-generated behavioral data (gaze and span annotations) that have not been previously proposed in the literature, and constitute very strong alternatives to our model. \section{Experimental Results} Our results are summarized in Table \ref{table:results}. All the results involving OneStopQA fine-tunings are averaged across three runs. In the following, p values are indicated when the difference in the performance of the compared models is statistically significant at the $p<0.05$ level. Fine-tuning the model for QA on OneStopQA is most beneficial in the two resource-lean regimes when the model has not been previously fine-tuned on RACE ($p<10^{-10}$, Wald test). Similarly, adding auxiliary loss to the QA model in these two regimes has a substantially larger impact on model performance compared to performing prior fine-tuning on RACE ($p<10^{-8}$ for all baselines). In our within-domain evaluations on OneStopQA, we observe improvements of our model over all the baselines in all evaluations, except for the case of the Large model without RACE fine-tuning where our model comes second. We also observe improvements in the out-of-domain evaluations on RACE. When the Large model is fine-tuned for QA only on OneStopQA, it obtains an accuracy of $53.0$, reflecting a $0.4$ improvement over the strongest baseline. The Base model comes second in this evaluation. 
When first fine-tuning the model for QA on RACE, then performing additional fine-tuning on OneStopQA, the Base model obtains an improvement of $0.1$ over the strongest auxiliary loss baseline. For the Large model we observe a similar improvement when using gaze, with the same performance in the Hunting and Gathering conditions. Interestingly, we do not observe a consistent ordering in the performance of the baselines. In particular, we do not observe a clear advantage of using gaze in the Gathering condition over Question--Passage Similarity. We also obtain comparable performance when gaze data in the Gathering condition comes from OneStopQA and Dundee. Notably, in nearly all the evaluations our model performs better compared to the manual Critical Span Annotation baseline. We hypothesize that this may be because the annotated spans do not capture potential inter-annotator variation in span annotations, as well as within-span information which is informative for our task. We note that while the gains over the strongest baselines are not statistically significant at the $.05$ level, the overall consistent pattern across evaluation regimes suggests the promise of using Hunting gaze data as the target of the auxiliary loss objective over any other single baseline. Finally, we note that an $\alpha$ of $0.2$ -- $0.4$ was most often chosen. \section{Conclusion} \label{sec:discussion} We present a framework for performing automated reading comprehension in a human-like fashion, yielding performance gains for a state-of-the-art reading comprehension model. Our work also contributes to the study of human reading, providing evidence for a systematic conditioning of human reading on the reading comprehension task. In the future we intend to study the relation between gaze and internal model attention, and further explore the relation between gaze, task and task performance in QA as well as in other tasks.
\section*{Acknowledgments} We gratefully acknowledge support from Elemental Cognition and from NSF grant IIS1815529, a Google Faculty Research Award, and a Newton Brain Science Award to RPL. \bibliographystyle{acl_natbib}
\section{Introduction} In this paper, we investigate transverse increments and long-range correlations in first passage percolation (FPP) on $\ensuremath{\mathbb{Z}}^2$. Various models of FPP have been considered in the literature, but we will consider the classical model, as introduced by \cite{HammersleyWelsh65}, which places i.i.d.\ weights on the edges of the integer lattice. \subsection{Brief description of the model} Let $\edge{E}(\ensuremath{\mathbb{Z}}^2)$ be the set of nearest-neighbor edges in $\ensuremath{\mathbb{Z}}^2$. Consider a collection $\mathbb{T}:=\left\{ \tau_e: e\in \edge{E}(\ensuremath{\mathbb{Z}}^2)\right\}$ of random variables which are called \emph{edge-weights.} We will list our assumptions in two categories: the ``basic assumptions" described below will be used throughout the paper. Later we will describe another set of more ``advanced assumptions," which we will use more selectively. \paragraph{\textbf{Basic Assumptions:}} We assume that the edge-weights are \emph{i.i.d., non-negative, continuous, with $\Exp\exp\left( C \tau_e\right)<\infty$ for some $C>0$}. The \emph{passage time} of a self-avoiding lattice path $\gamma$ is defined as the sum of the edge-weights of all the edges on the path $\gamma$, i.e., \[ T(\gamma):=\sum_{\gamma\mbox{ contains } e} \tau_e. \] We adopt the convention that a path is a continuous curve in $\ensuremath{\mathbb{R}}^2$, as opposed to a sequence of lattice points or a sequence of nearest-neighbor edges. The passage time between two points $\point{u}$ and $\point{v}$ in $\ensuremath{\mathbb{Z}}^2$ is defined as \[ T(\point{u},\point{v}):=\inf\left\{ T(\gamma)\middle\vert\gamma\;\mbox{is a path joining $\point{u}$ and $\point{v}$}\right\}. \] Then $T$ is a random pseudo-metric on $\ensuremath{\mathbb{Z}}^2$.
It was shown by \cite{WiermanReh1978}, under the i.i.d.\ and nonnegative assumptions on the edge-weights, that the infimum above is attained for some paths; in our case there can be only one such path because the edge-weights are also assumed to be continuous; this path is called the \emph{geodesic} between $\point{u}$ and $\point{v}$ and will be denoted by $\Gamma(\point{u},\point{v})$. Since the edge-weights have finite expectation, the passage times $T(\point{u},\point{v})$ also have finite expectation for all $\point{u}$, $\point{v}$, and hence we can define \[ h(\point{u}):=\Exp T(\point{0},\point{u}). \] From the triangle inequality of $T$ it follows that $h$ is sub-additive, i.e., for any $\point{u},\point{v}\in\ensuremath{\mathbb{Z}}^2$ we have \[ h(\point{u}+\point{v})\leq h(\point{u})+h(\point{v}). \] By the subadditive ergodic theorem of \cite{Kingman1984}, we get that for any $\point{u}\in\ensuremath{\mathbb{Z}}^2$ the following limits exist almost surely and in $L^1$: \[ g(\point{u}):=\lim_{n\to\infty}\frac{T(\point{0},n\point{u})}{n}=\lim_{n\to\infty}\frac{h(n\point{u})}{n}=\inf_{n>0}\frac{h(n\point{u})}{n}. \] By taking appropriate subsequences the domain of $g$ can be extended to $\ensuremath{\mathbb{Q}}^2$. Then $g$ becomes a norm, and therefore, the domain of $g$ can be further extended to $\ensuremath{\mathbb{R}}^2$. The unit ball in $g$-norm is called the \emph{limit shape} and will be denoted by \[ \mathcal{B}:=\left\{\point{x}\in\ensuremath{\mathbb{R}}^2:g(\point{x})\leq 1\right\}. \] The shape theorem of \cite{CoxDur81} states, under conditions milder than our basic assumptions, that the \emph{wet region} at time $t$, defined as \[ \mathcal{B}(t):=\left\{\point{x}\in\ensuremath{\mathbb{Z}}^2:T(\point{0},\point{x})\leq t\right\}, \] approaches $\mathcal{B}$ in an appropriate sense as $t\to\infty$, and further, $\mathcal{B}$ is compact, convex, has a nonempty interior, and has all the symmetries of the lattice.
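Since $T$ is a shortest-path pseudo-metric on a weighted graph, passage times are easy to experiment with numerically via Dijkstra's algorithm. The sketch below is purely illustrative: the restriction to a finite box and the choice of Exp(1) edge-weights are ours, and on a box the computed value only upper-bounds the true passage time on all of $\ensuremath{\mathbb{Z}}^2$, since a geodesic may exit the box.

```python
import heapq
import random

def passage_time(weights, u, v):
    """Shortest-path passage time T(u, v) by Dijkstra's algorithm.
    `weights` maps each undirected nearest-neighbor edge, stored as a
    frozenset of its two endpoints, to its weight tau_e."""
    dist = {u: 0.0}
    heap = [(0.0, u)]
    done = set()
    while heap:
        d, x = heapq.heappop(heap)
        if x in done:
            continue
        done.add(x)
        if x == v:
            return d
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = (x[0] + dx, x[1] + dy)
            e = frozenset((x, y))
            if e in weights and d + weights[e] < dist.get(y, float("inf")):
                dist[y] = d + weights[e]
                heapq.heappush(heap, (dist[y], y))
    return float("inf")  # v unreachable within the given edge set

def random_box_weights(n, rng):
    """I.i.d. continuous (Exp(1)) weights on the edges of the box [0, n]^2."""
    w = {}
    for i in range(n + 1):
        for j in range(n + 1):
            if i < n:
                w[frozenset(((i, j), (i + 1, j)))] = rng.expovariate(1.0)
            if j < n:
                w[frozenset(((i, j), (i, j + 1)))] = rng.expovariate(1.0)
    return w
```

One can check directly on samples that $T$ behaves as a pseudo-metric: $T(u,u)=0$, $T(u,v)=T(v,u)$, and the triangle inequality holds.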
\begin{notn} We extend the domain of $T$ and define passage times between points in $\ensuremath{\mathbb{R}}^2$ as follows. For $\point{x}\in\ensuremath{\mathbb{R}}^2$ let $\lfloor\point{x}\rfloor$ be the lower-left corner of the unit square containing $\point{x}$ in the standard lattice of $\ensuremath{\mathbb{Z}}^2$. For $\point{x},\point{y}\in\ensuremath{\mathbb{R}}^2$, let \[ T(\point{x},\point{y}):=T(\lfloor\point{x}\rfloor,\lfloor\point{y}\rfloor). \] Similarly, by $\Gamma(\point{x},\point{y})$ we will mean the geodesic between $\lfloor\point{x}\rfloor$ and $\lfloor\point{y}\rfloor$. Also for $\point{x}\in\ensuremath{\mathbb{R}}^2$ let \[ h(\point{x}):=h(\lfloor\point{x}\rfloor). \] \label{notn:nonlattice} \end{notn} \begin{remark} Throughout the paper, $C, C_0, C_1, C_2, \dots$ will denote constants that depend only on the distribution of the edge-weights. We will restart the numbering of the $C_i$s in each proof, with the exception that when we break the proof of a theorem into smaller results, we keep a continuous numbering even in the proofs of the smaller results. Also, when we need to use a result which has been proved before, we use the ``tilde-version" of the variables and parameters.\end{remark} \begin{remark} Extending the definition of $T$ from $\ensuremath{\mathbb{Z}}^2$ to $\ensuremath{\mathbb{R}}^2$ introduces a minor technical issue. Although $g(\point{x})\leq h(\point{x})$ for all $\point{x}\in\ensuremath{\mathbb{Z}}^2$, this may not be true for $\point{x}\in\ensuremath{\mathbb{R}}^2$. Instead we have $g(\point{x})-\Cl{n1}\leq h(\point{x})$ for some constant $\Cr{n1}>0$.
Similarly, for $\point{x},\point{y}\in\ensuremath{\mathbb{R}}^2$ we have $h(\point{x}+\point{y})-\Cl{n2}\leq h(\point{x})+h(\point{y})$ for some constant $\Cr{n2}>0$.\label{remark:nonlattice1}\end{remark} \subsection{Heuristics of the Main Results} It is common in the literature, for example in the works \cite{Newman95}, \cite{NewmanPiza1995}, \cite{LiceaNewman1996}, \cite{DamronHanson14}, \cite{DamronHanson2017}, to assume specific unproven properties of the limit shape boundary, such as differentiability and curvature, either locally or globally, which eliminate the possibility of facets or corners. These properties are believed to be valid under our assumptions, but there is no proof yet. We will also make similar assumptions. Suppose the boundary of the limit shape is differentiable at a direction $\theta$, and $\theta^t$ is the corresponding tangential direction. By \emph{transverse increments} we mean differences of the form $T(\point{0},\point{x})-T(\point{0},\point{y})$ where $\point{x}$ has direction $\theta$ and $\point{x}-\point{y}$ has direction ${\theta^t}$. Heuristically we can say what the order of the fluctuations of transverse increments should be. For this, we need the scaling exponents $\chi$ and $\xi$. It is believed that for FPP on Euclidean lattices of any dimension there exists an exponent $\chi$, called the \emph{`fluctuation exponent,'} such that $T(\point{0},\point{v})-h(\point{v})$ is of the order of $\ltwo{\point{v}}^\chi$ (here $\ltwo{\cdot}$ is the Euclidean norm). Also, it is believed that there exists an exponent $\xi$, called the \emph{`wandering exponent,'} such that the geodesic $\Gamma(\point{0},\point{v})$ wanders a distance of order $\ltwo{\point{v}}^\xi$ on average from the line joining $\point{0}$ and $\point{v}$. The two exponents are related by the equation $\chi=2\xi-1$, which has been proved by \cite{Chatterjee2013} assuming $\chi$ and $\xi$ exist in a certain sense.
In dimension $d=2$ it is believed that $\chi=1/3$ and $\xi=2/3$. In $d=3$ it is believed that $\chi$ is approximately $1/4$, and in higher dimensions there does not seem to be a consensus even among physicists about the values of $\chi$ and $\xi$, see for example \cite{Marinari_2002}, \cite{Le_Doussal_2005}, \cite{Fogedby_2008}, \cite{Kim_2013}, \cite{Alves_2014}. In the closely related model of last passage percolation with exponential or geometric edge-weights, it has been proved that $\chi=1/3$ and $\xi=2/3$ in $d=2$, see \cite{KurtShape}, \cite{KurtTransverse}, \cite{BalazsCatorTimo2006}. If one assumes the existence of these exponents in some appropriate sense, then the fluctuations of the transverse increment $T(\point{0},\point{x})-T(\point{0},\point{y})$ should be of the order of $\ltwo{\point{x}-\point{y}}^{\chi/\xi}$. The heuristic for this is the following. We expect that the geodesics $\Gamma(\point{0},\point{x})$ and $\Gamma(\point{0},\point{y})$ stay disjoint after starting from $\point{x}$ and $\point{y}$ respectively for a distance of the order of $\ltwo{\point{x}-\point{y}}^{1/\xi}$. Then these two branches should contribute approximately independently $\ltwo{\point{x}-\point{y}}^{\chi/\xi}$ to the fluctuation. The right scale of the coalescence time as above has been proved by \cite{BasuSarkarSly2019} in the exactly solvable model of two-dimensional last passage percolation.
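For concreteness, the exponent bookkeeping behind this heuristic can be recorded as follows; this is only a restatement of the conjectural picture above, not a proved statement.

```latex
% Disjoint branches survive to the scale r, and each branch
% fluctuates at scale r^{\chi}:
\[
  r=\ltwo{\point{x}-\point{y}}^{1/\xi}
  \quad\Longrightarrow\quad
  r^{\chi}=\ltwo{\point{x}-\point{y}}^{\chi/\xi}.
\]
% With the conjectured planar values the exponents are mutually consistent:
\[
  \chi=\tfrac{1}{3},\qquad \xi=\tfrac{2}{3},\qquad
  \frac{\chi}{\xi}=\tfrac{1}{2},\qquad
  2\xi-1=\tfrac{1}{3}=\chi.
\]
```

In particular, the conjectured values make the transverse increment exponent $\chi/\xi$ equal to $1/2$, the exponent of Brownian increments.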
\begin{figure}[H] \centering \includegraphics[width=0.5\linewidth]{Fig1HeuInc.pdf} \bigskip \caption{Illustration for the heuristic of the exponent $\chi/\xi$: directions of $\point{x}$ and $\point{y}$ are approximately ${\theta_0}$; direction of $\point{y}-\point{x}$ is ${\theta_0^t}$; the two geodesics $\Gamma(\point{0},\point{x})$ and $\Gamma(\point{0},\point{y})$ are expected to coalesce approximately at distance $\ltwo{\point{x}-\point{y}}^{1/\xi}$ in $-{\theta_0}$ direction when traced starting from $\point{x}$ and $\point{y}$ respectively.} \end{figure}% One reason for studying the fluctuations of transverse increments is the following. In $d=2$, it is believed that the transverse increments behave like increments of Brownian motion, that is, the increments are approximately uncorrelated. If this is true, then the exponent for fluctuation of transverse increments should be $1/2$ so that $\chi/\xi=1/2$. This with $\chi=2\xi-1$ would imply $\chi=1/3$ and $\xi=2/3$. As an application of the upper bound on fluctuations of transverse increments we get an upper bound on long-range correlations. By long-range correlation we mean the correlation between $T(\point{0},\point{a})$ and $T(\point{0},\point{b})$ where $\point{a}$ and $\point{b}$ are points approximately in the same direction from the origin and distance between $\point{a}$ and $\point{b}$ is large compared to typical wanderings of the geodesics $\Gamma(\point{0},\point{a})$ and $\Gamma(\point{0},\point{b})$. Heuristically we can say the following about this correlation, see figure~\ref{fig:longrangeheu}. Assuming $\chi$ and $\xi$ exist, typical wandering of the geodesics $\Gamma(\point{0},\point{a})$ and $\Gamma(\point{0},\point{b})$ is of the order of $n^\xi$ where $n$ is the distance of the points $\point{a}$, $\point{b}$ from the origin. Suppose the distance between $\point{a}$ and $\point{b}$ is $J n^\xi$ for some large $J$. 
Then $\Gamma(\point{0},\point{a})$ and $\Gamma(\point{0},\point{b})$ are expected to branch apart at a distance $m$ from the origin such that the distance between the rays joining $\point{0}$ to $\point{a}$ and $\point{0}$ to $\point{b}$ at distance $m$ from the origin is of the order of the typical wandering of the geodesics at distance $m$ from the origin. So we have approximately $m^\xi/m=J n^\xi/n$. Hence $m=n J^{-(1-\xi)^{-1}}$. Then the covariance between $T(\point{0},\point{a})$ and $T(\point{0},\point{b})$ is expected to be of the order of $m^{2\chi}$. So the correlation should be of the order of $J^{-2\chi/(1-\xi)}$. \begin{figure}[H] \centering \includegraphics[width=0.2\linewidth,angle=90]{Fig2HeuLong.pdf} \bigskip \caption{Illustration for the heuristic of the exponent $2\chi/(1-\xi)$: directions of $\point{a}$ and $\point{b}$ are approximately ${\theta_0}$; direction of $\point{a}-\point{b}$ is ${\theta_0^t}$; distances of $\point{a}$ and $\point{b}$ from origin are approximately $n$; distance between $\point{a}$ and $\point{b}$ is $2Jn^\xi$; the two geodesics $\Gamma(\point{0},\point{a})$ and $\Gamma(\point{0},\point{b})$ are expected to branch apart at approximately distance $m$ from the origin where $m^\xi/m=Jn^\xi/n$.} \label{fig:longrangeheu}\end{figure}% \subsection{Advanced Assumptions} Along with our basic assumptions we make the following assumptions. Similar assumptions have been used in \cite{KenGeoBiGeo} and \cite{ganguly2020optimal}. We assume that there exists $\sigma:(0,\infty)\to(0,\infty)$ such that the following hold. \resetconstant \begin{assum} There exist positive constants $\Cl{A11}$, $\Cl{A12}$, $\Cl{A13}$, such that for all $\point{x},\point{y}\in\ensuremath{\mathbb{R}}^2$ with $\ltwo{\point{x}-\point{y}}\geq\Cr{A11}$, and all $t>0$, we have \begin{equation} \Prob\left(|T(\point{x},\point{y})-h(\point{x}-\point{y})|\geq t\sigma(\ltwo{\point{x}-\point{y}})\right) \leq\Cr{A12} e^{-\Cr{A13}t}.
\tag{A1} \label{A1} \end{equation} \label{asm:sigmaexp} \end{assum} \begin{assum} There exist constants $\mathfrak{p}>0$, $\mathfrak{q}>0$, $\alpha\in(0,1)$, $\beta\in(0,1)$, $\alpha\leq\beta$, such that for all $x>y>0$ we have \begin{equation} \mathfrak{p}\left(\frac{x}{y}\right)^\alpha\leq\frac{\sigma(x)}{\sigma(y)}\leq\mathfrak{q}\left(\frac{x}{y}\right)^\beta. \tag{A2} \label{A2} \end{equation} \label{asm:sigmareg} \end{assum} \resetconstant \begin{assum} There exist positive constants $\epsilon$, $\Cl{A3}$, such that for $\point{x},\point{y}\in\ensuremath{\mathbb{R}}^2$ with $\ltwo{\point{x}-\point{y}}\geq\Cr{A3}$, we have \begin{equation} \Prob\left( T(\point{x},\point{y})\leq h(\point{x}-\point{y})-\epsilon\sigma(\ltwo{\point{x}-\point{y}})\right)\geq\epsilon, \quad\mbox{and}\quad \Prob\left( T(\point{x},\point{y})\geq h(\point{x}-\point{y})+\epsilon\sigma(\ltwo{\point{x}-\point{y}})\right)\geq\epsilon. \tag{A3}\label{A3}\end{equation} \label{asm:sigmavar} \end{assum} \begin{remark} By Remark~1.1 of \cite{KenGeoBiGeo}, we assume without loss of generality that $\sigma$ is monotonically increasing and continuous.\end{remark} \resetconstant \begin{remark} Under Assumption~\ref{asm:sigmaexp}, Assumption~\ref{asm:sigmavar} is equivalent to saying that there exist positive constants $\Cl{remark1.41}$, $\Cl{remark1.42}$, such that for all $\point{x},\point{y}\in\ensuremath{\mathbb{R}}^2$ with $\ltwo{\point{x}-\point{y}}\geq\Cr{remark1.41}$, we have \[ \Var(T(\point{x},\point{y}))\geq\Cr{remark1.42}\sigma^2(\ltwo{\point{x}-\point{y}}). 
\] \end{remark} \begin{remark} The assumption $\beta<1$ is natural because, from results of \cite{Kesten1993}, the passage times are known to satisfy exponential concentration with scaling exponent $1/2$, which shows that under any reasonable definition of $\chi$ we must have $\chi\leq 1/2$.\end{remark} \begin{remark} The assumption $\alpha>0$ is natural because, for a certain definition of $\chi$ and $\xi$, it was shown by \cite{WehrAizenman1990} that $\chi\geq (1-(d-1)\xi)/2$ in $d$ dimensions, which in two dimensions coupled with $\chi\geq 2\xi-1$ and $\chi\leq 1/2$ yields $\chi\geq 1/8$.\end{remark} \begin{defn} We say that a direction ${\theta_0}$ is of \emph{type I} if there exist constants $C>0$, $\delta_1>0$, $\delta_2>0$ as follows. The limit shape boundary $\partial\mathcal{B}$ is differentiable in a neighborhood $({\theta_0}-\delta_1,{\theta_0}+\delta_1)$ of the direction ${\theta_0}$, and for $|\delta|\leq\delta_2$, ${\theta}\in({\theta_0}-\delta_1,{\theta_0}+\delta_1)$, we have \[ g({\Unit{\th}}+\delta{\Unit{\tht}})-g({\Unit{\th}})\geq C\delta^2 \] where ${\theta^t}$ is the direction of the tangent to $\partial\mathcal{B}$ at the point in direction ${\theta}$. \end{defn} \begin{remark} We take the direction of the tangents in counter-clockwise direction around the limit shape boundary.\end{remark} \begin{remark} An alternative formulation of a type I direction is the following: there exist constants $C>0$, $\delta_1>0$ such that $\partial\mathcal{B}$ is differentiable in the sector $({\theta_0}-\delta_1,{\theta_0}+\delta_1)$, and for ${\theta}\in({\theta_0}-\delta_1,{\theta_0}+\delta_1)$, we have \[ g({\Unit{\th}}+\delta{\Unit{\tht}})-g({\Unit{\th}})\geq C\min\left\{|\delta|,\delta^2\right\}. \] \end{remark} \begin{defn} We say that a direction ${\theta_0}$ is of \emph{type II} if there exist constants $C>0$, $\delta_1>0$, $\delta_2>0$ as follows.
The limit shape boundary $\partial\mathcal{B}$ is differentiable in a neighborhood $({\theta_0}-\delta_1,{\theta_0}+\delta_1)$ of the direction ${\theta_0}$ and for $|\delta|\leq\delta_2$, ${\theta}\in({\theta_0}-\delta_1,{\theta_0}+\delta_1)$, we have \begin{equation} g({\Unit{\th}}+\delta{\Unit{\tht}})-g({\Unit{\th}})\leq C\delta^2, \label{eq:defcurvup} \end{equation} where ${\theta^t}$ is the direction of the tangent to $\partial\mathcal{B}$ at the point in direction ${\theta}$. \end{defn} \begin{remark} In a neighborhood of a type I direction the limit shape boundary cannot have a facet and in a neighborhood of a type II direction the limit shape boundary cannot have a corner. \end{remark} \begin{remark} As observed in Remark~1.2 of \cite{KenGeoBiGeo}, the condition in \eqref{eq:defcurvup} can be alternatively stated as follows. If $\point{u}_{\theta}$ is the point on $\partial\mathcal{B}$ in direction ${\theta}$, then in a neighborhood of $\point{u}_{\theta}$, the boundary is sandwiched between the tangent at $\point{u}_{\theta}$ and a parabola tangent to $\partial\mathcal{B}$ at $\point{u}_{\theta}$. This implies that the direction of the tangent grows at most linearly in a neighborhood of ${\theta}$. So, if ${\theta_0}$ is a direction of type II, then there exist constants $C>0$, $\delta_1>0$, $\delta_2>0$ such that $\partial\mathcal{B}$ is differentiable in the directions $({\theta_0}-\delta_1,{\theta_0}+\delta_1)$, and for ${\theta}_1,{\theta}_2\in({\theta_0}-\delta_1,{\theta_0}+\delta_1)$ with $|{\theta}_1-{\theta}_2|\leq\delta_2$ we have $|{\theta}_1^t-{\theta}_2^t|\leq C|{\theta}_1-{\theta}_2|$. \label{Remark:linear}\end{remark} \subsection{Main Results} \begin{notn} For any direction ${\theta}$ the unit vector in direction ${\theta}$ is denoted by ${\Unit{\th}}$. 
Given two distinct directions ${\theta}_1$, ${\theta}_2$, define the projections $\pi^1_{{\theta}_1,{\theta}_2}$ and $\pi^2_{{\theta}_1,{\theta}_2}$ so that for any $\point{v}$ we have \[ \point{v}=\pi^1_{{\theta}_1,{\theta}_2}(\point{v})\Unit{{\theta}_1}+\pi^2_{{\theta}_1,{\theta}_2}(\point{v})\Unit{{\theta}_2}. \] We will denote the standard unit vectors in $\ensuremath{\mathbb{R}}^2$ by $\Unit{1}$ and $\Unit{2}$. \end{notn} \begin{notn} For $n>0$ let $\Delta(n):=\left( n\sigma(n)\right)^{1/2}$.\end{notn} Our first main result is the following. \begin{thm} Let ${\theta_0}$ be a direction of both type I and II. Define for $n>0$, $L>0$, \[ \segment{I}(n,L):=\left\{\point{x}\in\ensuremath{\mathbb{R}}^2:\pi^1_{\theta_0,\theta_0^t}(\point{x})=n,\; 0\leq\pi^2_{\theta_0,\theta_0^t}(\point{x})\leq L\right\} \] and \[ \dev{D}(n,L):=\max\left\{|T(\point{0},\point{x})-T(\point{0},\point{y})|:\point{x},\point{y}\in\segment{I}(n,L)\right\}. \] Fix $\eta\in(0,1]$. Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist positive constants $C_1$, $C_2$, $L_0$, $n_0$, $t_0$, such that for $L\geq L_0$, $n\geq n_0$, $t\geq t_0$, $L\leq\Delta(n)$, we have \[ \Prob\left(\dev{D}(n,L)\geq t(\log L)^\eta\sigma\left(\Delta^{-1}(L)\right)\Rp \leq C_1 \exp\left(-C_2 t(\log L)^\eta\right). \] \label{thm:loglogupinc} \end{thm} The following theorem is our lower bound on the fluctuations of transverse increments. We show that the standard deviation of the transverse increment between two points at distance $L$ is at least of the order of $\sigma\left(\Delta^{-1}(L)\right)$, with a correction factor smaller than any power of $L$. \begin{thm} Let ${\theta_0}$ be a direction of both type I and II. Fix $\nu\in(1/2,1)$.
Then, under Assumptions~\ref{asm:sigmaexp}, \ref{asm:sigmareg}, and \ref{asm:sigmavar}, there exist positive constants $L_0$, $n_0$, such that for $L\geq L_0$, $n\geq n_0$, $L\leq\Delta(n)$, we have \[ \Var\left( T(\point{0},n{\Unit{\thn}})-T(\point{0},n{\Unit{\thn}}+L{\Unit{\thnt}})\right) \geq e^{-(\log L)^{\nu}}\sigma^2\left(\Delta^{-1}(L)\right). \] \label{thm:lowmain} \end{thm} As a corollary of Theorems~\ref{thm:loglogupinc} and \ref{thm:lowmain} we get the following result. It shows that if we assume $\chi$ and $\xi$ exist in a certain sense, then $\chi/\xi$ is the correct scaling exponent for the fluctuations of transverse increments. \begin{cor} Suppose there exists $\chi>0$ such that \[ \lim_{x\to\infty}\frac{\log\sigma(x)}{\log x}=\chi, \] and let \[ \xi:=\frac{1+\chi}{2}=\lim_{x\to\infty}\frac{\log\Delta(x)}{\log x}. \] Let ${\theta_0}$ be a direction of both type I and II. Then, under Assumptions~\ref{asm:sigmaexp}, \ref{asm:sigmareg}, and \ref{asm:sigmavar}, there exist functions $f_1$, $f_2$, $f_3$, which converge to $0$ at $\infty$, and positive constants $C_1$, $C_2$, $C_3$, $n_0$, $L_0$, $t_0$, such that for $n\geq n_0$, $L\geq L_0$, $t\geq t_0$, $L\leq n^{\xi+f_1(n)}$, we have \[ \Prob\left( |T(\point{0},n{\Unit{\thn}})-T(\point{0},n{\Unit{\thn}}+L{\Unit{\thnt}})|\geq t L^{\chi/\xi + f_2(L)} \right) \leq C_1 \exp\left(-C_2 t\right), \] and \[ \Var\left( T(\point{0},n{\Unit{\thn}})-T(\point{0},n{\Unit{\thn}}+L{\Unit{\thnt}})\right)\geq C_3 L^{2\chi/\xi+f_3(L)}. \] \end{cor} \noindent\begin{proof}[\emph{\textbf{Proof:}}] Fix $\eta\in(0,1)$ and $\nu\in(1/2,1)$. Define $f_1$, $f_2$, $f_3$ such that for all $x>1$, \begin{align*} & x^{\xi+f_1(x)} = \Delta(x),\\ & x^{\chi/\xi+f_2(x)} = \sigma(\Delta^{-1}(x))(\log x)^{\eta},\\ & x^{2\chi/\xi+f_3(x)} = e^{-(\log x)^{\nu}}\sigma^2(\Delta^{-1}(x)). \end{align*} Then $f_1$, $f_2$, $f_3$ converge to $0$ at $\infty$, and the result readily follows from Theorems~\ref{thm:loglogupinc} and \ref{thm:lowmain}.
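To spell out one instance of this convergence (a routine check, recorded here for the reader's convenience): taking logarithms in the defining relation $x^{\xi+f_1(x)}=\Delta(x)$ gives
\[
f_1(x)=\frac{\log\Delta(x)}{\log x}-\xi\longrightarrow 0\quad\text{as }x\to\infty,
\]
by the definition of $\xi$. The convergence of $f_2$ follows similarly, writing $y=\Delta^{-1}(x)$ and using
\[
\frac{\log\sigma\left(\Delta^{-1}(x)\right)}{\log x}=\frac{\log\sigma(y)}{\log y}\cdot\frac{\log y}{\log\Delta(y)}\longrightarrow\chi\cdot\frac{1}{\xi}=\frac{\chi}{\xi},
\]
together with the fact that $\log\left((\log x)^\eta\right)=o(\log x)$.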
\end{proof} \begin{notn} Let $f(n):=\Delta(n)(\log n)^{1/2}/n$ and $f^{-1}(y):=\sup\left\{ x:f(x)\geq y\right\}$.\end{notn} \begin{remark} Since $\beta<1$ by \eqref{A2}, and because we have assumed $\sigma$ is monotonically increasing and continuous, we get that $f$ is continuous and goes to $0$ at $\infty$. Therefore, $f^{-1}$ is continuous, monotonically decreasing, and converges to $0$ at $\infty$.\end{remark} Now we state our result on the upper bound for long-range correlations. \begin{thm} Let ${\theta_0}$ be a direction of both type I and II. Recall $\beta$ and $\mathfrak{q}$ from \eqref{A2}. Fix $\delta\in(0,(1-\beta)/2)$. Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist positive constants $C$, $J_0$, $n_0$, such that for $n\geq n_0$, $J\in[\mathfrak{q}^{1/2}J_0,n^\delta]$, we have \begin{multline*} \Cov\left( T(\point{0},n{\Unit{\thn}}-J\Delta(n)(\log n)^{1/2}{\Unit{\thnt}}), T(\point{0},n{\Unit{\thn}}+J\Delta(n)(\log n)^{1/2}{\Unit{\thnt}})\right)\\ \leq C\sigma^2\left( f^{-1}\left(\frac{J}{J_0}f(n)\right)\Rp\log n. \end{multline*} \label{thm:longrange} \end{thm} The following corollary shows how we can get the exponent $2\chi/(1-\xi)$ under further regularity assumptions on $\sigma$. \begin{cor} Suppose $\sigma(n)=n^{\chi} L(n)$, where $L$ is a slowly varying function. Let $\xi:=(1+\chi)/2$. Fix $\delta_1\in(0,(1-\beta)/2)$. Let ${\theta_0}$ be a direction of both type I and II. Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist positive constants $C$, $J_0$, such that the following holds. For any $\delta_2>0$ there exists $n_0>0$ such that for $n\geq n_0$ and $J\in[\mathfrak{q}^{1/2}J_0,n^{\delta_1}]$, we have \begin{multline*} \Cor\left( T(\point{0},n{\Unit{\thn}}-J\Delta(n)(\log n)^{1/2}{\Unit{\thnt}}), T(\point{0},n{\Unit{\thn}}+J\Delta(n)(\log n)^{1/2}{\Unit{\thnt}})\right)\\ \leq C J^{-2\chi/(1-\xi)+\delta_2}\log n.
\end{multline*} \label{cor:longcor} \end{cor} \noindent\begin{proof}[\emph{\textbf{Proof:}}]\resetconstant From Theorem~\ref{thm:longrange} we get positive constants $\Cl{longcor1}$, $J_0$, $n_0$, such that for $n\geq n_0$, $J\in[\mathfrak{q}^{1/2}J_0,n^{\delta_1}]$, we have \begin{equation} \Cov\left( T(\point{0},\point{a}),T(\point{0},\point{b})\right)\leq\Cr{longcor1}\sigma^2(m)\log n, \label{eq:longrangecor0} \end{equation} where $\point{a}:=n{\Unit{\thn}}+J\Delta(n)(\log n)^{1/2}{\Unit{\thnt}}$, $\point{b}:=n{\Unit{\thn}}-J\Delta(n)(\log n)^{1/2}{\Unit{\thnt}}$, $m:=f^{-1}(Jf(n)/J_0)$. Using $J\leq n^{\delta_1}$, $\delta_1<(1-\beta)/2$, and \eqref{A2}, we get $\ltwo{\point{a}}\leq\Cl{longcor3}n$, $\ltwo{\point{b}}\leq\Cr{longcor3}n$. Hence, using \eqref{eq:longrangecor0} and \eqref{A2}, we get \begin{equation} \Cor\left( T(\point{0},\point{a}),T(\point{0},\point{b})\right)\leq\C\frac{\sigma^2(m)}{\sigma^2(n)}\log n. \label{eq:longrangecor1} \end{equation} From $m=f^{-1}(Jf(n)/J_0)$ we get \[ J_0\frac{\Delta(m)(\log m)^{1/2}}{m}=J\frac{\Delta(n)(\log n)^{1/2}}{n}. \] Therefore, using $J\in[\mathfrak{q}^{1/2}J_0,n^{\delta_1}]$, $\delta_1<(1-\beta)/2$, and \eqref{A2}, we get $m\leq n$ and $\log m\geq\Cl{longcor2}\log n$. Fix $\delta_2>0$ and let $\delta_3>0$ be such that \begin{equation} \frac{2\chi}{1-\xi}-\delta_2\leq\frac{2\chi-2\delta_3}{1-\xi+\delta_3/2}. \label{eq:longrangecor3} \end{equation} Since $L$ is slowly varying, by possibly increasing $n_0$ based on $\delta_3$, we get \begin{equation} \frac{L(n)}{L(m)}\geq\left(\frac{n}{m}\right)^{-\delta_3}. \label{eq:longrangecor2} \end{equation} Therefore, using $\Delta(n)(\log n)^{1/2}=n^\xi L(n)^{1/2} (\log n)^{1/2}$ we get \[ \left(\frac{n}{m}\right)^{1-\xi} =\frac{J}{J_0}\left(\frac{L(n)\log n}{L(m)\log m}\right)^{1/2} \geq\frac{J}{J_0}\left(\frac{n}{m}\right)^{-\delta_3/2}.
\] Combining this with \eqref{eq:longrangecor2} and \eqref{eq:longrangecor3}, we get \[ \frac{\sigma^2(m)}{\sigma^2(n)} =\left(\frac{m}{n}\right)^{2\chi}\left(\frac{L(m)}{L(n)}\right)^2 \leq\left(\frac{m}{n}\right)^{2\chi-2\delta_3} \leq\left(\frac{J_0}{J}\right)^{\frac{2\chi-2\delta_3}{1-\xi+\delta_3/2}} \leq J_0^{\frac{2\chi}{1-\xi}}J^{-\frac{2\chi}{1-\xi}+\delta_2}. \] Combining this with \eqref{eq:longrangecor1} completes the proof of Corollary~\ref{cor:longcor}. \end{proof} \section{Wandering of Geodesics}\label{sec:wanderinglog} In this section, our objective is to prove some results regarding upper bounds on the wandering of geodesics. We start with a preliminary bound. \begin{notn} For any set $A\subset\ensuremath{\mathbb{R}}^2$ let \[ \Diam(A):=\sup\left\{\ltwo{\point{x}-\point{y}}:\point{x},\point{y}\in A\right\}. \] \end{notn} \begin{lemma} There exist positive constants $C_1$, $C_2$, $C_3$, such that for all $\point{u},\point{v}\in\ensuremath{\mathbb{R}}^2$ with $\ltwo{\point{u}-\point{v}}\geq C_1$ we have \[ \Prob\left(\Diam(\Gamma(\point{u},\point{v}))\geq C_2\ltwo{\point{u}-\point{v}}\right)\leq e^{-C_3\ltwo{\point{u}-\point{v}}}. \] \label{lem:boxwand} \end{lemma} Since our basic assumptions include that the edge-weights have finite exponential moments, the proof of Lemma~\ref{lem:boxwand} follows readily from Proposition~5.8 of \cite{Kesten1993}. Moreover, the conclusion of Lemma~\ref{lem:boxwand} has been shown to hold under milder assumptions in Theorem~6.2 of \cite{AuffingerDamronHanson2015}. In the $g$-norm, the shortest path between two points is, of course, the straight line segment joining them. So when a geodesic wanders transversely from the line joining the endpoints, the extra distance covered by the geodesic in the $g$-norm can be thought of as a cost for excessive wandering. The following lemma, which is essentially the same as Lemma~2.3 of \cite{KenGeoBiGeo}, provides a lower bound for this cost. We state it without proof.
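Before the formal statement, the following elementary computation may be kept in mind; it is only a heuristic for the Euclidean norm, not for the general norm $g$ appearing in the lemma. A path from $\point{0}$ to $n\Unit{1}$ that is forced through the point $\frac{n}{2}\Unit{1}+d\,\Unit{2}$ has Euclidean length at least
\[
2\sqrt{\frac{n^2}{4}+d^2}=n+\frac{4d^2}{2\sqrt{n^2/4+d^2}+n}\geq n+\frac{2d^2}{n+|d|}\geq n+\min\left\{|d|,\frac{d^2}{n}\right\},
\]
so the excess over the straight-line distance is at least of the order $\min\{|d|,d^2/n\}$, which is exactly the scale of the lower bound below.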
\begin{lemma} Let ${\theta_0}$ be a direction of type I. Then there exist positive constants $C$, $\delta$ such that for $n>0$, $k$, $l$, $d$, satisfying $|l|/n\leq\delta$, we have \[ g(\point{u}) + g(\point{a}-\point{u})-g(\point{a})\geq C\min\left\{ |d|,\frac{d^2}{n}\right\}, \] where $\point{a}:=n{\Unit{\thn}}+l{\Unit{\thnt}}$ and $\point{u}:=k{\Unit{\thn}}+(l\frac{k}{n}+d){\Unit{\thnt}}$. \label{lem:auxgeom1} \end{lemma} Geodesics cannot wander too much because the cost associated with the $g$-norm becomes too large to be compensated by the fluctuations of the passage times. Thus, bounds on the fluctuations $T(\point{0},\point{x})-g(\point{x})$, combined with Lemma~\ref{lem:auxgeom1}, yield upper bounds on geodesic wandering. By Assumption~\ref{asm:sigmaexp} we know that $T(\point{0},\point{x})-h(\point{x})$ satisfies exponential concentration on the scale $\sigma(\ltwo{\point{x}})$ uniformly over $\point{x}$. So we need a bound on the differences $h(\point{x})-g(\point{x})$. These differences are known as nonrandom fluctuations in the literature. A general method of bounding the nonrandom fluctuations was developed in \cite{Ken93,Ken97}. There it was shown, using exponential concentration of $T(\point{0},\point{x})-h(\point{x})$ on the scale of $\ltwo{\point{x}}^{1/2}$ from \cite{Kesten1993}, that $h(\point{x})-g(\point{x})$ is at most of the order of $\ltwo{\point{x}}^{1/2}\log\ltwo{\point{x}}$. In our case, Alexander's method can be used with no significant alteration to yield a bound of the order of $\sigma(\ltwo{\point{x}})\log\ltwo{\point{x}}$, which we state below. Improvements to the logarithmic bound have been made in \cite{Tessera2018} and \cite{DamronWang} in related models, which we will briefly discuss in Section~\ref{sec:ghloglog}. We will also improve the logarithmic bound to $\sigma(\ltwo{\point{x}})(\log\ltwo{\point{x}})^\eta$ for arbitrarily small $\eta>0$ in Section~\ref{sec:ghloglog}.
This improvement is necessary to prove the lower bound result Theorem~\ref{thm:lowmain}. To state the bound on the nonrandom fluctuations we use the notion of `general approximation property' from \cite{Ken97}. \begin{notn} Let $\Phi$ be the set of functions from $(0,\infty)$ to $[0,\infty)$ such that for every $\phi$ there exists some $C\geq 0$ such that $\phi(x)>0$ for $x>C$ and $\inf_{x>y>C}\phi(x)/\phi(y)>0$. For $\eta\in(0,1]$, define $\phi_\eta(k)=0$ for $k\leq 1$, and for $k>1$ \[ \phi_\eta(k):=k^{-\alpha}\sigma(k)(\log k)^\eta. \] Also define $\widehat{\phi}(k)=0$ for $k\leq 2$, and for $k>2$ \[ \widehat{\phi}(k):=k^{-\alpha}\sigma(k)\log\log k. \] So $\widehat{\phi}$ and $\phi_\eta$ for all $\eta\in(0,1]$ belong to $\Phi$. \label{notn:phi}\end{notn} \begin{defn} For $\nu\geq 0$ and $\phi\in\Phi$ we say that $h$ satisfies the \emph{general approximation property} with exponent $\nu$ and correction factor $\phi$ in a sector of directions $\mathcal{S}$ if there exist positive constants $C$ and $M$ such that for all $\point{x}\in\ensuremath{\mathbb{R}}^2$ with $\ltwo{\point{x}}\geq M$ and having direction in $\mathcal{S}$ we have \[ h(\point{x})\leq g(\point{x})+C\ltwo{\point{x}}^\nu\phi(\ltwo{\point{x}}). \] When we want to specify the relevant constants, we say $h$ satisfies $\text{GAP}(\nu,\phi,M,C)$ in sector $\mathcal{S}$. \end{defn} In \cite{Ken97}, the class of correction factors consisted of non-decreasing functions and the general approximation property was not restricted to any particular set of directions. These are some minor modifications we need in our setup. As we mentioned before, we get the following result in our context by following Alexander's method. 
\begin{prop} Under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist positive constants $C$, $M$, such that $h$ satisfies $\text{GAP}(\alpha,\phi_1,M,C)$ in all directions, i.e., for all $\ltwo{\point{x}}\geq M$, we have \[ h(\point{x}) \leq g(\point{x}) + C\sigma(\ltwo{\point{x}})\log\ltwo{\point{x}}. \] \label{prop:nrflog}\end{prop} Let us now introduce notation to measure the wandering of geodesics. \begin{notn} Suppose ${\theta_0}$ is a direction where the boundary of the limit shape is differentiable. Let ${\theta_0^t}$ be the direction of the tangent. Let $\mathcal{W}(\point{u},\point{v},k,{\theta_0})$ denote the maximum wandering of the geodesic $\Gamma(\point{u},\point{v})$, in the $\pm{\theta_0^t}$ directions, from the line joining $\point{u}$ and $\point{v}$, when the geodesic is at a distance $k$ from $\point{u}$ in the ${\theta_0}$ direction. We allow $k$ to be negative as well. More precisely, \[ \mathcal{W}(\point{u},\point{v},k,{\theta_0}):=\max\left\{\left|\pi^2_{\theta_0,\theta_0^t}(\point{w}-\point{u})-k\frac{\pi^2_{\theta_0,\theta_0^t}(\point{v}-\point{u})}{\pi^1_{\theta_0,\theta_0^t}(\point{v}-\point{u})}\right|:\point{w}\in\Gamma(\point{u},\point{v}),\; \pi^1_{\theta_0,\theta_0^t}(\point{w}-\point{u})=k\right\}. \] \label{notn:wandering}\end{notn} In the next result, we show that if the distance between the endpoints is approximately $n$, then the wandering at a fixed but arbitrary distance $k$ from one of the endpoints is at most of the order of $\Delta(n)$, with some logarithmic correction factors. Thus, roughly speaking, this theorem deals with the global wandering of a geodesic. The tail bound in this result is sub-optimal for $k$ bigger than a large multiple of $n$, because in that case we get a better bound using Lemma~\ref{lem:boxwand}. In Theorem~\ref{thm:endwandlog} we consider the local wandering of geodesics. We show that at a distance $k$ from an endpoint the wandering is of the order of $\Delta(k)$ with some logarithmic factors.
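For orientation, the reader may keep in mind the conjectural KPZ scaling in two dimensions: if $\sigma(n)=n^{1/3}$, so that $\chi=1/3$, then $\Delta(n)=\left( n\sigma(n)\right)^{1/2}=n^{2/3}$, and the results of this section assert transversal fluctuations of order $n^{2/3}$, up to logarithmic correction factors, in agreement with the predicted wandering exponent
\[
\xi=\frac{1+\chi}{2}=\frac{2}{3}.
\]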
In Theorem~3 of \cite{BasuSarkarSly2019} and Theorem~4.4 of \cite{Basu_2019} similar bounds have been proved for geodesics in the integrable model of last passage percolation. There, however, the results are sharper, i.e., no logarithmic correction factors are involved, because in the integrable model of last passage percolation exact asymptotics of the distribution of the passage times are known. \begin{thm} Let ${\theta_0}$ be a direction of type I. Suppose $h$ satisfies GAP with exponent $\alpha$ and correction factor $\phi_\eta$ for some $\eta\in(0,1]$ in a sector $({\theta_0}-\delta,{\theta_0}+\delta)$. Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist positive constants $C_1$, $C_2$, $\delta_1$, $\delta_2$, $n_0$, $t_0$, such that for $n\geq n_0$, $t\geq t_0$, $t\Delta(n)(\log n)^{\eta/2}\leq n\delta_1$, $|l|\leq n\delta_2$, we have \[ \max_{k}\Prob\left(\mathcal{W}\left(\point{0},n{\Unit{\thn}}+l{\Unit{\thnt}},k,{\theta_0}\right)\geq t\Delta(n)(\log n)^{\eta/2}\right)\leq C_1 e^{- C_2 t^2(\log n)^{\eta}}. \] \label{thm:midptwand}\end{thm} \resetconstant \noindent\begin{proof}[\emph{\textbf{Proof:}}] Fix $\delta_1>0$, $\delta_2>0$, to be assumed appropriately small whenever required. Fix $n_0>0$, $t_0>0$, to be assumed appropriately large whenever required. Consider $n$, $t$, $l$ satisfying $n\geq n_0$, $t\geq t_0$, $t\Delta(n)(\log n)^{\eta/2}\leq n\delta_1$, $|l|\leq n\delta_2$. Let $\point{a}:=n{\Unit{\thn}}+l{\Unit{\thnt}}$. Assuming $\delta_2<1$ we get $\ltwo{\point{a}}\leq 2n$, so that, by Lemma~\ref{lem:boxwand}, the geodesic $\Gamma(\point{0},\point{a})$ stays inside a square of side length $\C n$ around $\point{0}$ with probability at least $1-e^{-\C n}$. Using $t\Delta(n)(\log n)^{\eta/2}\leq n\delta_1$, assuming $\delta_1$ is small enough, and using \eqref{A2}, we get $t^2(\log n)^\eta\leq \C n/\sigma(n)\leq\C n^{1-\alpha}$. Hence, the probability bound in the statement is trivial for $|k|\geq\Cl{244} n$.
So let us consider $k$ satisfying $|k|\leq\Cr{244} n$. We split the probability under consideration as \begin{multline} \Prob\left(\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\geq t\Delta(n)(\log n)^{\eta/2}\right) \leq \Prob\left(\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\geq n\right) \\+ \Prob\left(\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\in[t \Delta(n)(\log n)^{\eta/2},n]\right). \label{eq:midptwand0} \end{multline} For any point $\point{u}$ on $\Gamma(\point{0},\point{a})$ we have \begin{align*} 0 = & T(\point{0},\point{u})+T(\point{u},\point{a})-T(\point{0},\point{a})\\ = & \left( T(\point{0},\point{u})-h(\point{u})\right) + \left( T(\point{u},\point{a})-h(\point{a}-\point{u})\right) - \left( T(\point{0},\point{a})-h(\point{a})\right)\\ & + \left( h(\point{u})-g(\point{u})\right) + \left( h(\point{a}-\point{u})-g(\point{a}-\point{u})\right) - \left( h(\point{a})-g(\point{a})\right)\\ & + \left( g(\point{u})+g(\point{a}-\point{u})-g(\point{a})\right), \end{align*} so that \begin{align*} & |T(\point{0},\point{u})-h(\point{u})|+|T(\point{u},\point{a})-h(\point{a}-\point{u})|+|T(\point{0},\point{a})-h(\point{a})|\\ \geq & \left( h(\point{u})-g(\point{u})\right) + \left( h(\point{a}-\point{u})-g(\point{a}-\point{u})\right) - \left( h(\point{a})-g(\point{a})\right) + \left( g(\point{u}) + g(\point{a}-\point{u})-g(\point{a})\right)\\ \geq & \left( g(\point{u})+g(\point{a}-\point{u})-g(\point{a})\right)-(h(\point{a})-g(\point{a})).\numberthis\label{eq:midptwand1} \end{align*} Define a set of points $V$ in $\ensuremath{\mathbb{R}}^2$ as follows. If ${\theta_0^t}$ is one of the axial directions, then let $V$ be the set of lattice points $\point{u}$ with $\pi^1_{\theta_0,\theta_0^t}(\point{u})=k$. If ${\theta_0^t}$ is not an axial direction, then let $V$ be the set of intersection points of the integer lattice grid with the line $\pi^1_{\theta_0,\theta_0^t}(\point{u})=k$.
For $\point{u}\in V$ let \[ d(\point{u}):=\pi^2_{\theta_0,\theta_0^t}(\point{u})-\pi^1_{\theta_0,\theta_0^t}(\point{u})\frac{l}{n}. \] Let $V_1$ be the set of points $\point{u}\in V$ with $|d(\point{u})|\geq n$. Let $V_2$ be the set of points $\point{u}\in V$ with $t\Delta(n)(\log n)^{\eta/2}\leq |d(\point{u})|\leq n$. Thus, if $\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\geq n$, then $\Gamma(\point{0},\point{a})$ goes through a point $\point{u}\in V_1$. Assuming $\delta_2$ is small enough and using Lemma~\ref{lem:auxgeom1} we get \begin{equation} g(\point{u}) + g(\point{a}-\point{u}) - g(\point{a}) \geq\Cl{245}|d(\point{u})|. \label{eq:midptwand2} \end{equation} Using the fact that $h$ satisfies GAP with exponent $\alpha$ and correction factor $\phi_\eta$ in a neighborhood of ${\theta_0}$, and assuming $\delta_2$ is small enough, we get \begin{equation} h(\point{a}) - g(\point{a})\leq\Cl{246}\sigma(n)(\log n)^{\eta}. \label{eq:midptwand3} \end{equation} Using $|d(\point{u})|\geq n$, $|l|/n\leq\delta_2<1$, and $|k|\leq\Cr{244}n$, we get \[ \max\left\{\ltwo{\point{u}-\point{a}},\ltwo{\point{u}},\ltwo{\point{a}}\right\}\leq\C|d(\point{u})|. \] Therefore, using \eqref{A1} and \eqref{A2}, we get for all $t^\prime>0$ \[ \Prob\left(\max\left\{ |T(\point{0},\point{u}) - h(\point{u})|, |T(\point{u},\point{a}) - h(\point{a}-\point{u})|, |T(\point{0},\point{a}) - h(\point{a})| \right\}\geq t^\prime\sigma(|d(\point{u})|)\right) \leq \C e^{-\C t^\prime}. \] Using this with \eqref{eq:midptwand1}--\eqref{eq:midptwand3}, and \eqref{A2}, we get \begin{align*} & \Prob\left(\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\geq n\right) \\ \leq & \sum_{\point{u}\in V_1}\C\exp\left(-\C\left(\Cr{245}|d(\point{u})| - \Cr{246} \sigma(n)(\log n)^{\eta}\right)/\sigma(|d(\point{u})|)\right)\\ \leq & \sum_{\point{u}\in V_1}\C\exp\left(-\C |d(\point{u})|/\sigma(|d(\point{u})|)\right)\\ \leq & \C\exp\left( -\C n / \sigma(n)\right).
\numberthis\label{eq:midptwand4} \end{align*} If $\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\in[t\Delta(n)(\log n)^{\eta/2},n]$, then $\Gamma(\point{0},\point{a})$ goes through a point $\point{u}\in V_2$. Assuming $\delta_2$ is small enough and using Lemma~\ref{lem:auxgeom1} we get \begin{equation} g(\point{u}) + g(\point{a}-\point{u})-g(\point{a}) \geq \Cl{2416} \frac{|d(\point{u})|^2}{n} \geq \Cr{2416} t^2 \sigma(n)(\log n)^{\eta}. \label{eq:midptwand5} \end{equation} Since $|d(\point{u})|\leq n$, we have \[ \max\left\{\ltwo{\point{u}-\point{a}},\ltwo{\point{u}},\ltwo{\point{a}}\right\}\leq\C n. \] Hence, using Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg} we get, for all $t^\prime>0$ \[ \Prob\left(\max\left\{ |T(\point{0},\point{u}) - h(\point{u})|, |T(\point{u},\point{a}) - h(\point{a}-\point{u})|, |T(\point{0},\point{a}) - h(\point{a})|\right\}\geq t^\prime\sigma(n)\right)\leq\C e^{-\C t^\prime}. \] Using this with \eqref{eq:midptwand1}, \eqref{eq:midptwand3}, and \eqref{eq:midptwand5}, we get \begin{align*} & \Prob\left(\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\in[t\Delta(n)(\log n)^{\eta/2},n]\right)\\ \leq & \sum_{\point{u}\in V_2} \C\exp\left(-\C\left(\Cr{2416}\frac{|d(\point{u})|^2}{n}-\Cr{246}\sigma(n)(\log n)^\eta\right)/\sigma(n)\right)\\ \leq & \C\exp\left(-\C t^2(\log n)^{\eta}\right). \numberthis\label{eq:midptwand6} \end{align*} Assuming $\delta_1$ is small enough and combining \eqref{eq:midptwand4}, \eqref{eq:midptwand6}, and \eqref{eq:midptwand0}, we get \[ \Prob\left(\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\geq t\Delta(n)(\log n)^{\eta/2}\right) \leq \C e^{-\C t^2(\log n)^{\eta}}. \] This completes the proof of Theorem~\ref{thm:midptwand}. \end{proof} \begin{cor} Let ${\theta_0}$ be a direction of type I.
Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist positive constants $C_1$, $C_2$, $\delta_1$, $\delta_2$, $n_0$, $t_0$, such that for $n\geq n_0$, $t\geq t_0$, $t\Delta(n)(\log n)^{1/2}\leq n\delta_1$, $|l|\leq n\delta_2$, we have \[ \Prob\left(\max_k\mathcal{W}\left(\point{0},n{\Unit{\thn}}+l{\Unit{\thnt}},k,{\theta_0}\right)\geq t\Delta(n)(\log n)^{1/2}\right)\leq C_1 e^{-C_2 t^2\log n}. \] \label{cor:overallwand}\end{cor} \resetconstant \noindent\begin{proof}[\emph{\textbf{Proof:}}] Fix $\delta_1>0$, $\delta_2>0$, to be assumed appropriately small whenever required. Fix $n_0>0$, $t_0>0$, to be assumed appropriately large whenever required. Consider $n$, $t$, and $l$, satisfying $n\geq n_0$, $t\geq t_0$, $t\Delta(n)(\log n)^{1/2}\leq n\delta_1$, $|l|\leq n\delta_2$. Let $\point{a}:=n{\Unit{\thn}}+l{\Unit{\thnt}}$. Assuming $\delta_2<1$ we get $\ltwo{\point{a}}\leq 2n$, so that by Lemma~\ref{lem:boxwand}, the geodesic $\Gamma(\point{0},\point{a})$ stays inside a square of side length $\C n$ around $\point{0}$ with probability at least $1-e^{-\Cl{252} n}$. On this event $|\pi^1_{\theta_0,\theta_0^t}(\point{u})|\leq\Cl{251}n$ for all $\point{u}$ in $\Gamma(\point{0},\point{a})$. Assuming $\delta_1<1$ and using \eqref{A2} we get $t^2\log n\leq n/\sigma(n)\leq\C n^{1-\alpha}$. Thus \begin{equation} \Prob\left(\max_{|k|\geq\Cr{251}n}\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\geq t\Delta(n)(\log n)^{1/2}\right)\leq e^{-\Cr{252}n}\leq \C e^{-\C t^2\log n}. \label{eq:251} \end{equation} Using Proposition~\ref{prop:nrflog} and Theorem~\ref{thm:midptwand} and taking a union bound we get \begin{equation} \Prob\left(\max_{|k|\leq\Cr{251}n} \mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\geq t\Delta(n)(\log n)^{1/2}\right)\leq \C e^{-\C t^2\log n}. \label{eq:252} \end{equation} Combining \eqref{eq:251} and \eqref{eq:252} completes the proof. 
\end{proof} Now we show that the wandering of a geodesic at a distance $k$ from an endpoint is at most of the order of $\Delta(k)$ with some logarithmic correction factor. \begin{thm} Let ${\theta_0}$ be a direction of type I. Suppose $h$ satisfies GAP with exponent $\alpha$ and correction factor $\phi_\eta$ for some $\eta\in(0,1]$ in a neighborhood of ${\theta_0}$. Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist positive constants $C_1$, $C_2$, $\delta_1$, $\delta_2$, $k_0$, $n_0$, $t_0$, such that for $n\geq n_0$, $t\geq t_0$, $k\geq k_0$, $k<n$, $t\Delta(k)(\log k)^{\eta/2}\leq k\delta_1$, $|l|\leq n\delta_2$, we have \[ \Prob\left(\mathcal{W}\left(\point{0},n{\Unit{\thn}}+l{\Unit{\thnt}},k,{\theta_0}\right)\geq t\Delta(k)(\log k)^{\eta/2}\right)\leq C_1 e^{-C_2 t^2(\log k)^{\eta}}. \] \label{thm:endwandlog}\end{thm} \resetconstant \noindent\begin{proof}[\emph{\textbf{Proof:}}] Fix $\delta_1>0$, $\delta_2>0$, to be assumed appropriately small whenever required. Fix $k_0>0$, $n_0>0$, $t_0>0$, to be assumed appropriately large whenever required. Consider $k$, $n$, $l$, $t$ satisfying $n\geq n_0$, $t\geq t_0$, $k\geq k_0$, $k<n$, $t\Delta(k)(\log k)^{\eta/2}\leq k\delta_1$, $|l|\leq n\delta_2$. Let $\point{a}:=n{\Unit{\thn}}+l{\Unit{\thnt}}$. The strategy of the proof is as follows.
We will define two sequences of points $(\point{a}_p)_{p=0}^{m+1}$ and $(\point{b}_p)_{p=0}^{m+1}$ in such a manner that, if for all $p$, the geodesics $\Gamma(\point{0},\point{a}_p)$ and $\Gamma(\point{0},\point{b}_p)$ do not wander ``excessively'', then $\mathcal{W}(\point{0},\point{a},k,{\theta_0})\leq t\Delta(k)(\log k)^{\eta/2}$ as we require. We will define events $\left\{\event{E}_p^i:1\leq i\leq 4,0\leq p\leq m\right\}$ in which these geodesics do not wander excessively. We will then show that these events have appropriately high probability. Fix $1<\zeta<2/(1+\beta)$ and let $\epsilon:=1-\zeta(1+\beta)/2$. Fix $\lambda>1$, $\lambda^\prime>0$. Later we choose $\lambda$ so that \eqref{eq:endwandlog3} holds, and we choose $\lambda^\prime$ based on $\lambda$ so that \eqref{eq:lambdaprime} holds. Let $m\geq 0$ be such that $\lambda^m k < n \leq \lambda^{m+1} k$. Let $\point{a}_{m+1}:=\point{a}$, $\point{b}_{m+1}:=\point{a}$, and for $0\leq p\leq m$, let $\pi^1_{\theta_0,\theta_0^t}(\point{a}_p)=\lambda^{p} k$, $\pi^1_{\theta_0,\theta_0^t}(\point{b}_p)=\lambda^{p} k$, \begin{gather*} \pi^2_{\theta_0,\theta_0^t}(\point{a}_{p})=\frac{\pi^1_{\theta_0,\theta_0^t}(\point{a}_{p})}{\pi^1_{\theta_0,\theta_0^t}(\point{a}_{p+1})}\pi^2_{\theta_0,\theta_0^t}(\point{a}_{p+1})+\lambda^\prime t\Delta(\lambda^{\zeta p} k)(\log(\lambda^{\zeta p} k))^{\eta/2},\\ \pi^2_{\theta_0,\theta_0^t}(\point{b}_{p})=\frac{\pi^1_{\theta_0,\theta_0^t}(\point{b}_{p})}{\pi^1_{\theta_0,\theta_0^t}(\point{b}_{p+1})}\pi^2_{\theta_0,\theta_0^t}(\point{b}_{p+1})-\lambda^\prime t\Delta(\lambda^{\zeta p} k)(\log(\lambda^{\zeta p} k))^{\eta/2}.
\end{gather*} For any $0\leq p\leq q\leq m$, using \eqref{A2} we get \begin{multline*} \frac{\lambda^{-q}\Delta(\lambda^{\zeta q} k)(\log(\lambda^{\zeta q} k))^{\eta/2}}{\lambda^{-p}\Delta(\lambda^{\zeta p} k)(\log(\lambda^{\zeta p} k))^{\eta/2}} \leq\C\lambda^{-\epsilon(q-p)}\left(\frac{q\zeta\log\lambda+\log k}{p\zeta\log\lambda+\log k}\right)^{\eta/2}\\ \leq\C\lambda^{-\epsilon(q-p)}\left( 1+\frac{(q-p)\zeta\log \lambda}{p\zeta\log\lambda+\log k}\right)^{\eta/2} \leq\C\lambda^{-\epsilon(q-p)}\left( 1+(q-p)\log\lambda\right)^{\eta/2}. \end{multline*} Therefore, we can choose $\lambda^\prime$, depending on $\lambda$, such that \begin{equation} \sum_{q=p}^{m}\lambda^{-(q-p)}\Delta(\lambda^{\zeta q} k)(\log(\lambda^{\zeta q} k))^{\eta/2}\leq\frac{1}{\lambda^\prime}\Delta(\lambda^{\zeta p} k)(\log(\lambda^{\zeta p} k))^{\eta/2}.\label{eq:lambdaprime} \end{equation} Therefore, for $0\leq p \leq m$ \begin{gather*} \pi^2_{\theta_0,\theta_0^t}(\point{a}_{p}) = \lambda^{p} k \frac{l}{n} + \lambda^\prime t \sum_{q=p}^{m} \lambda^{-(q-p)}\Delta(\lambda^{\zeta q} k)(\log(\lambda^{\zeta q} k))^{\eta/2} \leq \lambda^{p} k\frac{l}{n} + t \Delta(\lambda^{\zeta p} k)(\log(\lambda^{\zeta p} k))^{\eta/2}\\ \pi^2_{\theta_0,\theta_0^t}(\point{b}_{p}) = \lambda^{p} k \frac{l}{n} - \lambda^\prime t \sum_{q=p}^{m} \lambda^{-(q-p)}\Delta(\lambda^{\zeta q} k)(\log(\lambda^{\zeta q} k))^{\eta/2} \geq \lambda^{p} k\frac{l}{n} - t \Delta(\lambda^{\zeta p} k)(\log(\lambda^{\zeta p} k))^{\eta/2}. \end{gather*} For $0\leq p\leq m$, let \[ \overline{\segment{I}}_p:=\left\{\point{x}\in\ensuremath{\mathbb{R}}^2:\pi^1_{\theta_0,\theta_0^t}(\point{x})=\lambda^p k\right\}. \] So $\point{a}_p$, $\point{b}_p$ are on this line. Let $\segment{I}_p$ be the segment joining $\point{a}_p$ and $\point{b}_p$. 
Therefore, $\mathcal{W}(\point{0},\point{a},k,{\theta_0})\leq t\Delta(k)(\log k)^{\eta/2}$ holds provided that, whenever $\Gamma(\point{0},\point{a})$ intersects $\overline{\segment{I}}_0$, it intersects $\segment{I}_0$. We will define the events $\left\{\event{E}^i_p:1\leq i\leq 4, 0\leq p\leq m\right\}$ in a way that, if $\mathbb{T}\in\cap_{i=1}^4\cap_{p=0}^m\event{E}^i_p$, then for all $0\leq p\leq m$, whenever $\Gamma(\point{0},\point{a})$ intersects $\overline{\segment{I}}_p$, it intersects $\segment{I}_p$; see Figure~\ref{fig:inductive}. \begin{figure}[H] \centering\includegraphics[width=0.4\linewidth]{Fig3Inductive.pdf} \caption{Rough sketch for $m=2$: $\point{a}_{m+1}=\point{b}_{m+1}=\point{a}$; for $0\leq p\leq m$ the segment joining $\point{a}_p$ and $\point{b}_p$ is $\segment{I}_p$, and extending $\segment{I}_p$ we get the line $\overline{\segment{I}}_p$; if for all $0\leq p\leq m$ the geodesics $\Gamma(\point{0},\point{a}_p)$ and $\Gamma(\point{0},\point{b}_p)$ do not wander excessively, then for all $0\leq p\leq m$ the geodesic $\Gamma(\point{0},\point{a})$ intersects $\segment{I}_p$ whenever it intersects $\overline{\segment{I}}_p$.}\label{fig:inductive}\end{figure} For $0\leq p \leq m$, define the events \begin{gather*} \event{E}^1_p:=\left\{\mathcal{W}\left(\point{0},\point{a}_{p+1},\lambda^p k,{\theta_0}\right)\leq\lambda^\prime t\Delta(\lambda^{\zeta p} k)(\log(\lambda^{\zeta p} k))^{\eta/2}\right\},\\ \event{E}^2_p:=\left\{\mathcal{W}\left(\point{0},\point{b}_{p+1},\lambda^p k,{\theta_0}\right)\leq\lambda^\prime t\Delta(\lambda^{\zeta p} k)(\log(\lambda^{\zeta p} k))^{\eta/2}\right\}. \end{gather*} By construction of the sequences $(\point{a}_p)_{p=0}^{m+1}$ and $(\point{b}_p)_{p=0}^{m+1}$, we get that for $0\leq p\leq m$, if $\mathbb{T}\in\event{E}_p^1\cap\event{E}_p^2$, then the geodesics $\Gamma(\point{0},\point{a}_{p+1})$ and $\Gamma(\point{0},\point{b}_{p+1})$ intersect $\segment{I}_p$ whenever they intersect $\overline{\segment{I}}_p$.
We want the geodesic $\Gamma(\point{0},\point{a})$ to intersect $\segment{I}_p$ whenever it intersects $\overline{\segment{I}_p}$, for all $0\leq p\leq m$; Figure~\ref{fig:inductive} shows the ideal situation. But $\mathbb{T}\in\cap_{p=0}^m (\event{E}_p^1\cap\event{E}_p^2)$ is not sufficient for this because the geodesics can backtrack. So we will define the events $\event{E}_p^3$ and $\event{E}_p^4$ for $0\leq p\leq m$ where the geodesics do not backtrack. Define the half-line \[ \segment{I}:=\left\{\point{x}:\pi^1_{\theta_0,\theta_0^t}(\point{x})\geq n,\frac{\pi^2_{\theta_0,\theta_0^t}(\point{x})}{\pi^1_{\theta_0,\theta_0^t}(\point{x})}=\frac{l}{n}\right\}. \] If in addition to $\mathbb{T}\in\cap_{p=0}^m(\event{E}_p^1\cap\event{E}_p^2)$, we also have that for all $0\leq p\leq m$, the geodesics $\Gamma(\point{0},\point{a}_p)$ and $\Gamma(\point{0},\point{b}_p)$ do not intersect $\segment{I}$, then we get that for all $0\leq p\leq m$ the geodesic $\Gamma(\point{0},\point{a})$ intersects $\segment{I}_p$ whenever it intersects $\overline{\segment{I}}_p$. Therefore we define the events $\event{E}^3_p$, $\event{E}^4_p$ for $0\leq p\leq m$ as follows. Let \begin{gather*} \event{E}^3_m:=\left\{\max_{n^\prime\geq n}\mathcal{W}\left(\point{0},\point{a}_{m},n^\prime,{\theta_0}\right)\leq\lambda^\prime t \Delta(\lambda^{\zeta m} k)(\log(\lambda^{\zeta m} k))^{\eta/2}\right\},\\ \event{E}^4_m:=\left\{\max_{n^\prime\geq n}\mathcal{W}\left(\point{0},\point{b}_{m},n^\prime,{\theta_0}\right)\leq\lambda^\prime t \Delta(\lambda^{\zeta m} k)(\log(\lambda^{\zeta m} k))^{\eta/2}\right\}, \end{gather*} and for $0\leq p <m$ let \[ \event{E}^3_p:=\left\{\text{Diam}\left(\Gamma(\point{0},\point{a}_{p})\right) \leq n\right\},\quad \event{E}^4_p:=\left\{\text{Diam}\left(\Gamma(\point{0},\point{b}_{p})\right) \leq n\right\}. 
\] So, for $0\leq p\leq m$, if $\mathbb{T}\in\event{E}^3_p$, then $\Gamma(\point{0},\point{a}_p)$ does not intersect $\segment{I}$, and if $\mathbb{T}\in\event{E}^4_p$, then $\Gamma(\point{0},\point{b}_p)$ does not intersect $\segment{I}$. Thus, if $\mathbb{T}\in\cap_{i=1}^4\cap_{p=0}^m\event{E}_p^i$, then for all $0\leq p\leq m$, whenever $\Gamma(\point{0},\point{a})$ intersects $\overline{\segment{I}}_p$ it intersects $\segment{I}_p$. In this event $\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\leq t\Delta(k)(\log k)^{\eta/2}$, as required. It remains to show that these events occur with appropriately high probability. We use Theorem~\ref{thm:midptwand} to bound $\Prob((\event{E}^i_p)^c)$ for $0\leq p\leq m$ and $i=1,2$. For a fixed $p$ and $i=1$ we use the following parameters. \[ \tilde{\eta}:=\eta,\quad \tilde{n}:=\pi^1_{\theta_0,\theta_0^t}(\point{a}_{p+1}),\quad \tilde{l}:=\pi^2_{\theta_0,\theta_0^t}(\point{a}_{p+1}),\quad \tilde{t}:=\lambda^\prime t \frac{\Delta(\lambda^{\zeta p} k)(\log(\lambda^{\zeta p} k))^{\eta/2}}{\Delta(\tilde{n})(\log\tilde{n})^{\eta/2}}. \] We need to verify $\tilde{n}\geq\tilde{n}_0$, $\tilde{t}\geq\tilde{t}_0$, $\tilde{t}\Delta(\tilde{n})(\log\tilde{n})^{\tilde{\eta}/2}\leq\tilde{n}\tilde{\delta}_1$, and $|\tilde{l}|\leq\tilde{n}\tilde{\delta}_2$. The condition $\tilde{n}\geq\tilde{n}_0$ holds by taking $k_0$ large enough, because $\tilde{n}\geq k\geq k_0$. Using $\tilde{n}\leq\lambda^{p+1}k$ and \eqref{A2} we get $\tilde{t}\geq\C t$. So $\tilde{t}\geq\tilde{t}_0$ holds by choosing $t_0$ large enough. Using $\tilde{n}\geq\lambda^p k$, \eqref{A2}, $t\Delta(k)(\log k)^{\eta/2}\leq k\delta_1$, and assuming $\delta_1$ is small enough, we get \[ \frac{1}{\tilde{n}}\tilde{t}\Delta(\tilde{n})(\log\tilde{n})^{\eta/2}\leq \C \frac{1}{\lambda^p k} t \Delta(\lambda^{p} k)(\log(\lambda^{p} k))^{\eta/2}\leq \C \frac{1}{k} t \Delta(k)(\log k)^{\eta/2}\leq \C \delta_1\leq\tilde{\delta}_1.
\] If $p=m$, then $|\tilde{l}|/\tilde{n}=|l|/n\leq\delta_2\leq\tilde{\delta}_2$. For $p<m$, using $t\Delta(k)(\log k)^{\eta/2}\leq k\delta_1$, $|l|\leq n\delta_2$, \eqref{A2}, and assuming $\delta_1$, $\delta_2$ are small enough, we get \begin{align*} \frac{|\tilde{l}|}{\tilde{n}}=\frac{|\pi^2_{\theta_0,\theta_0^t}(\point{a}_{p+1})|}{\pi^1_{\theta_0,\theta_0^t}(\point{a}_{p+1})} &\leq\frac{1}{\lambda^{p+1} k}\left(\lambda^{p+1} k\frac{|l|}{n}+t\Delta(\lambda^{\zeta (p+1)} k)(\log(\lambda^{\zeta (p+1)} k))^{\eta/2}\right)\\ &\leq\delta_2+\delta_1\frac{\Delta(\lambda^{\zeta (p+1)} k)(\log(\lambda^{\zeta (p+1)} k))^{\eta/2}}{\lambda^p\Delta(k)(\log k)^{\eta/2}}\\ &\leq\delta_2+\C\delta_1\lambda^{-\epsilon(p+1)}\left( 1+\frac{(p+1)\zeta\log\lambda}{\log k}\right)^{\eta/2}\\ &\leq\delta_2+\C\delta_1\leq\tilde{\delta}_2. \end{align*} So the conditions for applying Theorem~\ref{thm:midptwand} hold. Using $\tilde{n}\leq\lambda^{p+1}k$ and \eqref{A2} we get \[ \tilde{t}^2(\log\tilde{n})^\eta\geq\C t^2\lambda^{p\epsilon}(\log k)^{\eta}. \] Therefore, applying Theorem~\ref{thm:midptwand} we get \begin{equation} \Prob((\event{E}^1_p)^c)\leq \C\exp\left(-\C t^2\lambda^{p\epsilon}(\log k)^\eta\right). \label{eq:endwandlog1} \end{equation} Similar bounds hold for $\event{E}^2_p$ for $0\leq p\leq m$, and for $\event{E}^3_m$, $\event{E}^4_m$. For $0\leq p<m$, $ \ltwo{\point{a}_p} \leq 2 \lambda^p k $. We have $n>\lambda^m k$. Thus, assuming $\lambda$ is large enough, we get for $i=3,4$ and $0\leq p < m$, using Lemma~\ref{lem:boxwand}, \begin{equation} \Prob((\event{E}^i_p)^c)\leq \C\exp\left(-\C\lambda^p k\right). \label{eq:endwandlog3} \end{equation} Combining \eqref{eq:endwandlog1} and \eqref{eq:endwandlog3} we get \[ \Prob\left(\mathcal{W}\left(\point{0},\point{a},k,{\theta_0}\right)\geq t\Delta(k)(\log k)^{\eta/2}\right)\leq \Prob\left(\cup_{i=1}^4\cup_{p=0}^m(\event{E}^i_p)^c\right)\leq \C e^{-\C t^2 (\log k)^\eta}. \] This completes the proof of Theorem~\ref{thm:endwandlog}.
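Here the final display used the following summation, sketched for completeness; absorbing the terms from \eqref{eq:endwandlog3} additionally assumes $\lambda^p k\geq\C t^2(\log k)^\eta$ in the stated parameter range.

```latex
% Sketch: sum \eqref{eq:endwandlog1} over p, using
% \lambda^{p\epsilon} \geq 1 + cp for some c > 0 depending on \lambda.
\sum_{p=0}^{m}\exp\left(-\C t^2\lambda^{p\epsilon}(\log k)^\eta\right)
\leq e^{-\C t^2(\log k)^\eta}\sum_{p=0}^{\infty}e^{-\C p\,t^2(\log k)^\eta}
\leq \C e^{-\C t^2(\log k)^\eta}.
```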
\end{proof} As a corollary, we can control the wandering of a geodesic within a fixed distance from an endpoint. \begin{cor} Let ${\theta_0}$ be a direction of type I. Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist positive constants $C_1$, $C_2$, $\delta_1$, $\delta_2$, $k_0$, $n_0$, $t_0$, such that for $k\geq k_0$, $n\geq n_0$, $t\geq t_0$, $t\Delta(k)(\log k)^{1/2}\leq k\delta_1$, $|l|\leq n\delta_2$, we have \[ \Prob\left(\max_{k^\prime\leq k}\mathcal{W}\left(\point{0},n{\Unit{\thn}}+l{\Unit{\thnt}},k^\prime,{\theta_0}\right)\geq t\Delta(k)(\log k)^{1/2}\right)\leq C_1 e^{-C_2 t^2\log k}. \] \label{cor:endwandlogspl} \end{cor} \resetconstant \noindent\begin{proof}[\emph{\textbf{Proof:}}] Fix $\delta_1>0$, $\delta_2>0$, to be assumed appropriately small whenever required. Fix $k_0>0$, $n_0>0$, $t_0>0$, to be assumed large enough whenever required. Let $\point{a}:=n{\Unit{\thn}}+l{\Unit{\thnt}}$. Define \[ L:=\left\{\point{v}\in\ensuremath{\mathbb{R}}^2:\pi^1_{\theta_0,\theta_0^t}(\point{v})=k,|\pi^2_{\theta_0,\theta_0^t}(\point{v})-k\frac{l}{n}|\leq\frac{t}{2}\Delta(k)(\log k)^{1/2}\right\}. \] If $\max_{k^\prime\leq k}\mathcal{W}(\point{0},\point{a},k^\prime,{\theta_0})\geq t\Delta(k)(\log k)^{1/2}$, then either $\mathcal{W}(\point{0},\point{a},k,{\theta_0})\geq(t/2)\Delta(k)(\log k)^{1/2}$, or $\Gamma(\point{0},\point{a})$ passes through some $\point{v}\in L$ and $\max_{k^\prime\leq k}\mathcal{W}\left(\point{0},\point{v},k^\prime,{\theta_0}\right)\geq (t/2)\Delta(k)(\log k)^{1/2}$. In the first case, by Theorem~\ref{thm:endwandlog} and Proposition~\ref{prop:nrflog}, we get \begin{equation} \Prob\left(\mathcal{W}(\point{0},\point{a},k,{\theta_0})\geq\frac{t}{2}\Delta(k)(\log k)^{1/2}\right)\leq C_1 e^{-C_2 t^2\log k}. \label{eq:endlogspl1}\end{equation} For the second case, consider $\point{v}\in L$.
We will apply Corollary~\ref{cor:overallwand} with $\tilde{{\theta_0}}={\theta_0}$, $\tilde{n}=k$, $\tilde{l}=\pi^2_{\theta_0,\theta_0^t}(\point{v})$, $\tilde{t}=t$. Then \[ \frac{|\tilde{l}|}{\tilde{n}}\leq \frac{|l|}{n}+\frac{1}{k}\frac{t}{2}\Delta(k)(\log k)^{1/2}\leq \delta_2+\frac{\delta_1}{2}. \] Also \[ \frac{1}{\tilde{n}}\tilde{t}\Delta(\tilde{n})(\log\tilde{n})^{1/2}\leq \frac{1}{k}t\Delta(k)(\log k)^{1/2}\leq\delta_1. \] Therefore, assuming $\delta_1$, $\delta_2$ are small enough, we get \[ \Prob\left(\max_{k^\prime\in[0,k]}\mathcal{W}\left(\point{0},\point{v},k^\prime,{\theta_0}\right)\geq\frac{t}{2}\Delta(k)(\log k)^{1/2}\right)\leq C_3 e^{-C_4 t^2\log k}. \] Taking a union bound over the at most $2t\Delta(k)(\log k)^{1/2}$ lattice points $\point{v}\in L$, we get \begin{equation} \Prob\left(\max_{k^\prime\leq k}\mathcal{W}\left(\point{0},n{\Unit{\thn}}+l{\Unit{\thnt}},k^\prime,{\theta_0}\right)\geq t\Delta(k)(\log k)^{1/2}\right)\leq C_5 e^{-C_6 t^2\log k}. \label{eq:endlogspl2}\end{equation} Combining \eqref{eq:endlogspl1} and \eqref{eq:endlogspl2} completes the proof of Corollary~\ref{cor:endwandlogspl}. \end{proof} \begin{remark} Recall from Notation~\ref{notn:nonlattice} that the geodesic between points $\point{u}$, $\point{v}$ which are not necessarily lattice points is defined as the geodesic between $\lfloor\point{u}\rfloor$ and $\lfloor\point{v}\rfloor$. In Theorem~\ref{thm:midptwand}, Corollary~\ref{cor:overallwand}, Theorem~\ref{thm:endwandlog}, and Corollary~\ref{cor:endwandlogspl}, we are dealing with geodesics having one endpoint $\point{0}$, and we are measuring the wandering at a given distance from $\point{0}$. Later, while applying these results, we may have a point of $\ensuremath{\mathbb{R}}^2$ in place of $\point{0}$. This does not cause any major complication: bounds that hold for the wandering of $\Gamma(\point{0},\point{u})$ also hold for the wandering of $\Gamma(\point{v},\point{u}+\point{v})$, where $\point{u}$ and $\point{v}$ are not necessarily lattice points.
\label{remark:nonlattice2} \end{remark} \section{Preliminary Upper Bound of Transverse Increments} In this section, our objective is to prove a weaker version of Theorem~\ref{thm:loglogupinc}. This weaker version will be used to improve the bound on nonrandom fluctuations in Theorem~\ref{thm:nrf2}. This improved bound on nonrandom fluctuations is necessary for the proof of Theorem~\ref{thm:loglogupinc}. To prove the weaker version of Theorem~\ref{thm:loglogupinc} we need the following result on the curvature of the boundary of the limit shape. We skip the proof, as the result is essentially the same as Lemma~2.7 of \cite{KenGeoBiGeo}. \begin{lemma} Let ${\theta_0}$ be a direction of type II. Then there exist positive constants $C$, $\delta_1$, $\delta_2$, such that for $d$, $k>0$, $L$, satisfying $|d|\leq k\delta_1$, $|L|\leq k\delta_2$, we have \[ |g(k{\Unit{\thn}}+(d+L){\Unit{\thnt}}) - g(k{\Unit{\thn}}+d{\Unit{\thnt}})| \leq C\left(\frac{L^2}{k}+\frac{|d||L|}{k}\right). \] \label{lem:auxgeom2} \end{lemma} The preliminary upper bound of transverse increments is the following. \begin{thm} Let ${\theta_0}$ be a direction of both type I and II. For $n>0$, $L>0$, let $\segment{I}(n,L)$ and $\dev{D}(n,L)$ be as defined in Theorem~\ref{thm:loglogupinc}. Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist positive constants $C_1$, $C_2$, $L_0$, $n_0$, $t_0$, such that for $L\geq L_0$, $n\geq n_0$, $t\geq t_0$, $L\leq\Delta(n)$, we have \[ \Prob\left( \dev{D}(n,L) \geq t\log L\sigma(\Delta^{-1}(L))\right) \leq C_1 e^{-C_2 t\log L}. \] \label{thm:logupinc} \end{thm} \resetconstant \noindent\begin{proof}[\emph{\textbf{Proof:}}] Let us assume without loss of generality that ${\theta_0}\in[0,\pi/4]$. Fix $L_0>0$, $n_0>0$, $t_0>0$. We will assume $L_0$, $n_0$, $t_0$ are large whenever required. Consider $L\geq L_0$, $n\geq n_0$, $t\geq t_0$, $L\leq\Delta(n)$.
We consider two cases depending on the value of $t$, because for suitably large $t$ we are in a large-deviation regime where the proof is straightforward. \paragraph{\textbf{Case I:}} Suppose $t\geq 4\mu L(\sigma(\Delta^{-1}(L))\log L)^{-1}$, where $\mu$ is the expected edge-weight. Since $\segment{I}(n,L)$ has width $L$, the lattice points corresponding to the points in $\segment{I}(n,L)$ can be joined by a lattice path of at most $\lceil 2 L \rceil$ edges. Hence $\dev{D}(n,L)\leq X_1+\cdots+X_{\lceil 2 L \rceil}$, where the $X_i$ are i.i.d.\ random variables with the same distribution as the edge-weights. Therefore, \begin{multline*} \Prob\left( \dev{D}(n,L) \geq t\log L\sigma(\Delta^{-1}(L))\right) \leq \Prob\left( X_1+\cdots+X_{\lceil 2 L\rceil}\geq t\log L\sigma(\Delta^{-1}(L))\right)\\ \leq \C\exp\left(-\C t\log L\sigma(\Delta^{-1}(L))\right) \leq \C\exp\left(-\C t\log L\right). \end{multline*} \paragraph{\textbf{Case II:}} Suppose \begin{equation} t\leq 4\mu L(\sigma(\Delta^{-1}(L))\log L)^{-1}. \label{eq:logupinc0} \end{equation} Let \begin{multline} \segment{I}^\ast:=\left\{\point{x}\in\ensuremath{\mathbb{R}}^2: \pi^1_{\theta_0,\theta_0^t}(\point{x})=n-\Delta^{-1}(L),\right.\\ \left. \pi^2_{\theta_0,\theta_0^t}(\point{x})\in\left[-t^{1/2}L(\log\Delta^{-1}(L))^{1/2}, \left( 1-\frac{\Delta^{-1}(L)}{n}\right) L + t^{1/2}L\left(\log\Delta^{-1}(L)\right)^{1/2}\right] \vphantom{\point{x}\in\ensuremath{\mathbb{R}}^2}\right\}. \label{eq:logupinc1} \end{multline} Define the event \[ \event{E}: \mbox{$\Gamma(\point{0},\point{u})$ intersects $\segment{I}^\ast$ for all $\point{u}\in\segment{I}(n,L)$}.
\] If $\mathbb{T}\not\in\event{E}$, then for some $\point{u}\in\segment{I}(n,L)$ the geodesic $\Gamma(\point{0},\point{u})$ wanders more than $t^{1/2}L(\log\Delta^{-1}(L))^{1/2}$ in $\pm{\theta_0^t}$ directions when it is at a distance $\Delta^{-1}(L)$ from $\point{u}$ in direction $-{\theta_0}$, i.e., $\mathcal{W}\left(\point{u},\point{0},\Delta^{-1}(L),-{\theta_0}\right)\geq t^{1/2}L(\log\Delta^{-1}(L))^{1/2}$. Consider a point $\point{u}\in\segment{I}(n,L)$. We will apply Theorem~\ref{thm:endwandlog} with the variables \[ \tilde{k}:=\Delta^{-1}(L),\; \tilde{l}:=\pi^2_{\theta_0,\theta_0^t}(\point{u}),\; \tilde{n}:=\pi^1_{\theta_0,\theta_0^t}(\point{u})=n,\; \tilde{t}:=t^{1/2},\; \tilde{{\theta_0}}:=-{\theta_0},\; \tilde{\eta}:=1. \] The point $\point{u}$ is not necessarily a lattice point, but this issue has been addressed in Remark~\ref{remark:nonlattice2}. The conditions on $\tilde{{\theta_0}}$ hold because by assumption ${\theta_0}$ is of type I, and by Proposition~\ref{prop:nrflog} $h$ satisfies GAP with correction factor $\phi_1$ in all directions. So we need to verify the conditions $\tilde{k}\geq\tilde{k}_0$, $\tilde{n}\geq\tilde{n}_0$, $\tilde{t}\geq\tilde{t}_0$, $\tilde{t}\Delta(\tilde{k})(\log\tilde{k})^{1/2} \leq\tilde{k}\tilde{\delta}_1$, $|\tilde{l}|\leq\tilde{n}\tilde{\delta}_2$. Assuming $n_0$, $L_0$, $t_0$ are large enough we get $\tilde{k}\geq\tilde{k}_0$, $\tilde{n}\geq\tilde{n}_0$, $\tilde{t}\geq\tilde{t}_0$. From $\point{u}\in\segment{I}(n,L)$ we get $|\pi^2_{\theta_0,\theta_0^t}(\point{u})|\leq L$. Therefore, using $L\leq\Delta(n)$, \eqref{A2}, and assuming $n_0$ is large enough, we get \[ \frac{|\tilde{l}|}{\tilde{n}} \leq\frac{L}{n} \leq\frac{\Delta(n)}{n} \leq n_0^{-(1-\beta)/2} \leq\tilde{\delta}_2.
\] Using \eqref{eq:logupinc0}, \eqref{A2}, and assuming $L_0$ is large enough, we get \begin{align*} \frac{1}{\tilde{k}}\tilde{t}\Delta(\tilde{k})(\log\tilde{k})^{1/2} = \frac{t^{1/2}L\left(\log\Delta^{-1}(L)\right)^{1/2}}{\Delta^{-1}(L)} & \leq\C\frac{L^{3/2}(\log\Delta^{-1}(L))^{1/2}}{\Delta^{-1}(L)(\sigma(\Delta^{-1}(L)))^{1/2}(\log L)^{1/2}}\\ & \leq\C\frac{L^{1/2}}{(\Delta^{-1}(L))^{1/2}} \leq\C L^{-\frac{(1-\beta)}{2(1+\beta)}} \leq\C L_0^{-\frac{(1-\beta)}{2(1+\beta)}} \leq\tilde{\delta}_1. \end{align*} Therefore, all the conditions for applying Theorem~\ref{thm:endwandlog} are satisfied, and we get \[ \Prob\left(\mathcal{W}\left(\point{u},\point{0},\Delta^{-1}(L),-{\theta_0}\right)\geq t^{1/2}L(\log\Delta^{-1}(L))^{1/2}\right)\leq \C e^{-\C t\log L}. \] Therefore, taking a union bound over the lattice points corresponding to $\point{u}\in\segment{I}(n,L)$, we get \begin{equation} \Prob\left(\event{E}^c\right)\leq \C e^{-\C t\log L}. \label{eq:logupinc3} \end{equation} So in order to complete the proof we consider $\mathbb{T}\in\event{E}$. \begin{figure}[H] \centering\includegraphics[width=0.8\linewidth]{Fig4LogInc.pdf} \caption{Illustration for Theorem~\ref{thm:logupinc} under Case II: the distance of $\segment{I}(n,L)$ from $\point{0}$ in direction ${\theta_0}$ is $n$, the width of $\segment{I}(n,L)$ in direction ${\theta_0^t}$ is $L$, and the distance of $\segment{I}^\ast$ from $\point{0}$ is $n-\Delta^{-1}(L)$; if $\mathbb{T}\in\event{E}$, then the geodesics from $\point{0}$ to points in $\segment{I}(n,L)$ pass through $\segment{I}^\ast$.} \end{figure} Consider two points $\point{u}$ and $\point{v}$ on $\segment{I}(n,L)$. Since $\mathbb{T}\in\event{E}$, there exist points $\point{y}$ and $\point{z}$ on $\segment{I}^\ast$ such that the geodesic $\Gamma(\point{0},\point{u})$ passes through $\point{y}$ and the geodesic $\Gamma(\point{0},\point{v})$ passes through $\point{z}$.
Then \[ T(\point{0},\point{u})-T(\point{0},\point{v})\leq (T(\point{0},\point{z})+T(\point{z},\point{u}))-(T(\point{0},\point{z})+T(\point{z},\point{v}))=T(\point{z},\point{u})-T(\point{z},\point{v}). \] Similarly, we get the opposite inequality with $\point{y}$ in place of $\point{z}$. Therefore, \begin{equation} |T(\point{0},\point{u})-T(\point{0},\point{v})|\leq\max_{\point{x}\in\segment{I}^\ast}|T(\point{x},\point{u})-T(\point{x},\point{v})|. \label{eq:logupinc4} \end{equation} Fix an $\point{x}\in \segment{I}^\ast$. Then \begin{multline} |T(\point{x},\point{u})-T(\point{x},\point{v})| \leq |T(\point{x},\point{u})-h(\point{u}-\point{x})| + |T(\point{x},\point{v})-h(\point{v}-\point{x})|\\ + |h(\point{u}-\point{x})-g(\point{u}-\point{x})| + |h(\point{v}-\point{x})-g(\point{v}-\point{x})| + |g(\point{u}-\point{x})-g(\point{v}-\point{x})|. \label{eq:logupinc5} \end{multline} Since $\point{u}$, $\point{v}$ are in $\segment{I}(n,L)$, we have \begin{equation} |\pi^2_{\theta_0,\theta_0^t}(\point{u}-\point{v})|\leq L. \label{eq:logupinc6} \end{equation} From \eqref{eq:logupinc1} we get \begin{equation} \pi^1_{\theta_0,\theta_0^t}(\point{u}-\point{x})=\Delta^{-1}(L), \label{eq:logupinc7} \end{equation} and \begin{equation} |\pi^2_{\theta_0,\theta_0^t}(\point{u}-\point{x})|\leq\C t^{1/2}L(\log\Delta^{-1}(L))^{1/2}. \label{eq:logupinc8} \end{equation} Combining \eqref{eq:logupinc7}, \eqref{eq:logupinc8}, and \eqref{eq:logupinc0} we get \begin{equation} \ltwo{\point{u}-\point{x}}\leq\C\Delta^{-1}(L). \label{eq:logupinc9} \end{equation} Hence, by Proposition~\ref{prop:nrflog}, and using that $\log L$ and $\log\Delta^{-1}(L)$ are of the same order, we get \begin{equation} |h(\point{u}-\point{x})-g(\point{u}-\point{x})|\leq\C\sigma(\Delta^{-1}(L))\log L. \label{eq:logupinc10} \end{equation} Similarly, \eqref{eq:logupinc7}-\eqref{eq:logupinc10} hold with $\point{u}$ replaced by $\point{v}$.
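For clarity, here is a sketch of how \eqref{eq:logupinc9} is obtained from \eqref{eq:logupinc7}, \eqref{eq:logupinc8}, and \eqref{eq:logupinc0}; it is the same computation used earlier in the proof to verify $\tilde{\delta}_1$.

```latex
% Sketch: by \eqref{eq:logupinc0}, t^{1/2} \leq \C L^{1/2}(\sigma(\Delta^{-1}(L))\log L)^{-1/2},
% so the transverse component in \eqref{eq:logupinc8} satisfies
|\pi^2_{\theta_0,\theta_0^t}(\point{u}-\point{x})|
\leq\C\frac{L^{3/2}(\log\Delta^{-1}(L))^{1/2}}{(\sigma(\Delta^{-1}(L)))^{1/2}(\log L)^{1/2}}
\leq\C\Delta^{-1}(L),
```

where the last step uses \eqref{A2} as before; combined with $\pi^1_{\theta_0,\theta_0^t}(\point{u}-\point{x})=\Delta^{-1}(L)$ from \eqref{eq:logupinc7}, this gives \eqref{eq:logupinc9}.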
By Lemma~\ref{lem:auxgeom2} and using \eqref{eq:logupinc6}, \eqref{eq:logupinc7}, \eqref{eq:logupinc8}, we get \begin{equation} |g(\point{u}-\point{x})-g(\point{v}-\point{x})| \leq \C t^{1/2} \frac{L^2}{\Delta^{-1}(L)}\log\Delta^{-1}(L) \leq \C t^{1/2} \sigma(\Delta^{-1}(L))\log L. \label{eq:logupinc11} \end{equation} Using \eqref{eq:logupinc10} and the same for $\point{u}$ replaced with $\point{v}$, \eqref{eq:logupinc11}, and \eqref{eq:logupinc5}, we get \begin{multline*} \Prob\left(|T(\point{x},\point{u})-T(\point{x},\point{v})|\geq t\sigma(\Delta^{-1}(L))\log L\right) \leq\Prob\left(|T(\point{x},\point{u})-h(\point{u}-\point{x})|\geq\Cl{321} t\sigma(\Delta^{-1}(L))\log L\right)\\ + \Prob\left(|T(\point{x},\point{v})-h(\point{v}-\point{x})|\geq\Cr{321} t\sigma(\Delta^{-1}(L))\log L\right). \end{multline*} Therefore, using \eqref{eq:logupinc9} and the same for $\point{u}$ replaced by $\point{v}$, and using \eqref{A1} we get \begin{equation} \Prob\left(|T(\point{x},\point{u})-T(\point{x},\point{v})|\geq t\sigma(\Delta^{-1}(L))\log L\right)\leq \C e^{-\C t\log L}. \label{eq:logupinc12} \end{equation} The number of choices of lattice points corresponding to $\point{x}$, $\point{u}$, $\point{v}$ is at most $\C t L^3 (\log L)^{1/2}$. Using \eqref{eq:logupinc4}, \eqref{eq:logupinc12}, and a union bound, we get \[ \Prob\left(\dev{D}(n,L)\geq t\sigma(\Delta^{-1}(L))\log L,\;\mathbb{T}\in\event{E}\right) \leq \C e^{-\C t\log L}. \] This concludes the proof because we already found in \eqref{eq:logupinc3} that $\event{E}^c$ has appropriately small probability. \end{proof} We shall need the following variation of the last result. In the last result we took a direction ${\theta_0}$ of both type I and type II and considered the transverse increment over a segment with one endpoint having direction ${\theta_0}$. In the next result, we look at the transverse increment over a segment whose endpoints have directions in a neighborhood of ${\theta_0}$ but not necessarily equal to ${\theta_0}$.
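Heuristically, the extra linear term $C_3L|d|/n$ in the next statement is forced by the limit shape itself: by Lemma~\ref{lem:auxgeom2} (applied with $k=n$), the deterministic increment of $g$ across the shifted segment is already of this order. A sketch:

```latex
% Sketch: Lemma~\ref{lem:auxgeom2} with k = n gives
|g(n{\Unit{\thn}}+(d+L){\Unit{\thnt}})-g(n{\Unit{\thn}}+d{\Unit{\thnt}})|
\leq C\left(\frac{L^2}{n}+\frac{|d|L}{n}\right),
```

and the term $|d|L/n$ is in general not dominated by the fluctuation term $t\sigma(\Delta^{-1}(L))\log L$, which is why it appears separately in the bound.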
\begin{cor} Let ${\theta_0}$ be a direction of both type I and II. For $n>0$, $L>0$, $d$, let \[ \segment{I}(n,L,d):=\left\{\point{x}\in\ensuremath{\mathbb{R}}^2:\pi^1_{\theta_0,\theta_0^t}(\point{x})=n,\; d\leq\pi^2_{\theta_0,\theta_0^t}(\point{x})\leq d+L\right\}, \] and \[ \dev{D}(n,L,d):=\max\left\{|T(\point{0},\point{x})-T(\point{0},\point{y})|:\point{x},\point{y}\in\segment{I}(n,L,d)\right\}. \] Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist positive constants $\delta_1$, $\delta_2$, $C_1$, $C_2$, $C_3$, $L_0$, $n_0$, $t_0$ such that for $L\geq L_0$, $n\geq n_0$, $t\geq t_0$, $|d|\leq n\delta_1$, $L\leq\delta_2\Delta(n)$, we have \[ \Prob\left(\dev{D}(n,L,d) \geq C_3 L\frac{|d|}{n} + t\sigma(\Delta^{-1}(L))\log L\right) \leq C_1 e^{-C_2 t\log L}. \] \label{cor:logupinc} \end{cor} \resetconstant \noindent\begin{proof}[\emph{\textbf{Proof:}}] Fix $\delta_1>0$, $\delta_2>0$, to be assumed appropriately small whenever required. Fix $n_0>0$, $L_0>0$, $t_0>0$, to be assumed appropriately large whenever required. Consider $n$, $L$, $t$, $d$ such that $L\geq L_0$, $n\geq n_0$, $t\geq t_0$, $|d|\leq n\delta_1$, $L\leq\delta_2\Delta(n)$. Let $\point{u}:=n{\Unit{\thn}}+d{\Unit{\thnt}}$, $\point{v}:=n{\Unit{\thn}}+(d+L){\Unit{\thnt}}$. Let ${\theta}_1$, ${\theta}_2$ be the directions of $\point{u}$ and $\point{v}$ respectively. Let $\point{w}$ be the projection of $\point{u}$, in direction ${\theta}_1^t$, on the line joining $\point{0}$ and $\point{v}$, which exists provided $\delta_1$ is small enough. Let $\segment{I}^\ast$ be the segment joining $\point{u}$ and $\point{w}$. \begin{figure}[H] \centering \includegraphics[width=0.6\linewidth]{Fig5LogIncCor.pdf} \caption{Illustration for Corollary~\ref{cor:logupinc}. The segment joining $\point{u}$ and $\point{v}$ is $\segment{I}(n,L,d)$.
The segment joining $\point{u}$ and $\point{w}$ is $\segment{I}^\ast$.} \end{figure}% Let \[ \dev{D}^\ast:=\max\left\{|T(\point{0},\point{x})-T(\point{0},\point{y})|:\point{x},\point{y}\in \segment{I}^\ast\right\}. \] Let $|\segment{I}^\ast|$ be the length of $\segment{I}^\ast$. To bound $\dev{D}^\ast$ we will use Theorem~\ref{thm:logupinc} with the following variables: \[ \tilde{{\theta_0}}={\theta}_1,\quad\tilde{n}:=\ltwo{\point{u}},\quad \tilde{L}:=|\segment{I}^\ast|,\quad\tilde{t}=\frac{t}{2},\quad\tilde{\eta}=1. \] Since ${\theta_0}$ is of both type I and type II, assuming $\delta_1$ is small enough, we get that all possible values of ${\theta}_1$ are uniformly of both type I and type II, i.e., they satisfy the curvature conditions with the same constants. Thus the condition on ${\theta}_1$ for applying Theorem~\ref{thm:logupinc} holds. Assuming $\delta_1$ and $\delta_2$ are small enough, we get $\C L\leq|\segment{I}^\ast|\leq \Cl{logupinc1} L$, $\C n\leq\tilde{n}\leq\C n$. Therefore, $\tilde{L}\geq\tilde{L_0}$, $\tilde{n}\geq\tilde{n_0}$ hold assuming $L_0$ and $n_0$ are large enough. Also from $L\leq\delta_2\Delta(n)$ we get $\tilde{L}\leq\Delta(\tilde{n})$. Hence all the conditions for applying Theorem~\ref{thm:logupinc} hold and we get \begin{equation} \Prob\left(\dev{D}^\ast\geq\frac{t}{2}\sigma(\Delta^{-1}(L))\log L\right)\leq\C e^{-\C t \log L}. \label{eq:corlogupinc1} \end{equation} Now let us consider the difference $|\dev{D}^\ast-\dev{D}(n,L,d)|$. Considering the triangle with vertices $\point{u}$, $\point{v}$, $\point{w}$, we get \begin{equation} \ltwo{\point{v}-\point{w}}=L\frac{|\sin({\theta}_1^t-{\theta_0^t})|}{|\sin({\theta}_1^t-{\theta}_2)|}.
\label{eq:corlogupinc2} \end{equation} Assuming $\delta_1$ and $\delta_2$ are small enough, we get \begin{equation} |\sin({\theta}_1^t-{\theta}_2)|\geq \C|\sin({\theta_0^t}-{\theta_0})|, \label{eq:corlogupinc3} \end{equation} and \begin{multline} |\sin({\theta}_1^t-{\theta_0^t})| \leq\C|{\theta}_1^t-{\theta_0^t}| \leq\C|{\theta}_1-{\theta_0}|\\ \leq\C|\sin({\theta}_1-{\theta_0})| \leq\C\frac{|d|}{n}|\sin({\theta_0^t}-{\theta_0})|, \label{eq:corlogupinc4} \end{multline} where the second inequality holds by Remark~\ref{Remark:linear}. Combining \eqref{eq:corlogupinc2}-\eqref{eq:corlogupinc4} we get \[ \ltwo{\point{v}-\point{w}}\leq\Cl{curvup1}L\frac{|d|}{n}. \] Therefore, if $\point{x}$ is a point on $\segment{I}(n,L,d)$ and $\point{y}$ is its projection on $\segment{I}^\ast$ in direction ${\theta}_2$, then $\ltwo{\point{x}-\point{y}}\leq\C L|d|/n$. Since $h$ is subadditive and therefore sublinear, we get $h(\point{x}-\point{y})\leq\Cl{317} L|d|/n$. Assuming $\delta_1<1$, taking $L_0$ large enough, and using \eqref{A2}, we get $|d|/n\leq\delta_1<1\leq\Delta^{-1}(L)/L$, so that $L|d|/n\leq\Delta^{-1}(L)$. Therefore, using \eqref{A1} we get for all $t^\prime>0$ \[ \Prob\left( T(\point{x},\point{y})\geq\Cr{317}L\frac{|d|}{n}+t^\prime\sigma(\Delta^{-1}(L))\right)\leq\C e^{-\C t^\prime}. \] Using $|\segment{I}^\ast|\leq\Cr{logupinc1} L$, $|\segment{I}(n,L,d)|=L$, and using a union bound, we get \[ \Prob\left(|\dev{D}^\ast-\dev{D}(n,L,d)|\geq\Cr{317} L\frac{|d|}{n}+t^\prime\sigma(\Delta^{-1}(L))\right) \leq \C L^2 \exp\left( -\C t^\prime\right). \] Therefore, taking $t^\prime=(t/2)\log L$ and assuming $t_0$ and $L_0$ are large enough, we get \begin{equation} \Prob\left(|\dev{D}^\ast-\dev{D}(n,L,d)|\geq\Cr{317} L\frac{|d|}{n}+\frac{t}{2}\sigma(\Delta^{-1}(L))\log L\right) \leq \C \exp\left( -\C t\log L\right). \label{eq:corlogupinc5} \end{equation} Combining \eqref{eq:corlogupinc1} and \eqref{eq:corlogupinc5} completes the proof.
\end{proof} \section{Refined Upper Bound on Nonrandom Fluctuations}\label{sec:ghloglog} In the previous section, we proved a preliminary upper bound on the fluctuations of transverse increments. To prove the more refined bound of Theorem~\ref{thm:loglogupinc}, we need to improve the correction factor in Proposition~\ref{prop:nrflog} from $\log$ to a small power of $\log$. Related results are known in the literature: for example, \cite{Tessera2018} shows that $(\log x)^{1/2}$ is a valid correction factor for FPP on Cayley graphs of integer lattices, and \cite{DamronWang} show that, in a spherically symmetric model of FPP, any iterate of $\log$ is a valid correction factor. In Section~\ref{sec:wanderinglog} we briefly discussed the method from \cite{Ken97}; we will now discuss it in more detail. \begin{notn} Let $\mathcal{S}_0$ be the set of directions where the boundary of $\mathcal{B}$ is differentiable. Consider $\point{x}$ with direction in $\mathcal{S}_0$. Let $H_{\point{x}}$ be the tangent to $\partial(g(\point{x})\mathcal{B})$ at $\point{x}$. Let $H_{\point{x}}^0$ be the line through $\point{0}$ parallel to $H_{\point{x}}$. Let $g_{\point{x}}$ be the unique linear functional on $\ensuremath{\mathbb{R}}^2$ satisfying $g_{\point{x}}(\point{y})=0$ for all $\point{y}\in H_{\point{x}}^0$, and $g_{\point{x}}(\point{x})=g(\point{x})$. Recall $\Phi$ from Notation~\ref{notn:phi}. Define for $\phi\in\Phi$, $\nu\geq 0$, $C>0$, $K>0$, \[ Q_{\point{x}}(\nu,\phi,C,K):=\left\{\point{y}\in\ensuremath{\mathbb{Z}}^2:\ltwo{\point{y}}\leq K\ltwo{\point{x}}, g_{\point{x}}(\point{y})\leq g(\point{x}), h(\point{y})\leq g_{\point{x}}(\point{y}) + C\ltwo{\point{x}}^\nu\phi\left(\ltwo{\point{x}}\right)\right\}.
\] \label{Notation:CHAP}\end{notn} \begin{defn} We say that $h$ satisfies the \textit{convex hull approximation property} (CHAP) with exponent $\nu\geq 0$ and correction factor $\phi\in\Phi$ in a set of directions $\mathcal{S}\subset\mathcal{S}_0$, if there exist constants $M>0$, $C>0$, $K>0$, $a>1$ such that $\point{x}/\alpha\in \textsf{Co}(Q_{\point{x}}(\nu,\phi,C,K))$ for some $\alpha\in[1,a]$, for all $\point{x}\in\ensuremath{\mathbb{Q}}^2$ with $\ltwo{\point{x}}\geq M$ and direction of $\point{x}$ in $\mathcal{S}$, where $\textsf{Co}$ denotes the convex hull. When we want to specify the constants, we say $h$ satisfies $\text{CHAP}(\nu,\phi,M,C,K,a)$ in sector $\mathcal{S}$.\end{defn} In our terminology, the procedure in \cite{Ken97} is as follows. There the objective is to prove GAP with exponent $\alpha$ and correction factor $\phi_1$. To achieve this, first, it is shown that CHAP with exponent $\alpha$ and correction factor $\phi_1$ holds. Since $h$ is sublinear, GAP with exponent $1$ and correction factor $\phi_1$ holds trivially. Then the exponent of GAP is reduced from $1$ to $\alpha$ iteratively using CHAP. Here we are not concerned with the exponent $\alpha$. We want to change the correction factor of GAP from $\phi_1$ to $\phi_\eta$ for small $\eta>0$ while keeping the exponent $\alpha$ fixed. We will prove this in two steps. In the first step, we will prove that CHAP holds with exponent $\alpha$ and correction factor $\widehat{\phi}$. From Proposition~\ref{prop:nrflog} we get $h$ satisfies GAP with exponent $\alpha$ and correction factor $\phi_1$. Using this, we will reduce the correction factor of GAP from $\phi_1$ to $\phi_\eta$, which will be the second step. The result about CHAP with exponent $\alpha$ and correction factor $\widehat{\phi}$ is the following. \begin{thm} Let ${\theta_0}$ be a direction of both type I and type II.
Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exists $\delta>0$ such that CHAP holds for $h$ with exponent $\alpha$ and correction factor $\widehat{\phi}$ in the sector $({\theta_0}-\delta,{\theta_0}+\delta)$. \label{thm:chaploglog} \end{thm} After carrying out the second step we get the refined upper bound on the nonrandom fluctuations stated below. \begin{thm} Let ${\theta_0}$ be a direction of both type I and type II. Fix $\eta\in(0,1]$. Then, under Assumptions~\ref{asm:sigmaexp} and \ref{asm:sigmareg}, there exist constants $C>0$, $M>0$, $\delta>0$ such that for all $\ltwo{\point{x}}\geq M$ with direction of $\point{x}$ in $({\theta_0}-\delta,{\theta_0}+\delta)$, we have \[ h(\point{x})\leq g(\point{x})+C\sigma(\ltwo{\point{x}})(\log\ltwo{\point{x}})^\eta, \] i.e., $\text{GAP}(\alpha,\phi_\eta,C,M)$ holds in the sector $({\theta_0}-\delta,{\theta_0}+\delta)$. \label{thm:nrf2} \end{thm} In the first subsection below we prove Theorem~\ref{thm:chaploglog}, then in the second subsection we prove Theorem~\ref{thm:nrf2} using Theorem~\ref{thm:chaploglog}. \subsection{Proof of Theorem~\ref{thm:chaploglog}}\resetconstant First, we fix $\delta>0$ which will be used throughout the proof of Theorem~\ref{thm:chaploglog}. Since ${\theta_0}$ is of both type I and II, we can choose $\delta>0$ such that $\partial\mathcal{B}$ is differentiable in $({\theta_0}-2\delta,{\theta_0}+2\delta)$ and there exists $\delta_1>0$ such that for all ${\theta}\in({\theta_0}-2\delta,{\theta_0}+2\delta)$ and $|\delta_2|\leq\delta_1$ \begin{equation} \C\delta_2^2\leq g({\Unit{\th}}+\delta_2{\Unit{\tht}})-g({\Unit{\th}}) \leq\C\delta_2^2. \label{eq:deltachoice} \end{equation} So all ${\theta}\in({\theta_0}-\delta,{\theta_0}+\delta)$ are of both type I and type II with the same constants. This allows us to use results which hold in type I or type II directions with the same constants for all ${\theta}\in({\theta_0}-\delta,{\theta_0}+\delta)$.
We extract a sufficient condition from \cite{Ken97} for Theorem~\ref{thm:chaploglog} to hold. We state the condition as Proposition~\ref{prop:sufficient}. Since it is essentially proved in \cite{Ken97}, we will not prove it here. To state the condition we need the concept of skeletons of paths defined below. \paragraph{\textbf{Construction of fine skeletons:}} For $\point{x}\in\ensuremath{\mathbb{R}}^2$, $n>0$, $\lambda>0$, $K>0$, the \textit{fine $Q_{\point{x}}(\alpha,\widehat{\phi},\lambda,K)$-skeleton} of a self-avoiding path $\gamma$ from $\point{0}$ to $n\point{x}$ is the sequence of marked points $\point{v}_0,\dots,\point{v}_m$ on $\gamma$ constructed as follows. Let $\point{v}_0:=\point{0}$, and given $\point{v}_i$, let $\point{v}^\prime_{i+1}$ be the first point (if any) in $\gamma$ such that $\point{v}^\prime_{i+1}-\point{v}_i\not\in Q_{\point{x}}(\alpha,\widehat{\phi},\lambda,K)$; then let $\point{v}_{i+1}$ be the last lattice point in $\gamma$ before $\point{v}^\prime_{i+1}$ if $\point{v}^\prime_{i+1}$ exists; otherwise let $\point{v}_{i+1}=\lfloor n\point{x}\rfloor$ and end the construction. \begin{prop} Consider an infinite sequence of i.i.d.\ copies of the passage-time configuration $(\hat{T}^i)_{i=0}^\infty$ on the lattice.
If for some positive constants $\lambda_1$, $\lambda_2$, $\lambda_3$, we have \begin{multline} \Prob\left(\sum_{i=0}^{m-1} \left[ h(\point{v}_{i}-\point{v}_{i+1})-\hat{T}^i(\point{v}_i,\point{v}_{i+1})\right] \geq\frac{\lambda_1}{16}m\sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}}\right.\\ \left.\text{for some $m\geq 1$ and some $Q_{\point{x}}(\alpha,\widehat{\phi},\lambda_1,5)$-skeleton $(\point{v}_j)_{j=0}^m$ of a path}\vphantom{\sum_{i=0}^{m-1}}\right) \leq e^{-\lambda_2\log\log\ltwo{\point{x}}} \label{eq:nrf23} \end{multline} for all $\point{x}$ with $\ltwo{\point{x}}\geq\lambda_3$ and direction of $\point{x}\in ({\theta_0}-\delta,{\theta_0}+\delta)$, then $h$ satisfies $\text{CHAP}(\alpha,\widehat{\phi},M,\lambda_1,4,2)$ for some $M$ in the sector $({\theta_0}-\delta,{\theta_0}+\delta)$. \label{prop:sufficient}\end{prop} In order to verify \eqref{eq:nrf23} we need the concept of `coarse skeletons.' \paragraph{\textbf{Construction of coarse skeletons:}} Consider $\point{x}\in\ensuremath{\mathbb{R}}^2$ with direction ${\theta}\in({\theta_0}-\delta,{\theta_0}+\delta)$. Let \begin{align*} & \ell_{\point{x}}:=\ltwo{\point{x}}(\log\ltwo{\point{x}})^{-2/\alpha},\\ & B_{ij}:=\left\{\point{y}\in\ensuremath{\mathbb{R}}^2:\xprojth(\point{y})\in[i\ell_{\point{x}},(i+1)\ell_{\point{x}}), \yprojth(\point{y})\in[j\Delta(\ell_{\point{x}}),(j+1)\Delta(\ell_{\point{x}}))\right\},\numberthis\label{eq:coarsedef} \end{align*} where $i,j\in\ensuremath{\mathbb{Z}}$ and $\alpha$ is defined in Assumption~\ref{asm:sigmareg}. So $B_{ij}$ is a parallelogram with side lengths $\ell_{\point{x}}$ and $\Delta(\ell_{\point{x}})$, and these parallelograms cover the whole plane. Given $\point{v}\in B_{ij}$, let \[ F_{\point{x}}(\point{v}):=\lfloor i\ell_{\point{x}}{\Unit{\th}}+j\Delta(\ell_{\point{x}}){\Unit{\tht}}\rfloor,\quad G_{\point{x}}(\point{v}):=\lfloor (i+1)\ell_{\point{x}}{\Unit{\th}}+j\Delta(\ell_{\point{x}}){\Unit{\tht}}\rfloor.
\] So, $F_{\point{x}}(\point{v})$ is the lattice point corresponding to the down-left corner of the parallelogram $B_{ij}$ containing $\point{v}$, and $G_{\point{x}}(\point{v})$ is the one corresponding to the down-right corner; see Figure~\ref{Fig:6}. Suppose $(\point{v}_i)_{i=0}^m$ is a fine $Q_{\point{x}}(\alpha,\widehat{\phi},\lambda,5)$-skeleton of some path for some $\lambda>0$. Then its \textit{coarse skeleton} $(\point{w}_j)_{j=0}^{2m}$ is defined as follows. For $0\leq i\leq m-1$, let $\point{w}_{2i}:=F_{\point{x}}(\point{v}_i)$ and $\point{w}_{2i+1}:=G_{\point{x}}(\point{v}_{i+1})$. \begin{figure}[H] \centering\includegraphics[width=0.55\linewidth]{Fig6Box.pdf} \caption{Construction of coarse skeletons: for every pair of consecutive points $\point{v}$, $\point{v}^\prime$ in a fine skeleton of a path, we have $F_{\point{x}}(\point{v})$ and $G_{\point{x}}(\point{v}^\prime)$ as consecutive points in the coarse skeleton of the path.}\label{Fig:6}\end{figure} \begin{remark} If $(\point{v}_i)_{i=0}^m$ is a fine $Q_{\point{x}}(\alpha,\widehat{\phi},\lambda,5)$-skeleton of some path and $(\point{w}_j)_{j=0}^{2m}$ the corresponding coarse skeleton, then $\ltwo{\point{v}_i-\point{v}_{i+1}}\leq 5\ltwo{\point{x}}$ and $\ltwo{\point{w}_{2i-1}-\point{w}_{2i}}\leq 6\ltwo{\point{x}}$ for large enough $\ltwo{\point{x}}$.\label{Remark:coarse}\end{remark} We will state two propositions which, when combined, establish \eqref{eq:nrf23}. We need to define a few constants first. Let $\Cl{1}$, $\Cl{2}$, $\Cl{0}$ be positive constants such that for all $\point{u},\point{v}$ with $\ltwo{\point{u}-\point{v}}\leq 6\ltwo{\point{x}}$ \begin{equation} \Prob\left(|T(\point{u},\point{v})-h(\point{u}-\point{v})|\geq t\sigma(\ltwo{\point{x}})\right)\leq\Cr{1} e^{-\Cr{2} t}, \label{eq:defineczero} \end{equation} and \begin{equation} \Cr{0}:=256\alpha^{-1}(1+\Cr{1})/\Cr{2}. \label{eq:defineczero2} \end{equation} For any $\point{x}$ let us use the shorthand notation \[ Q_{\point{x}}:=Q_{\point{x}}(\alpha,\widehat{\phi},\Cr{0},5).
\] \begin{prop} Under the assumptions of Theorem~\ref{thm:chaploglog}, there exist positive constants $\Cl{3}$, $\Cl{4}$ such that for $\point{x}$ with direction in $({\theta_0}-\delta,{\theta_0}+\delta)$ and $\ltwo{\point{x}}\geq\Cr{3}$, and for all $m\geq 1$, we have \begin{align*} & \Prob\left(\sum_{i=0}^{m-1}\left[ h(\point{w}_{2i}-\point{w}_{2i+1})-\hat{T}^i(\point{w}_{2i},\point{w}_{2i+1})\right]>\frac{\Cr{0}}{32}m\sigma\left(\ltwo{\point{x}}\right)\log\log\ltwo{\point{x}}\right.\\ &\left.\quad\text{for some coarse $Q_{\point{x}}$-skeleton $(\point{w}_j)_{j=0}^{2m}$ of a path}\vphantom{\sum_{i=0}^{m-1}}\right)\\ \leq & \exp\left(-m \Cr{4}\log\log\ltwo{\point{x}}\right). \end{align*} \label{prop:coarseentropy}\end{prop} \begin{prop} Under the assumptions of Theorem~\ref{thm:chaploglog}, there exist positive constants $\Cl{5}$, $\Cl{6}$ such that for $\point{x}$ with direction in $({\theta_0}-\delta,{\theta_0}+\delta)$ and $\ltwo{\point{x}}\geq\Cr{5}$, and for all $m\geq 1$, we have \begin{align*} & \Prob\left(\sum_{i=0}^{m-1}\left[ h(\point{v}_i-\point{v}_{i+1})-\hat{T}^i(\point{v}_i,\point{v}_{i+1})-h(\point{w}_{2i}-\point{w}_{2i+1})+\hat{T}^i(\point{w}_{2i},\point{w}_{2i+1})\right]\geq\frac{\Cr{0}}{32} m\sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}}\right.\\ &\left.\quad\text{for some fine $Q_{\point{x}}$-skeleton $(\point{v}_j)_{j=0}^m$ and corresponding coarse skeleton $(\point{w}_j)_{j=0}^{2m}$}\vphantom{\sum_{i=0}^{m-1}}\right)\\ \leq & \exp\left(-m \Cr{6}\log\log\ltwo{\point{x}}\right). \end{align*} \label{prop:finecoarsediffsum}\end{prop} So \eqref{eq:nrf23} holds by Propositions~\ref{prop:finecoarsediffsum} and \ref{prop:coarseentropy}. Hence, to complete the proof of Theorem~\ref{thm:chaploglog}, we only need to prove these two propositions. We need the following lemma first.
\begin{lemma} Under the assumptions of Theorem~\ref{thm:chaploglog}, there exist positive constants $\Cl{61}$, $\Cl{7}$ such that for $\point{x}$ with direction in $({\theta_0}-\delta,{\theta_0}+\delta)$ and $\ltwo{\point{x}}\geq\Cr{61}$ we have the following. \begin{enumerate}[(i)] \item For all $\point{y}\in Q_{\point{x}}$, $|\yprojth(\point{y})|\leq \Cr{7}\Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}$, where ${\theta}$ is the direction of $\point{x}$. \item The number of coarse $Q_{\point{x}}$-skeletons of length $2m+1$ is at most $(\log\ltwo{\point{x}})^{4m/\alpha}$, where $\alpha$ is defined in \eqref{A2}. \end{enumerate} \label{lem:width} \end{lemma} \noindent\begin{proof}[\emph{\textbf{Proof:}}] Fix $\point{x}$ with direction ${\theta}$ in $({\theta_0}-\delta,{\theta_0}+\delta)$. We will assume that $\ltwo{\point{x}}$ is large enough whenever required. \begin{enumerate}[(i)] \item Consider $\point{y}\in Q_{\point{x}}$. Then \[ g(\point{y})\leq h(\point{y})\leq g_{\point{x}}(\point{y})+\Cr{0}\sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}}, \] so \begin{equation} g(\point{y})-g_{\point{x}}(\point{y})\leq\Cr{0}\sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}}. \label{eq:l551} \end{equation} Consider three cases. \textbf{Case I:} Suppose $\xprojth(\point{y})>0$ and $|\yprojth(\point{y})|\leq\delta_1\xprojth(\point{y})$, where $\delta_1$ is defined in \eqref{eq:deltachoice}. From \eqref{eq:deltachoice} we get \begin{equation} g(\point{y})-g_{\point{x}}(\point{y}) = g(\xprojth(\point{y}){\Unit{\th}}+\yprojth(\point{y}){\Unit{\tht}})-g(\xprojth(\point{y}){\Unit{\th}}) \geq \C\frac{\yprojth(\point{y})^2}{\xprojth(\point{y})}. \label{eq:l552} \end{equation} Since $\point{y}\in Q_{\point{x}}$, we have \[ \xprojth(\point{y})\leq\C\ltwo{\point{y}}\leq\C\ltwo{\point{x}}. \] This with \eqref{eq:l552} and \eqref{eq:l551} implies \[ |\yprojth(\point{y})|\leq\C\Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}.
\] \noindent\textbf{Case II:} Now suppose $\xprojth(\point{y})>0$ and $|\yprojth(\point{y})|\geq\delta_1\xprojth(\point{y})$. Let us consider $\yprojth(\point{y})>0$; the case $\yprojth(\point{y})<0$ is similar. Using the convexity of $g$ and \eqref{eq:deltachoice} we get \begin{align*} & g(\point{y})-g_{\point{x}}(\point{y})\\ = & g(\xprojth(\point{y}){\Unit{\th}}+\yprojth(\point{y}){\Unit{\tht}})-g(\xprojth(\point{y}){\Unit{\th}})\\ \geq & \frac{g(\xprojth(\point{y}){\Unit{\th}}+\delta_1\xprojth(\point{y}){\Unit{\tht}})-g(\xprojth(\point{y}){\Unit{\th}})}{\delta_1\xprojth(\point{y})/\yprojth(\point{y})}\\ = & \yprojth(\point{y})\delta_1^{-1}(g({\Unit{\th}}+\delta_1{\Unit{\tht}})-g({\Unit{\th}}))\\ \geq & \C\yprojth(\point{y}). \end{align*} Hence using \eqref{eq:l551} and \eqref{A2} we get \[ \yprojth(\point{y}) \leq \C\sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}} \leq \C\Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}. \] \noindent\textbf{Case III:} Suppose $\xprojth(\point{y})<0$. Then $g_{\point{x}}(\point{y})<0$. Using $\point{y}\in Q_{\point{x}}$, \eqref{eq:l551}, and \eqref{A2} we get \[ \yprojth(\point{y}) \leq \C \ltwo{\point{y}} \leq \C g(\point{y}) \leq \C \sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}} \leq \C \Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}. \] \item Given any $\point{v}$ let $Q_{\point{x}}(\point{v})$ denote the translate of $Q_{\point{x}}$ by $\point{v}$. Suppose part of a coarse skeleton is given as $(\point{w}_0,\dots,\point{w}_{2i})$. We will find an upper bound on the number of possibilities for $(\point{w}_{2i+1},\point{w}_{2i+2})$. Consider a fine skeleton $(\point{v}_0,\dots,\point{v}_i)$ corresponding to $(\point{w}_0,\dots,\point{w}_{2i})$. Since $(\point{w}_{2i-1},\point{w}_{2i})$ is fixed, all choices of $\point{v}_i$ lie in the same $B_{i_0j_0}$. Consider the union of $Q_{\point{x}}(\point{v})$ over all $\point{v}\in B_{i_0 j_0}$.
Each $Q_{\point{x}}(\point{v})$ is contained in a parallelogram of length $\C\ltwo{\point{x}}$ in the ${\theta}$ direction and length $\C\Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}$ in the ${\theta^t}$ direction. Hence the union of all such $Q_{\point{x}}(\point{v})$ as $\point{v}$ varies in a parallelogram $B_{i_0 j_0}$ is contained in a parallelogram of length $\C\ltwo{\point{x}}$ in the ${\theta}$ direction and length $\C\Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}$ in the ${\theta^t}$ direction (using (i)). From \eqref{eq:coarsedef} we have \[ \frac{\Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}}{\Delta(\ell_{\point{x}})} \leq\C(\log\ltwo{\point{x}})^{(1+\beta)/\alpha}(\log\log\ltwo{\point{x}})^{1/2}, \] and, \[ \frac{\ltwo{\point{x}}}{\ell_{\point{x}}}=(\log\ltwo{\point{x}})^{2/\alpha}. \] Using $\beta<1$, the number of parallelograms $B_{ij}$ that cover the aforementioned union is at most $(\log\ltwo{\point{x}})^{4/\alpha}$. Hence the number of choices of $(\point{w}_{2i+1},\point{w}_{2i+2})$ is at most $(\log\ltwo{\point{x}})^{4/\alpha}$. Iterating this from $i=0$ to $m$, we get the result. \end{enumerate} \end{proof} \subsubsection{Proof of Proposition~\ref{prop:coarseentropy}} Fix $\point{x}$ with direction $\theta\in({\theta_0}-\delta,{\theta_0}+\delta)$. We will assume $\ltwo{\point{x}}$ is large whenever required. Fix a coarse $Q_{\point{x}}$-skeleton $(\point{w}_j)_{j=0}^{2m}$ for some $m\geq 1$. By Remark~\ref{Remark:coarse} and \eqref{eq:defineczero} we get \[ \Prob\left( h(\point{w}_{2i}-\point{w}_{2i+1})-\hat{T}^i(\point{w}_{2i},\point{w}_{2i+1})\geq t\sigma(\ltwo{\point{x}})\right) \leq \Cr{1} e^{-\Cr{2}t}. \] For $\Cl{sp}:=\Cr{2}/(1+\Cr{1})$ we get \[ \Exp\exp\left(\Cr{sp}\left( h(\point{w}_{2i}-\point{w}_{2i+1})-\hat{T}^i(\point{w}_{2i},\point{w}_{2i+1})\right)^+/\sigma\left(\ltwo{\point{x}}\right)\Rp\leq 2.
\] Using the independence of the $\hat{T}^i$ we get \[ \Exp\exp\left(\Cr{sp}\sum_{i=0}^{m-1} \left( h(\point{w}_{2i}-\point{w}_{2i+1})-\hat{T}^i(\point{w}_{2i},\point{w}_{2i+1})\right)^+/\sigma\left(\ltwo{\point{x}}\right)\Rp\leq 2^m. \] Hence, for all $t>0$ we have \[ \Prob\left(\sum_{i=0}^{m-1}\left[ h(\point{w}_{2i}-\point{w}_{2i+1})-\hat{T}^i(\point{w}_{2i},\point{w}_{2i+1})\right]>t m\sigma\left(\ltwo{\point{x}}\right)\log\log\ltwo{\point{x}}\right) \leq 2^m e^{-\Cr{sp} m t\log\log\ltwo{\point{x}}}. \] By Lemma~\ref{lem:width}, the number of coarse skeletons of length $2m+1$ is at most $(\log\ltwo{\point{x}})^{4m/\alpha}$. So by \eqref{eq:defineczero2} \begin{align*} & \Prob\left(\sum_{i=0}^{m-1}\left[ h(\point{w}_{2i}-\point{w}_{2i+1})-\hat{T}^i(\point{w}_{2i},\point{w}_{2i+1})\right]>\frac{\Cr{0}}{32}m\sigma\left(\ltwo{\point{x}}\right)\log\log\ltwo{\point{x}}\right.\\ & \left.\quad\text{for some coarse $Q_{\point{x}}$-skeleton $(\point{w}_j)_{j=0}^{2m}$ of a path} \vphantom{\sum_{i=0}^{m-1}}\right)\\ \leq & 2^m(\log\ltwo{\point{x}})^{4m/\alpha}\exp\left(-\Cr{sp}\frac{\Cr{0}}{32}m\log\log\ltwo{\point{x}}\right)\\ \leq & \exp\left(-m\C\log\log\ltwo{\point{x}}\right). \end{align*} This completes the proof of Proposition~\ref{prop:coarseentropy}. \subsubsection{Proof of Proposition~\ref{prop:finecoarsediffsum}} Fix $\point{x}$ with direction $\theta\in({\theta_0}-\delta,{\theta_0}+\delta)$. We will assume $\ltwo{\point{x}}$ is large whenever required. Fix a coarse $Q_{\point{x}}$-skeleton $(\point{w}_j)_{j=0}^{2m}$ for some $m\geq 1$.
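The bound $\Exp\exp(\Cr{sp}(\,\cdot\,)^+/\sigma(\ltwo{\point{x}}))\leq 2$ used in the proof of Proposition~\ref{prop:coarseentropy} above comes from the elementary computation $\Exp e^{cY^+}\leq 1+c\int_0^\infty e^{ct}\Prob(Y^+>t)\,dt\leq 1+c\Cr{1}/(\Cr{2}-c)$ for $0<c<\Cr{2}$, and the choice $c=\Cr{2}/(1+\Cr{1})$ makes the last expression exactly $2$. A minimal sketch of this identity in Python, with exact rational arithmetic; the tail constants are arbitrary illustrative values, not those of the paper.

```python
from fractions import Fraction

def moment_bound(C1, C2):
    # Given the tail bound P(Y^+ > t) <= C1 * exp(-C2 * t), integrating
    # E[e^{cY^+}] = 1 + c * \int_0^inf e^{ct} P(Y^+ > t) dt gives the bound
    # 1 + c*C1/(C2 - c), valid for any 0 < c < C2.
    c = Fraction(C2) / (1 + Fraction(C1))  # the choice c = C2/(1+C1)
    return c, 1 + c * C1 / (C2 - c)

# For any illustrative tail constants, this choice of c gives exactly 2.
for C1, C2 in [(1, 1), (3, 2), (10, 5), (100, 1)]:
    c, bound = moment_bound(C1, C2)
    assert 0 < c < C2
    assert bound == 2
```

Once each summand has exponential moment at most $2$, independence gives the $2^m$ factor and Markov's inequality gives the Chernoff-type tail bound used above.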
For $0\leq i \leq m-1$, let \[ V_i := \left\{ (\point{v},\point{v}^\prime)\in\ensuremath{\mathbb{Z}}^2\times\ensuremath{\mathbb{Z}}^2 : F_{\point{x}}(\point{v})=\point{w}_{2i},G_{\point{x}}(\point{v}^\prime)=\point{w}_{2i+1},\point{v}^\prime-\point{v}\in Q_{\point{x}}\right\}, \] and \[ X_i := \sup\left\{ \left( h(\point{v}-\point{v}^\prime)-\hat{T}^i(\point{v},\point{v}^\prime)- h(\point{w}_{2i}-\point{w}_{2i+1})+\hat{T}^i(\point{w}_{2i},\point{w}_{2i+1})\right)/\sigma(\ltwo{\point{x}}): (\point{v},\point{v}^\prime)\in V_i\right\}. \] For any $0\leq i\leq m-1$ and $(\point{v},\point{v}^\prime)\in V_i$ we have $\ltwo{\point{v}-\point{v}^\prime}\leq 5\ltwo{\point{x}}$ by the definition of $Q_{\point{x}}$. So the number of elements in $V_i$ is at most $\Cl{p440}\ltwo{\point{x}}^4$. Hence, for all $t>0$ we have \begin{equation} \Prob\left( X_i\geq t\right)\leq\Cl{p441}\ltwo{\point{x}}^4 e^{-\Cl{p442}t}. \label{eq:p4.41} \end{equation} If we can show that for some constant $\Cl{p443}>0$ \begin{equation} \Prob\left(\sum_{i=0}^{m-1} X_i\geq\frac{\Cr{0}}{32}m\log\log\ltwo{\point{x}}\right) \leq e^{-\Cr{p443}m\log\ltwo{\point{x}}}, \label{eq:p4.42} \end{equation} then Proposition~\ref{prop:finecoarsediffsum} follows using Lemma~\ref{lem:width}. So we will verify \eqref{eq:p4.42}. Let $N_0$, $N_1$ be positive integers such that \begin{equation} 2^{N_0}<\frac{\Cr{0}}{96}\log\log\ltwo{\point{x}}\leq 2^{N_0+1}, \mbox{ and } 2^{N_1-1}<\Cl{p444}\log\ltwo{\point{x}}\leq 2^{N_1}, \label{eq:p4.43} \end{equation} where $\Cr{p444}:=8/\Cr{p442}$. Then \begin{multline} \Prob\left(\sum_{i=0}^{m-1} X_i\geq\frac{\Cr{0}}{32}m\log\log\ltwo{\point{x}}\right) \leq\Prob\left(\sum_{i=0}^{m-1} X_i\mathbf{1}(2^{N_0}\leq X_i<2^{N_1})\geq\frac{\Cr{0}}{96}m\log\log\ltwo{\point{x}}\right)\\ +\Prob\left(\sum_{i=0}^{m-1} X_i\mathbf{1}(X_i\geq 2^{N_1})\geq\frac{\Cr{0}}{96}m\log\log\ltwo{\point{x}}\right).
\label{eq:p4.44} \end{multline} For the second term on the right-hand side of \eqref{eq:p4.44} we have \begin{align*} & \Prob\left(\sum_{i=0}^{m-1} X_i\mathbf{1}(X_i\geq 2^{N_1})\geq\frac{\Cr{0}}{96}m\log\log\ltwo{\point{x}}\right)\\ \leq & \Prob\left(\sum_{q=N_1}^{\infty}\sum_{i=0}^{m-1} 2^{q+1}\mathbf{1}(2^{q+1}>X_i\geq 2^q)\geq\frac{\Cr{0}}{96}m\log\log\ltwo{\point{x}}\sum_{q=N_1}^\infty 2^{-(q-N_1+1)}\right)\\ \leq &\sum_{q=N_1}^\infty\Prob\left(\sum_{i=0}^{m-1}\mathbf{1}(X_i\geq 2^q)\geq\frac{\Cr{0}}{96}m\log\log\ltwo{\point{x}}2^{N_1-2(q+1)}\right)\\ \leq &\sum_{q=N_1}^\infty e^{-m I(a_1(q)|b_1(q))}, \numberthis\label{eq:p4.45} \end{align*} where \[ a_1(q):=\frac{\Cr{0}}{96}\log\log\ltwo{\point{x}}2^{N_1-2(q+1)},\quad b_1(q):=\max_{0\leq i\leq m-1}\Prob(X_i\geq 2^q), \] and $I$ is the large deviation rate function for Bernoulli random variables given by \begin{equation} I(x|y):=x\log\frac{x}{y}+(1-x)\log\frac{1-x}{1-y}. \label{eq:p4.48} \end{equation} Using \eqref{eq:p4.41}, $\Cr{p444}=8/\Cr{p442}$, and $2^{N_1}\geq\Cr{p444}\log\ltwo{\point{x}}$, we get for $q\geq N_1$ \[ b_1(q)\leq\Cr{p441}\ltwo{\point{x}}^4 e^{-\Cr{p442}2^q}\leq e^{-\C 2^q}. \] So $b_1(q)$ is much smaller than $a_1(q)$. Therefore, using \eqref{eq:p4.48} for $I(a_1(q)|b_1(q))$ and expanding the log terms we see that the term $a_1(q)\log(b_1(q)^{-1})$ dominates the others. Hence \[ I(a_1(q)|b_1(q))\geq\C a_1(q)\log(b_1(q)^{-1})\geq\C\log\log\ltwo{\point{x}}\log\ltwo{\point{x}}2^{-q}. \] Therefore, continuing from \eqref{eq:p4.45} and using \eqref{eq:p4.41} we get \begin{equation} \Prob\left(\sum_{i=0}^{m-1} X_i\mathbf{1}(X_i\geq 2^{N_1})\geq\frac{\Cr{0}}{96}m\log\log\ltwo{\point{x}}\right) \leq e^{-\C m \log\ltwo{\point{x}}}.
\label{eq:p4.46} \end{equation} For the first term on the right-hand side of \eqref{eq:p4.44} we have \begin{align*} & \Prob\left(\sum_{i=0}^{m-1} X_i\mathbf{1}(2^{N_0}\leq X_i<2^{N_1})\geq\frac{\Cr{0}}{96}m\log\log\ltwo{\point{x}}\right)\\ \leq & \Prob\left(\sum_{i=0}^{m-1}\sum_{q=N_0}^{N_1-1}2^{q+1}\mathbf{1}(X_i\geq 2^q)\geq\frac{\Cr{0}}{96} m\log\log\ltwo{\point{x}}\right)\\ \leq & \sum_{q=N_0}^{N_1-1}\Prob\left(\sum_{i=0}^{m-1}\mathbf{1}(X_i\geq 2^q)\geq\frac{\Cr{0}}{96}m(\log\log\ltwo{\point{x}}) 2^{-(q+1)}(N_1-N_0)^{-1}\right)\\ \leq & \sum_{q=N_0}^{N_1-1} e^{-m I(a_2(q)|b_2(q))}, \numberthis\label{eq:p4.47}\end{align*} where \[ a_2(q):=\frac{\Cr{0}}{96}(\log\log\ltwo{\point{x}})2^{-(q+1)}(N_1-N_0)^{-1},\quad b_2(q):=\max_{0\leq i\leq m-1}\Prob(X_i\geq 2^q). \] We will use the following claim to derive a bound on $b_2(q)$. The proof of the claim is presented later. \begin{claim} For $0\leq i\leq m-1$, $(\point{v},\point{v}^\prime)\in V_i$, and $t\in[2^{N_0},2^{N_1-1}]$ we have \[ \Prob\left( h(\point{v}-\point{v}^\prime)-\hat{T}^i(\point{v},\point{v}^\prime) -h(\point{w}_{2i}-\point{w}_{2i+1})+\hat{T}^i(\point{w}_{2i},\point{w}_{2i+1})\geq t\sigma(\ltwo{\point{x}})\right)\leq e^{-\C t\log\ltwo{\point{x}}}. \] \label{claim:finecoarsediffsum} \end{claim} By this claim, and using that the size of $V_i$ is at most $\Cr{p440}\ltwo{\point{x}}^4$, we get, for $N_0\leq q\leq N_1-1$ \[ b_2(q) = \max_{0\leq i\leq m-1} \Prob\left( X_i\geq 2^q\right) \leq \C\ltwo{\point{x}}^4 e^{-\C 2^q\log\ltwo{\point{x}}} \leq e^{-\C 2^q\log\ltwo{\point{x}}}. \] From \eqref{eq:p4.43} we get $\C 2^{-q}\leq a_2(q) \leq \C 2^{-q}$. Therefore, using \eqref{eq:p4.48} for $I(a_2(q)|b_2(q))$ and expanding the log terms we see that the term $a_2(q)\log(b_2(q)^{-1})$ dominates the others. Hence \[ I(a_2(q)|b_2(q))\geq\C a_2(q)\log(b_2(q)^{-1})\geq\C\log\ltwo{\point{x}}.
\] So continuing from \eqref{eq:p4.47} we get \[ \Prob\left(\sum_{i=0}^{m-1} X_i\mathbf{1}(2^{N_0}\leq X_i<2^{N_1}) \geq\frac{\Cr{0}}{96}m\log\log\ltwo{\point{x}}\right) \leq e^{-m\C\log\ltwo{\point{x}}}. \] Combining this with \eqref{eq:p4.46} we get \eqref{eq:p4.42}. So to complete the proof of Proposition~\ref{prop:finecoarsediffsum} we only need to prove Claim~\ref{claim:finecoarsediffsum}. \paragraph{\textbf{Proof of Claim~\ref{claim:finecoarsediffsum}:}} Fix $0\leq i\leq m-1$ and $(\point{v},\point{v}^\prime)\in V_i$. Let \[ \dev{D}(\point{v},\point{v}^\prime):=h(\point{v}-\point{v}^\prime)-\hat{T}^i(\point{v},\point{v}^\prime) -h(\point{w}_{2i}-\point{w}_{2i+1})+\hat{T}^i(\point{w}_{2i},\point{w}_{2i+1}). \] Let $\ell_0:=\ltwo{\point{x}}(\log\ltwo{\point{x}})^{-1/\alpha}$. We consider two cases: either $\xprojth(\point{v}^\prime-\point{v})\leq\ell_0$ or $\xprojth(\point{v}^\prime-\point{v})\geq\ell_0$. \paragraph{\textbf{Case I:}} Suppose $\xprojth(\point{v}^\prime-\point{v})\leq\ell_0$. By Lemma~\ref{lem:width}, $\yprojth(\point{v}-\point{w}_{2i})$ and $\yprojth(\point{v}^\prime-\point{w}_{2i+1})$ are at most $\C\Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}$, which is smaller than $\ell_0$. Hence $\ltwo{\point{v}-\point{w}_{2i}}$ and $\ltwo{\point{v}^\prime-\point{w}_{2i+1}}$ are at most $\C\ell_0$. Therefore, using \eqref{A1} and \eqref{A2}, we get \[ \Prob\left(\dev{D}(\point{v},\point{v}^\prime)\geq t\sigma(\ltwo{\point{x}})\right)\leq \C e^{-\C t\sigma(\ltwo{\point{x}})/\sigma(\ell_0)}\leq e^{-\C t\log\ltwo{\point{x}}}. \] So the claim is proved in this case. \paragraph{\textbf{Case II:}} Suppose $\xprojth(\point{v}^\prime-\point{v})\geq\ell_0$.
Let \begin{align*} & \dev{D}_1(\point{v},\point{v}^\prime):= h(\point{v}-\point{v}^\prime)-T(\point{v},\point{v}^\prime)-h(\point{v}-\point{w}_{2i+1})+T(\point{v},\point{w}_{2i+1}),\\ & \dev{D}_2(\point{v},\point{v}^\prime):= h(\point{v}-\point{w}_{2i+1})-T(\point{v},\point{w}_{2i+1})-h(\point{w}_{2i}-\point{w}_{2i+1})+T(\point{w}_{2i},\point{w}_{2i+1}). \end{align*} So $\dev{D}(\point{v},\point{v}^\prime)=\dev{D}_1(\point{v},\point{v}^\prime)+\dev{D}_2(\point{v},\point{v}^\prime)$, and hence \begin{equation} \Prob\left(\dev{D}(\point{v},\point{v}^\prime)\geq t\sigma(\ltwo{\point{x}})\right)\leq \Prob\left(\dev{D}_1(\point{v},\point{v}^\prime)\geq \frac{t}{2}\sigma(\ltwo{\point{x}})\right)+ \Prob\left(\dev{D}_2(\point{v},\point{v}^\prime)\geq \frac{t}{2}\sigma(\ltwo{\point{x}})\right). \label{eq:p542} \end{equation} We will only consider the first term on the right-hand side; the second term can be dealt with similarly. Suppose $i_0$ and $j_0$ are such that $\point{v}^\prime\in B_{i_0 j_0}$, where we recall that $B_{ij}$ is defined in \eqref{eq:coarsedef}. Let \begin{multline*} R(t) := \left\{\point{y}\in\ensuremath{\mathbb{R}}^2:\xprojth(\point{y})=i_0\ell_{\point{x}},\right.\\ \left.j_0\Delta(\ell_{\point{x}})-t^{1/2}\Delta(\ell_{\point{x}})(\log\ell_{\point{x}})^{1/2}\leq\yprojth(\point{y})\leq (j_0+1)\Delta(\ell_{\point{x}})+t^{1/2}\Delta(\ell_{\point{x}})(\log\ell_{\point{x}})^{1/2}\vphantom{\point{y}\in\ensuremath{\mathbb{R}}^2}\right\}. \end{multline*} So $R(t)$ is an extension of a side of the parallelogram $B_{i_0j_0}$. Define the event \[ \mbox{$\event{E}(t)$: $\Gamma(\point{v},\point{v}^\prime)$ intersects $R(t)$}. \] Since $\point{v}^\prime\in B_{i_0j_0}$, the distance of the segment $R(t)$ from $\point{v}^\prime$ in the $-{\theta}$ direction is less than $\ell_{\point{x}}$.
So, on the event $\event{E}(t)^c$, we have $\mathcal{W}(\point{v}^\prime,\point{v},k,-{\theta})\geq t^{1/2}\Delta(\ell_{\point{x}})(\log\ell_{\point{x}})^{1/2}$ for some $k\leq\ell_{\point{x}}$. So to bound the probability of $\event{E}(t)^c$ we use Corollary~\ref{cor:endwandlogspl} with \[ \tilde{{\theta_0}}:=-{\theta},\quad \tilde{n}:=\xprojth(\point{v}^\prime-\point{v}),\quad \tilde{l}:=\yprojth(\point{v}-\point{v}^\prime),\quad \tilde{k}:=\ell_{\point{x}},\quad \tilde{t}:=t^{1/2}. \] We will now verify the conditions of Corollary~\ref{cor:endwandlogspl}. Recall that by our choice of $\delta$ from \eqref{eq:deltachoice}, ${\theta}$ is a direction of type I, and so is $-{\theta}$. Using the assumption of Case II we get $\tilde{n}\geq\ell_0\geq\tilde{n_0}$. By Lemma~\ref{lem:width} we have $|\tilde{l}|\leq\C\Delta(\ltwo{\point{x}})\log\log\ltwo{\point{x}}$. Hence, $|\tilde{l}|\leq\tilde{n}\tilde{\delta_2}$, as required. Recall from the statement of the claim that $t\leq 2^{N_1-1}$, and from \eqref{eq:p4.43} we have $2^{N_1-1}\leq\Cr{p444}\log\ltwo{\point{x}}$. Therefore, $t^{1/2}\Delta(\tilde{k})(\log\tilde{k})^{1/2}\leq\tilde{k}\tilde{\delta_3}$, as required. Thus, all the conditions for applying Corollary~\ref{cor:endwandlogspl} hold, and we get \begin{equation} \Prob\left(\event{E}(t)^c\right)\leq e^{-\C t\log\ell_{\point{x}}}\leq e^{-\C t\log\ltwo{\point{x}}}. \label{eq:p543} \end{equation} Let $R:=R(2^{N_1-1})$. For any $\point{y}$ in $R$ let \[ \dev{D}^\prime_1(\point{y}) := h(\point{v}-\point{y})-T(\point{v},\point{y})-h(\point{v}-\point{w}_{2i+1})+T(\point{v},\point{w}_{2i+1}),\quad \dev{D}^\prime_2(\point{y}) := h(\point{y}-\point{v}^\prime)-T(\point{y},\point{v}^\prime). \] If $\Gamma(\point{v},\point{v}^\prime)$ passes through $\point{y}$ then $\dev{D}_1(\point{v},\point{v}^\prime)\leq\dev{D}^\prime_1(\point{y})+\dev{D}^\prime_2(\point{y})$.
Hence \begin{equation} \Prob\left(\dev{D}_1(\point{v},\point{v}^\prime)\geq\frac{t}{2}\sigma(\ltwo{\point{x}})\right)\leq \Prob\left(\max_{\point{y}\in R}\dev{D}^\prime_1(\point{y})\geq\frac{t}{4}\sigma(\ltwo{\point{x}})\right) + \Prob\left(\max_{\point{y}\in R}\dev{D}^\prime_2(\point{y})\geq\frac{t}{4}\sigma(\ltwo{\point{x}})\right) + \Prob(\event{E}(t)^c). \label{eq:p544} \end{equation} Consider the first term on the right-hand side. We will use Corollary~\ref{cor:logupinc} with \begin{gather*} \tilde{{\theta_0}}:={\theta},\quad \tilde{n}:=i_0\ell_{\point{x}}-\xprojth(\point{v}),\quad \tilde{L}:=\Delta(\ell_{\point{x}})(1+2^{(N_1-1)/2+1}(\log\ell_{\point{x}})^{1/2}),\\ \tilde{d}:=\Delta(\ell_{\point{x}})(j_0-2^{(N_1-1)/2}(\log\ell_{\point{x}})^{1/2})-\yprojth(\point{v}). \end{gather*} We will now verify the conditions of Corollary~\ref{cor:logupinc}. By our choice of $\delta$ in \eqref{eq:deltachoice}, ${\theta}$ is of both type I and type II. Since $\tilde{n}$ is the distance of $R$ from $\point{v}$ in the direction $-{\theta}$, we have \[ \tilde{n}\geq\ell_0-\ell_{\point{x}}\geq\Cl{331}\ell_0\geq\tilde{n_0}. \] Using $2^{N_1-1}<\Cr{p444}\log\ltwo{\point{x}}$, \eqref{A2}, and $\tilde{n}\geq\Cr{331}\ell_0$, we get \begin{equation} \tilde{L} \leq\Cl{332}\Delta(\ell_{\point{x}})\log\ltwo{\point{x}} \leq\C\Delta(\ell_0)(\log\ltwo{\point{x}})^{1-(1+\beta)/(2\alpha)} \leq\tilde{\delta_2}\Delta(\tilde{n}). \label{eq:lengthofR1} \end{equation} By Lemma~\ref{lem:width} we have $|\yprojth(\point{v}^\prime-\point{v})|\leq\C\Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}$. Since $\point{v}^\prime\in B_{i_0j_0}$, we have $|\yprojth(\point{v}^\prime)-j_0 \Delta(\ell_{\point{x}})|\leq\Delta(\ell_{\point{x}})$.
Therefore, using \eqref{A2} and $\tilde{n}\geq\Cr{331}\ell_0$, we get \begin{equation} |\tilde{d}|\leq \C\Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2} + 2^{N_1/2}\Delta(\ell_{\point{x}})(\log\ell_{\point{x}})^{1/2} \leq \C\Delta(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}\leq \tilde{\delta_1}\tilde{n}. \label{eq:d} \end{equation} By Corollary~\ref{cor:logupinc} we get for $\tilde{t}\geq\tilde{t_0}$ \begin{equation} \Prob\left(\max_{\point{y}\in R}\dev{D}^\prime_1(\point{y})\geq\C\tilde{L}\frac{|\tilde{d}|}{\tilde{n}}+\tilde{t}\sigma(\Delta^{-1}(\tilde{L}))\log\Delta^{-1}(\tilde{L})\right) \leq\C e^{-\C\tilde{t}\log\tilde{L}}. \label{eq:101520@1} \end{equation} Let $\tilde{t}$ be such that \begin{equation} \C\tilde{L}\frac{|\tilde{d}|}{\tilde{n}}+\tilde{t}\sigma(\Delta^{-1}(\tilde{L}))\log\Delta^{-1}(\tilde{L})=\frac{t}{4}\sigma(\ltwo{\point{x}}). \label{eq:101520@2} \end{equation} We need to verify $\tilde{t}\geq\tilde{t_0}$. Using \eqref{eq:lengthofR1} and \eqref{A2}, we have \begin{equation} \tilde{L}\leq\Cr{332}\Delta(\ell_{\point{x}})\log\ltwo{\point{x}}\leq\C\Delta(\ltwo{\point{x}})(\log\ltwo{\point{x}})^{-1/\alpha}. \label{eq:lengthofR} \end{equation} Using this with \eqref{eq:d} we get \begin{equation} \tilde{L}\frac{|\tilde{d}|}{\tilde{n}} \leq\C\sigma(\ltwo{\point{x}})(\log\log\ltwo{\point{x}})^{1/2}(\log\ltwo{\point{x}})^{-1/\alpha}. \label{eq:101520@3} \end{equation} Using $t\geq2^{N_0}\geq(\Cr{0}/192)\log\log\ltwo{\point{x}}$, \eqref{eq:101520@2}, \eqref{eq:101520@3}, and \eqref{eq:lengthofR}, we get \begin{multline*} \tilde{t} \geq \C t\frac{\sigma(\ltwo{\point{x}})}{\sigma(\Delta^{-1}(\tilde{L}))\log\Delta^{-1}(\tilde{L})} \geq \C t\left(\frac{\ltwo{\point{x}}}{\Delta^{-1}(\tilde{L})}\right)^{\alpha}(\log\ltwo{\point{x}})^{-1}\\ \geq \C t\left(\frac{\Delta(\ltwo{\point{x}})}{\tilde{L}}\right)^{2\alpha/(1+\beta)}(\log\ltwo{\point{x}})^{-1} \geq \C t (\log\ltwo{\point{x}})^{(1-\beta)/(1+\beta)}\geq\tilde{t_0}.
\end{multline*} Therefore, from \eqref{eq:101520@1}, \eqref{eq:101520@2} we get \begin{equation} \Prob\left(\max_{\point{y}\in R}\dev{D}^\prime_1(\point{y})\geq\frac{t}{4}\sigma(\ltwo{\point{x}})\right)\leq \C e^{-\C t(\log\ltwo{\point{x}})^{2/(1+\beta)}}\leq \C e^{-\C t(\log\ltwo{\point{x}})}. \label{eq:p546} \end{equation} Now we will consider the second term on the right-hand side of \eqref{eq:p544}. By \eqref{eq:lengthofR}, the width of $R$ is less than $\Delta(\ltwo{\point{x}})$. The distance of $R$ from $\point{v}^\prime$ in the ${\theta}$ direction is less than $\ell_{\point{x}}$. So $\ltwo{\point{y}-\point{v}^\prime}\leq\C\ell_{\point{x}}$ for all $\point{y}\in R$. Thus, using \eqref{A1}, \eqref{A2}, and a union bound, we get \[ \Prob\left(\max_{\point{y}\in R}\dev{D}^\prime_2(\point{y}) \geq\frac{t}{4}\sigma(\ltwo{\point{x}})\right) \leq \C\ltwo{\point{x}}e^{-\C t\sigma(\ltwo{\point{x}})/\sigma(\ell_{\point{x}})} \leq \C e^{-\C t(\log\ltwo{\point{x}})^2}. \] Using this in \eqref{eq:p544} together with \eqref{eq:p543} and \eqref{eq:p546} we get the appropriate bound for the first term on the right-hand side of \eqref{eq:p542}. The second term can be dealt with similarly. This completes the proof of Claim~\ref{claim:finecoarsediffsum}. This also completes the proof of Proposition~\ref{prop:finecoarsediffsum}. \subsection{Proof of Theorem~\ref{thm:nrf2}} \resetconstant We assume that $\eta<1$ because for $\eta=1$ the result is the same as Proposition~\ref{prop:nrflog}. Let $n$ be a positive integer such that $(1-\alpha)^n\leq\eta/2$. Define for $0\leq m\leq n$ and $k\geq 3$ \[ \psi_m(k):=k^{-\alpha}\sigma(k)(\log k)^{(1-\alpha)^m}(\log\log k)^{1-(1-\alpha)^m}. \] Because $\psi_n(k)\leq\phi_\eta(k)$ for large enough $k$, to prove Theorem~\ref{thm:nrf2} it is enough to show that $h$ satisfies GAP with exponent $\alpha$ and correction factor $\psi_n$. By the assumptions of Theorem~\ref{thm:nrf2}, ${\theta_0}$ is a direction of both type I and type II.
Therefore, by Theorem~\ref{thm:chaploglog}, there exist positive constants $\delta$, $C_c$, $M_c$, $K$, $a$, such that $h$ satisfies $\text{CHAP}(\alpha,\widehat{\phi},M_c,C_c,K,a)$ in the sector of directions $({\theta_0}-\delta,{\theta_0}+\delta)$. Define $\mathcal{S}_0:=[0,2\pi]$, and for $1\leq m\leq n$ define \[ \mathcal{S}_m:=\left[{\theta_0}-\delta\frac{n-m+1}{n},{\theta_0}+\delta\frac{n-m+1}{n}\right]. \] We will show that $h$ satisfies GAP with exponent $\alpha$ and correction factor $\psi_n$ in $\mathcal{S}_n$. By Proposition~\ref{prop:nrflog}, $h$ satisfies GAP with exponent $\alpha$ and correction factor $\phi_1=\psi_0$ in all directions. Hence, there exist constants $C_g>0$ and $M_g>0$ such that for $\ltwo{\point{x}}\geq M_g$ we have \begin{equation} h(\point{x})\leq g(\point{x})+C_g\sigma\left(\ltwo{\point{x}}\right)\log\ltwo{\point{x}}. \label{eq:gapcgmg} \end{equation} We will use an inductive argument. Fix $0\leq m<n$. Suppose $h$ satisfies GAP with exponent $\alpha$, correction factor $\psi_m$, in the sector $\mathcal{S}_{m}$, with constants $C$ and $M$. We will show that $h$ satisfies GAP with exponent $\alpha$, correction factor $\psi_{m+1}$, in the sector $\mathcal{S}_{m+1}$, with the same constants $C$ and $M$. This will prove that $h$ satisfies GAP with exponent $\alpha$, correction factor $\psi_n$ in the sector $\mathcal{S}_n$. The constants $C$ and $M$ need to remain unchanged. We will see that if $C$ and $M$ are chosen large enough then the inductive step works. We assume without loss of generality $K>1$, $M>3$, $M>M_c$, $M>M_g$. Also we assume $M$ is large enough, independent of $m$, so that $\psi_{m+1}(\ltwo{\point{x}})\geq 1$ for all $\ltwo{\point{x}}\geq M$, which is possible by \eqref{A2}. Since $h$ has at most linear growth, there exists a constant $r>0$ such that for all $\point{x}$ we have $h(\point{x})\leq r\ltwo{\point{x}}$.
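The correction factors $\psi_m$ interpolate between the a priori factor $\log k$ ($m=0$) and, in the limit, $\log\log k$; each inductive step trades the exponent $(1-\alpha)^m$ on $\log k$ for $(1-\alpha)^{m+1}$. A minimal numerical sketch (Python) of the monotonicity of the $\psi_m$ and of the claim $\psi_n(k)\leq\phi_\eta(k)$ for large $k$ once $(1-\alpha)^n\leq\eta/2$; the values of $\alpha$, $\eta$, and $k$ below are illustrative, not tied to any particular passage-time model.

```python
import math

def psi_factor(L, alpha, m):
    # The (log k)/(log log k) part of psi_m, written in terms of L = log k:
    # (log k)^{(1-alpha)^m} * (log log k)^{1-(1-alpha)^m}.
    e = (1 - alpha) ** m
    return L ** e * math.log(L) ** (1 - e)

def phi_factor(L, eta):
    # The corresponding part of phi_eta: (log k)^eta.
    return L ** eta

alpha, eta = 0.5, 0.4
n = 3                         # smallest n with (1-alpha)^n = 0.125 <= eta/2 = 0.2
assert (1 - alpha) ** n <= eta / 2

# Successive psi_m decrease (the ratio is (log L / L)^{e_m - e_{m+1}} < 1),
# and psi_n eventually drops below phi_eta; "large enough k" is genuinely large.
L = 5000 * math.log(10.0)     # L = log k for k = 10^5000
values = [psi_factor(L, alpha, m) for m in range(n + 1)]
assert all(values[m + 1] < values[m] for m in range(n))
assert values[n] <= phi_factor(L, eta)
```

The common factor $k^{-\alpha}\sigma(k)$ cancels in all these comparisons, which is why the sketch only tracks the logarithmic parts.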
Let $\nu:=(1-\beta)/4$, $c_0:=3C_c$, $c_1:=C_c a$, $c_2:=3K$, $c_3:=(c_1+c_0+C_g c_2(\alpha\nu)^{-1})\lowconst^{-1}$, $c_4:=c_2^\alpha\lowconst^{-1}$, \[ c_5:=c_3^\alpha c_4^{1-\alpha}\left(\Lp\frac{\alpha}{1-\alpha}\right)^{1-\alpha}+\left(\frac{1-\alpha}{\alpha}\right)^{\alpha}\right), \] and $c_6:=(1-\alpha)c_3(\alpha c_4)^{-1}$. We start the inductive step now. Consider $\point{x}$ with direction ${\theta}\in\mathcal{S}_{m+1}$ and $\ltwo{\point{x}}\geq M$. We need to show \begin{equation} h(\point{x})\leq g(\point{x})+C\ltwo{\point{x}}^\alpha\psi_{m+1}(\ltwo{\point{x}}). \label{eq:toverify} \end{equation} We are free to choose $C$ and $M$ large enough, independent of $m$. In various steps we will assume $C$ is large enough depending on $M$, and $M$ will be chosen to be large enough without depending on $C$. \paragraph{\textbf{Bounding $h(\point{x})$ when $\ltwo{\point{x}}$ is small:}} Suppose $\ltwo{\point{x}}\leq c_2 M$. Assuming $C\geq r c_2 M$, we get \[ h(\point{x})\leq r\ltwo{\point{x}}\leq r c_2 M\leq C \leq C\ltwo{\point{x}}^\alpha\psi_{m+1}(\ltwo{\point{x}}) \leq g(\point{x}) + C\ltwo{\point{x}}^\alpha\psi_{m+1}(\ltwo{\point{x}}). \] Thus \eqref{eq:toverify} is verified. \paragraph{\textbf{Defining $\point{x}^\ast$, $\x_{\sf L}$ and $\x_{\sf S}$ when $\ltwo{\point{x}}$ is large:}} Suppose $\ltwo{\point{x}}\geq c_2 M$. Take $q\in[c_2,\ltwo{\point{x}}/M]\cap\ensuremath{\mathbb{Q}}$. Then $\ltwo{\point{x}/q}\geq M\geq M_c$. Applying $\text{CHAP}(\alpha,\widehat{\phi},M_c,C_c,K,a)$ to $\point{x}/q$ we get \begin{equation} \point{x}/q=\sum_{i=1}^{3}\alpha_{qi}\point{y}_{qi}\mbox{ with }\alpha_{qi}\geq 0,\;\sum_{i=1}^{3}\alpha_{qi}\in[1,a]\mbox{ and }\point{y}_{qi}\in Q_{\point{x}/q}(\alpha,\widehat{\phi},C_c,K).
\label{eq:xqdecomp}\end{equation} Let \[ L(q):=\left\{ 1\leq i\leq 3:\ltwo{\point{y}_{qi}}\geq\ltwo{\point{x}/q}^{1-\nu}\right\} \] and \begin{equation} \point{x}^\ast:=\sum_{i=1}^{3}\lfloor q\alpha_{qi}\rfloor\point{y}_{qi},\; \x_{\sf L}:=\sum_{i\in L(q)}\gamma_{qi}\point{y}_{qi},\; \x_{\sf S}:=\sum_{i\not\in L(q) }\gamma_{qi}\point{y}_{qi}, \label{eq:xstxlxs}\end{equation} where \[ \gamma_{qi}:=q\alpha_{qi}-\lfloor q\alpha_{qi}\rfloor\in[0,1). \] Therefore, \[ \point{x}=\point{x}^\ast+\x_{\sf L}+\x_{\sf S}. \] \paragraph{\textbf{Direction of $\x_{\sf L}$:}} Consider $i\in L(q)$. Then \begin{equation} \ltwo{\point{y}_{qi}}\geq\ltwo{\point{x}/q}^{1-\nu}. \label{eq:dirxl1} \end{equation} Using Lemma~\ref{lem:width}, $\point{y}_{qi}\in Q_{\point{x}/q}(\alpha,\widehat{\phi},C_c,K)$, $\ltwo{\point{x}/q}\geq M$, assuming $M$ is large enough, and \eqref{A2}, we get \[ \yprojth(\point{y}_{qi})\leq\C\Delta(\point{x}/q)(\log\log\ltwo{\point{x}/q})^{1/2}\leq\C\ltwo{\point{x}/q}^{(1+\beta)/2}(\log\log\ltwo{\point{x}/q})^{1/2}. \] Therefore, using \eqref{eq:dirxl1}, \eqref{A2}, and $1-\nu=1-(1-\beta)/4>(1+\beta)/2$, we get \[ \xprojth(\point{y}_{qi})\geq\ltwo{\point{y}_{qi}}-\yprojth(\point{y}_{qi})\geq\C\ltwo{\point{x}/q}^{1-\nu}, \] and, further, using $1-\nu-(1+\beta)/2=(1-\beta)/4$ we get \[ \frac{\yprojth(\point{y}_{qi})}{\xprojth(\point{y}_{qi})}\leq \C\frac{\ltwo{\point{x}/q}^{(1+\beta)/2}(\log\log\ltwo{\point{x}/q})^{1/2}}{\ltwo{\point{x}/q}^{1-\nu}}\leq \C\frac{(\log\log M)^{1/2}}{M^{(1-\beta)/4}}. \] Since ${\theta}\in\mathcal{S}_{m+1}$, assuming $M$ is large enough, we get that the direction of $\point{y}_{qi}$ lies in $\mathcal{S}_m$. This implies that $\x_{\sf L}$ has direction in $\mathcal{S}_m$.
\paragraph{\textbf{Bounding $h(\point{x}^\ast)$:}} Using subadditivity of $h$, \eqref{eq:xstxlxs}, \eqref{eq:xqdecomp}, and \eqref{A2}, we get \begin{multline} h(\point{x}^\ast) \leq \sum_{i=1}^{3}\lfloor q\alpha_{qi}\rfloor h(\point{y}_{qi}) \leq \sum_{i=1}^{3}\lfloor q\alpha_{qi}\rfloor \left[ g_{\point{x}}(\point{y}_{qi})+C_c\ltwo{\point{x}/q}^\alpha\widehat{\phi}(\ltwo{\point{x}/q})\right]\\ \leq g_{\point{x}}(\point{x}^\ast)+c_1\lowconst^{-1} q^{1-\alpha}\ltwo{\point{x}}^\alpha\widehat{\phi}(\ltwo{\point{x}}). \label{eq:hxt2} \end{multline} \paragraph{\textbf{Bounding $h(\x_{\sf S})$ when $\ltwo{\x_{\sf S}}$ is large:}} Suppose $\ltwo{\x_{\sf S}}\geq M_g$. Using $q\geq c_2=3K>3$, $\ltwo{\point{x}/q}\geq M\geq 1$, and \eqref{eq:xstxlxs} we get \[ \ltwo{\x_{\sf S}} = \ltwo{\sum_{i\not\in L(q)}\gamma_{qi}\point{y}_{qi}} \leq \sum_{i\not\in L(q)}\gamma_{qi}\ltwo{\point{y}_{qi}} \leq 3\ltwo{\point{x}/q}^{1-\nu} \leq 3\ltwo{\point{x}/q} \leq \ltwo{\point{x}}. \] Using this and $\log\ltwo{\point{x}}\leq\ltwo{\point{x}}^{\alpha\nu}(\alpha\nu)^{-1}$ we get \[ \frac{\sigma(\ltwo{\x_{\sf S}})\log\ltwo{\x_{\sf S}}}{\sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}}}\leq\lowconst^{-1}\left(\frac{\ltwo{\x_{\sf S}}}{\ltwo{\point{x}}}\right)^\alpha\frac{\log\ltwo{\point{x}}}{\log\log\ltwo{\point{x}}}\leq\lowconst^{-1}\frac{c_2^\alpha q^{\alpha\nu}\log\ltwo{\point{x}}}{q^\alpha\ltwo{\point{x}}^{\alpha\nu}\log\log\ltwo{\point{x}}}\leq\lowconst^{-1} c_2^\alpha(\alpha\nu)^{-1}q^{-\alpha(1-\nu)}. 
\] Since $\ltwo{\x_{\sf S}}\geq M_g$, using \eqref{eq:gapcgmg} we get \begin{align*} h(\x_{\sf S}) & \leq g(\x_{\sf S})+C_g\ltwo{\x_{\sf S}}^\alpha\phi_1(\ltwo{\x_{\sf S}})\\ ~ & \leq \sum_{i\not\in L(q)}\gamma_{qi} g(\point{y}_{qi}) + C_g\sigma(\ltwo{\x_{\sf S}})\log\ltwo{\x_{\sf S}}\\ ~ & \leq g_{\point{x}}(\x_{\sf S})+\sum_{i\not\in L(q)}\gamma_{qi}\left[ g(\point{y}_{qi})-g_{\point{x}}(\point{y}_{qi})\right]+ C_g\sigma(\ltwo{\x_{\sf S}})\log\ltwo{\x_{\sf S}} \\ ~ & \leq g_{\point{x}}(\x_{\sf S})+\sum_{i\not\in L(q)}\gamma_{qi}\left[ g(\point{y}_{qi})-g_{\point{x}}(\point{y}_{qi})\right]+C_g\lowconst^{-1} c_2^\alpha (\alpha\nu)^{-1} q^{-\alpha(1-\nu)}\sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}}\\ ~ & \leq g_{\point{x}}(\x_{\sf S})+\sum_{i\not\in L(q)}\gamma_{qi}\left[ g(\point{y}_{qi})-g_{\point{x}}(\point{y}_{qi})\right]+C_g\lowconst^{-1} c_2^\alpha(\alpha\nu)^{-1} q^{1-\alpha} \sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}}. \numberthis\label{eq:hxslarge} \end{align*} \paragraph{\textbf{Bounding $h(\x_{\sf S})$ when $\ltwo{\x_{\sf S}}$ is small:}} Suppose $\ltwo{\x_{\sf S}}\leq M_g$. Since $\point{y}_{qi}\in Q_{\point{x}/q}(\alpha,\widehat{\phi},C_c,K)$, \[ 0\leq h(\point{y}_{qi})\leq g_{\point{x}}(\point{y}_{qi})+C_c q^{-\alpha}\ltwo{\point{x}}^\alpha\widehat{\phi}(\ltwo{\point{x}/q}). \] Therefore \[ g_{\point{x}}(\point{y}_{qi})\geq - C_c q^{-\alpha}\ltwo{\point{x}}^\alpha\widehat{\phi}(\ltwo{\point{x}/q}). \] So, letting $I(q):=\left\{ i\leq 3:g_{\point{x}}(\point{y}_{qi})<0\right\}$, and using the definition of $c_0$ and \eqref{A2}, we have \[ g_{\point{x}}(\x_{\sf S}) = \sum_{i\not\in L(q)}\gamma_{qi}g_{\point{x}}(\point{y}_{qi}) \geq - \sum_{i\in I(q)}|g_{\point{x}}(\point{y}_{qi})| \geq - c_0 q^{-\alpha}\ltwo{\point{x}}^\alpha\widehat{\phi}(\ltwo{\point{x}/q}) \geq - \lowconst^{-1} c_0 q^{-\alpha}\ltwo{\point{x}}^\alpha\widehat{\phi}(\ltwo{\point{x}}).
\] Therefore, using $\ltwo{\x_{\sf S}}\leq M_g$ and $h(\x_{\sf S})\leq r\ltwo{\x_{\sf S}}$, we get \begin{equation} h(\x_{\sf S}) \leq r M_g \leq g_{\point{x}}(\x_{\sf S}) + \lowconst^{-1} c_0 q^{-\alpha} \ltwo{\point{x}}^\alpha\widehat{\phi}(\ltwo{\point{x}}) + r M_g. \label{eq:hxssmall}\end{equation} \paragraph{\textbf{Bounding $h(\x_{\sf L})$ when $\ltwo{\x_{\sf L}}$ is large:}} Suppose $\ltwo{\x_{\sf L}}\geq M$. Using $q\geq c_2\geq 1$, $\ltwo{\point{x}/q}\geq M\geq 1$, \eqref{eq:xstxlxs}, and $\ltwo{\point{y}_{qi}}\leq K\ltwo{\point{x}/q}$, we get \[ \ltwo{\x_{\sf L}} = \ltwo{\sum_{i\in L(q)} \gamma_{qi} \point{y}_{qi}} \leq \sum_{i\in L(q)} \gamma_{qi} \ltwo{\point{y}_{qi}} \leq c_2 \ltwo{\point{x}/q} \leq \ltwo{\point{x}}. \] Using this and applying $\text{GAP}(\alpha,\psi_m,M,C)$, which holds by the induction hypothesis since $\x_{\sf L}$ has direction in $\mathcal{S}_m$, we get \begin{align*} h(\x_{\sf L}) & \leq g(\x_{\sf L}) + C \ltwo{\x_{\sf L}}^\alpha \psi_m \left(\ltwo{\x_{\sf L}}\right)\\ ~ & \leq \sum_{i\in L(q)}\gamma_{qi} g(\point{y}_{qi}) + C\lowconst^{-1} c_2^\alpha q^{-\alpha}\ltwo{\point{x}}^\alpha\psi_m\left(\ltwo{\point{x}}\right)\\ ~ & \leq g_{\point{x}}(\x_{\sf L}) +\sum_{i\in L(q)}\gamma_{qi}\left[ g(\point{y}_{qi})-g_{\point{x}}(\point{y}_{qi})\right] + C\lowconst^{-1} c_2^\alpha q^{-\alpha}\ltwo{\point{x}}^\alpha\psi_m\left(\ltwo{\point{x}}\right).\numberthis\label{eq:hxllarge} \end{align*} \paragraph{\textbf{Bounding $h(\x_{\sf L})$ when $\ltwo{\x_{\sf L}}$ is small:}} Suppose $\ltwo{\x_{\sf L}}\leq M$. Then, by calculations similar to those leading to \eqref{eq:hxssmall}, we get \begin{equation} h(\x_{\sf L})\leq r M \leq g_{\point{x}}(\x_{\sf L}) + \lowconst^{-1} c_0 q^{-\alpha}\ltwo{\point{x}}^\alpha\widehat{\phi}(\ltwo{\point{x}}) + rM.
\label{eq:hxlsmall} \end{equation} \paragraph{\textbf{Overall bound on $h(\point{x})$:}} Using $\point{y}_{qi}\in Q_{\point{x}/q}(\alpha,\widehat{\phi},C_c,K)$, the definition of $c_0$, and \eqref{A2}, \begin{align*} \sum_{i=1}^{3}\gamma_{qi}\left[ g(\point{y}_{qi})-g_{\point{x}}(\point{y}_{qi})\right] \leq \sum_{i=1}^{3}\gamma_{qi}\left[ h(\point{y}_{qi})-g_{\point{x}}(\point{y}_{qi})\right] \leq c_0 \ltwo{\point{x}/q}^{\alpha}\widehat{\phi}(\ltwo{\point{x}/q}) \leq c_0 \lowconst^{-1} q^{-\alpha}\ltwo{\point{x}}^\alpha\widehat{\phi}(\ltwo{\point{x}}). \end{align*} Combining this with \eqref{eq:hxt2}-\eqref{eq:hxlsmall}, we get \begin{align*} h(\point{x}) & \leq h(\point{x}^\ast) + h(\x_{\sf L}) + h(\x_{\sf S})\\ & \leq g(\point{x}) + c_3 q^{1-\alpha}\sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}} + C c_4 q^{-\alpha}\sigma(\ltwo{\point{x}})(\log\ltwo{\point{x}})^{(1-\alpha)^m}(\log\log\ltwo{\point{x}})^{1-(1-\alpha)^m} \\ & \quad + rM + rM_g.\numberthis\label{eq:hx2} \end{align*} \paragraph{\textbf{Optimizing over $q$:}} The value of $q$ that minimizes the right-hand side of \eqref{eq:hx2} is \[ q_0:=C\frac{\alpha c_4}{(1-\alpha) c_3}\left(\frac{\log\ltwo{\point{x}}}{\log\log\ltwo{\point{x}}}\right)^{(1-\alpha)^m}. \] Plugging $q=q_0$ into \eqref{eq:hx2}, we see that if $C$ is large enough depending on $r$, $M$, $M_g$, then \begin{align*} h(\point{x}) & \leq g(\point{x}) + c_5 C^{1-\alpha} \sigma(\ltwo{\point{x}})(\log\ltwo{\point{x}})^{(1-\alpha)^{m+1}} (\log\log\ltwo{\point{x}})^{1-(1-\alpha)^{m+1}} + rM + rM_g\\ & \leq g(\point{x})+C\sigma(\ltwo{\point{x}})(\log\ltwo{\point{x}})^{(1-\alpha)^{m+1}} (\log\log\ltwo{\point{x}})^{1-(1-\alpha)^{m+1}}. \end{align*} Thus we get \eqref{eq:toverify} provided we prove $q_0$ is feasible. \paragraph{\textbf{Feasibility of $q_0$:}} We need to verify that $q_0\in [c_2,\ltwo{\point{x}}/M]$. We get $q_0\geq c_2$ using $\ltwo{\point{x}}\geq M$, $m\leq n$, choosing $C>1$, and assuming $M$ is large enough.
Suppose $\ltwo{\point{x}}<q_0 M$, so that we have \begin{equation} CM\geq c_6\ltwo{\point{x}}\left(\frac{\log\ltwo{\point{x}}}{\log\log\ltwo{\point{x}}}\right)^{-(1-\alpha)^m}. \label{eq:aux1} \end{equation} This gives an upper bound on $\ltwo{\point{x}}$, so $q_0$ can fail to be feasible only when $\ltwo{\point{x}}$ is too small. But in that case we can prove \eqref{eq:toverify} in a different way. Consider two cases. \textbf{Case I:} Suppose $\point{x}$ is such that \[ C\geq C_g\left(\frac{\log\ltwo{\point{x}}}{\log\log\ltwo{\point{x}}}\right)^{1-(1-\alpha)^{m+1}}. \] Then from \eqref{eq:gapcgmg} we get \[ h(\point{x})\leq g(\point{x})+C\sigma(\ltwo{\point{x}})(\log\ltwo{\point{x}})^{(1-\alpha)^{m+1}}(\log\log\ltwo{\point{x}})^{1-(1-\alpha)^{m+1}}. \] Thus \eqref{eq:toverify} is verified. \textbf{Case II:} Suppose $\point{x}$ is such that \[ C\leq C_g\left(\frac{\log\ltwo{\point{x}}}{\log\log\ltwo{\point{x}}}\right)^{1-(1-\alpha)^{m+1}}. \] Combining with \eqref{eq:aux1} we get \[ C_g M \left(\frac{\log\ltwo{\point{x}}}{\log\log\ltwo{\point{x}}}\right)^{1-(1-\alpha)^{m+1}} \geq c_6\ltwo{\point{x}}\left(\frac{\log\ltwo{\point{x}}}{\log\log\ltwo{\point{x}}}\right)^{-(1-\alpha)^m}. \] So \[ \frac{C_g M}{c_6} \geq\ltwo{\point{x}}\left(\frac{\log\ltwo{\point{x}}}{\log\log\ltwo{\point{x}}}\right)^{-(1-(1-\alpha)^{m+1}+(1-\alpha)^m)} \geq\ltwo{\point{x}}\left(\frac{\log\ltwo{\point{x}}}{\log\log\ltwo{\point{x}}}\right)^{-2}. \] Therefore, $M\geq F(\ltwo{\point{x}})$, where $F(k):= c_6 C_g^{-1} k(\log k/\log\log k)^{-2}$. Taking $C\geq r F^{-1}(M)$, where $F^{-1}(k^\prime):=\sup\left\{ k: F(k)\leq k^\prime\right\}$, we get \[ h(\point{x}) \leq r\ltwo{\point{x}} \leq r F^{-1}(M) \leq C \leq C\ltwo{\point{x}}^\alpha\psi_{m+1}(\ltwo{\point{x}}) \leq g(\point{x})+C\ltwo{\point{x}}^\alpha\psi_{m+1}(\ltwo{\point{x}}). \] Thus \eqref{eq:toverify} is verified. This completes the inductive step and proves Theorem~\ref{thm:nrf2}.
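For the reader's convenience, we record the elementary one-variable minimization behind the choice of $q_0$ and the constant $c_5$; here $A$ and $B$ are placeholders for the coefficients of $q^{1-\alpha}$ and $q^{-\alpha}$ in \eqref{eq:hx2}.

```latex
% Minimize f(q) = A q^{1-\alpha} + B q^{-\alpha} over q > 0.
% Setting f'(q) = (1-\alpha)A q^{-\alpha} - \alpha B q^{-1-\alpha} = 0 gives
\[
q_\ast=\frac{\alpha B}{(1-\alpha)A},
\qquad
f(q_\ast)=A^{\alpha}B^{1-\alpha}
\left(\left(\frac{\alpha}{1-\alpha}\right)^{1-\alpha}
+\left(\frac{1-\alpha}{\alpha}\right)^{\alpha}\right).
\]
% Taking A = c_3\,\sigma(\ltwo{\point{x}})\log\log\ltwo{\point{x}} and
% B = C c_4\,\sigma(\ltwo{\point{x}})(\log\ltwo{\point{x}})^{(1-\alpha)^m}
% (\log\log\ltwo{\point{x}})^{1-(1-\alpha)^m} recovers q_\ast = q_0 and the
% factor c_5 C^{1-\alpha} in the display following \eqref{eq:hx2}.
```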
\section{Upper Bound of Transverse Increments} \resetconstant In this section, we will prove Theorem~\ref{thm:loglogupinc}, which is our main result on the upper bound of transverse increments. Fix $L_0>0$, $n_0>0$, $t_0>0$, to be assumed large enough whenever required. Consider $n$, $L$, $t$, satisfying $n\geq n_0$, $L\geq L_0$, $t\geq t_0$, $L\leq\Delta(n)$. If $t\geq 4\mu L(\sigma(\Delta^{-1}(L))(\log\Delta^{-1}(L))^\eta)^{-1}$, where $\mu$ is the expected edge-weight, then we are in a large-deviation regime, and the proof is similar to Case I of Theorem~\ref{thm:logupinc}. Therefore, let us assume \begin{equation} t\leq 4\mu L(\sigma(\Delta^{-1}(L))(\log\Delta^{-1}(L))^\eta)^{-1}. \label{eq:loglogupinc0} \end{equation} Define a segment \begin{multline*} \segment{I}^\ast:=\left\{\point{x}\in\ensuremath{\mathbb{R}}^2: \pi^1_{\theta_0,\theta_0^t}(\point{x})=n-\Delta^{-1}(L),\right.\\ \left. \pi^2_{\theta_0,\theta_0^t}(\point{x})\in\left[-t^{1/2}L\left(\log\Delta^{-1}(L)\right)^{\eta/2}, \left( 1-\frac{\Delta^{-1}(L)}{n}\right) L + t^{1/2}L\left(\log\Delta^{-1}(L)\right)^{\eta/2}\right] \vphantom{\point{x}\in\ensuremath{\mathbb{R}}^2}\right\}. \end{multline*} Let \begin{equation} M:=(1+\beta)/(2\alpha),\quad N_1:=\lfloor(\log L)^M\rfloor,\quad N_2:=\lfloor t^{1/2}(\log L)^{M+\eta/2}\rfloor. \label{eq:Mvalue} \end{equation} Divide the segment $\segment{I}(n,L)$ into $N_1$ segments of equal length: $\segment{I}_1,\dots,\segment{I}_{N_1}$, with endpoints $\point{a}_0,\dots,\point{a}_{N_1}$, as shown in Figure~\ref{Fig:loglogupinc}. Divide the segment $\segment{I}^\ast$ into $N_2$ segments of equal length: $\segment{I}^\ast_1,\dots,\segment{I}^\ast_{N_2}$, with endpoints $\point{b}_0,\dots,\point{b}_{N_2}$, as shown in Figure~\ref{Fig:loglogupinc}. By \eqref{A2}, $\log L$ is of the same order as $\log\Delta^{-1}(L)$, i.e., \begin{equation} \C\log L\leq\log\Delta^{-1}(L)\leq\C\log L.
\label{eq:loglogupinc0.5} \end{equation} Therefore, the lengths of the segments $\segment{I}_i$ and $\segment{I}^\ast_j$ are bounded as \begin{gather} \C L(\log L)^{-M} \leq \pi^2_{\theta_0,\theta_0^t}(\point{a}_{i-1}-\point{a}_i) \leq \C L(\log L)^{-M},\label{eq:loglogupinc01}\\ \C L(\log L)^{-M} \leq \pi^2_{\theta_0,\theta_0^t}(\point{b}_{j-1}-\point{b}_j) \leq \C L(\log L)^{-M}.\label{eq:loglogupinc02} \end{gather} \begin{figure}[H] \centering\includegraphics[width=0.7\linewidth]{Fig7LogLogInc.pdf} \caption{Setup of Theorem~\ref{thm:loglogupinc}: width of the portion of $\segment{I}^\ast$ between the dotted lines is $L(1-\Delta^{-1}(L)/n)$; width of the portion of $\segment{I}^\ast$ above the line joining $\point{0}$ and $\point{a}_0$ is $t^{1/2}L\left(\log\Delta^{-1}(L)\right)^{\eta/2}$; width of the portion of $\segment{I}^\ast$ below the line joining $\point{0}$ and $\point{a}_{N_1}$ is also $t^{1/2}L\left(\log\Delta^{-1}(L)\right)^{\eta/2}$.} \label{Fig:loglogupinc}\end{figure}% Define an event \[ \event{E}:\mbox{$\Gamma(\point{0},\point{a}_i)$ passes through $\segment{I}^\ast$ for all $0\leq i\leq N_1$}. \] We will first show that $\event{E}^c$ has small probability. If $\mathbb{T}\not\in\event{E}$, then for some $i$ the geodesic $\Gamma(\point{0},\point{a}_i)$ wanders more than $t^{1/2}L(\log\Delta^{-1}(L))^{\eta/2}$ in the $\pm{\theta_0^t}$ directions when it is at a distance $\Delta^{-1}(L)$ from $\point{a}_i$ in the $-{\theta_0}$ direction. Since ${\theta_0}$ is a direction of both type I and type II, by Theorem~\ref{thm:nrf2} $h$ satisfies GAP with exponent $\alpha$ and correction factor $\phi_\eta$ in a neighborhood of ${\theta_0}$.
Thus, applying Theorem~\ref{thm:endwandlog} with the variables \[ \tilde{{\theta_0}}=-{\theta_0},\quad \tilde{\eta}=\eta,\quad \tilde{n}=n,\quad \tilde{k}=\Delta^{-1}(L),\quad \tilde{l}=\pi^2_{\theta_0,\theta_0^t}(\point{a}_i),\quad \tilde{t}=t^{1/2}, \] and using \eqref{eq:loglogupinc0.5}, we get \begin{equation} \Prob\left(\mathcal{W}(\point{a}_i,\point{0},\Delta^{-1}(L),-{\theta_0})\geq t^{1/2}L(\log\Delta^{-1}(L))^{\eta/2}\right) \leq \C e^{-\C t(\log L)^\eta}, \label{eq:1016201} \end{equation} provided $\tilde{n}\geq\tilde{n_0}$, $\tilde{t}\geq\tilde{t_0}$, $\tilde{k}\geq\tilde{k_0}$, $\tilde{t}\Delta(\tilde{k})(\log\tilde{k})^{\tilde{\eta}/2}\leq\tilde{k}\tilde{\delta_1}$, $\tilde{l}\leq\tilde{n}\tilde{\delta_2}$. We will verify these conditions now. Taking $n_0$, $L_0$, $t_0$ large enough, we get $\tilde{n}\geq\tilde{n_0}$, $\tilde{t}\geq\tilde{t_0}$, $\tilde{k}\geq\tilde{k_0}$. Using \eqref{eq:loglogupinc0} and \eqref{A2} we get \begin{equation} \frac{\tilde{t}\Delta(\tilde{k})(\log\tilde{k})^{\eta/2}}{\tilde{k}}= \frac{t^{1/2}L(\log\Delta^{-1}(L))^{\eta/2}}{\Delta^{-1}(L)} \leq \C\frac{L^{1/2}}{(\Delta^{-1}(L))^{1/2}} \leq \C L^{-(1-\beta)/2} \leq \C L_0^{-(1-\beta)/2} \leq \tilde{\delta_1}. \label{eq:loglogupinc2} \end{equation} Using $L\leq\Delta(n)$ and \eqref{A2}, we get \[ \frac{|\tilde{l}|}{\tilde{n}}\leq\frac{L}{n}\leq\frac{\Delta(n)}{n}\leq\C n_0^{-(1-\beta)/2}\leq\tilde{\delta_2}. \] Thus all the conditions for \eqref{eq:1016201} to hold are satisfied. Taking a union bound over $i$ in \eqref{eq:1016201}, we get \begin{equation} \Prob\left(\event{E}^c\right)\leq \C e^{-\C t(\log L)^\eta}. \label{eq:eventloglogfinal} \end{equation} Thus, it is enough to consider $\mathbb{T}\in\event{E}$.
Define \begin{align*} D^1 &:=\max\left\{|T(\point{0},\point{a}_{i_1})-T(\point{0},\point{a}_{i_2})|:0\leq i_1 < i_2 \leq N_1\right\},\\ D^2 &:=\max\left\{|T(\point{x},\point{a}_{i_1})-T(\point{x},\point{a}_{i_2})|:0\leq i_1 < i_2 \leq N_1,\point{x}\in \segment{I}^\ast\right\},\\ D^3 &:=\max\left\{|T(\point{b}_j,\point{a}_{i_1})-T(\point{b}_j,\point{a}_{i_2})|:0\leq i_1 < i_2 \leq N_1, 0\leq j\leq N_2\right\},\\ D^4 &:=\max\left\{|T(\point{b},\point{a}_i)-T(\point{b}^\prime,\point{a}_i)|:\point{b},\point{b}^\prime\in \segment{I}^\ast_j, 0\leq j\leq N_2, 0\leq i\leq N_1\right\},\\ D_i &:=\max\left\{|T(\point{0},\point{x})-T(\point{0},\point{y})|:\point{x},\point{y}\in\segment{I}_i\right\},\mbox{ for } 0\leq i\leq N_1. \end{align*} Therefore \begin{equation} \dev{D}(n,L)\leq D^1+\max_{0\leq i\leq N_1} D_i,\quad\mbox{and}\quad D^2 \leq D^3+2 D^4. \label{eq:loglogupinc3} \end{equation} We claim that if $\mathbb{T}\in\event{E}$ then \begin{equation} D^1\leq D^2. \label{eq:loglogupinc4} \end{equation} To see this, take $0\leq i_1 < i_2 \leq N_1$. If $\mathbb{T}\in\event{E}$, then there exist points $\point{y}$ and $\point{z}$ in $\segment{I}^\ast$ such that $\Gamma(\point{0},\point{a}_{i_1})$ passes through $\point{y}$, and $\Gamma(\point{0},\point{a}_{i_2})$ passes through $\point{z}$. Therefore \[ T(\point{0},\point{a}_{i_1})-T(\point{0},\point{a}_{i_2}) \leq T(\point{0},\point{z})+T(\point{z},\point{a}_{i_1})-T(\point{0},\point{z})-T(\point{z},\point{a}_{i_2}) = T(\point{z},\point{a}_{i_1})-T(\point{z},\point{a}_{i_2}). \] Similarly, \[ T(\point{0},\point{a}_{i_2})-T(\point{0},\point{a}_{i_1})\leq T(\point{y},\point{a}_{i_2})-T(\point{y},\point{a}_{i_1}). \] Therefore \[ |T(\point{0},\point{a}_{i_1})-T(\point{0},\point{a}_{i_2})|\leq\max_{\point{x}\in \segment{I}^\ast}|T(\point{x},\point{a}_{i_1})-T(\point{x},\point{a}_{i_2})|. \] Thus, \eqref{eq:loglogupinc4} follows by taking the maximum over $i_1$ and $i_2$.
Combining \eqref{eq:loglogupinc3} and \eqref{eq:loglogupinc4} we get that on the event $\event{E}$, \[ \dev{D}(n,L)\leq D^3+2 D^4+\max_{0\leq i\leq N_1} D_i. \] Therefore \begin{multline} \Prob\left(\dev{D}(n,L)\geq t\sigma\left(\Delta^{-1}(L)\right)(\log L)^\eta\right)\leq \Prob\left( D^3\geq\frac{t}{4}\sigma\left(\Delta^{-1}(L)\right)(\log L)^\eta\right)\\ + \Prob\left( D^4\geq\frac{t}{4}\sigma\left(\Delta^{-1}(L)\right)(\log L)^\eta\right) + \Prob\left( \max_{0\leq i\leq N_1} D_i\geq\frac{t}{4}\sigma\left(\Delta^{-1}(L)\right)(\log L)^\eta\right) + \Prob\left(\event{E}^c\right).\label{eq:loglogdecomp} \end{multline} We have already shown in \eqref{eq:eventloglogfinal} that the last term in the right-hand side of \eqref{eq:loglogdecomp} is sufficiently small. Let us now consider the other three terms in the right-hand side of \eqref{eq:loglogdecomp}, beginning with the first. Take $\point{x}$ in $\segment{I}^\ast$, and $\point{u}$, $\point{v}$ in $\segment{I}(n,L)$. Then \begin{multline*} |T(\point{x},\point{u})-T(\point{x},\point{v})| \leq |T(\point{x},\point{u})-h(\point{u}-\point{x})| + |T(\point{x},\point{v})-h(\point{v}-\point{x})|\\ + |h(\point{u}-\point{x})-g(\point{u}-\point{x})| + |h(\point{v}-\point{x})-g(\point{v}-\point{x})| + |g(\point{u}-\point{x})-g(\point{v}-\point{x})|. \end{multline*} From the definition of $\segment{I}^\ast$ it follows that \begin{equation} \pi^1_{\theta_0,\theta_0^t}(\point{u}-\point{x})=\Delta^{-1}(L),\quad |\pi^2_{\theta_0,\theta_0^t}(\point{u}-\point{x})|\leq \C t^{1/2} L (\log\Delta^{-1}(L))^{\eta/2}. \label{eq:loglogupinc7} \end{equation} By the same calculation as in \eqref{eq:loglogupinc2} we get that the direction of $\point{u}-\point{x}$ can be made arbitrarily close to ${\theta_0}$ by choosing $L_0$ large enough.
So by Theorem~\ref{thm:nrf2} and equation~\eqref{eq:loglogupinc0.5} we get \begin{equation} |h(\point{u}-\point{x})-g(\point{u}-\point{x})|\leq\C\sigma(\Delta^{-1}(L))(\log L)^{\eta}. \label{eq:loglogupinc7.1} \end{equation} The same holds with $\point{u}$ replaced by $\point{v}$. Using \eqref{eq:loglogupinc7}, Lemma~\ref{lem:auxgeom2}, and \eqref{eq:loglogupinc0.5}, we get \[ |g(\point{u}-\point{x})-g(\point{v}-\point{x})| \leq \C t^{1/2}\frac{L^2(\log(\Delta^{-1}(L)))^\eta}{\Delta^{-1}(L)} \leq \C t^{1/2}\sigma(\Delta^{-1}(L))(\log L)^\eta. \] Using this together with \eqref{eq:loglogupinc7.1} and \eqref{A1}, we get \[ \Prob\left(|T(\point{x},\point{u})-T(\point{x},\point{v})|\geq\frac{t}{4}\sigma(\Delta^{-1}(L))(\log L)^\eta\right)\leq \C e^{-\C t(\log L)^\eta}. \] This is true for fixed $\point{x}$, $\point{u}$ and $\point{v}$. So for all $0\leq i_1 < i_2\leq N_1$ and $0\leq j\leq N_2$ we get \[ \Prob\left(|T(\point{b}_j,\point{a}_{i_1})-T(\point{b}_j,\point{a}_{i_2})|\geq\frac{t}{4}\sigma(\Delta^{-1}(L))(\log L)^\eta\right)\leq \C e^{-\C t(\log L)^\eta}. \] By \eqref{eq:Mvalue}, the number of triplets $(i_1,i_2,j)$ is less than $\C t^{1/2}(\log L)^{3M+\eta/2}$. So by a union bound we get \begin{equation} \Prob\left( D^3\geq\frac{t}{4}\sigma(\Delta^{-1}(L))(\log L)^\eta\right)\leq\C e^{-\C t(\log L)^{\eta}}. \label{eq:D3final} \end{equation} Now let us consider the second term in the right-hand side of \eqref{eq:loglogdecomp}. Fix $0\leq i\leq N_1$ and $1\leq j\leq N_2$.
Applying Corollary~\ref{cor:logupinc} with the variables \[ \tilde{{\theta_0}}:=-{\theta_0},\quad \tilde{n}:=\Delta^{-1}(L),\quad \tilde{L}:=|\pi^2_{\theta_0,\theta_0^t}(\point{b}_{j-1}-\point{b}_j)|,\quad \tilde{d}:=\pi^2_{\theta_0,\theta_0^t}(\point{b}_j-\point{a}_i), \] we get for all $\tilde{t}\geq\tilde{t_0}$ \begin{equation} \Prob\left(\max_{\point{b},\point{b}^\prime\in\segment{I}^\ast_j} |T(\point{b},\point{a}_i)-T(\point{b}^\prime,\point{a}_i)|\geq\Cl{D4}\tilde{L}\frac{|\tilde{d}|}{\tilde{n}} + \tilde{t}\sigma(\Delta^{-1}(\tilde{L}))\log\Delta^{-1}(\tilde{L})\right) \leq\C e^{-\C \tilde{t}\log\tilde{L}}, \label{eq:D4setup} \end{equation} provided the following conditions are satisfied: $|\tilde{d}|\leq\tilde{\delta_1}\tilde{n}$, $\tilde{L}\leq\tilde{\delta_2}\Delta(\tilde{n})$, $\tilde{n}\geq\tilde{n_0}$, $\tilde{L}\geq\tilde{L_0}$. Let us now verify these conditions. From the definition of $\segment{I}^\ast$ we get \[ |\tilde{d}|\leq\C t^{1/2}L(\log\Delta^{-1}(L))^{\eta/2}, \] so that by calculations similar to \eqref{eq:loglogupinc2} we get $|\tilde{d}|\leq\tilde{\delta_1}\tilde{n}$. Combining this with \eqref{eq:loglogupinc02} we get \begin{equation} \tilde{L}\frac{|\tilde{d}|}{\tilde{n}} \leq \C t^{1/2}\sigma(\Delta^{-1}(L))(\log L)^{-M}. \label{eq:loglogupinc11} \end{equation} Let $\tilde{t}$ be such that \begin{equation} \Cr{D4}\tilde{L}\frac{|\tilde{d}|}{\tilde{n}} + \tilde{t}\sigma(\Delta^{-1}(\tilde{L}))\log \Delta^{-1}(\tilde{L})=\frac{t}{4}\sigma(\Delta^{-1}(L))(\log L)^{\eta}. \label{eq:D4tsetup} \end{equation} Therefore, using \eqref{eq:loglogupinc11}, \eqref{A2}, the lower bound on $\tilde{L}$ from \eqref{eq:loglogupinc02}, the value of $M$ from \eqref{eq:Mvalue}, and \eqref{eq:loglogupinc0.5}, we get \[ \tilde{t} \geq\C t \frac{\sigma(\Delta^{-1}(L))(\log L)^\eta}{\sigma(\Delta^{-1}(\tilde{L}))\log\Delta^{-1}(\tilde{L})} \geq \C t (\log L)^{2\alpha M/(1+\beta)+\eta-1} \geq \Cl{D4t} t (\log L)^{\eta} \geq \Cr{D4t} t_0 (\log L_0)^{\eta}.
\] So we have $\tilde{t}\geq\tilde{t_0}$, assuming $t_0$ and $L_0$ are large enough. Therefore, all the conditions for \eqref{eq:D4setup} are satisfied. Combining \eqref{eq:D4setup} with \eqref{eq:D4tsetup}, we get \[ \Prob\left(\max_{\point{b},\point{b}^\prime\in\segment{I}^\ast_j} |T(\point{b},\point{a}_i)-T(\point{b}^\prime,\point{a}_i)|\geq\frac{t}{4}\sigma(\Delta^{-1}(L))(\log L)^{\eta}\right) \leq \C e^{-\C t(\log L)^{1+\eta}}. \] The number of choices of $i$ and $j$ is at most $\C t^{1/2} (\log L)^{2M+\eta/2}$. Hence using a union bound we get \begin{equation} \Prob\left( D^4\geq\frac{t}{4}\sigma(\Delta^{-1}(L))(\log L)^{\eta}\right) \leq \C e^{-\C t(\log L)^{1+\eta}}. \label{eq:D4final} \end{equation} Now we are going to consider the third term in the right-hand side of \eqref{eq:loglogdecomp}. Fix an $i$. To bound $D_i$ we will apply Corollary~\ref{cor:logupinc} with the following variables: \[ \tilde{n}:=n,\quad \tilde{L}:=|\pi^2_{\theta_0,\theta_0^t}(\point{a}_{i-1}-\point{a}_i)|,\quad \tilde{d}:=\pi^2_{\theta_0,\theta_0^t}(\point{a}_i),\quad \tilde{k}:=\Delta^{-1}(\tilde{L}). \] By Corollary~\ref{cor:logupinc} we have for all $\tilde{t}\geq\tilde{t_0}$ \begin{equation} \Prob\left( D_i\geq \Cl{Di}\tilde{L}\frac{|\tilde{d}|}{\tilde{n}} + \tilde{t}\sigma(\tilde{k})\log \tilde{k}\right)\leq\C e^{-\C \tilde{t}\log\tilde{k}}, \label{eq:Disetup} \end{equation} provided the following conditions are satisfied: $|\tilde{d}|\leq\tilde{\delta_1}\tilde{n}$, $\tilde{L}\leq\tilde{\delta_2}\Delta(\tilde{n})$, $\tilde{n}\geq\tilde{n_0}$, $\tilde{L}\geq\tilde{L_0}$. From the definition of $\segment{I}(n,L)$ we have $|\tilde{d}|\leq L$. Therefore, \[ \frac{|\tilde{d}|}{\tilde{n}}\leq\frac{L}{n}\leq\frac{\Delta(n)}{n}\leq\C n_0^{-(1-\beta)/2}\leq\tilde{\delta_1}. \] From \eqref{eq:loglogupinc01} we get $\tilde{L}\leq \C L(\log L)^{-M}$. Further using $L\leq\Delta(n)$, and assuming $L_0$ is large enough, we get \[ \tilde{L}\leq\C L(\log L)^{-M}\leq\tilde{\delta_2}L\leq\tilde{\delta_2}\Delta(\tilde{n}).
\] Also using the bound on $\tilde{L}$ from \eqref{eq:loglogupinc01} we get \begin{equation} \tilde{L}\frac{|\tilde{d}|}{\tilde{n}}\leq \C L(\log L)^{-M}\frac{L}{n}\leq \C \frac{L^2}{\Delta^{-1}(L)}(\log L)^{-M}\leq \C \sigma(\Delta^{-1}(L))(\log L)^{-M}. \label{eq:loglogupinc1015201} \end{equation} Let $\tilde{t}$ be such that \begin{equation} \Cr{Di}\tilde{L}\frac{|\tilde{d}|}{\tilde{n}} + \tilde{t}\sigma(\Delta^{-1}(\tilde{L})) \log\Delta^{-1}(\tilde{L}) = \frac{t}{4}\sigma(\Delta^{-1}(L))(\log L)^{\eta}. \label{eq:Ditsetup} \end{equation} Therefore, using \eqref{eq:loglogupinc1015201}, the bound on $\tilde{L}$ from \eqref{eq:loglogupinc01}, and the value of $M$ from \eqref{eq:Mvalue}, we get \[ \tilde{t}\geq \C t \frac{\sigma(\Delta^{-1}(L))(\log L)^\eta}{\sigma(\Delta^{-1}(\tilde{L}))\log \Delta^{-1}(\tilde{L})}\geq \C t (\log L)^{2\alpha M/(1+\beta)+\eta-1}\geq \Cl{loglog50} t (\log L)^{\eta}\geq \Cr{loglog50} t_0 (\log L_0)^{\eta}. \] So $\tilde{t}\geq\tilde{t_0}$ assuming $t_0$ and $L_0$ are large enough. Combining this with \eqref{eq:Disetup} and \eqref{eq:Ditsetup} we get \[ \Prob\left( D_i\geq\frac{t}{4}\sigma(\Delta^{-1}(L))(\log L)^{\eta}\right)\leq \C e^{-\C t(\log L)^{1+\eta}}. \] Using a union bound over the values of $i$ we get \begin{equation} \Prob\left(\max_i D_i\geq \frac{t}{4}\sigma(\Delta^{-1}(L))(\log L)^{\eta}\right) \leq \C e^{-\C t(\log L)^{1+\eta}}. \label{eq:DiFinal} \end{equation} Combining \eqref{eq:eventloglogfinal}, \eqref{eq:D3final}, \eqref{eq:D4final}, \eqref{eq:DiFinal}, and \eqref{eq:loglogdecomp} completes the proof. \section{Lower Bound of Variance of Transverse Increments}\label{sec:low} \resetconstant \resetconstant[epsilon] \resetconstant[nu] In this section, we will prove Theorem~\ref{thm:lowmain} on the lower bound of the fluctuations of transverse increments. Let ${\theta_0}$ be a direction of both type I and type II as in the statement of Theorem~\ref{thm:lowmain}. Without loss of generality we assume ${\theta_0}\in[0,\pi/4]$.
Fix $\nu\in(1/2,1)$ as in the statement of Theorem~\ref{thm:lowmain}. Also fix $\eta\in(1/2,1)$. Consider $n>0$ and $L>0$, satisfying $L\leq\Delta(n)$. Define $k$ such that \begin{equation} L=\Delta(k)(\log k)^{\eta}.\label{eq:lowoutline0} \end{equation} If we prescribe a lower limit on $k$, then that yields lower limits on both $n$ and $L$. Hence, we will state results which hold for large enough $k$, tacitly assuming $n$ and $L$ are large enough so that the two relations $L\leq\Delta(n)$ and \eqref{eq:lowoutline0} hold. We will use the following notation throughout this section. \begin{notn} Let $\point{a}:=n{\Unit{\thn}}$, $\point{b}:=n{\Unit{\thn}}+L{\Unit{\thnt}}$, \begin{align*} h^\ast & := \max\left\{ h(\point{x}-\point{a}):\point{x}\in\ensuremath{\mathbb{Z}}^2,\ltwo{\point{x}-\point{a}}\leq k\right\},\\ H & := \left\{\point{x}\in\ensuremath{\mathbb{Z}}^2:h(\point{x}-\point{a})\leq h^\ast\right\},\\ \tau & := \min\left\{ T(\point{0},\point{x}):\point{x}\in H\right\},\\ F & := \left\{\point{x}\in\ensuremath{\mathbb{Z}}^2:T(\point{0},\point{x})\leq\tau\right\},\\ \partial H & := \left\{\point{x}\in H:\point{x}\pm\Unit{i}\in H^c\mbox{ for some } i=1,2\right\}. \end{align*} In other words, $H$ is the smallest $h$-ball around $\point{a}$ which contains a Euclidean ball of radius $k$ around $\point{a}$; $\partial H$ is the vertex-boundary of $H$, i.e., the set of vertices in $H$ which are also adjacent to some vertex outside $H$; $\tau$ is the time required to reach $H$ from $\point{0}$; $F$ is the set of vertices that can be reached by time $\tau$ from $\point{0}$, i.e., the wet region $\mathcal{B}(\tau)$. Let $\edge{E}\subset\edge{E}(\ensuremath{\mathbb{Z}}^2)$ be the set of nearest-neighbor edges which have at least one endpoint in $F$. Let $\mathcal{F}$ be the sigma-field generated by $\tau$, $F$, and $\left\{ \tau_e:e\in\edge{E}\right\}$.
\label{notn:shorthand}\end{notn} \begin{figure}[H] \centering\includegraphics[width=0.4\linewidth]{Fig8Var1.pdf} \caption{Illustration for $\point{a}$, $\point{b}$, $H$, $F$, $\partial H$: direction of $\point{a}$ is ${\theta_0}$, direction of $\point{b}-\point{a}$ is ${\theta_0^t}$.} \end{figure} \begin{remark} Since $h$ is sublinear and $g$ is a norm, we have for $\ltwo{\point{u}}\geq\C$, \begin{equation} \C\ltwo{\point{u}}\leq h(\point{u})\leq\C\ltwo{\point{u}}. \label{eq:boxh1} \end{equation} Therefore \begin{equation} \C k\leq h^\ast\leq\C k. \label{eq:boxh2} \end{equation} For any $\point{y}\in\partial H$ there is an adjacent vertex that does not belong to $H$. So for all $\point{y}\in\partial H$ we have \begin{equation} h^\ast\geq h(\point{y}-\point{a})\geq h^\ast-\C. \label{eq:boxh3} \end{equation} Combining \eqref{eq:boxh1}-\eqref{eq:boxh3} we get for all $\point{y}\in\partial H$, \begin{equation} \C k\leq\ltwo{\point{y}-\point{a}}\leq\C k. \label{eq:boxh4} \end{equation} Therefore, $H$ can be inscribed in a square with side length of order $k$ centered at $\point{a}$. \label{remark:boxh} \end{remark} \begin{remark} Given any $\delta>0$, using \eqref{A2}, \eqref{eq:lowoutline0}, and the fact that $L\leq\Delta(n)$, we get $k\leq\delta n$ for large enough $k$. Therefore, using Remark~\ref{remark:boxh}, for large enough $k$ the point $\point{0}$ lies outside $H$; from now on we assume that $k$ is large enough so that this holds. In this case, the region $F$ touches the region $H$, i.e., $F\cap H\subset\partial H$. Since the edge-weights are continuous, $F$ touches $H$ at only one point, almost surely. \end{remark} Now we will state three propositions, and then we will complete the proof of Theorem~\ref{thm:lowmain} using these propositions. We will prove these propositions separately in later subsections.
\begin{prop} Under the assumptions of Theorem~\ref{thm:lowmain}, there exist constants ${\Cl{nuone}}>0$, $\Cl[epsilon]{eps1}>0$, such that for large enough $k$ we have \[ \Prob\left(\Prob\left( T(\point{0},\point{a})\leq h^\ast+\tau-\Cr{eps1}\sigma(k)\middle\vert\mathcal{F}\right)\leq\Cr{eps1}\right)\leq e^{-k^{\Cr{nuone}}}. \] \label{prop:lowfast}\end{prop} Recall that we have fixed $\nu\in(1/2,1)$ in the beginning of Section~\ref{sec:low}. \begin{prop} Under the assumptions of Theorem~\ref{thm:lowmain}, there exist constants $\Cl[nu]{nutwo}\in(1/2,\nu)$, $\Cl[epsilon]{eps2}>0$, such that for large enough $k$ we have \[ \Prob\left( T(\point{0},\point{a})\geq h^\ast+\tau+\Cr{eps2}\sigma(k)\right)\geq e^{-(\log k)^{\Cr{nutwo}}}. \] \label{prop:lowslow}\end{prop} \begin{prop} Under the assumptions of Theorem~\ref{thm:lowmain}, there exists a constant $\Cl{pcov}>0$ such that for large enough $k$ we have \[ 0\leq\Exp\Cov\left( T(\point{0},\point{a}),T(\point{0},\point{b})\middle\vert\mathcal{F}\right)\leq\Cr{pcov}. \] \label{prop:cov}\end{prop} Let us now complete the proof of Theorem~\ref{thm:lowmain} using these propositions. Let $\Cl[epsilon]{min12}:=\min\left\{\Cr{eps1},\Cr{eps2}\right\}$. Using $\Var(X)\geq\Exp\Var\left( X\middle\vert\mathcal{F}\right)$ and expanding the conditional variance of the difference, we get \begin{equation} \Var\left( T(\point{0},\point{a})-T(\point{0},\point{b})\right)\geq \Exp\Var\left( T(\point{0},\point{a})\middle\vert\mathcal{F}\right)+\Exp\Var\left( T(\point{0},\point{b})\middle\vert\mathcal{F}\right)-2\Exp\Cov\left( T(\point{0},\point{a}),T(\point{0},\point{b})\middle\vert\mathcal{F}\right). \label{eq:lowoutline1}\end{equation} For any random variable $X$ with finite variance, $\Var(X)=\Exp\left( X-X^\prime\right)^2/2$, where $X^\prime$ is an independent copy of $X$. Therefore, for any such $X$ and any $a>b$, \[ \Var(X)\geq\frac{1}{2}(a-b)^2\Prob(X\geq a)\Prob(X\leq b).
\] Thus \begin{equation} \Var\left( T(\point{0},\point{a})\middle\vert\mathcal{F}\right)\geq \C\sigma^2(k) \Prob\left( T(\point{0},\point{a})\geq h^\ast+\tau+\Cr{min12}\sigma(k)\middle\vert\mathcal{F}\right) \Prob\left( T(\point{0},\point{a})\leq h^\ast+\tau-\Cr{min12}\sigma(k)\middle\vert\mathcal{F}\right). \label{eq:lowoutline3}\end{equation} As a shorthand notation let us use \begin{align*} X & :=\Prob\left( T(\point{0},\point{a})\geq h^\ast+\tau+\Cr{min12}\sigma(k)\middle\vert\mathcal{F}\right) \Prob\left( T(\point{0},\point{a})\leq h^\ast+\tau-\Cr{min12}\sigma(k)\middle\vert\mathcal{F}\right),\\ Y & :=\Cr{min12}\Prob\left( T(\point{0},\point{a})\geq h^\ast+\tau+\Cr{min12}\sigma(k)\middle\vert\mathcal{F}\right). \end{align*} From Proposition~\ref{prop:lowslow} we get \begin{equation} \Exp Y \geq \Cr{min12} e^{-(\log k)^{{\Cr{nutwo}}}}. \label{eq:lowoutline4} \end{equation} Using Proposition~\ref{prop:lowfast}, and $0\leq X,Y\leq 1$, we get \[ \Exp(Y-X)^+\leq\Prob\left( Y\geq X\right)\leq e^{-k^{{\Cr{nuone}}}}. \] Therefore, using \eqref{eq:lowoutline4} and the inequality $\Exp X \geq \Exp Y - \Exp(Y-X)^+$, we get \[ \Exp X \geq \Cr{min12} e^{-(\log k)^{\Cr{nutwo}}} - e^{-k^{\Cr{nuone}}} \geq\C e^{-(\log k)^{\Cr{nutwo}}}. \] Combining this with \eqref{eq:lowoutline3}, we get \[ \Exp\Var\left( T(\point{0},\point{a})\middle\vert\mathcal{F}\right) \geq \C\sigma^2(k)e^{-(\log k)^{\Cr{nutwo}}}. \] Therefore, by \eqref{eq:lowoutline1}, Proposition~\ref{prop:cov}, and \eqref{A2}, we get \[ \Var\left( T(\point{0},\point{a})-T(\point{0},\point{b})\right)\geq\C\sigma^2(k)e^{-(\log k)^{\Cr{nutwo}}}. \] Therefore, using \eqref{eq:lowoutline0}, $\nu>{\Cr{nutwo}}$, and \eqref{A2}, we get \[ \Var\left( T(\point{0},\point{a})-T(\point{0},\point{b})\right)\geq\sigma^2\left(\Delta^{-1}(L)\right) e^{-(\log L)^\nu}. \] This completes the proof of Theorem~\ref{thm:lowmain} using Proposition~\ref{prop:lowfast}, Proposition~\ref{prop:lowslow} and Proposition~\ref{prop:cov}. 
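As a purely illustrative aside (not used in the argument), the two elementary facts invoked above, namely the symmetrization identity $\Var(X)=\Exp\left( X-X^\prime\right)^2/2$ and the resulting bound $\Var(X)\geq\frac{1}{2}(a-b)^2\Prob(X\geq a)\Prob(X\leq b)$, can be sanity-checked numerically for a standard Gaussian, whose variance is exactly $1$:

```python
import math
import random

def phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Symmetrization identity Var(X) = E(X - X')^2 / 2, checked by Monte Carlo.
random.seed(0)
n = 200_000
sym = sum((random.gauss(0, 1) - random.gauss(0, 1)) ** 2 for _ in range(n)) / (2 * n)
assert abs(sym - 1.0) < 0.02  # Var of N(0,1) is 1

# Lower bound Var(X) >= (1/2)(a - b)^2 P(X >= a) P(X <= b) for a > b, obtained
# by restricting E(X - X')^2 / 2 to the event {X >= a, X' <= b} and using
# independence of X and X'.  The bound never exceeds the true variance 1.
for a in (0.5, 1.0, 2.0):
    for b in (-2.0, -1.0, 0.0):
        lower = 0.5 * (a - b) ** 2 * (1.0 - phi(a)) * phi(b)
        assert 0.0 < lower <= 1.0
```

The bound is far from sharp, but positivity of the two tail probabilities is all that the argument requires.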
We are going to prove these propositions in the following subsections. \subsection{Proof of Proposition~\ref{prop:lowfast}} Let us recall the definitions of $\mathcal{F}$, $\tau$, and $\edge{E}$ from Notation~\ref{notn:shorthand}. Conditioned on $\mathcal{F}$, consider i.i.d.\ random variables $\left\{\tau^\prime_e:e\in\edge{E}\right\}$, each having the distribution of the original edge-weights. For a path $\gamma$ let \[ T^\prime(\gamma)= \sum_{\mbox{$\gamma$ contains $e$ and $e\in\edge{E}$}}\tau^\prime_e + \sum_{\mbox{$\gamma$ contains $e$ and $e\in\edge{E}^c$}}\tau_e. \] For any two points $\point{y},\point{z}\in\ensuremath{\mathbb{R}}^2$ let \[ T^\prime(\point{y},\point{z}):=\inf\left\{ T^\prime(\gamma):\gamma\mbox{ is a path from $\point{y}$ to $\point{z}$}\right\}. \] Therefore, the conditional distribution of all the passage times $\{T^\prime(\point{y},\point{z}):\point{y},\point{z}\in\ensuremath{\mathbb{R}}^2\}$ given $\mathcal{F}$ is the same as the unconditional distribution of the passage times $\{T(\point{y},\point{z}):\point{y},\point{z}\in\ensuremath{\mathbb{R}}^2\}$. For all $\point{y},\point{z}\in\ensuremath{\mathbb{R}}^2$ let $\Gamma^\prime(\point{y},\point{z})$ be the geodesic corresponding to $T^\prime(\point{y},\point{z})$. Let $\point{x}$ be the point where $F$ touches $H$. Let $\point{u}$ be the first point belonging to $F$ when the geodesic $\Gamma^\prime(\point{a},\point{x})$ is traced starting from $\point{a}$, see Figure~\ref{fig:9}. \begin{figure}[H] \centering\includegraphics[width=0.4\linewidth]{Fig9Var2.pdf} \caption{Setup for Proposition~\ref{prop:lowfast}: given a realization of the edge-weights on the whole lattice, we take another configuration ``inside'' $F$.
The geodesic $\Gamma^\prime(\point{a},\point{x})$ is constructed in the environment where we have the new edge-weight configuration ``inside'' $F$, and the original realization of edge-weights ``outside'' $F$.} \label{fig:9} \end{figure} Since $\point{u}\in F$, we have $T(\point{0},\point{u})\leq\tau$. For any $\point{v}\in\ensuremath{\mathbb{Z}}^2$ let \[ d(\point{v}):=\max\left\{ T(\point{v},\point{v}\pm\Unit{i}):i=1,2\right\}. \] So $T(\point{u},\point{a})\leq d(\point{u}) + T^\prime(\point{u},\point{a})$. Therefore, \begin{equation} T(\point{0},\point{a}) \leq T(\point{0},\point{u}) + T(\point{u},\point{a}) \leq \tau + d(\point{u}) + T^\prime(\point{u},\point{a}) \leq \tau + d(\point{u}) + T^\prime(\point{x},\point{a}). \label{eq:lowfast1} \end{equation} By \eqref{A3} and the fact that the distribution of $T^\prime(\point{x},\point{a})$ given $\mathcal{F}$ is the unconditional distribution of $T(\point{x},\point{a})$, we get that there exists $\Cl[epsilon]{fast5}>0$ such that \begin{equation} \Prob\left( T^\prime(\point{x},\point{a})\leq h(\point{x}-\point{a})-\Cr{fast5}\sigma\left(\ltwo{\point{x}-\point{a}}\right)\middle\vert\mathcal{F}\right)\geq\Cr{fast5}. \label{eq:lowfast22} \end{equation} Since $F$ touches $H$ at $\point{x}$, we have $\point{x}\in\partial H$. Hence, $h(\point{x}-\point{a})\leq h^\ast$ and $\ltwo{\point{x}-\point{a}}\geq k$. Therefore, by \eqref{A2}, \eqref{eq:lowfast1}, and \eqref{eq:lowfast22}, we get for some $\Cl[epsilon]{fast6}>0$ \begin{align*} \Cr{fast6} \leq & \Prob\left( T^\prime(\point{x},\point{a})\leq h^\ast-\Cr{fast6}\sigma(k)\middle\vert\mathcal{F}\right)\\ \leq & \Prob\left( T(\point{0},\point{a}) \leq h^\ast+\tau-\frac{\Cr{fast6}}{2}\sigma(k)\middle\vert\mathcal{F}\right) + \Prob\left( d(\point{u})>\frac{\Cr{fast6}}{2}\sigma(k)\middle\vert\mathcal{F}\right).
\end{align*} Therefore, taking $\Cl[epsilon]{fast7}:=\Cr{fast6}/2$, we get \begin{align*} & \Prob\left(\Prob\left( T(\point{0},\point{a})\leq h^\ast+\tau-\Cr{fast7}\sigma(k)\middle\vert\mathcal{F}\right)\leq\Cr{fast7}\right)\\ \leq &\Prob\left(\Prob\left( d(\point{u})\geq\Cr{fast7}\sigma(k)\middle\vert\mathcal{F}\right)\geq\Cr{fast7}\right)\\ \leq & \Cr{fast7}^{-1}\Prob\left( d(\point{u})\geq\Cr{fast7}\sigma(k)\right). \numberthis\label{eq:lowfast2} \end{align*} Since $\point{x}\in\partial H$, by Remark~\ref{remark:boxh} we get $\ltwo{\point{x}-\point{a}}\leq \C k$. Since $\point{u}$ lies on the geodesic $\Gamma^\prime(\point{a},\point{x})$, by Lemma~\ref{lem:boxwand} we get \[ \Prob\left(\ltwo{\point{u}-\point{a}}\geq\Cl{614}k\middle\vert\mathcal{F}\right)\leq e^{-\Cl{615}k}. \] Therefore \begin{equation} \Prob\left(\ltwo{\point{u}-\point{a}}\geq\Cr{614} k\right)\leq e^{-\Cr{615}k}. \label{eq:lowfast3} \end{equation} Since the edge-weights have exponential moments, for any $\point{v}$ and all $t>0$ we have \begin{equation} \Prob\left( d(\point{v})\geq t\right)\leq\C e^{-\C t}. \label{eq:lowfast4} \end{equation} Therefore, using \eqref{eq:lowfast3} and \eqref{eq:lowfast4}, we get \[ \Prob\left( d(\point{u})\geq\Cr{fast7}\sigma(k)\right) \leq\Cl{616} k^2 e^{-\Cl{617}\sigma(k)}. \] Combining this with \eqref{eq:lowfast2} and using \eqref{A2} we get \[ \Prob\left(\Prob\left( T(\point{0},\point{a})\leq h^\ast+\tau-\Cr{fast7}\sigma(k)\middle\vert\mathcal{F}\right)\leq\Cr{fast7}\right) \leq\Cr{fast7}^{-1}\Cr{616} k^2 e^{-\Cr{617}\sigma(k)} \leq e^{-k^{\Cr{nuone}}}. \] This completes the proof of Proposition~\ref{prop:lowfast}. \subsection{Proof of Proposition~\ref{prop:lowslow}} A brief outline of the proof is the following. Recall from Notation~\ref{notn:shorthand} that $\tau$ is the time taken to reach $H$ from the origin and $h^\ast$ is the expected time to reach $\partial H$.
By definition, $H$ contains a Euclidean ball of radius of order $k$ and, by Remark~\ref{remark:boxh}, we know $H$ is contained in a ball of radius of order $k$ around $\point{a}$. So to prove Proposition~\ref{prop:lowslow} it suffices to show that the time taken for $\Gamma(\point{0},\point{a})$ to reach $\partial H$ from $\point{a}$ can be slower than $h^\ast$ by a fraction of $\sigma(k)$ with non-negligible probability. Here $\sigma(k)$ is the order of fluctuation of passage times from $\point{a}$ to any point on $\partial H$, and by non-negligible probability we mean probability at least $e^{-(\log k)^\epsilon}$ for some $\epsilon\in(0,1)$. We will define $H^\ast$, a subset of $\partial H$, such that $\Gamma(\point{0},\point{a})$ does not intersect $\partial H\backslash H^\ast$ with high probability. Then we will show that passage times from $\point{a}$ to the points of $H^\ast$ can be uniformly slow with non-negligible probability. To do this, we will define $G^\ast$, a polygonal line which is roughly a discrete approximation of a sector of a $g$-ball around $\point{a}$. We will show that passage times from $\point{a}$ to $G^\ast$ can be uniformly slow with non-negligible probability, and that passage times between $G^\ast$ and $H^\ast$ are sufficiently concentrated around their expected values, so that passage times to points of $H^\ast$ will also be slow. Moreover, $G^\ast$ will be constructed in such a way that $\Gamma(\point{0},\point{a})$, when traced from $\point{a}$, will intersect first $G^\ast$ and then $H^\ast$ with high probability. By Remark~\ref{remark:boxh} we get that the maximum distance of a point in $\partial H$ (or $H$) in direction $-{\theta_0}$ from $\point{a}$ is at most of order $k$; more precisely, \begin{equation} \max_{\point{x}\in\partial H}\pi^1_{\theta_0,\theta_0^t}(\point{a}-\point{x})\leq\Cl{slow1}k.
\label{eq:maxH} \end{equation} Using Corollary~\ref{cor:endwandlogspl} and \eqref{A2}, for all $\point{x}$ on $\Gamma(\point{0},\point{a})$ with $\pi^1_{\theta_0,\theta_0^t}(\point{a}-\point{x})\leq\Cr{slow1}k$, we have \[ |\pi^2_{\theta_0,\theta_0^t}(\point{a}-\point{x})|\leq\Cl{slow2}\Delta(k)(\log k)^{1/2}, \] with probability at least $1-e^{-\Cl{slow3}\log k}$. Thus, defining the event \begin{align*} \event{E}: & \mbox{ $\Gamma(\point{0},\point{a})$ does not wander more than $\Cr{slow2}\Delta(k)(\log k)^{1/2}$ in $\pm{\theta_0^t}$ directions}\\ & \mbox{ before exiting $H$ starting from $\point{a}$}, \end{align*} we get \begin{equation} \Prob\left(\event{E}\right)\geq 1-e^{-\Cr{slow3}\log k}. \label{eq:event0} \end{equation} Let $H^\ast$ be the portion of $\partial H$, facing towards the origin i.e., towards direction $-{\theta_0}$ from $\point{a}$, having width $2\Cr{slow2}\Delta(k)(\log k)^{1/2}$ in ${\theta_0^t}$ direction, i.e., \begin{equation} H^\ast:=\left\{\point{x}\in\partial H: |\pi^2_{\theta_0,\theta_0^t}(\point{a}-\point{x})|\leq\Cr{slow2}\Delta(k)(\log k)^{1/2},\; \pi^1_{\theta_0,\theta_0^t}(\point{a}-\point{x})\geq 0\right\}. \label{eq:defHstar} \end{equation} \begin{lemma} Under the assumptions of Theorem~\ref{thm:lowmain} we have for large enough $k$ \begin{equation} \min_{\point{x}\in\partial H}\pi^1_{\theta_0,\theta_0^t}(\point{a}-\point{x})\geq\C k, \label{eq:minH} \end{equation} and \begin{equation} \max_{\point{x}\in H^\ast}\pi^1_{\theta_0,\theta_0^t}(\point{a}-\point{x}) - \min_{\point{x}\in H^\ast}\pi^1_{\theta_0,\theta_0^t}(\point{a}-\point{x}) \leq \Cl{21}\Delta(k)(\log k)^{1/2}. \label{eq:rangeh} \end{equation} \end{lemma} \noindent\begin{proof}[\emph{\textbf{Proof:}}] Recall definition of $g_{\point{a}}$ from Notation~\ref{Notation:CHAP}. Consider $\point{x}\in H^\ast$. 
Using that $g$ is a norm and \eqref{eq:defHstar}, we get \begin{equation} g_{\point{a}}(\point{a}-\point{x}) \geq g(\point{a}-\point{x})-\C\pi^2_{\theta_0,\theta_0^t}(\point{a}-\point{x}) \geq g(\point{a}-\point{x})-\C\Delta(k)(\log k)^{1/2}. \label{eq:width3} \end{equation} Using Proposition~\ref{prop:nrflog}, \eqref{eq:boxh3}, \eqref{eq:boxh4}, and \eqref{A2}, we get \begin{equation} g(\point{a}-\point{x})\geq h^\ast-\C\sigma(k)\log k. \label{eq:width4} \end{equation} Combining \eqref{eq:width3} and \eqref{eq:width4} we get \begin{equation} g_{\point{a}}(\point{a}-\point{x})\geq h^\ast-\C\Delta(k)(\log k)^{1/2}. \label{eq:width7} \end{equation} Using \eqref{eq:boxh3} we get \begin{equation} g_{\point{a}}(\point{a}-\point{x})\leq g(\point{a}-\point{x})\leq h(\point{a}-\point{x})+\Cl{633}\leq h^\ast+\Cr{633}, \label{eq:width8} \end{equation} where the extra constant comes from Remark~\ref{notn:nonlattice}. Combining \eqref{eq:width7} and \eqref{eq:width8} we get \[ \max_{\point{x}\in H^\ast}g_{\point{a}}(\point{a}-\point{x}) - \min_{\point{x}\in H^\ast}g_{\point{a}}(\point{a}-\point{x}) \leq \C\Delta(k)(\log k)^{1/2}. \] This establishes \eqref{eq:rangeh} because $g_{\point{a}}(\point{a}-\point{x})$ is proportional to $\pi^1_{\theta_0,\theta_0^t}(\point{a}-\point{x})$. Then \eqref{eq:minH} follows from \eqref{eq:maxH}. \end{proof} \paragraph{\textbf{Construction of polygonal line $G^\ast$:}} Fix $\Cr{nutwo}$ and $\Cl[nu]{nu3}$ such that \begin{equation} \frac{1}{2}<\Cr{nu3}<\Cr{nutwo}<\nu. \label{eq:defeta} \end{equation} By Remark~\ref{Remark:linear}, we can choose $\delta>0$ such that the limit shape boundary is differentiable in the sector $[{\theta_0}-\delta,{\theta_0}+\delta]$ and for ${\theta}$ in this sector we have \begin{equation} |{\theta}^t-{\theta_0^t}|\leq\Cl{tangent}|{\theta}-{\theta_0}|. \label{eq:linear}\end{equation} Let \begin{equation} \ell:=\frac{\Delta(k)(\log k)^{1/2}}{(\log k)^{\Cr{nu3}}}.
\label{eq:Gwidth} \end{equation} The polygonal line $G^\ast$ with vertices $\left\{ \point{b}_i:-N_2\leq i\leq N_1\right\}$ is constructed as follows. The point $\point{b}_0$ is defined by the conditions $\pi^2_{\theta_0,\theta_0^t}(\point{b}_0)=0$ and \begin{equation} \pi^1_{\theta_0,\theta_0^t}(\point{b}_0)=\max_{\point{y}\in H^\ast}\pi^1_{\theta_0,\theta_0^t}(\point{y}). \label{eq:adjacent} \end{equation} Thus, $\point{b}_0$ is a point on the line joining $\point{0}$ and $\point{a}$. The points $\left\{\point{b}_i:0<i\leq N_1\right\}$ will be above the line joining $\point{0}$ and $\point{a}$. The points $\left\{\point{b}_i:-N_2\leq i<0\right\}$ will be below the line joining $\point{0}$ and $\point{a}$. We construct $\left\{\point{b}_i:0<i\leq N_1\right\}$ inductively as follows. Suppose we have defined $(\point{b}_0,\dots,\point{b}_j)$ for some $j\geq 0$. Let ${\theta}_i$ be the direction of $\point{a}-\point{b}_i$ for $0\leq i\leq j$. Further assume that $(\point{b}_0,\dots,\point{b}_j)$ satisfies the conditions: \begin{enumerate}[(i)] \item $|{\theta}_i-{\theta_0}|\leq\delta$ for all $0\leq i\leq j$, \item $\pi^2_{\theta_0,\theta_0^t}(\point{b}_j)<\Cr{slow2}\Delta(k)(\log k)^{1/2}$, \item the direction of $\point{b}_{i+1}-\point{b}_i$ is ${\theta}_i^t$ for all $0\leq i<j$. \end{enumerate} Due to the convexity of $\partial\mathcal{B}$, we have $\pi^1_{\theta_0,\theta_0^t}(\point{b}_{i+1}-\point{b}_i)\geq 0$ and $\pi^2_{\theta_0,\theta_0^t}(\point{b}_{i+1}-\point{b}_i)\geq 0$ for all $0\leq i<j$. We construct $\point{b}_{j+1}$ as follows. Let $\point{b}_j^\prime$ be the point such that the direction of $\point{b}_j^\prime-\point{b}_j$ is ${\theta}_j^t$ and $\ltwo{\point{b}_j^\prime-\point{b}_j}=\ell$. We define $\point{b}_{j+1}$ to be $\point{b}_j^\prime$, if $\pi^2_{\theta_0,\theta_0^t}(\point{b}_j^\prime)\leq\Cr{slow2}\Delta(k)(\log k)^{1/2}$.
Otherwise, we take $\point{b}_{j+1}$ to be the point $\point{b}_j^{\prime\prime}$ on the line joining $\point{b}_j$ and $\point{b}_j^\prime$ which satisfies $\pi^2_{\theta_0,\theta_0^t}(\point{b}_j^{\prime\prime})=\Cr{slow2}\Delta(k)(\log k)^{1/2}$, and we end the construction. To establish that this construction is well-defined, it suffices to show that $|{\theta}_{j+1}-{\theta_0}|\leq\delta$, where ${\theta}_{j+1}$ is the direction of $\point{a}-\point{b}_{j+1}$. Assuming $\delta$ is small enough and using \eqref{eq:linear}, we get for all $0\leq i\leq j$ \begin{equation} \pi^1_{\theta_0,\theta_0^t}(\point{b}_{i+1}-\point{b}_{i})=\ltwo{\point{b}_{i+1}-\point{b}_i}\frac{|\sin({\theta}_i^t-{\theta_0^t})|}{|\sin({\theta_0^t}-{\theta_0})|}\leq\C\delta\ltwo{\point{b}_{i+1}-\point{b}_i}, \label{eq:wd1} \end{equation} and \begin{equation} \pi^2_{\theta_0,\theta_0^t}(\point{b}_{i+1}-\point{b}_i)=\ltwo{\point{b}_{i+1}-\point{b}_i}\frac{|\sin({\theta}_i^t-{\theta_0})|}{|\sin({\theta_0^t}-{\theta_0})|}\geq\C\ltwo{\point{b}_{i+1}-\point{b}_i}. \label{eq:wd2} \end{equation} By construction we have \begin{equation} \pi^2_{\theta_0,\theta_0^t}(\point{b}_{j+1}-\point{a})=\pi^2_{\theta_0,\theta_0^t}(\point{b}_{j+1}-\point{b}_0)=\pi^2_{\theta_0,\theta_0^t}(\point{b}_{j+1})\leq\Cr{slow2}\Delta(k)(\log k)^{1/2}. \label{eq:wd3} \end{equation} Taking the sum over $0\leq i\leq j$ in \eqref{eq:wd1} and \eqref{eq:wd2}, and using \eqref{eq:wd3}, we get \begin{equation} \pi^1_{\theta_0,\theta_0^t}(\point{b}_{j+1}-\point{b}_0)\leq\C\delta\pi^2_{\theta_0,\theta_0^t}(\point{b}_{j+1}-\point{b}_0) \leq\C\delta\Delta(k)(\log k)^{1/2}. \label{eq:wd5} \end{equation} Therefore, using \eqref{eq:minH} and \eqref{eq:adjacent}, we get \begin{equation} \pi^1_{\theta_0,\theta_0^t}(\point{a}-\point{b}_{j+1})\geq\C k. \label{eq:wd4} \end{equation} Combining this with \eqref{eq:wd3} and using \eqref{A2} we get $|{\theta}_{j+1}-{\theta_0}|\leq\delta$ for large enough $k$. This shows that the construction is well-defined.
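The inductive construction just described can be sketched numerically for a toy instance. Below the limit shape is taken to be the Euclidean ball, so the tangent direction at angle $\theta$ is $\theta+\pi/2$; the scales \texttt{k}, \texttt{W} (standing in for the half-width $\Cr{slow2}\Delta(k)(\log k)^{1/2}$) and \texttt{ell} (standing in for $\ell$) are illustrative numbers only, and only the upper half $\{\point{b}_i:0\leq i\leq N_1\}$ is built:

```python
import math

# Toy sketch of the inductive construction of G* (upper half only), with the
# Euclidean ball as limit shape: the tangent at direction theta is theta + pi/2.
# All numerical scales below are illustrative, not the quantities of the proof.
k = 10_000          # distance scale: |a - b_0| = k
W = 400.0           # half-width, standing in for Delta(k)(log k)^{1/2}
ell = 40.0          # segment length, standing in for the quantity l

a = (0.0, 0.0)      # take theta_0 = 0, so pi^1 = x-coordinate, pi^2 = y-coordinate
b = [(-float(k), 0.0)]                  # b_0 lies on the line joining 0 and a
while b[-1][1] < W:
    bx, by = b[-1]
    theta = math.atan2(-by, -bx)        # direction theta_j of a - b_j
    t = theta + math.pi / 2.0           # tangent direction theta_j^t
    nx, ny = bx + ell * math.cos(t), by + ell * math.sin(t)
    if ny > W:                          # clip the final segment at width W
        s = (W - by) / (ny - by)
        nx, ny = bx + s * (nx - bx), W
    b.append((nx, ny))

N1 = len(b) - 1
# Each step gains about ell in the theta_0^t direction, so N1 is close to
# W/ell, and every vertex stays at distance of order k from a.
assert abs(N1 - W / ell) <= 2
assert all(0.9 * k <= math.hypot(x, y) <= 1.1 * k for x, y in b)
```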
In a similar way we construct $\left\{\point{b}_i:-N_2\leq i<0\right\}$. Denoting the direction of $\point{a}-\point{b}_i$ by ${\theta}_i$, we require that the direction of $\point{b}_{i-1}-\point{b}_i$ is $-{\theta}_i^t$ for all $0\geq i>-N_2$, and that $\pi^2_{\theta_0,\theta_0^t}(\point{b}_i)$ decreases from $0$ to $-\Cr{slow2}\Delta(k)(\log k)^{1/2}$ as $i$ runs from $0$ to $-N_2$. Equations~\eqref{eq:wd3} and \eqref{eq:wd4} also yield for all $\point{x}$ in the part of $G^\ast$ joining $\point{b}_0$ and $\point{b}_{N_1}$ \begin{equation} \C k \leq \ltwo{\point{a}-\point{x}} \leq \C k, \label{eq:Glength} \end{equation} and the same holds for all $\point{x}$ in the part joining $\point{b}_0$ and $\point{b}_{-N_2}$. By \eqref{eq:wd2}, \eqref{eq:wd3}, and \eqref{eq:Gwidth} we get $N_1\leq\C(\log k)^{\Cr{nu3}}$, and the same is true for $N_2$. Hence the total number of sides of $G^\ast$ is bounded as \begin{equation} N:=N_1+N_2\leq\C(\log k)^{\Cr{nu3}}. \label{eq:Gtotal} \end{equation} By \eqref{eq:wd5}, the width of the part of $G^\ast$ joining $\point{b}_0$ and $\point{b}_{N_1}$ is at most $\C\Delta(k)(\log k)^{1/2}$ in the ${\theta_0}$ direction, and the same holds for the part joining $\point{b}_0$ and $\point{b}_{-N_2}$. Therefore, for all $\point{x},\point{y}\in G^\ast$, \begin{equation} |\pi^1_{\theta_0,\theta_0^t}(\point{x}-\point{y})|\leq\C\Delta(k)(\log k)^{1/2}. \label{eq:Gwidth2} \end{equation} We will now state two propositions and complete the proof of Proposition~\ref{prop:lowslow} using them. Then we will proceed to prove these propositions. \begin{prop} Under the assumptions of Theorem~\ref{thm:lowmain}, there exists $\Cl[epsilon]{line1}>0$ such that for all large enough $k$ we have \[ \Prob\left( T(\point{x},\point{a})\geq h(\point{x}-\point{a})+\Cr{line1}\sigma(k)\mbox{ for all } \point{x}\in G^\ast\right)\geq e^{-\Cl{slow4} (\log k)^{\Cr{nu3}}}.
\] \label{prop:lowslowline}\end{prop} \begin{prop} Under the assumptions of Theorem~\ref{thm:lowmain}, we have \[ \Prob\left( |T(\point{x},\point{y})-h(\point{x}-\point{y})|\geq\frac{\Cr{line1}}{2}\sigma(k) \mbox{ for some } \point{x}\in H^\ast, \point{y}\in G^\ast\right)\leq e^{-k^{\Cl{slow5}}}, \] where $\Cr{line1}$ is the constant from Proposition~\ref{prop:lowslowline}. \label{prop:varlowslow3}\end{prop} Let us now complete the proof of Proposition~\ref{prop:lowslow} using these propositions. Define the event \[ \event{E}_1: \mbox{The geodesic $\Gamma(\point{0},\point{a})$, when traced from $\point{a}$ to $\point{0}$, intersects first $G^\ast$ and then $H^\ast$}. \] Since $\event{E}_1\subset\event{E}$, we have, using \eqref{eq:event0}, \begin{equation} \Prob\left(\event{E}_1^c\right)\leq e^{-\Cr{slow3}\log k}. \label{eq:slowevent1} \end{equation} Define the events \[ \event{E}_2: \mbox{$T(\point{x},\point{a})\geq h(\point{x}-\point{a})+\Cr{line1}\sigma(k)$ for all $\point{x}\in G^\ast$}, \] and \[ \event{E}_3: \;|T(\point{x},\point{y})-h(\point{x}-\point{y})|\leq\frac{\Cr{line1}}{2}\sigma(k)\mbox{ for all }\point{x}\in H^\ast,\; \point{y}\in G^\ast. \] Using \eqref{eq:slowevent1}, Propositions~\ref{prop:lowslowline} and \ref{prop:varlowslow3}, and \eqref{eq:defeta}, we get \[ \Prob\left(\event{E}_1\cap\event{E}_2\cap\event{E}_3\right)\geq e^{-(\log k)^{\Cr{nutwo}}}. \] So let us suppose $\mathbb{T}\in\event{E}_1\cap\event{E}_2\cap\event{E}_3$. \begin{figure}[H] \centering\includegraphics[width=0.4\linewidth]{Fig10Var3.pdf} \caption{Setup of Proposition~\ref{prop:lowslow}: we show that (i) the geodesic $\Gamma(\point{0},\point{a})$ when traced starting from $\point{a}$ intersects first $G^\ast$ and then $H^\ast$ with high probability; (ii) passage times from $\point{a}$ to points in $G^\ast$ can be large with non-negligible probability; (iii) passage times between points in $G^\ast$ and points in $H^\ast$ are not too small with high probability.
So the passage time from $\point{a}$ to $H^\ast$ can be large with non-negligible probability.} \label{Fig:10}\end{figure} Since $\mathbb{T}\in\event{E}_1$, there exist points $\point{w}^\ast\in G^\ast$ and $\point{v}^\ast\in H^\ast$, both situated on the geodesic $\Gamma(\point{0},\point{a})$, such that \begin{equation} T(\point{a},\point{v}^\ast)=T(\point{a},\point{w}^\ast)+T(\point{w}^\ast,\point{v}^\ast), \label{eq:slowevent11} \end{equation} see Figure~\ref{Fig:10}. Since $\point{v}^\ast\in H^\ast\subset\partial H$, we have \begin{equation} T(\point{0},\point{v}^\ast)\geq\tau. \label{eq:slowevent12} \end{equation} Combining \eqref{eq:slowevent11} and \eqref{eq:slowevent12} we get \begin{equation} T(\point{0},\point{a}) = T(\point{0},\point{v}^\ast) + T(\point{v}^\ast,\point{a}) \geq \tau + T(\point{a},\point{w}^\ast) + T(\point{w}^\ast,\point{v}^\ast). \label{eq:slowevent13} \end{equation} From $\mathbb{T}\in\event{E}_2\cap\event{E}_3$ we get \begin{multline} T(\point{a},\point{w}^\ast)+T(\point{w}^\ast,\point{v}^\ast) \geq h(\point{w}^\ast-\point{a})+\Cr{line1}\sigma(k)+h(\point{v}^\ast-\point{w}^\ast)-\frac{\Cr{line1}}{2}\sigma(k) \\ \geq h(\point{v}^\ast-\point{a})+\frac{\Cr{line1}}{2}\sigma(k)-\Cl{623}, \label{eq:slowevent14} \end{multline} where the extra constant at the end comes from Remark~\ref{remark:nonlattice1}. Since $\point{v}^\ast$ is on $\partial H$, from \eqref{eq:boxh3} we have \begin{equation} h(\point{v}^\ast-\point{a})\geq h^\ast-\Cl{624}. \label{eq:slowevent14.5} \end{equation} For large enough $k$ we have \begin{equation} \frac{\Cr{line1}}{2}\sigma(k)-\Cr{623}-\Cr{624} \geq\frac{\Cr{line1}}{4}\sigma(k). \label{eq:slowevent15} \end{equation} Therefore, combining \eqref{eq:slowevent13}-\eqref{eq:slowevent15} we get \[ T(\point{0},\point{a})\geq\tau+h^\ast+\frac{\Cr{line1}}{4}\sigma(k). \] Taking $\Cr{eps2}:=\Cr{line1}/4$ completes the proof of Proposition~\ref{prop:lowslow}.
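In the probability estimate above, $\Prob(\event{E}_2)\geq e^{-\Cr{slow4}(\log k)^{\Cr{nu3}}}$ dominates the failure probabilities $\Prob(\event{E}_1^c)\leq e^{-\Cr{slow3}\log k}$ and $\Prob(\event{E}_3^c)\leq e^{-k^{\Cr{slow5}}}$ because $\Cr{nu3}<\Cr{nutwo}<1$. A toy numerical check of this domination, with illustrative exponents $0.6<0.8$ and illustrative constants $2$ and $1/2$ (none of these numbers come from the proof):

```python
import math

# Toy check that e^{-C (log k)^{nu3}} - e^{-c log k} - e^{-k^{0.1}}
# still exceeds e^{-(log k)^{nu2}} for large k when nu3 < nu2 < 1.
# Exponents and constants here are illustrative only.
k = 1e20
L = math.log(k)
lhs = math.exp(-2.0 * L ** 0.6) - math.exp(-0.5 * L) - math.exp(-k ** 0.1)
rhs = math.exp(-L ** 0.8)
assert lhs > rhs > 0.0
```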
Let us now prove Propositions~\ref{prop:lowslowline} and \ref{prop:varlowslow3}. \subsubsection{\textbf{Proof of Proposition~\ref{prop:lowslowline}}} Let $G_i$ be the segment of $G^\ast$ joining $\point{b}_i$ and $\point{b}_{i+1}$ for all $-N_2\leq i<N_1$. For all $i$, let $\point{a}_i$ be the point on $G_i$ which has the maximum expected passage time from $\point{a}$. By \eqref{A3}, there exists $\Cl[epsilon]{line2}>0$ such that for each $i$, \[ \Prob\left( T(\point{a},\point{a}_i)\geq h(\point{a}_i-\point{a}) + \Cr{line2}\sigma(\ltwo{\point{a}_i-\point{a}})\right)\geq\Cr{line2}. \] Therefore, using \eqref{eq:Glength} and \eqref{A2} we get, for some $\Cl[epsilon]{line3}>0$ and for all $i$, \begin{equation} \Prob\left( T(\point{a},\point{a}_i)\geq h(\point{a}_i-\point{a}) + \Cr{line3}\sigma(k)\right)\geq\Cr{line3}. \label{eq:p2l3} \end{equation} Define for each $i$ \[ D_i:=\max\left\{|T(\point{a},\point{x})-T(\point{a},\point{y})|:\point{x},\point{y}\in G_i\right\}. \] Recall $\Cr{nu3}>1/2$ from \eqref{eq:defeta}. Choose $\Cl[nu]{nu4}>0$ such that \[ \Cr{nu4}<\frac{2\alpha}{(1+\beta)}(\Cr{nu3}-1/2). \] To bound $D_i$ for $i<0$ we can use Theorem~\ref{thm:loglogupinc} with the variables \[ \tilde{\eta}:=\Cr{nu4},\;\tilde{{\theta_0}}:=-{\theta}_i,\; \tilde{L}:=\ell,\; \tilde{n}:=\ltwo{\point{b}_i-\point{a}}. \] For $i\geq 0$ we have a minor technical issue because of the direction of the tangents, but the same bounds hold. Using \eqref{eq:Gwidth}, \eqref{eq:Glength}, and \eqref{A2}, we get $\tilde{L}\geq\tilde{L_0}$, $\tilde{n}\geq\tilde{n_0}$, $\tilde{L}\leq\Delta(\tilde{n})$, as required. By Theorem~\ref{thm:loglogupinc} and \eqref{A2}, we get for large enough $t$ \begin{equation} \Prob\left( D_i\geq t(\log k)^{\Cr{nu4}}\sigma(\Delta^{-1}(\ell))\right) \leq \C e^{-\C t (\log k)^{\Cr{nu4}}}.
\label{eq:p2lclaim3} \end{equation} Using \eqref{eq:Gwidth} and \eqref{A2}, we get \begin{align*} \frac{\sigma(k)}{\sigma(\Delta^{-1}(\ell))(\log k)^{\Cr{nu4}}} \geq \C (\log k)^{-\Cr{nu4}+(\Cr{nu3}-1/2)(2\alpha)/(1+\beta)}. \end{align*} This can be made arbitrarily large by choosing $k$ large. Therefore, in \eqref{eq:p2lclaim3} we can choose \[ t=\frac{\Cr{line3}}{2}\frac{\sigma(k)}{\sigma(\Delta^{-1}(\ell))(\log k)^{\Cr{nu4}}}, \] and we get for large enough $k$ \[ \Prob\left( D_i\geq\frac{\Cr{line3}}{2} \sigma(k)\right)\leq\frac{\Cr{line3}}{2}. \] Combining this with \eqref{eq:p2l3} we get for all $i$ \[ \Prob\left( T(\point{a},\point{x})\geq h(\point{x}-\point{a})+\frac{\Cr{line3}}{2}\sigma(k)\mbox{ for all }\point{x}\in G_i\right)\geq\frac{\Cr{line3}}{2}. \] Since for each $i$, $\inf_{\point{x}\in G_i}\left\{ T(\point{a},\point{x})-h(\point{x}-\point{a})\right\}$ is an increasing function of the edge-weight configuration, by the FKG inequality we get \[ \Prob\left( T(\point{a},\point{x})\geq h(\point{x}-\point{a})+\frac{\Cr{line3}}{2}\sigma(k)\mbox{ for all }\point{x}\in G^\ast\right)\geq\left(\frac{\Cr{line3}}{2}\right)^N, \] where recall from \eqref{eq:Gtotal} that $N$ is the total number of segments in $G^\ast$. Using \eqref{eq:Gtotal} we get \[ \Prob\left( T(\point{a},\point{x})\geq h(\point{x}-\point{a})+\frac{\Cr{line3}}{2}\sigma(k)\mbox{ for all }\point{x}\in G^\ast\right)\geq e^{-\C (\log k)^{\Cr{nu3}}}. \] This completes the proof of Proposition~\ref{prop:lowslowline}. \subsubsection{\textbf{Proof of Proposition~\ref{prop:varlowslow3}}} By \eqref{eq:rangeh}, the width of $H^\ast$ in the ${\theta_0}$ direction is at most $\Cr{21}\Delta(k)(\log k)^{1/2}$. By \eqref{eq:Gwidth2}, the width of $G^\ast$ in the ${\theta_0}$ direction is at most $\C \Delta(k)(\log k)^{1/2}$. By construction of $G^\ast$ we have \[ \min_{\point{x}\in G^\ast}\pi^1_{\theta_0,\theta_0^t}(\point{x})=\max_{\point{y}\in H^\ast}\pi^1_{\theta_0,\theta_0^t}(\point{y}).
\] Both $G^\ast$ and $H^\ast$ are centered around the line joining $\point{0}$ and $\point{a}$ and extend up to distance $\Cr{slow2}\Delta(k)(\log k)^{1/2}$ in the $\pm{\theta_0^t}$ directions. Therefore, for all $\point{x}\in G^\ast$ and $\point{y}\in H^\ast$, \[ \ltwo{\point{x}-\point{y}}\leq\C\Delta(k)(\log k)^{1/2}. \] Using \eqref{A2}, the number of pairs of such points is at most $\C k^2$. Hence, using \eqref{A1}, \eqref{A2}, and a union bound, we get \begin{align*} & \Prob\left(|T(\point{x},\point{y})-h(\point{x}-\point{y})|\geq\frac{\Cr{line1}}{2}\sigma(k)\mbox{ for some } \point{x}\in G^\ast,\point{y}\in H^\ast\right)\\ \leq & \C k^2 \exp\left(-\C\frac{\sigma(k)\log k}{\sigma(\Delta(k)(\log k)^{1/2})}\right)\\ \leq & e^{-k^{\C}}. \end{align*} This completes the proof of Proposition~\ref{prop:varlowslow3}. \subsection{Proof of Proposition~\ref{prop:cov}} By the FKG inequality, $\Cov\left( T(\point{0},\point{a}),T(\point{0},\point{b})\middle\vert\mathcal{F}\right)\geq 0$. Let us call a path $\gamma$ from $\point{0}$ to $\point{a}$ \emph{well behaved} if for all points $\point{y}$ in $\gamma$ outside $F$ we have $|\pi^2_{\theta_0,\theta_0^t}(\point{y}-\point{a})|<\Delta(k)(\log k)^\eta/2$. Similarly define well behaved paths from $\point{0}$ to $\point{b}$. Since $|\pi^2_{\theta_0,\theta_0^t}(\point{a}-\point{b})|=L=\Delta(k)(\log k)^\eta$, well behaved paths from $\point{0}$ to $\point{a}$ and well behaved paths from $\point{0}$ to $\point{b}$ do not intersect outside $F$. Let $T^\prime(\point{0},\point{a})$ and $T^\prime(\point{0},\point{b})$ be the minimum passage times restricted to well behaved paths from $\point{0}$ to $\point{a}$ and from $\point{0}$ to $\point{b}$, respectively. Since well behaved paths from $\point{0}$ to $\point{a}$ and $\point{0}$ to $\point{b}$ do not intersect outside $F$, $T^\prime(\point{0},\point{a})$ and $T^\prime(\point{0},\point{b})$ are independent given $\mathcal{F}$.
Let $\Gamma(\point{a},F)$ be the geodesic from $\point{a}$ to $F$ i.e., the path with minimum passage time from $\point{a}$ to a point in $F$. Let $T(\point{a},F):=T(\Gamma(\point{a},F))$. Since $F$ touches $H$, we have \[ T(\point{a},F)\leq\max_{\point{y}\in H}T(\point{a},\point{y}). \] By Remark~\ref{remark:boxh}, $H$ is contained in a square of side length $\Cl{644} k$. Therefore \begin{equation} \Prob\left( T(\point{a},F)\geq \Cl{645} k\right)\leq e^{-\C k}. \label{eq:cov1} \end{equation} Let \[ T^t(\point{0},\point{a}):=T^\prime(\point{0},\point{a})\wedge(\tau+\Cr{645}k),\quad\mbox{and}\quad T^t(\point{0},\point{b}):=T^\prime(\point{0},\point{b})\wedge(\tau+\Cr{645}k). \] Because $T^\prime(\point{0},\point{a})$ and $T^\prime(\point{0},\point{b})$ are independent given $\mathcal{F}$, $T^t(\point{0},\point{a})$ and $T^t(\point{0},\point{b})$ are also independent given $\mathcal{F}$. Therefore \begin{align*} & \Exp\Cov\left( T(\point{0},\point{a}),T(\point{0},\point{b})|\mathcal{F}\right)\\ = & \Exp\Cov\left( T(\point{0},\point{a})-T^t(\point{0},\point{a}),T(\point{0},\point{b})-\tau\middle\vert\mathcal{F}\right) \\& +\Exp\Cov\left( T^t(\point{0},\point{a})-\tau,T(\point{0},\point{b})-T^t(\point{0},\point{b})\middle\vert\mathcal{F}\right)\\ \leq & \Exp\left[\left(\Exp\left(\Lp T(\point{0},\point{a})-T^t(\point{0},\point{a})\right)^2\middle\vert\mathcal{F}\right)\Rp^{1/2} \left(\Exp\left(\Lp T(\point{0},\point{b})-\tau\right)^2\middle\vert\mathcal{F}\right)\Rp^{1/2}\right]\\ & +\Exp\left[\left(\Exp\left(\Lp T(\point{0},\point{b})-T^t(\point{0},\point{b})\right)^2\middle\vert\mathcal{F}\right)\Rp^{1/2} \left(\Exp\left(\Lp T^t(\point{0},\point{a})-\tau\right)^2\middle\vert\mathcal{F}\right)\Rp^{1/2}\right]\\ \leq & \left(\Exp\left( T(\point{0},\point{a})-T^t(\point{0},\point{a})\right)^2\right)^{1/2} \left(\Exp\left( T(\point{0},\point{b})-\tau\right)^2\right)^{1/2}\\ &+\left(\Exp\left( T(\point{0},\point{b})-T^t(\point{0},\point{b})\right)^2\right)^{1/2} 
\left(\Exp\left( T^t(\point{0},\point{a})-\tau\right)^2\right)^{1/2}. \numberthis\label{eq:cov2} \end{align*} Let $\point{x}$ be the point where $F$ touches $H$. Since $T(\point{0},\point{x})\leq\tau$, we get $T(\point{0},\point{a})-\tau\leq T(\point{a},\point{x})$. Again using that $H$ is contained in a box of size $\Cr{644}k$ we get \begin{equation} \Exp\left( T(\point{0},\point{a})-\tau\right)^2\leq\Exp T(\point{a},\point{x})^2\leq\C k^2. \label{eq:cov21} \end{equation} Similarly we have \begin{equation} \Exp\left( T(\point{0},\point{a})-\tau\right)^4\leq\C k^4. \label{eq:cov22} \end{equation} Combining \eqref{eq:cov21} and \eqref{eq:cov22}, we get \begin{align*} & \left(\Exp\left( T(\point{0},\point{a})-T^t(\point{0},\point{a})\right)^2\right)^{1/2}\\ \leq & \left(\Exp\left( T(\point{0},\point{a})-T^t(\point{0},\point{a})\right)^4\right)^{1/4}\Prob\left( T(\point{0},\point{a})\neq T^t(\point{0},\point{a})\right)^{1/4}\\ \leq & \left[\left(\Exp\left( T(\point{0},\point{a})-\tau\right)^4\right)^{1/4}+ \left(\Exp\left( T^t(\point{0},\point{a})-\tau\right)^4\right)^{1/4}\right]\Prob\left( T(\point{0},\point{a})\neq T^t(\point{0},\point{a})\right)^{1/4}\\ \leq & \C k \Prob\left( T(\point{0},\point{a})\neq T^t(\point{0},\point{a})\right)^{1/4}. \numberthis\label{eq:cov3} \end{align*} For large enough $k$ we have \[ \Prob\left(\min\left\{ T(\point{a},\point{y}):\ltwo{\point{y}-\point{a}}=\Cl{646}k\right\}\leq\Cr{645} k\right)\leq e^{-\Cl{647} k}. \] If $T(\point{a},F)\leq\Cr{645}k$, and $\min\left\{ T(\point{a},\point{y}):\ltwo{\point{y}-\point{a}}=\Cr{646} k\right\}>\Cr{645}k$, then $\text{Diam}(\Gamma(\point{a},F))\leq 2\Cr{646}k$. Therefore \begin{equation} \Prob\left( T(\point{a},F)\leq\Cr{645}k, \;\text{Diam}(\Gamma(\point{a},F))\geq 2\Cr{646}k\right)\leq e^{-\Cr{647}k}. \label{eq:cov4}\end{equation} Recall Notation~\ref{notn:wandering}. 
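The first inequality in \eqref{eq:cov3} is Cauchy--Schwarz applied to $Z^2=Z^2\mathbf{1}_{Z\neq 0}$ with $Z=T(\point{0},\point{a})-T^t(\point{0},\point{a})$, i.e., $\Exp Z^2\leq\left(\Exp Z^4\right)^{1/2}\Prob(Z\neq 0)^{1/2}$. A quick numerical check of this inequality on random finitely supported distributions (illustration only):

```python
import math
import random

# Check E Z^2 <= (E Z^4)^{1/2} * P(Z != 0)^{1/2}, i.e. Cauchy-Schwarz applied
# to Z^2 * 1_{Z != 0}, exactly on random distributions with an atom at 0.
random.seed(2)
for _ in range(100):
    values = [0.0] + [random.uniform(-5.0, 5.0) for _ in range(4)]
    weights = [random.random() + 0.01 for _ in values]
    total = sum(weights)
    probs = [w / total for w in weights]
    ez2 = sum(p * v ** 2 for p, v in zip(probs, values))
    ez4 = sum(p * v ** 4 for p, v in zip(probs, values))
    p_nonzero = sum(p for p, v in zip(probs, values) if v != 0.0)
    assert ez2 <= math.sqrt(ez4) * math.sqrt(p_nonzero) + 1e-9
```

Equality holds when $Z^2$ is constant on the event $\{Z\neq 0\}$.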
Using Corollary~\ref{cor:endwandlogspl} and $\eta>1/2$, we get \begin{equation} \Prob\left(\max_{k^\prime\leq 2\Cr{646}k}\mathcal{W}\left(\point{a},\point{0},k^\prime,-{\theta_0}\right)\geq\frac{1}{2}\Delta(k)(\log k)^\eta\right)\leq e^{-\C (\log k)^{2\eta}}. \label{eq:cov5} \end{equation} If $T(\point{a},F)\leq \Cr{645} k$, $\text{Diam}(\Gamma(\point{a},F))\leq 2 \Cr{646}k$, and $\max_{k^\prime\leq 2\Cr{646}k} \mathcal{W}\left(\point{a},\point{0},k^\prime,-{\theta_0}\right)\leq\Delta(k)(\log k)^\eta/2$, then $T(\point{0},\point{a})=T^t(\point{0},\point{a})$. Therefore, combining \eqref{eq:cov1}, \eqref{eq:cov4}, and \eqref{eq:cov5}, we get \begin{equation} \Prob(T(\point{0},\point{a})\neq T^t(\point{0},\point{a}))\leq e^{-\C(\log k)^{2\eta}}. \label{eq:cov6} \end{equation} Statements similar to \eqref{eq:cov21}, \eqref{eq:cov22}, \eqref{eq:cov3}, and \eqref{eq:cov6} hold with $\point{a}$ replaced by $\point{b}$. Combining these via \eqref{eq:cov2} and using $\eta>1/2$, we get $\Cov(T(\point{0},\point{a}),T(\point{0},\point{b})|\mathcal{F})\leq\Cl{pcov2}$ for any $\Cr{pcov2}>0$, provided $k$ is large enough. This completes the proof of Proposition~\ref{prop:cov}. \section{Long-range Correlations} \resetconstant In this section our objective is to prove Theorem~\ref{thm:longrange}. Fix $J_0>0$, $n_0>0$ to be assumed large enough whenever required. Consider $n\geq n_0$, $J\in[\mathfrak{q}^{1/2}J_0,n^\delta]$. Let $m:=f^{-1}(Jf(n)/J_0)$, so that \begin{equation} J_0\frac{\Delta(m)(\log m)^{1/2}}{m}=J\frac{\Delta(n)(\log n)^{1/2}}{n}. \label{eq:long1} \end{equation} Using $J\geq\mathfrak{q}^{1/2}J_0$ and \eqref{A2} we get $m\leq n$. Using $\delta<(1-\beta)/2$, $J\leq n^\delta$, \eqref{A2}, and assuming $n_0$ is large enough, we get $\log m\geq\Cl{long1}\log n$. Let $\point{a}:=n{\Unit{\thn}}+J\Delta(n)(\log n)^{1/2}{\Unit{\thnt}}$, $\point{b}:=n{\Unit{\thn}}-J\Delta(n)(\log n)^{1/2}{\Unit{\thnt}}$.
Using $\delta<(1-\beta)/2$ and $J\leq n^\delta$ we get $\ltwo{\point{a}}$ and $\ltwo{\point{b}}$ are at most $\Cl{long2}n$. Let \[ H:=\left\{\point{x}\in\ensuremath{\mathbb{R}}^2:\pi^1_{\theta_0,\theta_0^t}(\point{x})\geq m\right\}. \] Let $\mathcal{F}$ be the sigma-field generated by all the edge-weights $\tau_e$ such that both endpoints of the edge $e$ are in $H$. \begin{claim} Assuming $J_0$ and $n_0$ are large enough, we have \begin{equation} |\Cov\left(\Exp\left( T(\point{0},\point{a})\middle\vert\mathcal{F}\right),\Exp\left( T(\point{0},\point{b})\middle\vert\mathcal{F}\right)\Rp|\leq\C. \label{eq:longrangeclaim1}\end{equation} \label{longrangeclaim1} \end{claim} \begin{figure}[H] \centering\includegraphics[width=0.3\linewidth,angle=90]{Fig11Long.pdf} \caption{Setup of Claim~\ref{longrangeclaim1}: distance between $\point{u}$ and $\point{v}$ is $2J\Delta(m)(\log m)^{1/2}$; $H$ is the region to the right of $\partial H$; $R_{\point{a}}$ is the subset of $H$ above the line in direction ${\theta_0}$, $R_{\point{b}}$ is the region below the line; with high probability $\Gamma(\point{0},\point{a})$ stays in $R_{\point{a}}$ while it is in $H$, $\Gamma(\point{0},\point{b})$ stays in $R_{\point{b}}$ while it is in $H$.} \label{Fig:11}\end{figure}% \paragraph{\textbf{Proof of Claim~\ref{longrangeclaim1}:}} Define two regions \[ R_{\point{a}}:=\left\{\point{x}\in\ensuremath{\mathbb{R}}^2:\pi^1_{\theta_0,\theta_0^t}(\point{x})\geq m,\;\pi^2_{\theta_0,\theta_0^t}(\point{x})>0\right\}, \] and \[ R_{\point{b}}:=\left\{\point{x}\in\ensuremath{\mathbb{R}}^2:\pi^1_{\theta_0,\theta_0^t}(\point{x})\geq m,\;\pi^2_{\theta_0,\theta_0^t}(\point{x})<0\right\}. \] Define the event \[ \event{E}_1: \mbox{ $\Gamma(\point{0},\point{a})$ stays inside $R_{\point{a}}$ while it is in the region $H$,} \] i.e., for all $\point{u}\in\Gamma(\point{0},\point{a})$ with $\pi^1_{\theta_0,\theta_0^t}(\point{u})\geq m$ we have $\pi^2_{\theta_0,\theta_0^t}(\point{u})>0$. 
Similarly, define the event \[ \event{E}_2: \mbox{ $\Gamma(\point{0},\point{b})$ stays in the region $R_{\point{b}}$ while it is in the region $H$.} \] Thus, if $\mathbb{T}\not\in\event{E}_1$, then either for some $k\in[m,n]$ we have $\mathcal{W}(\point{0},\point{a},k,{\theta_0})\geq J_0(k/m)\Delta(m)(\log m)^{1/2}$, or for some $k\geq n$ we have $\mathcal{W}(\point{0},\point{a},k,{\theta_0})\geq J\Delta(n)(\log n)^{1/2}$. By \eqref{A2}, $J_0(k/m)\Delta(m)(\log m)^{1/2}\geq J_0\mathfrak{q}^{-1/2}\Delta(k)(\log k)^{1/2}$ for $k\geq m$. Therefore, for each $k\in[m,n]$, using Proposition~\ref{prop:nrflog} and Theorem~\ref{thm:endwandlog}, we get \begin{equation} \Prob\left(\mathcal{W}(\point{0},\point{a},k,{\theta_0})\geq J_0\frac{k}{m}\Delta(m)(\log m)^{1/2}\right) \leq\C e^{-\C J_0^2\log k}. \label{eq:long2} \end{equation} Using Corollary~\ref{cor:endwandlogspl} we get \begin{equation} \Prob\left(\mathcal{W}(\point{0},\point{a},k,{\theta_0})\geq J_0\frac{k}{m}\Delta(m)(\log m)^{1/2} \mbox{ for some $k\geq n$}\right)\leq \C e^{- \C J_0^2 \log n}. \label{eq:long3} \end{equation} Using a union bound over $k\in[m,n]$ in \eqref{eq:long2}, $\log m\geq\Cr{long1}\log n$, and \eqref{eq:long3}, we get \begin{equation} \Prob\left(\event{E}_1^c\right)\leq \C e^{-\C J_0^2\log n}. \label{eq:long4} \end{equation} The same holds for $\event{E}_2$. Let $\hat{T}(\point{0},\point{a})$ be the minimum passage time among all paths from $\point{0}$ to $\point{a}$ that stay in $R_{\point{a}}$ when in $H$. Similarly define $\hat{T}(\point{0},\point{b})$. Then $\Exp(\hat{T}(\point{0},\point{a})|\mathcal{F})$ and $\Exp(\hat{T}(\point{0},\point{b})|\mathcal{F})$ are independent because $R_{\point{a}}$ and $R_{\point{b}}$ are disjoint. If $\mathbb{T}\in\event{E}_1$, then $T(\point{0},\point{a})=\hat{T}(\point{0},\point{a})$, and if $\mathbb{T}\in\event{E}_2$, then $T(\point{0},\point{b})=\hat{T}(\point{0},\point{b})$.
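For completeness, the first inequality in the covariance bound that follows rests on a bilinear decomposition: since $\Exp(\hat{T}(\point{0},\point{a})|\mathcal{F})$ and $\Exp(\hat{T}(\point{0},\point{b})|\mathcal{F})$ are independent, their covariance vanishes, so
\begin{align*}
&\Cov\left(\Exp\left( T(\point{0},\point{a})\middle\vert\mathcal{F}\right),\Exp\left( T(\point{0},\point{b})\middle\vert\mathcal{F}\right)\right)\\
=\;&\Cov\left(\Exp\left( T(\point{0},\point{a})-\hat{T}(\point{0},\point{a})\middle\vert\mathcal{F}\right),\Exp\left( T(\point{0},\point{b})\middle\vert\mathcal{F}\right)\right)\\
&+\Cov\left(\Exp\left(\hat{T}(\point{0},\point{a})\middle\vert\mathcal{F}\right),\Exp\left( T(\point{0},\point{b})-\hat{T}(\point{0},\point{b})\middle\vert\mathcal{F}\right)\right),
\end{align*}
and each term is then bounded using the Cauchy--Schwarz inequality together with Jensen's inequality $\Exp\left(\Exp\left( X\middle\vert\mathcal{F}\right)^2\right)\leq\Exp X^2$.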
Using $\ltwo{\point{a}}\leq\Cr{long2}n$, $\ltwo{\point{b}}\leq\Cr{long2}n$, \eqref{eq:long4}, and the same bound for $\event{E}_2$, we get \begin{align*} & \Cov\left(\Exp\left( T(\point{0},\point{a})\middle\vert\mathcal{F}\right),\Exp\left( T(\point{0},\point{b})\middle\vert\mathcal{F}\right)\Rp\\ \leq & \left(\Exp\left( T(\point{0},\point{a})-\hat{T}(\point{0},\point{a})\right)^2\right)^{1/2}\left(\Exp T(\point{0},\point{b})^2\right)^{1/2}\\ & + \left(\Exp\left( T(\point{0},\point{b})-\hat{T}(\point{0},\point{b})\right)^2\right)^{1/2}\left(\Exp T(\point{0},\point{a})^2\right)^{1/2}\\ \leq & \Prob(\event{E}_1^c)^{1/4}\left(\Exp\left( T(\point{0},\point{a})-\hat{T}(\point{0},\point{a})\right)^4\right)^{1/4} \left(\Exp T(\point{0},\point{b})^2\right)^{1/2}\\ & + \Prob(\event{E}_2^c)^{1/4}\left(\Exp\left( T(\point{0},\point{b})-\hat{T}(\point{0},\point{b})\right)^4\right)^{1/4} \left(\Exp T(\point{0},\point{a})^2\right)^{1/2}\\ & \leq \C n^2 e^{-\C J_0^2 \log n} \leq \C. \end{align*} This completes the proof of Claim~\ref{longrangeclaim1}. For the expected conditional covariance we have \begin{equation} \Exp\Cov\left( T(\point{0},\point{a}),T(\point{0},\point{b})|\mathcal{F}\right) \leq \left(\Exp\Var\left( T(\point{0},\point{a})|\mathcal{F}\right)\Rp^{1/2} \left(\Exp\Var\left( T(\point{0},\point{b})|\mathcal{F}\right)\Rp^{1/2}. \label{eq:long5} \end{equation} By symmetry it is enough to find an upper bound on $\Exp\Var(T(\point{0},\point{a})|\mathcal{F})$. Consider an independent edge-weight configuration on the edges which have at least one endpoint not in the half-space $H$. Let $T^\prime(\point{0},\point{a})$ be the passage time from $\point{0}$ to $\point{a}$ on the new configuration. Let $\Gamma^\prime(\point{0},\point{a})$ be the corresponding geodesic. Then $T(\point{0},\point{a})$ and $T^\prime(\point{0},\point{a})$ are independent given $\mathcal{F}$.
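The variance identity used in \eqref{eq:long6} is standard: if $X$ and $X^\prime$ are conditionally i.i.d.\ given $\mathcal{F}$, then
\[
\Exp\left(\tfrac{1}{2}\left( X-X^\prime\right)^2\middle\vert\mathcal{F}\right)
=\Exp\left( X^2\middle\vert\mathcal{F}\right)-\Exp\left( X\middle\vert\mathcal{F}\right)\Exp\left( X^\prime\middle\vert\mathcal{F}\right)
=\Var\left( X\middle\vert\mathcal{F}\right),
\]
applied here with $X=T(\point{0},\point{a})$ and $X^\prime=T^\prime(\point{0},\point{a})$.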
Therefore \begin{equation} \Exp\Var(T(\point{0},\point{a})|\mathcal{F}) =\Exp\left(\Exp\left(\frac{1}{2}\left( T(\point{0},\point{a})-T^\prime(\point{0},\point{a})\right)^2|\mathcal{F}\right)\Rp =\frac{1}{2}\Exp\left( T(\point{0},\point{a})-T^\prime(\point{0},\point{a})\right)^2. \label{eq:long6} \end{equation} \begin{claim} Assuming $J_0$ and $n_0$ are large enough, we have \begin{equation} \Exp(T(\point{0},\point{a})-T^\prime(\point{0},\point{a}))^2\leq\Cl{longrangeclaim2}\sigma^2(m)\log n. \label{eq:longrangeclaim2a} \end{equation} \label{longrangeclaim2} \end{claim} \begin{figure}[H] \centering\includegraphics[width=0.5\linewidth]{Fig12Long.pdf} \caption{Setup of Claim~\ref{longrangeclaim2}: the segment $\segment{I}$ is a part of $\partial H$, see Figure~\ref{Fig:11} for the location of $\partial H$; the geodesic $\Gamma^\prime(\point{0},\point{a})$ is constructed by taking a new configuration on the left side of $\partial H$. With high probability, both geodesics $\Gamma(\point{0},\point{a})$ and $\Gamma^\prime(\point{0},\point{a})$ intersect $\segment{I}$ when they intersect $\partial H$.} \end{figure} \paragraph{\textbf{Proof of Claim~\ref{longrangeclaim2}:}} Consider the line segment \[ \segment{I}:=\left\{\point{x}:\pi^1_{\theta_0,\theta_0^t}(\point{x})=m,\; 0\leq\pi^2_{\theta_0,\theta_0^t}(\point{x})\leq 2J_0\Delta(m)(\log m)^{1/2}\right\}. \] Define the event \[ \event{E}_3: \mbox{ $\Gamma(\point{0},\point{a})$ and $\Gamma^\prime(\point{0},\point{a})$ pass through $\segment{I}$.} \] If $\mathbb{T}\not\in\event{E}_3$, then at least one of $\Gamma(\point{0},\point{a})$ and $\Gamma^\prime(\point{0},\point{a})$ wanders more than $J_0\Delta(m)(\log m)^{1/2}$ in the $\pm{\theta_0^t}$ directions when it is at distance $m$ from $\point{0}$ in the ${\theta_0}$ direction. So, using Theorem~\ref{thm:midptwand} and $\log m\geq\Cr{long1}\log n$, we get \[ \Prob(\event{E}_3^c)\leq \C e^{-\C J_0^2\log n}.
\] Therefore, using $\ltwo{\point{a}}\leq\Cr{long2}n$ and assuming $J_0$ is large enough, we get \begin{equation} \Exp\left(\Lp T(\point{0},\point{a})-T^\prime(\point{0},\point{a})\right)^2\mathbf{1}(\event{E}_3^c)\right)\leq\C. \label{eq:longclaim21} \end{equation} If we have $\mathbb{T}\in\event{E}_3$, $\Gamma(\point{0},\point{a})$ passes through $\point{x}\in\segment{I}$, and $\Gamma^\prime(\point{0},\point{a})$ passes through $\point{y}\in\segment{I}$, then \[ T(\point{0},\point{x})-T^\prime(\point{0},\point{x})\leq T(\point{0},\point{a})-T^\prime(\point{0},\point{a})\leq T(\point{0},\point{y})-T^\prime(\point{0},\point{y}). \] Therefore \begin{equation} \Exp\left(\Lp T(\point{0},\point{a})-T^\prime(\point{0},\point{a})\right)^2\mathbf{1}(\event{E}_3)\right) \leq\Exp\max_{\point{z}\in\segment{I}}\left( T(\point{0},\point{z})-T^\prime(\point{0},\point{z})\right)^2. \label{eq:longclaim22} \end{equation} For every $\point{z}\in\segment{I}$, $T(\point{0},\point{z})$ and $T^\prime(\point{0},\point{z})$ have the same mean. Using \eqref{eq:long1}, $J\leq n^\delta$, $\delta\leq(1-\beta)/2$, and \eqref{A2}, we get $\ltwo{\point{z}}\leq\C m$ for all $\point{z}\in\segment{I}$. Therefore, by \eqref{A1} and \eqref{A2}, for all $t>0$ \[ \Prob\left(\max_{\point{z}\in\segment{I}} |T(\point{0},\point{z})-T^\prime(\point{0},\point{z})|\geq t\sigma(m)\right)\leq \C m e^{-\C t}. \] Therefore, \[ \Exp\max_{\point{z}\in\segment{I}}\left( T(\point{0},\point{z})-T^\prime(\point{0},\point{z})\right)^2\leq\C\sigma^2(m)\log m. \] Combining this with \eqref{eq:longclaim21} and \eqref{eq:longclaim22}, and using $m\leq n$, proves Claim~\ref{longrangeclaim2}. Combining \eqref{eq:long6} and \eqref{eq:longrangeclaim2a} we get \[ \Exp\Var\left( T(\point{0},\point{a})|\mathcal{F}\right)\leq\C\sigma^2(m)\log n. \] By symmetry, the same statement holds with $\point{a}$ replaced by $\point{b}$.
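The two estimates are assembled through the law of total covariance,
\[
\Cov\left( T(\point{0},\point{a}),T(\point{0},\point{b})\right)
=\Exp\Cov\left( T(\point{0},\point{a}),T(\point{0},\point{b})\middle\vert\mathcal{F}\right)
+\Cov\left(\Exp\left( T(\point{0},\point{a})\middle\vert\mathcal{F}\right),\Exp\left( T(\point{0},\point{b})\middle\vert\mathcal{F}\right)\right),
\]
whose second term is bounded in Claim~\ref{longrangeclaim1}, while the first term is handled via \eqref{eq:long5}.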
Therefore, by \eqref{eq:long5}, we get \begin{equation} \Exp\Cov\left( T(\point{0},\point{a}),T(\point{0},\point{b})|\mathcal{F}\right)\leq\C\sigma^2(m)\log n. \label{eq:long7} \end{equation} Therefore, by \eqref{A2}, the bound on the covariance of the conditional expectations in \eqref{eq:longrangeclaim1} is negligible compared to the bound on the expectation of the conditional covariance in \eqref{eq:long7}. Thus, combining \eqref{eq:longrangeclaim1} and \eqref{eq:long7} proves Theorem~\ref{thm:longrange}. \bibliographystyle{plainnat}
\section{Introduction} A key point of gravitation is that the metric tensor defines neither curvature nor torsion by itself \cite{koba}. As a matter of fact, curvature and torsion require a {\em connection} to be defined, and many different connections, with different curvature and torsion, can be defined on the very same metric spacetime \cite{livro}. How can we determine the relevant connection for gravitation? This is a fundamental question, which has more than one answer. For example, when constructing general relativity, Einstein chose the {\em zero--torsion} Levi--Civita, or Christoffel connection, which is a connection completely specified by the ten components of the metric tensor. In this theory, therefore, torsion is chosen to vanish from the very beginning, and the gravitational field is represented by curvature. A second possibility would be to choose a {\em zero--curvature} Lorentz connection not related to gravitation, but to inertial effects only. The gravitational theory that emerges from this choice is teleparallel gravity, a gauge theory for the translation group, in which curvature is assumed to vanish from the very beginning. In this theory, the gravitational field turns out to be represented by a translational gauge potential, which appears as the non--trivial part of the tetrad field and gives rise to a non--vanishing torsion, the field strength of the theory. One may wonder why a gauge theory for the translation group, and not for any other group related to spacetime. The answer is related to the source of gravitation: energy and momentum. From Noether's theorem, an instrumental piece of gauge theories \cite{kopo9}, we know that the energy--momentum tensor is conserved provided the source lagrangian is invariant under spacetime translations. If gravity is to be described by a gauge theory with energy--momentum as source, therefore, it must be a gauge theory for the translation group \cite{livro2}. 
Although equivalent to general relativity, teleparallel gravity provides a new insight into gravitation. The purpose of these lectures is to explore some of these insights, as well as discuss how this approach could help to answer some old questions permeating general relativity, like for example the energy localizability of the gravitational field and the problem of quantum gravity. \section{Linear Frames and Tetrads} \label{sec:framestetrads} Spacetime is the common arena on which the four presently known fundamental interactions manifest themselves. Electromagnetic, weak and strong interactions are described by gauge theories involving transformations taking place in {\em internal} spaces, by themselves unrelated to spacetime. The basic setting of gauge theories are the principal bundles, in which a copy of the corresponding gauge group is attached at each point of spacetime --- the base space of the bundle. Gravitation, on the other hand, is deeply linked to the very structure of spacetime. The geometrical setting of gravitation is the tangent bundle, a natural construction always present in any differentiable manifold: at each point of spacetime there is a tangent space attached to it --- the fiber of the bundle, which is seen as a vector space. In what follows we are going to use the Greek alphabet $(\mu, \nu, \rho, \dots = 0,1,2,3)$ to denote indices related to spacetime, and the first letters of the Latin alphabet $(a,b,c, \dots = 0,1,2,3)$ to denote indices related to the tangent space, a Minkowski spacetime whose Lorentz metric, in cartesian coordinates, is assumed to have the form \begin{equation} \eta_{ab} = \mathrm{diag}(+1,-1,-1,-1). \label{eq:etaofMinko} \end{equation} A general spacetime is a 4-dimensional differential manifold, indicated ${\mathbb R}^{3,1}$, whose tangent space is, at each point, a Minkowski spacetime. Spacetime coordinates will be denoted by $\{x^\mu\}$, whereas tangent space coordinates will be denoted by $\{x^a\}$. 
Such coordinate systems determine, on their domains of definition, local bases for vector fields, formed by the sets of gradients \begin{equation} \{\partial_\mu\} \equiv \{ {\partial}/{\partial x^\mu} \} \quad \mbox{and} \quad \{\partial_a\} \equiv \{ {\partial}/{\partial x^a} \}, \end{equation} as well as bases $\{dx^\mu\}$ and $\{dx^a\}$ for covector fields, or differentials. These bases are dual, in the sense that \begin{equation} dx^\mu \, ({\partial_\nu}) = \delta^\mu_\nu \quad \mbox{and} \quad dx^a \, ({\partial_b}) = \delta^a_b. \end{equation} On the respective domains of definition, any vector or covector can be expressed in terms of these bases, which can furthermore be extended by direct product to constitute bases for general tensor fields of any order. \subsection{Trivial Frames} \label{sec:frames} Trivial frames, or trivial tetrads,\index{Frame!field} will be denoted by \begin{equation} \{e_{a}\} \quad \mbox{and} \quad \{e^{a}\}. \end{equation} The above mentioned coordinate bases \begin{equation} \{{e_a}\} = \{{\partial_a}\} \quad \mbox{and} \quad \{{e^a}\} = \{{d x^a}\} \end{equation} are very particular cases, whose name stems from their relationship to a coordinate system. Any other set of four linearly independent fields $\{e_{a}\}$ will form another basis, and will have a dual $\{e^{a}\}$ whose members are such that \begin{equation} e^{a}(e_b) = \delta^a_b. \label{OrtoLiFra} \end{equation} Notice that, on a general manifold, vector fields are (like coordinate systems) only locally defined~---~and linear frames, as sets of four such fields, are only defined on restricted domains. These frame fields are the general linear bases on the spacetime differentiable manifold ${\mathbb R}^{\,3,1}$. The whole set of such bases, under conditions making of it also a differentiable manifold, constitutes the {\em bundle of linear frames}. 
A frame field provides, at each point $p \in {\mathbb R}^{\,3,1}$, a basis for the vectors on the tangent space ${T}_p{\mathbb R}^{\,3,1}$. Of course, on the common domains on which they are defined, each member of a given base can be written in terms of the members of any other. For example, \begin{equation} e_a = e_a{}^\mu \, \partial_\mu \quad \mbox{and} \quad e^{a} = e^{a}{}_\mu \, dx^\mu, \end{equation} and conversely, \begin{equation} \partial_\mu = e^a{}_\mu \, e_a \quad \mbox{and} \quad dx^\mu = e_{a}{}^\mu \, e^{a}. \label{eq:partialmu} \end{equation} On account of the orthogonality conditions (\ref{OrtoLiFra}), the frame components satisfy \begin{equation} e^{a}{}_{\mu} e_{a}{}^{\nu} = \delta_{\mu}^{\nu} \quad \mbox{and} \quad e^{a}{}_{\mu} e_{b}{}^{\mu} = \delta^{a}_{b}. \label{eq:frameprops1} \end{equation} Notice that these frames, with their bundles, are constitutive parts of spacetime: they are automatically present as soon as spacetime is taken to be a differentiable manifold. A general linear base $\{e_{a}\}$ satisfies the commutation relation \begin{equation} [e_{a}, e_{b}] = f^{c}{}_{a b} \; e_{c}, \label{eq:comtable0} \end{equation} with $f^{c}{}_{a b}$ the so--called structure coefficients, or coefficients of anholonomy, or still the anholonomy of frame $\{e_{a}\}$. As a simple computation shows, they are defined by \begin{equation} f^c{}_{a b} = e_a{}^{\mu} e_b{}^{\nu} (\partial_\nu e^c{}_{\mu} - \partial_\mu e^c{}_{\nu} ). \label{fcab0} \end{equation} A preferred class is that of inertial frames, denoted $e'_a$, those for which \begin{equation} f'^{a}{}_{cd} = 0. \label{fcabinertial} \end{equation} Base $\{e'^{a}\}$ is then said to be {\em holonomic}. Of course, all coordinate bases are holonomic. This is not a local property, in the sense that it is valid everywhere for frames belonging to this inertial class.
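The ``simple computation'' behind (\ref{fcab0}) consists in expanding the commutator in a coordinate basis,
\[
[e_{a}, e_{b}] = \left( e_a{}^{\mu} \, \partial_\mu e_b{}^{\nu} - e_b{}^{\mu} \, \partial_\mu e_a{}^{\nu} \right) \partial_\nu,
\]
and contracting with $e^c{}_\nu$: since $\partial_\mu \left( e^c{}_{\nu} \, e_b{}^{\nu} \right) = 0$ implies $e^c{}_{\nu} \, \partial_\mu e_b{}^{\nu} = - \, e_b{}^{\nu} \, \partial_\mu e^c{}_{\nu}$, one finds, after relabeling the dummy indices, precisely (\ref{fcab0}). As a simple illustration (in cylindrical coordinates, suppressing the $z$ direction), the orthonormal frame $e_0 = \partial_t$, $e_1 = \partial_r$, $e_2 = r^{-1} \partial_\theta$ on Minkowski spacetime satisfies $[e_1, e_2] = - \, r^{-1} e_2$, so that $f^2{}_{12} = - 1/r$: the frame is trivial, but anholonomic.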
Consider now the Minkowski spacetime metric, which in cartesian coordinates has the form \begin{equation} \eta_{\mu \nu} = \mathrm{diag}(+1,-1,-1,-1). \label{eq:etaofMinkoST} \end{equation} In any other coordinates, $\eta_{\mu \nu}$ will be a function of the spacetime coordinates. The linear frame \begin{equation} e_{a} = e_{a}{}^{\mu} \, {\partial_{\mu}}, \end{equation} provides a relation between the tangent--space metric $\eta_{a b}$ and the spacetime metric $\eta_{\mu \nu}$. This relation is given by \begin{equation} \eta_{a b} = {\eta}_{\mu \nu} \, e_{a}{}^{\mu} e_{b}{}^{\nu}. \label{gtoeta} \end{equation} Using the orthogonality conditions (\ref{eq:frameprops1}), the inverse relation is found to be \begin{equation} {\eta}_{\mu \nu} = \eta_{a b} \, e^{a}{}_{\mu} e^{b}{}_{\nu}. \label{eq:tettomet0} \end{equation} Independently of whether the frames $e_{a}$ are holonomic or not, or equivalently, whether they are inertial or not, they always relate the tangent Minkowski space to a Minkowski spacetime. These are the frames appearing in special relativity, and are usually called trivial frames --- or trivial tetrads. \subsection{Nontrivial Frames} \label{sec:tetrads} Nontrivial frames, or tetrads, will be denoted by \begin{equation} \{h_{a}\} \quad \mbox{and} \quad \{h^{a}\}. \end{equation} They are defined as linear frames whose coefficient of anholonomy is related to both inertial effects {\it and} gravitation. Let us consider a general pseudo--riemannian spacetime with metric components $g_{\mu \nu}$ in some dual holonomic basis $\{d x^{\mu}\}$. The tetrad field \begin{equation} h_{a} = h_{a}{}^{\mu} \, {\partial_{\mu}} \quad \mbox{and} \quad h^a = h^a{}_\mu dx^\mu, \end{equation} is a linear basis that relates $g_{\mu \nu}$ to the tangent--space metric $\eta_{a b}$ through the relation \begin{equation} \eta_{a b} = g_{\mu \nu} \, h_{a}{}^{\mu} h_{b}{}^{\nu}.
\label{eq:gtoeta} \end{equation} The components of the dual basis members $h^{a} = h^{a}{}_{\nu} dx^{\nu}$ satisfy \begin{equation} h^{a}{}_{\mu} \, h_{a}{}^{\nu} = \delta_{\mu}^{\nu} \quad {\rm and} \quad h^{a}{}_{\mu} \, h_{b}{}^{\mu} = \delta^{a}_{b}, \label{eq:tetradprops1} \end{equation} so that Eq.~(\ref{eq:gtoeta}) has the inverse \begin{equation} g_{\mu \nu} = \eta_{a b} \, h^{a}{}_{\mu} h^{b}{}_{\nu}. \label{eq:tettomet} \end{equation} We see from these relations that \begin{equation} h = \det (h^a{}_\mu) = \sqrt{-g} \, , \end{equation} with $g = \det(g_{\mu \nu})$. A tetrad basis $\{h_{a}\}$ satisfies the commutation relation \begin{equation} [h_{a}, h_{b}] = f^{c}{}_{a b}\, h_{c}, \label{eq:comtable} \end{equation} with $f^{c}{}_{a b}$ the structure coefficients, or coefficients of anholonomy, of frame $\{h_{a}\}$. The basic difference in relation to the linear bases $\{e_a\}$ is that now the $f^{c}{}_{a b}$ represent both inertia and gravitation. As before, the structure coefficients are given by \begin{equation} f^c{}_{a b} = h_a{}^{\mu} h_b{}^{\nu} (\partial_\nu h^c{}_{\mu} - \partial_\mu h^c{}_{\nu} ). \label{fcab} \end{equation} Although nontrivial tetrads are, by definition, anholonomic due to the presence of gravitation, it is still possible that {\em locally}, $f^{c}{}_{a b}$ = $0$. In this case, $d h^{a} = 0$, which means that $h^{a}$ is locally a closed differential form. In fact, if this holds at a point $p$, then there is a neighborhood around $p$ on which functions (coordinates) $x^a$ exist such that \[ h^{a} = dx^a. \] We say that a closed differential form is always locally integrable, or exact. This is the case of locally inertial frames, which are always holonomic. In these frames, inertial effects locally compensate gravitation. 
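As a quick numerical sanity check of relations (\ref{eq:gtoeta}), (\ref{eq:tettomet}) and $h = \sqrt{-g}$, here is a minimal sketch assuming \texttt{numpy} is available, with a hypothetical diagonal tetrad standing in for a generic one:

```python
import numpy as np

# Tangent-space Minkowski metric, Eq. (eq:etaofMinko)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Hypothetical diagonal tetrad h^a_mu (indexed [a, mu]), e.g. for a
# spatially flat metric g = diag(1, -s^2, -s^2, -s^2) with s = 2
s = 2.0
h = np.diag([1.0, s, s, s])
h_inv = np.linalg.inv(h).T  # components h_a^mu, indexed [a, mu]

# Eq. (eq:tettomet): g_{mu nu} = eta_{ab} h^a_mu h^b_nu
g = np.einsum('ab,am,bn->mn', eta, h, h)

# Eq. (eq:gtoeta): eta_{ab} = g_{mu nu} h_a^mu h_b^nu
eta_back = np.einsum('mn,am,bn->ab', g, h_inv, h_inv)

print(np.allclose(eta_back, eta))                                 # True
print(np.isclose(np.linalg.det(h), np.sqrt(-np.linalg.det(g))))   # True
```

With any nonsingular $h^a{}_\mu$ in place of the diagonal example the same checks go through; only the tetrad is hypothetical here, the relations themselves are (\ref{eq:gtoeta}) and (\ref{eq:tettomet}).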
\section{Lorentz Connections} \label{sec:connections} A {\em Lorentz connection} $A_\mu$, frequently referred to also as {\it spin connection}, is a 1-form assuming values in the Lie algebra of the Lorentz group, \begin{equation} A_\mu = \textstyle{\frac{1}{2}} \, A^{ab}{}_\mu \, S_{ab}, \end{equation} with $S_{ab}$ a given representation of the Lorentz generators. As these generators are antisymmetric in the algebraic indices, $A^{ab}{}_\mu$ must be equally antisymmetric in order to be lorentzian. This connection defines the Fock--Ivanenko covariant derivative \cite{fi1,fi2} \begin{equation} {\mathcal D}{}_\mu = \partial_\mu - \textstyle{\frac{i}{2}} \, A^{ab}{}_\mu \, S_{ab}, \label{eq:FockIvanenko} \end{equation} whose second part acts only on the algebraic, or tangent space indices. For a scalar field $\phi$, for example, the generators are \begin{equation} S_{ab} = 0. \end{equation} For a Dirac spinor $\psi$, they are given by \cite{dirac1} \begin{equation} S_{ab} = \textstyle{\frac{i}{4}} \left[\gamma_a, \gamma_b \right], \label{eq:spinorep} \end{equation} with $\gamma_a$ the Dirac matrices. A Lorentz vector field $\phi^c$, on the other hand, is acted upon by the vector representation of the Lorentz generators, matrices $S_{ab}$ with entries \cite{ramond1} \begin{equation} (S_{ab})^c{}_d = i \left(\eta_{bd} \, \delta_a^c - \eta_{ad} \, \delta_b^c \right).\label{eq:vecrep} \end{equation} The Fock--Ivanenko derivative is, in this case, \begin{equation} {\mathcal D}{}_\mu \phi^c = \partial_\mu \phi^c + A^{c}{}_{d \mu} \, \phi^d, \label{VectorFI} \end{equation} and so on for any other fundamental field. On account of the soldered character of the tangent bundle, a tetrad field relates tangent space (or internal) tensors with spacetime (or external) tensors. For example, if $\phi^a$ is an internal, or Lorentz vector, then \begin{equation} \phi^\rho = h_a{}^\rho \, \phi^a \label{ixe} \end{equation} will be a spacetime vector. 
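Equation (\ref{VectorFI}) follows by inserting the vector representation (\ref{eq:vecrep}) into the Fock--Ivanenko derivative (\ref{eq:FockIvanenko}) and using the antisymmetry of $A^{ab}{}_{\mu}$:
\[
- \textstyle{\frac{i}{2}} \, A^{ab}{}_{\mu} \, (S_{ab})^c{}_d \, \phi^d
= \textstyle{\frac{1}{2}} \, A^{ab}{}_{\mu} \left(\eta_{bd} \, \delta_a^c - \eta_{ad} \, \delta_b^c \right) \phi^d
= \textstyle{\frac{1}{2}} \left( A^{c}{}_{d \mu} + A^{c}{}_{d \mu} \right) \phi^d
= A^{c}{}_{d \mu} \, \phi^d .
\]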
Conversely, we can write \begin{equation} \phi^a = h^a{}_\rho \, \phi^\rho. \label{exi} \end{equation} On the other hand, due to its non--tensorial character, a connection will acquire a vacuum, or non--homogeneous term, under the same operation. For example, to each spin connection $A^{a}{}_{b \mu}$, there is a corresponding general linear connection $\Gamma{}^{\rho}{}_{\nu \mu}$, given by \begin{equation} \Gamma^{\rho}{}_{\nu \mu} = h_{a}{}^{\rho} \partial_{\mu} h^{a}{}_{\nu} + h_{a}{}^{\rho} A^{a}{}_{b \mu} h^{b}{}_{\nu} \equiv h_{a}{}^{\rho} \, {\mathcal D}{}_{\mu} h^{a}{}_{\nu}, \label{geco} \end{equation} where ${\mathcal D}{}_{\mu}$ is the covariant derivative (\ref{VectorFI}), in which the generators act on internal (or tangent space) indices only. The inverse relation is, consequently, \begin{equation} A^{a}{}_{b \mu} = h^{a}{}_{\nu} \partial_{\mu} h_{b}{}^{\nu} + h^{a}{}_{\nu} \Gamma^{\nu}{}_{\rho \mu} h_{b}{}^{\rho} \equiv h^{a}{}_{\nu} \nabla_{\mu} h_{b}{}^{\nu}, \label{gsc} \end{equation} where $\nabla_{\mu}$ is the standard covariant derivative in the connection $\Gamma^{\nu}{}_{\rho \mu}$, which acts on external indices only. For a spacetime vector $\phi^\nu$, for example, it is given by \begin{equation} \nabla_\mu \phi^\nu = \partial_\mu \phi^\nu + \Gamma^\nu{}_{\rho \mu} \, \phi^\rho. \end{equation} Using relations (\ref{ixe}) and (\ref{exi}), it is easy to verify that \cite{kibble} \begin{equation} {\mathcal D}{}_\mu \phi^d = h^d{}_\rho \, \nabla_\mu \phi^\rho. \label{iDxeD} \end{equation} Equations (\ref{geco}) and (\ref{gsc}) are simply different ways of expressing the property that the total covariant derivative of the tetrad~---~that is, a covariant derivative with connection terms for both internal and external indices~---~vanishes identically: \begin{equation} \partial_{\mu} h^{a}{}_{\nu} - \Gamma^{\rho}{}_{\nu \mu} h^{a}{}_{\rho} + A^{a}{}_{b \mu} h^{b}{}_{\nu} = 0. 
\label{todete} \end{equation} \subsection{Curvature and Torsion} \label{sec:CurvTor} Curvature and torsion require a Lorentz connection to be defined \cite{koba}. Given a Lorentz connection $A^{a}{}_{b \mu}$, the corresponding curvature is a 2-form assuming values in the Lie algebra of the Lorentz group, \begin{equation} R_{\nu \mu} = \textstyle{\frac{1}{2}} \; R^{ab}{}_{\nu \mu} \; S_{ab}. \end{equation} Torsion is also a 2-form, but assuming values in the Lie algebra of the translation group, \begin{equation} T_{\nu \mu} = T^{a}{}_{\nu \mu} \, P_a, \end{equation} with $P_a = \partial_a$ the translation generators. The curvature and torsion components are given, respectively, by \begin{equation} R^{a}{}_{b \nu \mu} = \partial_{\nu} A^{a}{}_{b \mu} - \partial_{\mu} A^{a}{}_{b \nu} + A^a{}_{e \nu} A^e{}_{b \mu} - A^a{}_{e \mu} A^e{}_{b \nu} \label{curvaDef} \end{equation} and \begin{equation} T^a{}_{\nu \mu} = \partial_{\nu} h^{a}{}_{\mu} - \partial_{\mu} h^{a}{}_{\nu} + A^a{}_{e \nu} h^e{}_{\mu} - A^a{}_{e \mu} h^e{}_{\nu}. \label{tordef} \end{equation} Through contraction with tetrads, these tensors can be written in spacetime--indexed forms: \begin{equation} R^\rho{}_{\lambda\nu\mu} = h_a{}^\rho \, h^b{}_\lambda \, R^a{}_{b \nu \mu}, \label{RatoRmi} \end{equation} and \begin{equation} T^\rho{}_{\nu \mu} = h_a{}^\rho \, T^a{}_{\nu \mu}. \label{TatoTmi} \end{equation} Using relation (\ref{gsc}), their components are found to be \begin{equation} \label{sixbm} R^\rho{}_{\lambda\nu\mu} = \partial_\nu \Gamma^\rho{}_{\lambda \mu} - \partial_\mu \Gamma^\rho{}_{\lambda \nu} + \Gamma^\rho{}_{\eta \nu} \Gamma^\eta{}_{\lambda \mu} - \Gamma^\rho{}_{\eta \mu} \Gamma^\eta{}_{\lambda \nu} \end{equation} and \begin{equation} T^\rho{}_{\nu \mu} = \Gamma^\rho{}_{\mu\nu} - \Gamma^\rho{}_{\nu\mu}. \label{sixam} \end{equation} Since the spin connection $A^a{}_{b \nu}$ is a four--vector in the last index, it satisfies \begin{equation} A^a{}_{bc} = A^a{}_{b \nu} \, h_c{}^\nu. 
\end{equation} It can thus be verified that, in the anholonomic basis $\{h_a\}$, the curvature and torsion components are given respectively by \begin{equation} R{}^{a}{}_{b cd } = h_c \left(A^a{}_{b d} \right) - h_d \left(A^a{}_{b c} \right) + A^a{}_{e c}A^e{}_{b d} - A^a{}_{e d} A^e{}_{b c} - f^{e}{}_{c d} A^a{}_{b e} \label{13bm} \end{equation} and \begin{equation}\ T^a{}_{bc} = A^a{}_{cb} - A^a{}_{bc} - f^a{}_{bc}, \label{13am} \end{equation} where, we recall, $h_c = h_c{}^\mu \partial_\mu$. Use of~(\ref{13am}) for three different combinations of indices gives \begin{equation}% A^{a}{}_{b c} = \textstyle{\frac{1}{2}} (f_{b}{}^{a}{}_{c} + T_{b}{}^{a}{}_{c} + f_{c}{}^{a}{}_{b} + T_{c}{}^{a}{}_{b} - f^{a}{}_{b c} - T^{a}{}_{b c}). \label{tobetaken2} \end{equation}% This expression can be rewritten in the form \begin{equation} A^a{}_{bc} = {\stackrel{~\circ}{A}}{}^a{}_{bc} + K^a{}_{bc}, \label{rela0alge} \end{equation} where \begin{equation}% {\stackrel{~\circ}{A}}{}^{a}{}_{b c} = \textstyle{\frac{1}{2}} \left(f_{b}{}^{a}{}_{c} + f_{c}{}^{a}{}_{b} - f^{a}{}_{b c} \right) \label{tobetaken30} \end{equation}% is the usual expression of the general relativity spin connection in terms of the coefficients of anholonomy, and \begin{equation}% K^{a}{}_{b c} = \textstyle{\frac{1}{2}} \left(T_{b}{}^{a}{}_{c} + T_{c}{}^{a}{}_{b} - T^{a}{}_{b c} \right) \label{contorDef} \end{equation}% is the {contortion tensor}. Equation~(\ref{rela0alge}) is actually the content of a theorem, which states that any Lorentz connection can be decomposed into the spin connection of general relativity plus the contortion tensor \cite{HGV721}. 
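As a sanity check of the algebra, relation (\ref{tobetaken2}) can be verified numerically: the sketch below, assuming \texttt{numpy} and using random arrays as stand-ins for the anholonomy and torsion components, confirms that the connection so defined reproduces the torsion through (\ref{13am}) and is antisymmetric in its lorentzian pair of indices:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # also its own inverse

# Random stand-ins: f^c_{ab} and T^a_{bc}, stored as X[upper, lower1, lower2],
# each antisymmetric in its two lower indices
X = rng.normal(size=(4, 4, 4)); f = X - X.transpose(0, 2, 1)
Y = rng.normal(size=(4, 4, 4)); T = Y - Y.transpose(0, 2, 1)

def mixed(Z):
    # Z^d_{ec} -> Z_b{}^a{}_c = eta_{bd} eta^{ae} Z^d_{ec}, indexed [b, a, c]
    return np.einsum('bd,ae,dec->bac', eta, eta, Z)

Fm = mixed(f + T)

# Eq. (tobetaken2): A^a_{bc} = (1/2)(F_b{}^a{}_c + F_c{}^a{}_b - F^a_{bc}), F = f + T
A = 0.5 * (Fm.transpose(1, 0, 2) + Fm.transpose(1, 2, 0) - (f + T))

# Eq. (13am) recovered: T^a_{bc} = A^a_{cb} - A^a_{bc} - f^a_{bc}
assert np.allclose(A.transpose(0, 2, 1) - A - f, T)

# A is lorentzian: A_{abc} = eta_{ad} A^d_{bc} is antisymmetric in a, b
A_low = np.einsum('ad,dbc->abc', eta, A)
assert np.allclose(A_low, -A_low.transpose(1, 0, 2))
```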
The corresponding expression in terms of the spacetime--indexed linear connection reads \begin{equation} \Gamma^\rho{}_{\mu\nu} = {\stackrel{\circ}{\Gamma}}{}^{\rho}{}_{\mu \nu} + K^\rho{}_{\mu\nu}, \label{prela0} \end{equation} where \begin{equation} {\stackrel{\circ}{\Gamma}}{}^{\sigma}{}_{\mu \nu} = \textstyle{\frac{1}{2}} \, g^{\sigma \rho} \left( \partial_{\mu} g_{\rho \nu} + \partial_{\nu} g_{\rho \mu} - \partial_{\rho} g_{\mu \nu} \right) \label{lci} \end{equation} is the zero--torsion Christoffel, or Levi--Civita connection, and \begin{equation} K^\rho{}_{\mu\nu} = {\textstyle \frac{1}{2}} \left(T_\nu{}^\rho{}_\mu+T_\mu{}^\rho{}_\nu- T^\rho{}_{\mu\nu}\right) \label{contor} \end{equation} is the spacetime--indexed contortion tensor. \subsection{Behavior Under Lorentz Transformations} \label{sec:LorentzTransf} A local Lorentz transformation is fundamentally a transformation of the tangent space coordinates $x^a$: \begin{equation} x'^a = \Lambda^a{}_b(x) \, x^b. \end{equation} Under such a transformation, the tetrad transforms according to \begin{equation} h'^{a} = \Lambda^{a}{}_b(x) \, h^b. \end{equation} At each point of a riemannian spacetime, Eq.~(\ref{eq:tettomet}) only determines the tetrad up to transformations of the six--parameter Lorentz group in the tangent space indices. This means that there exists actually an infinity of tetrads $h_{a}{}^{\mu}$, each one relating the spacetime metric $g_{\mu \nu}$ to the tangent space metric $\eta_{c d}$ by Eqs.~(\ref{eq:gtoeta}) and (\ref{eq:tettomet}). In fact, any other Lorentz--rotated tetrad $\{h'_{a}\}$ will also relate the same metrics \begin{equation} g_{\mu \nu} = \eta_{c d}\, h'^{c}{}_{\mu} h'^{d}{}_{\nu}. 
\label{etatogmunu} \end{equation} Under a local Lorentz transformation $\Lambda^{a}{}_{b}(x)$, the spin connection undergoes the transformation \begin{equation} A'^{a}{}_{b \mu} = \Lambda^{a}{}_{c}(x) \, A^{c}{}_{d \mu} \, \Lambda_{b}{}^{d}(x) + \Lambda^{a}{}_{c}(x) \, \partial_{\mu} \Lambda_{b}{}^{c}(x). \label{ltsc} \end{equation} Of course, both curvature $R^a{}_{b \nu \mu}$ and torsion $T^a{}_{\nu \mu}$ transform covariantly: \begin{equation} R'^{a}{}_{{b} \nu \mu} = \Lambda^{a}{}_{c}(x) \,\Lambda_b{}^d(x) \, R^c{}_{d \nu \mu} \quad \mbox{and} \quad T'^{a}{}_{\nu \mu} = \Lambda^{a}{}_{b}(x) \, T^b{}_{\nu \mu}. \end{equation} \subsection{Purely Inertial Lorentz Connection} \label{InerGra} In special relativity, Lorentz connections represent only inertial effects present in a given frame. In the class of inertial frames, where these effects are absent, the Lorentz connection vanishes identically. Since this is the class most used in field theory, Lorentz connections do not routinely show up in relativistic physics. Of course, as long as physics is frame independent, it can be described in any class of frames. For the sake of simplicity, however, one always uses the class of inertial frames when dealing with non--gravitational physics. To see how an inertial Lorentz connection shows up, let us denote by $e^a{}_\mu$ a generic frame in Minkowski spacetime. The class of inertial (or holonomic) frames, defined by all frames for which $f'^{c}{}_{a b} = 0$, will be denoted by $e'^a{}_\mu$. In a general coordinate system, the frames belonging to this class have the holonomic form \begin{equation} e'^a{}_\mu = \partial_\mu x'^a, \label{frame0} \end{equation} with $x'^a$ a spacetime--dependent Lorentz vector: $x'^a = x'^a(x^\mu)$. The spacetime metric \begin{equation} \eta'_{\mu \nu} = e'^a{}_\mu e'^b{}_\nu \, \eta_{ab} \label{eeee0} \end{equation} still represents the Minkowski metric, but in a general coordinate system. 
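The statement that all Lorentz--rotated tetrads relate the same two metrics, Eq.~(\ref{etatogmunu}), is easy to verify numerically. A minimal numpy sketch, using a boost along $x$ and an arbitrary invertible tetrad (both purely illustrative choices):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A Lorentz boost along x with rapidity 0.3 (any value would do)
phi = 0.3
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(phi)
L[0, 1] = L[1, 0] = np.sinh(phi)

# The boost preserves the tangent-space metric: eta_ab L^a_c L^b_d = eta_cd
preserves_eta = np.allclose(L.T @ eta @ L, eta)

# An arbitrary invertible tetrad h^a_mu and its Lorentz-rotated partner
rng = np.random.default_rng(1)
h = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
h_rot = L @ h                       # h'^a_mu = Lambda^a_b h^b_mu

# g_mu_nu = eta_ab h^a_mu h^b_nu, computed from both tetrads
g_from_h   = np.einsum('ab,am,bn->mn', eta, h, h)
g_from_rot = np.einsum('ab,am,bn->mn', eta, h_rot, h_rot)
same_metric = np.allclose(g_from_h, g_from_rot)
```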
In the specific case of cartesian coordinates, the inertial frame assumes the form \begin{equation} e'^a{}_\mu = \delta_\mu^a \label{frame00} \end{equation} and the spacetime metric $\eta'_{\mu \nu}$ is that given by Eq.~(\ref{eq:etaofMinkoST}). Under a local Lorentz transformation, the holonomic frame (\ref{frame0}) transforms according to \begin{equation} e^a{}_\mu = \Lambda^a{}_b(x) \, e'^b{}_\mu. \label{LoreTrans-e} \end{equation} As a simple computation shows, it has the explicit form \begin{equation} e^a{}_\mu = \partial_\mu x^a + {\stackrel{\bullet}{A}}{}^a{}_{b \mu} \, x^b \equiv {\stackrel{~\bullet}{\mathcal D}}{}_\mu x^a, \label{InertiaTetrad} \end{equation} where \begin{equation} {\stackrel{\bullet}{A}}{}^a{}_{b \mu} = \Lambda^a{}_e(x) \, \partial_\mu \Lambda_b{}^e(x) \label{InerConn} \end{equation} is a Lorentz connection that represents the inertial effects present in the new frame $e^a{}_\mu$. As can be seen from Eq.~(\ref{ltsc}), it is just the connection obtained from a Lorentz transformation of the vanishing spin connection ${\stackrel{\bullet}{A}}{}'^e{}_{d \mu} = 0$: \begin{equation} {\stackrel{\bullet}{A}}{}^a{}_{b \mu} = \Lambda^a{}_e(x) \, {\stackrel{\bullet}{A}}{}'^e{}_{d \mu} \, \Lambda_b{}^d(x) + \Lambda^a{}_e(x) \, \partial_\mu \Lambda_b{}^e(x). \end{equation} Starting from an inertial frame, different classes of frames are obtained by performing {\em local} (point--dependent) Lorentz transformations $\Lambda^{a}{}_b(x^\mu)$. Inside each class, the infinitely many frames are related through {\em global} (point--independent) Lorentz transformations, $\Lambda^{a}{}_b =$~constant. The inertial connection~(\ref{InerConn}) is sometimes referred to as the Ricci coefficient of rotation \cite{MTW732}. Due to its presence, the transformed frame $e^a{}_\mu$ is no longer holonomic. 
In fact, its coefficient of anholonomy is given by \begin{equation} f^c{}_{a b} = - \left({\stackrel{\bullet}{A}}{}^c{}_{a b} - {\stackrel{\bullet}{A}}{}^c{}_{b a} \right), \end{equation} where we have used the identity ${\stackrel{\bullet}{A}}{}^a{}_{b c} = {\stackrel{\bullet}{A}}{}^a{}_{b \mu} \, e_c{}^\mu$. The inverse relation is \begin{equation} {\stackrel{\bullet}{A}}{}^{a}{}_{b c} = \textstyle{\frac{1}{2}} \left(f_{b}{}^{a}{}_{c} + f_{c}{}^{a}{}_{b} - f^{a}{}_{b c} \right). \label{InerConefff} \end{equation} Of course, as a purely inertial connection, $\,{\stackrel{\bullet}{A}}{}^a{}_{b \mu}$ has vanishing curvature and torsion: \begin{equation} {\stackrel{\bullet}{R}}{}^{a}{}_{b \nu \mu} \equiv \partial_{\nu} {\stackrel{\bullet}{A}}{}^{a}{}_{b \mu} - \partial_{\mu} {\stackrel{\bullet}{A}}{}^{a}{}_{b \nu} + {\stackrel{\bullet}{A}}{}^a{}_{e \nu} \, {\stackrel{\bullet}{A}}{}^e{}_{b \mu} - {\stackrel{\bullet}{A}}{}^a{}_{e \mu} \, {\stackrel{\bullet}{A}}{}^e{}_{b \nu} = 0 \label{curvaDefW} \end{equation} and \begin{equation} {\stackrel{\bullet}{T}}{}^a{}_{\nu \mu} \equiv \partial_{\nu} e^{a}{}_{\mu} - \partial_{\mu} e^{a}{}_{\nu} + {\stackrel{\bullet}{A}}{}^a{}_{e \nu} \, e^e{}_{\mu} - {\stackrel{\bullet}{A}}{}^a{}_{e \mu} \, e^e{}_{\nu} = 0. \label{tordefW} \end{equation} \subsection{Equation of Motion of Free Particles} As a concrete example, let us consider the equation of motion of a free particle. In the class of inertial frames $e'^a{}_\mu$, such a particle is described by the equation of motion \begin{equation} \frac{d u'^a}{d\sigma} = 0, \end{equation} with $u'^a$ the particle four--velocity, and \begin{equation} d \sigma^2 = \eta_{\mu \nu} \, dx^\mu dx^\nu \label{MinkoInter} \end{equation} the quadratic Minkowski invariant interval.
In an anholonomic frame $e^a{}_\mu$, related to $e'^a{}_\mu$ by the local Lorentz transformation (\ref{LoreTrans-e}), the equation of motion assumes the manifestly covariant form \begin{equation} \frac{d u^a}{d\sigma} + {\stackrel{\bullet}{A}}{}^a{}_{b \mu} \, u^b \, u^\mu = 0, \label{anholoEM} \end{equation} where \begin{equation} u^a = \Lambda^a{}_b(x) \, u'^b \end{equation} is the Lorentz transformed four--velocity, with \begin{equation} u^\mu = u^a \, e_a{}^\mu \end{equation} the spacetime--indexed four--velocity, which has the usual holonomic form \begin{equation} u^\mu = \frac{d x^\mu}{d\sigma}. \end{equation} Observe that the inertial forces coming from the frame non--inertiality are represented by the inertial connection on the left--hand side, which is of course non--covariant. \section{General Relativity} \label{InerEquations} General Relativity conceives the gravitational interaction as a change in the geometry of spacetime itself. Specifically, as a change from the Lorentz metric $\eta_{\mu \nu}$ of Minkowski space into a riemannian metric $g_{\mu \nu}$. This new metric plays the role of basic field, and is in principle defined everywhere. Derivatives compatible with this overall presence of the same metric must preserve it, must parallel--transport it everywhere. Of all such Lorentz connections preserving $g_{\mu \nu}$, the most natural choice from the point of view of universality is to pick up the Christoffel, or Levi--Civita connection \begin{equation} {\stackrel{\circ}{\Gamma}}{}^\sigma{}_{\mu \nu} = \textstyle{\frac{1}{2}} \, g^{\sigma \rho} \left( \partial_{\mu} g_{\rho \nu} + \partial_{\nu} g_{\rho \mu} - \partial_{\rho} g_{\mu \nu} \right), \end{equation} which is a connection determined solely by the ten components of the metric tensor $g_{\mu \nu}$. It is the only metric--preserving connection with vanishing torsion, a magnitude which is then found not to play any role in the general--relativistic description of the gravitational interaction.
The corresponding spin connection is \begin{equation} {\stackrel{~\circ}{A}}{}^{a}{}_{b \mu} = h^{a}{}_{\nu} \partial_{\mu} h_{b}{}^{\nu} + h^{a}{}_{\nu} {\stackrel{\circ}{\Gamma}}{}^{\nu}{}_{\rho \mu} h_{b}{}^{\rho}. \end{equation} Its Riemann curvature, \begin{equation} {\stackrel{\circ}{R}}{}^{a}{}_{b \nu \mu} = \partial_{\nu} {\stackrel{~\circ}{A}}{}^{a}{}_{b \mu} - \partial_{\mu} {\stackrel{~\circ}{A}}{}^{a}{}_{b \nu} + {\stackrel{~\circ}{A}}{}^a{}_{e \nu} {\stackrel{~\circ}{A}}{}^e{}_{b \mu} - {\stackrel{~\circ}{A}}{}^a{}_{e \mu} {\stackrel{~\circ}{A}}{}^e{}_{b \nu}, \end{equation} represents the fundamental field of the theory: gravitation is present whenever at least one of its components is non--vanishing. The field equation governing the dynamics of general relativity is the Einstein equation \begin{equation} {\stackrel{\circ}{R}}{}^a{}_{\nu} - {\textstyle{\frac{1}{2}}}\, {\stackrel{\circ}{R}}{} \, h^a{}_{\nu} = k \, \Theta^a{}_{\nu}, \label{EinsFiEq} \end{equation} where $k = 8 \pi G/c^{4}$, \begin{equation} {\stackrel{\circ}{R}}{}^a{}_{\nu} = {\stackrel{\circ}{R}}{}^{\rho a}{}_{\rho \nu} \quad \mbox{and} \quad {\stackrel{\circ}{R}}{} = h^a{}_{\nu} {\stackrel{\circ}{R}}{}_a{}^{\nu} \label{RicciScalar} \end{equation} are, respectively, the Ricci and the scalar curvature, and \begin{equation} \Theta^a{}_{\nu} = -\, \frac{1}{\sqrt{-g}} \, \frac{\delta {\mathcal L}_s}{\delta h_a{}^{\nu}} \label{eq:Phys.8.[2.5]} \end{equation} is the symmetric source energy--momentum tensor modified by the presence of gravitation, with ${\mathcal L}_s$ the source field lagrangian. This equation can be obtained from the lagrangian \begin{equation} {\mathcal L} = {\stackrel{\circ}{\mathcal L}}{} + {\mathcal L}_s, \label{eq:Phys.8.[1.12]} \end{equation} where \begin{equation} {\stackrel{\circ}{\mathcal L}}{} = -\; \frac{1}{2k} \, \sqrt{-g} \; {\stackrel{\circ}{R}}{} \label{EinsHilberLag} \end{equation} is the Einstein--Hilbert lagrangian of general relativity.
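To see the curvature machinery at work, one can compute the scalar curvature of a simple riemannian space. With the conventions used here, the round 2--sphere of radius $r$ (a standard textbook case, chosen purely as illustration) has ${\stackrel{\circ}{R}}{} = 2/r^2$; a sympy sketch:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
r = sp.symbols('r', positive=True)
coords = (th, ph)
n = 2

g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(th)**2]])
ginv = g.inv()

# Christoffel symbols of g, Gam[s][mu][nu] = Gamma^s_{mu nu}
Gam = [[[sp.simplify(sum(sp.Rational(1, 2) * ginv[s, rho] *
            (sp.diff(g[rho, nu], coords[mu]) + sp.diff(g[rho, mu], coords[nu])
             - sp.diff(g[mu, nu], coords[rho])) for rho in range(n)))
         for nu in range(n)] for mu in range(n)] for s in range(n)]

def riemann(rho, sig, mu, nu):
    # R^rho_{sig mu nu} = d_mu Gam^rho_{nu sig} - d_nu Gam^rho_{mu sig} + Gam Gam - Gam Gam
    return (sp.diff(Gam[rho][nu][sig], coords[mu])
            - sp.diff(Gam[rho][mu][sig], coords[nu])
            + sum(Gam[rho][mu][lam] * Gam[lam][nu][sig]
                  - Gam[rho][nu][lam] * Gam[lam][mu][sig] for lam in range(n)))

# Ricci tensor and scalar curvature (holonomic version of Eq. (RicciScalar))
ricci = [[sp.simplify(sum(riemann(mu, sig, mu, nu) for mu in range(n)))
          for nu in range(n)] for sig in range(n)]
Rscalar = sp.simplify(sum(ginv[s, nu] * ricci[s][nu]
                          for s in range(n) for nu in range(n)))
# Rscalar simplifies to 2/r**2
```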
\subsection{How Does General Relativity Describe Gravitation?} The curvature of the Levi--Civita connection gives rise to a geometric description of the gravitational interaction. To understand what such a {\em geometrical description} means, let us consider the motion of a (spinless) particle in a gravitational field. In Minkowski spacetime, such a particle obeys the equation \begin{equation} \frac{d u^a}{d\sigma} = 0, \label{FreePartEqMot} \end{equation} with $d\sigma$ the Minkowski invariant interval~(\ref{MinkoInter}). To obtain the equation valid in the presence of gravitation, a rule reminiscent of the gauge prescription turns up: the minimal coupling prescription. According to this prescription, all ordinary derivatives must be replaced by covariant derivatives. In the specific case of general relativity, the free equation of motion (\ref{FreePartEqMot}) becomes the usual geodesic equation \begin{equation} \frac{du^a}{ds} + {\stackrel{~\circ}{A}}{}^{a}{}_{b \nu} \, u^b u^{\nu} = 0, \label{eq:geodesic} \end{equation} where \begin{equation} ds^2 = g_{\mu \nu} \, dx^\mu dx^\nu \label{CurveInter} \end{equation} is the riemannian spacetime quadratic interval. This equation describes the motion of a test particle in the presence of a gravitational field. It says essentially that the four--acceleration of the particle vanishes: \begin{equation} {\stackrel{\circ}{a}}{}^{a} = 0. \end{equation} This means that in General Relativity {\em there is no concept of gravitational force}. In this theory, the gravitational interaction is geometrized: the presence of gravitation produces a curvature in spacetime, and the gravitational interaction is described by letting particles freely follow the spacetime curvature. \section{Teleparallel Gravity} \label{sec:TeleGrav} Teleparallel gravity corresponds to a gauge theory for the translation group.
Accordingly, the gravitational field is represented by a translational gauge potential ${B^a{}_\mu}$, a 1-form assuming values in the Lie algebra of the translation group: \begin{equation} B_\mu = B^a{}_\mu \, P_a. \end{equation} It appears as the non--trivial part of the tetrad, \begin{equation} h^a{}_\mu = e^a{}_\mu + B^a{}_\mu, \label{TeleTetrada2} \end{equation} with \begin{equation} e^a{}_\mu \equiv {\stackrel{~\bullet}{\mathcal D}}{}_\mu x^a = \partial_\mu x^a + {\stackrel{\bullet}{A}}{}^a{}_{b \mu} \, x^b \label{NonGraTetra} \end{equation} the trivial (non--gravitational) tetrad. Under a gauge translation \begin{equation} \delta x^a = \varepsilon^a, \end{equation} the potential $B^a{}_\mu$ transforms according to \begin{equation} \delta B^a{}_\mu = - \, {\stackrel{~\bullet}{\mathcal D}}{}_\mu \varepsilon^a. \label{BamGauTrans} \end{equation} The tetrad is consequently gauge invariant: \begin{equation} \delta h^a{}_\mu = 0. \end{equation} The field strength of teleparallel gravity is a 2-form assuming values in the Lie algebra of the translation group. In a general Lorentz frame its components are given by \begin{equation} {\stackrel{\bullet}{T}}{}^a{}_{\mu \nu} = \partial_\mu B^a{}_\nu - \partial_\nu B^a{}_\mu + {\stackrel{\bullet}{A}}{}^a{}_{b \mu} B^b{}_{\nu} - {\stackrel{\bullet}{A}}{}^a{}_{b \nu} B^b{}_{\mu}, \label{tfs} \end{equation} or equivalently \begin{equation} {\stackrel{\bullet}{T}}{}^{a}{}_{\mu \nu} = {\stackrel{~\bullet}{\mathcal D}}{}_\mu B^a{}_\nu - {\stackrel{~\bullet}{\mathcal D}}{}_\nu B^a{}_\mu. \label{tfs1} \end{equation} Since \begin{equation} {\stackrel{~\bullet}{\mathcal D}}{}_\mu {\stackrel{~\bullet}{\mathcal D}}{}_\nu x^a - {\stackrel{~\bullet}{\mathcal D}}{}_\nu {\stackrel{~\bullet}{\mathcal D}}{}_\mu x^a = 0, \end{equation} it can be rewritten in the form \begin{equation} {\stackrel{\bullet}{T}}{}^a{}_{\mu \nu} = {\stackrel{~\bullet}{\mathcal D}}{}_\mu h^a{}_\nu - {\stackrel{~\bullet}{\mathcal D}}{}_\nu h^a{}_\mu. 
\label{tfs3} \end{equation} We see in this way that the field strength is nothing else but torsion. On account of the gauge invariance of the tetrad, the field strength is also invariant under gauge transformations: \begin{equation} {\stackrel{\bullet}{T}}{}'^a{}_{\mu \nu} = {\stackrel{\bullet}{T}}{}^a{}_{\mu \nu}. \end{equation} This is an expected result. In fact, considering that the generators of the adjoint representation are the coefficients of structure of the group taken as matrices, and considering that these coefficients vanish for abelian groups, fields belonging to the adjoint representations of abelian gauge theories will always be gauge invariant. \subsection{Teleparallel Lorentz Connection} \label{sec:BasicFields} The fundamental Lorentz connection of teleparallel gravity is the purely inertial connection (\ref{InerConn}). This means that in this theory Lorentz connections keep the special--relativistic role of representing inertial effects only. Of course, as a purely inertial connection, its curvature vanishes identically: \begin{equation} {\stackrel{\bullet}{R}}{}^{a}{}_{b \mu \nu} = \partial_{\mu} {\stackrel{\bullet}{A}}{}^{a}{}_{b \nu} - \partial_{\nu} {\stackrel{\bullet}{A}}{}^{a}{}_{b \mu} + {\stackrel{\bullet}{A}}{}^a{}_{e \mu} {\stackrel{\bullet}{A}}{}^e{}_{b \nu} - {\stackrel{\bullet}{A}}{}^a{}_{e \nu} {\stackrel{\bullet}{A}}{}^e{}_{b \mu} = 0. \end{equation} However, for a tetrad involving a non--trivial translational gauge potential $B^a{}_\mu$, that is, for \begin{equation} B^a{}_\mu \neq {\stackrel{~\bullet}{\mathcal D}}{}_\mu \varepsilon^a, \label{NonTriB} \end{equation} torsion will be non--vanishing: \begin{equation} {\stackrel{\bullet}{T}}{}^a{}_{\mu \nu} = \partial_{\mu} h^{a}{}_{\nu} - \partial_{\nu} h^{a}{}_{\mu} + {\stackrel{\bullet}{A}}{}^a{}_{e \mu} h^e{}_{\nu} - {\stackrel{\bullet}{A}}{}^a{}_{e \nu} h^e{}_{\mu} \neq 0. \end{equation} In teleparallel gravity, therefore, gravitation is represented by torsion, not by curvature.
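The gauge invariance of the field strength, Eqs.~(\ref{tfs}) and (\ref{BamGauTrans}), can also be checked directly. The sympy sketch below works in 1+1 dimensions, in a frame where the inertial connection vanishes so that ${\stackrel{~\bullet}{\mathcal D}}{}_\mu = \partial_\mu$ (the dimension and the function names are illustrative choices):

```python
import sympy as sp

t, x = sp.symbols('t x')
coords = (t, x)
n = 2

# Translational potential B^a_mu and gauge parameter eps^a (placeholder names)
B = [[sp.Function(f'B{a}{m}')(t, x) for m in range(n)] for a in range(n)]
eps = [sp.Function(f'eps{a}')(t, x) for a in range(n)]

def torsion(pot):
    # T^a_{mu nu} = d_mu B^a_nu - d_nu B^a_mu, cf. Eq. (tfs) with A = 0
    return [[[sp.diff(pot[a][nu], coords[mu]) - sp.diff(pot[a][mu], coords[nu])
              for nu in range(n)] for mu in range(n)] for a in range(n)]

# Gauge transformation, Eq. (BamGauTrans): B^a_mu -> B^a_mu - d_mu eps^a
Bg = [[B[a][m] - sp.diff(eps[a], coords[m]) for m in range(n)] for a in range(n)]

T0, T1 = torsion(B), torsion(Bg)
invariant = all(sp.simplify(T0[a][m][nu] - T1[a][m][nu]) == 0
                for a in range(n) for m in range(n) for nu in range(n))
```

The invariance rests only on the commutativity of partial derivatives acting on the gauge parameter, which is why it holds for any abelian gauge theory.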
This is at variance with General Relativity, whose spin connection ${\stackrel{~\circ}{A}}{}^{a}{}_{b \mu}$ has vanishing torsion \begin{equation} {\stackrel{\circ}{T}}{}^a{}_{\mu \nu} = \partial_{\mu} h^{a}{}_{\nu} - \partial_{\nu} h^{a}{}_{\mu} + {\stackrel{~\circ}{A}}{}^a{}_{e \mu} h^e{}_{\nu} - {\stackrel{~\circ}{A}}{}^a{}_{e \nu} h^e{}_{\mu} = 0, \end{equation} but non--vanishing curvature \begin{equation} {\stackrel{\circ}{R}}{}^{a}{}_{b \mu \nu} = \partial_{\mu} {\stackrel{~\circ}{A}}{}^{a}{}_{b \nu} - \partial_{\nu} {\stackrel{~\circ}{A}}{}^{a}{}_{b \mu} + {\stackrel{~\circ}{A}}{}^a{}_{e \mu} {\stackrel{~\circ}{A}}{}^e{}_{b \nu} - {\stackrel{~\circ}{A}}{}^a{}_{e \nu} {\stackrel{~\circ}{A}}{}^e{}_{b \mu} \neq 0. \end{equation} The spacetime--indexed linear connection corresponding to the inertial spin connection (\ref{InerConn}) is \begin{equation} {\stackrel{\bullet~}{\Gamma}}{}^{\rho}{}_{\nu \mu} = h_{a}{}^{\rho} \partial_{\mu} h^{a}{}_{\nu} + h_{a}{}^{\rho}{\stackrel{\bullet}{A}}{}^a{}_{b \mu} \, h^b{}_\nu \equiv h_{a}{}^{\rho} \, {\stackrel{~\bullet}{\mathcal D}}{}_{\mu} h^{a}{}_{\nu}. \label{gecow} \end{equation} This is the so--called Weitzenb\"ock connection. Its definition is equivalent to the identity \begin{equation} \partial_{\mu}h^a{}_{\nu} + {\stackrel{\bullet}{A}}{}^a{}_{b \mu} \, h^b{}_\nu - {\stackrel{\bullet~}{\Gamma}}{}^{\rho}{}_{\nu \mu} \, h^a{}_{\rho} = 0. \label{cacd1} \end{equation} In the class of frames in which the spin connection ${\stackrel{\bullet}{A}}{}^a{}_{b \mu}$ vanishes, it reduces to \begin{equation} \partial_{\mu}h^a{}_{\nu} - {\stackrel{\bullet~}{\Gamma}}{}^{\rho}{}_{\nu \mu} \, h^a{}_{\rho} = 0, \label{cacd0} \end{equation} which is the absolute, or distant parallelism condition, from where teleparallel gravity got its name. 
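As an explicit illustration of these relations, one can take a concrete tetrad and verify identity~(\ref{AwKbol}) component by component. The sympy sketch below assumes the diagonal tetrad $h^a{}_\mu = \mathrm{diag}(1, a(t), a(t), a(t))$ of a spatially flat FRW metric, written in the class of frames where ${\stackrel{\bullet}{A}}{}^a{}_{b \mu} = 0$ (the choice of tetrad is purely illustrative):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
n = 4
a = sp.Function('a')(t)              # FRW scale factor

# Diagonal tetrad h^a_mu, its inverse, and the induced spacetime metric
h    = sp.diag(1, a, a, a)
hinv = sp.diag(1, 1/a, 1/a, 1/a)
eta  = sp.diag(1, -1, -1, -1)
g    = h.T * eta * h                 # g_mu_nu = eta_ab h^a_mu h^b_nu
ginv = g.inv()

# Weitzenbock connection, Eq. (gecow) with vanishing spin connection:
# GamW[rho][nu][mu] = h_a^rho d_mu h^a_nu
GamW = [[[sum(hinv[b, rho] * sp.diff(h[b, nu], coords[mu]) for b in range(n))
          for mu in range(n)] for nu in range(n)] for rho in range(n)]

# Torsion, Eq. (TGaminusGa): T^rho_{nu mu} = GamW^rho_{mu nu} - GamW^rho_{nu mu}
T = [[[GamW[rho][mu][nu] - GamW[rho][nu][mu]
       for mu in range(n)] for nu in range(n)] for rho in range(n)]

def T_lul(m, rho, nu):
    # T_mu{}^rho{}_nu = g_{mu alpha} g^{rho beta} T^alpha_{beta nu}
    return sum(g[m, al] * ginv[rho, be] * T[al][be][nu]
               for al in range(n) for be in range(n))

# Contortion, Eq. (STIContorDef): K[rho][mu][nu] = K^rho_{mu nu}
K = [[[sp.Rational(1, 2) * (T_lul(mu, rho, nu) + T_lul(nu, rho, mu) - T[rho][mu][nu])
       for nu in range(n)] for mu in range(n)] for rho in range(n)]

# Levi-Civita connection of g, Eq. (lci)
GamLC = [[[sum(sp.Rational(1, 2) * ginv[s, rho] *
               (sp.diff(g[rho, nu], coords[mu]) + sp.diff(g[rho, mu], coords[nu])
                - sp.diff(g[mu, nu], coords[rho])) for rho in range(n))
           for nu in range(n)] for mu in range(n)] for s in range(n)]

# Identity (AwKbol), checked component by component
holds = all(sp.simplify(GamW[s][mu][nu] - GamLC[s][mu][nu] - K[s][mu][nu]) == 0
            for s in range(n) for mu in range(n) for nu in range(n))
```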
We notice finally that, for the specific case of the Weitzenb\"ock connection, identity~(\ref{prela0}) assumes the form \begin{equation} {\stackrel{\bullet~}{\Gamma}}{}^\rho{}_{\mu \nu} = {\stackrel{\circ}{\Gamma}}{}^\rho{}_{\mu \nu} + {\stackrel{\bullet}{K}}{}^\rho{}_{\mu \nu}, \label{AwKbol} \end{equation} where \begin{equation} {\stackrel{\bullet}{K}}{}^\rho{}_{\mu \nu} = \textstyle{\frac{1}{2}} \left({\stackrel{\bullet}{T}}{}_\mu{}^\rho{}_\nu + {\stackrel{\bullet}{T}}{}_\nu{}^\rho{}_\mu - {\stackrel{\bullet}{T}}{}^\rho{}_{\mu \nu} \right) \label{STIContorDef} \end{equation} is the contortion of the Weitzenb\"ock torsion \begin{equation} {\stackrel{\bullet}{T}}{}^\rho{}_{\nu \mu} = {\stackrel{\bullet~}{\Gamma}}{}^{\rho}{}_{\mu \nu} - {\stackrel{\bullet~}{\Gamma}}{}^{\rho}{}_{\nu \mu}. \label{TGaminusGa} \end{equation} \subsection{Teleparallel Lagrangian} The lagrangian density of teleparallel gravity is \cite{maluf94} \begin{equation} {\stackrel{~\bullet}{\mathcal L}}{} = \frac{h}{4 k} \; {\stackrel{\bullet}{T}}{}_{\rho\mu\nu} \, {\stackrel{\bullet}{S}}{}^{\rho\mu\nu}, \label{gala} \end{equation} where \begin{equation} {\stackrel{\bullet}{S}}{}^{\rho\mu\nu} = - \, {\stackrel{\bullet}{S}}{}^{\rho\nu\mu} = {\stackrel{\bullet}{K}}{}^{\mu\nu\rho} - g^{\rho\nu}\,{\stackrel{\bullet}{T}}{}^{\sigma\mu}{}_{\sigma} + g^{\rho\mu}\,{\stackrel{\bullet}{T}}{}^{\sigma\nu}{}_{\sigma} \label{S} \end{equation} is the so--called superpotential, with \begin{equation} {\stackrel{\bullet}{K}}{}^{\nu}{}_{\rho \mu} = {\textstyle \frac{1}{2}} \left( {\stackrel{\bullet}{T}}{}_{\rho}{}^{\nu}{}_{\mu} + {\stackrel{\bullet}{T}}{}_{\mu}{}^{\nu}{}_{\rho} - {\stackrel{\bullet}{T}}{}^{\nu}{}_{\rho \mu} \right) \end{equation} the contortion tensor of the teleparallel torsion.
In terms of contortion it assumes the form \begin{equation} {\stackrel{~\bullet}{\mathcal L}}{} = {\frac{h}{2 k}} \left({\stackrel{\bullet}{K}}{}^{\mu \nu \rho} {\stackrel{\bullet}{K}}{}_{\rho \nu \mu} - {\stackrel{\bullet}{K}}{}^{\mu \rho}{}_\mu {\stackrel{\bullet}{K}}{}^\nu{}_{\rho \nu} \right). \label{teleMoller} \end{equation} Substituting ${\stackrel{\bullet}{K}}{}^{\rho\mu\nu}$, we find \begin{equation} {\stackrel{~\bullet}{\mathcal L}}{} = {\frac{h}{2 k}} \left(\textstyle{\frac{1}{4}} \; {\stackrel{\bullet}{T}}{}^\rho{}_{\mu\nu} \, {\stackrel{\bullet}{T}}{}_\rho{}^{\mu \nu} + \textstyle{\frac{1}{2}} \; {\stackrel{\bullet}{T}}{}^\rho{}_{\mu\nu} \, {\stackrel{\bullet}{T}}{}^{\nu \mu}{}_\rho - \, {\stackrel{\bullet}{T}}{}^\rho{}_{\mu\rho} \, {\stackrel{\bullet}{T}}{}^{\nu \mu}{}_\nu \right). \label{TeleLagra} \end{equation} The first term corresponds to the usual lagrangian of internal gauge theories. The existence of the other two terms is related to the soldered character of the bundle. In fact, the presence of a tetrad field allows internal and external indices to be treated on the same footing, and consequently new contractions turn out to be possible. In terms of algebraic--indexed torsion, the teleparallel lagrangian assumes the form \begin{equation} {\stackrel{~\bullet}{\mathcal L}}{} = {\frac{h}{2 k}} \left(\textstyle{\frac{1}{4}} \; {\stackrel{\bullet}{T}}{}^a{}_{bc} \, {\stackrel{\bullet}{T}}{}_a{}^{bc} + \textstyle{\frac{1}{2}} \; {\stackrel{\bullet}{T}}{}^a{}_{bc} \, {\stackrel{\bullet}{T}}{}^{c b}{}_a - \, {\stackrel{\bullet}{T}}{}^a{}_{ba} \, {\stackrel{\bullet}{T}}{}^{c b}{}_c \right). \label{TeleLagraBis} \end{equation} Notice that torsion is a Lorentz tensor~---~it transforms covariantly under local Lorentz transformations. It then follows that each term of this lagrangian is local Lorentz invariant, and consequently the whole lagrangian is also invariant independently of the numerical value of the coefficients. 
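The equality between the forms (\ref{teleMoller}), (\ref{TeleLagra}) and (\ref{gala}) of the lagrangian is a purely algebraic identity in the torsion tensor, so it can be tested with random numbers. In the numpy sketch below all indices are raised and lowered with the identity matrix (an allowed simplification for this check, since the identity holds for any metric used consistently), and the overall factor $h/2k$ is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4

# Random torsion T^rho_{mu nu}, antisymmetric in the last two indices.
# Upper and lower index positions coincide here (identity metric).
T = rng.standard_normal((n, n, n))
T = 0.5 * (T - T.swapaxes(1, 2))

# Contortion: K^rho_{mu nu} = (T_mu^rho_nu + T_nu^rho_mu - T^rho_{mu nu}) / 2
K = 0.5 * (np.transpose(T, (1, 0, 2)) + np.transpose(T, (1, 2, 0)) - T)

# Contortion form, Eq. (teleMoller)
w = np.einsum('mrm->r', K)                    # trace K^{mu rho}_mu
L_K = np.einsum('abc,cba->', K, K) - w @ w

# Torsion form, Eq. (TeleLagra)
v = np.einsum('rmr->m', T)                    # trace T^rho_{mu rho}
L_T = (0.25 * np.einsum('rmn,rmn->', T, T)
       + 0.5 * np.einsum('rmn,nmr->', T, T)
       - v @ v)

# Superpotential form, Eq. (gala): L = (1/4) T_{rho mu nu} S^{rho mu nu}
delta = np.eye(n)
S = (np.transpose(K, (2, 0, 1))               # K^{mu nu rho}, reindexed as [rho, mu, nu]
     - np.einsum('rn,m->rmn', delta, v)
     + np.einsum('rm,n->rmn', delta, v))
L_S = 0.25 * np.einsum('rmn,rmn->', T, S)

forms_agree = np.isclose(L_K, L_T) and np.isclose(L_K, 2 * L_S)
```

The relative factor of 2 between the (\ref{gala}) and (\ref{teleMoller}) forms is exactly the one between the prefactors $h/4k$ and $h/2k$ in the text.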
\subsection{Equivalence with Einstein--Hilbert} \label{sec:EquivWithEinsteinHilbert} As we have already discussed, the curvature of the Weitzenb\"ock connection vanishes identically: \begin{equation} {\stackrel{\bullet}{R}}{}^\rho{}_{\lambda\nu\mu} \equiv \partial_\nu {\stackrel{\bullet~}{\Gamma}}{}^\rho{}_{\lambda \mu} - \partial_\mu {\stackrel{\bullet~}{\Gamma}}{}^\rho{}_{\lambda \nu} + {\stackrel{\bullet~}{\Gamma}}{}^\rho{}_{\eta \nu} {\stackrel{\bullet~}{\Gamma}}{}^\eta{}_{\lambda \mu} - {\stackrel{\bullet~}{\Gamma}}{}^\rho{}_{\eta \mu} {\stackrel{\bullet~}{\Gamma}}{}^\eta{}_{\lambda \nu} = 0. \vspace{8pt} \end{equation} Substituting the relation \begin{equation} {\stackrel{\bullet~}{\Gamma}}{}^{\rho}{}_{\mu \nu} = {\stackrel{\circ}{\Gamma}}{}^{\rho} {}_{\mu \nu} + {\stackrel{\bullet}{K}}{}^{\rho}{}_{\mu \nu}, \label{rela0bis} \end{equation} we find \begin{equation} {\stackrel{\bullet}{R}}{}^{\rho}{}_{\theta \mu \nu} \equiv {\stackrel{\circ}{R}}{}^{\rho}{}_{\theta \mu \nu} + {\stackrel{\bullet}{Q}}{}^{\rho}{}_{\theta \mu \nu} = 0, \label{relar} \end{equation} where ${\stackrel{\circ}{R}}{}^{\rho}{}_{\theta \mu \nu}$ is the curvature of the Levi--Civita connection, and \begin{eqnarray} {\stackrel{\bullet}{Q}}{}^{\rho} {}_{\theta \mu \nu} = \partial_{\mu}{}{\stackrel{\bullet}{K}}{}^{\rho}{}_{\theta \nu} - \partial_{\nu}{}{\stackrel{\bullet}{K}}{}^{\rho}{}_{\theta \mu} + {\stackrel{\bullet~}{\Gamma}}{}^{\rho}{}_{\sigma \mu} \, {\stackrel{\bullet}{K}}{}^{\sigma}{}_{\theta \nu} - {\stackrel{\bullet~}{\Gamma}}{}^{\rho}{}_{\sigma \nu} \, {\stackrel{\bullet}{K}}{}^{\sigma}{}_{\theta \mu}~~~~~~~\quad \nonumber \\ - \, {\stackrel{\bullet~}{\Gamma}}{}^{\sigma}{}_{\theta \mu} \, {\stackrel{\bullet}{K}}{}^{\rho}{}_{\sigma \nu} + {\stackrel{\bullet~}{\Gamma}}{}^{\sigma}{}_{\theta \nu} \, {\stackrel{\bullet}{K}}{}^{\rho}{}_{\sigma \mu} + {\stackrel{\bullet}{K}}{}^{\rho}{}_{\sigma \nu} \, {\stackrel{\bullet}{K}}{}^{\sigma}{}_{\theta \mu} - 
{\stackrel{\bullet}{K}}{}^{\rho}{}_{\sigma \mu} \, {\stackrel{\bullet}{K}}{}^{\sigma}{}_{\theta \nu} \label{qdk} \vspace{8pt} \end{eqnarray} is a tensor written in terms of the Weitzenb\"ock connection only. By taking appropriate contractions, the scalar version of identity (\ref{relar}) is found to be \begin{equation} -\, {\stackrel{\circ}{R}}{} = {\stackrel{\bullet}{Q}}{} \equiv \left({\stackrel{\bullet}{K}}{}^{\mu \nu \rho} {\stackrel{\bullet}{K}}{}_{\rho \nu \mu} - {\stackrel{\bullet}{K}}{}^{\mu \rho}{}_\mu {\stackrel{\bullet}{K}}{}^\nu{}_{\rho \nu} \right) + \frac{2}{h} \, \partial_\mu \left(h \, {\stackrel{\bullet}{T}}{}^{\nu \mu}{}_\nu\right). \end{equation} Comparing with the teleparallel lagrangian~(\ref{teleMoller}), we see that \begin{equation} {\stackrel{~\bullet}{\mathcal L}}{} = {\stackrel{\circ}{\mathcal L}}{} - \partial_\mu \Big(\frac{h}{k} \; {\stackrel{\bullet}{T}}{}^{\nu \mu}{}_\nu \Big), \label{LagraEquiva} \end{equation} where \begin{equation} {\stackrel{\circ}{\mathcal L}}{} = -\, \frac{h}{2 k} \; {\stackrel{\circ}{R}}{} \label{e-hl} \end{equation} is the Einstein--Hilbert lagrangian of general relativity. Up to a divergence, therefore, the lagrangian of teleparallel gravity is equivalent to the lagrangian of general relativity. To understand the presence of a divergence term between the two lagrangians, let us recall that the Einstein--Hilbert lagrangian (\ref{e-hl}) depends on the metric, as well as on the first and second derivatives of the metric. Equivalently, in the context of the tetrad formalism, we can say that it depends on the tetrad, as well as on the first and second derivatives of the tetrad field. The terms containing second derivatives, however, reduce to a divergence term \cite{landau}. 
In consequence, it is possible to rewrite the Einstein--Hilbert lagrangian in a form stating this aspect explicitly: \begin{equation} {\stackrel{\circ}{\mathcal L}}{} = {\stackrel{\circ}{\mathcal L}}{}_1 + \partial_\mu (\sqrt{-g} \, w^\mu), \end{equation} where ${\stackrel{\circ}{\mathcal L}}{}_1$ is a lagrangian that depends solely on the tetrad and on its first derivatives, and $w^\mu$ is a four--vector. On the other hand, the teleparallel lagrangian (\ref{TeleLagraBis}) depends only on the tetrad and on its first derivative. The divergence in the equivalence relation (\ref{LagraEquiva}) is then necessary to remove the terms containing second derivatives of the tetrad from the Einstein--Hilbert lagrangian. \subsection{Field Equations} \label{sec:FieldEquations} Consider now the lagrangian \begin{equation} {\mathcal L} = {\stackrel{~\bullet}{\mathcal L}}{} + {\mathcal L}_s, \end{equation} with ${\mathcal L}_s$ the lagrangian of a general source field. Variation with respect to the gauge potential $B^a{}_\rho$ ~---~or equivalently, with respect to the tetrad field $h^a{}_\mu$~---~yields the teleparallel version of the gravitational field equation \cite{livro2} \begin{equation} \partial_\sigma(h {\stackrel{\bullet}{S}}{}_a{}^{\rho \sigma}) - k \, h {\stackrel{~\bullet}{J}}{}_{a}{}^{\rho} = k \, h\, {\Theta}_{a}{}^{\rho}. 
\label{tfe10} \end{equation} In this equation, \begin{equation} h {\stackrel{\bullet}{S}}{}_a{}^{\rho \sigma} \equiv - \; k \; \frac{\partial {{\stackrel{~\bullet}{\mathcal L}}{}}}{\partial (\partial_\sigma h^a{}_{\rho})} = {\stackrel{\bullet}{K}}{}^{\rho \sigma}{}_{a} - h_{a}{}^{\sigma} \, {\stackrel{\bullet}{T}}{}^{\nu \rho}{}_{\nu} + h_{a}{}^{\rho} \, {\stackrel{\bullet}{T}}{}^{\nu \sigma}{}_{\nu} \end{equation} is the superpotential, whereas the term \begin{equation} h {\stackrel{~\bullet}{J}}{}_{a}{}^{\rho} \equiv -\, \frac{\partial {\stackrel{~\bullet}{\mathcal L}}{}}{\partial h^a{}_{\rho}} = \frac{1}{k} \, h_a{}^{\mu} \, {\stackrel{\bullet}{S}}{}_c{}^{\nu \rho} \, {\stackrel{\bullet}{T}}{}^c{}_{\nu \mu} - \frac{h_a{}^{\rho}}{h} \, {\stackrel{~\bullet}{\mathcal L}}{} + \frac{1}{k} \, {\stackrel{\bullet}{A}}{}^c{}_{a \sigma} {\stackrel{\bullet}{S}}{}_c{}^{\rho \sigma} \label{ptem10} \end{equation} stands for the gauge current, which in this case represents the Noether energy--momentum density of gravitation itself \cite{gemt9}. Finally, \begin{equation} h\, {\Theta}_{a}{}^{\rho} = -\, \frac{\delta {\mathcal L}_s}{\delta h^a{}_{\rho}} \equiv - \left( \frac{\partial {\mathcal L}_s}{\partial h^a{}_{\rho}} - \partial_\mu \frac{\partial {\mathcal L}_s}{\partial (\partial_\mu h^a{}_{\rho})} \right) \label{memt1} \end{equation} is the source energy--momentum tensor. Due to the anti--symmetry of the superpotential in the last two indices, the total --- that is, gravitational plus source --- energy--momentum density is conserved in the ordinary sense: \begin{equation} \partial_\rho \big(h {\stackrel{~\bullet}{J}}{}_{a}{}^{\rho} + h\, {\Theta}_{a}{}^{\rho} \big) = 0. \end{equation} The left--hand side of the gravitational field equation~(\ref{tfe10}) depends on the Weitzenb\"ock connection only.
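The conservation law above follows from the antisymmetry alone: the field equation~(\ref{tfe10}) gives $k \, h\, ({\stackrel{~\bullet}{J}}{}_{a}{}^{\rho} + {\Theta}_{a}{}^{\rho}) = \partial_\sigma (h {\stackrel{\bullet}{S}}{}_a{}^{\rho \sigma})$, and the double divergence of any object antisymmetric in a pair of indices vanishes identically. That last step can be checked symbolically (the function names below are arbitrary placeholders):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
n = 4

# Arbitrary functions, antisymmetrized in the pair (rho, sigma)
F = [[sp.Function(f'S{r}{s}')(t, x, y, z) for s in range(n)] for r in range(n)]
S = [[F[r][s] - F[s][r] for s in range(n)] for r in range(n)]

# d_rho d_sigma S^{rho sigma}: symmetric second derivatives contracted
# with an antisymmetric index pair give zero identically
div2 = sp.simplify(sum(sp.diff(S[r][s], coords[r], coords[s])
                       for r in range(n) for s in range(n)))
vanishes = (div2 == 0)
```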
Using the identity~(\ref{rela0bis}), through a lengthy but straightforward calculation, it can be rewritten in terms of the Levi--Civita connection only: \begin{equation} \partial_\sigma \big(h\, {\stackrel{\bullet}{S}}{}_a{}^{\rho \sigma}\big) - k \, h {\stackrel{~\bullet}{J}}{}_{a}{}^{\rho} = h\, \big({\stackrel{\circ}{R}}_a{}^{\rho} - \textstyle{\frac{1}{2}} \, h_a{}^{\rho} \, {\stackrel{\circ}{R}} \big). \label{ident} \end{equation} We see from this expression that, as expected due to the equivalence between the corresponding lagrangians, the teleparallel field equation (\ref{tfe10}) is equivalent to Einstein's field equation \begin{equation} {\stackrel{\circ}{R}}{}_a{}^{\rho} - \textstyle{\frac{1}{2}} \, h_a{}^{\rho} \, {\stackrel{\circ}{R}}{} = k \; {\Theta}_{a}{}^{\rho}. \end{equation} Observe that the energy--momentum tensor appears as the source in both theories: as the source of curvature in general relativity, and as the source of torsion in teleparallel gravity. This shows that, according to teleparallel gravity, curvature and torsion are related to the same degrees of freedom of the gravitational field. \subsection{How Does Teleparallel Gravity Describe Gravitation?} \label{sec:PartGrav} Let us take the geodesic equation of general relativity: \begin{equation} \frac{du^a}{ds} + {\stackrel{~\circ}{A}}{}^a{}_{b \nu} \, u^b \, u^\nu = 0. \label{GeReGeoEqu} \end{equation} Substituting the identity \begin{equation} {\stackrel{~\circ}{A}}{}^a{}_{b \nu} = {\stackrel{\bullet}{A}}{}^a{}_{b \nu} - {\stackrel{\bullet}{K}}{}^a{}_{b \nu}, \end{equation} we obtain \begin{equation} \frac{du^a}{ds} + {\stackrel{\bullet}{A}}{}^a{}_{b \nu} \, u^b \, u^\nu = {\stackrel{\bullet}{K}}{}^a{}_{b \nu} \, u^b \, u^\nu. \label{TeleForceEqu} \end{equation} This is the teleparallel equation of motion of a particle of mass $m$ in a gravitational field --- as seen from a general Lorentz frame. It is a {\em force equation}, with contortion playing the role of gravitational force. 
The inertial forces coming from the frame non--inertiality are represented by the connection on the left-hand side, which is non-covariant by its very nature. In teleparallel gravity, therefore, whereas the gravitational effects are described by a covariant force, the non--inertial effects of the frame remain {\it geometrized} in the sense of general relativity, and are represented by an inertia--related Lorentz connection. Notice that in the geodesic equation~(\ref{GeReGeoEqu}), both inertial and gravitational effects are described by the connection term of the left--hand side. Considering that the teleparallel force equation and the geodesic equation of general relativity are formally the same, the teleparallel description of the gravitational interaction is found to be equivalent to the description of general relativity. There are conceptual differences, though. In general relativity, a theory fundamentally based on the weak equivalence principle, curvature is used to {\it geometrize} the gravitational interaction. The gravitational interaction in this case is described by letting (spinless) particles follow the curvature of spacetime. Geometry replaces the concept of force, and the trajectories are determined, not by force equations, but by geodesics. Teleparallel gravity, on the other hand, attributes gravitation to torsion. Torsion, however, accounts for gravitation not by geometrizing the interaction, but by acting as a force. In consequence, there are no geodesics in teleparallel gravity, only force equations quite analogous to the Lorentz force equation of electrodynamics \cite{AndPer976}. This is actually an expected result because, like electrodynamics, teleparallel gravity is a gauge theory. \section{Final Remarks} Although equivalent to general relativity, teleparallel gravity presents several distinctive features and achievements.
For example, according to the geometric description of general relativity, which makes use of the torsionless Levi--Civita connection, there is a widespread belief that gravity produces a curvature {in spacetime}. In consequence, the Universe as a whole should be curved. However, the advent of teleparallel gravity breaks this paradigm. In fact, it becomes a matter of convention to describe the gravitational interaction in terms of curvature or in terms of torsion. This means that the attribution of curvature to spacetime is not an absolute, but a model--dependent statement. Here, we will discuss two additional points: the possibility of separating inertial effects from gravitation, and the existence of a true gravitational variable in the usual sense of classical field theory. \subsection{Separating Inertial Effects from Gravitation} \label{SepaInerGrav} Let us consider again the tetrad field \begin{equation} h^a{}_\mu = {\stackrel{~\bullet}{\mathcal D}}{}_\mu x^a + B^a{}_\mu. \label{Tetra6} \end{equation} Whereas the first term on the right--hand side is purely inertial, the second is purely gravitational. This means that both inertia and gravitation are included in $h^a{}_\mu$. As a consequence, the coefficient of anholonomy of $h_a$, \begin{equation} f^c{}_{a b} = h_a{}^{\mu} h_b{}^{\nu} (\partial_\nu h^c{}_{\mu} - \partial_\mu h^c{}_{\nu}), \label{fcabBIS} \end{equation} will also represent both inertia and gravitation. Of course, the same is true for the spin connection of general relativity, \begin{equation}% {\stackrel{~\circ}{A}}{}^{a}{}_{b c} = \textstyle{\frac{1}{2}} (f_b{}^a{}_c + f_c{}^a{}_b - f^{a}{}_{b c}). \label{tobetaken3} \end{equation}% In a local frame in which inertial effects exactly compensate gravitation, that connection vanishes, \begin{equation} {\stackrel{~\circ}{A}}{}^{a}{}_{b c} \doteq 0, \label{146} \end{equation} and gravitation becomes locally undetectable. 
On the other hand, considering that the teleparallel spin connection represents inertial effects only, the identity \begin{equation} {\stackrel{~\circ}{A}}{}^{a}{}_{b c} = {\stackrel{\bullet}{A}}{}^{a}{}_{b c} - {\stackrel{\bullet}{K}}{}^{a}{}_{b c} \label{splittingPartimec} \end{equation} actually corresponds to a separation of ${\stackrel{~\circ}{A}}{}^{a}{}_{b c}$ into inertial and gravitational parts \cite{Einstein056}. In fact, in the local frame in which (\ref{146}) holds, the identity (\ref{splittingPartimec}) becomes \begin{equation} {\stackrel{\bullet}{A}}{}^{a}{}_{b c} \doteq {\stackrel{\bullet}{K}}{}^{a}{}_{b c}. \end{equation} This expression shows explicitly that, in such a local frame, inertial effects (left--hand side) exactly compensate gravitation (right--hand side). It is interesting to remark that, although the inertial part of ${\stackrel{~\circ}{A}}{}^{a}{}_{b c}$ does not contribute to some physical quantities, like curvature and torsion, it does contribute to others. An example is the energy--momentum density of gravitation, whose expression in general relativity always includes, in addition to the energy--momentum density of gravity itself, the energy--momentum density of inertial effects, which is non--tensorial by its very nature. This is the reason why in general relativity this density always shows up as a pseudotensor.\footnote{A sample of different pseudotensors can be found, for example, in Refs.~\cite{pseudo1, pseudo2, pseudo3, pseudo4, pseudo5, pseudo6, pseudo7, pseudo8, pseudo9}.} Furthermore, owing to its odd asymptotic behavior, the contribution of the inertial effects often yields unphysical (divergent or trivial) results for the total energy and momentum of a gravitational system. As a consequence, it is in general necessary to make use of a regularizing process to eliminate the spurious contribution coming from those inertial effects \cite{maluf10}.
Due to the possibility of separating inertial effects from gravitation, in teleparallel gravity it is possible to write down a purely gravitational energy--momentum density which, as for any other field, is a true tensor. The existence of such a tensorial density allows one to compute unequivocally the energy and momentum of any gravitational system without the necessity of a regularization process \cite{regW10}. \subsection{A Genuine Gravitational Connection} \label{GenuConne} Due to the fact that the spin connection of general relativity involves both gravitation and inertial effects, it is always possible to find a local frame in which inertial effects exactly compensate gravitation, in such a way that the connection vanishes at a point: \begin{equation} {\stackrel{~\circ}{A}}{}^{a}{}_{b c} \doteq 0. \end{equation} Since we know there is gravitation at that point, such a connection is not a genuine gravitational variable in the usual sense of field theory. Notice, in particular, that any approach to quantum gravity based on this connection will necessarily include a quantization of the inertial forces --- whatever that may come to mean. Considering furthermore the divergent asymptotic behavior of the inertial effects, such an approach will likely face additional difficulties. Notice furthermore that the connection behavior of ${\stackrel{~\circ}{A}}{}^{a}{}_{b c}$ under local Lorentz transformations is due to its inertial content, not to gravitation itself. This can be seen from the decomposition (\ref{splittingPartimec}): whereas the first term on the right--hand side represents its inertial, non--covariant part, the second represents its gravitational part, which is a tensor. This means that it is not a genuine gravitational connection --- its gravitational content is covariant --- but just an inertial connection. One should not expect, therefore, any dynamical effect coming from a ``gaugefication'' of the Lorentz group.
In this sense, local Lorentz transformations are similar to diffeomorphisms, another symmetry empty of dynamical meaning. As a matter of fact, these two kinds of transformations are used interchangeably in the metric formulation of general relativity, leading sometimes to the somewhat strange concept of ``locally inertial coordinate system''. This concept makes sense only if local Lorentz transformations between frames are considered on an equal footing with general coordinate transformations. Of course, this can be done as both transformations are empty of dynamical meaning. In teleparallel gravity, on the other hand, the gravitational field is not represented by Lorentz connections, but by a translational--valued gauge potential $B^a{}_\mu$, the non--trivial part of the tetrad field. In this theory, Lorentz connections keep their special relativistic role, representing inertial effects only. Considering that the translational gauge potential represents gravitation only, to the exclusion of inertial effects, it cannot be made to vanish at a point through the choice of an appropriate frame. It is, for this reason, a true field variable in the usual sense of classical field theory. It is, furthermore, a genuine gravitational connection, and consequently the natural field--variable to be quantized in any approach to quantum gravity~\cite{livro2}. \section*{Acknowledgments} The author would like to thank R. Aldrovandi for useful discussions. He would also like to thank FAPESP, CAPES and CNPq for partial financial support.
\section{Introduction} \subsection{Background} The Cauchy problem of the nonlinear Schr\"odinger equation (NLS): \begin{equation} \begin{cases}\label{NLS1} i \partial_t u + \Delta u = \pm |u|^{p-1}u \\ u\big|_{t = 0} = u_0 \in H^s(\mathbb{R}^d), \end{cases} \qquad ( t, x) \in \mathbb{R} \times \mathbb{R}^d \end{equation} \noindent has been studied extensively over recent years. One of the key ingredients in studying \eqref{NLS1} is the dispersive effect of the associated linear flow. Such dispersion is often expressed in terms of the Strichartz estimates (see Lemma \ref{LEM:Str} below), which have played an important role in studying various problems on \eqref{NLS1}, in particular, local and global well-posedness issues. It is well-known that \eqref{NLS1} is invariant under several symmetries. In the following, we concentrate on the dilation symmetry. The dilation symmetry states that if $u(t, x)$ is a solution to \eqref{NLS1} on $\mathbb{R}^d$ with an initial condition $u_0$, then $u^\lambda(t, x) = \lambda^{-\frac{2}{p-1}} u (\lambda^{-2}t, \lambda^{-1}x)$ is also a solution to \eqref{NLS1} with the $\lambda$-scaled initial condition $u_0^\lambda(x) = \lambda^{-\frac{2}{p-1}} u_0 (\lambda^{-1}x)$. Associated to the dilation symmetry, there is a scaling-critical Sobolev index $s_c := \frac d2 - \frac{ 2}{p-1}$ such that the homogeneous $\dot{H}^{s_c}$-norm is invariant under the dilation symmetry. For example, when $p = \frac{4}{d-2}+1$, we have $s_c = 1$ and \eqref{NLS1} is called energy-critical. It is known that \eqref{NLS1} is ill-posed in the supercritical regime, that is, in $H^s$ for $s < s_c$; see \cite{CCT, BGT2, Carles, AC}. 
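Indeed, a quick computation confirms the invariance of the homogeneous $\dot{H}^{s_c}$-norm under the dilation symmetry. Since $u_0^\lambda(x) = \lambda^{-\frac{2}{p-1}} u_0(\lambda^{-1}x)$, we have $\widehat{u_0^\lambda}(\xi) = \lambda^{d - \frac{2}{p-1}} \widehat{u_0}(\lambda \xi)$, and hence, after the change of variables $\xi \mapsto \lambda^{-1}\xi$,
\begin{equation*}
\| u_0^\lambda \|_{\dot{H}^{s_c}(\mathbb{R}^d)}^2
= \int_{\mathbb{R}^d} |\xi|^{2s_c} \lambda^{2d - \frac{4}{p-1}} |\widehat{u_0}(\lambda \xi)|^2 \, d\xi
= \lambda^{d - \frac{4}{p-1} - 2s_c} \| u_0 \|_{\dot{H}^{s_c}(\mathbb{R}^d)}^2.
\end{equation*}

\noindent The exponent $d - \frac{4}{p-1} - 2s_c$ vanishes precisely when $s_c = \frac{d}{2} - \frac{2}{p-1}$.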
In an effort to study the invariance of the Gibbs measure for the defocusing (Wick ordered) cubic NLS on $\mathbb{T}^2$, Bourgain \cite{BO7} considered random initial data of the form: \begin{equation} u_0^\omega(x) = \sum_{n \in \mathbb{Z}^2} \frac{g_n(\o)}{\sqrt{1+|n|^2}}e^{i n \cdot x}, \label{I1} \end{equation} \noindent where $\{g_n\}_{n \in \mathbb{Z}^2}$ is a sequence of independent complex-valued standard Gaussian random variables. The function \eqref{I1} represents a typical element in the support of the Gibbs measure, more precisely, in the support of the Gaussian free field on $\mathbb{T}^2$ associated to this Gibbs measure, and is critical with respect to the scaling. With a combination of deterministic PDE techniques and probabilistic arguments, Bourgain showed that the (Wick ordered) cubic NLS on $\mathbb{T}^2$ is well-posed almost surely with respect to random initial data \eqref{I1}. Burq-Tzvetkov \cite{BT2} further explored the study of Cauchy problems with more general random initial data. They considered the cubic nonlinear wave equation (NLW) on a three dimensional compact Riemannian manifold $M$ without a boundary, where the scaling-critical Sobolev index $s_c$ is given by $s_c = \frac 12 $. Given $u_0(x) = \sum_{n = 1}^\infty c_n e_n(x) \in H^s(M)$, $s\geq \frac 14$, they proved almost sure local well-posedness with random initial data of the form:\footnote{For NLW, one needs to specify $(u, \partial_t u)|_{t = 0}$ as an initial condition. For simplicity of presentation, we only displayed $u|_{t = 0}$ in \eqref{I2}.} \begin{equation} u_0^\omega (x) = \sum_{n = 1}^\infty g_n (\omega) c_n e_n(x) \label{I2} \end{equation} \noindent where $\{g_n\}_{n = 1}^\infty$ is a sequence of independent mean-zero random variables with a uniform bound on the fourth moments and $\{e_n\}_{n = 1}^\infty$ is an orthonormal basis of $L^2(M)$ consisting of the eigenfunctions of the Laplace-Beltrami operator. 
It was also shown that $u_0^\o$ in \eqref{I2} has the same Sobolev regularity as the original function $u_0$ and is not smoother, almost surely. In particular, if $u_0 \in H^s(M) \setminus H^\frac{1}{2}(M)$, their result implies almost sure local well-posedness in the supercritical regime. There are several works on Cauchy problems of evolution equations with random data that followed these results, including some on almost sure global well-posedness: \cite{Bo97, Thomann, CO, BTT, Deng, DS1, BT3, NPS, DS2, R, BTT2, BB1, BB2, NS, PRT, LM}. We point out that many of these works are on compact domains, where there is a countable basis of eigenfunctions of the Laplacian and thus there is a natural way to introduce a randomization. On $\mathbb{R}^d$, randomizations were introduced with respect to a countable basis of eigenfunctions of the Laplacian with a confining potential such as the harmonic oscillator $\Delta - |x|^2$; we note that functions in Sobolev spaces associated to the Laplacian with a confining potential have an extra decay in space. Our goal is to introduce a randomization for functions in the usual Sobolev spaces on $\mathbb{R}^d$ without such extra decay. For this purpose, we first review some basic notions and facts concerning the so-called \emph{modulation spaces} of time-frequency analysis. \subsection{Modulation spaces} The modulation spaces were introduced by Feichtinger \cite{Fei} in the early eighties. In subsequent collaborations with Gr\"ochenig \cite{FG1, FG2}, they established the basic theory of these function spaces, in particular their invariance, continuity, embeddings, and convolution properties. The difference between the Besov spaces and the modulation spaces consists in the geometry of the frequency space employed: the dyadic annuli in the definition of the former spaces are replaced by unit cubes $Q_n$ centered at $n\in \mathbb{Z}^d$ in the definition of the latter ones.
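As a rough illustration of this difference, note that the dyadic annulus $\{2^{j-1} \leq |\xi| \leq 2^{j+1}\}$ has volume comparable to $2^{jd}$ and hence meets about $2^{jd}$ of the unit cubes $Q_n$:
\begin{equation*}
\#\big\{ n \in \mathbb{Z}^d : \, Q_n \cap \{ 2^{j-1} \leq |\xi| \leq 2^{j+1} \} \neq \emptyset \big\} \sim 2^{jd}.
\end{equation*}

\noindent A single dyadic frequency block of a distribution is therefore split into roughly $2^{jd}$ pieces in the uniform decomposition, each piece carrying its own weight.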
Thus, the modulation spaces arise via a uniform partition of the frequency space $\mathbb{R}^d = \bigcup_{n \in \mathbb{Z}^d} Q_n$, which is commonly referred to as a~{\it Wiener decomposition} \cite{W}. In certain contexts, this decomposition allows for a finer analysis by effectively capturing the time-frequency concentration of a distribution. For $x, \xi\in \mathbb{R}^d$, let $\mathcal Fu (\xi)=\widehat u(\xi)=\int_{\mathbb{R}^d} u(x)e^{-2\pi ix\cdot\xi}\,dx$ denote the Fourier transform of a distribution $u$. Typically, the (weighted) modulation spaces $M^{p, q}_s(\mathbb{R}^d)$, $p, q>0, s\in \mathbb{R}$, are defined by imposing the $L^p(dx)L^q(\jb{\xi}^s d\xi)$ integrability of the short-time (or windowed) Fourier transform of a distribution $V_\phi u(x, \xi) :=\mathcal F(u\overline{T_x\phi})(\xi)$. Here, $\langle \xi\rangle^s=(1+|\xi|^2)^{\frac s2}$, $\phi$ is some fixed non-zero Schwartz function, and $T_x$ denotes the translation defined by $T_x(\phi)(y) = \phi(y - x)$. When $s=0$, one simply writes $M^{p,q}$. Modulation spaces satisfy some desirable properties: they are quasi-Banach spaces, two different windows $\phi_1, \phi_2$ yield equivalent norms, $M_s^{2,2}(\mathbb{R}^d)=H^s(\mathbb{R}^d)$, $(M_s^{p,q}(\mathbb{R}^d))'=M_{-s}^{p',q'}(\mathbb{R}^d)$, $M_{s_1}^{p_1, q_1}(\mathbb{R}^d)\subset M_{s_2}^{p_2, q_2}(\mathbb{R}^d)$ for $s_1 \geq s_2$, $p_1\leq p_2$, and $q_1\leq q_2$, and $\mathcal S(\mathbb{R}^d)$ is dense in $M_s^{p,q}(\mathbb{R}^d)$. We prefer to use an equivalent norm on the modulation space $M^{p, q}_s$, which is induced by a corresponding Wiener decomposition of the frequency space. Given $\psi \in \mathcal{S}(\mathbb{R}^d)$ such that $\supp \psi \subset [-1, 1]^d$ and $\sum_{n \in \mathbb{Z}^d} \psi(\xi -n) \equiv 1$, let \begin{equation} \|u\|_{M_s^{p, q}(\mathbb{R}^d)} = \big\| \jb{n}^s \|\psi(D-n) u \|_{L_x^p(\mathbb{R}^d)} \big\|_{\l^q_n(\mathbb{Z}^d)}. 
\label{mod2} \end{equation} \noindent Note that $\psi(D-n)$ is just a Fourier multiplier with symbol $\chi_{Q_n}$ conveniently smoothed: \[\psi(D-n)u(x)=\int_{\mathbb{R}^d}\psi(\xi-n)\widehat u(\xi)e^{2\pi ix\cdot \xi}\,d\xi.\] It is worthwhile to compare the definition \eqref{mod2} with the one for the Besov spaces which uses a dyadic partition of the frequency domain. Let $\varphi_0, \varphi \in \mathcal{S}(\mathbb{R}^d)$ such that $\supp \varphi_0 \subset \{ |\xi| \leq 2\}$, $\supp \varphi \subset \{ \frac{1}{2}\leq |\xi| \leq 2\}$, and $ \varphi_0(\xi) + \sum_{j = 1}^\infty \varphi(2^{-j}\xi) \equiv 1.$ With $\varphi_j(\xi) = \varphi(2^{-j}\xi)$, we define the Besov spaces $B_s^{p, q}$ via the norm \begin{equation} \label{besov1} \|u\|_{B_s^{p, q}(\mathbb{R}^d) } = \big\| 2^{js} \|\varphi_j(D) u \|_{L^p(\mathbb{R}^d)} \big\|_{\l^q_j(\mathbb{Z}_{\geq 0})}. \end{equation} \noindent There are several known embeddings between the Besov, Sobolev, and modulation spaces; see, for example, Okoudjou \cite{Ok}, Toft \cite{To}, Sugimoto-Tomita \cite{suto2}, and Kobayashi-Sugimoto \cite{kosu}. \subsection{Randomization adapted to the Wiener decomposition} Given a function $\phi$ on $\mathbb{R}^d$, we have \[ \phi = \sum_{n \in \mathbb{Z}^d} \psi(D-n) \phi,\] \noindent where $\psi(D-n)$ is defined above. The identity above leads to a randomization that is naturally associated to the Wiener decomposition, and hence to the modulation spaces, as follows. Let $\{g_n\}_{n \in \mathbb{Z}^d}$ be a sequence of independent mean zero complex-valued random variables on a probability space $(\O, \mathcal{F}, P)$, where the real and imaginary parts of $g_n$ are independent and endowed with probability distributions $\mu_n^{(1)}$ and $\mu_n^{(2)}$. Then, we can define the \emph{Wiener randomization of $\phi$} by \begin{equation} \phi^\omega : = \sum_{n \in \mathbb{Z}^d} g_n (\omega) \psi(D-n) \phi. 
\label{R1} \end{equation} We note that L\"uhrmann-Mendelson \cite{LM} also considered a similar randomization of the form \eqref{R1} in the study of NLW on $\mathbb{R}^3$. See Remark \ref{REM:LM} below. The randomization in \cite{LM} stems from yet another one used by Zhang and Fang \cite{ZF} in their study of the Navier-Stokes equations. We point out, however, that the main purpose of our paper is to explain how the randomization of the form \eqref{R1} is naturally associated to the Wiener decomposition and hence the modulation spaces. See also our previous paper \cite{BP} in the periodic setting. Thus, from the perspective of time-frequency analysis, the Wiener randomization seems to be the ``right'' one. In the sequel, we make the following assumption: there exists $c>0$ such that \begin{equation} \bigg| \int_{\mathbb{R}} e^{\gamma x } d \mu_n^{(j)}(x) \bigg| \leq e^{c\gamma^2} \label{R2} \end{equation} \noindent for all $\gamma \in \mathbb{R}$, $n \in \mathbb{Z}^d$, $j = 1, 2$. Note that \eqref{R2} is satisfied by standard complex-valued Gaussian random variables, standard Bernoulli random variables, and any random variables with compactly supported distributions. It is easy to see that, if $\phi \in H^s(\mathbb{R}^d)$, then the randomized function $\phi^\o$ is almost surely in $H^s(\mathbb{R}^d)$; see Lemma \ref{LEM:Hs} below. One can also show that there is no smoothing upon randomization in terms of differentiability; see, for example, Lemma B.1 in \cite{BT2}. Instead, the main point of this randomization is its improved integrability; if $\phi \in L^2(\mathbb{R}^d)$, then the randomized function $\phi^\o$ is almost surely in $L^p(\mathbb{R}^d)$ for any finite $p \geq 2$; see Lemma \ref{LEM:Lp} below. Such results for random Fourier series go back to Paley and Zygmund \cite{PZ}; see also Kahane's book \cite{Kahane} and Ayache-Tzvetkov \cite{AT}.
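For instance, one can verify \eqref{R2} directly in the Gaussian case. If $\mu_n^{(j)}$ is a mean-zero Gaussian distribution with variance $\sigma_n^2 \leq \sigma_0^2$, then completing the square yields
\begin{equation*}
\int_{\mathbb{R}} e^{\gamma x} \, d\mu_n^{(j)}(x)
= \frac{1}{\sqrt{2\pi \sigma_n^2}} \int_{\mathbb{R}} e^{\gamma x - \frac{x^2}{2\sigma_n^2}} \, dx
= e^{\frac{\sigma_n^2 \gamma^2}{2}} \leq e^{\frac{\sigma_0^2 \gamma^2}{2}},
\end{equation*}

\noindent so that \eqref{R2} holds with $c = \frac{\sigma_0^2}{2}$. A similar computation applies to standard Bernoulli random variables, using $\cosh \gamma \leq e^{\gamma^2/2}$.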
\begin{remark}\rm One may fancy a randomization associated to Besov spaces, of the form: \begin{equation*} \phi^\omega : = \sum_{j = 0}^\infty g_j (\omega) \varphi_j(D) \phi. \end{equation*} \noindent In view of the Littlewood-Paley theory, such a randomization does not yield any improvement in differentiability or integrability and thus it is of no interest. \end{remark} \subsection{Main results.} The Wiener randomization of an initial condition allows us to establish some improvements of the Strichartz estimates. In turn, these probabilistic Strichartz estimates yield an almost sure well-posedness result for NLS. First, we recall the usual Strichartz estimates on $\mathbb{R}^d$ for the reader's convenience. We say that a pair $(q, r)$ is \emph{Schr\"odinger admissible} if it satisfies \begin{equation} \frac{2}{q} + \frac{d}{r} = \frac{d}{2} \label{Str2} \end{equation} \noindent with $2\leq q, r \leq \infty$ and $(q, r, d) \ne (2, \infty, 2)$. Let $S(t)= e^{it\Delta}$. Then, the following Strichartz estimates are known to hold. \begin{lemma}[\cite{Strichartz, Yajima, GV, KeelTao}]\label{LEM:Str} Let $(q, r)$ be Schr\"odinger admissible. Then, we have \begin{equation} \| S(t) \phi\|_{L^q_t L^r_x (\mathbb{R}\times \mathbb{R}^d)} \lesssim \|\phi\|_{L^2_x(\mathbb{R}^d)}. \label{Str1} \end{equation} \end{lemma} Next, we present improvements of the Strichartz estimates under the Wiener randomization. Proposition \ref{PROP:R2} will then be used for a local-in-time theory, while Proposition \ref{PROP:R5} is useful for small data global theory. The proofs of Propositions \ref{PROP:R2} and \ref{PROP:R5} are presented in Section 2. \begin{proposition}[Improved local-in-time Strichartz estimate]\label{PROP:R2} Given $\phi \in L^2(\mathbb{R}^d)$, let $\phi^\o$ be its randomization defined in \eqref{R1}, satisfying \eqref{R2}.
Then, given $2\leq q, r<\infty$, there exist $C, c>0$ such that \begin{align} P\Big( \|S(t) \phi^\omega\|_{L^q_t L^r_x([0, T]\times \mathbb{R}^d)}> \lambda\Big) \leq C\exp\bigg(-c\frac{\lambda^2}{T^{\frac{2}{q}}\|\phi\|_{L^2}^{2}}\bigg) \label{R2aa} \end{align} \noindent for all $T > 0$ and $\lambda > 0$. \end{proposition} \noindent In particular, by setting $\lambda = T^\theta \|\phi\|_{L^2}$, we have \begin{equation*} \|S(t) \phi^\o\|_{L^q_tL^r_x([0, T]\times \mathbb{R}^d)} \lesssim T^\theta \|\phi\|_{L^2(\mathbb{R}^d)} \end{equation*} \noindent outside a set of probability at most $ C \exp \big(-c T^{2\theta - \frac{2}{q}}\big).$ Note that, as long as $\theta < \frac{1}{q}$, this probability can be made arbitrarily small by letting $T\to 0$. Moreover, for fixed $T>0$ and any small $\varepsilon>0$, we have \[\|S(t) \phi^\o\|_{L^q_tL^r_x([0, T]\times \mathbb{R}^d)} \leq C_T \Big( \log \frac{1}{\varepsilon}\Big)^{\frac{1}{2}} \|\phi\|_{L^2}\] \noindent outside a set of probability $< \varepsilon$. The next proposition states an improvement of the Strichartz estimates in the global-in-time setting. \begin{proposition}[Improved global-in-time Strichartz estimate]\label{PROP:R5} Given $\phi \in L^2(\mathbb{R}^d)$, let $\phi^\o$ be its randomization defined in \eqref{R1}, satisfying \eqref{R2}. Given a Schr\"odinger admissible pair $(q, r)$ with $q, r < \infty$, let $\widetilde {r} \geq r$. Then, there exist $C, c>0$ such that \begin{align} P\Big( \|S(t) \phi^\omega\|_{L^q_t L^{\widetilde{r}}_x ( \mathbb{R} \times \mathbb{R}^d)} > \lambda\Big) \leq Ce^{-c \lambda^2 \|\phi\|_{L^2}^{-2}} \label{R5a} \end{align} \noindent for all $\lambda > 0$. In particular, given any small $\varepsilon > 0$, we have \[ \|S(t) \phi^\omega\|_{L^q_t L^{\widetilde{r}}_x ( \mathbb{R} \times \mathbb{R}^d)} \lesssim \Big( \log \frac{1}{\varepsilon}\Big)^\frac{1}{2} \|\phi\|_{L^2} \] \noindent outside a set of probability at most $\varepsilon$.
\end{proposition} We conclude this introduction by discussing an example of almost sure local well-posedness of NLS with randomized initial data below the scaling-critical regularity. In the following, we consider the energy-critical cubic NLS on $\mathbb{R}^4$: \begin{equation} \label{NLS4d}i \partial_t u + \Delta u = \pm |u|^{2}u, \quad \qquad (t, x) \in \mathbb{R} \times \mathbb{R}^4. \end{equation} \noindent Cazenave-Weissler \cite{CW} proved local well-posedness of \eqref{NLS4d} with initial data in the critical space $\dot H^1(\mathbb{R}^4)$. See Ryckman-Vi\c{s}an \cite{RV}, Vi\c{s}an \cite{Visan}, and Kenig-Merle \cite{KM} for global-in-time results. In the following, we state a local well-posedness result of \eqref{NLS4d} with random initial data below the critical space. More precisely, given $\phi \in H^s(\mathbb{R}^4) \setminus H^1(\mathbb{R}^4)$, $s \in( \frac {3}{5}, 1)$, and its randomization $\phi^\o$ defined in \eqref{R1}, we prove that \eqref{NLS4d} is almost surely locally well-posed with random initial data $\phi^\o$. Although $\phi$ and its randomization $\phi^\o$ lie in a supercritical regularity regime, the Wiener randomization essentially makes the problem {\it subcritical}. This is a common feature for many of the probabilistic well-posedness results. \begin{theorem}\label{THM:1} Let $s \in (\frac{3}{5}, 1)$. Given $\phi \in H^s(\mathbb{R}^4)$, let $\phi^\o$ be its randomization defined in \eqref{R1}, satisfying \eqref{R2}. Then, the cubic NLS \eqref{NLS4d} on $\mathbb{R}^4$ is almost surely locally well-posed with respect to the randomization $\phi^\omega$ as initial data.
More precisely, there exist $ C, c, \gamma>0$ and $\sigma = 1+$ such that for each $T\ll 1$, there exists a set $\O_T \subset \O$ with the following properties: \smallskip \begin{itemize} \item[\textup{(i)}] $P(\O\setminus\O_T) \leq C \exp\Big(-\frac{c}{T^{\gamma} \|\phi\|_{H^s}^{2}}\Big)$, \item[\textup{(ii)}] For each $\o \in \O_T$, there exists a (unique) solution $u$ to \eqref{NLS4d} with $u|_{t = 0} = \phi^\o$ in the class \[ S(t) \phi^\o + C([-T, T]: H^{\sigma} (\mathbb{R}^4)) \subset C([-T, T]:H^s(\mathbb{R}^4)).\] \end{itemize} \end{theorem} \noindent The details of the proof of Theorem~\ref{THM:1} are presented in Section 3. We discuss here a very brief outline of the argument. Denoting the linear and nonlinear parts of $u$ by $z (t) = z^\o(t) : = S(t) \phi^\o$ and $v(t) := u(t) - S(t) \phi^\o$ respectively, we can reduce \eqref{NLS4d} to \begin{equation} \begin{cases} i \partial_t v + \Delta v = \pm |v + z|^2(v+z)\\ v|_{t = 0} = 0. \end{cases} \label{NLS4d2} \end{equation} \noindent We then prove that the Cauchy problem \eqref{NLS4d2} is almost surely locally well-posed for $v$, viewing $z$ as a random forcing term. This is done by using the standard subcritical $X^{s, b}$-spaces with $b > \frac 12$ defined by \[ \|u\|_{X^{s, b}(\mathbb{R}\times \mathbb{R}^4)} = \| \jb{\xi}^s \jb{\tau + |\xi|^2}^b \widehat{u}(\tau, \xi)\|_{L^2_{\tau, \xi}(\mathbb{R}\times\mathbb{R}^4)}.\] \noindent We point out that the uniqueness in Theorem \ref{THM:1} refers to uniqueness of the nonlinear part $v(t) = u(t) - S(t) \phi^\o$ of a solution $u$. We conclude this introduction with several remarks. \begin{remark}\rm Theorem \ref{THM:1} holds for both defocusing and focusing cases (corresponding to the $+$ sign and the $-$ sign in \eqref{NLS1}, respectively) due to the local-in-time nature of the problem. 
\end{remark} \begin{remark}\rm Theorem \ref{THM:1} can also be proven with the variants of the $X^{s, b}$-spaces adapted to the $U^p$- and $V^p$-spaces introduced by Koch, Tataru, and their collaborators \cite{KochT, HHK, HTT11}. These spaces are designed to handle problems in critical regularities. We decided to present the proof with the classical subcritical $X^{s, b}$-spaces, $b > \frac 12$, to emphasize that the problem has become subcritical upon randomization. We should, however, point out that, with the spaces introduced by Koch and Tataru, we can also prove probabilistic small data global well-posedness and scattering as a consequence of the probabilistic global-in-time Strichartz estimates (Proposition \ref{PROP:R5}). See our paper \cite{BOP2} for an example of such results for the cubic NLS on $\mathbb{R}^d$, $d \geq 3$. It is of interest to consider almost sure global existence for \eqref{NLS4d}. While the mass of $v$ in \eqref{NLS4d2} has a global-in-time control, there is no energy conservation for $v$ and thus we do not know how to proceed at this point. In \cite{BOP2}, we establish almost sure global existence for \eqref{NLS4d}, assuming an a priori control on the $H^1$-norm of the nonlinear part $v$ of a solution. We also prove there, without any assumption, global existence with a large probability by considering a randomization, not on unit cubes but on dilated cubes this time. In the context of the defocusing cubic NLW on $\mathbb{R}^4$, one can obtain an a priori bound on the energy of the nonlinear part of a solution, see \cite{BT3}. As a consequence, the third author \cite{POC} proved almost sure global well-posedness of the energy-critical defocusing cubic NLW on $\mathbb{R}^4$ below the scaling critical regularity. \end{remark} \begin{remark}\rm In Theorem \ref{THM:1}, we simply used $\sigma = 1+$ as the regularity of the nonlinear part $v$. 
One can characterize the possible values of $\sigma$ in terms of the regularity $s < 1$ of $\phi$. However, for simplicity of presentation, we omit such a discussion. \end{remark} \begin{remark}\rm In probabilistic well-posedness results \cite{Bo2, Bo97, CO, NS} for NLS on $\mathbb{T}^d$, random initial data are assumed to be of the following specific form: \begin{equation} \label{I3} u_0^\omega(x) = \sum_{n \in \mathbb{Z}^d} \frac{g_n(\o)}{(1+|n|^2)^\frac{\alpha}{2}}e^{i n \cdot x}, \end{equation} \noindent where $\{g_n\}_{n \in \mathbb{Z}^d}$ is a sequence of independent complex-valued standard Gaussian random variables. The expression \eqref{I3} has a close connection to the study of invariant (Gibbs) measures and, hence, it is of importance. At the same time, due to the lack of a full range of Strichartz estimates on $\mathbb{T}^d$, one could not handle a general randomization of a given function as in \eqref{I2}. In Theorem \ref{THM:1}, we consider NLS on $\mathbb{R}^4$ and thus we do not encounter this issue thanks to the full range of Strichartz estimates. For NLW, finite speed of propagation allows us to use a full range of Strichartz estimates even on compact domains, at least locally in time; thus, in that context, one does not encounter such an issue.
\end{remark} \section{Probabilistic Strichartz estimates} In this section, we state and prove some basic properties of the randomized function $\phi^\o$ defined in \eqref{R1}, including the improved Strichartz estimates (Propositions \ref{PROP:R2} and \ref{PROP:R5}). First, recall the following probabilistic estimate. See \cite{BT2} for the proof. \begin{lemma} \label{LEM:R1} Assume \eqref{R2}. Then, there exists $C>0$ such that \[ \bigg\| \sum_{n \in \mathbb{Z}^d} g_n(\omega) c_n\bigg\|_{L^p(\Omega)} \leq C \sqrt{p} \| c_n\|_{\l^2_n(\mathbb{Z}^d)}\] \noindent for all $p \geq 2$ and $\{c_n\} \in \l^2(\mathbb{Z}^d)$. \end{lemma} Given $\phi \in H^s$, it is easy to see that its randomization $\phi^\o \in H^s$ almost surely, for example, if the random variables $g_n$ have uniformly bounded variances. Under the assumption \eqref{R2}, we have a more precise description of the size of $\phi^\o$. \begin{lemma} \label{LEM:Hs} Given $\phi \in H^s(\mathbb{R}^d)$, let $\phi^\o$ be its randomization defined in \eqref{R1}, satisfying \eqref{R2}. Then, we have \begin{align} P\Big( \| \phi^\omega \|_{ H^s( \mathbb{R}^d)} > \lambda\Big) \leq C e^{-c \lambda^2 \|\phi\|_{ H^s}^{-2}} \label{Hs1} \end{align} \noindent for all $\lambda > 0$. \end{lemma} \begin{proof} By Minkowski's integral inequality and Lemma \ref{LEM:R1}, we have \begin{align*} \Big(\mathbb{E} \| \phi^\o \|_{ H^s(\mathbb{R}^d)}^p\Big)^\frac{1}{p} & \leq \big\| \|\jb{\nabla}^s \phi^\o \|_{L^p(\Omega)} \big\|_{L^2_{ x}( \mathbb{R}^d)} \lesssim \sqrt p \big\| \|\psi(D-n) \jb{\nabla}^s \phi \|_{\l^2_n}\big\|_{L^2_{x}} \notag \\ & \sim \sqrt p \| \phi \|_{ H^s} \end{align*} \noindent for any $p \geq 2$. Thus, we have obtained \[ \mathbb{E}[ \|\phi^\o\|_{H^s}^p] \leq C_0^p p^\frac{p}{2} \|\phi\|_{H^s}^p. \] \noindent By Chebyshev's inequality, we have \begin{align} P\Big( \|\phi^\o\|_{H^s} > \lambda \Big) < \bigg(\frac{C_0 p^\frac{1}{2} \|\phi\|_{H^s}}{\lambda}\bigg)^p \label{Cheby} \end{align} \noindent for $p \geq 2$.
Let $p =\Big( \frac{\lambda}{C_0e\|\phi\|_{H^s}}\Big)^2$. If $p \geq 2$, then by \eqref{Cheby}, we have \begin{align*} P\Big( \|\phi^\o\|_{H^s} > \lambda \Big) < \bigg(\frac{C_0 p^\frac{1}{2} \|\phi\|_{H^s}}{\lambda}\bigg)^p = e^{-p} = e^{-c \lambda^2\|\phi\|_{H^s}^{-2}}. \end{align*} \noindent Otherwise, i.e.~if $p =\Big( \frac{\lambda}{C_0e\|\phi\|_{H^s}}\Big)^2 \leq 2$, we can choose $C$ such that $C e^{-2} \geq 1$. Then, we have \begin{align*} P\Big( \|\phi^\o\|_{H^s} > \lambda \Big) \leq 1 \leq C e^{-2} \leq C e^{-c \lambda^2\|\phi\|_{H^s}^{-2}}, \end{align*} thus giving the desired result. \end{proof} The next lemma shows that, if $\phi \in L^2(\mathbb{R}^d)$, then its randomization $\phi^\o$ is almost surely in $L^p(\mathbb{R}^d)$ for any $p\in [2, \infty)$. \begin{lemma} \label{LEM:Lp} Given $\phi \in L^2(\mathbb{R}^d)$, let $\phi^\o$ be its randomization defined in \eqref{R1}, satisfying \eqref{R2}. Then, given finite $p \geq 2$, there exist $C, c >0$ such that \begin{align} P\Big( \| \phi^\omega \|_{ L^p( \mathbb{R}^d)} > \lambda\Big) \leq C e^{-c \lambda^2 \|\phi\|_{ L^2}^{-2}} \label{Lp} \end{align} \noindent for all $\lambda > 0$. In particular, $\phi^\o$ is in $L^p(\mathbb{R}^d)$ almost surely. \end{lemma} \begin{proof} By Lemma \ref{LEM:R1}, we have \begin{align*} \Big(\mathbb{E} \| \phi^\o \|_{ L^p_x (\mathbb{R}^d)}^r\Big)^\frac{1}{r} & \leq \big\| \| \phi^\o \|_{L^r(\Omega)} \big\|_{L^p_{ x}( \mathbb{R}^d)} \lesssim \sqrt r \big\| \|\psi(D-n) \phi \|_{\l^2_n}\big\|_{L^p_{x}} \notag \\ & \leq \sqrt r \big\| \|\psi(D-n) \phi \|_{L^p_x}\big\|_{\l^2_n} \leq \sqrt r \big\| \|\psi(D-n) \phi \|_{L^2_x}\big\|_{\l^2_n} \notag \\ & \sim \sqrt r \| \phi \|_{L^2_x} \end{align*} \noindent for any $r \geq p$. Then, \eqref{Lp} follows as in the proof of Lemma~\ref{LEM:Hs}. \end{proof} We conclude this section by presenting the proofs of the improved Strichartz estimates under randomization.
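Before doing so, let us record the elementary tail bound implicit in the two proofs above: if a random variable $F$ satisfies $\big(\mathbb{E} |F|^p\big)^{\frac{1}{p}} \leq C_0 \sqrt{p}\, K$ for all $p \geq p_0$, then, choosing $p = \big(\frac{\lambda}{C_0 e K}\big)^2$ when this value is at least $p_0$ (and simply enlarging $C$ otherwise), Chebyshev's inequality gives
\begin{equation*}
P\big( |F| > \lambda \big) \leq \bigg( \frac{C_0 \sqrt{p}\, K}{\lambda} \bigg)^p = e^{-p} = e^{-c \lambda^2 K^{-2}}
\end{equation*}

\noindent with $c = (C_0 e)^{-2}$. This observation is applied repeatedly in this section.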
Before continuing further, we briefly recall the definitions of the smooth projections from Littlewood-Paley theory. Let $\varphi$ be a smooth real-valued bump function supported on $\{\xi\in \mathbb{R}^d: |\xi|\leq 2\}$ and $\varphi\equiv 1$ on $\{\xi: |\xi|\leq 1\}$. If $N>1$ is a dyadic number, we define the smooth projection $\mathbf{P}_{\leq N}$ onto frequencies $\{|\xi| \leq N\}$ by \[\widehat{\mathbf{P}_{\leq N}f}(\xi):=\varphi\big(\tfrac{\xi}N\big)\widehat f(\xi).\] \noindent Similarly, we can define the smooth projection $\mathbf{P}_{N}$ onto frequencies $\{|\xi|\sim N\}$ by \[\widehat{\mathbf{P}_{N}f}(\xi):=\Big(\varphi\big(\tfrac{\xi}N\big)-\varphi\big(\tfrac{2\xi}N\big)\Big)\widehat f(\xi).\] We make the convention that $\mathbf{P}_{\leq 1}=\mathbf{P}_1$. Bernstein's inequality states that \begin{equation} \|\mathbf{P}_{\leq N} f\|_{L^q(\mathbb{R}^d)} \lesssim N^{\frac dp-\frac dq}\|\mathbf{P}_{\leq N} f\|_{L^p(\mathbb{R}^d)}, \qquad 1\leq p \leq q \leq \infty. \label{R3} \end{equation} \noindent The same inequality holds if we replace $\mathbf{P}_{\leq N}$ by $\mathbf{P}_N$. As an immediate corollary, we have \begin{equation} \|\psi(D -n) \phi\|_{L^q(\mathbb{R}^d)} \lesssim \|\psi(D-n) \phi \|_{L^p(\mathbb{R}^d)}, \qquad 1\leq p \leq q \leq \infty, \label{R4} \end{equation} \noindent for all $n \in \mathbb{Z}^d$. This follows from applying \eqref{R3} to $\phi_n(x) := e^{2\pi i n\cdot x} \psi(D-n) \phi(x)$ and noting that $\supp \widehat \phi_n \subset [-1, 1]^d$. The point of \eqref{R4} is that once a function is (roughly) frequency-localized to a unit cube, its $L^q$-norm is controlled by its $L^p$-norm, $q \geq p$, without any loss of derivatives. \begin{proof}[Proof of Proposition \ref{PROP:R2}] Let $q, r \geq 2$. We write $L_T^q$ to denote $L^q_t([0, T])$.
By Lemma \ref{LEM:R1} and \eqref{R4}, we have \begin{align*} \Big(\mathbb{E} \|S(t) \phi^\omega & \|_{L^q_t L^r_x( [0, T] \times \mathbb{R}^d)}^p\Big)^\frac{1}{p} \leq \Big\| \| S(t) \phi^\omega\|_{L^p(\Omega)} \Big\|_{L^q_T L^r_x} \leq \sqrt{p}\Big\| \| \psi(D-n) S(t) \phi\|_{\l^2_n } \Big\|_{L^q_T L^r_x}\\ & \leq \sqrt{p}\Big\| \| \psi(D-n) S(t) \phi\|_{L^r_x } \Big\|_{ L^q_T \l^2_n } \lesssim \sqrt{p}\Big\| \| \psi(D-n) S(t) \phi\|_{L^2_x } \Big\|_{ L^q_T \l^2_n}\\ & \lesssim T^\frac{1}{q} \sqrt{p}\|\phi\|_{L^2_x} \end{align*} \noindent for $p \geq \max (q, r)$. Then, \eqref{R2aa} follows as in the proof of Lemma~\ref{LEM:Hs}. \end{proof} \begin{proof}[Proof of Proposition \ref{PROP:R5}] Let $(q, r)$ be Schr\"odinger admissible and $\widetilde{r} \geq r$. Then, proceeding as before, we have \begin{align*} \Big(\mathbb{E} \|S(t) \phi^\omega \|_{L^q_t L^{\widetilde r}_x( \mathbb{R} \times \mathbb{R}^d)}^p\Big)^\frac{1}{p} & \lesssim \sqrt p \Big\| \|\psi(D-n)S(t) \phi\|_{L^{\widetilde r}_x}\Big\|_{\l^2_n L^q_{t}} \lesssim \sqrt p \Big\| \|\psi(D-n)S(t) \phi\|_{ L^q_{t} L^{r}_x}\Big\|_{\l^2_n} \notag \\ \intertext{By Lemma \ref{LEM:Str}, } & \lesssim \sqrt p \big\| \|\psi(D-n) \phi \|_{L^2_x}\big\|_{\l^2_n} \sim \sqrt p \| \phi \|_{L^2_x} \end{align*} \noindent for $p \geq \max(q, \widetilde{r})$. Finally, \eqref{R5a} follows as above. \end{proof} \begin{remark}\rm The Cauchy problem \eqref{NLS1} has also been studied for initial data in the modulation spaces $M_s^{p, 1}$ for $ 1 \leq p \leq \infty$ and $s\geq 0$; see, for example, \cite{BO} and \cite{wahahu}. Thus, it is tempting to consider what happens if we randomize an initial condition in a modulation space $M^{p, q}_s$. In this case, however, there is no improvement in the Strichartz estimates in terms of integrability, i.e.~$p$, hence, no improvement of well-posedness with respect to $M^{p, q}_s$ in terms of differentiability, i.e.~in $s$. 
Indeed, by computing the moments of the modulation norm of the randomized function \eqref{R1}, one immediately sees that the modulation norm remains essentially unchanged due to the outside summation over $n$. In the proof of Propositions \ref{PROP:R2} and \ref{PROP:R5}, the averaging effect of a linear combination of the random variables $g_n$ played a crucial role. For the modulation spaces, we do not have such an averaging effect since the outside summation over $n$ forces us to work on a piece restricted to each cube, i.e.~one random variable at a time. \end{remark} \section{Almost sure local well-posedness} Given $\phi \in H^s(\mathbb{R}^d)$, let $\phi^\o$ be its randomization defined in \eqref{R1}. In the following, we consider the Cauchy problem \eqref{NLS4d} with random initial data $u|_{t = 0} = \phi^\o$. By letting $z(t) = z^\o(t) := S(t) \phi^\o$ and $v(t) := u(t) - S(t) \phi^\o$, we can reduce \eqref{NLS1} to \begin{equation} \begin{cases} i \partial_t v + \Delta v = \pm |v + z|^2(v+z)\\ v|_{t = 0} = 0. \end{cases} \label{NLS2} \end{equation} \noindent By expressing \eqref{NLS2} in the Duhamel formulation, we have \begin{equation} v(t) = \mp i\int_0^t S(t-t') \mathcal{N} (v+ z)(t') dt', \label{NLS3} \end{equation} \noindent where $\mathcal{N}(u) = |u|^2 u=u\bar u u$. Let $\eta$ be a smooth cutoff function supported on $[-2, 2]$, $\eta \equiv 1$ on $[-1, 1]$, and let $\eta_{_T}(t) = \eta\big(\frac{t}{T}\big)$. Note that, if $v$ satisfies \begin{equation} v(t) = \mp i\eta(t)\int_0^t S(t-t') \eta_{_T}(t')\mathcal{N} (\eta v+ \eta_{_T}z)(t') dt' \label{NLS4} \end{equation} \noindent for some $T \ll 1$, then it also satisfies \eqref{NLS3} on $[-T, T]$. Hence, we consider \eqref{NLS4} in the following. Given $z(t) = S(t) \phi^\o$, define $\Gamma$ by \begin{equation} \Gamma v(t) = \mp i \eta(t)\int_0^t S(t-t')\eta_{_T}(t') \mathcal{N} (\eta v+ \eta_{_{T}}z)(t') dt'.
\label{NLS5} \end{equation} \noindent Then, the following nonlinear estimates yield Theorem \ref{THM:1}. \begin{proposition}\label{PROP:LWP} Let $s\in\big(\frac 35, 1\big)$. Given $\phi \in H^s(\mathbb{R}^4)$, let $\phi^\o$ be its randomization defined in \eqref{R1}, satisfying \eqref{R2}. Then, there exist $\sigma = 1+$, $b = \frac{1}{2}+$, and $\theta = 0+$ such that for each small $T\ll 1$ and $R>0$, we have \begin{align} \|\Gamma v\|_{X^{\sigma, b}} & \leq C_1T^\theta (\|v\|_{X^{\sigma, b}}^3 + R^3), \label{nl0a}\\ \|\Gamma v_1 - \Gamma v_2 \|_{X^{\sigma, b}} & \leq C_2 T^\theta (\|v_1\|_{X^{\sigma, b}}^2+\|v_2\|_{X^{\sigma, b}}^2 + R^2) \|v_1 -v_2 \|_{X^{\sigma, b}}, \label{nl0b} \end{align} \noindent outside a set of probability at most $ C \exp\Big(-c \frac{R^2}{\|\phi\|_{H^s}^{2}}\Big)$. \end{proposition} We first present the proof of Theorem \ref{THM:1}, assuming Proposition \ref{PROP:LWP}. Then, we prove Proposition \ref{PROP:LWP} at the end of this section. \begin{proof}[Proof of Theorem \ref{THM:1}] Let $B_1$ denote the ball of radius 1 centered at the origin in $X^{\sigma, b}$. Then, given $T \ll 1$, we show that the map $\Gamma$ is a contraction on $B_1$. We choose $R = R(T) \sim T^{-\frac{\gamma}{2}}$ for some $\gamma \in (0, \frac{2\theta}{3})$ such that \[ C_1 T^\theta(1+ R^3) \leq 1\quad \text{and} \quad C_2 T^\theta(2+R^2) \leq \frac 12.
\] \noindent Then, for $v, v_1, v_2 \in B_1$, Proposition \ref{PROP:LWP} yields \begin{align*} \|\Gamma v\|_{X^{\sigma, b}} & \leq 1, \\ \|\Gamma v_1 - \Gamma v_2 \|_{X^{\sigma, b}} & \leq \frac {1}{2} \|v_1 -v_2 \|_{X^{\sigma, b}} \end{align*} \noindent outside an exceptional set of probability at most \[ C \exp\bigg(-c \frac{R^2}{\|\phi\|_{H^s}^{2}}\bigg) = C \exp \bigg(-\frac{c}{T^{\gamma}\|\phi\|_{H^s}^{2}}\bigg).\] \noindent Therefore, by defining $\O_T$ to be the complement of this exceptional set, it follows that, for $\o\in \O_T$, there exists a unique $v^\o \in B_1$ such that $\Gamma v^\o = v^\o$. This completes the proof of Theorem \ref{THM:1}. \end{proof} Hence, it remains to prove Proposition \ref{PROP:LWP}. Before proceeding further, we first present some lemmata on the basic $X^{s, b}$-estimates. See \cite{Bo2, KPV93, TAO} for the basic properties of the $X^{s, b}$-spaces. \begin{lemma}\label{LEM:Xsb1} \textup{(i)} Linear estimates: Let $T \in( 0, 1)$ and $b \in \big(\frac{1}{2}, \frac{3}{2}\big]$. Then, for $s \in \mathbb{R}$ and $\theta \in \big[0, \frac{3}{2} - b\big)$, we have \begin{align} \| \eta_{_T} (t) S(t) \phi \|_{X^{s, b}(\mathbb{R}\times \mathbb{R}^4)} & \lesssim T^{\frac{1}{2}-b}\|\phi\|_{H^s(\mathbb{R}^4) }, \label{Xsb1}\\ \bigg\| \eta(t) \int_0^t S(t - t') \eta_{_T}(t') F(t') dt' \bigg\|_{X^{s, b}(\mathbb{R}\times \mathbb{R}^4)} & \lesssim T^{\theta} \|F\|_{X^{s, b -1+\theta}(\mathbb{R}\times \mathbb{R}^4)}.\notag \end{align} \noindent \textup{(ii)} Strichartz estimates: Let $(q, r)$ be Schr\"odinger admissible and $p \geq 3$. 
Then, for $ b > \frac{1}{2}$ and $N_1 \leq N_2$, we have \begin{align} \| u \|_{L^q_t L^r_x(\mathbb{R}\times \mathbb{R}^4)} & \lesssim \|u\|_{X^{0, b}(\mathbb{R}\times \mathbb{R}^4)}, \label{Xsb2}\\ \| u \|_{L^p_{t, x}(\mathbb{R}\times \mathbb{R}^4)} & \lesssim \big\||\nabla|^{2 - \frac 6p} u\big\|_{X^{0, b}(\mathbb{R}\times \mathbb{R}^4)}, \label{Xsb3}\\ \| \mathbf{P}_{N_1} u_1 \mathbf{P}_{N_2}u_2\|_{L^2_{t, x}(\mathbb{R}\times \mathbb{R}^4)} & \lesssim N_1 \bigg(\frac{N_1}{N_2}\bigg)^{\frac 12} \|\mathbf{P}_{N_1} u_1\|_{X^{0, b}(\mathbb{R}\times\mathbb{R}^4)}\|\mathbf{P}_{N_2} u_2\|_{X^{0, b}(\mathbb{R}\times\mathbb{R}^4)}. \label{Xsb4} \end{align} \end{lemma} \noindent Recall that \eqref{Xsb2} follows from the Strichartz estimate \eqref{Str1} and \eqref{Xsb3} follows from Sobolev inequality and \eqref{Str1}, while \eqref{Xsb4} follows from a refinement of the Strichartz estimate by Bourgain \cite{Bo98} and Ozawa-Tsutsumi \cite{OT}. As a corollary to Lemma \ref{LEM:Xsb1}, we have the following estimates. \begin{lemma}\label{LEM:Xsb2} Given small $\varepsilon > 0$, let $\varepsilon_1 = 2\varepsilon+$. Then, for $N_1 \leq N_2$, we have \begin{align} \| u \|_{L^\frac{3}{1+\varepsilon_1}_{t, x}(\mathbb{R}\times \mathbb{R}^4)} & \lesssim \|u\|_{X^{0, \frac12 - 2\varepsilon }(\mathbb{R}\times \mathbb{R}^4)}, \label{Xsb5}\\ \| \mathbf{P}_{N_1} u_1 \mathbf{P}_{N_2}u_2\|_{L^2_{t, x}(\mathbb{R}\times \mathbb{R}^4)} & \lesssim C(N_1, N_2) \|\mathbf{P}_{N_1}u_1\|_{X^{0, \frac{1}{2}+}(\mathbb{R}\times\mathbb{R}^4)}\|\mathbf{P}_{N_2} u_2\|_{X^{0, \frac{1}{2}-2\varepsilon}(\mathbb{R}\times\mathbb{R}^4)}, \label{Xsb6} \end{align} \noindent where $C(N_1, N_2)$ is given by \[ C(N_1, N_2) = \begin{cases} N_1^{\frac{3}{2} + \varepsilon_1 +} N_2^{-\frac 12 + \varepsilon_1} & \text{if } N_1 \leq N_2, \\ N_1^{- \frac{1}{2} + 5\varepsilon_1 +} N_2^{\frac 32 - 3 \varepsilon_1} & \text{if } N_1 \geq N_2. 
\end{cases} \] \end{lemma} \noindent \begin{proof} The first estimate \eqref{Xsb5} follows from interpolating \eqref{Xsb2} with $q = r =3$ and $\| u\|_{L^2_{t, x}} = \|u\|_{X^{0, 0}}.$ The second estimate \eqref{Xsb6} follows from interpolating \eqref{Xsb4} and \[ \| \mathbf{P}_{N_1} u_1 \mathbf{P}_{N_2}u_2\|_{L^2_{t, x}} \leq\| \mathbf{P}_{N_1} u_1\|_{L^\infty_{t, x}}\| \mathbf{P}_{N_2}u_2\|_{L^2_{t, x}} \lesssim \| \mathbf{P}_{N_1} u_1\|_{X^{2 +, \frac 12+}}\| \mathbf{P}_{N_2}u_2\|_{X^{0, 0}}. \qedhere \] \end{proof} We are now ready to prove Proposition \ref{PROP:LWP}. \begin{proof}[Proof of Proposition \ref{PROP:LWP}] We only prove \eqref{nl0a} since \eqref{nl0b} follows in a similar manner. By Lemma \ref{LEM:Xsb1} (i) and duality, we have \begin{align} \|\Gamma v(t)\|_{X^{\sigma, b}} & \lesssim T^\theta \| \mathcal{N} (\eta v+ \eta_{_{T}}z) \|_{X^{\sigma, b-1+\theta}} \notag \\ & = T^\theta \sup_{\|v_4\|_{X^{0, 1-b-\theta}}\leq1} \bigg|\iint_{\mathbb{R}\times \mathbb{R}^4} \jb{\nabla}^\sigma \big[ \mathcal{N} (\eta v+ \eta_{_{T}}z) \big] v_4 dx dt\bigg|. \label{nl1} \end{align} \noindent We estimate the right-hand side of \eqref{nl1} by performing case-by-case analysis of expressions of the form: \begin{align} \bigg| \iint_{\mathbb{R}\times \mathbb{R}^4} \jb{\nabla}^\sigma ( w_1 w_2 w_3 )v_4 dx dt\bigg|, \label{nl2} \end{align} \noindent where $\|v_4\|_{X^{0, 1-b-\theta}} \leq 1$ and $w_j= \eta v$ or $\eta_{_{T}} z$, $j = 1, 2, 3$. Before proceeding further, let us simplify some of the notations. In the following, we drop the complex conjugate sign since it plays no role. Also, we often suppress the smooth cutoff function $\eta$ (and $\eta_{_{T}}$) from $w_j = \eta v$ (and $w_j = \eta_{_{T}} z$) and simply denote them by $v_j$ (and $z_j$, respectively). 
Lastly, in most of the cases, we dyadically decompose $w_j$, $j = 1, 2, 3$, and $v_4$ such that their spatial frequency supports are $\{ |\xi_j|\sim N_j\}$ for some dyadic $N_j \geq 1$ but still denote them by $w_j$, $j = 1, 2, 3$, and $v_4$. Let $b = \frac{1}{2} + \varepsilon$ and $\theta = \varepsilon $ for some small $\varepsilon > 0$ (to be chosen later) so that $ 1- b - \theta = \frac 12 - 2 \varepsilon$. In the following, we set $\varepsilon_1 = 2\varepsilon+$. \medskip \noindent {\bf Case (1):} $v v v$ case. In this case, we do not need to perform dyadic decompositions and we divide the frequency spaces into $\{|\xi_1| \geq |\xi_2|, |\xi_3|\}$, $\{|\xi_2| \geq |\xi_1|, |\xi_3|\}$, and $\{|\xi_3| \geq |\xi_1|, |\xi_2|\}$. Without loss of generality, assume that $|\xi_1| \gtrsim |\xi_2|, |\xi_3|$. By $L^3L^{\frac{6}{1-\varepsilon_1}}L^{\frac{6}{1-\varepsilon_1}}L^{\frac{3}{1+\varepsilon_1}}$-H\"older's inequality and Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} \jb{\nabla}^\sigma v_1 v_2 v_3 v_4 dx dt \bigg| & \leq \| \jb{\nabla}^\sigma v_1\|_{L^3_{t, x}} \|v_2\|_{L^{\frac{6}{1-\varepsilon_1}}_{t, x}}\|v_3\|_{L^{\frac{6}{1-\varepsilon_1}}_{t, x}} \|v_4\|_{L^{\frac{3}{1+\varepsilon_1}}_{t, x}}\\ & \lesssim \prod_{j = 1}^3 \|v_j\|_{X^{\sigma, \frac{1}{2}+}} \|v_4\|_{X^{0, \frac{1}{2}-2\varepsilon}} \lesssim \prod_{j = 1}^3 \|v_j\|_{X^{\sigma, b}} \end{align*} \noindent for $\sigma \geq 1 + \varepsilon_1 = 1+ 2\varepsilon+$. \medskip \noindent {\bf Case (2):} $zz z$ case. \quad Without loss of generality, assume $N_3 \geq N_2 \geq N_1$. \smallskip \noindent {\bf $\bullet$ Subcase (2.a):} $N_2 \sim N_3$. 
By $L^{\frac{6}{1-2\varepsilon_1}}L^4L^4L^{\frac{3}{1+\varepsilon_1}}$-H\"older's inequality and Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} z_1 z_2 \jb{\nabla}^\sigma z_3 v_4 dx dt \bigg| & \lesssim \|z_1\|_{L^\frac{6}{1-2\varepsilon_1}_{t, x}} \|\jb{\nabla}^\frac{\sigma}{2}z_2\big\|_{L^4_{t, x}} \|\jb{\nabla}^\frac{\sigma}{2} z_3 \|_{L^4_{t, x}}\| v_4 \|_{X^{0, \frac{1}{2}-2\varepsilon}}. \end{align*} \noindent Hence, by Proposition \ref{PROP:R2}, the contribution to \eqref{nl2} in this case is at most $\lesssim R^3$ outside a set of probability \begin{equation} \leq C\exp\bigg(- c\frac{R^2}{T^\frac{1-2\varepsilon_1}{3}\|\phi\|_{L^2}^2}\bigg) + C\exp\bigg(- c\frac{R^2}{T^\frac{1}{2}\|\phi\|_{H^s}^2}\bigg) \label{ARP1} \end{equation} \noindent provided that $s > \frac{\sigma}{2}$. Note that $s$ needs to be strictly greater than $\frac \sigma 2$ due to the summations over dyadic blocks. For the convenience of readers, we briefly show how this follows. In summing $\|\jb{\nabla}^\frac{\sigma}{2}\mathbf{P}_{N_3}z_3\|_{L^4_{t, x}}$ over dyadic blocks in $N_3$, we have \begin{align*} \sum_{\substack{N_3 \geq 1\\\text{dyadic}}} \|\jb{\nabla}^\frac{\sigma}{2}\mathbf{P}_{N_3}z_3\|_{L^4_{t, x}} &\leq \Big(\sum_{N_3} N_3^{0-}\Big)^{\frac{3}{4}} \|\jb{\nabla}^{\frac{\sigma}{2}+}\mathbf{P}_{N_3}z_3\|_{\l^4_{N_3}L^4_{t, x}}\\ & = \Big(\sum_{N_3} N_3^{0-}\Big)^{\frac{3}{4}} \|\jb{\nabla}^{\frac{\sigma}{2}+}\mathbf{P}_{N_3}z_3\|_{L^4_{t, x}\l^4_{N_3}}\\ & \leq\Big(\sum_{N_3 } N_3^{0-}\Big)^{\frac{3}{4}} \|\jb{\nabla}^{\frac{\sigma}{2}+}\mathbf{P}_{N_3}z_3\|_{L^4_{t, x}\l^2_{N_3}} \lesssim \|\jb{\nabla}^{\frac{\sigma}{2}+}z_3\|_{L^4_{t, x}}, \end{align*} \noindent where the last inequality follows from the Littlewood-Paley theory. By Proposition~\ref{PROP:R2} with $q = r = 4$, we obtain the second term in \eqref{ARP1} as long as $ s> \frac \sigma 2$. 
Moreover, while the terms with $z_1$ and $z_2$ also suffer a slight loss of derivative, we can hide the loss in $N_1$ and $N_2$ under the $z_3$-term since $N_3 \geq N_1, N_2$. Similar comments also apply in the sequel. \medskip \noindent {\bf $\bullet$ Subcase (2.b):} $N_3 \sim N_4 \gg N_1, N_2$. \smallskip \noindent $\circ$ \underline{Subsubcase (2.b.i):} $N_1, N_2 \ll N_3^\frac{1}{3}$. We include the detailed calculation only in this case, with similar comments applicable in the following. By Lemmata \ref{LEM:Xsb1} (ii) and \ref{LEM:Xsb2}, with $ b = \frac 12 +$ and $\delta = 0+$, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} z_1 z_2& \jb{\nabla}^\sigma z_3 v_4 dx dt \bigg| \lesssim \|z_1 \jb{\nabla}^\sigma z_3\|_{L^2_{t, x}} \|z_2 v_4\|_{L^2_{t, x}}\\ & \lesssim N_1^\frac{3}{2} N_3^{-\frac 12+\sigma} N_2^{\frac 32 + \varepsilon_1 +\delta} N_4^{-\frac12 + \varepsilon_1} \prod_{j = 1}^3 \| z_j\|_{X^{0, b}} \| v_4\|_{X^{0,\frac 12 - 2\varepsilon }}\\ & \lesssim N_1^{\frac{3}{2}-s} N_2^{\frac 32 + \varepsilon_1 -s +\delta}N_3^{-\frac 12+\sigma - s} N_4^{-\frac12 + \varepsilon_1} \prod_{j = 1}^3\| z_j\|_{X^{s, b}} \| v_4\|_{X^{0,\frac 12 - 2\varepsilon }}\\ \intertext{By $N_1, N_2 \ll N_3^\frac{1}{3}$, $N_3 \sim N_4$, and Lemma \ref{LEM:Xsb1} (i), we have} & \ll T^{0-}N_3^{-\frac{5}{3}s + \sigma + \frac{4}{3} \varepsilon_1 +\frac 13\delta} \prod_{j = 1}^3 \|\mathbf{P}_{N_j}\phi^\o\|_{H^s} \| v_4\|_{X^{0,\frac 12 - 2\varepsilon }}. \end{align*} \noindent Here, we lost a small power of $T$ in applying \eqref{Xsb1}. Note that such a loss in $T$ can be hidden under $T^\theta$ in \eqref{nl1} and does not cause a problem. Now, we want the power of the largest frequency $N_3$ to be strictly negative so that we can sum over dyadic blocks. This requires \begin{equation} \frac{5}{3} s > \sigma + \frac{4}{3} \varepsilon_1.
\label{nl3} \end{equation} \noindent Provided this condition holds, using Lemma \ref{LEM:Hs}, we see that the contribution to \eqref{nl2} in this case is at most $\lesssim T^{0-} R^3$ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ \|\phi\|_{H^{s}}^{2}}\bigg). \end{equation*} \smallskip \noindent $\circ$ \underline{Subsubcase (2.b.ii):} $N_2\gtrsim N_3^\frac{1}{3} \gg N_1$. By Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} & z_1 z_2 \jb{\nabla}^\sigma z_3 v_4 dx dt \bigg| \lesssim \|z_2\|_{L^4_{t, x}} \|\jb{\nabla}^\sigma z_3 \|_{L^4_{t, x}}\|z_1 v_4 \|_{L^2_{t, x}}\\ & \lesssim T^{0-} N_1^{ \frac{3}{2} + \varepsilon_1-s+} N_2^{-s} N_3^{\sigma-s - \frac 12 +\varepsilon_1} \|\mathbf{P}_{N_1}\phi^\o\|_{H^s} \prod_{j = 2}^3 \|\jb{\nabla}^s z_j\|_{L^4_{t, x}} \|v_4\|_{X^{0, \frac 12 - 2\varepsilon}}. \end{align*} \noindent Hence, by Lemma \ref{LEM:Hs} and Proposition \ref{PROP:R2}, the contribution to \eqref{nl2} in this case is at most $\lesssim T^{0-} R^3$ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ \|\phi\|_{H^{s}}^{2}}\bigg) + C\exp\bigg(-c \frac {R^2}{ T^\frac{1}{2} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent provided that \eqref{nl3} is satisfied. \smallskip \noindent $\circ$ \underline{Subsubcase (2.b.iii):} $N_1, N_2\geq N_3^\frac{1}{3} $. By $L^{\frac{9}{2-\varepsilon_1}}L^{\frac{9}{2-\varepsilon_1}}L^{\frac{9}{2-\varepsilon_1}}L^{\frac{3}{1+\varepsilon_1}}$-H\"older's inequality and Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} z_1 z_2 \jb{\nabla}^\sigma z_3 v_4 dx dt \bigg| & \lesssim N_3^{\sigma - \frac 53 s } \prod_{j =1}^3 \big\| \jb{\nabla}^s z_j\|_{L^\frac{9}{2-\varepsilon_1}_{t, x}} \|v_4\|_{X^{0, \frac 12 - 2\varepsilon}}. 
\end{align*} \noindent Hence, by Proposition \ref{PROP:R2}, the contribution to \eqref{nl2} in this case is at most $\lesssim R^3$ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^\frac{4 - 2\varepsilon_1 }{9} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent provided that \begin{equation} \frac{5}{3} s > \sigma. \label{nl4} \end{equation} Therefore, given $s > \frac{3}{5}$, we choose $\sigma = 1+$ and $\varepsilon = 0+$ for Case (2) such that \eqref{nl3} and \eqref{nl4} are satisfied. \medskip \noindent {\bf Case (3):} $v v z$ case. Without loss of generality, assume $N_1 \geq N_2$. \medskip \noindent {\bf $\bullet$ Subcase (3.a):} $N_1 \gtrsim N_3$. By $L^3L^\frac{6}{1-\varepsilon_1}L^\frac{6}{1-\varepsilon_1}L^\frac{3}{1+\varepsilon_1}$-H\"older's inequality and Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg| \int_{\mathbb{R}\times \mathbb{R}^4} \jb{\nabla}^\sigma v_1 v_2 z_3 v_4 dx dt \bigg| & \lesssim \|v_1\|_{X^{\sigma, \frac{1}{2}+}} \|v_2\|_{X^{1+\varepsilon_1, \frac{1}{2}+}} \|z_3\|_{L^\frac{6}{1-\varepsilon_1}_{t, x} } \|v_4\|_{X^{0, \frac12 - 2\varepsilon}}. \end{align*} \noindent Hence, by Proposition \ref{PROP:R2}, the contribution to \eqref{nl2} in this case is at most $ \lesssim R \prod_{j = 1}^2 \|v_j\|_{X^{\sigma, \frac{1}{2}+}} $ outside a set of probability \begin{equation} \leq C\exp\bigg(-c \frac {R^2}{ T^\frac{1 - \varepsilon_1 }{3} \|\phi\|_{H^{0+}}^{2}}\bigg) \label{Z1} \end{equation} \noindent provided that $\sigma > 1 + \varepsilon_1 = 1+ 2\varepsilon+$. Note that we have $\|\phi\|_{H^{0+}}$ instead of $\|\phi\|_{L^2}$ in \eqref{Z1} due to the summation over $N_3$. \medskip \noindent {\bf $\bullet$ Subcase (3.b):} $N_3\sim N_4 \gg N_1 \geq N_2$. 
By Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} v_1 v_2 & \jb{\nabla}^\sigma z_3 v_4 dx dt \bigg| \lesssim \|v_1\|_{L^4_{t, x}} \|\jb{\nabla}^\sigma z_3 \|_{L^4_{t, x}}\|v_2 v_4 \|_{L^2_{t, x}}\\ & \lesssim N_2^{\frac32 + \varepsilon_1 -\sigma+ } N_3^{\sigma - s} N_4^{-\frac 12 + \varepsilon_1} \|v_1\|_{X^{\frac{1}{2}, \frac{1}{2}+}} \|v_2\|_{X^{\sigma, \frac 12+}} \|\jb{\nabla}^{s} z_3\|_{L^4_{t, x}} \|v_4\|_{X^{0, \frac 12 - 2\varepsilon}}\\ & \lesssim N_1^{2 - 2\sigma +\varepsilon_1+} N_3^{\sigma - s - \frac 12 + \varepsilon_1} \|v_1\|_{X^{\sigma, \frac{1}{2}+}} \|v_2\|_{X^{\sigma, \frac 12+}} \|\jb{\nabla}^{s} z_3\|_{L^4_{t, x}} \|v_4\|_{X^{0, \frac 12 - 2\varepsilon}}. \end{align*} \noindent Hence, by Proposition \ref{PROP:R2}, the contribution to \eqref{nl2} in this case is at most $ \lesssim R \prod_{j = 1}^2 \|v_j\|_{X^{\sigma, \frac{1}{2}+}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^\frac{1}{2} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent provided that $2 - 2\sigma + \varepsilon_1 < 0$ and $ s > \sigma - \frac 12 + \varepsilon_1$. Given $s > \frac{1}{2}$, these conditions are satisfied by taking $\sigma = 1+$ and $\varepsilon = 0+$. \medskip \noindent {\bf Case (4):} $v z z$ case. Without loss of generality, assume $N_3 \geq N_2$. \medskip \noindent {\bf $\bullet$ Subcase (4.a):} $N_1 \gtrsim N_3$. By $L^3L^\frac{6}{1-\varepsilon_1}L^\frac{6}{1-\varepsilon_1}L^\frac{3}{1+\varepsilon_1}$-H\"older's inequality and Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} \jb{\nabla}^\sigma v_1 z_2 z_3 v_4 dx dt \bigg| & \lesssim \|v_1\|_{X^{\sigma, \frac{1}{2}+}} \|z_2\|_{L^\frac{6}{1-\varepsilon_1}_{t, x}} \|z_3\|_{L^\frac{6}{1-\varepsilon_1}_{t, x}} \|v_4\|_{X^{0, \frac 12 - 2\varepsilon}}.
\end{align*} \noindent Hence, by Proposition \ref{PROP:R2}, the contribution to \eqref{nl2} in this case is at most $ \lesssim R^2 \|v_1\|_{X^{\sigma, \frac{1}{2}+}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^\frac{1-\varepsilon_1}{3} \|\phi\|_{H^{0+}}^{2}}\bigg). \end{equation*} \medskip \noindent {\bf $\bullet$ Subcase (4.b):} $N_3 \gg N_1$. First, suppose that $N_2 \sim N_3$. Then, by Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2} (after separating the argument into two cases: $N_1 \leq N_4$ or $N_1 \geq N_4$), we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} v_1 z_2 & \jb{\nabla}^\sigma z_3 v_4 dx dt \bigg| \lesssim \|\jb{\nabla}^\frac{\sigma}{2}z_2\|_{L^4_{t, x}} \|\jb{\nabla}^\frac{\sigma}{2} z_3 \|_{L^4_{t, x}}\|v_1 v_4 \|_{L^2_{t, x}}\\ & \lesssim N_1^{1+2\varepsilon_1 - \sigma+} N_3^{\sigma - 2s} \|v_1\|_{X^{\sigma, \frac12+}} \|\jb{\nabla}^s z_2\|_{L^4_{t, x}} \|\jb{\nabla}^s z_3 \|_{L^4_{t, x}} \|v_4 \|_{X^{0, \frac 12 -2\varepsilon}}. \end{align*} \noindent Hence, by Proposition \ref{PROP:R2}, the contribution to \eqref{nl2} in this case is at most $ \lesssim R^2 \|v_1\|_{X^{\sigma, \frac{1}{2}+}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^\frac{1}{2} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent provided that $\sigma >1+2\varepsilon_1 $ and $s > \frac{1}{2}\sigma$. Given $s > \frac{1}{2}$, these conditions are satisfied by taking $\sigma = 1+$ and $\varepsilon = 0+$. Hence, it remains to consider the case $N_3 \sim N_4 \gg N_1, N_2$. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.i):} $N_1, N_2\ll N_3^\frac 13$. 
By Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} & v_1 z_2 \jb{\nabla}^\sigma z_3 v_4 dx dt \bigg| \lesssim \|v_1\jb{\nabla}^\sigma z_3 \|_{L^2_{t, x}}\|z_2 v_4 \|_{L^2_{t, x}}\\ & \lesssim T^{0-} N_1^{\frac 32 -\sigma} N_2^{\frac 32+\varepsilon_1 - s+} N_3^{\sigma - s-\frac 12 }N_4^{-\frac 12 + \varepsilon_1} \|v_1\|_{X^{\sigma, \frac 12 +}} \prod_{j = 2}^3 \|\mathbf{P}_{N_j}\phi^\o\|_{H^s}\|v_4\|_{X^{0, \frac 12 -2\varepsilon}}\\ & \lesssim T^{0-} N_3^{\frac 23 \sigma - \frac 43 s + \frac 43\varepsilon_1+} \|v_1\|_{X^{\sigma, \frac 12 +}} \prod_{j = 2}^3 \|\mathbf{P}_{N_j}\phi^\o\|_{H^s}\|v_4\|_{X^{0, \frac 12 -2\varepsilon}}. \end{align*} \noindent Hence, by Lemma \ref{LEM:Hs}, the contribution to \eqref{nl2} in this case is at most $ \lesssim T^{0-} R^2 \|v_1\|_{X^{\sigma, \frac{1}{2}+}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent provided that \begin{equation} s > \frac 12 \sigma + \varepsilon_1. \label{nl5} \end{equation} \noindent Given $s > \frac{1}{2}$, this condition is satisfied by taking $\sigma = 1+$ and $\varepsilon = 0+$. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.ii):} $N_1\ll N_3^{\frac 13} \lesssim N_2$. By Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} v_1 z_2 & \jb{\nabla}^\sigma z_3 v_4 dx dt \bigg| \lesssim \|z_2\|_{L^4_{t, x}} \|\jb{\nabla}^\sigma z_3 \|_{L^4_{t, x}}\|v_1 v_4 \|_{L^2_{t, x}}\\ & \lesssim N_1^{\frac32 +\varepsilon_1 - \sigma+ } N_2^{-s} N_3^{\sigma - s - \frac 12 + \varepsilon_1} \|v_1\|_{X^{\sigma, \frac 12+}} \prod_{j = 2}^3 \|\jb{\nabla}^s z_j\|_{L^4_{t, x}}\|v_4\|_{X^{0, \frac 12-2\varepsilon}}\\ & \lesssim N_3^{\frac 23 \sigma - \frac 43 s +\frac 43\varepsilon_1+} \|v_1\|_{X^{\sigma, \frac 12+}} \prod_{j = 2}^3 \|\jb{\nabla}^s z_j\|_{L^4_{t, x}}\|v_4\|_{X^{0, \frac 12-2\varepsilon}}. 
\end{align*} \noindent Hence, by Proposition \ref{PROP:R2}, the contribution to \eqref{nl2} in this case is at most $ \lesssim R^2 \|v_1\|_{X^{\sigma, \frac{1}{2}+}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^\frac 12 \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent provided that \eqref{nl5} is satisfied. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.iii):} $N_2 \ll N_3^{\frac 13} \lesssim N_1$. By Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} & v_1 z_2 \jb{\nabla}^\sigma z_3 v_4 dx dt \bigg| \lesssim \|v_1\|_{L^3_{t, x}} \|\jb{\nabla}^\sigma z_3 \|_{L^6_{t, x}}\|z_2 v_4 \|_{L^2_{t, x}}\\ & \lesssim T^{0-} N_1^{ - \sigma} N_2^{\frac 32 +\varepsilon_1 - s+} N_3^{\sigma - s -\frac 12+ \varepsilon_1 } \|v_1\|_{X^{\sigma, \frac {1}{2}+}} \|\mathbf{P}_{N_2}\phi^\o \|_{H^s} \|\jb{\nabla}^s z_3\|_{L^6_{t, x}}\|v_4\|_{X^{0, \frac 12-2\varepsilon}}\\ & \lesssim T^{0-} N_3^{\frac 23 \sigma - \frac 43 s +\frac 43 \varepsilon_1+} \|v_1\|_{X^{\sigma, \frac {1}{2}+}} \|\mathbf{P}_{N_2}\phi^\o \|_{H^s} \|\jb{\nabla}^s z_3\|_{L^6_{t, x}}\|v_4\|_{X^{0, \frac 12-2\varepsilon}}. \end{align*} \noindent Hence, by Lemma \ref{LEM:Hs} and Proposition \ref{PROP:R2}, the contribution to \eqref{nl2} in this case is at most $ \lesssim T^{0-} R^2 \|v_1\|_{X^{\sigma, \frac{1}{2}+}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ \|\phi\|_{H^{s}}^{2}}\bigg) + C\exp\bigg(-c \frac {R^2}{ T^\frac 13 \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent provided that \eqref{nl5} is satisfied. \smallskip \noindent $\circ$ \underline{Subsubcase (4.b.iv):} $N_1, N_2\gtrsim N_3^{\frac 13}$. 
By $L^3L^\frac{6}{1-\varepsilon_1}L^\frac{6}{1-\varepsilon_1}L^\frac{3}{1+\varepsilon_1}$-H\"older's inequality and Lemmata \ref{LEM:Xsb1} and \ref{LEM:Xsb2}, we have \begin{align*} \bigg|\int_{\mathbb{R}\times \mathbb{R}^4} v_1 z_2 \jb{\nabla}^\sigma z_3 v_4 dx dt \bigg| & \lesssim \|v_1\|_{L^3_{t, x}} \|z_2\|_{L^\frac {6}{1-\varepsilon_1}_{t, x}} \|\jb{\nabla}^\sigma z_3 \|_{L^\frac {6}{1-\varepsilon_1}_{t, x}}\| v_4 \|_{L^\frac{3}{1+\varepsilon_1}_{t, x}}\\ & \lesssim N_1^{- \sigma} N_2^{-s} N_3^{\sigma-s} \|v_1\|_{X^{\sigma, \frac 12+}} \prod_{j = 2}^3 \|\jb{\nabla}^s z_j\|_{L^\frac {6}{1-\varepsilon_1}_{t, x}}\|v_4\|_{X^{0, \frac 12-2\varepsilon}}\\ & \lesssim N_3^{\frac 23 \sigma-\frac 43s} \|v_1\|_{X^{\sigma, \frac 12+}} \prod_{j = 2}^3 \|\jb{\nabla}^s z_j\|_{L^\frac {6}{1-\varepsilon_1}_{t, x}}\|v_4\|_{X^{0, \frac 12-2\varepsilon}}. \end{align*} \noindent Hence, by Proposition \ref{PROP:R2}, the contribution to \eqref{nl2} in this case is at most $ \lesssim R^2 \|v_1\|_{X^{\sigma, \frac{1}{2}+}} $ outside a set of probability \begin{equation*} \leq C\exp\bigg(-c \frac {R^2}{ T^\frac{1-\varepsilon_1}{3} \|\phi\|_{H^{s}}^{2}}\bigg) \end{equation*} \noindent provided that $s > \frac {1}{2}\sigma$. Given $s> \frac 12$, this condition is satisfied by setting $\sigma = 1+$. \end{proof}
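For the reader's convenience, we collect the regularity constraints imposed by the case-by-case analysis above (with $\varepsilon_1 = 2\varepsilon+$):

```latex
% Case (1):        \sigma \geq 1 + \varepsilon_1,
% Case (2):        \tfrac{5}{3} s > \sigma + \tfrac{4}{3}\varepsilon_1,
% Cases (3), (4):  s > \tfrac{1}{2}\sigma + \varepsilon_1
%                  and  s > \sigma - \tfrac{1}{2} + \varepsilon_1 :
\[
\sigma \geq 1 + \varepsilon_1, \qquad
\tfrac{5}{3}\, s > \sigma + \tfrac{4}{3}\, \varepsilon_1, \qquad
s > \tfrac{1}{2}\, \sigma + \varepsilon_1, \qquad
s > \sigma - \tfrac{1}{2} + \varepsilon_1 .
\]
% Letting \varepsilon \to 0+, the binding condition is
% \tfrac{5}{3} s > \sigma > 1, i.e. s > \tfrac{3}{5},
% matching the hypothesis of Proposition \ref{PROP:LWP}.
```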
\section{Introduction} In medical image analysis, segmentation is undoubtedly one of the most widely researched problems in the literature. The goal is to identify the structure of interest~(SOI) with pixel-level accuracy, acquiring rich information, such as position, size, and texture statistics, to assist clinicians in making assessments of diseases and devising better treatment plans. In recent years, tremendous progress has been made on {\em fully-automatic} segmentation by training deep neural networks on large-scale datasets in a {\em supervised manner}, for example, FCN~\cite{Long15}, UNet~\cite{Ronneberger15}, and nnU-Net~\cite{Isensee20}. In some cases, such heavily-supervised approaches have already reached a performance level similar to that of human experts. However, apart from these well-solved problems, one critical issue remains: {\bf what else can we do to improve models' usability in challenging scenarios}, where automatic predictions significantly under-perform humans, either due to the lack of a large-scale training set or the difficult nature of the problem, for example, in cancer segmentation. One potential solution is interactive segmentation, with the goal of refining the automatic predictions with only a few user inputs, {\em e.g.}~clicks, scribbles, boundary delineations, etc. In the literature, this line of research involves a long list of seminal works, including early attempts that took inspiration from mathematics and topology, adopting variational methods~\cite{Chan01,Mumford89} to group pixels that share certain common features with the users' initialisation, {\em e.g.}~intensity, texture, contrast, etc. However, such variational approaches usually involve heavy parameter-tuning and long inference times~({\em i.e.}~tens of minutes for large volumes), thus limiting their practical usefulness.
More recently, approaches based on deep learning have become popular: by transforming users' annotations into certain distance maps, such as euclidean~\cite{li2018interactive}, gaussian~\cite{maninis2018deep} or geodesic distance maps~\cite{wang2018deepigeos,luo2021mideepseg}, ConvNets are trained to exploit such information together with the images; concurrently, a set of works have also considered easing the interaction process, for example, \cite{maninis2018deep,luo2021mideepseg} only require clicks on extreme points and~\cite{zhang2020interactive} uses both positive and negative points. Here, we continue this vein of research, and propose a novel {\bf T}ransformer-based architecture for {\bf I}nteractive {\bf S}egmentation, termed {\bf TIS}. In particular, for images that end up with unsatisfactory predictions, our proposed TIS only requires a few clicks from the end users; each pixel on the image then only needs to compare with these ``exemplar'' clicks, and copy the label information from its closest click. Such a ``compare-and-copy'' procedure can be elegantly achieved by adopting a variant of the Transformer Decoder. Notably, in contrast to the existing approaches that can only segment a single structure at a time, our TIS allows the end users to edit an arbitrary number of categories simultaneously, given that each is provided with at least one exemplar click. We conduct evaluations on three challenging datasets and demonstrate superior performance over the state-of-the-art approaches. \section{Methods} In this paper, we consider a practical scenario for segmenting challenging structures in medical images, that is, to enable end users to correct the model's prediction with only {\em a few clicks} during inference time.
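To make the ``compare-and-copy'' idea concrete before introducing notation, the following is a minimal NumPy sketch of its simplest (hard, nearest-click) form; the feature dimensions, the dot-product similarity, and all variable names are illustrative assumptions, not details of the actual TIS implementation:

```python
import numpy as np

def compare_and_copy(pixel_feats, click_feats, click_labels):
    """Assign each pixel the label of its most similar user click.

    pixel_feats:  (N, m) one feature vector per pixel,
    click_feats:  (k, m) feature vectors under the k user clicks,
    click_labels: (k,)   category of each click.
    """
    sim = pixel_feats @ click_feats.T     # (N, k): "compare"
    nearest = sim.argmax(axis=1)          # closest click per pixel
    return click_labels[nearest]          # (N,): "copy"

rng = np.random.default_rng(0)
pix = rng.standard_normal((10, 4))        # 10 pixels, 4-dim features
clk = rng.standard_normal((3, 4))         # 3 clicks
lab = np.array([0, 1, 2])                 # their categories
pred = compare_and_copy(pix, clk, lab)
print(pred.shape)  # (10,)
```

In TIS itself, this hard assignment is realised softly through attention in a Transformer Decoder variant, as detailed in the Methods section.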
\subsection{Problem Scenario} Given a \textbf{training set} of $n$ image-mask pairs, $\mathcal{D}_{\text{train}} = \{(\mathcal{I}_1, y_1), \allowbreak \dots, (\mathcal{I}_n, y_n)\}$, where $\mathcal{I} \in \mathbb{R}^{H \times W \times D }$ and $y \in \mathbb{R}^{H \times W \times D \times C}$, with $H, W, D, C$ denoting height, width, depth, and number of categories, respectively. Our goal is to train a segmentation model that can not only give automatic predictions, but also allow the end users to refine the predictions during {\bf inference time}: \begin{align} \hat{y} = \mathrm{\Phi_{\text{REF}}}(\mathrm{\Phi_{\text{ENC}}}(\mathcal{I}; \Theta_e), \mathcal{A}; \Theta_r) \in \mathbb{R}^{H \times W \times D \times C}, \end{align} where $\hat{y}$ denotes the final prediction, $\mathcal{A} = \{ (p_1, c_1), \dots, (p_k, c_k)\}$ refers to the user's interactions in the form of pixel clicks, with $p_i \in \mathbb{R}^{3}$ denoting the spatial position of each given click, and $c_i \in \mathbb{R}^C$ referring to the semantic category of the annotated pixel. In the following sections, we will detail the building blocks of our proposed architecture~(as shown in Fig.~\ref{overview}(a)), namely, an encoder network~($\mathrm{\Phi}_{\text{ENC}}$, parameterized with $\Theta_e$) that extracts image features, and provides automatic structure segmentations; and a refinement module~($\mathrm{\Phi}_{\text{REF}}$, parameterized with $\Theta_r$) that refines its prediction with the provided clicks from end users through {\em click encoding} and {\em label assignment}. Generally, the outputs from the encoder are architecture-agnostic, and experts can continuously interact with the refinement module until they are satisfied. \begin{figure}[t] \includegraphics[width=\textwidth]{fig/framework.pdf} \caption{Structure of our method. (a) Overview of the whole framework. (b) Structure of click encoding module. (c) Structure of label assignment module.
} \label{overview} \end{figure} \subsection{Encoder~($\mathrm{\Phi_{\text{ENC}}}$)} \label{sec:Encoder} As the image encoder, we adopt the popular nnU-Net~\cite{Isensee20}, which maps the input images to segmentation masks: \begin{align} \{y_{\text{mask}}, \text{\hspace{2pt}} f_{\text{enc}}\} = \mathrm{\Phi}_{\text{ENC}}(\mathcal{I}; \text{\hspace{2pt}}\Theta_e), \end{align} where $y_{\text{mask}} \in \mathbb{R}^{H \times W \times D \times C}$ refers to the output mask with $C$ categories, and $f_{\text{enc}} \in \mathbb{R}^{\frac{H}{2} \times \frac{W}{2} \times \frac{D}{2} \times m}$ denotes the dense feature embeddings from the penultimate layer of nnU-Net. During training, the encoder is simply trained with the standard pixelwise cross-entropy loss. \\ \subsection{Refinement~($\mathrm{\Phi_{\text{REF}}}$)}\label{sec:Decoder} At a high level, our method is based on the intuition that users' clicks can be treated as a set of ``exemplars'', and each pixel to be segmented can simply be compared with these ``exemplars'' to copy the category information from its closest clicks. Technically, we adopt an architecture that interleaves Transformer Decoder variants, alternating between {\em click encoding} and {\em label assignment}, as detailed below. \paragraph{\bf Tokenization: } To start with, we vectorize the encoder's output encodings: \begin{align} \mathcal{V}_{\text{enc}} &= \phi_{\text{flatten}}(f_{\text{enc}}) + \phi_{\text{PE}}(\Omega), \end{align} where $\phi_{\text{flatten}}$ refers to a reshape operation, and $\phi_{\text{PE}}$ refers to the learnable positional encodings for a dense 3D grid~($\Omega$); $\mathcal{V}_{\text{enc}} \in \mathbb{R}^{\frac{HWD}{8} \times m}$ ends up being the tokenized vector sequence of the encoder embeddings.
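A minimal numpy sketch of the tokenization step above (function and variable names are our own; the learnable positional-encoding table $\phi_{\text{PE}}$ is modeled as a plain array):

```python
import numpy as np

def tokenize(f_enc, pos_embed):
    """Flatten a dense 3D feature map (H', W', D', m) into a token
    sequence and add positional encodings, mirroring
    V_enc = phi_flatten(f_enc) + phi_PE(Omega)."""
    h, w, d, m = f_enc.shape
    tokens = f_enc.reshape(h * w * d, m)   # phi_flatten
    return tokens + pos_embed              # phi_PE: one vector per grid site

# toy usage: a 4x4x4 grid with 8-dimensional features
f_enc = np.random.rand(4, 4, 4, 8)
pos_embed = np.zeros((64, 8))              # stands in for learned encodings
v_enc = tokenize(f_enc, pos_embed)
```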
\paragraph{\bf Click Encoding: } To incorporate the users' feedback in refinement, we encode these clicks into vectors: \begin{align} & \mathcal{V}_{\text{click}} = \phi_{\text{index}}(f_{\text{enc}}, \mathcal{A}(p)),\\ & \mathcal{V}_{\text{click}} = \psi_{\text{T-Dec}}(\underbrace{W^q_1 \cdot \mathcal{V}_{\text{click}}}_{\text{Query}}, \text{\hspace{3pt}} \underbrace{W^k_1 \cdot \mathcal{V}_{\text{enc}}}_{\text{Key}}, \text{\hspace{3pt}} \underbrace{W^v_1 \cdot \mathcal{V}_{\text{enc}}}_{\text{Value}}), \end{align} where $\phi_{\text{index}}$ refers to an indexing function that simply picks out vectors from the dense feature map, based on the corresponding spatial locations of the clicks. To avoid notation abuse, we use $\mathcal{V}_{\text{click}} \in \mathbb{R}^{k \times m}$ to represent both the result of indexing the initial click embeddings and the one after enrichment by a standard Transformer Decoder~($\psi_{\text{T-Dec}}$), with the input {\em Query} derived from a linear projection of the click embeddings, while {\em Key} and {\em Value} are generated by applying two different linear transformations to the encoder's outputs. Specifically, the Transformer Decoder consists of multi-head cross-attention (MHCA), multi-head self-attention (MHSA), a feed-forward network (FFN), and residual connections, which effectively enriches the click embeddings by aggregating information from the encoder outputs. For more details, we refer the readers to the original Transformer paper~\cite{Vaswani17}. \paragraph{\bf Label Assignment: } Here, we construct a ``compare-and-copy'' mechanism that assigns labels to each pixel based on two factors: (1) the similarity between pixels and users' clicks, and (2) the weighting of the automatic segmentation.
Specifically, we adopt a Transformer Decoder variant, as shown below: \begin{gather*} \mathcal{V}_{\text{enc}} = \alpha \cdot \psi_{\text{T-Dec}}(\underbrace{W^q_2 \cdot \mathcal{V}_{\text{enc}}}_{\text{Query}}, \text{\hspace{3pt}} \underbrace{W^k_2 \cdot \mathcal{V}_{\text{click}}}_{\text{Key}}, \text{\hspace{3pt}} \underbrace{\phi_{\text{CE}}(\mathcal{A}(c))}_{\text{Value}}) +(1-\alpha) \cdot \phi_{\text{CE}}(y_{\text{mask}}), \end{gather*} where $\phi_{\text{CE}}(\cdot)$ refers to a projection~(accomplished by a learnable MLP) of the category labels to high-dimensional embeddings, operating on both the user's clicks and the automatic segmentations; these embeddings are then used for constructing the {\em Value} in the Transformer Decoder. As for {\em Query} and {\em Key}, they are computed by applying linear transformations to the dense features and the click embeddings, respectively. {\bf Note that}, with such an architecture design, the cross attention effectively computes a similarity matching between each pixel~($\mathcal{V}_{ \text{enc}}$) and the users' clicks~($\mathcal{V}_{\text{click}}$), and copies back the ``category'' information, balanced with a {\em learnable} weighting scalar~($\alpha$) between predictions and the user's clicks. \vspace{-3pt} \paragraph{\bf Training: } As mentioned above, features after label assignment have now incorporated the category information obtained from both the automatic segmentation and the clicks. To properly train the model, we simulate the user interactions at the training stage, where clicks are sampled based on the discrepancy between the automatic prediction and the ground-truth annotations, {\em i.e.}~erroneous predictions. After stacking 6 layers of {\em click encoding} and {\em label assignment} modules, we adopt a {\bf linear MLP layer} to predict the segmentation, and train it with a pixelwise cross-entropy loss.
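The ``compare-and-copy'' cross-attention above can be sketched in numpy as a single attention head (the projections $W^q_2$, $W^k_2$, the multi-head structure, self-attention, and the FFN of the full decoder are omitted; all names are our own):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_assignment(v_enc, v_click, click_label_emb, mask_label_emb, alpha):
    """Compare each pixel token with the click embeddings and copy label
    embeddings back, blended with the automatic segmentation by alpha.

    v_enc           : (N, m) pixel tokens (queries)
    v_click         : (k, m) click embeddings (keys)
    click_label_emb : (k, c) phi_CE of the clicked categories (values)
    mask_label_emb  : (N, c) phi_CE of the automatic prediction
    alpha           : learnable blending scalar (a plain float here)
    """
    m = v_enc.shape[1]
    attn = softmax(v_enc @ v_click.T / np.sqrt(m))  # pixel-vs-click similarity
    copied = attn @ click_label_emb                 # "copy" the label information
    return alpha * copied + (1.0 - alpha) * mask_label_emb
```

With $\alpha = 0$ the output degenerates to the automatic-segmentation embedding, matching the weighting role of $\alpha$ in the equation above.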
\vspace{-3pt} \paragraph{\bf Discussion: } Inspired by the observation that pixels of the same category should ideally be clustered together in some high-dimensional space, we adopt a variant of the Transformer Decoder, which naturally facilitates the ``compare-and-copy'' mechanism, {\em i.e.}~it computes the similarity between pixels and the user's clicks, then copies the category information correspondingly. Note that such a procedure works in the same manner for any number of classes, in contrast to the existing approaches, which are limited to binary segmentation. \section{Experiments} \subsection{Datasets} In this paper, we conduct experiments on the Medical Segmentation Decathlon (MSD) datasets~\cite{antonelli2021medical}. Specifically, we focus on three challenging subsets: \paragraph{\bf Lung (training set) } consists of preoperative thin-section CT scans from 63 patients with non-small cell lung cancer. The goal is to segment the tumors within the lung (L1). We randomly split them into 50 cases for training and the remaining 13 cases for evaluation. \vspace{-3pt} \paragraph{\bf Colon (training set) } consists of 126 portal venous phase CT scans of patients undergoing resection of primary colon cancer. The goal is to segment the colon cancer primaries (L1). 100 cases are randomly selected for training and the remaining 26 for evaluation. \vspace{-3pt} \paragraph{\bf Pancreas (training set) } consists of 281 portal venous phase CT scans of patients undergoing resection of pancreatic masses. The goal is to segment {\em both} the pancreatic parenchyma (L1) and the pancreatic tumor (L2). 224 cases are randomly picked for training and the remaining 57 for evaluation.
\subsection{Evaluation Metrics} For quantitative evaluation, we employ the Dice Similarity Coefficient (DSC): \begin{equation} \operatorname{DSC}\left(\mathcal{R}_{p}, \mathcal{R}_{g}\right)=\frac{2\left|\mathcal{R}_{p} \cap \mathcal{R}_{g}\right|}{\left|\mathcal{R}_{p}\right|+\left|\mathcal{R}_{g}\right|}, \end{equation} where $\mathcal{R}_{p}, \mathcal{R}_{g}$ represent the predicted region and the ground-truth region, respectively, and $|\cdot|$ is the number of pixels/voxels in the corresponding region. The goal is thus to achieve higher accuracy with fewer user clicks. \subsection{Implementation Details} We use nnU-Net~\cite{Isensee20} as our encoder, and retrain it on the corresponding datasets under the default settings. Following the official code of nnU-Net\footnote{\url{https://github.com/MIC-DKFZ/nnUNet.}}, we set ``network'' to ``3d-fullres'' and ``network-trainer'' to ``nnUNetTrainerV2'', and leave the other options at their defaults. Due to the complexity of the Transformer and the memory cost, we use features from the penultimate layer of nnU-Net. In practice, the features are cropped based on the automatic segmentation and the clicks before being processed by our model. We train our model for 200 epochs, optimized with the AdamW optimizer, starting from a learning rate of $10^{-2}$ and decaying it by a factor of 0.9 every 10 epochs. For more details, please see the supplementary materials.
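The DSC defined above translates directly into code; a small sketch for binary masks (a hypothetical helper, not from the paper's codebase):

```python
import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient 2|P & G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks: perfect match
```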
\begin{table}[t] \centering \setlength\tabcolsep{8pt} \resizebox{.9\textwidth}{!}{% \begin{tabular}{l|c|c|c|c|c|c} \toprule \multirow{2}{*}{Metric}& \multirow{2}{*}{Method}& \multirow{2}{*}{Year}& \multicolumn{1}{c|}{Lung} & \multicolumn{1}{c|}{Colon} & \multicolumn{2}{c}{Pancreas} \\ \cmidrule{4-7} & & &\multicolumn{1}{c|}{L1} & \multicolumn{1}{c|}{L1} & \multicolumn{1}{c}{L1} & \multicolumn{1}{c}{L2} \\ \midrule \multirow{6}{*}{Dice}& Automatic~\cite{Isensee20} & 2018 & 64.99 & 44.84 & 82.16 & 49.34\\ & InterCNN~\cite{bredell2018iterative} & 2018 & 80.07$_{\pm2.65}$ & 69.58$_{\pm2.97}$ & 82.31$_{\pm3.28}$ & 74.17$_{\pm2.91}$ \\ & DeepIGeoS~\cite{wang2018deepigeos} & 2019& 81.74$_{\pm1.72}$ & 70.61$_{\pm2.46}$ & 82.77$_{\pm1.51}$ & 75.36$_{\pm2.60}$ \\ & BS-IRIS~\cite{ma2020boundary} & 2020 & 81.67$_{\pm2.14}$ & 71.27$_{\pm1.82}$ & 85.16$_{\pm1.34}$ & 76.49$_{\pm2.48}$ \\ & MIDeepSeg~\cite{luo2021mideepseg} & 2021& 82.31$_{\pm3.58}$ & 71.89$_{\pm3.09}$ & 84.69$_{\pm4.03}$ & 70.34$_{\pm4.36}$ \\ & {\bf Ours} & 2022 & {\bf 85.07}$_{\pm1.55}$ & {\bf 73.03}$_{\pm1.68}$ & {\bf 87.72}$_{\pm1.28}$ & {\bf 77.91}$_{\pm2.07}$ \\ \bottomrule \end{tabular}}% \vspace{5pt} \caption{Performance of different methods on three datasets with 10 clicks. } \label{tab:10clickspreformance} \vspace{-0.7cm} \end{table} \section{Results} \subsection{User Interactions Simulation} Following previous works~\cite{wang2018deepigeos,ma2020boundary,luo2021mideepseg}, we adopt a robust agent to simulate the user clicks. The clicking positions are chosen at the center of the largest mis-segmented region with respect to the ground-truth. For each clicking step, a small disturbance $\epsilon$ (10 pixels) is added to each click's position to imitate the behavior of a real user and force the model to be more robust. Additionally, in cases where the centroid is not within the mis-segmented region, we pick an arbitrary point in the non-boundary region as the click.
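The simulation agent above can be sketched as follows, in a simplified 2D version with 4-connected components (the paper operates on 3D volumes; the function names and the in-region fallback are our own assumptions):

```python
import numpy as np
from collections import deque

def simulate_click(pred, gt, jitter=0, rng=None):
    """Return a click inside the largest mis-segmented region.

    pred, gt : 2D binary masks of the same shape
    jitter   : max absolute per-axis offset, imitating a real user
    """
    error = pred.astype(bool) ^ gt.astype(bool)
    seen = np.zeros_like(error)
    best = []
    for start in zip(*np.nonzero(error)):          # BFS over 4-connected components
        if seen[start]:
            continue
        comp, queue = [], deque([start])
        seen[start] = True
        while queue:
            r, c = queue.popleft()
            comp.append((r, c))
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < error.shape[0] and 0 <= nc < error.shape[1] \
                        and error[nr, nc] and not seen[nr, nc]:
                    seen[nr, nc] = True
                    queue.append((nr, nc))
        if len(comp) > len(best):
            best = comp
    if not best:
        return None                                # nothing to correct
    r, c = np.round(np.mean(best, axis=0)).astype(int)
    if jitter:
        rng = rng or np.random.default_rng()
        r = int(np.clip(r + rng.integers(-jitter, jitter + 1), 0, error.shape[0] - 1))
        c = int(np.clip(c + rng.integers(-jitter, jitter + 1), 0, error.shape[1] - 1))
    if not error[r, c]:                            # centroid fell outside the region:
        r, c = best[0]                             # fall back to a point inside it
    return int(r), int(c)
```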
\subsection{Comparisons with State-of-the-Art} \paragraph{\bf Performance on Different Datasets: } As shown in Table~\ref{tab:10clickspreformance}, given 10 clicks, our approach achieves the best performance on all three challenging datasets. It is worth noting that the ``Automatic'' row shows the results of the state-of-the-art automatic segmentation model~(nnU-Net~\cite{Isensee20}), while our proposed TIS improves the prediction by a large margin with only a few clicks. \vspace{-3pt} \paragraph{\bf Improvements in One Interaction Sequence: } In Fig.~\ref{improvements_different}, we plot the results of multi-round refinement for different approaches. As can be seen, although all interactive segmentation approaches show improvements with interaction, our proposed TIS maintains the growing trend as more clicks become available, and substantially outperforms the others in the end. Specifically, for structures with small regions and heterogeneous appearance (for example, cancers), our method yields smoother and more robust refinements through interaction. \begin{figure}[t] \includegraphics[width=\textwidth]{fig/squence_improvement.pdf} \caption{Improvements in one interactive sequence for different methods on three datasets (four structures). Note that figures with gray background are two different structures in the same dataset. This figure is best viewed in the PDF version.} \label{improvements_different} \end{figure} \subsection{Ablation Study} In this section, we perform a detailed ablation study by removing individual components from TIS. All ablation experiments are performed on the Pancreas dataset.
\vspace{-3pt} \begin{table}[!htb] \centering \setlength\tabcolsep{5pt} \resizebox{.8\textwidth}{!}{% \begin{tabular}{cc|cc|cc} \hline \multicolumn{2}{c|}{Modules} & \multicolumn{2}{c|}{5 clicks} & \multicolumn{2}{c}{10 clicks} \\ \hline click encoding & label assignment & pancreas & cancer & pancreas & cancer \\ \hline \XSolidBrush& \Checkmark & 82.74 & 66.74 & 83.32 & 71.52 \\ \Checkmark & \XSolidBrush & 81.23 & 70.54 & 80.78 & 69.82 \\ \Checkmark & \Checkmark & {\bf 86.15} & {\bf 72.82} & {\bf 87.72} &{\bf 77.91 } \\ \hline \end{tabular}} \vspace{5pt} \caption{Quantitative ablation study on the critical modules.}\label{ablation} \vspace{-1.2cm} \end{table} \paragraph{\bf Effect of Click Encoding: } The main purpose of click encoding is to add context information to the click embeddings. To validate its effect, we only index the embeddings, but do not encode them through the Transformer Decoder~($\psi_{\text{T-Dec}}$). As shown in Table~\ref{ablation}, the performance degrades severely due to the lack of context information; in particular, after the feature update of label assignment, the network degenerates to refining only a small area around the click. \paragraph{\bf Effect of Label Assignment: } In this part, we {\bf do not} encode the category labels as the {\em Value}; rather, we use an MLP projection, as is normally done in a standard Transformer. Through a stack of Transformer layers, the cross-attention still computes the similarity between pixels and clicks, but is unable to directly copy the label information. At the end, the click embeddings obtained from click encoding are used as classifiers to segment based on similarity. In order to make a fair comparison, we still use the weighting scalar~($\alpha$) to combine the automatic segmentation and the Transformer segmentation. As shown in Table~\ref{ablation}, the performance drops drastically, and it even worsens as the number of clicks increases.
In practice, even when we click on the mis-segmented region, the network is still unable to correct it. \subsection{Visualization of Results} Fig.~\ref{visualization} shows a comparison of the final segmentation results of the automatic nnU-Net and four interactive methods. It can be seen that the state-of-the-art automatic segmentation method nnU-Net often fails in these challenging scenarios, showing the potential of interactive segmentation methods. Our method, given the same 10 clicks, produces more detailed segmentations and the largest performance improvements among all interactive methods, especially for small regions such as lung cancer and colon cancer. \begin{figure}[t] \includegraphics[width=.97\textwidth]{fig/vis_single.pdf} \caption{Visualization of the qualitative comparison on three datasets (four structures). All interactive methods are provided with 10 clicks.} \label{visualization} \vspace{-2pt} \end{figure} \subsection{Visualization of the Interaction Process} Fig.~\ref{appendix} shows the interaction process, where two examples from the Pancreas dataset are detailed. For each step, we provide the mis-segmented region~(``Error''), the click position~(``Click'') and the prediction of our method~(``Pred''). The performance improves as the number of clicks increases, with the mis-segmented region shrinking alongside. \begin{figure}[h] \centering \includegraphics[width=0.93\textwidth]{fig/app_single.pdf} \caption{Visualization of the interaction process with 5 clicks. Two examples are shown. } \label{appendix} \end{figure} \section{Conclusion} We propose a novel Transformer-based framework~({\bf TIS}) for interactive segmentation. It is designed for a realistic and challenging scenario, where automatic segmentations largely underperform human experts.
The core idea is to treat the users' clicks as ``exemplars''; our proposed method segments the structures of interest by comparing each pixel on the image with the provided clicks and copying the label information from them. Specifically, our method interleaves two variants of Transformer Decoders, alternating between {\em click encoding} and {\em label assignment}. We validate our method on three challenging datasets and demonstrate superior performance over the existing state-of-the-art methods. Additionally, our method breaks through the limitation of previous methods, which can only segment a single structure at a time.
\section{Introduction: Superalgebras, constructions} Denote $\NO=\{0,1,2,\dots\}$. By $K$ denote the ground field of characteristic $\ch K\ne 2$, and by $\langle S\rangle_K$ the linear span of a subset $S$ in a $K$-vector space. \subsection{Associative and Lie superalgebras} Superalgebras appear naturally in physics and mathematics~\cite{Kac77,Scheunert,BMPZ}. Let $\Z_2=\{\bar 0,\bar 1\}$ be the group of order 2. A {\em superalgebra} $A$ is a $\Z_2$-graded algebra $A=A_{\bar 0}\oplus A_{\bar 1}$. The elements $a\in A_\alpha$ are called {\em homogeneous of degree} $|a|=\alpha\in\Z_2$. The elements of $A_{\bar 0}$ are {\em even}, those of $A_{\bar 1}$ {\em odd}. In what follows, if $|a|$ enters an expression, then it is assumed that $a$ is homogeneous of degree $|a|\in\Z_2$, and the expression extends to the other elements by linearity. Let $A,B$ be superalgebras; their {\em tensor product} $A\otimes B$ is a superalgebra whose space is the tensor product of the spaces $A$ and $B$ with the induced $\Z_2$-grading and the product satisfying Kaplansky's rule: $$ (a_1\otimes b_1) (a_2\otimes b_2)=(-1)^{|b_1|\cdot |a_2|}a_1a_2\otimes b_1b_2,\quad a_i\in A,\ b_i\in B. $$ An {\em associative superalgebra} $A$ is just a $\Z_2$-graded associative algebra $A=A_{\bar 0}\oplus A_{\bar 1}$. Let $V=V_{\bar 0}\oplus V_{\bar 1}$ be a $\Z_2$-graded vector space. Then $\End(V)$ is an associative superalgebra, where $\End(V)_{a}=\{\phi\in\End (V)\mid \phi(V_{b})\subset V_{a+b},\ b\in\Z_2\}$, $a\in \Z_2$. In case $\dim V_{\bar 0}=m$, $\dim V_{\bar 1}=k$, this superalgebra is denoted by $\M(m|k)$. One has an isomorphism of superalgebras $\M(a|b)\otimes \M(c|d)\cong \M(ac+bd|ad+bc)$ for all $a,b,c,d\ge 0$.
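The graded dimension count behind the isomorphism $\M(a|b)\otimes \M(c|d)\cong \M(ac+bd\,|\,ad+bc)$ can be verified mechanically; a small sketch (the helper names are ours):

```python
def end_dims(m, k):
    """(dim even part, dim odd part) of M(m|k) = End(V),
    where dim V_even = m and dim V_odd = k."""
    return (m * m + k * k, 2 * m * k)

def tensor_dims(p, q):
    """Graded dimensions of a tensor product of superalgebras:
    even (x) even and odd (x) odd land in the even part."""
    (e1, o1), (e2, o2) = p, q
    return (e1 * e2 + o1 * o2, e1 * o2 + o1 * e2)
```

For instance, $\M(2|1)\otimes\M(1|1)$ and $\M(3|3)$ both have graded dimensions $(18,18)$.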
A {\em Lie superalgebra} is a $\Z_2$-graded algebra $L=L_{\bar 0}\oplus L_{\bar 1}$ with an operation $[\ ,\ ]$ satisfying the axioms ($\ch K\ne 2,3$): \begin{itemize} \item $[x,y]=-(-1)^{|x|\cdot |y| }[y,x]$,\qquad\qquad (super-anticommutativity); \item $[x,[y,z]]=[[x,y],z]+(-1)^{|x|\cdot| y|}[y,[x,z]]$,\qquad (super Jacobi identity). \end{itemize} \subsection{Poisson superalgebras}\label{SSPoisson} A $\Z_2$-graded vector space $A=A_{\bar 0}\oplus A_{\bar 1}$ is called a {\it Poisson superalgebra} provided that, beside the addition, $A$ has two $K$-bilinear operations as follows: \begin{itemize} \item $A=A_{\bar 0}\oplus A_{\bar 1}$ is an associative superalgebra with unit whose multiplication is denoted by $a\cdot b$ (or $ab$), where $a, b\in A$. We assume that $A$ is {\it supercommutative}, i.e. $a\cdot b=(-1)^{|a| |b|}b\cdot a$, for all $a,b\in A$. \item $A=A_{\bar 0}\oplus A_{\bar 1}$ is a Lie superalgebra whose product is traditionally denoted by the {\it Poisson bracket} $\{a, b\}$, where $a, b\in A$. \item these two operations are related by the {\it super Leibniz rule}: \begin{equation*} \{a\cdot b, c\}=a\cdot\{b, c\}+(-1)^{|b|\cdot |c|}\{a, c\}\cdot b,\qquad a, b, c \in A. \end{equation*} \end{itemize} Let $L$ be a Lie superalgebra, and let $\{U_n\mid n\ge 0\}$ be the natural filtration of its universal enveloping algebra $U(L)$ by degree in $L$. Consider the {\it symmetric algebra} $S(L)=\gr U(L)=\mathop{\oplus}\limits_{n=0}^\infty U_{n}/U_{n-1}$, where $U_{-1}=0$ (see~\cite{Dixmier}). Recall that $S(L)$ is identified with the supercommutative superalgebra $K[v_i\,|\, i\in I]\otimes \Lambda (w_j\,|\,j\in J)$, where $\{v_i\,|\, i\in I\}$, $\{w_j\,|\, j\in J\}$ are bases of $L_{\bar 0}$, $L_{\bar 1}$, respectively. Define a Poisson bracket by setting $\{v,w\}=[v,w]$, where $v,w\in L$, and extending to the whole of $S(L)$ by linearity and the Leibniz rule. Then $S(L)$ is turned into a Poisson superalgebra, called the {\it symmetric algebra} of $L$.
Let $L(X)$ be the free Lie superalgebra generated by a graded set $X$; then $S(L(X))$ is a free Poisson superalgebra~\cite{Shestakov93}. Let us consider one more example. Let $H_{n}=\Lambda(x_1,\dots,x_n,y_1,\dots,y_n)$ be the Grassmann superalgebra supplied with the bracket determined by $\{x_i,y_j\}=\delta_{i,j}$, $\{x_i,x_j\}=\{y_i,y_j\}=0$ for $1\le i,j\le n$. We obtain the simple {\it Hamiltonian Poisson superalgebra} with the bracket: \begin{equation}\label{poissonHn} \{f,g\}=(-1)^{|f|-1}\sum_{i=1}^n \bigg(\frac{\partial f}{\partial x_i}\frac{\partial g}{\partial y_i} +\frac{\partial f}{\partial y_i}\frac{\partial g}{\partial x_i}\bigg),\qquad f,g\in H_{n}. \end{equation} Let $A$, $P$ be Poisson superalgebras. Their tensor product $A\otimes P$ is a Poisson superalgebra with the operations: \begin{itemize} \item $(a\otimes v)\cdot (b\otimes w)= (-1)^{|v||b|}ab\otimes vw$, \item $\{a\otimes v, b\otimes w\}= (-1)^{|v||b|} (\{a, b\}\otimes vw+ ab\otimes \{v,w\})$, for all $a,b\in A$, $v,w\in P$. \end{itemize} \subsection{Jordan superalgebras} While studying Jordan (super)algebras we always assume that $\ch K\ne 2$. A {\it Jordan algebra} is an algebra $J$ satisfying the identities \begin{itemize} \item $ab=ba$; \item $a^2(ca)=(a^2c)a$. \end{itemize} A {\em Jordan superalgebra} is a $\Z_2$-graded algebra $J=J_{\bar 0} \oplus J_{\bar 1}$ satisfying the graded identities: \begin{itemize} \item $ab=(-1)^{|a||b|}ba$; \item $(ab)(cd)+(-1)^{|b||c|}(ac)(bd)+(-1)^{(\!|b|+|c|)|d|}(ad)(bc) \\ =((ab)c)d+(-1)^{|b|(\!|c|+|d|)+|c||d|}((ad)c)b+(-1)^{|a|(\!|b|+|c|+|d|)+|c||d|}((bd)c)a.$ \end{itemize} Let $A=A_{\bar 0}\oplus A_{\bar 1}$ be an associative superalgebra. The same space supplied with the product $a\circ b=\frac 12(ab+(-1)^{|a||b|}ba)$ is a Jordan superalgebra $A^{(+)}$. A Jordan superalgebra $J$ is called {\it special} if it can be embedded into a Jordan superalgebra of the type $A^{(+)}$.
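For purely even $A$ (so all signs are trivial), the product $a\circ b=\frac 12(ab+ba)$ on square matrices indeed satisfies the two Jordan identities above; a quick numerical sketch (an illustration, not a proof):

```python
import numpy as np

def jordan(a, b):
    """The special Jordan product a o b = (ab + ba)/2 on square matrices;
    for ungraded matrices the sign (-1)^{|a||b|} is trivial."""
    return (a @ b + b @ a) / 2.0

rng = np.random.default_rng(1)
a, c = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
a2 = jordan(a, a)                       # a^2 = a o a
lhs = jordan(a2, jordan(c, a))          # a^2 (c a)
rhs = jordan(jordan(a2, c), a)          # (a^2 c) a
```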
Also, $J$ is called {\it i-special} (or {\it weakly special}) if it is a homomorphic image of a special one. I.L.~Kantor suggested the following doubling process, which takes a Poisson (super)algebra $A$ and produces a Jordan superalgebra $\Kan(A)$~\cite{Kantor92}. The $K$-space $\Kan(A)$ is the direct sum $A\oplus \bar A$, where $\bar A$ is a copy of $A$; for $a\in A$, by $\bar a$ we denote the respective element in $\bar A$. In the construction of $\Kan(A)$, the subspace $\bar A$ is supplied with the opposite $\Z_ 2$-grading, i.e., $|\bar a| = 1 - |a|$ for a $\Z_2$-homogeneous $a\in A$. The multiplication $\bullet$ on $\Kan(A)$ is defined by: \begin{align*} a \bullet b &= ab,\\ \bar a \bullet b &= (-1)^{|b|} \overline{ab},\\ a \bullet \bar b &= \overline{ab}, \\ \bar a \bullet \bar b &= (-1)^{|b|} \{a,b\},\qquad a,b\in A. \end{align*} This construction is important because it yielded a new series of finite dimensional simple Jordan superalgebras $\Kan(\Lambda(n))$, $n\ge 2$, where $\Lambda(n)$ is the Grassmann algebra in $n$ variables~\cite{Kantor92,KingMcCrimon92}. \medskip Simple finite dimensional nontrivial Jordan superalgebras over an algebraically closed field of characteristic zero were classified in~\cite{Kac77CA,Kantor92}. Infinite-dimensional simple $\Z$-graded Jordan superalgebras with a unit element over an algebraically closed field of characteristic zero whose components are uniformly bounded were classified in~\cite{KacMarZel01}. Recently, just infinite Jordan superalgebras were studied in~\cite{ZhePan17}. \subsection{Growth} We recall the notion of {\em growth}. Let $A$ be an associative (or Lie) algebra generated by a finite set $X$. Denote by $A^{(X,n)}$ the subspace of $A$ spanned by all monomials in $X$ of length not exceeding $n$, $n\ge 0$. In the case of a Lie superalgebra with $\ch K=2$ we also consider formal squares of odd monomials of length at most $n/2$~\cite{PeOtto,PeSh18FracPJ}.
One defines an {\em (ordinary) growth function}: $$ \gamma_A(n)=\gamma_A(X,n)=\dim_KA^{(X,n)},\quad n\ge 0. $$ Let $f,g:\N\to\R^+$ be eventually increasing and positive valued functions. Write $f(n)\preccurlyeq g(n)$ if and only if there exist positive constants $N,C$ such that $f(n)\le g(Cn)$ for all $n\ge N$. Introduce the equivalence $f(n)\sim g(n)$ if and only if $f(n)\preccurlyeq g(n)$ and $g(n)\preccurlyeq f(n)$. By $f(n)\approx g(n)$, $n\to\infty$, denote that $\lim_{n\to\infty} f(n)/g(n)=1$. Different generating sets of an algebra yield equivalent growth functions~\cite{KraLen}. It is well known that the exponential growth is the highest possible growth for finitely generated Lie and associative algebras. A growth function $\gamma_A(n)$ is compared with polynomial functions $n^\alpha$, $\alpha\in\R^+$, by computing the {\em upper and lower Gelfand-Kirillov dimensions}~\cite{KraLen}: \begin{align*} \GKdim A&=\limsup_{n\to\infty} \frac{\ln\gamma_A(n)}{\ln n}=\inf\{\a>0\mid \gamma_A(n)\preccurlyeq n^\a\} ;\\ \LGKdim A&=\liminf_{n\to\infty}\, \frac{\ln\gamma_A(n)}{\ln n}=\sup\{\a>0\mid \gamma_A(n)\succcurlyeq n^\a\}. \end{align*} By Bergman's theorem, the Gelfand-Kirillov dimension of an associative algebra cannot belong to the interval $(1,2)$~\cite{KraLen}. Similarly, Martinez and Zelmanov proved that there are no finitely generated Jordan algebras with Gelfand-Kirillov dimension strictly between 1 and 2~\cite{MaZe96}. But such a gap does not exist for Lie algebras: the Gelfand-Kirillov dimension of a finitely generated Lie algebra can be an arbitrary number in $\{0\}\cup [1,+\infty)$~\cite{Pe97}. It is known that the construction of Golod yields associative nil-algebras of exponential growth. Using specially chosen relations, Lenagan and Smoktunowicz constructed associative nil-algebras of polynomial growth~\cite{LenSmo07}. Suppose that $L$ is a Lie (super)algebra and $X\subset L$. By $\Lie(X)$ denote the subalgebra of $L$ generated by $X$.
In the case of associative, Poisson, and Jordan (super)algebras we use the notations $\Alg(X)$, $\Poisson(X)$, and $\Jord(X)$, respectively. A grading of an algebra is called {\em fine} if it cannot be split by taking a bigger grading group (see e.g.~\cite{BaSeZa01}). Pro-$p$-groups and $\N$-graded Lie algebras cannot be simple. Instead, one has another important notion. A group (algebra) is {\it just infinite} if and only if it has no non-trivial normal subgroups (ideals) of infinite index (codimension). A group (algebra) is said to be {\it hereditary just infinite} if and only if any normal subgroup (ideal) of finite index (codimension) is just infinite. The Gupta-Sidki groups were the first in the class of periodic groups to be shown to be just infinite~\cite{GuptaSidki83A}. Also, the Grigorchuk group is just infinite but not hereditary just infinite~\cite{Grigorchuk00horizons}. \section{Jordan double of Lie superalgebra} First, we recall the construction of a double of a Lie superalgebra suggested by the second author~\cite{She99}. The goal of the present paper is to study its different applications. Let $L$ be an arbitrary Lie superalgebra. Its symmetric algebra $S(L)$ has the structure of a Poisson superalgebra. Observe that the subspace $H\subset S(L)$ spanned by all tensors of length at least two is its ideal. Thus, one obtains a (rather trivial) Poisson superalgebra $P(L)=S(L)/H$, which equivalently can be obtained as a vector space endowed with two products which are nontrivial only in the following cases: \begin{equation}\label{defP} P(L)=\langle 1\rangle \oplus L,\qquad 1\cdot x=x,\quad \{x,y\}=[x,y], \quad x,y\in L. \end{equation} Using Kantor's double, define a Jordan superalgebra $\Jor(L)=\Kan(P(L))$.
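To make the double concrete, here is a sketch of $\Jor(L)=\langle 1\rangle\oplus L\oplus\langle\bar 1\rangle\oplus\bar L$ for the purely even Heisenberg Lie algebra $[x,y]=z$, where all super-signs are trivial. Elements are stored as tuples $(a,u,b,w)$ representing $a\cdot 1+u+b\cdot\bar 1+\bar w$; besides the unit, the only nontrivial products are $\bar u\bullet\bar w=[u,w]$ and $u\bullet\bar 1=\bar u$ (the representation and all names are our own):

```python
def bracket(u, v):
    """Heisenberg Lie algebra on basis x, y, z: [x, y] = z, the rest zero."""
    coeff = u.get('x', 0) * v.get('y', 0) - u.get('y', 0) * v.get('x', 0)
    return {'z': coeff} if coeff else {}

def add(u, v):
    w = dict(u)
    for key, c in v.items():
        w[key] = w.get(key, 0) + c
    return {key: c for key, c in w.items() if c}

def scale(t, u):
    return {key: t * c for key, c in u.items() if t * c}

def mul(p, q):
    """Product in Jor(L) for a purely even Lie algebra L; p = (a, u, b, w)
    encodes a*1 + u + b*1bar + wbar."""
    a1, u1, b1, w1 = p
    a2, u2, b2, w2 = q
    a = a1 * a2                                       # 1 is the unit
    u = add(add(scale(a1, u2), scale(a2, u1)), bracket(w1, w2))
    b = a1 * b2 + a2 * b1
    w = add(add(scale(a1, w2), scale(a2, w1)),
            add(scale(b2, u1), scale(b1, u2)))        # u * 1bar = ubar
    return (a, u, b, w)
```

One can check the rules $\bar x\bullet\bar y=z$ and $x\bullet\bar 1=\bar x$, and that $(a^2)^2=0$ for elements of the ideal without unit, a property of $\Jor^o(L)$ proved below.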
Equivalently, one can just take a vector space supplied with a product $\bullet$ which is nontrivial only in the following cases (see the example at the end of~\cite{She99}): \begin{equation}\label{defJor} \Jor(L)=\langle 1\rangle \oplus L\oplus \langle \bar 1\rangle \oplus \bar L,\qquad \bar x\bullet \bar y=[x,y], \quad x\bullet \bar 1=(-1)^{|x|}\bar 1\bullet x=\bar x,\quad x,y\in L;\quad 1\text{ is the unit}. \end{equation} If an associative superalgebra $A$ is just infinite, then the related Jordan superalgebra $A^{(+)}$ is just infinite as well~\cite{ZhePan17}. We establish a similar fact; for the convenience of the reader we repeat our arguments, see~\cite{PeSh18FracPJ}. \begin{Lemma}\label{Ljust-inf-Jor} Let $L$ be a Lie superalgebra, consider the Jordan superalgebra $\Jor(L)$. \begin{enumerate} \item $\Jor(L)$ is just infinite if and only if $L$ is just infinite. \item The ideal without unit $\Jor^o(L)=L\oplus \langle \bar 1\rangle \oplus \bar L$ is solvable of length 3. \item $(a^2)^2=0$ for any $a\in\Jor^o(L)$. \end{enumerate} \end{Lemma} \begin{proof} Suppose that $L$ is not just infinite. Then there exists an ideal $0\ne I\triangleleft L$ of infinite codimension, and $I\oplus \bar I$ is a nontrivial ideal of infinite codimension in $\Jor(L)$. Therefore, $\Jor(L)$ is not just infinite. Conversely, suppose that $L$ is just infinite. By way of contradiction, assume that $0\ne H\subset \Jor(L)$ is an ideal of infinite codimension. Then $\tilde H=H\cap (L\oplus \bar L)\subset \Jor(L)$ is also an ideal of infinite codimension. Denote by $H_0$ and $\bar H_1$ the projections of $\tilde H$ onto $L$, $\bar L$, respectively ($\bar H_1$ being the copy of a subspace $H_1\subset L$). Since $\tilde H$ is an ideal, $\bar 1\bullet \tilde H=\bar H_0\subset \bar H_1$ and $\bar L\bullet \tilde H=[L,H_1]\subset H_0$, and we get $[L,H_1]\subset H_0\subset H_1\subset L$. Hence $H_0\subset L$ is an ideal, which must be either zero or of finite codimension by our assumption.
If $H_0\subset L$ is of finite codimension, then $\tilde H\subset \Jor(L)$ is of finite codimension, a contradiction. Now assume that $H_0=0$. Then $[L,H_1]=0$ and $H_1$ is central. Taking $0\ne z\in H_1$, we get an ideal $\langle z\rangle \subset L$ of infinite codimension, a contradiction. Thus, $\Jor(L)$ is just infinite. To prove the second claim we repeat the arguments of~\cite{She99}. Denote $J=\Jor^o(L)$. Then $J^2\subset L\oplus \bar L$, $(J^2)^2\subset L$, and $((J^2)^2)^2=0$. Thus, $J$ is solvable of length 3. To prove the last claim, let $a\in J$; then $a=\alpha \bar 1+u_0+u_1+\bar w_0+\bar w_1$, where $\alpha \in K$, $u_0,w_0\in L_{\bar 0}$, $u_1,w_1\in L_{\bar 1}$. Then $a^2=\{w_1,w_1\}+ 2\alpha\bar u_0 \in L_{\bar 0}+\bar L_{\bar 0}$. By the same computations, $(a^2)^2=0$. \end{proof} \section{On Gelfand-Kirillov dimension of Jordan superalgebras} In~\cite{PeSh18FracPJ} we constructed a Jordan superalgebra $\KK$ whose Gelfand-Kirillov dimension belongs to $(1,2)$, which is not possible for associative and Jordan algebras~\cite{KraLen,MaZe96}. The goal of this section is to prove a more general fact: the gap $(1,2)$ does not exist for Jordan {\it super}algebras (Theorem~\ref{TgapSJordan}). \begin{Lemma}\label{LJor} Let $K$ be a field, $\ch K\ne 2$. Suppose that a Lie superalgebra $L$ is $\Z^k$-graded by multidegree in a generating set $X=\{x_1,\ldots,x_k\}$. Let $L=\mathop{\oplus}\limits _{n= 1}^\infty L_n$ be the $\N$-gradation by total degree in the generators. Consider the Jordan double $J=\Jor(L)$. Then \begin{enumerate} \item one obtains $J=\mathop{\oplus}\limits_{n=0}^\infty J_n$, the $\NO$-gradation by degree in the generating set $\bar X=X\cup \{\bar 1\}$, where \item $J_0=\langle 1\rangle$, $J_1=L_1\oplus \langle \bar 1\rangle$, and the remaining components are as follows: $$J_{3n-2}=L_n,\quad J_{3n-1}=\bar L_n,\quad J_{3n}=0, \qquad n\ge 1. $$ \item $J$ is $\Z^{k+1}$-graded by multidegree in $\bar X$.
\end{enumerate} \end{Lemma} \begin{proof} The Jordan superalgebra $J=\Jor(L)$ is generated by $\bar X=X\cup \{\bar 1\}$. Clearly, the $\N$-grading of $L$ by degree in $X$ extends to $J$ as well. Let $J_{n,k}$ denote the space of Jordan monomials that include $n$ letters from $X$ and $k$ letters $\bar 1$, where $n,k\ge 0$. Let us prove that \begin{equation}\label{jnk} J_{n,k}= \begin{cases} \langle 1\rangle, & n=k=0;\\ \langle \bar 1\rangle, & n=0,\ k=1;\\ \delta_{k,2n-2}L_{n}, & k \text { even};\\ \delta_{k,2n-1}\bar L_{n}, & k \text { odd}; \end{cases}\qquad\qquad n,k\ge 0. \end{equation} We proceed by induction on $l=n+k$. The base of induction consists of the cases $l=0$ and $l=1$, where we have $J_{0,0}=\langle 1\rangle $, $J_{1,0}=L_{1}=\langle X\rangle$ and $J_{0,1}=\langle \bar 1\rangle$. Assume that $l\ge 2$. Observe that to have a nonzero component we need at least one letter $\bar 1$, thus $k\ge 1$. Let $k\ge 1$ be odd. Since $\bar{\quad}$ yields a $\Z_2$-grading, we have $J_{n,k}\subset \bar L$; such elements can appear only as the products $\bar 1\bullet L$. Using the inductive assumption, $$J_{n,k}=\bar 1\bullet J_{n,k-1}=\bar 1\bullet \delta_{k-1,2n-2} L_{n}=\delta_{k-1,2n-2}\bar L_{n}=\delta_{k,2n-1}\bar L_n.$$ Let $k\ge 2$ be even. As above, $J_{n,k}\subset L$ and such elements can appear only as the products $\bar L \bullet \bar L$. Using the inductive assumption, we have $$J_{n,k} =\sum_{\substack{n_1+n_2=n\\ k_1+k_2=k\\ k_1,k_2 \text { odd}}} J_{n_1,k_1} \bullet J_{n_2,k_2} =\sum_{\substack{n_1+n_2=n\\ k_1+k_2=k\\ k_1=2n_1-1\\ k_2=2n_2-1}} \bar L_{n_1} \bullet \bar L_{n_2} =\sum_{\substack{n_1+n_2=n\\k=2n-2} } [L_{n_1},L_{n_2}]=\delta_{k,2n-2}L_n. $$ The inductive step is proved. By~\eqref{jnk}, we obtain a direct sum $J=\mathop{\oplus}\limits_{n,k\ge 0}J_{n,k}$, which is a $\Z^2$-gradation of $J$ by the respective multidegree in $X\cup \{\bar 1\}$. By setting $J_m=\mathop{\oplus}\limits_{n+k=m} J_{n,k}$, $m\ge 0$, we get the claimed $\NO$-gradation.
Counting the total degree in~\eqref{jnk}, we get~(ii). The multidegree $\Z^{k+1}$-gradation is proved similarly. \end{proof} Define the generating functions: \begin{align*} \H(L,t)&=\sum_{n=1}^\infty \dim L_n t^n,\\ \H(J,t_1,t_2)&=\sum_{n,m=0}^\infty \dim J_{n,m} t_1^n t_2^m,\\ \H(J,t)&=\sum_{n=0}^\infty \dim J_n t^n=\H(J,t,t). \end{align*} \begin{Corollary} \label{Ccomp} In the notation above: \begin{enumerate} \item \begin{align*} \H(J,t_1,t_2)&=1+t_2+\bigg(\frac 1{t_2}+\frac 1{t_2^2}\bigg)\H(L,t_1t_2^2);\\ \H(J,t)&=1+t+\bigg(\frac 1{t}+\frac 1{t^2}\bigg)\H(L,t^3). \end{align*} \item The growth functions are equivalent: $\gamma_J(\bar X,n)\sim \gamma_L(X,n)$. \item $J$ has the same (lower and upper) Gelfand-Kirillov dimensions as those of $L$ (provided that they exist). \item Let $\gamma_L(X,n)\approx Cn^r$ as $n\to \infty$ for some constants $C>0$, $r\ge 1$; moreover, assume that $C_1n^{r-1}\le \dim L_n\le C_2n^{r-1}$ for $n\ge n_0$ and some constants $n_0,C_1,C_2$. Then $\gamma_J(\bar X,n)\approx 2C(n/3)^r$, $n\to\infty$. \end{enumerate} \end{Corollary} \begin{proof} Using~\eqref{jnk} we get the formulas for the generating functions. We have the growth functions $\gamma_L(X,n)=\sum_{k=1}^n\dim L_k$, $n\ge 1$, and $\gamma_J(\bar X,n)=\sum_{k=0}^n\dim J_k$, $n\ge 0$. By Lemma~\ref{LJor} (ii), we get \begin{align*} \gamma_{J}(\bar X, 3m)=\gamma_{J}(\bar X, 3m-1)& =2+2\gamma_L(X,m); \\ \gamma_{J}(\bar X,3m-2)&=2+2\gamma_L(X,m)-\dim L_m,\qquad m\ge 1. \end{align*} Now it remains to use the polynomial estimates on the growth of $L$. \end{proof} \begin{Theorem}\label{TgapSJordan} Let $K$ be a field, $\ch K\ne 2$. Fix any real number $r\ge 1$. There exists a three-generated Jordan superalgebra $J$ with the following properties. \begin{enumerate} \item $\GKdim J=\LGKdim J=r$; \item $J$ is graded by degree in the generators: $J=\mathop{\oplus}\limits_{n=0}^\infty J_n$, where $J_{3n}=0$ for all $n\ge 1$; \item its ideal without unit $J^o$ is solvable of length 3. 
\end{enumerate} \end{Theorem} \begin{proof} The first author constructed a 2-generated Lie algebra $L$ such that $\GKdim L=\LGKdim L=r$ and $L$ is $\NO^2$-graded by multidegree in the generators; moreover, we have the estimates $C_1n^{r-1}\le \dim L_n\le C_2n^{r-1}$, $n\ge 1$, for some constants $C_1,C_2$~\cite{Pe97}. Also $(L^2)^3=0$; in the notation of varieties of Lie algebras~\cite{Ba}, this condition can be written as $L\in \mathbf N_2\mathbf A$. Now we consider $J=\Jor(L)$ and apply Lemma~\ref{LJor}, Corollary~\ref{Ccomp}, and Lemma~\ref{Ljust-inf-Jor}. \end{proof} \section{Just infinite nil Jordan superalgebra of finite width} The Grigorchuk and Gupta-Sidki groups play a fundamental role in modern group theory~\cite{Grigorchuk80,GuptaSidki83}. They are natural examples of self-similar finitely generated periodic groups. We discuss their analogues in different classes of algebras. First, we recall their properties. \subsection{Examples of infinite $p$-groups, their width, and related Lie rings} Let $G$ be a group and $G=G_1\supset G_2\supset \cdots$ a {\it central series}, i.e. $(G_i,G_j)\subset G_{i+j}$ for all $i,j\ge 1$. One constructs a related $\N$-graded Lie ring $L=\oplus_{i\ge 1} L_i$, where $L_i= G_i/G_{i+1}$, $i\ge 1$. The product is given by $[a G_{i+1},b G_{j+1}]=(a,b)G_{i+j+1}$, where $a\in G_i$, $b\in G_j$, and $(a,b)=a^{-1}b^{-1}ab$. In particular, consider the {\it lower central series} $\gamma_n(G)$, $n\ge 1$, formed by the iterated commutator subgroups. One obtains the Lie ring $$ L(G)=\mathop{\oplus}\limits_{n\ge 1} \gamma_n(G)/\gamma_{n+1}(G). $$ Let $\Delta$ be the {\it augmentation ideal} of the group ring $K[G]$. One defines the {\it dimension subgroups} $G_n=\{g\in G\mid 1-g\in \Delta^n\}$, $n\ge 1$; they yield a central series. Assume that $\ch K=p>0$. 
Then $(G_n\mid n\ge 1)$ is also called the {\it lower central $p$-series}; its factors are elementary abelian $p$-groups, and one obtains a restricted Lie algebra over the prime field $\F_p$~\cite{Passi}: $$ L_{\F_p}(G)=\mathop{\oplus}\limits_{n\ge 1} G_n/G_{n+1}. $$ One also defines $L_{R}(G)=L_{\F_p}(G)\otimes_{\F_p} R$, where $R$ is a commutative ring. The structure constants of both Lie rings $L(G)$ and $L_{\F_p}(G)$ of the Grigorchuk group $G$ are computed in~\cite{BaGr00}. Also, Bartholdi presented $L_{\F_2}(G)$ as a self-similar restricted Lie algebra~\cite{Bartholdi15}. A graded Lie ring $L=\oplus_{i\ge 1} L_i$ is said to be {\it nil graded} if for any homogeneous element $x\in L_i$, $i\ge 1$, the mapping $\ad x$ is nilpotent. In the case of the Grigorchuk group $G$, the restricted Lie algebra $L_{\F_2}(G)$ is nil graded because the group $G$ is periodic~\cite{BaGr00,Zel93}. There is also a similar general result: the Lie ring $L(G)$ associated to the lower central series of a finitely generated residually-$p$ torsion group is nil graded~\cite{MaZe16}. Thus, the Lie ring $L(G)$ of the Grigorchuk group $G$ is also nil graded. On the other hand, the following ring extension $L_{\F_2[x,y]}(G)$ of the restricted Lie algebra related to the dimension series of the Grigorchuk group $G$ is no longer nil graded; namely, the first component has non-nil elements~\cite{SmaZel06}. (One can also apply those arguments to the Lie ring $L(G)$ after extending scalars to $\Z[x,y]$.) Bartholdi established the stronger fact that the restricted Lie algebra $L_{\F_2}(G)$ over the prime field $\F_2$ is nil graded, while $L_{\F_4}(G)$ (i.e. its extension to the field $\F_4$) is no longer nil graded~\cite{Bartholdi15}. A residually-$p$ group $G$ is said to be of {\it finite width} if all factors $G_i/G_{i+1}$ are finite groups of uniformly bounded order. The Grigorchuk group $G$ is of finite width, namely, $\dim_{\F_2} G_i/G_{i+1}\in\{1,2\}$ for $i\ge 2$~\cite{Rozh96,BaGr00}. 
Thus, the respective Lie algebra $L_{\F_2}(G)=\oplus_{i\ge 1} L_i$ is of linear growth. On the other hand, the Gupta-Sidki group is of infinite width~\cite{Bartholdi05}. Similarly, one defines the {\it width} of a residually nilpotent Lie (super)algebra $L$ as the maximum of the dimensions of the lower central series factors $L^n/L^{n+1}$, $n\ge 1$. Lie algebras of finite width over a field of positive characteristic, and the possibility of their classification under additional conditions, are discussed in~\cite{ShaZel97,ShaZel99}. Infinite-dimensional $\N$-graded Lie algebras $L=\mathop{\oplus}\limits_{n=1}^\infty L_n$ with one-dimensional components in characteristic zero were classified by Fialowski~\cite{Fial83}. \subsection{Nonexistence of nil graded Lie algebras in characteristic zero and Jordan algebras of characteristic not 2} Since the Grigorchuk group is of finite width, a ``right analogue'' of it should probably be a Lie algebra of finite width having $\ad$-nil elements. In the next result, the components are of bounded dimension and consist of $\ad$-nil elements. Informally speaking, there are no ``natural analogues'' of the Grigorchuk and Gupta-Sidki groups in the world of {\it Lie algebras of characteristic zero}; we state this precisely in terms of the following result. \begin{Theorem}[{Martinez and Zelmanov~\cite{MaZe99}}] \label{TMarZel} Let $L=\oplus_{\a\in\Gamma}L_\alpha$ be a Lie algebra over a field $K$ of characteristic zero graded by an abelian group $\Gamma$. Suppose that \begin{enumerate} \item there exists $d>0$ such that $\dim_K L_\alpha \le d $ for all $\alpha\in\Gamma$, \item every homogeneous element $a\in L_\a$, $\a\in\Gamma$, is ad-nilpotent. \end{enumerate} Then the Lie algebra $L$ is locally nilpotent. \end{Theorem} In terms of the next result, we can say again that there are no ``natural analogues'' of the Grigorchuk and Gupta-Sidki groups in the class of {\it Jordan algebras} either. 
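To make the hypotheses of Theorem~\ref{TMarZel} concrete, here is a toy illustration; it is our own example and is not taken from~\cite{MaZe99}.

```latex
% A hypothetical toy example illustrating Theorem \ref{TMarZel};
% it is ours and is not taken from the cited sources.
Let $\mathfrak h=\langle x,y,z\rangle_K$ be the Heisenberg Lie algebra,
$[x,y]=z$, $[x,z]=[y,z]=0$, graded by $\Gamma=\Z$ with
$\mathfrak h_1=\langle x,y\rangle$ and $\mathfrak h_2=\langle z\rangle$.
Then $\dim_K \mathfrak h_\alpha\le 2$ for all $\alpha\in\Gamma$, and every
homogeneous element $a$ is ad-nilpotent with $(\ad a)^2=0$, since
$\ad a$ maps $\mathfrak h$ into the center $\langle z\rangle$.
In accordance with the theorem, $\mathfrak h$ is nilpotent.
```

The point of the examples discussed below is that no such conclusion is available once one passes to superalgebras.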
\begin{Theorem}[{Zelmanov, private communication~\cite{Zelmanov}}]\label{TZelmanov} Jordan algebras satisfy a verbatim analogue of Theorem~\ref{TMarZel} over a field $K$, $\ch K\ne 2$. \end{Theorem} \subsection{Examples of nil restricted Lie algebras and Lie superalgebras in arbitrary characteristics} The first author constructed an analogue of the Grigorchuk group in the case of restricted Lie algebras of characteristic~2~\cite{Pe06}; Shestakov and Zelmanov extended this construction to an arbitrary positive characteristic~\cite{ShZe08}. Thus, we have examples of finitely generated restricted Lie algebras with a nil $p$-mapping. See further constructions in~\cite{PeSh09,PeSh13fib,PeShZe10,Bartholdi15}. A family of restricted Lie algebras of slow growth with a nil $p$-mapping was constructed in~\cite{Pe17}; in particular, it contains a continuum subfamily of such algebras with Gelfand-Kirillov dimension one whose growth is not linear. As a particular case, we get a nil $\Z_2$-graded Lie algebra of width~2 over a field of characteristic~2, whose $p$-hull has width~3 and a non-nil $p$-mapping~\cite{PeOtto}. In contrast with Theorem~\ref{TMarZel}, there are natural analogues of the Grigorchuk and Gupta-Sidki groups in the world of {\it Lie superalgebras} of {\it arbitrary characteristic}~\cite{Pe16}, where two Lie superalgebras were constructed, the most interesting cases being those of characteristic zero; see further examples in~\cite{PeOtto,PeSh18FracPJ}. In all these examples (four in total), $\ad a$ is nilpotent, $a$ being an even or odd element with respect to the corresponding $\Z_2$-gradings. This property is an analogue of the periodicity of the Grigorchuk and Gupta-Sidki groups. The second Lie superalgebra $\QQ$ in~\cite{Pe16} has a natural fine $\Z^3$-gradation with at most one-dimensional components. 
In particular, $\QQ$ is a nil finely $\Z^3$-graded Lie superalgebra, which shows that an extension of Theorem~\ref{TMarZel} to Lie {\it super}algebras of characteristic zero is not valid. \medskip \subsection{Examples of nil graded Lie and Jordan superalgebras of finite width} The Jordan superalgebra $\KK$ constructed in~\cite{PeSh18FracPJ} shows that an extension of Theorem~\ref{TZelmanov} to {\it Jordan superalgebras}, the characteristic being not 2, is not valid. The goal of the present section is to provide a similar but ``smaller'' example, namely, a nil graded Jordan superalgebra $\JJ$ of {\it finite width} (Theorem~\ref{TJOtto}). These facts resemble those for Lie algebras and superalgebras mentioned above. \medskip Both Lie superalgebras of~\cite{Pe16} and the Lie superalgebra of~\cite{PeSh18FracPJ} are of infinite width. Now we shall use the Lie superalgebra of finite width constructed in~\cite{PeOtto}. Let $\Lambda=\Lambda(x_i\vert i\geq 0)$ be the Grassmann algebra. The Grassmann variables and the respective superderivatives $\lbrace x_i,\dd_i\mid i\geq 0\rbrace$ are odd elements of the superalgebra $\End\Lambda$. Define the so-called {\it pivot elements}: \begin{equation}\label{pivot} v_i=\dd_i+x_{i}x_{i+1}(\dd_{i+2}+x_{i+2}x_{i+3}(\dd_{i+4}+x_{i+4}x_{i+5}(\dd_{i+6}+\ldots)))\in\Der\Lambda,\qquad i\geq 0. \end{equation} Define the {\em shift} mappings: \begin{equation*} \tau(x_i)=x_{i+1},\quad \tau(\dd_i)=\dd_{i+1},\quad \tau(v_i)=v_{i+1},\qquad i\ge 0. \end{equation*} Consider the Lie superalgebra $\RR=\Lie(v_0,v_1)\subset\Der\Lambda$ and its associative hull $\AA=\Alg(v_0,v_1)\subset \End \Lambda$. We formulate their main properties; for more details, see the original paper~\cite{PeOtto}. \begin{Theorem}[{O. de Morais Costa, V. Petrogradsky~\cite{PeOtto}}]\label{TOtto} Consider the Lie superalgebra $\RR=\Lie(v_0,v_1)$ and its associative hull $\AA=\Alg(v_0,v_1)$, where $\ch K\ne 2$. 
Then \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \item $\RR$ has a monomial basis consisting of standard monomials of two types. \item $\RR$ and $\AA$ are $\Z^2$-graded by multidegree in the generators. Also, $\RR$ has the degree $\N$-gradation, whose components are isomorphic to the factors of the lower central series. \item We put the basis monomials of $\RR$ and $\AA$ at lattice points of the plane $\Z^2\subset\R^2$ using the multidegree. These monomials lie in regions of the plane bounded by pairs of logarithmic curves. \item The components of the $\Z^2$-grading of $\RR$ are at most one-dimensional; thus, the $\Z^2$-grading of $\RR$ is fine. \item $\GKdim\RR=\LGKdim\RR=1$; moreover, $\RR$ has linear growth and the growth function satisfies $\gamma_\RR(m)\approx 3m$ as $m\to\infty$. \item Moreover, $\RR$ is of finite width 4. Namely, let $\RR=\mathop{\oplus}\limits_{n=1}^\infty\RR_n$ be the $\N$-grading by degree in the generators, where $\RR_n\cong \RR^n/\RR^{n+1}$, $n\ge 1$, are the lower central series factors (see (ii)). Then the coefficients $(\dim\RR_n\mid n\ge 1)$ take values in $\{2,3,4\}$. This sequence is non-periodic. \item $\GKdim\AA=\LGKdim\AA=2$. \item Homogeneous elements of the grading $\RR=\RR_{\bar 0}\oplus\RR_{\bar 1}$ are $\ad$-nilpotent. \item $\RR$ is just infinite but not hereditary just infinite. \item $\RR$ again shows that an extension of~Theorem~\ref{TMarZel} (Martinez and Zelmanov~\cite{MaZe99}) to Lie superalgebras of characteristic zero is not valid. \end{enumerate} \end{Theorem} \begin{Remark} The first counterexample of a nil finely $\Z^3$-graded Lie superalgebra of slow polynomial growth in any characteristic was suggested before (the second Lie superalgebra $\QQ$ in~\cite{Pe16}). The virtue of the nil finely $\Z^2$-graded Lie superalgebra $\RR$ above is that it is of linear growth and, moreover, of finite width 4, and just infinite. 
Claim~(vi) is analogous to the fact that the Grigorchuk group is of finite width~\cite{Rozh96,BaGr00}. Thus, $\RR$ is a ``more appropriate'' analogue of the Grigorchuk group than the second Lie superalgebra $\QQ$ of~\cite{Pe16} or the Lie superalgebra considered recently in~\cite{PeSh18FracPJ}, both being of infinite width (because their Gelfand-Kirillov dimensions are bigger than 1). \end{Remark} Actually, the non-periodicity of the sequence above (claim~(vi)) was proved only in the case $p=2$~\cite{PeOtto}, but the proof of non-periodicity for other characteristics is similar. \medskip Now we construct the following Jordan superalgebra and describe its properties. We use the Lie superalgebra $\RR$ described above and take its Jordan double $\JJ=\Jor(\RR)$. \begin{Theorem}\label{TJOtto} Let $\ch K\ne 2$ and let $\RR=\Lie(v_0,v_1)$ be the Lie superalgebra of Theorem~\ref{TOtto}. Consider the Jordan superalgebra $\JJ=\Jor(\RR)$ and its subalgebra without unit $\JJ^o$. They have the following properties. \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \item $\JJ$ is $\Z^3$-graded by multidegree in $\bar X=\{v_0,v_1,\bar 1\}$. \item We put monomials of $\JJ$ at lattice points of the plane $\Z^2\subset\R^2$ using the partial multidegree in $\{v_0,v_1\}$. These monomials are bounded by a pair of logarithmic curves. \item The components of the $\Z^3$-grading of $\JJ$ are at most one-dimensional. \item $\GKdim \JJ=\LGKdim\JJ=1$; moreover, $\JJ$ is of linear growth and $\gamma_\JJ(\bar X,m)\approx 2m$ as $m\to\infty$. \item Let $\JJ=\mathop{\oplus}\limits_{n=0}^\infty\JJ_n$ be the $\NO$-grading by degree in $\bar X$. Then $\JJ$ is of finite width 4, and the coefficients $(\dim\JJ_n\mid n\ge 1)$ take values in $\{0,2,3,4\}$, where the trivial components are $\JJ_{3m}=0$, $m\ge 1$. This sequence is non-periodic. \item $\JJ$ is just infinite but not hereditary just infinite. \item $(a^2)^2=0$ for any $a\in\JJ^o$. 
\item Let $a\in\JJ^o$ be homogeneous, $a\in\JJ_{n,m,k}$ with $(n,m,k)\in\Z^3$; then $a^2a=aa^2=0$. Thus, $\JJ^o$ is nil $\Z^3$-graded. \item $\JJ$ again shows that an extension of~Theorem~\ref{TZelmanov} of Zelmanov~\cite{Zelmanov} to Jordan superalgebras of characteristic not 2 is not valid. \end{enumerate} \end{Theorem} \begin{proof} Almost all statements follow from the properties of $\RR$ described in Theorem~\ref{TOtto} by applying Lemma~\ref{Ljust-inf-Jor}, Lemma~\ref{LJor}, and Corollary~\ref{Ccomp}. The fact that $\JJ$ is not hereditary just infinite is proved as the same property of the second Jordan superalgebra $\KK$ in~\cite[Theorem~13.4]{PeSh18FracPJ}. Let $a\in \JJ_{n,m,k}$; recall that this component is at most one-dimensional. If $a\in \RR$ then $a^2=0$. Let $a\in \bar \RR$; then the square is again zero except for the pivot elements, namely, if $a=\bar v_n$ then $a^2=\bar v_n\bullet \bar v_n=\{v_n,v_n\}=x_{n+1}v_{n+2}\in \RR$ (see~\cite{PeOtto}) and $a^2a=aa^2=0$ by~\eqref{defJor}. \end{proof} \begin{Remark} A similar example of a just-infinite nil finely $\Z^4$-graded Jordan superalgebra of slow polynomial growth was suggested before, see the second Jordan superalgebra $\KK$ in~\cite{PeSh18FracPJ}. But the present example is a more ``appropriate analogue'' of the Grigorchuk group, because the present Jordan superalgebra $\JJ$ is of linear growth and, moreover, of finite width 4. This property resembles the finite width of the Grigorchuk group. \end{Remark} \begin{Remark} The example $\JJ$ above shows that just infinite $\Z$-graded Jordan superalgebras of finite width can have a complicated structure, in contrast with the classification of such simple algebras over an algebraically closed field of characteristic zero~\cite{KacMarZel01}. \end{Remark} \section{On self-similarity of superalgebras}\label{Sself-similar} \subsection{Self-similarity of Lie superalgebras} We say that an algebra is {\it fractal} provided that it contains infinitely many copies of itself. 
In this section we discuss the notion of self-similarity for our superalgebras. The notion of self-similarity plays an important role in group theory~\cite{Grigorchuk00horizons,Nekr05}. The Fibonacci Lie algebra introduced by the first author is ``self-similar''~\cite{PeSh09}, but not in terms of the definition of self-similarity given by Bartholdi~\cite{Bartholdi15}. Namely, a Lie algebra $L$ is called {\it self-similar} if it affords a homomorphism~\cite{Bartholdi15}: \begin{equation}\label{selfQ} \psi:L\rightarrow\Der A\rightthreetimes (A\otimes L), \end{equation} where $A$ is a commutative algebra, $\Der A$ is the Lie algebra of its derivations, and the product on the right-hand side is defined via the natural action of $\Der A$ on $A$. The first author constructed a family of two-generated restricted Lie algebras with a nil $p$-mapping determined by two infinite sequences; if these sequences are periodic, we get self-similar restricted Lie algebras~\cite{Pe17}. Recently, self-similar Lie algebras were studied in~\cite{FutKochSid}. This definition easily extends to Lie superalgebras by setting $A$ to be a supercommutative associative superalgebra and $\Der A$ the Lie superalgebra of its superderivations. We have two original examples of $\ad$-nil self-similar Lie superalgebras of slow polynomial growth over an arbitrary field~\cite{Pe16}. Recall the construction of the second Lie superalgebra of~\cite{Pe16}. Let $\ch K\ne 2$ and let $\Lambda=\Lambda[x_i,y_i,z_i| i\ge 0]$ be the Grassmann superalgebra; denote the respective partial superderivatives by $\{\partial_{x_i},\partial_{y_i},\partial_{z_i}| i\ge 0\}$. 
Define series of elements, called the {\it pivot elements}, of the Lie superalgebra of superderivations $\Der\Lambda$: \begin{equation}\label{aibici0} \begin{split} a_i &= \partial_{x_i} + y_ix_i(\partial_{x_{i+1}}+ y_{i+1}x_{i+1}(\partial_{x_{i+2}} +y_{i+2}x_{i+2}(\partial_{x_{i+3}}+ \cdots ))),\\ b_i &= \partial_{y_i} + z_iy_i(\partial_{y_{i+1}}+ z_{i+1}y_{i+1}(\partial_{y_{i+2}} +z_{i+2}y_{i+2}(\partial_{y_{i+3}}+ \cdots ))),\\ c_i &= \partial_{z_i} + x_iz_i(\partial_{z_{i+1}}+ x_{i+1}z_{i+1}(\partial_{z_{i+2}} +x_{i+2}z_{i+2}(\partial_{z_{i+3}}+ \cdots ))), \end{split} \qquad i\ge 0. \end{equation} Define the {\em shift} mapping $\tau:\Lambda\to \Lambda$ and its natural extensions to the elements defined above: \begin{align*} \begin{split} \tau(x_i)&=x_{i+1}, \quad\ \tau(y_i)=y_{i+1},\quad \quad \tau(z_i)=z_{i+1},\\ \tau(\partial_{x_i})&=\partial_{x_{i+1}}, \quad \tau(\partial_{y_i})=\partial_{y_{i+1}},\quad \tau(\partial_{z_i})=\partial_{z_{i+1}},\\ \tau(a_i)&=a_{i+1}, \quad \ \tau(b_i)=b_{i+1},\quad \quad \tau(c_i)=c_{i+1},\\ \end{split} \qquad\quad i\ge 0. \end{align*} Define the Lie superalgebra $\QQ=\Lie(a_0,b_0,c_0)\subset \Der\Lambda$ and its associative hull $\AA=\Alg(a_0,b_0,c_0)\subset \End \Lambda$. For more details see the original paper~\cite{Pe16}. We observe only the following. \begin{Lemma} The Lie superalgebra $\QQ=\Lie(a_0,b_0,c_0)$ defined above is self-similar with a natural self-similarity embedding: $$ \psi:\QQ\hookrightarrow \langle \partial_{x_0},\partial_{y_0},\partial_{z_0}\rangle_K\rightthreetimes \Lambda[x_0,y_0,z_0]\otimes \tau(\QQ). $$ \end{Lemma} \begin{proof} By~\eqref{aibici0} we obtain a recursive presentation: \begin{equation}\label{recursiveQ} \begin{split} a_0 &= \partial_{x_0} + y_0x_0 \tau(a_0),\\ b_0 &= \partial_{y_0} + z_0y_0 \tau(b_0),\\ c_0 &= \partial_{z_0} + x_0z_0 \tau(c_0). \end{split} \end{equation} Observe that it is sufficient to find a desired presentation in the form~\eqref{selfQ} for the generators. 
Indeed, we extend presentation~\eqref{recursiveQ} to an arbitrary $a\in\QQ=\Lie(a_0,b_0,c_0)$ using the fact that $\tau:\QQ\to\QQ$ is a shift monomorphism. \end{proof} \begin{Conjecture} Let $\RR=\Lie(v_0,v_1)$ be the Lie superalgebra, where $v_0,v_1$ are defined by~\eqref{pivot} (i.e. the example of~\cite{PeOtto}). Then $\RR$ is not self-similar. We conjecture that the Lie superalgebra of~\cite{PeSh18FracPJ} is not self-similar either. \end{Conjecture} Indeed, recall that $\RR$ is fractal. But a self-similarity embedding for $\RR$ would have to look like: $$\psi:\RR\hookrightarrow \langle \dd_0\rangle_K\rightthreetimes \Lambda(x_0)\otimes \tau(\RR).$$ Recall that $\RR$ is generated by $\{v_0,v_1\}$, where the second generator is of the required form $v_1=\tau(v_0)\in \tau(\RR)$. By~\eqref{pivot}, one has $v_0=\dd_0+x_0\cdot x_1v_2$, where $x_1v_2\notin\tau(\RR)$. Indeed, $x_1v_2=\tau(x_0v_1)$, where $x_0v_1\notin \RR$ (see~\cite{PeOtto}). It seems that we cannot split our variables in any similar way. \subsection{Self-similarity of associative superalgebras} Similarly to the case of associative algebras~\cite{Bartholdi06,Sidki09,PeSh13ass}, we say that an associative superalgebra $A$ is {\it self-similar} provided that there exists a {\em self-similarity embedding}: \begin{equation}\label{selfA} \psi:A \hookrightarrow \M(n|m)\otimes A, \end{equation} for some matrix superalgebra $\M(n|m)$, the tensor product being that of associative superalgebras. \begin{Lemma} The associative superalgebra $\AA=\Alg(a_0,b_0,c_0)$ defined above~\eqref{aibici0} is self-similar with a self-similarity embedding: $$ \psi:\AA\hookrightarrow \M(4|4) \otimes \AA. $$ \end{Lemma} \begin{proof} As above, we use presentation~\eqref{recursiveQ} and obtain the desired presentations for all elements of $\AA$. Observe that $\Alg(\partial_{x_0},x_0)\cong \M(1|1)$, where $\M(1|1)$ is the superalgebra of $2\times 2$ matrices whose even part consists of the diagonal matrices and whose odd part consists of the off-diagonal matrices. 
Next, we use that $\Alg(\partial_{x_0},x_0,\partial_{y_0},y_0,\partial_{z_0},z_0)\cong \M(1|1)\otimes \M(1|1)\otimes \M(1|1)\cong \M(4|4)$. \end{proof} \begin{Conjecture} Consider the associative superalgebra $A=\Alg(v_0,v_1)$ corresponding to the Lie superalgebra $\RR=\Lie(v_0,v_1)$ considered above, where $v_0,v_1$ are defined by~\eqref{pivot} (i.e. the example of~\cite{PeOtto}). Then $A$ is not self-similar. We conjecture that the respective associative superalgebra of~\cite{PeSh18FracPJ} is not self-similar either. \end{Conjecture} \subsection{Self-similarity of Poisson superalgebras} Let us call a Poisson superalgebra $P$ {\it self-similar} if there exist a Poisson superalgebra $H$ and an embedding: \begin{equation}\label{selfP} \psi:P\hookrightarrow H\otimes P, \end{equation} the tensor product being that of Poisson superalgebras. Following the constructions of~\cite{PeSh18FracPJ}, let us suggest a Poisson superalgebra related to $\QQ$ above (this was not done in the original paper~\cite{Pe16}; we introduced the Poisson and Jordan superalgebras in the case of another example in~\cite{PeSh18FracPJ}). Consider the Grassmann superalgebra $H=\Lambda[x_i,y_i,z_i,X_i,Y_i,Z_i| i\ge 0]$ and supply it with the Poisson bracket whose nontrivial products are $\{X_i,x_i\}=1$, $\{Y_i,y_i\}=1$, $\{Z_i,z_i\}=1$ for all $i\ge 0$. Thus, $H$ is turned into a Poisson superalgebra. We formally substitute capital letters for the respective derivatives in~\eqref{aibici0}: \begin{equation}\label{aibiciP} \begin{split} A_i &= X_i + y_ix_i(X_{i+1}+ y_{i+1}x_{i+1}(X_{i+2} +y_{i+2}x_{i+2}(X_{i+3}+ \cdots ))),\\ B_i &= Y_i\, + z_iy_i(Y_{i+1}\,+ z_{i+1}y_{i+1}(Y_{i+2}\, +z_{i+2}y_{i+2}(Y_{i+3}\,+ \cdots ))),\\ C_i &= Z_i + x_iz_i(Z_{i+1}+ x_{i+1}z_{i+1}(Z_{i+2} +x_{i+2}z_{i+2}(Z_{i+3}+ \cdots ))), \end{split} \qquad i\ge 0. \end{equation} We define the shift endomorphism $\tau:H\to H$ as above. 
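As a quick sanity check (our own computation, not taken from~\cite{Pe16} or~\cite{PeSh18FracPJ}), the substitution is consistent with the action of the superderivatives in~\eqref{aibici0}: since the only defining bracket involving $x_0$ is $\{X_0,x_0\}=1$, the super Leibniz rule gives

```latex
% Sanity check (ours): only $X_0$ brackets nontrivially with $x_0$;
% every other factor occurring in $A_0$ brackets trivially with $x_0$.
\{A_0,x_0\}=\{X_0,x_0\}+\{y_0x_0\,\tau(A_0),\,x_0\}=1+0=1,
```

which mirrors the action $a_0(x_0)=\partial_{x_0}(x_0)=1$ of the corresponding pivot element.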
Actually, we need to consider our elements in a completion of the Poisson superalgebra $H$; see~\cite{PeSh18FracPJ}. Now we define the Poisson superalgebra $\PP=\Poisson(A_0,B_0,C_0)$. \begin{Lemma} The Poisson superalgebra $\PP=\Poisson(A_0,B_0,C_0)$ is self-similar with the self-similarity embedding: $$ \PP\hookrightarrow H_3\otimes \tau(\PP).$$ \end{Lemma} \begin{proof} By~\eqref{aibiciP} we obtain a recursive presentation: \begin{equation}\label{recursiveP} \begin{split} A_0 &= X_0 + y_0x_0 \tau(A_0),\\ B_0 &= Y_0 + z_0y_0 \tau(B_0),\\ C_0 &= Z_0 + x_0z_0 \tau(C_0). \end{split} \end{equation} Also, we observe that $\Poisson(x_0,y_0,z_0,X_0,Y_0,Z_0)\cong H_3$; see~\eqref{poissonHn}. \end{proof} \begin{Remark} Consider the ``small'' Poisson superalgebra related to $\QQ$ defined by~\eqref{defP}, namely, $P(\QQ)=\langle 1\rangle \oplus \QQ$. This algebra is fractal. But it seems that it is not self-similar according to our definition; namely, there is a problem with defining a homomorphism for the associative product. \end{Remark} \subsection{Self-similarity and wreath products of Jordan superalgebras} We have the notion of self-similarity for Lie superalgebras~\eqref{selfQ} (a modification of that of Bartholdi~\cite{Bartholdi15}), for associative superalgebras~\eqref{selfA}, and for Poisson superalgebras~\eqref{selfP}. But we lack a similar notion of self-similarity for Jordan superalgebras. We start with an observation. Let $P$ be a Poisson superalgebra and $J=\Kan(P)=P\oplus \bar P$ its Kantor double, which is a Jordan superalgebra. Define a mapping $$D:J\to J,\qquad D(a)=0,\quad a\in P,\qquad D(\bar a)=(-1)^{|a|} a,\quad \bar a\in \bar P.$$ One checks that $D$ is an odd superderivation of $J$ and $D^2=0$. Assume that we have a self-similar Poisson superalgebra $P$ with an embedding~\eqref{selfP} $\psi:P\hookrightarrow H\otimes P_1$, where $H$ is a Poisson superalgebra and $P_1\cong P$. Denote the Kantor double $J_1=\Kan(P_1)$. 
We apply the Kantor double to both algebras in the embedding $\psi$ above: \begin{align*} J&=\Kan(P)=P\oplus \bar P \hookrightarrow\Kan(H\otimes P_1)= H\otimes P_1\oplus \overline {H\otimes P_1}\\ &\cong H\otimes (P_1\oplus \bar P_1)\cong H\otimes\Kan(P_1)=H\otimes J_1, \end{align*} where the isomorphisms in the last line are those of vector spaces. Let us express the product $\bullet$ of the last space, which is identified with the Jordan superalgebra $\Kan(H\otimes P_1)$, in terms of the Jordan product $\circ$ of $J_1=\Kan(P_1)$ and the two products $(\ \cdot\ , \{\ , \ \} )$ of the Poisson superalgebra $H$. Recall that the product $\bullet$ on $\Kan(H\otimes P_1)$ was defined by Kantor's construction. Take homogeneous $x,y\in H$ and $a,b\in P_1$. Consider four cases, where the unspecified products are the associative supercommutative products of the Poisson superalgebra $H\otimes P_1$: \begin{align*} x a\bullet yb&= xa yb=(-1)^{|a||y|}(xy) (ab)=(-1)^{|a||y|}(x\cdot y)(a\circ b);\\ x a\bullet y\bar b&= xay\bar b=(-1)^{|a||y|}(xy) (a\bar b)=(-1)^{|a||y|}(x\cdot y)(a\circ \bar b);\\ x \bar a\bullet yb&= (-1)^{|yb|}xay \bar b=(-1)^{|y|+|b|+|a||y|} (xy) (a\bar b)=(-1)^{|y|+|a||y|}(x\cdot y)(\bar a\circ b)\\ &=(-1)^{|\bar a||y|}(x\cdot y)(\bar a\circ b);\\ x \bar a\bullet y\bar b&=(-1)^{|yb|} \{xa,yb\}=(-1)^{|y|+|b|+|a||y|}(xy\{a,b\} +\{x,y\}ab)\\ &=(-1)^{|y|+|b|+|a||y|}\left ((-1)^{|b|} (x\cdot y) (\bar a\circ \bar b)+ (-1)^{|a|+|b|}\{x,y\} (D(\bar a)\circ D(\bar b))\right )\\ &=(-1)^{|\bar a||y|} \left ((x\cdot y)(\bar a\circ \bar b)+(-1)^{|\bar a|+1} \{x,y\} (D(\bar a)\circ D(\bar b))\right). \end{align*} Now let $f,g\in J_1=\Kan(P_1)$ belong to either $P_1$ or $\bar P_1$, and let $x,y\in H$. Combining the four cases above, we get \begin{equation}\label{new_prod} (x\otimes f)\bullet (y\otimes g) =(-1)^{|f||y|} \left (x\cdot y\otimes f\circ g+(-1)^{|f|+1} \{x,y\}\otimes D(f)\circ D(g) \right). \end{equation} By these arguments we have the following statement. 
\begin{Lemma} Let $P$ be a self-similar Poisson superalgebra, i.e. there exist a Poisson superalgebra $H$, whose products are $(\ \cdot\ , \{\ , \ \} )$, and a self-similarity embedding~\eqref{selfP}. Then the Kantor double $J=\Kan(P)$, which is a Jordan superalgebra with product $\circ$, enjoys the self-similarity embedding \begin{equation*} J\hookrightarrow H\otimes J, \end{equation*} where the right-hand side is a Jordan superalgebra whose product $\bullet$ satisfies~\eqref{new_prod}, and the odd superderivation $D:J\to J$ is as defined above. \end{Lemma} \begin{Corollary} Consider the Poisson superalgebra $\PP=\Poisson(A_0,B_0,C_0)$ related to $\QQ$ as above. Let $\JJ=\Kan(\PP)$ be its Kantor double. Then the Jordan superalgebra $\JJ$ has a self-similarity embedding \begin{equation*} \JJ\hookrightarrow H_3\otimes \JJ. \end{equation*} \end{Corollary} The notion of the {\it wreath product} plays an important role in group theory~\cite{KalKra51,Nekr05}. An analogous notion of a {\it wreath product} was defined for two arbitrary Lie algebras~\cite{PeRaSh}; see also~\cite{Bartholdi15,FutKochSid}. Similarly, the notion of the wreath product of associative algebras has many applications; see a recent paper~\cite{AlAlJaZe}. The observations above allow us to suggest that there exists a notion of a {\it wreath product} of a Jordan superalgebra with a Poisson superalgebra, as follows. \begin{Conjecture} Let $J_1$ be a Jordan superalgebra with product $\circ$ and let $D:J_1\to J_1$ be an odd superderivation such that $D^2=0$. Let $H$ be a Poisson superalgebra with products $(\ \cdot,\ \{\ ,\ \} )$. Supply $J=H\otimes J_1$ with the product~\eqref{new_prod}. Is it true that $J$ is a Jordan superalgebra? In this case, $J$ should be called the {\em wreath product} of $J_1$ with $H$. \end{Conjecture} On the other hand, consider a ``small'' Jordan superalgebra related to the Lie superalgebra $\QQ$ above. 
Namely, by~\eqref{defJor} set $\KK=\Jor(\QQ)=\langle 1\rangle \oplus \QQ\oplus \langle \bar 1\rangle \oplus \bar \QQ$. Then it seems that $\KK$ does not have a self-similarity embedding. On the other hand, $\KK$ is fractal.
2,877,628,089,880
arxiv
\section{Introduction} Solid state defect quantum emitters are at the heart of quantum technologies. Quantum information technologies~\cite{Kane1998,Togan2010, Atature2018} and nanoscale sensor applications~\cite{RevModPhys.89.035002, Casola2018, PhysRevApplied.6.034001} require defects with long coherence times. Further, a quantum emitter with good optical properties allows for convenient optically-assisted spin state initialization and readout~\cite{Robledo2011,PhysRevLett.113.263602}. However, inhomogeneities in the host crystal can lead to degradation of optical properties. A significant contribution is usually spectral diffusion and inhomogeneous broadening caused by the Stark shift effect. This spectral diffusion refers to the broadening of the integrated photoluminenscence zero-phonon-line (ZPL) after repetitive measurements on a single center as stray electric fields contribute fluctuating Stark shift to the transition. The mechanism is that optical excitation induces charge fluctuations of parasitic defects, which influence the optical properties of the investigated system~\cite{PhysRevB.61.R5086, PhysRevLett.107.266403}. Quantum emitters for quantum communication technology should emit predominantly coherent photons at large rate. However, especially for solid state emitters, photons are emitted either between purely electronic states (zero-phonon emission) or between states with phonons being excited. Photons of the latter emission are not coherent. Therefore, the ratio of the emitted photons with zero-phonon contribution to the total emission, i.e., the Debye-Waller factor of the quantum emitter also categorizes the quality of the quantum emitter. 
Deficiencies in the spin coherence time can in principle be circumvented by dynamical decoupling schemes, which extend the coherence times further at the expense of longer measurement times~\cite{Awschalom2018}; this technique is indispensable in host materials where no spin-free isotopes are available. The intensity of coherent emission of defects with low Debye-Waller factors can be significantly improved by designing and building optical cavities around the quantum emitter, which is a technological challenge but feasible in principle. The criterion of minimal spectral diffusion, on the other hand, is inherently bound to a defect qubit property that cannot be efficiently and systematically circumvented either by means of materials design or by quantum optical control of the quantum bits. As the elimination of parasitic defects near the quantum bits is too great a challenge for materials scientists, a simple solution to the problem of spectral diffusion is to utilize quantum defects with inversion symmetry that \emph{ab ovo} do not couple to static electric stray fields~\cite{PhysRevLett.113.113602}. This condition is intimately connected to the crystal structure of the host material of the quantum emitter. This symmetry requirement excludes compound semiconductor or insulator crystals that \emph{per se} do not host defects with inversion symmetry, although advanced production and processing technologies exist for those platforms. However, a plethora of compound semiconductors or insulators are potential candidates for hosting quantum emitters; e.g., defects in silicon carbide (SiC) have favorable coherence times in naturally abundant hosts without any dynamical decoupling procedures, such as neutral divacancies~\cite{PhysRevLett.96.055501, Koehl2011, Falk2013, PhysRevX.7.021046} and negatively charged silicon vacancies~\cite{PhysRevB.66.235202, PhysRevLett.109.226402, Kraus2013, Kraus2014, PhysRevB.93.081207, Widmann2015}.
In this paper, we show that inversion symmetry is not a prerequisite for a spectrally stable defect quantum emitter. We demonstrate this principle by means of density functional perturbation theory calculations on the so-called V1 center, i.e., the negatively charged silicon vacancy at the hexagonal ($h$) site in the compound semiconductor 4H SiC. \section{Computational methods} We determined the coupling of the optical excitation to external electric fields by calculating the permanent dipole moments in the corresponding $^{4}A_{2}$ ground state and $^{4}A_{2}$ excited state of the negatively charged silicon vacancy (V1) $h$-site defect in 4H-SiC~\cite{Janzen2009, Ivady2017} and compared those to the corresponding $^{3}A_{2}$ ground state and $^{3}E$ excited state of the negatively charged nitrogen-vacancy center in diamond~\cite{Davies1976, Goss1996, Gali2008} using density functional theory (DFT). We also determined the radiative lifetime of the V1 center in 4H SiC. \subsection{Electronic structure calculation} We applied DFT for the electronic structure calculation and geometry relaxation, using the plane-wave Vienna Ab initio Simulation Package (\textsc{VASP})~\cite{VASP1,VASP2,VASP3,VASP4}. The core electrons were treated in the projector augmented-wave (PAW) formalism~\cite{paw}. The calculations were performed with a $420~\text{eV}$ plane wave cutoff energy, using a $\Gamma$-centered $2\times2\times2$ k-point mesh for the 4H SiC supercell and the $\Gamma$ point for the diamond supercell, respectively. We applied the spin-polarized PBE functional in these calculations~\cite{PBE}. The model of the V1 center in bulk 4H SiC was constructed using a 432-atom hexagonal supercell, whereas we used a 512-atom simple cubic supercell to model the nitrogen-vacancy (NV) center in diamond. The excited state electronic structure and geometry were calculated by constrained occupation of states, i.e., the $\Delta$SCF method~\cite{Gali2009}.
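For orientation, the settings above correspond to a \textsc{VASP} input of roughly the following shape. This is an illustrative sketch only: the tags are standard \textsc{VASP} INCAR tags, but the complete input and the $\Delta$SCF occupation arrays (set via \texttt{FERWE}/\texttt{FERDO}) used in this work are not reproduced here and depend on the supercell.

```text
# Illustrative INCAR sketch (ground-state run), not the authors' exact input
ENCUT  = 420        ! plane-wave cutoff (eV), as quoted in the text
ISPIN  = 2          ! spin-polarized calculation
GGA    = PE         ! PBE exchange-correlation functional
EDIFF  = 1E-6       ! electronic convergence criterion (assumed)
IBRION = 2 ; NSW = 100   ! ionic relaxation (assumed)
# Delta-SCF excited state: fix occupations with ISMEAR = -2 and set the
# promoted spin-minority electron explicitly via FERWE / FERDO
```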
\subsection{Permanent dipole moment calculation} We calculated the permanent dipole moments in the ground and excited state. The difference in the dipole moments is associated with the coupling parameter of the electric fields and the optical transition. To calculate the permanent dipole moments of the corresponding states, we used the \textsc{VASP} implementation of both the Born effective charge calculation using density functional perturbation theory~\cite{PhysRevB.73.045112} and the Berry phase theory of polarization~\cite{PhysRevB.47.1651, PhysRevB.48.4442, RevModPhys.66.899}. In a DFT calculation, one can define the change in macroscopic electronic polarization ($\mathbf{P}$) as an adiabatic change in the Kohn-Sham potential ($V_{\text{KS}}$) \begin{align}\label{response} \frac{\partial \mathbf{P}}{\partial \lambda}=&-\frac{ife\hbar}{\Omega m_{e}}\sum_{\mathbf{k}}\sum_{n=1}^{M}\sum_{m=M+1}^{\infty}\\\nonumber &\frac{\left<\psi_{\mathbf{k}n}^{(\lambda)}\right|\hat{\mathbf{p}}\left|\psi_{\mathbf{k}m}^{(\lambda)}\right>\left<\psi_{\mathbf{k}m}^{(\lambda)}\right|\frac{\partial V_{\text{KS}}}{\partial\lambda}\left|\psi_{\mathbf{k}n}^{(\lambda)}\right>} {\left(\epsilon_{\mathbf{k}n}^{(\lambda)}-\epsilon_{\mathbf{k}m}^{(\lambda)}\right)^{2}}+c.c.\text{,} \end{align} where $f$ is the occupation number, $e$ the elementary charge, $m_{e}$ the electron mass, $\Omega$ the cell volume, $M$ the number of occupied bands, and $\hat{\mathbf{p}}$ the momentum operator. The total change of polarization can be calculated by integrating over the adiabatic parameter $\lambda$.
In a periodic gauge, where the wavefunctions ($u_{\mathbf{k}}$) are cell periodic and periodic in the reciprocal space, the matrix elements in \eqref{response} can be expressed as \begin{equation}\label{pelement} \left<\psi_{\mathbf{k}n}^{(\lambda)}\right|\hat{\mathbf{p}}\left|\psi_{\mathbf{k}m}^{(\lambda)}\right>= \frac{m_{e}}{\hbar} \left<u_{\mathbf{k}n}^{(\lambda)}\right| \left[\nabla_{\mathbf{k}},H_{\mathbf{k}}\right] \left|u_{\mathbf{k}m}^{(\lambda)}\right>\text{,} \end{equation} \begin{equation}\label{Velement} \left<\psi_{\mathbf{k}n}^{(\lambda)}\right|\frac{\partial V_{\text{KS}}}{\partial\lambda}\left|\psi_{\mathbf{k}m}^{(\lambda)}\right>= \left<u_{\mathbf{k}n}^{(\lambda)}\right| \left[\frac{\partial}{\partial\lambda},H_{\mathbf{k}}\right] \left|u_{\mathbf{k}m}^{(\lambda)}\right>\text{,} \end{equation} where $H_{\mathbf{k}}=\frac{1}{2m_{e}}(-i\hbar\mathbf{\nabla}+\hbar \mathbf{k})^2+V_{\text{KS}}$ is the periodic Hamiltonian, and only $V_{\text{KS}}$ depends on $\lambda$. Substituting \eqref{pelement} and \eqref{Velement} into \eqref{response}, the only contribution is $\left<u_{\mathbf{k}n}^{(\lambda)}\right|\nabla_{\mathbf{k}}\left|u_{\mathbf{k}n}^{(\lambda)}\right>$, as $\left<u_{\mathbf{k}n}^{(\lambda)}\right|\frac{\partial}{\partial\lambda}\left|u_{\mathbf{k}n}^{(\lambda)}\right>$ is periodic in the reciprocal space.
The permanent dipole moment takes a form similar to the Berry phase expression \begin{equation} \mathbf{P}=\frac{ife}{8\pi^{3}}\sum_{n=1}^{M}\int_{\text{BZ}}d\mathbf{k}\left<u_{\mathbf{k}n}\right|\nabla_{\mathbf{k}}\left|u_{\mathbf{k}n}\right>\text{.} \end{equation} Using density functional perturbation theory (DFPT), $\nabla_{\mathbf{k}}\left|u_{\mathbf{k}n}\right>$ can be calculated from the Sternheimer equations with similar self-consistent iterations as in the self-consistent field DFT \begin{equation} \left(H_{\mathbf{k}}-\epsilon_{\mathbf{k}n}\right)\nabla_{\mathbf{k}}\left|u_{\mathbf{k}n}\right>=-\frac{\partial \left(H_{\mathbf{k}}-\epsilon_{\mathbf{k}n}\right)}{\partial \mathbf{k}}\left|u_{\mathbf{k}n}\right>\text{.} \end{equation} \subsection{Radiative lifetime calculation}\label{lifetime} We determined the radiative transition rate between the ground and excited $^{4}A_{2}$ states by calculating the energy dependent dielectric function $\epsilon_{r}(E)$. The spontaneous transition rate is given by the Einstein coefficient \begin{equation} A=\frac{n\omega^{3}\left|\mu\right|^2}{3\pi\epsilon_{0}\hbar c^{3}}\text{,} \end{equation} where $n$ is the refractive index, $\hbar\omega$ is the transition energy, $\mu$ is the optical transition dipole moment, $\epsilon_{0}$ is the vacuum permittivity, $c$ is the speed of light. $\mu$ is proportional to the integrated imaginary dielectric function ($I$) of the given transition \begin{equation} \left|\mu\right|^2=\frac{\epsilon_{0} V}{\pi}\int \operatorname{Im}\epsilon_{r}(E) dE=\frac{\epsilon_{0} VI}{\pi}\text{,} \end{equation} where $V$ is the volume of the supercell. Thus, the radiative lifetime can be given by \begin{equation} \label{eq:tau} \tau_{\text{r}}=\frac{3\pi^{2}\hbar c^{3}}{n\omega^{3}VI}\text{.} \end{equation} In our particular implementation, we applied the following procedure and parameters. We fit a Lorentzian function to the first peak of $\operatorname{Im}\epsilon_{r}(E)$. 
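Equation~\eqref{eq:tau} reduces to elementary arithmetic once the fit parameters are known. As a numerical sanity check (an illustrative sketch, with the fit results and material parameters quoted in the text hard-coded), the following script reproduces the $\sim$12~ns radiative lifetime reported for the V1 center:

```python
import math

# Physical constants in SI units
HBAR = 1.054571817e-34   # reduced Planck constant (J s)
C = 2.99792458e8         # speed of light (m/s)
EV = 1.602176634e-19     # electronvolt (J)

def radiative_lifetime(I_eV, E_eV, n, V_nm3):
    """tau_r = 3 pi^2 hbar c^3 / (n omega^3 V I), cf. Eq. (tau)."""
    omega = E_eV * EV / HBAR   # transition angular frequency (rad/s)
    return (3.0 * math.pi**2 * HBAR * C**3
            / (n * omega**3 * (V_nm3 * 1e-27) * (I_eV * EV)))

# Fit results and material parameters quoted in the text for V1 in 4H SiC
tau = radiative_lifetime(I_eV=0.389, E_eV=1.387, n=2.6473, V_nm3=4.5346)
print(f"tau_r = {tau * 1e9:.1f} ns")  # ~12 ns
```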
The results are $I=0.389~\mathrm{eV}$ and $\hbar\omega=1.387~\mathrm{eV}$. Using the refractive index $n=2.6473$ of 4H SiC and the cell volume of $V=4.5346~\mathrm{nm}^{3}$, the radiative lifetime can be calculated using Eq.~\eqref{eq:tau}. \section{Results} \subsection{Microscopic model of an ideal defect quantum emitter} An ideal quantum emitter should show no spectral diffusion. This property can be achieved if the electric dipole moment remains either zero or unchanged during the optical excitation process between the ground ($\left|g\right>$) and excited ($\left|e\right>$) states \begin{equation}\label{eq:electric} \left|\left<e\right|\vec{r}\left|e\right>-\left<g\right|\vec{r}\left|g\right>\right|^2=0 \text{.} \end{equation} On the other hand, the optical transition rate should be large \begin{equation}\label{eq:transition} \left|\left<e\right|\vec{r}\left|g\right>\right|^2 > 10~\mathrm{Debye}^{2} \text{.} \end{equation} The matrix elements in Eq.~\eqref{eq:transition} are generally nonzero according to the selection rules of quantum mechanics. As vector operators have $P=-1$ parity, the wavefunctions in the integral must have different parity in order to result in a nonzero scalar. For color centers with inversion symmetry, Eq.~\eqref{eq:electric} is satisfied as the individual integrals are zero, since the wavefunctions have either gerade (even) or ungerade (odd) parity. The high optical transition rate can be achieved by a large overlap between a gerade orbital and an ungerade orbital in these optical centers. The main point of the present paper is the following statement: inversion symmetry is not an ultimate criterion for the simultaneous fulfillment of these requirements, as Eq.~\eqref{eq:electric} may be satisfied without the restriction that each individual term in Eq.~\eqref{eq:electric} vanishes. We show below that other types of optical centers may satisfy Eq.~\eqref{eq:electric}.
Defects usually have axial symmetry in compound semiconductors. In this case, Eq.~\eqref{eq:electric} can be satisfied for identical charge densities of the ground and excited state. In systems with axial symmetry, Eq.~\eqref{eq:transition} can be separated into two parts, \begin{equation} \left|\left<e\right|\vec{r}_\perp\left|g\right>\right|^2+\left|\left<e\right|\vec{r}_\parallel\left|g\right>\right|^2\text{,} \end{equation} where the first contribution is typically zero for an orbitally non-degenerate ground state. The second contribution can be maximized by a large overlap of the wavefunctions (already satisfied by the same-density requirement) if they are well separated in their signs along the symmetry axis, i.e., by an alternating phase of the wavefunctions. This condition restricts the optical polarization to be parallel to the symmetry axis. A possible realization of wavefunctions that fulfill the requirements detailed above is depicted in Fig.~\ref{fig:ideal} for a defect with axial ($C_{3v}$) symmetry. \begin{figure} \includegraphics{ideal.pdf} \caption{Wavefunction of an ideal defect quantum emitter with axial symmetry in a binary semiconductor. The different types of atoms are colored as gray and yellow balls, whereas the positive (negative) isovalues of the corresponding wavefunctions participating in the optical transition are depicted as red (blue) lobes.} \label{fig:ideal} \end{figure} The other requirement for an ideal quantum emitter is a high Debye-Waller factor, in order to achieve emission of coherent photons at a high rate. This can be fulfilled for those defects that have negligible geometry relaxation upon optical excitation. \subsection{V1 center as a nearly ideal defect quantum emitter} In this paper, we identify the $h$-site silicon-vacancy defect (V1 center) in 4H SiC as a nearly ideal quantum emitter with no inversion symmetry.
A long coherence time has already been reported for this defect, making it a promising candidate for spin-based quantum applications~\cite{Simin2017, PRA2018}. In order to demonstrate that this defect also possesses all the optical requirements for quantum communication, we performed DFT calculations and compared this defect to the negatively charged nitrogen-vacancy (NV) center in diamond, which is known to exhibit a few tenths of GHz spectral diffusion even in high purity diamond samples~\cite{Siyushev:PRL2013}. The results of the Berry phase evaluation for the macroscopic dipole moment calculation for the V1 center in 4H SiC and the NV center in diamond are shown in Table~\ref{tab:Berry_h_PBE}. We find that the change in the permanent dipole moments upon optical excitation for the V1 center in 4H SiC is nearly 20 times smaller than that for the NV center in diamond. This translates to weak coupling of optical excitation to stray electric fields for isolated Si-vacancies in 4H SiC, in good agreement with very recent experimental data~\cite{arxiv:1810.10296, arxiv:1811.01293}. \begin{table} \caption{Macroscopic electric dipole moment of the negatively charged $h$-site Si-vacancy defect (V1) in 4H SiC and the nitrogen-vacancy (NV) center in diamond as calculated within the Berry phase approximation. ex(gr) denotes the excited state electron configuration calculated with fixed ground state geometry.} \begin{ruledtabular} \begin{tabular}{lllll} center & transition & $\Delta p_{\text{ion}}$ ($e$\AA) & $\Delta p_{\text{el}}$ ($e$\AA) & $\Delta p_{\text{tot}}$ ($e$\AA)\\\hline V1 & gr $\rightarrow$ ex & $0$ & $0.044$ & $0.044$ \\ V1 & gr $\rightarrow$ ex(gr) & $0$ & $0.039$ & $0.039$ \\ NV & gr $\rightarrow$ ex & $0.061$ & $0.842$ & $0.903$\\ \end{tabular} \end{ruledtabular} \label{tab:Berry_h_PBE} \end{table} The geometry relaxation in the excitation transition of the V1 center is depicted in Fig.~\ref{fig:relax}, which shows that the ions move outward going from the electronic ground state to the excited state.
This leads to a smaller-than-unity Debye-Waller factor in the luminescence spectrum. It is experimentally verified~\cite{PRA2018} to be $\sim$0.5, which is about an order of magnitude larger than that of the NV center in diamond~\cite{Alkauskas2014, Thiering2017, Atature2018}. The second row in Table~\ref{tab:Berry_h_PBE} shows the change in the dipole moments without the relaxation effect, at fixed ground state geometry. We conclude that the outward relaxation of the ions upon optical excitation has little effect on the final difference in the dipole moments, and the change is associated with the nature of the ground state and excited state wavefunctions. \begin{figure} \centering \includegraphics{relax.pdf} \caption{Geometry relaxation of the V1 center upon optical excitation. Small gray, large yellow, and glass balls represent carbon atoms, silicon atoms, and the vacant site, respectively. The movements of the first-neighbor C atoms are shown by arrows with the corresponding distances.} \label{fig:relax} \end{figure} \subsection{Origin of permanent dipole moments and strong emission from V1 center in 4H SiC} The results can be interpreted on the microscopic level by considering the electron density of the defect states in the ground state. The corresponding in-gap defect levels and labels are shown in Fig.~\ref{fig:levels}. This approximation neglects relaxation effects of the ions and the effect of the delocalized electron bath. We trace the origin of the total change of the electric dipole moment by studying the difference of the electron densities of those in-gap Kohn-Sham states whose occupation changes during the optical transition (the $u$ and $v$ levels of V1 and the $v$ and $e$ levels of NV in Fig.~\ref{fig:levels}). \begin{figure} \includegraphics{levels.pdf} \caption{Schematic visualization of the ground state Kohn-Sham levels of the NV and V1 centers in the band gap, respectively. The $v$ and $u$ states have non-degenerate $a_1$ symmetry, whereas the $e$ state is doubly degenerate.
The occupation of these levels shows the ground state electronic configuration. In the spin minority channel (spin down), the electron is promoted to the $e$ level in the NV center and to the $v$ level in the V1 center, respectively, in the excited state electronic configuration.} \label{fig:levels} \end{figure} To visualize this scenario, we plot the electron density in the minority spin channel of the localized Kohn-Sham orbitals in the ground state (Fig.~\ref{fig:densities}), which represents the ground state and excited state in the photoexcitation process. The NV center in diamond shows a rather large change in the localization and magnitude of the electron densities going from the excited state to the ground state, which leads to a considerable change of the electric dipole moment. However, the V1 center manifests only a small change in the localization and magnitude of the electron densities, which suggests a small change of the electric dipole moment. \begin{figure*} \includegraphics{densities.pdf} \caption{Electron density isosurfaces ($0.025$~\AA$^{-3}$, in purple color) of the ground state in the minority spin channel for the localized Kohn-Sham states of the NV center in diamond (a, b) and the V1 center in 4H SiC (d, e). Their positive (negative) difference is shown in red (blue) colors in (c, f). Gray, yellow, and blue balls represent carbon, silicon, and nitrogen atoms, respectively. The vacancy is represented by a glass ball.} \label{fig:densities} \end{figure*} Figure~\ref{fig:wavefunction}(a, b) shows an overlap of the same phase for the lobes in the axial direction, resulting in a positive contribution. The basal lobes overlap with opposite phase, resulting in a negative contribution. These spatially well-separated contributions with opposite sign integrate to a rather large transition dipole moment. The same argument can be made for the NV center in the direction perpendicular to the defect axis.
The calculated radiative lifetime of the V1 center is 12~ns, which is comparable to the calculated 11~ns~\cite{Siyushev2013} and measured $\sim$12~ns~\cite{Batalov2008} lifetimes of the NV center in diamond. \begin{figure} \centering \includegraphics{wave.pdf} \caption{Square moduli of the corresponding wavefunctions of the ground state in the minority spin channel in the V1 (a, b) and NV (c, d) centers. The same color code is used as in Fig.~\ref{fig:densities}(f).} \label{fig:wavefunction} \end{figure} This line of discussion is guided by Eqs.~\eqref{eq:electric} and \eqref{eq:transition}, which indeed explain well our \emph{ab initio} results on the change of the permanent dipole moments versus the radiative lifetime. \section{Discussion} Our findings have important implications in seeking materials hosting defect quantum emitters for realizing robust spin-to-photon interfaces. It is common wisdom in the present literature that only defects with inversion symmetry are decoupled from stray electric fields, which significantly constrains the type of host materials. For instance, only 84 centrosymmetric three-dimensional (3D) crystal structures exist among the total 230 feasible 3D crystal structures. In particular, commercially available wide band gap semiconductors, such as SiC, ZnO, or GaN, are all non-centrosymmetric. In addition, two-dimensional (2D) transition-metal dichalcogenides (TMDs) are also \emph{per se} not centrosymmetric, thus they cannot host single-photon defect emitters with inversion symmetry. We provide theoretical arguments that the requirement of inversion symmetry can be relaxed and that other types of defect quantum emitters are in contention, such as the V1 center in 4H SiC. In particular, 4H SiC hosts further spin-active emitters, e.g., molybdenum defects recently studied in Refs.~\cite{Csore2016, Bosma2018}, which might also have favorable optical stability against stray electric fields.
Quantum emitters in other semiconductors such as ZnO~\cite{Morfa2012, Neitzke2015} and GaN~\cite{Berhane2017, Zhou2018} should also be revisited in this regard. Our study also provides guidance for designing novel defect quantum emitters in 2D compound materials~\cite{Tran2016, Tongay2013, Saigal2016, He2015, Srivastava2015} for a stable spin-to-photon interface at the single photon level. \section*{Acknowledgment} The support from the National Research, Development and Innovation Office in Hungary (NKFIH) Grant Nos.\ 2017-1.2.1-NKP-2017-00001 (National Quantum Technology Program) and NVKP\_16-1-2016-0043 (NVKP Program) as well as Grant No.\ NN127902 (EU QuantERA Nanospin consortial project), and from the EU Commission (ASTERIQS project with Grant No.~820394) as well as the BMBF project Q.Link.X is acknowledged. We acknowledge discussions with Alp Sipahigil.
\section{Introduction} Designing and understanding self-propelling biological or artificial microswimmers is at the basis of the physics of active systems. Swimming on the microscale is governed by low Reynolds numbers and requires special propulsion mechanisms which are effective in the presence of dominating viscous forces. The first propulsion principle that comes to mind is shape-changing swimmers, which deform their body in a cyclic way in order to propel. At low Reynolds numbers, the cyclic deformation pattern of a swimmer must not be invariant under time reversal, according to the scallop theorem \cite{Purcell1977}. In nature, many different examples of deformation swimmers can be found, such as bacteria, algae, and spermatozoa \cite{Lauga2009}. Realizing this concept in synthetic microswimmers is often difficult, as the scallop theorem requires control of at least two parameters. Shape-changing swimmers force the surrounding fluid via no-slip boundary conditions on the surface of their moving parts. Another successful class of synthetic microswimmers is phoretic swimmers, which actively create slip velocities at their surface. Self-propelling phoretic swimmers autonomously create gradients in external fields such as the concentration of a ``fuel'' or temperature, which in turn give rise to symmetry-breaking interfacial fluid flow in a thin interaction layer \cite{Anderson1989}. This fluid flow constitutes an effective slip velocity leading to propulsion \cite{ebbens2010,Illien2017}. Examples of such autophoretic swimmers are thermophoretic or diffusiophoretic swimmers, which generate gradients in temperature or in the concentration of interacting particles along their body. Self-diffusiophoretic swimmers generate a non-vanishing interfacial slip velocity on the particle surface via asymmetries in the solute concentration field and a short-range interaction between solute and swimmer \cite{Anderson1989}.
Diffusiophoretic models typically neglect advection of the fuel concentration \cite{Popescu2009,Popescu2011,Mozaffari2016}, but this has been included in Refs.\ \cite{Michelin2014,Yariv2015}. Many different aspects of swimmer behavior have been studied for self-diffusiophoretic swimmers, such as efficiency \cite{Sabass2010}, confinement effects \cite{Popescu2009,Mozaffari2016}, cargo transport \cite{Popescu2011,Baraban2012}, or the rich behavior during collisions with walls \cite{Uspal2015,Bayati2019}. While diffusiophoresis creates concentration gradients within the liquid surrounding the swimmer, concentration gradients of surface-active molecules (surfactants) within the interface of \emph{liquid} swimmers can also generate symmetry-breaking interfacial forces based on the Marangoni effect \cite{Scriven1960}. These propulsion mechanisms based on the Marangoni effect are utilized in different liquid Marangoni swimmers, such as active liquid droplets or active emulsions \cite{Herminghaus2014}. Examples are pure water in an oil-surfactant medium (squalane and monoolein) \cite{Izri2014} or liquid crystal droplets in surfactant solutions \cite{Herminghaus2014}, but many other systems can be generated, making this a versatile route to microswimmer production. This type of Marangoni swimmer is a liquid drop fully immersed in a liquid carrying surfactant, and propulsion is generated by the Marangoni effect along the liquid-liquid interface between swimmer and surrounding liquid, where a surfactant concentration gradient is maintained. In Ref.\ \cite{Izri2014}, an auto-diffusiophoretic mechanism \cite{michelin2013,Michelin2014} has been proposed to maintain the surfactant concentration gradient. Another mechanism that has been proposed is increased adsorption of surfactant at the front (in swimming direction) of the swimmer, which depresses the interfacial tension at the front \cite{Yoshinaga2012,Herminghaus2014,Schmitt2016}.
This gives rise to a Marangoni stress toward the back (where the interfacial tension is higher). The Marangoni stress forces the surrounding fluid toward the back of the swimmer, resulting in a motion toward the front of the swimmer. For all proposed mechanisms, the liquid swimmer autonomously maintains an increased surfactant concentration at the front of its interface with the surrounding liquid, and it propels in the direction of \emph{higher} surfactant concentration at its own interface. The self-phoretic and Marangoni swimming mechanisms discussed so far do not generate net forces on the swimmer but non-vanishing slip velocities on the particle surface via asymmetries in a temperature or solute concentration field. There is another class of self-propelling swimmers, partly based on the Marangoni effect and with a long history \cite{Tomlinson1864}: the so-called soap or camphor boats (or surfers), which we call \emph{Marangoni boats} in the following. Marangoni boats move at the liquid-air interface \cite{Nakata2015}; typically, they are \emph{solid} swimmers and operate on the centimeter scale. They are often used as a popular demonstration experiment for the Marangoni effect \cite{Renney2013}. Surface-active molecules serve as ``fuel''; they are deposited on the floating swimmer \cite{Renney2013}, the swimmer is soaked in them \cite{Hayashima2001,Nagayama2004, Soh2008,Akella2018,Boniface2019,Sur2019}, or the swimmer body itself is made from dissolving surfactant \cite{Loffler2019}. There are many examples based on DMF (dimethylformamide) \cite{Wang2016}, alcohol \cite{Renney2013,Sur2019}, soap \cite{Sur2019}, camphor \cite{Hayashima2001,Nagayama2004,Soh2008,Suematsu2014,Akella2018,Boniface2019}, or camphene \cite{Loffler2019} that have also been characterized quantitatively.
The surfactant molecules are emitted or dissolved from the swimmer, and a radial concentration gradient is established at the air-water interface by diffusion, possibly aided by evaporation for volatile surfactants. The radial concentration gradient creates (i) surface tension gradients and (ii) Marangoni stresses on the fluid. This does not necessarily lead to swimming, however, as long as the surface tension is symmetric and uniform around the swimmer. The surface tension pulls in the normal direction on the closed air-water-swimmer contact line. A uniform surface tension cancels along any arbitrarily shaped closed three-phase contact line, but a gradient in surfactant concentration along the contact line can generate a net propulsion force. We call this net force generated by surface tension gradients the \emph{direct Marangoni force} in the following. Symmetry-broken Marangoni flows created by the Marangoni effect can also contribute to (or impede) the propulsion via hydrodynamic drag on the swimmer surface. We denote the resulting forces that Marangoni flows exert by \emph{Marangoni flow forces} in the following. The Marangoni boat mechanism relies on both types of forces. If surfactant emission is anisotropic, the boat is, in general, propelled in the direction of higher surface tension, i.e., of \emph{lower} surfactant concentration along the air-water-swimmer contact line. We note that this is opposite to the propulsion in the direction of higher surfactant concentration for the active liquid swimmers discussed before. The Marangoni boat mechanism is also employed by some insects (rove beetles and \emph{Velia}) \cite{Bush2005} to propel on the water surface. There are also recent experiments \cite{Dietrich2020} and theoretical work \cite{Wurger2014} on the closely related system of thermally driven Marangoni boats or surfers.
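The cancellation of a uniform surface tension along a closed contact line, and the net direct Marangoni force arising from a gradient, can be checked numerically. The sketch below assumes a circular contact line with a cosine surfactant-induced modulation of the surface tension (all parameter values are illustrative assumptions):

```python
import numpy as np

# Net "direct Marangoni force" on a circular air-water-swimmer contact line:
# F = \oint gamma(theta) n_hat a dtheta, outward normal n_hat = (cos, sin).
a = 1.0e-3        # contact-line radius (m), assumed
gamma0 = 72e-3    # uniform part of the surface tension (N/m), clean water
dgamma = 1e-3     # front-back modulation amplitude (N/m), assumed

N = 100000
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dtheta = 2.0 * np.pi / N
gamma = gamma0 + dgamma * np.cos(theta)

# x points toward theta = 0, i.e., toward the higher surface tension
Fx = a * np.sum(gamma * np.cos(theta)) * dtheta
Fy = a * np.sum(gamma * np.sin(theta)) * dtheta

print(Fx, Fy)  # Fx = pi * a * dgamma; the uniform gamma0 part cancels
```

For this profile the net force is $F_x=\pi a\,\Delta\gamma$, pointing toward the higher surface tension, while the uniform contribution $\gamma_0$ cancels exactly, in line with the argument above.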
A full quantitative theory of Marangoni boats including hydrodynamics, surfactant advection, direct Marangoni forces, and Marangoni flows is still elusive despite previous progress \cite{Soh2008,Nakata2015,Gidituri2019,JafariKang2020}. Some theoretical approaches ignore the advection \cite{Lauga2012,Wurger2014,Vandadi2017}, several ignore the hydrodynamic flow fields \cite{Hayashima2001,Nagayama2004,Heisler2012,Iida2014,Suematsu2014,Koyano2017} or approximate them by a uniform flow \cite{Boniface2019}, which clearly oversimplifies the description of surfactant transport. In particular, on the numerical side, a recent paper by Kang {\it et al.} \cite{JafariKang2020} provides progress by fully including advection in the numerical solution for an anisotropic Marangoni boat. A theoretical description is complicated by the fact that most Marangoni boats operate at higher Reynolds numbers, and the fluid flow generated during Marangoni propulsion typically features vortices \cite{Sur2019,JafariKang2020}. Miniaturization to the microscale leads to low Reynolds numbers. Therefore, miniaturization is not only attractive for possible applications but also provides a starting point for the development of hydrodynamic theories, as the simpler linear Stokes equation holds for fluid flow at low Reynolds numbers. This has been initiated in Refs.\ \cite{Lauga2012,Wurger2014,Vandadi2017,Gidituri2019}. Another question concerns the role of intrinsic anisotropy, namely, whether isotropic swimmers with no intrinsically defined motion direction are also capable of spontaneous motion, which then spontaneously breaks the symmetry of the system. This question has been answered positively for autophoretic swimmers \cite{michelin2013,Michelin2014}, where it has been shown that advection by the surrounding fluid can maintain the necessary gradients in fields and/or concentrations above a critical strength of the advection (characterized by a dimensionless P\'eclet number).
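As an order-of-magnitude orientation of what miniaturization implies for these two dimensionless numbers, the estimates below use $\mathrm{Re}=Ua/\nu$ and $\mathrm{Pe}=Ua/D$ with illustrative (assumed) sizes and speeds:

```python
# Order-of-magnitude Reynolds and Peclet numbers for a Marangoni boat and a
# miniaturized swimmer; all sizes and speeds below are assumed for illustration.
nu = 1.0e-6   # kinematic viscosity of water (m^2/s)
D = 1.0e-9    # typical small-molecule surfactant diffusivity in water (m^2/s)

def reynolds(U, a):
    return U * a / nu

def peclet(U, a):
    return U * a / D

Re_cm = reynolds(0.05, 1.0e-2)    # cm-scale boat at 5 cm/s -> Re = 500
Pe_cm = peclet(0.05, 1.0e-2)      # -> Pe = 5e5
Re_um = reynolds(1.0e-3, 1.0e-4)  # 100-um swimmer at 1 mm/s -> Re = 0.1
Pe_um = peclet(1.0e-3, 1.0e-4)    # -> Pe = 100: advection still matters

print(Re_cm, Pe_cm, Re_um, Pe_um)
```

At the centimeter scale $\mathrm{Re}\gg 1$, whereas a $\sim$100~$\mu$m swimmer at $\sim$1~mm/s reaches $\mathrm{Re}\sim 0.1$ while the P\'eclet number remains of order $10^{2}$, so surfactant advection can stay relevant in the low-Reynolds-number regime.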
Liquid Marangoni swimmers are always symmetric by construction and have to maintain an increased surfactant concentration at the front of their interface by adsorption of surfactant (or micelles) or by autophoretic effects \cite{Yoshinaga2012,Izri2014,Schmitt2016}. For Marangoni boats, the question of spontaneous symmetry breaking has been addressed experimentally in Ref.\ \cite{Boniface2019}, where symmetric camphor disks have been shown to propel and swimming velocities have been shown to be largely independent of intrinsic swimmer anisotropy. So far, a theoretical answer is missing for Marangoni boats. \begin{figure} \centerline{\includegraphics[width=0.99\linewidth]{kRSMVE.png}} \caption{Schematic of the PEG-alginate capsule. The water-soluble ``fuel'' or spreading molecule PEG is incorporated in the core during alginate capsule synthesis and diffusively emitted during swimming.} \label{fig:design} \end{figure} Here, we present a combined experimental and theoretical approach. We try to further approach the microscale by synthesizing alginate capsules as swimmer bodies, which provide a porous matrix that can accept surface-active molecules. Several weakly surface-active fuels are tested, among which polyethylene glycol (PEG) turns out to be the most effective. The PEG-alginate swimmers exhibit fast and prolonged propulsion. In general, we find prolonged propulsion only if the spreading molecules are water-soluble, as is the case for PEG; the air-water interface can then regenerate as the fuel dissolves in water. The PEG swimmers are approximately half-spherical, i.e., symmetric; therefore, we can address the question of spontaneous motion for a symmetric swimmer design. Moreover, a half-spherical shape turns out to be very convenient for theoretical modeling and has also been employed in Ref.\ \cite{JafariKang2020}.
For small Reynolds numbers, this geometry allows for a complete theoretical description of Marangoni boat propulsion by analyzing the coupled problems of surfactant diffusion and advection, fluid flow, which includes surfactant-induced fluid Marangoni flow, and surfactant adsorption at the air-water interface; we also include a possible evaporation of surfactant. The swimming speed is determined from the balance of Marangoni forces (both direct forces from surface tension gradients and Marangoni flow forces) and drag forces. We can address the problem of spontaneous symmetry breaking and predict the swimmer's speed in a stationary state. This solution also gives hints on how to generalize to higher Reynolds numbers using the concept of the Nusselt number, for which many results are known phenomenologically. On the experimental side, we find further effects, such as the repulsive interaction of PEG-alginate swimmers with walls and the tendency to move in curved trajectories, which can be explained in the framework of the Marangoni boat mechanism. \section{Alginate based capsule swimmers} \subsection{Swimmer synthesis and characterization} The synthesized capsules show a propulsion mechanism similar to phenomena observed for insects of the genus \emph{Microvelia}. Our artificial microswimmers consist of PEG droplets surrounded by thin alginate shells (see Fig.\ \ref{fig:design}). For the preparation of these particles, we first form an aqueous PEG-alginate composite solution (standard: $w_{\rm PEG 300}=0.5\%$, $w_{\rm alginate}=0.5\%$). A droplet of this mixture is then deposited on the surface of an aqueous $\ce{CaCl2}$ solution (standard: $w_{\ce{CaCl2}\cdot\ce{2H2O}}=0.5\%$). The $\ce{Ca^2+}$ ions serve as cross-linkers and induce, within several microseconds, the gelation of the alginate membranes according to the egg-box model \cite{Thiele1967,Leick2011,Klein1983,Cao2020}.
Immediately after the formation of these particles, the capsules start to swim along the water surface. Dripping microliter amounts of alginate into a cross-linker salt solution containing counterions starts an ionotropic gelation and produces approximately half-spherical alginate gel capsules of millimeter radius (see Fig.\ \ref{fig:dripping}) \cite{Thiele1967}. We report results for $a\sim 1500\,{\rm \mu m}$; radii down to $a\sim 150\,{\rm \mu m}$ can be reached. For alginate gelation, different salt solutions containing divalent cations can be used, such as $\ce{CaCl2}$, $\ce{CuCl2}$, or $\ce{BaCl2}$ solutions. \begin{figure} \begin{center} \includegraphics[width=0.99\linewidth]{fig2.pdf} \caption{Synthesis of PEG-alginate swimmers by pipetting microliter amounts of PEG-alginate solution into cross-linker solution. Side view of a PEG-alginate swimmer showing its half-spherical shape.} \label{fig:dripping} \end{center} \end{figure} Adding surfactant to the alginate solution before dripping automatically loads the porous gel capsule with surfactant molecules. For suitable surfactants, capsules start to propel spontaneously on the air-water interface directly after dripping. The simple dripping technique allowed us to test many different ``fuels'': successfully propelling fuels are polyethylene glycols (PEGs) with molar weights $200-35000 \,{\rm g/mol}$, alcohols, acetic acid (stronger acids lead to protonation of alginate and subsequent coagulation), and organic solvents. A complete list of successfully tested fuel substances is given in Table \ref{tab:fuels}. Swimmers fueled by PEG (or polypropylene glycol (PPG)), in particular PEG 300, exhibit the best results regarding propulsion speed and propulsion duration; the reason is a suitable combination of diffusion constant, solubility, and gelation properties of the alginate-PEG mixture.
Corresponding monomers and dimers (ethylene glycol, propylene glycol, diethylene glycol) also exhibit good swimming properties but with lower speed and duration. It is particularly important for prolonged propulsion that the fuel substance lowers the air-water surface tension but is also water-soluble such that it dissolves in the water reservoir after spreading in order to regenerate the air-water interface. Evaporation from the air-water interface is another mechanism to achieve such a regeneration, which is at work in camphor boats \cite{Soh2008,Suematsu2014,Akella2018,Boniface2019}. Strong surfactants and detergents, such as sodium dodecyl sulfate, generate spreading pressures that can rupture the alginate capsule. Moreover, they quickly saturate the air-water interface such that concentration gradients and, thus, swimming cannot be established. In the following, we report results for solutions of alginate and PEG 300 ($w_{\rm alginate}=0.5\%$ and $w_{\rm PEG 300}=0.5\%$) dripped into a $\ce{CaCl2}$ cross-linker solution ($w_{\ce{CaCl2}\cdot\ce{2H2O}}=0.5\%$). \begin{table*} \begin{center} \caption{Fuel substances leading to successful alginate capsule propulsion} \label{tab:fuels} \begin{tabular}{llll} \hline\noalign{\smallskip} Polymers & Alcohols & Acids & Organic solvents \\ \noalign{\smallskip}\hline\noalign{\smallskip} PEG 200 & Ethylene glycol & Acetic acid & Acetone \\ PEG 300 & Propylene glycol & & Dimethyl sulfoxide\\ PEG 400 & Diethylene glycol & & Tetrahydrofuran \\ PEG 600 & Ethanol & & \\ PEG 1000 & Isopropanol & &\\ PEG 6000 & 1-pentanol & &\\ PEG 20000 & Benzyl alcohol & & \\ PEG 35000 & 1-hexanol & &\\ PPG 400 & 2-butanol, tert-butanol & & \\ & Dodecanol & &\\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table*} Alginate gels have a porous structure \cite{Thiele1967,Leick2011,Klein1983}.
Scanning electron microscopy (SEM) of the alginate capsules reveals their porosity and also a certain roughness on the microscale with asperities on the capsule surface (see Fig.\ \ref{fig:SEM}). The pores are essential for the slow diffusive emission of surfactant from the capsule \cite{Leick2011,Klein1983}. PEG diffusion through the porous alginate matrix is much slower than PEG diffusion in water; therefore, PEG should be released with a slowly varying controlled diffusive current that is limited by its slow diffusion in the alginate. The shape of the capsule and the spatial distribution of pores on the surface can break the overall spherical symmetry and give rise to small anisotropies in the emission, in principle. \begin{figure} \centerline{\includegraphics[width=0.99\linewidth]{fig3_small.png}} \caption{Scanning electron microscopy images of the porous structure of unloaded alginate capsules (1a,1b) and PEG-loaded alginate capsules (2a,2b) in 5000-fold (1a,2a) and 15000-fold (1b,2b) magnification.} \label{fig:SEM} \end{figure} \subsection{Swimming motion} The alginate-PEG swimmers exhibit a fast and sustained motion. The swimming motion was observed in a cylindrical dish (diameter $24\,{\rm cm}$) for up to $20\,{\rm min}$. The swimmers exhibit typical speeds $U_{\rm swim} \sim 2-3\, {\rm cm/s}$ corresponding to $10-20$ swimmer sizes per second (see Fig.\ \ref{fig:trajectory}); after $20\,{\rm min}$, velocities $U_{\rm swim} \sim 1 \,{\rm cm/s}$ can still be measured. This swimming performance is comparable to camphor boats \cite{Akella2018,Boniface2019} and active liquid droplets \cite{Izri2014,Herminghaus2014}. \begin{figure} \centerline{\includegraphics[width=0.99\linewidth]{fig4.pdf}} \caption{Typical swimming trajectory of a PEG-alginate swimmer in the cross-linker solution. 
Color-coded is the swimming velocity $U_{\rm swim}$; the trajectory shows the first $84\,{\rm s}$ of swimming.} \label{fig:trajectory} \end{figure} A typical swimming trajectory (lasting $84 \,{\rm s}$) far from a wall is shown in Fig.\ \ref{fig:trajectory}. We obtained this trajectory from a single particle tracking analysis (using ImageJ); typical swimming velocities are $U_{\rm swim} \sim 2-3 \,{\rm cm/s}$ corresponding to $10-20$ swimmer sizes per second. This corresponds to moderate Reynolds numbers ${\rm Re} = {\rho U_{\rm swim} 2a}/{\mu}\sim 60$ (with the swimmer diameter $2a\simeq 3000\,{\rm \mu m}$ as length scale and the viscosity and density of water, $\mu \simeq 10^{-3} \,{\rm Pa\,s}$ and $\rho = 10^3 \,{\rm kg/m^3}$). \begin{figure} \begin{center} \includegraphics[width=0.99\linewidth]{fig5.pdf} \caption{Swimming trajectories of two PEG-alginate swimmers in a cylindrical container. Color-coded is the sign of the curvature; blue/red trajectories curve clockwise/counter-clockwise. Swimmers prepared according to the same protocol exhibit individually different curving behavior (left: mostly counter-clockwise, right: mostly clockwise). Reflections at walls are of different duration. } \label{fig:trajectory2} \end{center} \end{figure} Swimming trajectories such as in Fig.\ \ref{fig:trajectory} and in confinement in Fig.\ \ref{fig:trajectory2} exhibit phases with a characteristic curvature. Marking the swimmer with elongated plastic fragments shows that the fragment does not turn with respect to the direction of motion, i.e., the curving of the trajectory is correlated with a reorientation of the swimmer. This is a hint that during curved swimming the swimming direction is linked to the orientation of the particle and, therefore, that the spherical symmetry is slightly broken by irregularities in the porous structure of the alginate particle (see SEM pictures in Fig.\ \ref{fig:SEM}).
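The Reynolds number estimate quoted above follows directly from the listed values; as a minimal numerical sketch (using only quantities given in the text):

```python
# Reynolds number estimate for the PEG-alginate swimmers,
# using the values quoted in the text.
rho = 1e3        # density of water [kg/m^3]
mu = 1e-3        # viscosity of water [Pa s]
a = 1500e-6      # swimmer radius [m]
U = 0.02         # typical swimming speed, 2 cm/s [m/s]

Re = rho * U * (2 * a) / mu
print(f"Re = {Re:.0f}")   # -> Re = 60
```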
The swimming direction is selected by dominating pores which determine a preferred direction of emission and, thus, propulsion by the resulting surfactant gradients. Curving itself can be caused by additional torques from asperities of the alginate capsule where surfactant is emitted preferentially in the tangential direction. A similar mechanism is at work in camphor-driven rotors \cite{Nakata1997,Koyano2017}. This is supported by the finding that the curving behavior of swimmers prepared by the same protocol (such as the swimmers in Fig.\ \ref{fig:trajectory2}) is individually different and seems to depend on small differences between irregularities acquired in the preparation process. Recently, also vortex shedding at Reynolds numbers ${\rm Re}\sim 100-200$ has been proposed to cause curving of trajectories \cite{Sur2019}. Swimmers are also repelled by walls and reverse their direction of motion normal to the wall. In the course of a collision in the normal direction, the swimmer keeps, however, its orientation while the direction of motion is reversed, i.e., during normal wall collisions the swimming direction also reverses with respect to the particle orientation. Swimming direction reversal has also been observed for camphor boats \cite{Hayashima2001,Nagayama2004,Nakata2015}. Swimming trajectories in Fig.\ \ref{fig:trajectory2} also show collisions with walls that last longer; these collisions can also feature a reorientation of the swimmer, similar to what has been predicted for self-diffusiophoretic swimmers \cite{Uspal2015}. \subsection{Swimming mechanism} The order of magnitude of swimming speeds can only be explained as a result of a modulation of the large liquid-air surface tension. Marangoni mechanisms based on surface tension variations within the gel-liquid interface between alginate capsule and surrounding water are unlikely because the interfacial tensions and, thus, also the Marangoni stresses are too small for solid-liquid or gel-liquid interfaces.
This hints at a Marangoni boat propulsion mechanism for the alginate-PEG swimmers. There is further experimental evidence supporting the Marangoni boat mechanism: (a) Sinking capsules stop swimming which excludes a phoretic or Marangoni mechanism based only on the swimmer-liquid interface such as the active liquid droplet mechanisms \cite{Izri2014,Herminghaus2014}. (b) Only water-soluble spreading molecules lead to prolonged propulsion because they allow regeneration of the air-water interface by re-dissolving after spreading, which is crucial to establish concentration gradients at the air-water interface. (c) Local Wilhelmy plate surface tension measurements demonstrate surface tension modulations depending on the distance to a swimmer; this demonstrates that additional surfactant is emitted close to the swimmer. (d) Particle image velocimetry (PIV) measurements and selective staining with aniline show fluid motion consistent with surfactant spreading by surface tension reduction. (e) Swimming speed depends on the diffusive mass outflux. We will develop a theory for this dependence in the theoretical part of the paper, which describes the data (without free fitting parameters). (f) Repulsion and direction reversal without reorientation of the swimmer can be explained by an accumulation of surfactant emitted by the swimmer in front of the wall. This points to a motion opposite to the surfactant concentration gradient, while the direction of motion is not completely fixed relative to swimmer orientation. More details regarding points (d) and (e) are given below. All these results suggest that the swimmer diffusively emits surfactant which reduces the surface tension. The swimmers are spherically symmetric to a good approximation and this symmetry is strongly broken by the concentration profile in the fast moving state. 
The only available mechanism for symmetry breaking in the moving state is advection of surfactant by the surrounding moving liquid, which selects a swimming direction spontaneously. The results regarding curved trajectories and wall collisions suggest that the spherical symmetry is not perfect and that large pores in the capsule shell can select a weakly preferred propulsion direction and link the capsule orientation to the swimming direction. This weak link can be broken during a normal collision with a wall, when the swimmer reverses direction without changing orientation. \subsubsection{PIV measurements} PIV measurements were performed with polymethyl methacrylate (PMMA) tracer particles with sizes between $30$ and $50\,{\rm nm}$ and visualize the fluid flow close to the air-water interface. Figure \ref{fig:PIV} shows the results (A) directly after swimmer synthesis by dripping, i.e., in the initial starting phase of the swimmer, and (B) shortly after swimming has started. \begin{figure*} \begin{center} \includegraphics[width=0.99\linewidth]{fig6_small.png} \caption{PIV measurements of a PEG-alginate swimmer (A) after dripping and (B) in motion. The velocity scale bar refers to the fluid velocities, which are also color-coded for velocity. The white dashed arrows indicate the direction of the particle motion. On the right we indicate the fluid motion in (B) featuring radial Marangoni flow (blue), combined with flows corresponding to two counter-rotating vortices created by particle motion (green), and downstream Karman-like vortices (red). } \label{fig:PIV} \end{center} \end{figure*} In Fig.\ \ref{fig:PIV}(A) we observe strong radial spreading of surfactant by initial Marangoni flows. Then, the symmetry is spontaneously broken when swimming is initiated, and Fig.\ \ref{fig:PIV}(B) shows the fluid surface flow in the initial swimming stage.
During swimming we still observe radial Marangoni flows (blue), but the fluid flow around the swimming object creates a tangential backward component (green) because two counter-rotating vortices form; moreover, Karman-like vortices appear on the rear side (red). Vortex formation demonstrates that the fluid motion happens at moderate Reynolds numbers ${\rm Re} \sim 60$. Similar vortex structures have also been observed in Ref.\ \cite{Sur2019} for disks propelled by alcohol. Nevertheless, Reynolds numbers are moderate (${\rm Re} < 200$) such that we can expect a steady fluid flow (eventually with boundary layer separation from the sphere and stationary Marangoni vortices). Only at higher Reynolds numbers ${\rm Re} > 200$ do we expect unsteady or even turbulent flow around a sphere \cite{Johnson1999}. \subsubsection{Mass outflux and velocity measurements} We propose that the swimming motion is caused by surfactant that is diffusively emitted by the PEG-alginate capsule. Therefore, a relatively slow reduction of the total mass of the PEG-alginate capsules should be measurable, which also correlates with the swimming speed. Overall spherical symmetry of the capsule implies that the emission current density $\alpha$ is uniform on the capsule surface. Quantitative measurements of the mass outflux are difficult. In Ref.\ \cite{Boniface2019} this has been achieved only indirectly by measuring the increase in surfactant in the surrounding solution. Here, we measure the mass outflux directly by removing swimmers (prepared according to the same protocol) after times $t=1,2,3,... {\rm min}$ from the swimming solution, freeze-drying the swimmers to completely remove water from the alginate hydrogel, and determining their weight, which gives the mass $m(t)$ of the swimmer at times $t=1,2,3,... {\rm min}$. Figure \ref{fig:mdot}(left) shows the results for the mass averaged over 10 swimmers; error bars are the standard deviation.
Diffusive outflux through a porous shell of thickness $h$ ($h<a$) approximately follows an exponential decay. The emission current density is $\alpha \approx D_s (c_i-c_0)/h$, where $D_s$ is the diffusion constant in the gel, $c_i$ the interior PEG concentration and $c_0$ the exterior PEG concentration in solution. We assume that PEG has to diffuse over a fixed distance $h$ for release; more refined release models use a time-dependent diffusion distance \cite{Higuchi1961,Higuchi1963,Boniface2019}. For a half-sphere, this results in $\dot{c}_i = -3\alpha/a = -3 D_s (c_i-c_0)/(ha)$ and an exponential decay of $c_i(t)$ and, thus, $m(t)$. This motivates an error-weighted least-square fit with an exponential \begin{equation} m(t) = m_\infty + m_0\exp(-t/\tau_m), \end{equation} which describes the data well with an ``empty'' mass $m_\infty \approx 180\,{\rm \mu g}$, a mass loss $m_0\approx 182\,{\rm \mu g}$, and a time constant $\tau_m \approx 9.95\,{\rm min}$ (see Fig.\ \ref{fig:mdot}(left)). This confirms a slow diffusional surfactant release, i.e., $\alpha$ changes on a large time scale $\tau_m$; this time scale is large compared to any microscopic time scale of the fluid flow and the surfactant diffusion. Therefore, we expect that fluid flow and surfactant concentration are always in a quasi-stationary state during swimming, i.e., adiabatically following a slowly changing $\alpha$. Differentiating with respect to time gives the mass outflux $\dot{m}$ as a function of time [see Fig.\ \ref{fig:mdot}(middle)]. \begin{figure*} \begin{center} \includegraphics[width=0.99\linewidth]{mdot3vexp2.pdf} \caption{Left: Mass of the PEG-alginate swimmers as a function of time with an exponential fit $m(t) = m_\infty + m_0\exp(-t/\tau_m)$ (see text); error bars (light blue) denote the standard deviation. Middle: Resulting mass loss $-\dot{m}$ as a function of time.
Right: Corresponding velocity of the swimmer averaged over 10 swimmers; error bars (light blue) denote the standard deviation. The orange line is a fit $U_{\rm swim}(t) = u_0\exp(-t/\tau_{u,0})+ u_1\exp(-t/\tau_{u,1}) $ (see text) motivated by the existence of two swimming phases. } \label{fig:mdot} \end{center} \end{figure*} The corresponding swimming velocity $U_{\rm swim}$ of the PEG-alginate swimmers is measured via single-particle tracking analysis. Figure \ref{fig:mdot}(right) shows the results for the velocity averaged over 10 swimmers prepared according to the same protocol as for the mass measurements; error bars are the standard deviation. The data clearly show a fast initial drop of the velocity in a first phase, followed by a slower decay in a second phase. During the first phase, slow diffusion of surfactant through the porous alginate matrix might not be necessary yet, and gelation of the capsule alginate shell might also be incomplete. The existence of several swimming phases has also been observed for camphor disks in Ref.\ \cite{Akella2018}. The second phase should be characteristic of propulsion triggered by slow diffusional release as described above. This motivates a fit \begin{equation} U_{\rm swim}(t) = u_0\exp(-t/\tau_{u,0})+ u_1\exp(-t/\tau_{u,1}) \end{equation} with two exponentials. The resulting error-weighted least-square fit describes the data well, as shown in Fig.\ \ref{fig:mdot}(right). The first phase has a time constant $\tau_{u,0} \simeq 0.37\,{\rm min}$ (and $u_0\simeq 0.74\,{\rm cm/s}$), whereas the second phase has $\tau_{u,1} \simeq 15.89\,{\rm min}$ (and $u_1\simeq 2.27\,{\rm cm/s}$), which is comparable to $\tau_m$. This further supports that diffusive release of surfactant causes the propulsion during the second phase. \section{Theoretical results} \subsection{Model} In the theoretical part of the paper, we focus on the dependence of swimming speed on diffusive surfactant release.
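As a side note on the experimental analysis above, the error-weighted exponential fits of $m(t)$ and $U_{\rm swim}(t)$ can be reproduced with standard least-squares routines. The following sketch uses noise-free synthetic data generated with the reported parameter values; it illustrates only the fitting procedure, not the actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def m_model(t, m_inf, m0, tau_m):
    # single exponential with offset, m(t) = m_inf + m0 exp(-t/tau_m)
    return m_inf + m0 * np.exp(-t / tau_m)

def u_model(t, u0, tau0, u1, tau1):
    # sum of two exponentials for the two swimming phases
    return u0 * np.exp(-t / tau0) + u1 * np.exp(-t / tau1)

t = np.linspace(0.1, 20.0, 200)                # time in minutes
m_data = m_model(t, 180.0, 182.0, 9.95)        # micrograms (synthetic)
u_data = u_model(t, 0.74, 0.37, 2.27, 15.89)   # cm/s (synthetic)

sigma = np.full_like(t, 5.0)                   # assumed uniform error bars
p_m, _ = curve_fit(m_model, t, m_data, p0=(150, 150, 5), sigma=sigma)
p_u, _ = curve_fit(u_model, t, u_data, p0=(1.0, 0.5, 2.0, 10.0))

print(np.round(p_m, 2))   # recovers (180, 182, 9.95)
print(np.round(p_u, 2))   # recovers (0.74, 0.37, 2.27, 15.89)
```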
So far, this important question has not received attention in the literature. The strategy to calculate the swimming speed is as follows. We first prescribe a stationary velocity $\vec{U} = U \vec{e}_z$ of the swimmer and analyze the following three coupled problems for their stationary state: \begin{itemize} \item[(i)] Surface tension reduction by surfactant adsorption at the air-water interface; depending on the volatility of the surfactant we also need to include a possible evaporation of surfactant. PEG is not volatile but the theory will apply to the physics of the Marangoni boat mechanism in general and should also explain quantitative results on camphor boats from Ref.\ \cite{Boniface2019}. As opposed to PEG, camphor is a volatile surfactant which quickly evaporates from the air-water interface. \item[(ii)] Fluid flow, which includes both the fluid flow induced by motion of the half-spherical capsule and additional surfactant-induced Marangoni flow inside the fluid. \item[(iii)] Diffusive surfactant release from the swimmer and subsequent diffusion and advection. \end{itemize} Solving these three coupled problems we can obtain the Marangoni forces as a function of the prescribed velocity $U$ from the surfactant concentration profile. Finally, the actual swimming velocity $U=U_{\rm swim}$ is determined by the force equilibrium between drag force, direct propelling Marangoni forces from the surface tension gradient along the air-water-swimmer contact line, and Marangoni flow forces, which can increase either the drag or the direct Marangoni propulsion force. The fluid flow part (ii), the drag force, and also the Marangoni forces in the force balance strongly depend on the Reynolds number. Although the Reynolds number for the PEG-alginate swimmers is moderate (${\rm Re} \sim 60$), we will first develop a low Reynolds number theory, and try to address higher Reynolds numbers afterward using phenomenological results for the Nusselt number. 
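The final step of this strategy, solving the force balance for $U_{\rm swim}$, can be sketched schematically. In the snippet below, `F_M_tot` is a purely hypothetical toy model of the total Marangoni force (propulsive and saturating with $U$), standing in for the result of the coupled problems (i)-(iii); only the Stokes drag of the half-sphere is taken from the text:

```python
import numpy as np
from scipy.optimize import brentq

mu = 1e-3     # viscosity of water [Pa s]
a = 1.5e-3    # swimmer radius [m]

def F_drag(U):
    # Stokes drag of a half-sphere at the interface (low Re)
    return -3.0 * np.pi * mu * a * U

def F_M_tot(U, F0=1e-6, U_c=0.01):
    # Hypothetical toy Marangoni force: propulsive, saturating with U.
    # The actual U-dependence follows from the diffusion-advection problem.
    return F0 * U / (U_c + U)

# Swimming condition: F_drag(U) + F_M_tot(U) = 0 with U > 0
U_swim = brentq(lambda U: F_drag(U) + F_M_tot(U), 1e-6, 1.0)
print(f"U_swim = {U_swim * 100:.2f} cm/s")
```

Note that $U=0$ also solves the force balance; the existence of a second, finite-$U$ root is what permits spontaneous symmetry breaking in such models.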
We introduce coordinates such that the origin $r=0$ is at the center of the circular planar solid surface of the half-sphere, and the liquid-air interface is at $y=0$ (with $y<0$ being the liquid phase) and $\vec{e}_z$ will coincide with the spontaneously selected swimming direction. We also use spherical coordinates such that $\theta = 0$ is the swimming direction and the interfacial plane is located at $\phi = 0,\pi$ ($y=0$). The half-sphere has radius $a$ such that the contact line is at $r=a$ and $\phi = 0,\pi$ (and parametrized by $\theta$). We denote the half-spherical surface of the swimmer by $S$, the circular air-water-swimmer contact line by $L$, and the liquid-air interface outside the swimmer as $S_{\rm Int}$, see Fig.\ \ref{fig:scheme}. \begin{figure} \centerline{\includegraphics[width=0.99\linewidth]{scheme.pdf}} \caption{Side view (top) and top view (bottom) of the half-spherical Marangoni swimmer geometry with surfactant concentration field $c(\vec{r})$ and coordinates. } \label{fig:scheme} \end{figure} \subsubsection{Surface tension reduction by surfactant adsorption and evaporation} We start with problem (i), which is independent of the Reynolds number. In equilibrium, the surfactant concentration $\Gamma(\vec{r})$ at the interface $y=0$ for a given bulk subsurface concentration $c(\vec{r})$ is given by Langmuir adsorption \begin{equation} \Gamma(c) = \Gamma_{\rm max} {K_Lc}/(1+K_Lc) \end{equation} (with the adsorption equilibrium constant $K_L$ and the maximal surfactant surface concentration $\Gamma_{\rm max}$). In Langmuir adsorption, we assume ideal behavior of the surfactant molecules. According to the Gibbs adsorption isotherm, the interfacial surface tension $\gamma$ is related by $d\gamma = - RT \Gamma d\ln c$ to surface concentration and bulk concentration \cite{Rosen2012}. 
Together with the Langmuir equation, this leads to the Szyszkowski equation \begin{align} \Delta \gamma &= -RT\Gamma(c_0) \frac{1}{c_0} \Delta c = -R T \Gamma_{\rm max}\frac{ K_L}{1+K_Lc_0}\Delta c \label{eq:Szyszkowski} \end{align} (with the gas constant $R=N_A k_B$ and $\Gamma_{\rm max}$ in mol per area), formulated for small local variations around a background, $c(\vec{r}) = c_0 + \Delta c(\vec{r})$. Small surfactant concentration variations thus linearly reduce the surface tension, \begin{align} \Delta \gamma(\vec{r}) &= -\kappa \Delta c(\vec{r}) ~~\mbox{with} ~~ \kappa = RT \Gamma_{\rm max} \frac{K_L}{1+K_Lc_0}, \label{eq:kappa} \end{align} where $\vec{r}$ is an interfacial vector with $y=0$. We choose the background $c_0$ as the bulk value $c_0 = c(\infty)$ for $|\vec{r}|\to \infty$. In formulating Eq.\ (\ref{eq:kappa}) locally, we already assumed that the on and off kinetics of surfactant at the interface are fast such that equilibrium can be assumed to be established instantaneously at \emph{every} point $\vec{r}$ on the interface. Then, the surface concentration $\Gamma(\vec{r})$ is also slaved to the bulk and only a passive ``reporter'' of the bulk subsurface concentration $\left.c(\vec{r})\right|_{y=0}$, and we do not have to solve a separate dynamics for $\Gamma(\vec{r})$ in the interface. This assumption is typically valid for small surfactant molecules \cite{Li1994}, in particular for water-soluble spreading molecules such as PEG. The assumption also implies that there is no flux imbalance within the interface, so that the bulk diffusive flux to the interface $S_{\rm Int}$ has to vanish, \begin{equation} j_{\rm Int} = -\left. D \vec{\nabla} c(\vec{r})\cdot \vec{n}^{\rm out}\right|_{y=0} =-D \left. \partial_y c(\vec{r})\right|_{y=0}= 0, \label{eq:jInt} \end{equation} which provides the corresponding boundary condition to the diffusion-advection problem (iii) in the bulk. Here, $D$ is the surfactant diffusion constant in the bulk liquid.
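As an illustration of Eqs.\ (\ref{eq:Szyszkowski}) and (\ref{eq:kappa}), the sketch below compares the linearization $\Delta\gamma = -\kappa\,\Delta c$ with the full Szyszkowski isotherm $\Delta\gamma(c) = -RT\Gamma_{\rm max}\ln(1+K_L c)$; the parameter values $\Gamma_{\rm max}$, $K_L$, $c_0$ are purely illustrative, not measured values for PEG:

```python
import numpy as np

R, T = 8.314, 298.0   # gas constant [J/(mol K)], temperature [K]
Gamma_max = 1e-6      # illustrative maximal coverage [mol/m^2]
K_L = 1.0             # illustrative Langmuir constant [m^3/mol]
c0 = 0.1              # background concentration [mol/m^3]

# Eq. (kappa): linear response coefficient of the surface tension
kappa = R * T * Gamma_max * K_L / (1.0 + K_L * c0)

# Full Szyszkowski isotherm: Delta_gamma(c) = -R T Gamma_max ln(1 + K_L c)
def dgamma(c):
    return -R * T * Gamma_max * np.log(1.0 + K_L * c)

# Compare linearization -kappa*dc with the full expression at c0 + dc
dc = 1e-3
lin = -kappa * dc
full = dgamma(c0 + dc) - dgamma(c0)
print(lin, full)   # agree to leading order in dc
```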
The surface concentration $\Gamma(\vec{r})$ should also be small enough to avoid saturation of the air-water interface, which also requires water-soluble spreading molecules such as PEG. So far, we have not considered the possibility of surfactant evaporation from the interface. Evaporation enters the balance of fluxes to and from the interface (see Fig.\ \ref{fig:scheme}), and we have to replace Eq.\ (\ref{eq:jInt}) by \begin{equation} j_{\rm Int} = -\left. D \vec{\nabla} c(\vec{r})\cdot \vec{n}^{\rm out}\right|_{y=0} = j_{\rm ev} = k\left. c(\vec{r})\right|_{y=0}, \label{eq:jev} \end{equation} where $k$ is the rate constant for evaporation. \subsubsection{Fluid flow at low Reynolds numbers} We consider the rest frame of the swimmer and linearly decompose the total fluid flow field into a field $ \vec{v}(\vec{r})$, which is the flow field of a half-sphere pulled with velocity $U\vec{e}_z$ through the liquid, and a correction $\vec{{v}}_{\rm M}(\vec{r})$ from Marangoni flows, $ \vec{v}_{\rm tot}(\vec{r}) = \vec{v}(\vec{r}) + \vec{{v}}_{\rm M}(\vec{r})$. For low Reynolds numbers, \emph{both} $ \vec{v}(\vec{r})$ and $\vec{{v}}_{\rm M}(\vec{r})$ (and the associated pressure fields) fulfill the incompressibility condition $\vec{\nabla}\cdot \vec{v} = 0$ and the linear Stokes equation $\mu \vec{\nabla}^2 \vec{v} = \vec{\nabla} p$, where $\mu$ is the fluid viscosity. The Stokes equations for $\vec{v}(\vec{r})$ and $\vec{v}_{\rm M}(\vec{r})$ are decoupled because of linearity; this will be different at high Reynolds numbers. The flow field $\vec{v}(\vec{r})$ of an externally pulled half-sphere has no-slip boundary conditions on the surface of the sphere, stress-free boundary conditions at the liquid-air interface, and $\vec{v}(\infty) = - U\vec{e}_z$ at infinity.
The total flow field $\vec{v}_{\rm tot}(\vec{r})$ also has no-slip boundary conditions on the surface of the sphere, assumes $\vec{v}_{\rm tot}(\infty) = - U\vec{e}_z$ at infinity, but is subject to Marangoni stresses at the liquid-air interface. Consequently, the difference $\vec{v}_{\rm M}(\vec{r}) = \vec{v}_{\rm tot}(\vec{r})- \vec{v}(\vec{r}) $ from Marangoni flows has no-slip boundary conditions on the surface of the sphere, has vanishing velocity $\vec{v}_{\rm M}(\infty) = 0$ at infinity, and is subject to Marangoni stresses at the liquid-air interface. Moreover, for all three flow fields, there is no normal flow across the liquid-air interface $\left. v_{\rm tot, y}(\vec{r}) \right|_{y=0} = \left. v_y(\vec{r}) \right|_{y=0} = \left. v_{\rm M, y}(\vec{r}) \right|_{y=0} = 0$. We will assume that the liquid-air interface remains flat, even if the sphere moves. This requires that typical viscous forces remain small compared to interfacial stress, $\mu U \ll \gamma$, which is fulfilled with $\mu U \sim 10^{-5}\, {\rm N/m}$ for $U\sim 1 \,{\rm cm/s}$ and $\gamma \sim 0.07 \,{\rm N/m}$ for the air-water interface. We also neglect a possible curvature of the interface from wetting effects. The Marangoni flow is caused by tangential Marangoni stresses at the liquid-air interface $y=0$, \begin{equation} \mu \vec{n}^{\rm out}\cdot \vec{\nabla} \left. \vec{v}_{\rm M}(\vec{r})\right|_{y=0} = \mu \partial_y \left. \vec{v}_{\rm M}(\vec{r})\right|_{y=0} = \vec{\nabla}_S \Delta \gamma(\vec{r}), \label{eq:Marangoni} \end{equation} which act both on $ \vec{v}_{\rm M}$ and $ \vec{v}_{\rm tot}$. At low Reynolds numbers, the flow field $\vec{v}(\vec{r})$ is given by ``half'' ($y<0$) the Stokes flow field around a sphere, which automatically fulfills the boundary condition $\left. v_y(\vec{r}) \right|_{y=0} = 0$ for symmetry reasons. 
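The radial and polar profiles $u$ and $v$ of this standard Stokes field indeed satisfy no-slip at $r=a$ and incompressibility, which can be verified symbolically; a sympy sketch:

```python
import sympy as sp

r, theta, a, U = sp.symbols('r theta a U', positive=True)

# Axisymmetric Stokes field around a translating sphere (rest frame,
# uniform flow -U e_z at infinity), as used in the text:
u = -sp.Rational(1, 2) * (a / r)**3 + sp.Rational(3, 2) * (a / r) - 1
v = -sp.Rational(1, 4) * (a / r)**3 - sp.Rational(3, 4) * (a / r) + 1
v_r = U * sp.cos(theta) * u
v_t = U * sp.sin(theta) * v

# No-slip at the sphere surface r = a
assert u.subs(r, a) == 0 and v.subs(r, a) == 0

# Incompressibility in spherical coordinates (axisymmetric):
div = (sp.diff(r**2 * v_r, r) / r**2
       + sp.diff(sp.sin(theta) * v_t, theta) / (r * sp.sin(theta)))
assert sp.simplify(div) == 0
print("no-slip and incompressibility verified")
```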
In spherical coordinates, the axisymmetric Stokes flow field is \begin{subequations} \begin{align} \vec{v}(\vec{r}) &= U \cos\theta u(r/a)\vec{e}_r + U \sin\theta v(r/a) \vec{e}_\theta ~~\mbox{with} \nonumber\\ u(r/a) &= \left[-\frac{1}{2} \left(\frac{a}{r}\right)^3 + \frac{3}{2} \frac{a}{r} - 1 \right] \label{u}\\ v(r/a) &= \left[-\frac{1}{4} \left(\frac{a}{r}\right)^3 - \frac{3}{4} \frac{a}{r} + 1 \right] \label{v} \end{align} \end{subequations} \subsubsection{Surfactant diffusion and advection} Surfactant molecules are emitted from the half-spherical surface $S$ and diffuse into the liquid phase. At the same time, they are advected by the total fluid flow. In the stationary limit, the bulk concentration field is governed by the diffusion-advection equation \begin{align} 0= \partial_t c &= D \vec{\nabla}^2 c - (\vec{v}(\vec{r})+ \vec{v}_{\rm M}(\vec{r})) \cdot \vec{\nabla} c. \label{eq:diffadv0} \end{align} Because of the slow diffusional surfactant release the appropriate boundary condition on $S$ is a constant flux boundary condition, \begin{align} \left. \vec{j}\cdot\vec{n}\right|_S &=- D\left. \vec{\nabla} c\cdot\vec{n} \right|_S =\alpha, \label{eq:constantflux} \end{align} together with $c(\infty)=0$ and the no-flux boundary condition (\ref{eq:jInt}) at the interface $S_{\rm Int}$. The flux $\alpha$ is only slowly changing (on the time scale $\tau_m$) and approximated as a constant for the calculation of quasi-stationary fluid flow and concentration fields. \subsubsection{Drag and Marangoni forces at low Reynolds numbers} The half-spherical swimmer moving at velocity $U$ is subject to three forces. First, there is the drag force, which is, at low Reynolds numbers, given by the Stokes drag for a half-sphere, \begin{equation} \vec{F}_{\rm D} = F_{\rm D}\vec{e}_z = -3\pi \mu a U \vec{e}_z. 
\label{FD} \end{equation} Second, there is the direct Marangoni propulsion force $\vec{F}_{\rm M} = F_{\rm M} \vec{e}_z$ from integrating the surface stress $\Delta\gamma(\vec{r})= -\kappa c(\vec{r})$ along the air-water-swimmer contact line, \begin{align} F_{\rm M} &\equiv -\kappa \oint_L ds (\vec{e}_n\cdot \vec{e}_z) c(\vec{r}) \nonumber\\ &= - 2 \kappa a \int_0^\pi d\theta \cos\theta c(a,\theta)|_{y=0}. \label{FM} \end{align} Third, there is the Marangoni flow force $\vec{F}_{\rm M, fl} = F_{\rm M, fl} \vec{e}_z$, which is by definition the force transmitted by fluid stresses of the Marangoni flow onto the sphere, \begin{equation} F_{\rm M, fl} \equiv - \int_{S} da_i \sigma_{\rm M, iz}. \label{eq:FMfl} \end{equation} For low Reynolds numbers, we can apply the reciprocal theorem to the flow fields $\vec{v}$ and $\vec{v}_{\rm M}$ and their associated stress tensors to calculate the Marangoni flow force without explicitly calculating the Marangoni flow $\vec{v}_{\rm M}$, as has been shown in detail in Ref.\ \cite{Masoud2014}. This gives the identity $0= \int_{S+S_{\rm Int}} da_i v_{j} \sigma_{\rm M, ij}$, which leads to a Marangoni flow force \begin{align} F_{\rm M,fl} &= -\kappa \int_{S_{\rm Int}} dS \frac{\vec{v}(\vec{r})+U\vec{e}_z}{U} \cdot \vec{\nabla}_S c(\vec{r}). \label{eq:FMfl2} \end{align} The total Marangoni force \begin{equation} F_{\rm M,tot} \equiv F_{\rm M} + F_{\rm M,fl}, \label{eq:FMtot} \end{equation} is obtained by using (\ref{FM}) and the Gauss theorem, \begin{align} F_{\rm M,tot} &= \kappa \int_{S_{\rm Int}} dS \left(\vec{\nabla}_S\cdot \frac{\vec{v}(\vec{r})}{U}\right) c(\vec{r})\nonumber\\ &=-\frac{3\kappa a}{2} \int_1^\infty d\rho \int_0^\pi d\theta \cos\theta \left(\frac{1}{\rho} - \frac{1}{\rho^{3}} \right) {c}(\rho a,\theta)|_{y=0}. \label{FMtotal} \end{align} Because $\rho^{-1} - \rho^{-3} >0$ for $\rho>1$, the total Marangoni force is always positive for concentration profiles, which are increasing toward the rear side. 
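As a quick numerical sanity check (a sketch, not part of the derivation), the Stokes profiles $u$ and $v$ of Eqs.\ (\ref{u}) and (\ref{v}) can be verified to satisfy no-slip on the sphere and the uniform far field, and the kernel $\rho^{-1}-\rho^{-3}$ of Eq.\ (\ref{FMtotal}) is positive for $\rho>1$:

```python
import numpy as np

def u(x):
    # radial profile of the Stokes field, Eq. (u), as a function of x = r/a
    return -0.5 / x**3 + 1.5 / x - 1.0

def v(x):
    # tangential profile of the Stokes field, Eq. (v)
    return -0.25 / x**3 - 0.75 / x + 1.0

# no-slip on the sphere surface r = a (x = 1): both profiles vanish
assert abs(u(1.0)) < 1e-12 and abs(v(1.0)) < 1e-12

# far field x -> infinity: u -> -1, v -> +1, i.e. the flow tends to -U e_z
x_far = 1.0e6
assert abs(u(x_far) + 1.0) < 1e-5 and abs(v(x_far) - 1.0) < 1e-5

# kernel of the total Marangoni force, Eq. (FMtotal): 1/rho - 1/rho^3 > 0 for rho > 1
rho = np.linspace(1.001, 100.0, 10_000)
assert np.all(1.0 / rho - 1.0 / rho**3 > 0.0)
print("Stokes profiles and Marangoni force kernel verified")
```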
This implies that the total Marangoni force is always propulsive, i.e., points in the same direction as the imposed velocity $U$ regardless of its absolute value. This is a necessary condition for self-propulsion. When the particle is pulled by an external force, this also implies that the total Marangoni force will always support the pulling force instead of increasing the drag. The Marangoni flow contribution $F_{\rm M,fl}$, however, can have both signs. For $F_{\rm M,fl}>0$, the flow force increases the direct Marangoni force resulting in $F_{\rm M,tot}> F_{\rm M}$; for $F_{\rm M,fl}<0$, the flow force is directed backward and increases the drag force resulting in $F_{\rm M,tot}< F_{\rm M}$. As opposed to Ref.\ \cite{Lauga2012}, we will find that both cases are possible. In the stationary swimming state, drag and total Marangoni force have to be balanced, \begin{equation} 0= F_{\rm D} + F_{\rm M, tot} = F_{\rm D} + F_{\rm M} + F_{\rm M,fl}, \label{eq:swimcond0} \end{equation} such that the swimmer is force-free. This is the swimming condition that finally determines the actual swimmer velocity $U = U_{\rm swim}$. \subsection{Non-dimensionalization} To proceed, we make the system of coupled equations governing our sub-problems (i)-(iii) and the Marangoni forces dimensionless by measuring lengths in units of $a$, velocities in units of $D/a$, concentrations in units of $\alpha a/D$, and forces in units of $D\mu$. We introduce \begin{align} \vec{\rho} &\equiv \vec{r}/a, ~~\bar{\vec{\nabla}} \equiv a \vec{\nabla} = \vec{\nabla}_\rho, ~~\bar{\vec{v}}\equiv \vec{v} \frac{a}{D},~~ \bar{U} \equiv U \frac{a}{D}, \nonumber \\ \bar{c} &\equiv c \frac{D}{\alpha a},~~ \bar{j} \equiv j\frac{1}{\alpha}, \nonumber \\ \bar{F} &\equiv F\frac{1}{D\mu},~~ \bar{p} \equiv p \frac{a^2}{D\mu}. 
\label{eq:nondim} \end{align} The prescribed dimensionless velocity $\bar{U}$ of the swimmer is the first control parameter of the problem,\footnote{In many publications on the diffusion-advection problem, such as Refs.\ \cite{Acrivos1960,Acrivos1962,Acrivos1965} but also in Refs.\ \cite{Lauga2012,Yariv2015,Vandadi2017,Boniface2019,JafariKang2020}, $\bar{U}$ is called the Peclet number. Here, we define the Peclet number as ${\rm Pe} \equiv \bar{U}_{\alpha}$, i.e., by the characteristic velocity $\bar{U}_{\alpha} = {\kappa \alpha a}/{D \mu}$, where a typical direct Marangoni force $F_{\rm M} \sim \kappa a^2 \partial_rc(r=a)\sim \kappa a^2\alpha/D$ is balanced by the typical Stokes drag force $F_D \sim \mu a U$. The Peclet number is a dimensionless measure of propulsion strength with this definition. } which is related to the Reynolds number, ${\rm Re} = 2\bar{U}/{\rm Sc}$, via the Schmidt number ${\rm Sc} \equiv\mu /\rho D$. Our dimensionless set of equations for problems (i)-(iii) becomes \begin{subequations} \label{eq:i-iii} \begin{align} {\rm (i)}&&& -\left. \bar{\vec{\nabla}} \bar{c}(\vec{\rho})\cdot \vec{n}^{\rm out} \right|_{\bar{y}=0}\approx 0\nonumber\\ &&&\mbox{without evaporation,} \label{eq:bcnoevap} \\ &&& -\left. \bar{\vec{\nabla}} \bar{c}(\vec{\rho})\cdot \vec{n}^{\rm out} \right|_{\bar{y}=0} \approx \bar{k}\left. 
\bar{c}(\vec{\rho}) \right|_{\bar{y}=0} \nonumber\\ &&&\mbox{with evaporation,} \label{eq:bcevap} \\ {\rm (ii)}&& \bar{\vec{v}}_{\rm tot}(\vec{\rho}) &= \bar{\vec{v}}(\vec{\rho}) + \bar{\vec{v}}_{\rm M}(\vec{\rho}), \nonumber \\ {\rm (iia)}&& \bar{\vec{v}}(\rho,\theta) &= \bar{U}\cos\theta u(\rho) \vec{e}_r + \bar{U} \sin\theta v(\rho) \vec{e}_\theta \nonumber\\ &&&\mbox{Stokes~flow~field,} \label{eq:Stokes} \\ {\rm (iib)}&& \bar{\vec{\nabla}}\cdot \bar{\vec{v}}_{\rm M} &= 0\nonumber\\ &&&\mbox{Marangoni~flow~field,} \nonumber \\ &&\bar{\vec{\nabla}}^2 \bar{\vec{v}}_{\rm M} &= \bar{\vec{\nabla}} \bar{p}_{\rm M}, \nonumber \\ && \bar{\vec{v}}_{\rm M}(\infty) &=0, \nonumber \\ && \left. \bar{\vec{v}}_{\rm M}(\vec{\rho}) \right|_{\rho=1}&=0, \nonumber \\ &&\left. \bar{v}_{\rm M, y}(\vec{\rho}) \right|_{\bar{y}=0} &=0, \nonumber \\ && \left. \partial_{\bar{y}} \bar{\vec{v}}_{\rm M}(\vec{\rho}) \right|_{\bar{y}=0} &= -{\rm Pe} \left. \bar{\vec{\nabla}}_{S} \bar{c}(\vec{\rho}) \right|_{\bar{y}=0}, \label{eq:Mflow}\\ {\rm (iii)}&& 0 &= \bar{\vec{\nabla}}^2 \bar{c} - (\bar{\vec{v}}(\vec{\rho})+ \bar{\vec{v}}_{\rm M}(\vec{\rho})) \cdot \bar{\vec{\nabla}} \bar{c}, \nonumber \\ && \bar{c}(\infty) &= 0, \nonumber \\ && \left. \bar{\vec{j}}\cdot\vec{n}\right|_S &=- \left. \bar{\vec{\nabla}} \bar{c}\cdot\vec{n} \right|_S =1, \label{eq:boundconstflux} \end{align} \end{subequations} with the dimensionless Peclet number \begin{align} {\rm Pe} &\equiv \frac{\kappa \alpha a^2}{D^2 \mu}= \frac{\kappa \dot{m}}{2\pi D^2 \mu}, \label{eq:Pe} \end{align} where $\dot{m}= 2\pi a^2\alpha$ is the mass loss per time of the swimmer (see Fig.\ \ref{fig:mdot}). The Peclet number is a dimensionless measure of propulsion strength. Typical values for the PEG-alginate swimmer are very high, ${\rm Pe} \sim 10^7$ (see Table \ref{tab:nondimen}). 
We also introduced the dimensionless Biot number \begin{equation} \bar{k} \equiv \frac{ak}{D} \label{eq:Biot} \end{equation} governing possible evaporation, which is practically absent for PEG. From Eq.\ (\ref{eq:Mflow}), we see that the Peclet number ${\rm Pe}$ determines the velocity scale of the Marangoni flow field. Therefore, we can also assign a Reynolds number ${\rm Re}_{\rm M} = {2{\rm Pe}}/{{\rm Sc}}= {\rm Re} {\rm Pe}/\bar{U}$ to the Marangoni flow. Typical values for the PEG-alginate swimmer are ${\rm Re}_{\rm M} \sim 10^4$ (see Table \ref{tab:nondimen}), which agrees with the experimentally observed turbulent features of Marangoni flows (see Fig.\ \ref{fig:PIV}). Via the advection with $\bar{\vec{v}}(\vec{\rho})+ \bar{\vec{v}}_{\rm M}(\vec{\rho})$, the concentration field $c(\vec{\rho})$ depends both on the dimensionless velocity scale $\bar{U}$ of the Stokes field and the dimensionless velocity scale ${\rm Pe}$ of the Marangoni flow field, in general. All dimensionless parameters are summarized in Table \ref{tab:nondimen}, along with typical values for the PEG-alginate swimmers and in comparison with camphor boats according to Ref.\ \cite{Boniface2019}. \begin{table*} \begin{center} \caption{\label{tab:nondimen} Dimensionless parameters. ${\rm Re}$ or $\bar{U}$, ${\rm Sc}$, ${\rm Pe}$ and $\bar{k}$ are control parameters of the problem. 
${\rm Re}_{\rm M}$ and ${\rm Nu}$ cannot be independently controlled but characterize the resulting solutions; the swimming velocity $\bar{U}_{\rm swim}$ is determined by the force balance swimming condition.} \begin{tabular}{ l l @{\qquad} | l | l | l} \hline\noalign{\smallskip} Parameter & Formula & Eq. & PEG-alginate swimmer & Camphor boat\cite{Boniface2019} \\ \noalign{\smallskip}\hline\noalign{\smallskip} Reynolds number ${\rm Re}$ & $ ={2\rho U a}/{\mu} = {2\bar{U}}/{{\rm Sc}}$ & & $30-80$ & $60-3000$ \\ dimensionless velocity $\bar{U}$ & $ = U{a}/{D} $ & (\ref{eq:nondim}) & $4\times 10^4-1.2\times 10^5$ & $4\times 10^4-1.2\times 10^6$ \\ Schmidt number ${\rm Sc}$ & $ = {\mu}/{\rho D}$ & & $2860$ & $1390$ \\ Peclet number ${\rm Pe}$ & $ = {\kappa \alpha a^2}/{D^2 \mu}$ & (\ref{eq:Pe}) & $3.5\times 10^{6} - 8.8\times 10^{7}$ & $(9.3\times 10^{9})(a/4{\rm mm})^2$\\ Biot number $\bar{k}$ & $= {ak}/{D}$ & (\ref{eq:Biot}) & $\approx 0$ & $\approx 550$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} swimming velocity $\bar{U}_{\rm swim}$ & $ = U_{\rm swim}{a}/{D} $ & (\ref{eq:swimcond}) & $4\times 10^4-1.2\times 10^5$ & $4\times 10^4-1.2\times 10^6$ \\ Marangoni Reynolds number ${\rm Re}_{\rm M}$ & $= {2{\rm Pe}}/{{\rm Sc}}= {\rm Re} {\rm Pe}/\bar{U}$ & & $2.4\times 10^3-6.2\times 10^4$ & $(1.4\times 10^7) (a/4{\rm mm})^2$ \\ Nusselt (Sherwood) number ${\rm Nu}$ (${\rm Sh}$) & $ ={-\partial_\rho \bar{c}_0(1)}/{\bar{c}_0(1)}$ & (\ref{eq:Nu}) & \multicolumn{2}{l}{(\ref{eq:NuU})} \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table*} In the following, we will solve the problems (i)-(iii) in order to obtain the dimensionless direct and total Marangoni forces [see Eqs.\ (\ref{FM}) and (\ref{FMtotal})] from the concentration profiles by \begin{align} \bar{F}_{\rm M} &= - 2 {\rm Pe} \int_0^\pi d\theta \cos\theta \bar{c}(1,\theta)|_{\bar{y}=0}, \label{FMdim}\\ \bar{F}_{\rm M,tot} &=-\frac{3{\rm Pe}}{2} \int_1^\infty d\rho \int_0^\pi d\theta \cos\theta
\left(\frac{1}{\rho} - \frac{1}{\rho^{3}} \right) \bar{c}(\rho,\theta)|_{\bar{y}=0} \label{FMtotaldim} \end{align} for a prescribed swimmer velocity $\bar{U}$. Finally, force balance gives the dimensionless version of the swimming condition (\ref{eq:swimcond0}), \begin{equation} -\bar{F}_{\rm D} = 3\pi \bar{U}_{\rm swim} = \bar{F}_{\rm M}({\rm Pe},\bar{U}_{\rm swim}) + \bar{F}_{\rm M,fl} ({\rm Pe},\bar{U}_{\rm swim}), \label{eq:swimcond} \end{equation} which then selects the actual swimmer velocity $\bar{U}=\bar{U}_{\rm swim}$ as a function of the remaining control parameters ${\rm Pe}$ (``fuel'' emission) and possibly $\bar{k}$ (evaporation). The non-dimensionalization reveals that the coupled problems (i)-(iii) and the Marangoni forces depend on three dimensionless control parameters (see also Table \ref{tab:nondimen}): first, the prescribed dimensionless velocity of the swimmer $\bar{U}$; second, the Peclet number ${\rm Pe}$ characterizing the strength $\alpha$ of the surfactant emission; and third, the Biot number $\bar{k}$ characterizing the evaporation. We also see that the Peclet number controls both the strength of the Marangoni flow via Eq.\ (\ref{eq:Mflow}) and the strength of all Marangoni forces. We note, however, that $\bar{F}_{\rm M}/{\rm Pe}$ and $\bar{F}_{\rm M,tot}/{\rm Pe}$ still depend on $\bar{U}$ and ${\rm Pe}$ via the dependence of $\bar{c}(\vec{\rho})$ on these parameters. Another important finding from non-dimensionalization is that the diffusion-advection problem decouples from the Marangoni flow problem for ${\rm Pe} \ll \bar{U}$, where we can neglect $\vec{v}_{\rm M}$ in the advection term. Then, the concentration profile is determined by the Stokes flow alone, becomes axisymmetric, and only depends on $\bar{U}$. In this limit, the Marangoni flow field need not be calculated in order to obtain the total Marangoni force via Eq.\ (\ref{FMtotaldim}).
\subsection{Numerical methods} Numerically, we only address the low Reynolds number regime. In general, we consider the problems (i)-(iii), i.e., solve the coupled diffusion-advection problem and the Marangoni flow problem for a prescribed swimmer velocity $\bar{U}$. From the solution for the concentration field, we then calculate the Marangoni forces as a function of $\bar{U}$ and ${\rm Pe}$ in order to finally solve the force balance swimming condition. We use an iterative finite element scheme to solve the full coupled problem; this approach is explained in detail in Appendix \ref{app:numerical}. \subsection{Low Reynolds number results} Low Reynolds numbers ${\rm Re}\ll 1$ are realized for $\bar{U} \ll {\rm Sc}/2$, which can still be much larger than unity as typical Schmidt numbers for surfactants in aqueous solutions are of the order of $1000$ (see Table \ref{tab:nondimen}). Therefore, we have to discuss \emph{both} the diffusive limit $\bar{U}\ll 1$ \emph{and} the advective limit $\bar{U}\gg 1$. \subsubsection{Decoupled limit ${\rm Pe} \ll \bar{U}$} First, we will consider the limit ${\rm Pe} \ll \bar{U}$, where the diffusion-advection problem for a half-sphere with prescribed velocity $U$ decouples from the Marangoni flow problem. We also focus first on the case without evaporation, as it is appropriate for the PEG-alginate swimmer. Diffusive release from a moving emitter or from a resting emitter in a fluid flow can be characterized by the average Nusselt number (or Sherwood number ${\rm Sh}$), \begin{equation} {\rm Nu} \equiv \frac{\int_S \vec{j}(\vec{r})\cdot\vec{n}\,dA} { (D/a) \int_S c(\vec{r})\,dA}, \label{eq:Nu} \end{equation} which is the dimensionless ratio of the total emitted flux and the typical diffusive flux \cite{Leal}. A Nusselt number of one is realized for purely diffusive transport; Nusselt numbers much larger than one indicate strong advective transport.
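For purely diffusive transport, the dimensionless concentration field around the emitter is $\bar{c} = 1/\rho$ (the $\bar{U}=0$ solution with unit surface flux), and the definition (\ref{eq:Nu}) indeed gives ${\rm Nu}=1$; a minimal numerical sketch in dimensionless variables (the common geometric factors of the half-sphere cancel in the ratio):

```python
import numpy as np

# purely diffusive dimensionless solution around the half-sphere with unit surface flux
c = lambda rho: 1.0 / rho            # c(1) = 1 at the surface, c(inf) = 0
flux = lambda rho: 1.0 / rho**2      # -dc/drho, radial flux density; equals 1 at rho = 1

# surface integrals over S (rho = 1); the spherical surface measure is sin(theta)
theta = np.linspace(0.0, np.pi, 200_001)
dtheta = theta[1] - theta[0]
total_flux = np.sum(flux(1.0) * np.sin(theta)) * dtheta   # numerator of Eq. (eq:Nu)
mean_conc = np.sum(c(1.0) * np.sin(theta)) * dtheta       # denominator, dimensionless

Nu = total_flux / mean_conc
assert abs(Nu - 1.0) < 1e-12         # purely diffusive transport gives Nu = 1
print("diffusive Nusselt number:", Nu)
```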
In the decoupled limit ${\rm Pe} \ll \bar{U}$, we find for the Nusselt number \begin{align} {\rm Nu} = \frac{1}{\bar{c}_0(\rho=1)}&= \begin{cases} 1+\frac{1}{2} \bar{U} & \mbox{for}~\bar{U}\ll 1\\ 0.65\, \bar{U}^{1/3} & \mbox{for}~\bar{U}\gg 1 \end{cases}, \label{eq:NuU} \end{align} where $\bar{c}_0(\rho) \equiv \frac{1}{2} \int_0^\pi d\theta \sin\theta \bar{c}(\rho,\theta)$ is the zeroth Legendre coefficient. There are two regimes, a diffusive regime for $\bar{U}\ll 1$ characterized by a Nusselt number close to one and an advective regime for $\bar{U} \gg 1$ where the Nusselt number becomes large, which is also clearly supported by the numerical results in Fig.\ \ref{fig:Nusselt}. The result (\ref{eq:NuU}) can be derived analytically \cite{Ender2020}, apart from the value of the prefactor $0.65$, which we determined numerically from the data in Fig.\ \ref{fig:Nusselt}. A short derivation based on scaling arguments for the advective regime is presented below. The numerical results in Fig.\ \ref{fig:Nusselt} show perfect agreement and clearly confirm the existence of just two regimes. The Nusselt number was originally defined for constant concentration boundary conditions $\bar{c}(1,\theta)=1$. For these boundary conditions, the result is well-known \cite{Acrivos1960,Acrivos1962,Acrivos1965,Leal} and very similar (see Fig.\ \ref{fig:Nusselt}), \begin{align} {\rm Nu} = -\partial_\rho\bar{c}_0(\rho=1)&= \begin{cases} 1+\frac{1}{2} \bar{U} + \ldots & \mbox{for}~\bar{U}\ll 1\\ 0.6245\, \bar{U}^{1/3} & \mbox{for}~\bar{U}\gg 1 \end{cases} \label{eq:NuUB} \end{align} with a prefactor that can be calculated analytically. This indicates that literature results for the Nusselt number for constant concentration boundary conditions also apply to our situation of constant flux boundary conditions, which is an important insight that we will assume to also hold for higher Reynolds numbers below.
\begin{figure} \begin{center} \includegraphics[width=0.99\linewidth]{NusseltU4.pdf} \caption{\label{fig:Nusselt} Average Nusselt number as a function of $\bar{U}$ for constant flux and constant concentration boundary conditions in the decoupled limit ${\rm Pe} \ll \bar{U}$. All results are from numerical FEM solutions of the axisymmetric diffusion-advection equation in two-dimensional angular representation with $\rho<\bar{R}=30$. } \end{center} \end{figure} In the decoupled limit ${\rm Pe} \ll \bar{U}$, we also find a diffusive and an advective regime for the Marangoni forces \begin{align} \frac{\bar{F}_{\rm M} }{\pi {\rm Pe}} &= \begin{cases} \frac{3}{16} \bar{U} & \mbox{for}~\bar{U}\ll 1\\ d_{\rm M} \bar{U}^{-1/3}~\mbox{with}~d_{\rm M}\simeq 0.8 & \mbox{for}~\bar{U}\gg 1 \end{cases}, \label{eq:c1Unew} \\ \frac{\bar{F}_{\rm M, tot}}{\pi \rm Pe} &= \begin{cases} - \frac{1081}{1280} \bar{U} +\frac{3}{8} \bar{U}\ln \bar{R} & \mbox{for}~\bar{U}\ll 1\\ d_{\rm M, tot} \bar{U}^{-2/3}~\mbox{with}~d_{\rm M, tot}\simeq 1.4 & \mbox{for}~\bar{U}\gg 1 \end{cases}, \label{eq:c1Mflnew} \end{align} where numerical constants $d_{\rm M}$ and $d_{\rm M, tot}$ are obtained from the numerical results, see Fig.\ \ref{fig:FM}, and $\bar{R}$ is the radial system size. Again, the numerical results (Fig.\ \ref{fig:FM}) show perfect agreement and clearly confirm the existence of just two regimes, a diffusive and an advective regime. Direct and total Marangoni force reach maximal values $\bar{F}_{\rm M},\bar{F}_{\rm M, tot}\sim 0.15\pi{\rm Pe}$ in the crossover region $\bar{U} \sim 1$ between diffusive and advective transport. 
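In the advective regime, the scaling (\ref{eq:c1Mflnew}) can be inserted into the swimming condition (\ref{eq:swimcond}), $3\pi\bar{U}_{\rm swim} = \bar{F}_{\rm M,tot}$, which yields the closed form $\bar{U}_{\rm swim} = ({\rm Pe}\, d_{\rm M,tot}/3)^{3/5}$. A hedged numerical sketch (the Peclet value is illustrative, $d_{\rm M,tot}\simeq 1.4$ is taken from the fit above, and the low-Reynolds advective scaling is assumed to hold):

```python
d_tot = 1.4      # numerical constant d_M,tot from Eq. (eq:c1Mflnew), advective regime
Pe = 1.0e6       # illustrative Peclet number

def balance(U):
    # force balance 3*pi*U = pi*Pe*d_tot*U**(-2/3), common factor pi removed
    return 3.0 * U - Pe * d_tot * U ** (-2.0 / 3.0)

# bisection: balance is monotonically increasing with a single root in [1, 1e9]
lo, hi = 1.0, 1.0e9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if balance(mid) > 0.0:
        hi = mid
    else:
        lo = mid
U_swim = 0.5 * (lo + hi)

U_analytic = (Pe * d_tot / 3.0) ** 0.6   # closed form U_swim = (Pe d_tot / 3)^(3/5)
assert abs(U_swim - U_analytic) / U_analytic < 1e-9
assert U_swim > 1.0                      # consistent with the assumed regime U >> 1
print(f"U_swim ~ {U_swim:.3g}")
```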
\begin{figure} \begin{center} \includegraphics[width=0.99\linewidth]{FUA3.pdf} \caption{\label{fig:FM} Marangoni forces ${\bar{F}_{\rm M}}/{\pi \rm Pe}$ and ${\bar{F}_{\rm M, tot}}/{\pi \rm Pe}$ as a function of imposed velocity $\bar{U}$ together with corresponding concentration profiles (in the $\bar{z}\bar{x}$-plane and in three dimensions) in the decoupled limit ${\rm Pe} \ll \bar{U}$. All results are from numerical FEM solutions of the axisymmetric diffusion-advection equation in two-dimensional angular representation with $\rho<\bar{R}=30$. In the advective regime $\bar{U}\gg 1$ a concentration boundary layer develops [see Eq.\ (\ref{eq:Deltarho})]. } \end{center} \end{figure} Also, the results (\ref{eq:c1Unew}) and (\ref{eq:c1Mflnew}) can be derived analytically from a calculation of the concentration field \cite{Ender2020}, apart from the value of the numerical constants. Here we present a short derivation based on scaling arguments. In the diffusive limit $\bar{U}\ll 1$, there is a linear response of the concentration field, which leads to a linear response of the Nusselt number and Marangoni forces. The coefficients can be calculated by perturbation theory about the concentration field $\bar{c}^{(0)}(\vec{\rho}) = 1/\rho$ at $\bar{U}=0$ in powers of $\bar{U}$. A remarkable result of this calculation is that the linear term for the total Marangoni force diverges logarithmically with the system size $\bar{R}$, see Eq.\ (\ref{eq:c1Mflnew}), while the linear term for the direct Marangoni force stays finite. This means that the Marangoni flow forces strongly \emph{increase} the direct force for $\bar{U}\ll 1$; such a behavior was not found in Ref.\ \cite{Lauga2012}. For very large system sizes $\bar{R}\gg 1/\bar{U}$, the large scale cutoff $\bar{R}$ in (\ref{eq:c1Mflnew}) will be replaced by $1/\bar{U}$ because the convection term can no longer be treated perturbatively in the region $\rho\gg 1/\bar{U}$, regardless of how small $\bar{U}$ is \cite{Acrivos1962}.
We also note that the result ${\rm Nu} \approx 1+\frac{1}{2} \bar{U}$ from Eqs.\ (\ref{eq:NuU}) and (\ref{eq:NuUB}) for the Nusselt number in the diffusive regime $\bar{U}\ll 1$ is derived from the non-perturbative matching procedure for very large system sizes $\bar{R}\gg 1/\bar{U}$ \cite{Acrivos1962}, while a perturbative calculation gives ${\rm Nu} \approx 1+1/\bar{R}+ O(\bar{U}^2)$ with the radial system size $\bar{R}$. This perturbative result actually describes our numerical data for a finite system better (see Fig.\ \ref{fig:Nusselt}). In the limit of strong advection $\bar{U}\gg 1$, a concentration boundary layer develops around the half-sphere, as can be clearly seen in the concentration profiles in Fig.\ \ref{fig:FM}. Its width $\Delta r$ is determined by the distance that a surfactant molecule can diffuse during the time $\Delta t\sim a/v(\Delta r/a)$ it takes to be transported along the sphere by advection: $\Delta r^2 \sim D \Delta t$. Because $v(\Delta r/a) \sim U \Delta r/a$ near the surface [see Eq.\ (\ref{v})], we find \begin{equation} \Delta \rho = \Delta r/a \sim \bar{U}^{-1/3}. \label{eq:Deltarho} \end{equation} This is a classic result for the diffusion-advection problem for constant concentration boundary conditions \cite{Acrivos1960,Leal}, but it also holds for constant flux boundary conditions. Because the concentration will drop in the radial direction from its value at the surface $S$ of the half-sphere to zero within the concentration boundary layer of width $\Delta \rho$, we also have $1= -\partial_\rho \bar{c}(\rho=1,\theta)\sim \bar{c}(\rho=1)/\Delta \rho $, which leads to a scaling \begin{equation} \bar{c}(\rho=1,\theta) \sim \Delta \rho \sim \bar{U}^{-1/3} \label{eq:cDeltarho} \end{equation} of the symmetry-breaking concentration level at the surface $S$ of the half-sphere for constant flux boundary conditions.
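The boundary-layer self-consistency argument can be made explicit with a short symbolic sketch (dimensionless variables, unit prefactors, our own symbol names): expanding the tangential profile $v(\rho)$ of Eq.\ (\ref{v}) about the surface confirms the linear growth $v \approx \frac{3}{2}(\rho-1)$, and solving $\Delta\rho^2 \sim \Delta t \sim 1/[\bar{U}\,\frac{3}{2}\Delta\rho]$ reproduces Eq.\ (\ref{eq:Deltarho}):

```python
import sympy as sp

rho, U, drho = sp.symbols("rho U Delta_rho", positive=True)

# tangential Stokes profile v(rho) from Eq. (v); it vanishes on the surface rho = 1
v = -sp.Rational(1, 4) / rho**3 - sp.Rational(3, 4) / rho + 1

# near the surface the profile grows linearly: v ~ (3/2)(rho - 1)
lin = sp.series(v, rho, 1, 2).removeO()
assert sp.simplify(lin - sp.Rational(3, 2) * (rho - 1)) == 0

# self-consistency of the boundary layer (unit prefactors dropped):
# diffusion distance Delta_rho^2 ~ advection time 1 / (U * (3/2) * Delta_rho)
sols = sp.solve(sp.Eq(drho**2, 1 / (U * sp.Rational(3, 2) * drho)), drho)

# the solution obeys Delta_rho^3 = 2/(3 U), i.e. Delta_rho ~ U^(-1/3)
assert any(sp.simplify(s**3 - sp.Rational(2, 3) / U) == 0 for s in sols)
print("boundary layer width scales as U**(-1/3)")
```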
For strong advection, the Marangoni forces decrease as a function of $\bar{U}$ because the concentration boundary layer width $\Delta \rho$, in which forces are generated, and the concentration level around the sphere, to which forces are proportional, both decay with velocity as $\bar{U}^{-1/3}$. The scaling property (\ref{eq:cDeltarho}) for $\bar{c}(\rho=1,\theta)$ directly explains the results (\ref{eq:NuU}), ${\rm Nu} \sim 1/\bar{c}(\rho=1,\theta) \sim \bar{U}^{1/3}$, for the Nusselt number and (\ref{eq:c1Unew}), $ \bar{F}_{\rm M}/{\rm Pe}\sim \bar{c}(\rho=1,\theta) \sim \bar{U}^{-1/3}$, for the direct Marangoni force in the advective limit $\bar{U}\gg 1$. They are clearly confirmed by all numerical results in Figs.\ \ref{fig:Nusselt} and \ref{fig:FM}. The result for the total Marangoni force (\ref{eq:c1Mflnew}) seems to deviate from this advective scaling. Here, the expected scaling from the radial concentration boundary layer of width $\Delta \rho$ is $ \bar{F}_{\rm M,tot}/{\rm Pe}\sim \Delta \rho^2 \bar{c}(\rho=1,\theta) \sim \bar{U}^{-1}$ (see Eq.\ (\ref{FMtotaldim})), which is clearly not found in the numerics (yellow line in Fig.\ \ref{fig:FM}). The reason is that this contribution is actually only sub-dominant. The leading contribution comes from the advective tail of angular width $\Delta \theta \sim \bar{U}^{-1/3}$; the width of the tail follows from the scaling of the stream function $\psi \propto r^2\Delta \theta^2$ in the tail and $\psi \propto 3\Delta r^2\sin^2\theta/2$ in the boundary layer and the fact that a fluid particle should follow a Stokes flow stream line $\psi= {\rm const}$ in the advective limit. Therefore, the dominant contributions in Eq.\ (\ref{FMtotaldim}) are $ \bar{F}_{\rm M,tot}/{\rm Pe}\sim \Delta \theta \bar{c}(\rho=1,\theta)\sim \bar{U}^{-2/3}$, in agreement with the numerical results in Fig.\ \ref{fig:FM}.
Comparing the curves for direct and total Marangoni force in Fig.\ \ref{fig:FM}, we observe a crossing such that ${\bar{F}_{\rm M}} < {\bar{F}_{\rm M, tot}}$ in the diffusive regime $\bar{U}\ll 1$, while ${\bar{F}_{\rm M}} > {\bar{F}_{\rm M, tot}}$ in the advective regime $\bar{U} \gg 1$. This means that the Marangoni flow force $\bar{F}_{\rm M,fl} = \bar{F}_{\rm M,tot} - \bar{F}_{\rm M}$ \emph{increases} the propulsion force in the diffusive regime $\bar{U} \ll 1$ but \emph{decreases} the propulsion force (or increases the drag) for $\bar{U} \gg 1$. This subtle result is related to the structure of the Marangoni flows, which are generated by the surfactant concentration gradients $ \left.\bar{\vec{\nabla}}_{S} \bar{c}(\vec{\rho}) \right|_{\bar{y}=0}$ within the liquid-air interface (see Eq.\ (\ref{eq:Mflow})) and can be qualitatively rationalized with the help of Eq.\ (\ref{eq:FMfl2}) for the Marangoni flow force. \begin{figure} \centerline{\includegraphics[width=0.99\linewidth]{scheme_flow.pdf}} \caption{Schematic of concentration field (as concentration contour lines) and Marangoni flow field (arrows) and resulting direct force $F_{\rm M}$ and Marangoni flow force $F_{\rm M,fl}$ in the diffusive ($\bar{U}\ll 1$) and advective ($\bar{U}\gg 1$) regime. The dominant Marangoni flow contributions are marked in blue. In the diffusive regime, Marangoni flows increase the direct force; in the advective regime they decrease the direct force. It is important to note that concentration contour lines do not approach a spherical shape close to the swimmer in the advective regime because we have constant flux boundary conditions such that the swimmer surface is a contour of constant radial gradient by construction. The elongated tail gives rise to a slower decay of the gradient itself and, thus, larger radial gradients in the tail, which is reflected by additional contour lines emerging on the rear side.
} \label{fig:scheme_flow} \end{figure} We can decompose the surfactant concentration gradient into tangential and radial components \begin{equation} \left.\bar{\vec{\nabla}}_{S} \bar{c}(\vec{\rho}) \right|_{\bar{y}=0} = \frac{1}{\rho}\partial_\theta \bar{c}\, \vec{e}_\theta + \partial_\rho\bar{c} \,\vec{e}_r. \end{equation} Because of advection, the tangential component points from the front to the rear, corresponding to an increasing surfactant concentration toward the rear ($\partial_\theta \bar{c}>0$). It gives rise to a forward Marangoni flow and $F_{\rm M,fl}>0$ in Eq.\ (\ref{eq:FMfl2}) because $ -\vec{e}_z \cdot \rho^{-1}\partial_\theta \bar{c} \vec{e}_\theta \propto \sin\theta >0$. This is the dominating effect in the diffusive regime $\bar{U} \ll 1$, where the perturbation theory gives, to leading linear order in $\bar{U}$, a contribution of the form $\bar{U}\bar{c}^{(1)} \propto -\bar{U}\bar{c}_1(\rho) \cos\theta$ resulting in $\rho^{-1}\partial_\theta \bar{c}= \bar{U}\rho^{-1}\bar{c}_1(\rho) \sin\theta>0$. The front-directed tangential Marangoni flow components give rise to the blue flow directions in the schematic in Fig.\ \ref{fig:scheme_flow} (top). The radial component points inward ($ \partial_\rho\bar{c} <0$) because of the radially decaying surfactant concentration. This gives rise to radially outward Marangoni flows. Because $ -\vec{e}_z \cdot \partial_\rho\bar{c} \vec{e}_r\propto \cos\theta$ in Eq.\ (\ref{eq:FMfl2}), this increases the direct force in the front (around $\theta=0$) but decreases it in the back (around $\theta=\pi$). Advection leads to higher surfactant concentrations in the rear, which also result in larger radial concentration gradients on the rear side and lead to an overall negative Marangoni flow force, $\bar{F}_{\rm M,fl}<0$, and, thus, an increased drag.
This is the dominating effect in the advective regime $\bar{U}\gg 1$, where a concentration boundary layer of width $\Delta \rho \sim \bar{U}^{-1/3}$ forms around the half-sphere, which results in steep radial concentration gradients that are stronger on the rear side. The stronger radial Marangoni flow components in the rear are indicated by the blue arrows in Fig.\ \ref{fig:scheme_flow} (bottom); this phenomenon is also visible in the experimental PIV measurements in Fig.\ \ref{fig:PIV}(B) during motion. There are also slightly larger radial concentration gradients in the diffusive regime $\bar{U}\ll 1$, but they are sub-dominant owing to the slow radial decay of the function $\bar{c}_1(\rho)$ in the absence of a radial concentration boundary layer. \subsubsection{Strong Marangoni flow ${\rm Pe} \gg \bar{U}$} \label{sec:strongM} For weak Marangoni flow, ${\rm Pe} \ll \bar{U}$, we could decouple the diffusion-advection problem and obtained two regimes, a diffusive or linear response regime for $\bar{U}\ll 1$ and an advective regime for $\bar{U}\gg 1$. Now we increase the Peclet number ${\rm Pe}$ and, thus, the Marangoni flow. For a strong Marangoni flow, ${\rm Pe} \gg \bar{U}$, the linear response regime $\bar{U}\ll 1$ becomes modified. We first have to consider the dominant Marangoni flow problem (iib), which determines the main component of the fluid flow in the diffusion-advection problem (iii). The Marangoni flow pattern is a stationary Marangoni vortex ring around the spherical swimmer below and parallel to the fluid interface $S_{\rm Int}$. Because this solution lacks axisymmetry, an analytical solution is no longer possible. Nevertheless, we can obtain novel scaling results for the concentration profile and the Marangoni flow field in a concentration boundary layer of width $l_c$ below the fluid interface $S_{\rm Int}$ along similar lines to Refs.\ \cite{LeRoux2016,Roche2014}.
The surfactant is emitted from the sphere with the fixed current density $\left.\bar{j}\right|_S=1$. Advection by the Marangoni flow $\bar{\vec{v}}_{\rm M}$ concentrates the surfactant in the boundary layer of width $\bar{l}_c$ below $S_{\rm Int}$. It takes a time $t \sim r/v_{\rm M}$ to reach a radial distance $r$. During this time, the surfactant diffuses over a distance $l_c \sim (D t)^{1/2} \sim (Dr/v_{\rm M})^{1/2}$ or $\bar{l}_c \sim (\rho/\bar{v}_{\rm M})^{1/2}$ in the vertical $y$-direction, which sets the boundary layer width $\bar{l}_c$. Because we are at low Reynolds numbers, the laminar boundary layer below the fluid interface $S_{\rm Int}$ is of the size $\delta \sim a$ ($\bar{\delta} \sim 1$) set by the sphere. The laminar boundary layer governs the decay of the Marangoni flow field $v_{\rm M}$ in the $y$-direction. Moreover, we have mass conservation of the emitted surfactant. The total advective flow $\bar{J} \sim 2\pi \bar{c} \bar{v}_{\rm M}\rho \bar{l}_c$ below the interface at distance $\rho$ will always equal the original flow $\bar{J}=2\pi$ that is emitted at the half-sphere, \begin{equation} 1 = \bar{J}/2\pi \sim \bar{c} \bar{v}_{\rm M}\rho \bar{l}_c \sim \bar{c} \bar{v}_{\rm M}^{1/2}\rho^{3/2} . \label{eq:jadv} \end{equation} In addition, the Marangoni boundary condition (see Eq.\ (\ref{eq:Mflow})) gives a second equation \begin{equation} - {\rm Pe} \partial_\rho\bar{c} = \partial_{\bar{y}} \bar{v}_{\rm M} \sim \frac{\bar{v}_{\rm M}}{\bar{\delta}} \sim \bar{v}_{\rm M} \label{eq:cvM} \end{equation} for concentration and velocity, which follows from the definition of the laminar boundary layer width $\bar{\delta}$.
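Combining Eqs.\ (\ref{eq:jadv}) and (\ref{eq:cvM}) with all prefactors set to unity gives the ODE $-{\rm Pe}\,\partial_\rho \bar{c} = \bar{c}^{-2}\rho^{-3}$, whose power-law solution with $\bar{c}(\infty)=0$ can be verified symbolically (a sketch with our own symbol names; the prefactor $(3/2)^{1/3}$ follows from the unit-prefactor assumption):

```python
import sympy as sp

rho, Pe = sp.symbols("rho Pe", positive=True)

# candidate power-law profile c(rho) = (3/(2 Pe))^(1/3) * rho^(-2/3), vanishing at infinity
c = (sp.Rational(3, 2) / Pe) ** sp.Rational(1, 3) * rho ** sp.Rational(-2, 3)

# Marangoni flow speed from mass conservation, Eq. (eq:jadv): v_M = c^(-2) * rho^(-3)
vM = c ** (-2) * rho ** (-3)

# the Marangoni stress condition, Eq. (eq:cvM): -Pe dc/drho = v_M
assert sp.simplify(-Pe * sp.diff(c, rho) - vM) == 0

# resulting scalings: c(1) ~ Pe^(-1/3) and v_M ~ Pe^(2/3) rho^(-5/3)
assert sp.simplify(c.subs(rho, 1) - (sp.Rational(3, 2) / Pe) ** sp.Rational(1, 3)) == 0
assert sp.simplify(vM - (sp.Rational(2, 3) * Pe) ** sp.Rational(2, 3) * rho ** sp.Rational(-5, 3)) == 0
print("power-law boundary layer profile verified")
```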
Combining both Eqs.\ (\ref{eq:jadv}) and (\ref{eq:cvM}), we find a differential equation for $\bar{c}(\rho)$, which we solve with the boundary condition $\bar{c}(\infty)=0$ resulting in \begin{align} \bar{c}(\rho) &= \bar{c}(1) \rho^{-2/3} ~~\mbox{with}~~ \bar{c}(1) \sim {\rm Pe}^{-1/3}, \label{eq:cMarangoni}\\ \bar{v}_{\rm M} &\sim \bar{c}^{-2} \rho^{-3} \sim \bar{c}^{-2}(1) \rho^{-5/3} \sim {\rm Pe}^{2/3} \rho^{-5/3}. \label{eq:vMarangoni} \end{align} We see that the advective current $\bar{j}_{\rm M} \sim \bar{c} \bar{v}_{\rm M}\sim {\rm Pe}^{1/3} \rho^{-7/3}$ becomes smaller than the corresponding diffusive current $\bar{j}_D \sim -\partial_\rho \bar{c} \sim {\rm Pe}^{-1/3} \rho^{-5/3}$ for $\rho > {\rm Pe}$. Then our assumption of advective transport breaks down, and this should mark the boundary of the Marangoni advection dominated region. Therefore, $\rho_{\rm M} \sim {\rm Pe}$ should be the scaling of the size of the Marangoni vortex around the sphere for low Reynolds numbers. At larger distances, a crossover to diffusive transport with $\bar{c} \propto \rho^{-1}$ sets in. So far, we considered the leading order of our problem by setting $\bar{U}\approx 0$; going one order further, we get the linear response for small $\bar{U}$ with the ansatz $\bar{c} = \bar{c}^{(0)} + \bar{U} \bar{c}^{(1)}$ with $\bar{c}^{(0)}(\rho)$ given by (\ref{eq:cMarangoni}). In the total flow $\vec{v} + \vec{v}_{\rm M}$, the Marangoni flow (\ref{eq:vMarangoni}) is the zeroth order result, $\vec{v}_{\rm M}= \vec{v}_{\rm M}^{(0)}$, while the Stokes swimming flow $\vec{v}= \vec{v}^{(1)}$ is linear in $\bar{U}$. In an advection dominated situation, the constant flux relation (\ref{eq:jadv}) still holds in the presence of Stokes flow, \begin{equation} 1 \sim (\bar{c}^{(0)} + \bar{U} \bar{c}^{(1)}) (\bar{U}\bar{u}\cos\theta + \bar{v}_{\rm M})^{1/2}\rho^{3/2}, \end{equation} where $\bar{u}(\rho)$ is the radial component of the Stokes flow. 
Expanding up to first order in $\bar{U}$ and using (\ref{eq:jadv}) for the leading order, we find \begin{equation} \bar{c}^{(1)}(\rho) \sim \frac{1}{ \bar{v}_{\rm M}^{1/2}(\rho)} \bar{c}^{(0)}(\rho) \bar{u}(\rho) \sim {\rm Pe}^{-2/3} \rho^{1/6} \bar{u}(\rho). \end{equation} This contribution is symmetry-breaking; inserting this scaling of the concentrations into the relations (\ref{FMdim}) and (\ref{FMtotaldim}) for the Marangoni forces, we obtain \begin{align} \frac{ \bar{F}_{\rm M}}{\pi{\rm Pe}} &\sim \bar{U} {\rm Pe}^{-2/3},& \frac{ \bar{F}_{\rm M,tot}}{\pi {\rm Pe}} &\sim \bar{U} {\rm Pe}^{-2/3}. \label{eq:FMtotalMarangoni} \end{align} We checked these predictions numerically in Fig.\ \ref{fig:FMtotalPe} by using our iterative FEM approach (see Appendix \ref{app:numerical}), which is possible up to ${\rm Pe}\sim 50$, and find good agreement, in particular, for the predictions $\bar{F}_{\rm M}/{\rm Pe} \propto {\rm Pe}^{-2/3}$ and $\bar{F}_{\rm M,tot}/{\rm Pe} \propto {\rm Pe}^{-2/3}$, which will be most important for the swimming speed relation (see insets in Fig.\ \ref{fig:FMtotalPe}). Moreover, these numerical FEM results show that both prefactors in Eq.\ (\ref{eq:FMtotalMarangoni}) are of order unity but hard to quantify because of finite size effects. This shows that Marangoni flows depress the total driving force in the linear response regime by a factor ${\rm Pe}^{-2/3}$, reflecting the fact that it is harder to break the symmetry in the presence of the strong Marangoni flow advection. Numerical results in Fig.\ \ref{fig:FMtotalPe} also show that the total Marangoni force is somewhat larger than the direct Marangoni force, $\bar{F}_{\rm M,tot} > \bar{F}_{\rm M}$. In this respect, our previous results for the linear response regime for ${\rm Pe}\ll \bar{U}$ remain unchanged: the Marangoni flow force {\em increases} the direct force. 
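The exponent bookkeeping behind Eqs.\ (\ref{eq:cMarangoni}) and (\ref{eq:vMarangoni}) and the crossover $\rho_{\rm M}\sim{\rm Pe}$ can be verified symbolically. The following Python sketch (an illustration added here, not part of the original analysis) inserts the power-law ansatz $\bar{c}\sim{\rm Pe}^p\rho^{-a}$, $\bar{v}_{\rm M}\sim{\rm Pe}^q\rho^{-b}$ into the two scaling relations (\ref{eq:jadv}) and (\ref{eq:cvM}) and solves the resulting linear equations for the exponents:

```python
from sympy import symbols, solve, Rational

# Power-law ansatz: c ~ Pe**p * rho**(-a), v_M ~ Pe**q * rho**(-b).
a, b, p, q = symbols('a b p q')

sol = solve([
    -a - b / 2 + Rational(3, 2),  # eq:jadv, rho powers: c * v_M**(1/2) * rho**(3/2) = O(1)
    p + q / 2,                    # eq:jadv, Pe powers
    (-a - 1) - (-b),              # eq:cvM, rho powers: -Pe * dc/drho ~ v_M
    (1 + p) - q,                  # eq:cvM, Pe powers
], [a, b, p, q], dict=True)[0]

print(sol[a], sol[b], sol[p], sol[q])  # 2/3 5/3 -1/3 2/3

# Crossover rho_M ~ Pe**x where the advective current c*v_M ~ Pe**(p+q) * rho**(-a-b)
# equals the diffusive current -dc/drho ~ Pe**p * rho**(-a-1):
x = symbols('x')
x_M = solve(sol[q] + x * (1 - sol[b]), x)[0]
print(x_M)  # 1, i.e. rho_M ~ Pe
```

The recovered exponents reproduce $\bar{c}\sim{\rm Pe}^{-1/3}\rho^{-2/3}$, $\bar{v}_{\rm M}\sim{\rm Pe}^{2/3}\rho^{-5/3}$, and $\rho_{\rm M}\sim{\rm Pe}$ stated above.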
\begin{figure} \begin{center} \includegraphics[width=0.99\linewidth]{FUPer7.pdf} \caption{\label{fig:FMtotalPe} Iterative three-dimensional FEM results for ${ \bar{F}_{\rm M}}/\pi{\rm Pe}$ (top) and ${\bar{F}_{\rm M,tot}}/\pi {\rm Pe}$ (bottom) as a function of $\bar{U}$ for ${\rm Pe} = 0-50$ for a cubic system with $-7 < \bar{y} <0$, $0<\bar{x}<7$, $-7<\bar{z}<7$. Insets show the slopes ${ \bar{F}_{\rm M}}/\bar{U}\pi{\rm Pe}$ and ${\bar{F}_{\rm M,tot}}/\bar{U}\pi {\rm Pe}$ as a function of ${\rm Pe}$ calculated from the results for $\bar{U}=0.1$. Artificial symmetry breaking from lattice irregularities/defects is prevented by averaging all measured quantities over two simulations with $U$ and $-U$. Blue open circles are results for ${\rm Pe}=0$ from FEM solutions to the axisymmetric diffusion-advection equation in two-dimensional angular representation with $\bar{R}=30$. The slope in the linear response regime for $\bar{U}\ll 1$ is reduced according to Eq.\ (\ref{eq:FMtotalMarangoni}). Results for $\bar{U}\gg 1$ are essentially not affected by strong Marangoni flows ${\rm Pe}\gg {\bar U}$. } \end{center} \end{figure} In the advection-dominated regime $\bar{U}\gg 1$, on the other hand, results are essentially not affected by increasing the Peclet number ${\rm Pe}$ into the regime of strong Marangoni flows ${\rm Pe}\gg {\bar U}$, as the numerical results in Fig.\ \ref{fig:FMtotalPe} show: all curves for the Marangoni forces converge to the previous diffusion-advection results for $\bar{U} \gg 1$. The reason for this behavior is that the flow field $\vec{v}$ will still give rise to a concentration boundary layer of thickness $\Delta \rho \sim \bar{U}^{-1/3}$ around the sphere, which determines the concentration field and, thus, the Marangoni forces. 
On the scale of the boundary layer, the Marangoni flows $\vec{v}_{\rm M}$ are not yet developed; they develop only further away at $1\ll \rho < \rho_{\rm M} \sim {\rm Pe}$ because of the no-slip boundary condition for the Marangoni flow in (iib). Therefore, the results for $\bar{U}\gg 1$ remain essentially unaffected by a strong Marangoni flow for ${\rm Pe}\gg {\bar U}$. \subsubsection{Evaporation} In the presence of evaporation, the boundary condition for the diffusion-advection problem changes at the air-water interface $S_{\rm Int}$. We then have the convective (Robin) boundary condition (\ref{eq:bcevap}), which is governed by the dimensionless Biot number (\ref{eq:Biot}), instead of the Neumann condition (\ref{eq:bcnoevap}), which is recovered for vanishing Biot number $\bar{k}=0$. In general, evaporation of surfactant depletes the interface of surfactant and, thus, decreases the Marangoni driving forces (both direct and flow force). For volatile camphor, $k \approx 10^{-4}\, {\rm m/s}$ has been suggested \cite{Soh2008}, which corresponds to a high Biot number $\bar{k} = ak/D \approx 550$ for the camphor disks from \cite{Boniface2019}. PEG, by contrast, has a negligible Biot number as it is not volatile. As a consequence of the new convective boundary condition, the concentration profile will fall off exponentially perpendicular to the interface in the outward direction on a dimensionless extrapolation length scale $\Delta \bar{y}\sim 1/\bar{k}$ given by the inverse of the Biot number. In the presence of evaporation, we can develop a qualitative scaling theory based on the assumption that the total evaporation flux balances the total emission flux of surfactant in a stationary state, which gives \begin{equation} -\int_{S_{\rm Int}} \left. \partial_{\bar{y}} \bar{c}(\vec{\rho})\right|_{\bar{y}=0} = 2\pi \label{eq:jev_balance} \end{equation} in dimensionless quantities and determines the derivatives $ \left. 
\partial_{\bar{y}} \bar{c}(\vec{\rho})\right|_{\bar{y}=0}$ at the scaling level. Via the convective boundary condition (\ref{eq:bcevap}), this also determines the surface concentration $\left. \bar{c}(\vec{\rho})\right|_{\bar{y}=0}$. Moreover, the convective boundary condition should reduce to our previous results for the Neumann condition for small Biot numbers $\bar{k}$, where the evaporation flux $\bar{j}_{\rm ev} =\bar{k}\left. \bar{c}(\vec{\rho})\right|_{\bar{y}=0}$ is smaller than the dominating transport flux, which is the diffusive or Marangoni flux for $\bar{U}\ll 1$ and the convective flux for $\bar{U}\gg 1$. For the diffusion or Marangoni dominated situation for $\bar{U}\ll 1$, the concentration and, thus, evaporation is distributed over the whole interface $S_{\rm Int}$, i.e., there is no concentration boundary layer around the half-sphere. Therefore, flux balance (\ref{eq:jev_balance}) leads to $ \left. \partial_{\bar{y}} \bar{c}(\vec{\rho})\right|_{\bar{y}=0} \sim O(1)$ resulting in $\left. \bar{c}(\vec{\rho})\right|_{\bar{y}=0} \sim 1/\bar{k}$ because of the convective boundary condition (\ref{eq:bcevap}). Then, also $\bar{F}_{\rm M}\sim 1/\bar{k}$ and $\bar{F}_{\rm M,tot}\sim 1/\bar{k}$. Moreover, the evaporation flux dominates over the diffusive or Marangoni fluxes (which are $O(1)$) only for $\bar{k}>\bar{k}_0$ with a crossover value $\bar{k}_0 =O(1)$, which determines the crossover to the non-evaporative case. Our numerical results in Fig.\ \ref{fig:FtotalUk} suggest $\bar{k}_0 \ll 1$. Therefore, we expect \begin{align} \bar{F}_{\rm M} &\sim \left. \bar{F}_{\rm M}\right|_{\bar{k}=0} \frac{1}{\bar{k}+\bar{k}_0}, & \bar{F}_{\rm M,tot} &\sim \left. \bar{F}_{\rm M,tot}\right|_{\bar{k}=0} \frac{1}{\bar{k}+\bar{k}_0}. \label{eq:FMtotalkUsmall} \end{align} We checked these predictions numerically in Fig.\ \ref{fig:FtotalUk} by using our iterative FEM approach (see Appendix \ref{app:numerical}) and find good agreement. 
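At the scaling level, the $1/\bar{k}$ dependence follows directly from the convective boundary condition; a minimal symbolic check (illustrative only, with the $O(1)$ flux set to unity):

```python
from sympy import symbols, Eq, solve

kbar, c_s, flux = symbols('kbar c_s flux', positive=True)

# Robin (convective) boundary condition at the interface: -dc/dy|_0 = kbar * c|_0.
# In the diffusive regime, flux balance fixes -dc/dy|_0 ~ O(1), i.e. flux = 1.
c_surface = solve(Eq(flux, kbar * c_s), c_s)[0].subs(flux, 1)
print(c_surface)  # 1/kbar
```

The surface concentration, and with it the Marangoni forces, therefore scales as $1/\bar{k}$, consistent with $\bar{F}_{\rm M}\sim 1/\bar{k}$ and $\bar{F}_{\rm M,tot}\sim 1/\bar{k}$ above.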
The plots in the bottom row (yellow symbols) show that the dependence on $\bar{k}$ for $\bar{U}\ll 1$ is described very well by $\bar{F}_{\rm M} = \left.\bar{F}_{\rm M}\right|_{\bar{k}=1}/\bar{k}$ and $\bar{F}_{\rm M,tot} = \left. \bar{F}_{\rm M,tot}\right|_{\bar{k}=1}/\bar{k}$, suggesting $\bar{k}_0 \ll 1$. For the advection-dominated situation for $\bar{U}\gg 1$, the concentration and, thus, evaporation is present only in the concentration boundary layer of radial width $\Delta \rho \sim \bar{U}^{-1/3}$ around the half-sphere and in the advection tail. Therefore, the flux balance (\ref{eq:jev_balance}) leads to $ \left. \partial_{\bar{y}} \bar{c}(\vec{\rho})\right|_{\bar{y}=0} \sim \bar{U}^{1/3}$ and $\left. \bar{c}(\vec{\rho})\right|_{\bar{y}=0} \sim \bar{U}^{1/3}/\bar{k}$ because of the convective boundary condition (\ref{eq:bcevap}). Then, also $\bar{F}_{\rm M}\sim \bar{U}^{1/3}/\bar{k}$ and $\bar{F}_{\rm M,tot}\sim \bar{U}^{1/3}/\bar{k}$ if the evaporation flux dominates over the convective fluxes. The convective flux at the interface and at the boundary layer ($\rho\approx 1+\Delta\rho$) is in radial direction $\bar{j}_u =\bar{U}\bar{u}(\rho) \cos\theta \bar{c}(\rho) \sim \bar{U}\Delta \rho^2 \bar{c}(\rho) \sim \bar{U}^{1/3} \bar{c}(\rho)$ and $\bar{j}_v =\bar{U}\bar{v}(\rho) \sin\theta \bar{c}(\rho) \sim \bar{U}\Delta \rho \sin\theta \bar{c}(\rho) \sim \bar{U}^{2/3}\sin\theta \bar{c}(\rho)$ in $\theta$-direction. In the advection tail (of angular width $\Delta \theta \sim \bar{U}^{-1/3}$), this also leads to $\bar{j}_v \sim \bar{U}^{1/3} \bar{c}(\rho)$. Therefore, the evaporation flux starts to dominate over the convective radial flux and the flux in $\theta$-direction in the advection tail for $\bar{k}> \bar{U}^{1/3}$; only then do we see the effects of evaporation. Therefore, we expect \begin{align} \bar{F}_{\rm M} &\sim \left. \bar{F}_{\rm M}\right|_{\bar{k}=0} \frac{\bar{U}^{1/3}}{\bar{k}+\bar{U}^{1/3}}, & \bar{F}_{\rm M,tot} &\sim \left. 
\bar{F}_{\rm M,tot}\right|_{\bar{k}=0} \frac{\bar{U}^{1/3}}{\bar{k}+\bar{U}^{1/3}} \label{eq:FMtotalkU} \end{align} for $\bar{U}\gg 1$. We checked these predictions numerically in Fig.\ \ref{fig:FtotalUk} and find good agreement. The plots in the bottom row (blue symbols) show that the dependence on $\bar{k}$ for $\bar{U}\gg 1$ agrees very well with Eq.\ (\ref{eq:FMtotalkU}). \begin{figure} \begin{center} \includegraphics[width=0.99\linewidth]{FUkcylr15new.pdf} \caption{\label{fig:FtotalUk} Iterative three-dimensional FEM results for ${ \bar{F}_{\rm M}}/\pi{\rm Pe}$ (top) and ${\bar{F}_{\rm M,tot}}/\pi {\rm Pe}$ (middle) as a function of $\bar{U}$ for ${\rm Pe} = 0$ and Biot numbers $\bar{k}=0-400$ for a half-cylindrical system with $\rho <15$, $\bar{x}>0$, $-4 < \bar{y} <0$. Blue dashed lines are results for $\bar{k}=0$ from FEM solutions to the axisymmetric diffusion-advection equation in two-dimensional angular representation with $\bar{R}=30$. Forces in the diffusive regime $\bar{U}\ll 1$ are reduced according to Eq.\ (\ref{eq:FMtotalkUsmall}). Forces in the advective regime $\bar{U}\gg 1$ are reduced according to Eq.\ (\ref{eq:FMtotalkU}). Bottom row: FEM results for ${ \bar{F}_{\rm M}}/\pi{\rm Pe}$ and ${\bar{F}_{\rm M,tot}}/\pi {\rm Pe}$ as a function of Biot number $\bar{k}$ for $\bar{U}=10^{-4},100$ in comparison to scaling results in Eqs.\ (\ref{eq:FMtotalkUsmall}) and (\ref{eq:FMtotalkU}). } \end{center} \end{figure} In summary, we see a reduction of all Marangoni forces by evaporation both in the linear response regime $\bar{U}\ll 1$ but also in the regime $\bar{U}\gg 1$ of strong advection. In both regimes, evaporation reduces the surfactant concentration, which decreases the Marangoni forces. \subsubsection{Force balance and swimming condition} Now we have a rather complete picture of the solution of problems (i)-(iii), i.e., diffusion-advection coupled to hydrodynamics for a prescribed swimmer velocity $\bar{U}$ at low Reynolds numbers. 
The main result is the Marangoni forces as a function of the prescribed velocity $\bar{U}$. The swimming condition (\ref{eq:swimcond0}) or (\ref{eq:swimcond}) gives an additional force balance relation between Marangoni forces and $\bar{U}$, which has to be satisfied in the swimming state and determines the swimming speed $\bar{U}=\bar{U}_{\rm swim}$ as a function of Peclet number ${\rm Pe}$ and Biot number $\bar{k}$. In general, the swimming velocity increases with ${\rm Pe}$ and decreases with $\bar{k}$. First, we consider small ${\rm Pe}$, i.e., small surfactant emission rates and see whether a swimming state with spontaneously broken symmetry can exist. For $\bar{k}\approx 0$, as appropriate for the PEG-alginate swimmer, we find from Eq.\ (\ref{eq:c1Mflnew}) that a solution for the swimming condition exists above a critical Peclet number ${\rm Pe}> {\rm Pe}_c \sim 8/\ln \bar{R} \to 0$, which approaches zero for large system sizes. Therefore, the symmetry is essentially always spontaneously broken in a large swimming vessel. Spontaneous symmetry breaking resulting in propulsion is possible by establishing an asymmetric surfactant concentration profile that is maintained by advection and can produce the necessary Marangoni forces. Equation (\ref{eq:c1Mflnew}) is valid only in the decoupled limit ${\rm Pe}\ll \bar{U}$. At the swimming bifurcation, we have ${\rm Pe} ={\rm Pe}_c \gg \bar{U}\approx 0$, however, such that the feedback of Marangoni flows onto the diffusion-advection problem has to be taken into account, and the decoupling approximation cannot be used. Then, Eq.\ (\ref{eq:FMtotalMarangoni}) describes the Marangoni forces in the linear response regime, which further reduces the critical Peclet number to ${\rm Pe}_c \sim 1/(\ln \bar{R})^{3}\to 0$. 
In the presence of relevant evaporation $\bar{k}\gg 1$, as appropriate for camphor, the total Marangoni force is depressed according to Eq.\ (\ref{eq:FMtotalkUsmall}) resulting in an increased ${\rm Pe}_c \sim \bar{k}^3/(\ln \bar{R})^{3} \to 0$, which, however, still approaches zero for large swimming vessel sizes $\bar{R}$. Immediate propulsion in all experiments is in accordance with a bifurcation with a small ${\rm Pe}_c$. Moreover, we observe an intermittent stopping of the swimming motion only in the very end (after $20\, {\rm min}$ or more) before the swimming motion stops completely (because the fuel has been consumed). This is consistent with a small critical value ${\rm Pe}_c$: ${\rm Pe}$ drops below it only for very small emission current densities $\alpha$. Small irregularities can already break the symmetry and give rise to an avoided bifurcation and select a fixed swimming axis with respect to the particle orientation, which is also observed in the experiments. For ${\rm Pe}> {\rm Pe}_c$, a spontaneously symmetry-broken swimming state with $\bar{U}_{\rm swim}>0$ exists. Because the Marangoni force Eq.\ (\ref{eq:c1Mflnew}) remains approximately linear up to $\bar{U}\sim O(1)$, as can also be seen in Fig.\ \ref{fig:FM}, the swimming velocity rises steeply for ${\rm Pe}\gtrsim {\rm Pe}_c$ and quickly enters the asymptotics for the advection-dominated regime $\bar{U}_{\rm swim} \gg 1$. Here, we find the swimming relations \begin{align} \bar{U}_{\rm swim} &\sim {\rm Pe}^{3/5} &&\mbox{for}~ \bar{k}\ll {\rm Pe}^{1/5}, \label{eq:PeU}\\ \bar{U}_{\rm swim} &\sim \bar{k}^{-3/4} {\rm Pe}^{3/4} &&\mbox{for}~ \bar{k}\gg {\rm Pe}^{1/5}. \label{eq:PeUk} \end{align} Also in this regime, we have ${\rm Pe} \gg \bar{U}_{\rm swim}$ such that Marangoni flows are strong, but this has little influence on the swimming speed because of the concentration boundary layer that forms in the advective regime. 
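The power laws (\ref{eq:PeU}) and (\ref{eq:PeUk}) follow from balancing Stokes drag $\bar{F}_{\rm D}\sim\bar{U}$ against the advective-regime Marangoni force. The symbolic sketch below assumes the force scaling $\bar{F}_{\rm M}\sim{\rm Pe}\,\bar{U}^{-2/3}$, reduced by the evaporation factor $\bar{U}^{1/3}/(\bar{k}+\bar{U}^{1/3})$ of Eq.\ (\ref{eq:FMtotalkU}); this scaling is an assumption inferred from the quoted results, with the exact form coming from Eq.\ (\ref{eq:c1Mflnew}) outside this section:

```python
from sympy import symbols, solve, Rational, simplify

U, Pe, k = symbols('U Pe k', positive=True)

# Assumed advective-regime scalings (prefactors of order unity dropped):
F_drag = U                        # Stokes drag ~ U
F_M = Pe * U**Rational(-2, 3)     # direct Marangoni force without evaporation

# No evaporation: F_M = F_drag gives U_swim ~ Pe**(3/5)
U_noev = solve(F_M - F_drag, U)[0]
print(U_noev)  # Pe**(3/5)

# Strong evaporation, k >> U**(1/3): extra factor U**(1/3)/k gives k**(-3/4) * Pe**(3/4)
U_ev = solve(F_M * U**Rational(1, 3) / k - F_drag, U)[0]
assert simplify(U_ev - Pe**Rational(3, 4) * k**Rational(-3, 4)) == 0

# Both branches match at the crossover Biot number k ~ U**(1/3) ~ Pe**(1/5)
assert simplify(U_ev.subs(k, Pe**Rational(1, 5)) - U_noev) == 0
```

The crossover check reproduces the condition $\bar{k}\sim{\rm Pe}^{1/5}$ separating the two regimes in Eqs.\ (\ref{eq:PeU}) and (\ref{eq:PeUk}).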
Evaporation is significant for $\bar{k} \gg {\rm Pe}^{1/5}$ and reduces the swimming speed because it reduces the driving Marangoni forces. The swimming relations (\ref{eq:PeU}) and (\ref{eq:PeUk}) are shown in Fig.\ \ref{fig:comparison} as dotted yellow and dotted blue lines, respectively, together with the experimental results for our PEG-alginate swimmers and camphor boats from Boniface {\it et al.} \cite{Boniface2019}. We clearly see that the experimentally observed swimming speeds differ from these low Reynolds number predictions because these swimmers operate at higher Reynolds numbers. \subsection{High Reynolds numbers} We have developed a complete picture of the solution of problems (i)-(iii), i.e., diffusion-advection coupled to hydrodynamics for a prescribed swimmer velocity $\bar{U}$ at low Reynolds numbers, including evaporation. Low Reynolds numbers ${\rm Re} = 2\bar{U}/{\rm Sc} \ll 1$ are realized for $\bar{U} \ll {\rm Sc}/2$, which can still be much larger than unity for typical Schmidt numbers for surfactants in aqueous solutions (see Table \ref{tab:nondimen}). For the relevant Marangoni propulsion forces, the following picture has emerged from our analysis at low Reynolds numbers. There is a diffusive regime for $\bar{U}\ll 1$, which becomes modified by strong Marangoni flows for Peclet numbers ${\rm Pe}\gg \bar{U}$, and there is an advective regime for $\bar{U}\gg 1$, which is essentially unchanged in the presence of strong Marangoni flows for ${\rm Pe}\gg \bar{U}$ (see Fig.\ \ref{fig:FMtotalPe}). Both regimes are modified in the presence of evaporation if the Biot number is $\bar{k} \ge 1$ in the diffusive regime and $\bar{k} \gg \bar{U}^{1/3}$ in the advective regime (see Fig.\ \ref{fig:FtotalUk}). High Reynolds numbers occur for large velocities $\bar{U} \gg {\rm Sc}/2$ and, therefore, always deep in the advective regime $\bar{U}\gg 1$. 
At low Reynolds numbers, the concentration boundary layer of dimensionless width $\Delta \rho \sim \bar{U}^{-1/3}$ (see Eq.\ (\ref{eq:Deltarho})) determines the results for the Marangoni forces in this advective limit [see Eqs.\ (\ref{eq:NuU}), (\ref{eq:NuUB}), (\ref{eq:c1Unew}), (\ref{eq:c1Mflnew}), (\ref{eq:FMtotalMarangoni}) and (\ref{eq:FMtotalkU})]. In order to generalize to higher Reynolds numbers, we realize that the concentration boundary layer width is closely related to the Nusselt number. By definition (\ref{eq:Nu}), ${\rm Nu} = {-\partial_\rho \bar{c}_0(1)}/{\bar{c}_0(1)}$, the Nusselt number is an inverse extrapolation length, which we expect to be the inverse concentration boundary layer width, \begin{equation} {\rm Nu} \sim \frac{1}{\Delta \rho}. \label{eq:NuDeltarho} \end{equation} The result ${\rm Nu} \sim \bar{U}^{1/3}$ from Eqs.\ (\ref{eq:NuU}) and (\ref{eq:NuUB}) confirms this relation both for constant flux and constant concentration boundary conditions at low Reynolds numbers, and we conjecture it to hold also at higher Reynolds numbers. Phenomenologically, the Nusselt number is well-studied also for high Reynolds numbers \cite{Michaelides2003}, both for heat (${\rm Nu}_T$ in the following) and for mass transfer (${\rm Nu}$ in the following, also Sherwood number ${\rm Sh}$ in the literature), and we can draw on these results in order to develop a theory for the concentration boundary layer and the Marangoni forces. Up to moderate Reynolds numbers ${\rm Re} \lesssim 200$, the physics is governed by additional laminar (viscous) boundary layers that appear around a sphere in fluid flow, which typically have a width $ \bar{\delta} \propto {\rm Re}^{-1/2}$ \cite{White2006,Schlichting2016}. The viscous boundary layer scaling can be rationalized by generalizing our above scaling argument for the concentration boundary layer leading to Eq.\ (\ref{eq:Deltarho}). 
The important difference is that the velocity field close to the sphere changes from $v(\Delta r) \sim U\Delta r/a$ for Stokes flow to $v(\Delta r) \sim U \Delta r/\delta$ for laminar boundary layer flow with a no slip boundary condition. This leads to \begin{equation} \Delta \rho = \Delta r/a \sim \bar{\delta}^{1/3} \bar{U}^{-1/3} \sim {\bar{U}}^{-1/2} {\rm Sc}^{1/6}. \label{eq:Deltarhodelta} \end{equation} This scaling result is in accordance with phenomenological results for the Nusselt number by Ranz and Marshall \cite{Ranz1952} \begin{align} {\rm Nu}_T &= 1.0 + 0.3 {\rm Re}^{1/2} {\rm Pr}^{1/3} = 1.0 + 0.3 \sqrt{2}\bar{U}^{1/2} {\rm Sc}^{-1/2} {\rm Pr}^{1/3}, \nonumber\\ {\rm Nu} &= 1.0 + 0.3 {\rm Re}^{1/2} {\rm Sc}^{1/3} = 1.0 + 0.3 \sqrt{2}\bar{U}^{1/2} {\rm Sc}^{-1/6} \label{eq:Ranz} \end{align} (${\rm Re} = 2{\bar{U}}/{\rm Sc}$ and with ${\rm Sc}$ replacing the Prandtl number ${\rm Pr}$ for the mass transfer Nusselt number). Because the concentration will drop in radial direction from its value at the surface $S$ of the half-sphere to zero within the concentration boundary layer of width $\Delta \rho$, and, thus, $1 = - \partial_\rho c(\rho=1) \sim c(\rho=1)/\Delta \rho$ for constant flux boundary conditions, the scaling of the concentration boundary layer width (\ref{eq:NuDeltarho}) also gives rise to \begin{equation} \bar{c}(\rho\!=\!1,\theta) \sim {\Delta \rho} \sim {\rm Nu}^{-1}, \label{eq:cNuDeltarho} \end{equation} i.e., the symmetry-breaking concentration level at the sphere is inversely proportional to the Nusselt number [generalizing Eq.\ (\ref{eq:cDeltarho})]. Therefore, the direct Marangoni force (\ref{FMdim}) should follow \begin{equation} \frac{\bar{F}_{\rm M}}{\rm Pe} \sim \int_0^\pi d\theta \cos\theta \bar{c}(1,\theta) \sim {\rm Nu}^{-1} \sim \bar{U}^{-1/2} {\rm Sc}^{1/6} \label{eq:FMhigh0} \end{equation} at higher Reynolds numbers. The total Marangoni force no longer follows from a reciprocal theorem. 
In terms of an energy balance, the reciprocal theorem can be interpreted as the absence of mutual dissipation between swimming flow and Marangoni flow \cite{Ender2020}. Therefore, the power input by surface Marangoni stresses into the swimming flow is transmitted \emph{without loss} as power input by the Marangoni flow force onto the swimmer. For higher Reynolds numbers, the mutual dissipation is no longer zero, but additional viscous terms appear, which are connected to the vorticity of the flow. This suggests that the Marangoni stresses at the interface become less effective in generating a Marangoni flow force because of this additional dissipation. Therefore, we simply neglect the Marangoni flow force (or assume that the Marangoni flow force is sub-dominant) and only consider the direct Marangoni force (\ref{eq:FMhigh0}) at high Reynolds numbers in the following. Likewise, the existence of viscous boundary layers around the half-sphere modifies the drag force. On phenomenological grounds, it has been suggested that $F_{\rm D} = D_c \frac{\pi}{2}\mu a U$ with $D_c\simeq 6{\rm Nu}_T$ \cite{Duan2015}, where ${\rm Nu}_T$ is the Nusselt number for heat transport, resulting in \begin{equation} \bar{F}_{\rm D} = -3\pi \bar{U} {\rm Nu}_T. \label{eq:FShigh} \end{equation} Using the Ranz and Marshall correlation (\ref{eq:Ranz}), we find from the force balance $\bar{F}_{\rm D} +\bar{F}_{\rm M}=0$ \begin{align} {\rm Pe}&\approx 3 {\rm Nu}{\rm Nu}_T \bar{U}_{\rm swim}, \nonumber\\ \bar{U}_{\rm swim} &\sim {\rm Sc}^{1/3} {\rm Pr}^{-1/6} {\rm Pe}^{1/2} \label{eq:PeUhigh} \end{align} in the absence of evaporation. 
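The exponents in Eq.\ (\ref{eq:PeUhigh}) can be checked by inserting the leading Ranz-Marshall scalings into the force balance. The following symbolic sketch (illustrative, with all $O(1)$ prefactors dropped) verifies that the quoted swimming law inverts the balance ${\rm Pe}\sim{\rm Nu}\,{\rm Nu}_T\,\bar{U}$:

```python
from sympy import symbols, Rational, simplify

U, Pe, Sc, Pr = symbols('U Pe Sc Pr', positive=True)

# Leading Ranz-Marshall terms with Re = 2U/Sc (prefactors dropped):
Nu = U**Rational(1, 2) * Sc**Rational(-1, 6)
Nu_T = U**Rational(1, 2) * Sc**Rational(-1, 2) * Pr**Rational(1, 3)

# Force balance Pe ~ Nu * Nu_T * U from eq:PeUhigh:
Pe_of_U = simplify(Nu * Nu_T * U)
print(Pe_of_U)  # ~ U**2 * Sc**(-2/3) * Pr**(1/3)

# The claimed swimming law U_swim ~ Sc**(1/3) * Pr**(-1/6) * Pe**(1/2) inverts this balance:
U_swim = Sc**Rational(1, 3) * Pr**Rational(-1, 6) * Pe**Rational(1, 2)
assert simplify(Pe_of_U.subs(U, U_swim) / Pe) == 1
```

Inserting $\bar{U}_{\rm swim}$ back into the balance reproduces ${\rm Pe}$ up to $O(1)$ factors, confirming the exponent bookkeeping.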
In the presence of evaporation, we use $\bar{F}_{\rm M} \sim \left.\bar{F}_{\rm M}\right|_{\bar{k}=0} {{\rm Nu}}/{(\bar{k}+{\rm Nu})}$ [cf.\ Eq.\ (\ref{eq:FMtotalkU})] to find \begin{align} {\rm Pe}&\approx 3 (\bar{k}+{\rm Nu}){\rm Nu}_T \bar{U}_{\rm swim}, \nonumber\\ \bar{U}_{\rm swim} &\sim \bar{k}^{-2/3} {\rm Sc}^{1/3} {\rm Pr}^{-2/9} {\rm Pe}^{2/3}. \label{eq:PeUhighk} \end{align} Both results (\ref{eq:PeUhigh}) and (\ref{eq:PeUhighk}) are also shown in Fig.\ \ref{fig:comparison} together with the experimental data on PEG-alginate and camphor Marangoni boats. \section{Comparison with experiment} Force balance for low and high Reynolds numbers results in a characteristic ${\rm Pe}$-$\bar{U}_{\rm swim}$-relation for the swimmer with characteristic power laws. For low Reynolds numbers, these are relations (\ref{eq:PeU}), $\bar{U}_{\rm swim} \propto {\rm Pe}^{3/5}$, without evaporation and (\ref{eq:PeUk}), $\bar{U}_{\rm swim} \propto {\rm Pe}^{3/4}$, in the presence of strong evaporation. For higher Reynolds numbers, we find relations (\ref{eq:PeUhigh}), $\bar{U}_{\rm swim} \propto {\rm Pe}^{1/2}$ without evaporation and (\ref{eq:PeUhighk}), $\bar{U}_{\rm swim} \propto {\rm Pe}^{2/3}$ for strong evaporation. The experiments on PEG-alginate swimmers and the camphor boats from Boniface {\it et al.} \cite{Boniface2019} also take place at higher Reynolds numbers; parameter value estimates for these experiments are summarized in Table \ref{tab:experiment}, the resulting dimensionless parameters in Table \ref{tab:nondimen}. Our experimental results for the mass release rate $\dot{m}(t)$ as a function of time (see Fig.\ \ref{fig:mdot}(middle)) and the corresponding swimming velocity $U_{\rm swim}(t)$ (see Fig.\ \ref{fig:mdot}(right)) of the PEG-alginate swimmers give the red line in Fig.\ \ref{fig:comparison} in the ${\rm Pe}$-$\bar{U}_{\rm swim}$ parameter plane. 
We also show experimental results for camphor boats from Boniface {\it et al.} \cite{Boniface2019} (black data points from experiments varying the radius and black line from time-dependent swimming data). Figure \ref{fig:comparison} compares these experimental results with our theoretical results for the appropriate parameter values for the PEG-alginate swimmers and the camphor boats from Tables \ref{tab:nondimen} and \ref{tab:experiment}. \begin{table*} \begin{center} \caption{\label{tab:experiment} Estimates of experimental parameters.} \begin{tabular}{ l | l | l} \hline\noalign{\smallskip} Parameter & PEG-alginate swimmer & Camphor boat\cite{Boniface2019} \\ \noalign{\smallskip}\hline\noalign{\smallskip} Radius $a$ & $1500 \,{\rm \mu m}$ & $1000-15000\, {\rm \mu m}$ \\ Diffusion constant $D$ & $350 \,{\rm \mu m^2/s}$ & $720 \,{\rm \mu m^2/s}$\\ Surface tension reduction $\kappa = - {\Delta \gamma}/{\Delta c}$ & $2.7\times 10^{-4} \,{\rm m^3/s^2}$ & $2\times 10^{-2} \,{\rm m^3/s^2}$ \\ Mass loss per time $\dot{m}= c\pi a^2\alpha$ & $c_{\rm half-sph}=2$ & $c_{\rm disk}=1$ \\ & $0.01-0.25 \times 10^{-6} \,{\rm g/s}$ & $(0.76 \times 10^{-6} \, {\rm g/s} )\left({a}/{4{\rm mm}}\right)^2$ \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table*} The swimming relations in Fig.\ \ref{fig:comparison} are the main result of the paper. For the PEG-alginate swimmer (red line), we see good agreement with the high Reynolds number theory in the absence of evaporation, i.e., with Biot number $\bar{k} =0$ (yellow line). The corresponding low Reynolds number theory (dotted yellow line) gives significantly lower swimming velocities at the same Peclet number. This outcome is what we expected based on the above estimate of moderate Reynolds numbers ${\rm Re} \sim 60$ for the PEG-alginate swimmers and based on the non-volatility of PEG. 
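The quoted parameter regime can be reproduced from the table values; the following order-of-magnitude estimate is illustrative only, assuming the kinematic viscosity of water $\nu\approx 10^{-6}\,{\rm m^2/s}$ (not listed in the table) and a typical swimming speed of $2\,{\rm cm/s}$:

```python
# Order-of-magnitude estimate for the PEG-alginate swimmer.
nu = 1.0e-6    # m^2/s, kinematic viscosity of water (assumed value)
a = 1500e-6    # m, swimmer radius (experimental parameter table)
D = 350e-12    # m^2/s, surfactant diffusion constant (experimental parameter table)
U = 0.02       # m/s, typical swimming speed ~ 2 cm/s

Re = 2 * a * U / nu    # Reynolds number based on the diameter
Sc = nu / D            # Schmidt number
U_bar = U * a / D      # dimensionless velocity

print(round(Re))        # 60
print(round(Sc))        # 2857
print(U_bar < Sc / 2)   # False: outside the low-Reynolds-number regime
```

With these numbers, ${\rm Re}\approx 60$ reproduces the moderate Reynolds numbers quoted above, and $\bar{U}\gg{\rm Sc}/2$ confirms that the experiments lie outside the low-Reynolds-number regime.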
We also see that the slower second phase of the swimming motion of the PEG-alginate swimmers is described slightly better by our theory (left part of the red line in Fig.\ \ref{fig:comparison}), which is in accordance with our initial observation that the time constants for mass release and swimming velocity agree only in the second phase. In the first phase, the Peclet number ${\rm Pe}$ necessary to achieve the measured swimming velocity is slightly \emph{lower} than predicted by our theory, i.e., propulsion is more efficient than predicted. This is a hint that some of our theoretical assumptions could be violated during the first phase, for example, regarding the adsorption equilibrium, which might not yet be established starting from an initially ``empty'' air-water interface; this could give rise to steeper concentration gradients and more efficient propulsion. For volatile camphor, a Biot number of $\bar{k} \approx 550$ has been suggested in Ref.\ \cite{Soh2008}, which we use in Fig.\ \ref{fig:comparison} to compare with the experimental data of Boniface {\it et al.} on camphor disks \cite{Boniface2019} (black data points and black line). Again, we obtain good agreement with the high Reynolds number theory (blue line). The disk geometry differs from the half-spherical geometry we discussed in detail, but we expect that the swimming relation will only differ by numerical factors of order unity. The corresponding low Reynolds number theory (dotted blue line) significantly underestimates swimming velocities, and a theory without evaporation (dashed blue line) overestimates swimming velocities. 
\begin{figure} \begin{center} \includegraphics[width=0.99\linewidth]{Uswimexp.pdf} \caption{Different theory results for ${\rm Pe}$-$\bar{U}_{\rm swim}$ swimming relations in a double-logarithmic plot in comparison to experimental results on PEG-alginate swimmers (red lines, obtained from the data in Fig.\ \ref{fig:mdot}) and camphor boats from Ref.\ \cite{Boniface2019} (black lines and dots; black line is time-dependent data, black dots are data at fixed times but for different radii $a$). PEG-alginate swimmers are described well by the high Reynolds number theory without evaporation (Biot number $\bar{k}=0$); camphor boats are best described by a high Reynolds number theory with Biot number $\bar{k}\approx 550$. The corresponding low Reynolds number theories give significantly lower swimming speeds. } \label{fig:comparison} \end{center} \end{figure} \section{Discussion and conclusion} We presented an experimental realization of alginate capsule self-propulsion at the air-water interface by loading the alginate capsule with surfactant molecules during synthesis. Self-propulsion of these capsule swimmers is based on a Marangoni boat mechanism. Alginate is bio-compatible and widely used for capsule production, both of which are interesting aspects for further applications. The versatile and simple synthesis strategy allowed us to identify various substances that can propel alginate capsules, see Table \ref{tab:fuels}. PEG surfactants exhibit the best propulsion properties: for PEG-300, we find a fast and sustained motion with swimming speeds $U_{\rm swim} \sim 2-3 \,{\rm cm/s}$ over $20\, {\rm min}$ and more. The swimming speed corresponds to several swimmer diameters per second and is comparable to or exceeds that of other self-phoretic or microswimmers \cite{ebbens2010} or active liquid droplets \cite{Herminghaus2014}. 
In general, we find prolonged propulsion only if spreading molecules are water-soluble as the PEG molecules are; then the air-water interface can regenerate by the fuel being dissolved in water. Evaporation from the air-water interface is another mechanism to achieve regeneration, which is utilized in camphor boats \cite{Soh2008,Suematsu2014,Akella2018,Boniface2019}. We conclude that a mechanism that regenerates the air-water interface, such as water-solubility or evaporation of surfactants, is crucial for prolonged propulsion. We could produce alginate swimmers down to radii of several hundreds of micrometers, which is slightly above the realm of low Reynolds numbers. Future work could address the further miniaturization of capsules. Starting from low Reynolds numbers, we developed a theory for Marangoni boat propulsion of a completely symmetric, half-spherical, surfactant-emitting swimmer. The theoretical description comprises the coupled problems of surface tension reduction by surfactant adsorption at the air-water interface including the possibility of surfactant evaporation, fluid flow (both Marangoni flow and flow induced by swimmer motion), diffusion and advection of the surfactant. In particular, advection is systematically included in our approach and turns out to be essential for all swimmer velocities $U \gg D/a$ ($\bar{U} \gg 1$). These three problems are first solved for prescribed swimmer velocity $U$; the actual swimming velocity $U_{\rm swim}$ is determined by force balance between the drag force, the direct Marangoni force from the surface tension contribution at the air-water-swimmer contact line, and the Marangoni flow force. We find that Marangoni flows can either act to increase the direct Marangoni force (at low velocities) or to increase the drag (at higher velocities). For low Reynolds numbers, all theoretical results are supported by numerical FEM simulations. 
Non-dimensionalization shows that the swimmer is controlled by two dimensionless control parameters, the Peclet number (\ref{eq:Pe}), which is the dimensionless emission rate of surfactant, and the Biot number (\ref{eq:Biot}), which is the dimensionless evaporation rate. Evaporation is practically absent for PEG, but strong for other frequently studied Marangoni boat swimmers, such as camphor boats \cite{Soh2008}. We showed that a spontaneous symmetry breaking, i.e., a spontaneous transition into a swimming state is possible also for a completely symmetric swimmer above a critical Peclet number. Spontaneous symmetry breaking resulting in propulsion is possible by establishing an asymmetric surfactant concentration profile that is maintained by advection. We find that the critical Peclet number for this transition approaches zero as an inverse power of the logarithm of the system size, ${\rm Pe}_c \propto 1/(\ln \bar{R})^{3}$. The possibility of such a spontaneous symmetry breaking has been pointed out for autophoretic swimmers \cite{michelin2013,Michelin2014} and liquid Marangoni swimmers \cite{Izri2014} before. Also in these systems, advection by the surrounding fluid can maintain the necessary gradients in the relevant fields and/or concentrations. In Eqs.\ (\ref{eq:PeU}) and (\ref{eq:PeUk}), we obtain the power laws governing the swimming velocity as a function of Peclet and Biot number, which are $\bar{U}_{\rm swim} \propto {\rm Pe}^{3/5}$ without evaporation (PEG) and $\bar{U}_{\rm swim} \propto \bar{k}^{-3/4} {\rm Pe}^{3/4}$ in the presence of strong evaporation (camphor). This demonstrates that additional evaporation reduces swimming speed. Experimentally realizable PEG-alginate or camphor swimmers are operating at moderate Reynolds numbers around 60 or more. Accordingly, we generalized the theoretical approach to higher Reynolds numbers by using the concept of the Nusselt number, for which many results at higher Reynolds numbers are known phenomenologically. 
This might also account for some effects related to the formation of vortices around the swimmer during propulsion at higher Reynolds numbers (see PIV-results in Fig.\ \ref{fig:PIV} and Ref.\ \cite{Sur2019}). Finally, we obtained the swimming relations (\ref{eq:PeUhigh}) and (\ref{eq:PeUhighk}), which give $\bar{U}_{\rm swim} \propto {\rm Pe}^{3/4}$ without evaporation (PEG) and $\bar{U}_{\rm swim} \propto \bar{k}^{-2/3} {\rm Pe}^{1/2}$ in the presence of strong evaporation (camphor). We find a good quantitative fit (without any free fitting parameters) with our own experimental results on PEG-alginate swimmers and the results of Ref.\ \cite{Boniface2019} on camphor swimmers in Fig.\ \ref{fig:comparison}. This is the main result of this paper. Future work should extend the numerical approach to higher Reynolds numbers in order to verify our scaling results for the swimming relation using, for example, the methods introduced in Ref.\ \cite{JafariKang2020}. There are several aspects of the self-propulsion of PEG-alginate capsules where we presented first experimental results but which deserve a much more detailed investigation in future work: curved trajectories, interactions with container walls, and swimmer-swimmer interactions. Curved trajectories as observed in Figs.\ \ref{fig:trajectory} and \ref{fig:trajectory2}, where the swimming direction is at least weakly linked to the slowly turning orientation of the swimmer, can only be explained by small asymmetries of the capsules induced by irregularities in the pore structure. This view is supported by the individual character of the turning characteristics of different swimmers (see Fig.\ \ref{fig:trajectory2}). Future work should explore the relation between capsule irregularities and turning statistics in more detail. Experimentally, purely rotary systems could be constructed \cite{Koyano2017}. PEG-alginate swimmers are repelled by walls. 
In normal collisions we observe direction reversal without reorientation of the swimmer. In the framework of the Marangoni boat mechanism, this can be explained by an accumulation of surfactant emitted by the swimmer in front of the wall because of the zero-flux boundary condition at the wall. Surfactant accumulation creates a gradient in surfactant concentration toward the wall, and the swimmer reverses direction, without changing its orientation, if advection and accumulation balance. This behavior is similar to what has been observed for asymmetric \cite{Hayashima2001} and symmetric \cite{Nagayama2004} camphor boats \cite{Nakata2015}. During the collision the orientation of the swimmer particle does not change, while the swimming direction reverses; therefore, the swimming direction also reverses relative to the particle orientation. This is consistent with a weak symmetry breaking by small irregularities in the pore distribution, which give rise to many possible metastable propulsion directions. A perturbation such as the surfactant accumulation and direction reversal at the wall can easily cause a change between these propulsion directions. There are also more oblique collisions (see Fig.\ \ref{fig:trajectory2}), which take longer and can feature a reorientation of the swimmer. The underlying mechanisms could be similar to the reorientation mechanisms of self-diffusiophoretic swimmers \cite{Uspal2015,Bayati2019}, but this issue also requires future work. Finally, we have experimental evidence that PEG-alginate swimmers interact with each other via their surfactant concentration fields. Similar observations have been made already in Refs.\ \cite{Kohira2001,Heisler2012,Nakata2015}, mostly in channel geometries. We monitored different collisions between swimmers, where we could find both attraction and repulsion. 
In the framework of the Marangoni boat mechanisms, where swimmers prefer to move opposite to surfactant concentration gradients, we expect that swimmers are repelled by their surfactant tails, which represent traces of high concentration. This predicts a kind of ``chemo-repellent'' behavior with respect to the tails. For the interaction between swimmers, the concentration dependence of the surface activity $\kappa$ [the $c_0$-dependence in Eq.\ (\ref{eq:kappa})] can also play an important role \cite{Heisler2012}. The topic of swimmer interactions appears to be very rich and important for applications regarding the swarming of PEG-alginate swimmers; it deserves a much more detailed investigation in future work. \begin{acknowledgement} We acknowledge financial support by the Deutsche Forschungsgemeinschaft via SPP 1726 ``Microswimmers'' (KI 662/7-1, KI 662/7-2, and Re 681/25-1). We thank Monika Meuris and the ZEMM (Zentrum f{\"u}r Elektronenmikroskopie und Materialforschung), TU Dortmund for providing SEM images, and Peter Ehrhard (Department of Biochemical and Chemical Engineering, Fluid Mechanics) for access to PIV measurements. \end{acknowledgement} \section{Authors contributions} H.R. and A.-K.F. conceived the experiments. A.-K.F. performed the experiments. J.K. and H.E. developed the theoretical model and performed the analytic calculations and numerical simulations. J.K. wrote the manuscript with support from H.E. and H.R. All authors discussed the results and commented on the manuscript.
\section{Introduction} Recently the theoretical understanding of quasielastic (QE) lepton-nucleus scattering has received renewed attention not only because of its intrinsic interest but also because reliable calculations of neutrino-nucleus cross sections in the QE domain are essential when addressing fundamental neutrino properties, specifically neutrino masses and the neutrino oscillations that result from those masses. In particular, it has been suggested~\cite{nu} that superscaling~\cite{Day,DS} in electroweak interactions with nuclei, namely the observation that the reduced electron-nucleus cross sections are to a large degree independent of the momentum transfer (scaling of the first kind) and of the nuclear species (scaling of the second kind), can be used as a tool to obtain predictions for neutrino-nucleus cross sections. Owing to the complexity of nuclear dynamics it is not obvious that the nuclear response to an electroweak field superscales. Indeed several effects are expected to break superscaling to some extent: off-shellness, collective nuclear excitations, meson-exchange currents, nucleon-nucleon (NN) correlations. To assess the impact of these contributions in the QE peak region is then of crucial importance. In the present work we explore the role of NN correlations in the QE peak domain by proposing an extension of the Relativistic Fermi Gas (RFG) approach which still includes only on-shell nucleons in an independent-particle model, and we put our efforts into going beyond the degenerate description provided by the extreme RFG. To do this we resort to a model~\cite{Barbaro:2008zv} inspired by the BCS theory of condensed matter physics with appropriate modifications, such as retention of covariance, to adapt it to the high-energy physics of atomic nuclei. 
\section{Longitudinal response and superscaling function} \label{sec:resp} We concentrate here on the longitudinal electromagnetic nuclear response $R_L$, namely the part of the total inclusive electroweak response that is believed to superscale the best~\cite{DS}. All other electroweak responses can be developed using similar arguments to those presented in the following. Within the framework of the plane-wave impulse approximation, where it is assumed that only one vector boson is exchanged between the probe and the nucleus and that this one is absorbed by a single nucleon, the QE longitudinal response function of a nucleus to an external electroweak field bringing three-momentum ${\bf q}$ and energy $\omega$ into the system reads \begin{equation} \label{eq:RL} R_L(q,\omega)={\cal N}\, R_L^{s.n.}(q,\omega)\, \frac{2\pi m_N^2}{q} \int\!\!\!\!\int_\Sigma dp\,d{\cal E}\,\frac{p}{E_p} S(p,{\cal E})\, , \end{equation} where ${\cal N}$ is the appropriate nucleon number ($Z$ for protons and $N$ for neutrons), $R_L^{s.n.}(q,\omega)$ is the corresponding single-nucleon response and $E_p=\sqrt{p^2+m_N^2}$ is the on-shell energy of the struck nucleon, with $m_N$ the nucleon mass. The probability of finding one nucleon in the system is provided by the system's spectral function $S(p,{\cal E})$, which depends on the missing momentum $p$ and on the energy \begin{equation} {\cal E} = \sqrt{p^2+M^{* 2}_{A-1}}-\sqrt{p^2+M^{2}_{A-1}} = \omega-T_N-E_s-T_{A-1}\,. \end{equation} The latter is the excitation energy of the residual nucleus in the reference frame where it moves with momentum $-{\bf p}$ and, neglecting the very small recoiling nucleus kinetic energy $T_{A-1}$, is essentially the missing energy $\omega-T_N$, $T_N$ being the ejected nucleon kinetic energy, minus the separation energy $E_s=M_{A-1}+m_N-M_A$. 
Equation~\eqref{eq:RL} connects the semi-inclusive $(l,l'N)$ reaction with the inclusive $(l,l')$ process assuming that the outgoing nucleon no longer interacts with the residual $(A-1)$ nucleus (absence of final-state interactions). That equation expresses the assumption that the inclusive cross section is to be obtained by integrating the semi-inclusive cross section, summing over struck protons and neutrons. The boundaries of the integration domain $\Sigma$ in the $({\cal E},p)$ plane are found through the energy conservation relation (see \cite{CDM,Barbaro:2008zv} for the explicit expressions). A further approximation underlies \eqref{eq:RL}, namely the factorization of the single-nucleon response $R_L^{s.n.}(q,\omega)$ out of the integral. Actually this response is in general half-off-shell and hence a function not only of $q$ and $\omega$, but also of the energy and momentum of the off-shell struck nucleon, or equivalently of $p$ and ${\cal E}$. In the models being considered in the present study the struck nucleon is in fact on-shell and so $R_L^{s.n.}$ becomes simply the longitudinal response of a moving free nucleon. In this case its dependence upon $p$ and ${\cal E}$ becomes very weak, particularly if one limits the focus only to regions where the spectral function $S(p,{\cal E})$ plays a significant role, and can accordingly be extracted from the integral. Finally, in this study we confine ourselves to dealing with infinite, homogeneous systems, the simplest among them being the RFG model in which the dynamics are controlled by just one parameter, the Fermi momentum $k_F$. 
To explore superscaling it then turns out to be convenient to recast \eqref{eq:RL} in the following form \begin{equation} \label{eq:RL1} R_L(q,\omega)={\cal N}\, R_L^{s.n.}(q,\omega)\, \Lambda \times f(q,\omega)\, , \end{equation} where \begin{equation} \label{eq:K} \Lambda = \frac{1}{k_F} \left(\frac{m_N}{q}\right) \left(\frac{2 m_N T_F}{k_F^2}\right) \simeq \frac{m_N}{k_F q}~, \end{equation} $T_F$ being the Fermi kinetic energy. The function \begin{equation} \label{eq:f} f(q,\omega)= 2\pi m_N k_F \times \frac{k_F^2}{2 m_N T_F}\, \int\!\!\!\!\int_\Sigma dp\,d{\cal E}\,\frac{p}{E_p} S(p,{\cal E})\, , \end{equation} is the so-called {\em superscaling function}. Indeed, as we shall see in the next Section, in the RFG the function $f$ loses any dependence on both $k_F$ and $q$, namely, one has superscaling in the non-Pauli-blocked regime. It remains to be seen what happens in the BCS model, {\it i.e.,} in the presence of correlations. \section{The RFG model and its BCS-inspired extension} \label{sec:model} Before presenting our model for the correlated system, let us shortly recall the Fermi gas result. The key tool for exploring superscaling is the nuclear spectral function $S(p,{\cal E})$. In the RFG model this reads~\cite{CDM,rely} \begin{equation} \label{eq:SRFG} S^{RFG}(p,{\cal E}) = 4\, \theta(k_F-p) \delta({\cal E}-T_F+T_p) \frac{V_A}{A(2\pi)^3}~, \end{equation} where $T_p=E_p-m_N$ is the struck nucleon kinetic energy, $A$ the number of nucleons and $V_A$ the volume enclosing the system. 
The integral \eqref{eq:f} yields the RFG superscaling function~\cite{Alberico:1988bv} \begin{equation} \label{eq:fRFG} f_{RFG}(\psi) = \frac{3}{4} \left(1-\psi^2\right) \theta \left(1-\psi^2\right)~, \end{equation} which depends only on one variable, defined as follows \begin{equation} \label{eq:psi} \psi = \frac{1}{\sqrt{\xi_F}}\times \frac{\lambda-\tau}{\sqrt{(1+\lambda) \tau+\kappa\sqrt{\tau(1+\tau)}}}~, \end{equation} with $\xi_F=T_F/m_N$, $\kappa=\frac{q}{2m_N}$, $\lambda=\frac{\omega}{2m_N}$ and $\tau=\kappa^2-\lambda^2$. As outlined in the Introduction, we now extend the RFG model in order to account for NN correlations by assuming for both the initial ground state ($|BCS>$) and the daughter nucleus ($|D(p)>$) a BCS-like wave function, namely \begin{eqnarray} \label{eq:BCS} |BCS>&=&\prod_k(u_k+v_k a^\dagger_{k\uparrow}a^\dagger_{-k\downarrow})|0> \\ \label{eq:daugh} |D(p)>&=&\frac{1}{|v^\prime_p(p)|}\,a_{p\uparrow} \prod_k[u^\prime_k(p)+v^\prime_k(p) a^\dagger_{k\uparrow}a^\dagger_{-k\downarrow}]|0>~. \end{eqnarray} In the above $|0>$ is the true vacuum and the states are correctly normalized provided that $|u_k|^2+|v_k|^2=1$ and $|u^\prime_k(p)|^2+|v^\prime_k(p)|^2=1$. Note that the $(u,v)$ and $(u^\prime,v^\prime)$ coefficients are a priori different from each other: this point is of crucial relevance for our model, as we shall see below. With the assumption \eqref{eq:BCS} we have a covariant approximation to the nuclear ground state wave function. We have required that the added pairs always occur with back-to-back momenta (hence the net linear momentum of the system in its rest frame is zero) and with opposite helicities (hence the net spin of the ground state is zero). The creation operators add particles with relativistic on-shell spinors. As is well-known, the states \eqref{eq:BCS} and \eqref{eq:daugh} do not correspond to a fixed number of particles, since they are not eigenstates of the operator $\hat n(k)=\sum_s a^\dagger_{k s}a_{k s}$. 
However we can compute the expectation values \begin{eqnarray} \label{eq:nBCS} n_{BCS}(k)&=&<BCS|\hat n(k)|BCS>=|v_k|^2\\ n_{D(p)}(k)&=&<D(p)|\hat n(k)|D(p)> =|v^\prime_k(p)|^2 (1-\delta_{kp})~. \end{eqnarray} and require the particle number ($A$ for the initial state and $A-1$ for the daughter nucleus) to be conserved on the average, which implies the conditions \begin{eqnarray} \label{eq:constr} && \sum_k |v_k|^2 = A \,, \,\,\,\,\,\,\,\,\, \sum_{k\not=p}|v^\prime_k(p)|^2 = \sum_{k}|v^\prime_k(p)|^2-|v^\prime_p(p)|^2 =A-1~. \end{eqnarray} Concerning the energy, we view our system as being constructed in terms of independent quasi-particles, writing accordingly \begin{eqnarray} \label{eq:EBCS} E_{BCS} &=& <BCS|\sum_{ks}E_k a^\dagger_{ks}a_{ks}|BCS>= \sum_k E_k |v_k|^2\\ E_{D(p)} &=& <D(p)|\sum_{ks}E_k a^\dagger_{ks}a_{ks}|D(p)> = \sum_{k\not=p} E_k |v^\prime_k(p)|^2 \nonumber\\ &=& (E_{BCS}-m_N)-T_p |v_p|^2+\sum_{k\not=p} T_k \left[|v^\prime_k(p)|^2-|v_k|^2\right]~, \label{eq:Edau} \end{eqnarray} where in the last equation the constraints (\ref{eq:constr}) have been exploited. Before computing the spectral function, let us write down the expressions for the normalization conditions (\ref{eq:constr}) in the thermodynamic limit $A\to\infty$, $V_A\to\infty$, $A/V_A=\rho_A$, $(A-1)/V_{A-1}=\rho_{A-1}$, namely \begin{eqnarray} && \lim \frac{1}{V_A} \sum_k |v_k|^2 = \int\frac{d^3k}{(2\pi)^3} |v(k)|^2 =\rho_A~, \label{eq:norma} \\ && \lim \frac{1}{V_{A-1}}\left[\sum_k |v^\prime_k(p)|^2 -|v^\prime_p(p)|^2 \right] = \int\frac{d^3k}{(2\pi)^3} |v^\prime(k;p)|^2 = \rho_{A-1}~. \label{eq:normb} \end{eqnarray} Assuming now $\rho_{A-1}=\rho_A\equiv\rho$ clearly entails $v^{\prime\, 2}(k;p)=v^2(k)$, which allows us to drop the last term in \eqref{eq:Edau}. It must be emphasized that the coefficients $v$ and $v^\prime$ become identical in the thermodynamic limit, but are different for finite $A$. 
Hence it is crucial to compute the nuclear energies when $A$ is finite and {\em then} take the thermodynamic limit. We can then proceed to compute the daughter nucleus spectral function \begin{equation} \label{eq:SFBCS} S^{BCS}(p,{\cal E})=\left|<D(p)|a_{p\uparrow}|BCS>\right|^2 \delta\left[{\cal E}-\left(E_{D(p)}-E_{D(k_F)}\right)\right] \frac{V_A}{A(2\pi)^3}~, \end{equation} where $E_{D(k_F)}$ is the energy $E_{D(p)}$ of the daughter nucleus evaluated at that value of $p$ where it reaches its minimum, to be referred to as $k_F$ in the BCS model: \begin{equation} \label{eq:stab} \left.\frac{d E_{D(p)}}{dp}\right|_{p=k_F}=0~. \end{equation} Hence we have, after taking the thermodynamic limit, \begin{equation} {\cal E}(p) = E_{D(p)}-E_{D(k_F)} = T_F |v(k_F)|^2 - T_p |v(p)|^2~. \label{eq:cale} \end{equation} The matrix element in \eqref{eq:SFBCS} can be straightforwardly computed, yielding $|v(p)|^2$. Thus we end up with the expression \begin{equation} S^{BCS}(p,{\cal E}) = |v(p)|^2 \delta\left({\cal E}- T_F |v(k_F)|^2 + T_p |v(p)|^2\right) \frac{1}{(2\pi)^3 \rho}~. \label{eq:SFBCS1} \end{equation} Finally, in order to calculate the superscaling function \eqref{eq:f} what remains to be specified is the integration region $\Sigma$, which in turn requires knowledge of the separation energy. In the present model the latter turns out to be $E_s = -T_F |v(k_F)|^2$. The last ingredient needed to calculate $f$ is the set of coefficients $v(k)$ appearing in the BCS wavefunction. Although in principle these could be computed self-consistently, together with the energies $E_k$, starting from a model Hamiltonian, here we take a more phenomenological approach, choosing the following three-parameter expression \begin{equation} v^2(k)=\frac{c}{e^{\beta(k-\tilde k)}+1}~. \label{eq:v2} \end{equation} Moreover, for the sake of simplicity, we make the assumption $E_k=\sqrt{k^2+m_N^2}$, namely we take the same single-particle energies as in the RFG. 
Next we use the constraints \eqref{eq:norma} and \eqref{eq:stab} to fix the parameters $c$ and $\tilde k$, obtaining \begin{equation} \label{eq:c1} c(\beta,\tilde k) = \pi^2\beta^3\rho \,/\, {\rm Li}_3\left(-e^{\beta\tilde k}\right) \end{equation} and \begin{equation} \label{eq:ktilde} \tilde k=k_F+\frac{1}{\beta}\log\left[\frac{\beta}{k_F} \sqrt{k_F^2+m_N^2} \left(\sqrt{k_F^2+m_N^2}-m_N\right)-1\right]~. \end{equation} As far as the parameter $\beta$ is concerned, it clearly controls both the modifications of the momentum distribution near the Fermi surface (promotion of pairs due to residual NN interactions, both long- and short-range) and also the tail of the momentum distribution due to short-range NN correlations. Indeed, for $\beta$ very large one recovers the familiar $\theta$-distribution of the RFG, while for smaller and smaller $\beta$ more and more particles are pulled out of the Fermi sea and produce a significant tail for the momentum distribution at large momenta. The impact of the physics expressed by the parameter $\beta$ on the superscaling function is explored in the next section. \section{Results} \label{sec:results} In presenting the results obtained using our model it is convenient to start by displaying the behaviour of the parameters $\tilde k$ and $c$, which are fixed by the physical conditions of normalization and stability, versus $\beta$ for given $k_F$. When $\tilde k$, $c$ and $\beta$ are known so are the wave functions of the initial and final nuclei. \begin{figure} \label{fig:fig1} \begin{center} \includegraphics[scale=0.7]{Fig_par.ps} \caption{The parameters $c$ and $\tilde k$, given in (\ref{eq:c1}) and (\ref{eq:ktilde}), respectively, shown as functions of $\beta$ for $k_F=228$ MeV/c and $\rho=k_F^3/(6\pi^2)$. 
The arrow indicates the critical value $\beta_{\rm crit}=0.017$ c/MeV and the horizontal line in panel $a$ corresponds to the Fermi momentum $k_F$.} \end{center} \end{figure} In Fig.~1 the parameters $\tilde k$ and $c$ are plotted versus $\beta$. For large $\beta$ they stay constant (in fact the almost constant value of $\tilde k$ is quite close to the input value $k_F=$ 228 MeV/c) until a critical value $\beta_{\rm crit}=0.017$ c/MeV is reached where $c$ ($\tilde k$) displays a dramatic increase (decrease). This value corresponds to the change of sign of the logarithmic term in \eqref{eq:ktilde}, namely $\beta_{\rm crit}=\frac{2 k_F}{T_F (T_F+m_N)}$. Thus our results appear to point to the existence of a narrow domain of $\beta$ around $\beta_{\rm crit}$, below which the system becomes strongly disrupted by correlations. This has a strong impact on the structure of the superscaling function, as we shall see later. In Fig.~2 we display the momentum distribution \eqref{eq:nBCS} of the initial nucleus for a few values of $\beta$ larger (a) or smaller (b) than $\beta_{\rm crit}$. The progressive development of a tail in the momentum distribution is clearly seen in the figure: for values of $\beta$ lower than $\beta_{\rm crit}$ the nuclear momentum distribution becomes very much extended beyond the Fermi sphere associated with the input value of $k_F$. \begin{figure} \label{fig:fig3} \begin{center} \includegraphics[scale=0.7]{Fig_v2.ps} \caption{Momentum distribution of the initial state, Eq.~(\ref{eq:v2}), evaluated for $k_F=228$ MeV/c, $\rho=k_F^3/(6\pi^2)$ and different values of $\beta$ (in c/MeV) above (a) and below (b) the critical value $\beta_{\rm crit}$.} \end{center} \end{figure} \begin{figure} \label{fig:fig4} \begin{center} \includegraphics[scale=0.5]{Fig4.ps} \caption{The excitation energy $\cal E$ computed according to \eqref{eq:cale} neglecting the last two terms for $k_F=228$ MeV/c, $\rho=k_F^3/(6\pi^2)$, $\beta=0.1$ c/MeV. 
The RFG results are also shown for comparison. } \end{center} \end{figure} The next issue to be addressed is to determine where the spectral function is nonzero in the $({\cal E},p)$ plane. The answer is found in Fig.~3 where the support of the spectral functions of the RFG and of our BCS-inspired model are displayed and compared. Both spectral functions of course are just $\delta$-functions, but concerning their support two major differences distinguish the two: 1) in the range of momenta where both exist the excitation spectrum of the daughter system is substantially softer than the RFG one; 2) for missing momenta larger than $k_F$ the BCS case, unlike the RFG, continues to display a spectrum, which in the thermodynamic limit rises quite suddenly with $p$ until it reaches the value ${\cal E}$ assumes for vanishing missing momentum, namely ${\cal E}_{\rm max} = T_F |v(k_F)|^2$. This energy is reached only at $p=\infty$, but over a large span of momenta ${\cal E}$ remains almost constant, thus corresponding to the situation of an eigenvalue with infinite degeneracy stemming from the symmetry $U(1)$ associated with the particle number conservation. As $p$ is lowered, approaching the Fermi surface, the degeneracy is lifted and we face a situation of a spontaneously broken symmetry, reflected in the structure of our state which contains components of all possible particle number. This situation is strongly reminiscent of superconductivity, where the spontaneous symmetry breaking also occurs in the proximity of the Fermi surface. \begin{figure} \label{fig:fig5} \begin{center} \includegraphics[scale=0.8]{Fig_scaling.ps} \caption{The superscaling function $f$ defined in (\ref{eq:f}) plotted versus the scaling variable (\ref{eq:psi}) in the RFG model (green) and in the present BCS model for three values of $q$ (red: 500 MeV/c, blue: 1000 MeV/c, magenta: 1500 MeV/c) and different values of $\beta$. As usual, $k_F=$228 MeV/c. Data are taken from \cite{Maieron:2001it,JJ}. 
} \end{center} \end{figure} This set of degenerate states has a dramatic impact on the superscaling function $f$, which is displayed in Fig.~4 versus the scaling variable $\psi$ for a few values of $\beta$ and $q$. For comparison the RFG result in \eqref{eq:fRFG} and the averaged experimental data~\cite{Maieron:2001it,JJ} are also shown. One sees that to get $f$ for large positive $\psi$ we have to integrate in the $({\cal E},p)$ plane in domains encompassing large fractions of those degenerate states discussed above. These are thus the cause of the asymmetry of the scaling function with respect to $\psi=0$ appearing in Fig.~4. For $\psi$ large and negative these states are to a large extent excluded from entering into the building up of $f$. The fact that this effect is more and more pronounced as $\beta$ becomes smaller reflects the impact of the tail of the momentum distribution, which indeed grows when $\beta$ decreases and, as a consequence, more degenerate states participate in building up $f$. Note that values of $\beta$ around the critical value yield a tail which is in qualitative agreement with the experimental data. \begin{figure} \label{fig:fig6} \begin{center} \includegraphics[scale=0.7]{Fig_sclog.ps}% \caption{The superscaling function $f$ in the negative $\psi$ region plotted for several values of $q$ (in MeV/c) and $\beta=0.01$ c/MeV. As usual, $k_F=$228 MeV/c.} \end{center} \end{figure} As far as scaling of the first kind is concerned, Fig.~4 shows that this is quickly reached in the vicinity of the QE peak, although not so to the right and to the left of it. A closer examination of the results (see Fig.~5, where $f$ is plotted on a logarithmic scale for a wider $q$-range at $\beta$=0.01 c/MeV) shows that also here the BCS model does scale, however with an onset reached only for $q\simeq$ 1.5 GeV/c, namely for larger momenta than at the QE peak, where the onset already occurs at about 500 MeV/c. 
Also from Fig.~5 it appears that the scaling regime is reached faster to the right than to the left of the QE peak. Moreover, the asymptotic value for $\psi<0$ is approached from below, namely the superscaling function grows with $q$ until it reaches its asymptotic value, in contrast with the experimental findings. This reflects the fact that our model, although appealingly simple, is not able to account for features of this kind. Note that the same trend of approaching first-kind scaling from below is also found within the framework of the Coherent Density Fluctuation Model~\cite{Martin}, where realistic nucleon momentum and density distributions are used~\cite{bulgari}. On the other hand, in relativistic mean-field theory~\cite{Caballero:2005sj} the approach is from above, and thus in better accord with the experimental data. Finally, using the present BCS model, we investigate the second-kind scaling behaviour, namely the dependence of the function $f$ upon the nuclear species. Following the original procedure of Refs.~\cite{DS}, we choose for each nuclear species a momentum $k_A$ (which is a phenomenological parameter, not necessarily coinciding with the Fermi momentum as it must reflect both initial- and final-state interaction effects) and use it in the definition \eqref{eq:psi} of the scaling variable $\psi$ and in the dividing factor \eqref{eq:K}. For simplicity, in the present approach the value of $k_A$ is chosen in order to have all the corresponding superscaling functions coincide at the QE peak, thus realizing superscaling at least where the nuclear response is the largest. The results are displayed in Fig.~6, where each curve corresponds to given $k_F$ and $k_A$. Over much of the range of $\psi$ shown in the figure one sees relatively good second-kind scaling, although the results still point to a sizable violation of second-kind scaling in the scaling domain (large negative $\psi$). 
\begin{figure} \label{fig:fig9} \begin{center} \includegraphics[scale=0.7]{Fig_scalingII.ps} \caption{The superscaling function $f$ plotted versus $\psi$ for several values of the Fermi momentum $k_F$ (in MeV/c) and with $k_A$ devised in such a way that the peaks coincide (see text).} \end{center} \end{figure} \section{Conclusions} \label{sec:concl} In the present study a simple extension of the relativistic Fermi gas model for studies of relatively high-energy inclusive electroweak cross sections has been developed. Starting from the RFG, in which a degenerate gas of nucleons is assumed for the nuclear ground state, in this extension pairs of particles are promoted from below the Fermi surface to above it, yielding a spectral function and a resulting momentum distribution with Fourier components for all values of momentum. In the spirit of the RFG this new model has been constructed in a way that maintains covariance. To summarize our findings, we have shown that, likely because in the BCS spirit we limit ourselves to an independent quasi-particle description of nuclear matter, scaling of the first kind (independence of the momentum transfer $q$) appears to occur not only at the QE peak, but also at both lower and higher energy transfers $\omega$. We found that the onset of first-kind scaling already occurs at momentum transfers of order 500 MeV/c at the QE peak, whereas away from the QE peak the onset only occurs at quite large momentum transfers (of the order of 2 GeV/c). Furthermore, the shape of the superscaling function turns out to be {\em non-symmetric} around the QE peak, being larger to the right and smaller to the left of it, in agreement with experiment and thus lending support to our approach. However, in our model, in the so-called scaling region (below the QE peak), first-kind scaling is reached from below as a function of $q$, which is not what is experimentally found. 
Finally, scaling of the second kind (independence of nuclear species) is shown to be relatively well satisfied, provided that an appropriate momentum scale is chosen for each nuclear species, although some violations appear for large negative $\psi$.
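As a simple numerical check of the kinematics used throughout, one can verify that the scaling variable \eqref{eq:psi} vanishes at the quasielastic peak, $\omega=\sqrt{q^2+m_N^2}-m_N$, where the RFG superscaling function \eqref{eq:fRFG} attains its maximum value $3/4$. The short Python sketch below is illustrative only; it assumes $m_N=939$ MeV and the value $k_F=228$ MeV/c used above.

```python
import math

mN = 939.0   # nucleon mass in MeV (assumed value)
kF = 228.0   # Fermi momentum in MeV/c, as used throughout
xiF = math.sqrt(1.0 + (kF / mN) ** 2) - 1.0   # xi_F = T_F / m_N

def psi(q, omega):
    """Scaling variable of Eq. (psi) for momentum transfer q and energy omega (MeV)."""
    kappa = q / (2.0 * mN)
    lam = omega / (2.0 * mN)
    tau = kappa ** 2 - lam ** 2
    denom = math.sqrt((1.0 + lam) * tau + kappa * math.sqrt(tau * (1.0 + tau)))
    return (lam - tau) / (math.sqrt(xiF) * denom)

def f_RFG(p):
    """RFG superscaling function of Eq. (fRFG)."""
    return 0.75 * (1.0 - p ** 2) if abs(p) < 1.0 else 0.0

q = 1000.0                                     # MeV/c
omega_QE = math.sqrt(q ** 2 + mN ** 2) - mN    # quasielastic peak position
print(psi(q, omega_QE), f_RFG(psi(q, omega_QE)))   # psi ~ 0, f = 0.75
```

That $\psi$ vanishes there follows from $\lambda^2+\lambda=\kappa^2$, i.e., $\lambda=\tau$, at $\omega=\sqrt{q^2+m_N^2}-m_N$.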
\section{Introduction} \label{sec:Introduction} Mathematical expressions are indispensable for describing problems in maths, physics and many other fields. Meanwhile, people have begun to use handwritten mathematical expressions as one natural input mode. However, machine recognition of these handwritten mathematical expressions is difficult and exhibits three distinct challenges~\cite{belaid1984syntactic}, i.e., the complicated two-dimensional structures, enormous ambiguities coming from handwriting input and variant scales of handwritten math symbols. Handwritten mathematical expression recognition comprises two major problems~\cite{chan2000mathematical}: symbol recognition and structural analysis. The two problems can be solved sequentially~\cite{zanibbi2002recognizing} or globally~\cite{alvaro2016integrated}. However, both conventional sequential and global approaches have the following limitations: 1) the challenging symbol segmentation is inevitable, which brings many difficulties; 2) the structural analysis is commonly based on two-dimensional context-free grammars~\cite{chou1989recognition}, which require a priori knowledge to define a math grammar; 3) the complexity of parsing algorithms increases with the size of the math grammar. In recent deep learning research, a novel attention based encoder-decoder model has been proposed~\cite{bahdanau2014neural,sutskever2014sequence}. Its general application in machine translation~\cite{cho2014learning}, speech recognition~\cite{bahdanau2016end}, character recognition~\cite{zhang2017ran,zhang2017trajectory} and image captioning~\cite{xu2015show} suggests that mathematical expression recognition can also be a proper application~\cite{zhang2017watch,zhang2017gru,deng2016you,anh2017training}. More specifically, \cite{zhang2017watch} proposed a model named WAP. The WAP model learns to encode input expression images and decode them into LaTeX strings. 
The encoder is a convolutional neural network (CNN)~\cite{krizhevsky2012imagenet} based on the VGG architecture~\cite{simonyan2014very} that maps images to high-level features. The decoder is a recurrent neural network (RNN)~\cite{graves2013speech} with gated recurrent units (GRU)~\cite{chung2014empirical} that converts these high-level features into output strings one symbol at a time. For each predicted symbol, an attention model built into the decoder scans the entire input expression image and chooses the most relevant region to describe a math symbol or an implicit spatial operator. Compared with conventional approaches for handwritten mathematical expression recognition, the attention based encoder-decoder model possesses three distinctive properties: 1) it is end-to-end trainable; 2) it is data-driven, in contrast to traditional systems that require a predefined math grammar; 3) symbol segmentation can be performed automatically through the attention model. \begin{figure} \centering \includegraphics[width=3in]{under-parsing-example} \caption{An incorrectly recognized example of a handwritten mathematical expression due to the under-parsing problem: the decimal point ``.'' is missed in the predicted LaTeX notation.} \label{fig:under-parsing-example} \end{figure} In this study, we still focus on offline handwritten mathematical expression recognition and report our recent progress on the WAP model~\cite{zhang2017watch}. The main contribution is twofold. Firstly, we improve the CNN encoder by employing a novel architecture called densely connected convolutional networks (DenseNet)~\cite{huang2016densely}. DenseNet has shown excellent performance on image classification tasks as it strengthens feature extraction and facilitates gradient propagation. Secondly, we present a novel multi-scale attention model to deal with the problems caused by pooling operations. 
Although pooling layers are essential parts of convolutional networks, they shrink the feature maps and thereby reduce their resolution. Because the scales of handwritten math symbols vary widely, the fine-grained details of the extracted feature maps are especially important in handwritten mathematical expression recognition, yet they are lost in low-resolution feature maps. For example, in Fig.~\ref{fig:under-parsing-example}, the decimal point is very close to the math symbol ``3'' and its scale is much smaller than that of its adjacent symbols. After several pooling layers, the visual information of the decimal point is gone, which leads to an under-parsing problem. To implement the multi-scale attention model, we propose a multi-scale dense encoder that provides both low-resolution and high-resolution features. The low-resolution features capture a larger receptive field and are more semantic, while the high-resolution features restore more fine-grained visual information. The decoder then attends to both low-resolution and high-resolution features when predicting the output LaTeX strings. The remainder of the paper is organized as follows. In Section~\ref{sec:Methodology}, we introduce the dense encoder and the proposed multi-scale attention model in detail. We describe the implementation of the training and testing procedures in Section~\ref{sec:Training and Testing Details}. The performance of the dense encoder and the multi-scale attention model is demonstrated through experimental results and visualization analysis in Section~\ref{sec:Experiments}. Finally, we conclude this study in Section~\ref{sec:Conclusion}. \section{Methodology} \label{sec:Methodology} In this section, we first briefly summarize DenseNet, since our encoder is based on densely connected convolutional blocks. Then we introduce the classic attention based encoder-decoder framework.
Finally, we extend DenseNet by introducing a multi-scale dense encoder and describe the implementation of the multi-scale attention model in detail. \subsection{Dense Encoder} \label{sec:Dense Encoder} The main idea of DenseNet is to use the concatenation of the output feature maps of all preceding layers as the input of each succeeding layer. As DenseNet is composed of many convolution layers, let $H_{l}(\cdot)$ denote the convolution function of the $l^{\textrm{th}}$ layer; then the output of layer $l$ is represented as: \begin{equation}\label{eq:dense output} \mathbf{x}_{l}=H_{l}([\mathbf{x}_{0};\mathbf{x}_{1};\ldots;\mathbf{x}_{l-1}]) \end{equation} where $\mathbf{x}_{0}, \mathbf{x}_{1},\ldots, \mathbf{x}_{l}$ denote the output features produced in layers $0, 1, \ldots, l$, and ``$;$'' denotes the concatenation operation on feature maps. This iterative connection introduces short paths across different layers and reuses features computed in preceding layers, which strengthens feature extraction and facilitates gradient propagation. An essential part of convolutional networks is the pooling layer, which increases the receptive field and improves invariance. However, pooling layers prevent the concatenation operation because the size of the feature maps changes. Also, DenseNet is inherently memory demanding because the number of inter-layer connections grows quadratically with depth. Consequently, DenseNet is divided into multiple densely connected blocks as shown in Fig.~\ref{fig:multi-scale-dense}. A compression layer is appended before each pooling layer to further improve model compactness. \subsection{Decoder} \label{sec:Decoder with Attention} We employ a GRU as the decoder because it is an improved version of the simple RNN that alleviates the vanishing and exploding gradient problems~\cite{bengio1994learning,zhang2016rnn}.
Given the input ${{\bf{x}}_t}$, the GRU output ${\mathbf{h}_t}$ is computed by: \begin{equation}\label{eq:GRU function} {{\bf{h}}_t} = \textrm{GRU} \left( {{\bf{x}}_t}, {{\bf{h}}_{t - 1}} \right) \end{equation} and the GRU function can be expanded as follows: \begin{align}\label{eq:expandGRU} & {{\mathbf{z}}_t} = \sigma ({{\mathbf{W}}_{xz}}{{\mathbf{x}}_{t}} + {{\mathbf{U}}_{hz}}{{\mathbf{h}}_{t - 1}}) \\ & {{\mathbf{r}}_t} = \sigma ({{\mathbf{W}}_{xr}}{{\mathbf{x}}_{t}} + {{\mathbf{U}}_{hr}}{{\mathbf{h}}_{t - 1}}) \\ & {{\bf{\tilde h}}_t} = \tanh ({{\bf{W}}_{xh}}{{\bf{x}}_{t}} + {{\bf{U}}_{rh}}({{\bf{r}}_t} \otimes {{\bf{h}}_{t - 1}})) \\ & {{\bf{h}}_t} = (1 - {{\bf{z}}_t}) \otimes {{\bf{h}}_{t - 1}} + {{\bf{z}}_t} \otimes {{\bf{\tilde h}}_t} \end{align} where $\sigma$ is the sigmoid function and $\otimes$ is an element-wise multiplication operator. ${{\mathbf{z}}_t}$, ${{\mathbf{r}}_t}$ and ${{\bf{\tilde h}}_t}$ are the update gate, reset gate and candidate activation, respectively. Assuming the output of the CNN encoder is a three-dimensional array of size $H \times W \times C$, we consider it as a variable-length grid of $L$ elements, $L=H \times W$, where each element is a $C$-dimensional annotation that corresponds to a local region of the image: \begin{equation}\label{eq:annotations} \mathbf{A} = \left\{ { \mathbf{a}_1, \ldots ,\mathbf{a}_L} \right\}\;,\;{{\mathbf{a}}_i} \in {\mathbb{R}^C} \end{equation} Meanwhile, the GRU decoder is employed to generate a LaTeX string corresponding to the input mathematical expression. The output string $\mathbf{Y}$ is represented as a sequence of one-hot encoded symbols: \begin{equation}\label{eq:outputY} \mathbf{Y} = \left\{ { \mathbf{y}_1, \ldots ,\mathbf{y}_T} \right\}\;,\;{{\mathbf{y}}_i} \in {\mathbb{R}^K} \end{equation} where $K$ is the number of symbols in the vocabulary and $T$ is the length of the LaTeX string.
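These two representations can be sketched concretely; the sizes below are toy values, since the actual $H$, $W$, $C$ and $K$ depend on the trained network and vocabulary.

```python
import numpy as np

H, W, C = 2, 3, 4                  # toy size of the encoder output
K = 5                              # toy vocabulary size

# flatten the H x W x C feature array into L = H*W annotations of dimension C
features = np.arange(H * W * C, dtype=float).reshape(H, W, C)
A = features.reshape(H * W, C)     # A = {a_1, ..., a_L}, each a_i in R^C

# one-hot encode a toy target symbol sequence of length T = 3
symbols = [2, 0, 3]                # indices into the vocabulary
Y = np.eye(K)[symbols]             # Y = {y_1, ..., y_T}, each y_i in R^K
```

Each row of `A` corresponds to one local region of the input image; each row of `Y` is a one-hot vector with a single 1 at the symbol's vocabulary index.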
Note that neither the annotation sequence $\mathbf{A}$ nor the LaTeX string $\mathbf{Y}$ has fixed length. To associate variable-length annotation sequences with variable-length output sequences, we compute an intermediate fixed-length vector ${{\mathbf{c}}_t}$, namely the context vector, at each decoding step $t$. The context vector ${{\mathbf{c}}_t}$ is computed as a weighted sum of the variable-length annotations ${{\mathbf{a}}_i}$: \begin{equation}\label{eq:context vector} {{\mathbf{c}}_t} = \sum\nolimits_{i=1}^L {{\alpha _{ti}}{{\mathbf{a}}_i}} \end{equation} Here, the weighting coefficients $\alpha _{ti}$ are called attention probabilities; they indicate which part of the input image is the most suitable place to attend to when generating the next predicted symbol, and assign a higher weight to the corresponding local annotation vectors ${{\mathbf{a}}_i}$. Given the fixed-length context vector, we then generate the LaTeX string one symbol at a time, which resolves the problem of associating variable-length annotation sequences with variable-length output LaTeX strings.
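The weighted sum above can be sketched as follows; the annotations and attention energies are random placeholders, since in the model the energies come from the coverage based attention mechanism described later.

```python
import numpy as np

def context_vector(annotations, energies):
    """Turn raw energies into attention probabilities via softmax,
    then return the weighted sum of the L annotation vectors."""
    e = energies - energies.max()          # subtract max for numerical stability
    alpha = np.exp(e) / np.exp(e).sum()    # attention probabilities, sum to 1
    return alpha @ annotations             # (L,) @ (L, C) -> (C,)

L, C = 6, 4                                # toy grid of 6 annotations
rng = np.random.default_rng(1)
A = rng.standard_normal((L, C))
c_t = context_vector(A, rng.standard_normal(L))
```

Because the attention probabilities form a convex combination, each component of `c_t` lies between the minimum and maximum of the corresponding annotation components.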
The probability of each predicted symbol is computed from the context vector ${{\mathbf{c}}_t}$, the current decoder state ${{\mathbf{s}}_t}$ and the previous target symbol ${{\mathbf{y}}_{t - 1}}$ using the following equation: \begin{equation}\label{eq:computePy} p({{\mathbf{y}}_t}|{{{\mathbf{y}}_{t - 1}},\mathbf{X}}) = g \left ({{\mathbf{W}}_o}h({\mathbf{E}}{{\mathbf{y}}_{t - 1}} + {{\mathbf{W}}_s}{{\mathbf{s}}_t} + {{\mathbf{W}}_c}{{\mathbf{c}}_t})\right ) \end{equation} where $\mathbf{X}$ denotes the input mathematical expression image, $g$ denotes a softmax activation function, $h$ denotes a maxout activation function, and ${\mathbf{E}}$ denotes the embedding matrix. Letting $m$ and $n$ denote the dimensions of the embedding and the GRU decoder state respectively, we have ${{\mathbf{W}}_o} \in {\mathbb{R}^{K \times \frac{m}{2}}}$ and ${{\mathbf{W}}_s} \in {\mathbb{R}^{m \times n}}$. \subsection{Multi-Scale Attention with Dense Encoder} \label{sec:Multi-Scale Attention with Dense Encoder} \subsubsection{Multi-Scale Dense Encoder} \label{sec:Multi-Scale Dense Encoder} \begin{figure} \centering \includegraphics[width=3.25in]{multi-scale-dense} \caption{Architecture of the multi-scale dense encoder. The left part is the main branch while the right part is the multi-scale branch.} \label{fig:multi-scale-dense} \end{figure} To implement the multi-scale attention model, we first extend the single-scale dense encoder into a multi-scale dense encoder. As illustrated in Fig.~\ref{fig:multi-scale-dense}, our dense encoder consists of two branches: in addition to the main branch, which produces low-resolution annotations $\mathbf{A}$, it has a multi-scale branch that produces high-resolution annotations $\mathbf{B}$. The multi-scale branch splits off before the last pooling layer of the main branch so that its output feature maps have a higher resolution.
The high-resolution annotation is a three-dimensional array of size $2H \times 2W \times {C^{'}}$, which can be represented as a variable-length grid of $4L$ elements: \begin{equation}\label{eq:new annotations} \mathbf{B} = \left\{ { \mathbf{b}_1, \ldots ,\mathbf{b}_{4L}} \right\}\;,\;{{\mathbf{b}}_i} \in {\mathbb{R}^{C^{'}}} \end{equation} where $L$ is the length of the annotation sequence $\mathbf{A}$. In principle, we could attach a multi-scale branch before every pooling layer, but doing so incurs too much computational cost because the feature maps become too large. As for the implementation details of the dense encoder, we employ three dense blocks in the main branch, depicted as yellow rectangles in Fig.~\ref{fig:multi-scale-dense}. Before entering the first dense block, a $7 \times 7$ convolution (with stride $2 \times 2$) with 48 output channels is applied to the input expression images, followed by a $2 \times 2$ max pooling layer. Each dense block is labeled ``DenseB'' because we use bottleneck layers to improve computational efficiency, i.e., a $1 \times 1$ convolution is introduced before each $3 \times 3$ convolution to reduce the input to $4k$ feature maps. The growth rate is $k=24$ and the depth (number of convolution layers) of each block is $D=32$, which means each block has 16 $1 \times 1$ convolution layers and 16 $3 \times 3$ convolution layers. A batch normalization layer~\cite{ioffe2015batch} and a ReLU activation layer~\cite{glorot2011deep} follow each convolution layer. We use a $1 \times 1$ convolution followed by $2 \times 2$ average pooling as the transition layer between two contiguous dense blocks; the transition layer halves the number of feature maps. In the multi-scale branch, we append another dense block with bottleneck layers, $k=24$ and $D=16$. We investigate the depth of the block in the multi-scale branch ($D=0, 8, 16, 24$) in Section~\ref{sec:Experiments}.
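Under the stated hyperparameters (a 48-channel stem, three main-branch blocks of depth $D=32$ with growth rate $k=24$, and transition layers that halve the channel count), the channel bookkeeping of the main branch can be tallied with a short sketch; the resulting counts are our own arithmetic from these settings, not numbers reported here.

```python
def dense_block_channels(c_in, depth, growth_rate):
    """Each 1x1 + 3x3 bottleneck pair appends growth_rate feature maps;
    `depth` counts both the 1x1 and 3x3 convolution layers."""
    pairs = depth // 2
    return c_in + pairs * growth_rate

k = 24
c = 48                            # channels after the initial 7x7 convolution
for block in range(3):            # three dense blocks in the main branch
    c = dense_block_channels(c, depth=32, growth_rate=k)
    if block < 2:                 # transition layers halve the channel count
        c = c // 2
```

With these assumptions, the three blocks output 432, 600 and 684 channels respectively (216 and 300 after the two transitions).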
\subsubsection{Multi-Scale Attention Model} \label{sec:Multi-Scale Attention Model} In this study, our decoder adopts two unidirectional GRU layers to calculate the decoder state ${\mathbf{s}_t}$ and the multi-scale context vector ${\mathbf{c}_t}$, both of which are used to compute the probability of the predicted symbol in Eq.~\eqref{eq:computePy}. We employ two single-scale coverage based attention models to generate the low-resolution and high-resolution context vectors by attending to the low-resolution and high-resolution annotations respectively. Since both context vectors are fixed-length vectors, we concatenate them to produce the multi-scale context vector: \begin{align}\label{eq:compute decoder state} & {{\mathbf{\hat s}}_t} = \textrm{GRU} \left( {{\bf{y}}_{t-1}}, {{\bf{s}}_{t - 1}} \right) \\ & {\mathbf{cA}_t} = f_{\text{catt}} \left( \mathbf{A}, \mathbf{\hat s}_t \right) \\ & {\mathbf{cB}_t} = f_{\text{catt}} \left( \mathbf{B}, \mathbf{\hat s}_t \right) \\ & {\mathbf{c}_t} = [{\mathbf{cA}_t};{\mathbf{cB}_t}] \\ & {{\mathbf{s}}_t} = \textrm{GRU} \left( {{\mathbf{c}}_t}, {{\mathbf{\hat s}}_t} \right) \end{align} where ${{\mathbf{s}}_{t-1}}$ denotes the previous decoder state, ${{\mathbf{\hat s}}_t}$ is the prediction of the current decoder state, ${\mathbf{cA}_t}$ is the low-resolution context vector at decoding step $t$, and ${\mathbf{cB}_t}$ is the high-resolution context vector. The multi-scale context vector ${\mathbf{c}_t}$ is the concatenation of ${\mathbf{cA}_t}$ and ${\mathbf{cB}_t}$ and serves as the input when computing the current decoder state ${{\mathbf{s}}_{t}}$. $f_{\text{catt}}$ denotes a single-scale coverage based attention model.
Taking the computation of the low-resolution context vector ${\mathbf{cA}_t}$ as an example, we parameterize $f_{\text{catt}}$ as a multi-layer perceptron: \begin{align}\label{eq:coverage attention} & {\mathbf{F}} = {\mathbf{Q}} * \sum\nolimits_{l=1}^{t - 1} {{{\bm{\alpha}}_l}} \\ & {e_{ti}} = {\bm{\nu }}_{\text{att}}^{\rm T}\tanh ({{\mathbf{U}}_{s}}{{\mathbf{\hat s}}_t} + {{\mathbf{U}}_a}{{\mathbf{a}}_i} + {{\mathbf{U}}_f}{{\mathbf{f}}_i}) \\ & {\alpha _{ti}} = \frac{{\exp ({e_{ti}})}}{{\sum\nolimits_{k = 1}^L {\exp ({e_{tk}})} }} \\ & {{\mathbf{cA}}_t} = \sum\nolimits_{i=1}^L {{\alpha _{ti}}{{\mathbf{a}}_i}} \end{align} where ${{\mathbf{a}}_{i}}$ denotes an element of the low-resolution annotation sequence $\mathbf{A}$, and ${e_{ti}}$ denotes the energy of ${{\mathbf{a}}_{i}}$ at time step $t$ conditioned on the prediction of the current decoder state ${{\mathbf{\hat s}}_t}$ and the coverage vector ${{\mathbf{f}}_i}$. The coverage vector is initialized as a zero vector and is computed from the summation of all past attention probabilities; hence it records the alignment history. We include the coverage vector in the attention model so that the decoder knows which parts of the input image have already been attended to~\cite{zhang2017watch,tu2016modeling}. Let $n'$ denote the attention dimension and $q$ the number of output channels of the convolution function $\mathbf{Q}$; then ${{\bm{\nu }}_{\text{att}}} \in {\mathbb{R}^{{n'}}}$, ${{\mathbf{U}}_s} \in {\mathbb{R}^{{n'} \times n}}$, ${{\mathbf{U}}_a} \in {\mathbb{R}^{{n'} \times C}}$ and ${{\mathbf{U}}_{f}} \in {\mathbb{R}^{{n'} \times q}}$. The high-resolution context vector ${\mathbf{cB}_t}$ is computed by another coverage based attention model $f_{\text{catt}}$ whose parameters are initialized separately, except for the shared transition matrix ${{\mathbf{U}}_s}$.
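A minimal sketch of the single-scale coverage based attention $f_{\text{catt}}$ follows; for brevity the convolution $\mathbf{Q}$ over past attention maps is reduced to a one-channel positional sum ($q=1$), and all matrices are random stand-ins for learned parameters.

```python
import numpy as np

def softmax(e):
    e = e - e.max()
    return np.exp(e) / np.exp(e).sum()

def coverage_attention(A, s_hat, past_alphas, params):
    """Score each annotation a_i from the predicted decoder state s_hat and
    a coverage feature summarizing where attention has already been placed."""
    nu, Us, Ua, Uf = params
    F = past_alphas.sum(axis=0, keepdims=True).T      # (L, 1) coverage feature
    # e_ti = nu^T tanh(Us s_hat + Ua a_i + Uf f_i), vectorized over i
    e = np.tanh(s_hat @ Us.T + A @ Ua.T + F @ Uf.T) @ nu
    alpha = softmax(e)
    return alpha @ A, alpha                           # context vector, weights

L, C, n, n_att = 5, 4, 3, 6                           # toy dimensions
rng = np.random.default_rng(2)
params = (rng.standard_normal(n_att),                 # nu_att
          rng.standard_normal((n_att, n)),            # U_s
          rng.standard_normal((n_att, C)),            # U_a
          rng.standard_normal((n_att, 1)))            # U_f with q = 1
past = np.zeros((1, L))                               # no attention history yet
cA_t, alpha_t = coverage_attention(rng.standard_normal((L, C)),
                                   rng.standard_normal(n), past, params)
```

Appending `alpha_t` to `past` before the next step is what lets the coverage feature discourage re-attending to already-described regions.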
\section{Training and Testing Details} \label{sec:Training and Testing Details} We validated the proposed model on the CROHME 2014~\cite{mouchere2014icfhr} and CROHME 2016~\cite{mouchere2016icfhr2016} test sets. The CROHME competition dataset is currently the most widely used public dataset for handwritten mathematical expression recognition. The training set has 8,836 expressions covering 101 math symbol classes. The CROHME 2014 test set has 986 expressions while the CROHME 2016 test set has 1,147 expressions. \subsection{Training} \label{sec:Training} The training objective of the proposed model is to maximize the predicted symbol probability as shown in Eq.~\eqref{eq:computePy}, and we use cross-entropy (CE) as the objective function: \begin{equation}\label{eq:objective} O = - \sum\nolimits_{t=1}^T \log p({w_t}|{\mathbf{y}_{t-1},\mathbf{x}}) \end{equation} where $w_t$ represents the ground truth word at time step $t$. The implementation details of the dense encoder have been introduced in Section~\ref{sec:Multi-Scale Dense Encoder}. The decoder is a single layer with 256 forward GRU units. The embedding dimension $m$ and decoder state dimension $n$ are both set to 256. The multi-scale attention dimension $n'$ is set to 512. The convolution kernel size for computing the low-resolution coverage vector is set to $11 \times 11$ and to $7 \times 7$ for the high-resolution coverage vector, while the number of convolution filters is set to 256 for both. We utilized the adadelta algorithm~\cite{zeiler2012adadelta} with gradient clipping for optimization. The best model is selected in terms of the word error rate (WER) on the validation set. We used a weight decay of ${10^{ - 4}}$ and added a dropout layer~\cite{srivastava2014dropout} after each convolutional layer with a drop rate of 0.2. \subsection{Decoding} \label{sec:Decoding} In the decoding stage, we aim to generate the most likely LaTeX string given the input image.
However, unlike in the training procedure, the ground truth of the previously predicted symbol is not available. Consequently, a simple left-to-right beam search algorithm~\cite{cho2015natural} is employed to implement the decoding procedure. We maintained a set of 10 partial hypotheses at each time step, each ending with the end-of-sentence token $<eos>$. We also adopted ensembling~\cite{dietterich2000ensemble} to improve the performance: we first trained 5 models on the same training set but with differently initialized parameters, and then averaged their prediction probabilities during the beam search process. \section{Experiments} \label{sec:Experiments} We designed a set of experiments to validate the effectiveness of the proposed method for handwritten mathematical expression recognition by answering the following questions: \begin{description} \item[Q1] Is the dense encoder effective? \item[Q2] Is the multi-scale attention model effective? \item[Q3] Does the proposed approach outperform the state of the art? \end{description} \subsection{Metric} \label{sec:Metric} The participating systems in all of the CROHME competitions were ranked by the expression recognition rate (ExpRate), i.e., the percentage of predicted mathematical expressions matching the ground truth; it is simple to understand and provides a useful global performance measure. The CROHME competition also compared the competing systems by the percentage of expressions recognized with at most one to three symbol-level errors. In our experiments, we first converted the generated LaTeX strings into MathML representation and then computed these metrics using the official tool provided by the CROHME organizers. However, evaluating an expression recognition system only at the expression level seems insufficient, so we also evaluated our system at the symbol level using the WER metric.
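Symbol-level WER is the edit distance between the predicted and reference symbol sequences divided by the reference length. A standard dynamic-programming sketch (our own illustration, not the official CROHME evaluation tool) is:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance over symbol sequences; substitutions,
    insertions and deletions all cost 1."""
    d = list(range(len(hyp) + 1))          # distances from the empty prefix
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # delete r
                                   d[j - 1] + 1,     # insert h
                                   prev + (r != h))  # substitute r -> h
    return d[-1]

def wer(ref, hyp):
    return edit_distance(ref, hyp) / len(ref)

# the under-parsing example above: "3 . 5" predicted without the point
score = wer(["3", ".", "5"], ["3", "5"])   # one deletion over 3 symbols
```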
\subsection{Evaluation of dense encoder (Q1)} \label{sec:Evaluation of dense encoder (Q1)} We take WAP~\cite{zhang2017watch} as the starting point for the proposed multi-scale attention model with dense encoder. As shown in Table~\ref{tab:1}, WAP achieves an ExpRate of 44.4\% on the CROHME 2014 test set and an ExpRate of 42.0\% on the CROHME 2016 test set. WAP employs an encoder based on the VGG architecture, and its decoder is a unidirectional GRU equipped with a coverage based attention model. Here, we only replace the VGG encoder with the dense encoder, keeping the other settings unchanged; the new system is named ``Dense'' in Table~\ref{tab:1}. The implementation details of the dense encoder are illustrated by the main branch in Fig.~\ref{fig:multi-scale-dense}. We observe that the ExpRate increases by about 5.7\% on CROHME 2014 and 5.5\% on CROHME 2016 when employing the dense encoder. \begin{table}[h] \caption{\label{tab:1}{Comparison of recognition performance (in \%) on CROHME 2014 and CROHME 2016 when employing dense encoder and multi-scale attention model.}} \centering \begin{tabular}{c c c c c} \toprule \multirow{2}{*}{System} & \multicolumn{2}{c}{CROHME 2014} & \multicolumn{2}{c}{CROHME 2016} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} & WER & ExpRate & WER & ExpRate\\ \midrule \textbf{WAP} & 19.4 & 44.4 & 19.7 & 42.0 \\ \textbf{Dense} & 13.9 & 50.1 & 15.4 & 47.5 \\ \textbf{Dense+MSA} & 12.9 & 52.8 & 13.7 & 50.1 \\ \bottomrule \end{tabular} \end{table} \subsection{Evaluation of multi-scale attention model (Q2)} \label{sec:Evaluation of multi-scale attention model (Q2)} In Table~\ref{tab:1}, the system ``Dense+MSA'' is the proposed multi-scale attention model with dense encoder. ``+MSA'' means that we only replace the single-scale coverage based attention model in system ``Dense'' with the multi-scale coverage based attention model. The effect of the multi-scale attention model can be observed by comparing system ``Dense'' with system ``Dense+MSA''.
The ExpRate increases from 50.1\% to 52.8\% on CROHME 2014 and from 47.5\% to 50.1\% on CROHME 2016 after introducing the multi-scale attention model. More specifically, in the system ``Dense+MSA'', the multi-scale branch of the dense encoder contains a dense block with depth $D=16$. We chose $D=16$ after investigating the depth of the block in the multi-scale branch ($D=0, 8, 16, 24$) experimentally. In Table~\ref{tab:2}, $D=0$ means that we simply take the output of the last transition convolutional layer in the main branch of the dense encoder as the high-resolution annotations. The performance is only slightly improved compared with system ``Dense'' in Table~\ref{tab:1}, which implies that more convolution operations are necessary to obtain more semantic high-resolution annotations. We observe that $D=16$ is the best setting for both the CROHME 2014 and 2016 test sets. The degraded results for $D=24$ indicate that too many convolution operations in the multi-scale branch can also lead to performance degradation. \begin{table}[h] \caption{\label{tab:2}{Comparison of recognition performance (in \%) on CROHME 2014 and CROHME 2016 when increasing the depth of dense block in multi-scale branch.}} \centering \begin{tabular}{c c c c c} \toprule \multirow{2}{*}{Depth} & \multicolumn{2}{c}{CROHME 2014} & \multicolumn{2}{c}{CROHME 2016} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} & WER & ExpRate & WER & ExpRate\\ \midrule \textbf{0} & 13.5 & 50.8 & 14.3 & 48.5 \\ \textbf{8} & 13.3 & 51.3 & 13.9 & 49.3 \\ \textbf{16} & 12.9 & 52.8 & 13.7 & 50.1 \\ \textbf{24} & 13.1 & 51.4 & 14.1 & 48.9 \\ \bottomrule \end{tabular} \end{table} We also illustrate the behavior of the multi-scale attention model in Fig.~\ref{fig:attention_visualization}. The left part of Fig.~\ref{fig:attention_visualization} shows the visualization of single-scale attention on low-resolution annotations and the right part shows the visualization of multi-scale attention on high-resolution annotations only.
Fig.~\ref{fig:attention_visualization} (a) shows an example in which the decimal point ``.'' is under-parsed when relying only on the low-resolution attention model; the high-resolution attention in the multi-scale attention model successfully detects the decimal point. Fig.~\ref{fig:attention_visualization} (b) shows an example in which the math symbols ``- 1'' are mis-parsed as ``7'' by the low-resolution attention model, while the high-resolution attention model recognizes them correctly. \begin{figure} \centering \includegraphics[width=2.5in]{attention_visualization} \caption{Two examples of attention visualization on low-resolution annotations and on high-resolution annotations. The attention probabilities are shown in red and the predicted symbols are shown to the right of the images.} \label{fig:attention_visualization} \end{figure} \subsection{Comparison with state-of-the-arts (Q3)} \label{sec:Comparison with state-of-the-arts (Q3)} \begin{table}[h] \caption{\label{tab:3}{Comparison of ExpRate (in \%) on CROHME 2014; we exclude system \uppercase\expandafter{\romannumeral3}, namely MyScript, because it used extra training data.}} \centering \begin{tabular}{c c c c c} \toprule \textbf{System} & \textbf{Correct(\%)} & \textbf{$\leq$ 1(\%)} & \textbf{$\leq$ 2(\%)} & \textbf{$\leq$ 3(\%)}\\ \midrule \uppercase\expandafter{\romannumeral1} & 37.2 & 44.2 & 47.3 & 50.2 \\ \uppercase\expandafter{\romannumeral2} & 15.0 & 22.3 & 26.6 & 27.7 \\ \uppercase\expandafter{\romannumeral4} & 19.0 & 28.2 & 32.4 & 33.4 \\ \uppercase\expandafter{\romannumeral5} & 19.0 & 26.4 & 30.8 & 33.0 \\ \uppercase\expandafter{\romannumeral6} & 25.7 & 33.2 & 35.9 & 37.3 \\ \uppercase\expandafter{\romannumeral7} & 26.1 & 33.9 & 38.5 & 40.0 \\ \midrule \textbf{WAP} & \textbf{44.4} & \textbf{58.4} & \textbf{62.2} & \textbf{63.1} \\ \textbf{CRNN} & \textbf{35.2} & \textbf{-} & \textbf{-} & \textbf{-} \\ \textbf{Ours} & \textbf{52.8} & \textbf{68.1} & \textbf{72.0} & \textbf{72.7} \\ \bottomrule
\end{tabular} \end{table} The comparison between the proposed approach and others on the CROHME 2014 test set is listed in Table~\ref{tab:3}. Systems \uppercase\expandafter{\romannumeral1} to \uppercase\expandafter{\romannumeral7} were systems submitted to the CROHME 2014 competition, mostly based on the traditional two-dimensional context-free grammar method. Details of these systems can be found in \cite{mouchere2014icfhr}. To make a fair comparison, we exclude system \uppercase\expandafter{\romannumeral3}, namely ``MyScript'', which achieved a high ExpRate of 62.7\% but used a large private dataset, with its technical details unrevealed. The systems ``WAP'', ``CRNN'' and our proposed system are all based on the attention based encoder-decoder model and take handwritten mathematical expressions as image input. As for the system ``CRNN'', \cite{anh2017training} states that its encoder employs a CNN+RNN architecture and its decoder is a unidirectional RNN with a classic attention model; a novel data augmentation method for handwritten mathematical expression recognition was also proposed in~\cite{anh2017training}. We can see that our proposed system achieves the best result, with an ExpRate of 52.8\% on CROHME 2014. Additionally, the gap between the correct percentage and the $\leq$ 1 error percentage shows that the corresponding systems still have considerable room for improvement, while the small differences between the $\leq$ 2 and $\leq$ 3 error percentages illustrate that it is difficult to improve the ExpRate by incorporating a single correction.
\begin{table}[h] \caption{\label{tab:CROHME2016}{Comparison of ExpRate (in \%) on CROHME 2016; we exclude team MyScript because it used extra training data.}} \centering \begin{tabular}{c c c c c} \toprule \textbf{} & \textbf{Correct(\%)} & \textbf{$\leq$ 1 (\%)} & \textbf{$\leq$ 2 (\%)} & \textbf{$\leq$ 3 (\%)}\\ \midrule Wiris & 49.6 & 60.4 & 64.7 & -- \\ Tokyo & 43.9 & 50.9 & 53.7 & -- \\ S$\widetilde{\textbf{a}}$o Paolo & 33.4 & 43.5 & 49.2 & -- \\ Nantes & 13.3 & 21.0 & 28.3 & -- \\ \midrule \textbf{WAP} & \textbf{42.0} & \textbf{55.1} & \textbf{59.3} & \textbf{60.2} \\ \textbf{Ours} & \textbf{50.1} & \textbf{63.8} & \textbf{67.4} & \textbf{68.5} \\ \bottomrule \end{tabular} \end{table} To provide a more recent comparison and to test the generalization capability of our approach, we also validated our best system on the CROHME 2016 test set, as shown in Table~\ref{tab:CROHME2016}; it achieves an ExpRate of 50.1\%, a promising result compared with the other participating systems. The system ``Wiris'' was awarded first place in the CROHME 2016 competition, using only the CROHME training data, with an ExpRate of 49.6\%; it also used a Wikipedia formula corpus, consisting of more than 592,000 LaTeX notations of mathematical expressions, to train a strong language model. The details of the other systems can be found in \cite{mouchere2016icfhr2016}. \section{Conclusion} \label{sec:Conclusion} In this study we improve the performance of the attention based encoder-decoder model for handwritten mathematical expression recognition by introducing the dense encoder and the multi-scale attention model. This is the first work that employs densely connected convolutional networks for handwritten mathematical expression recognition, and we propose the novel multi-scale attention model to alleviate the problems caused by pooling operations.
We demonstrate through attention visualization and experimental results that the proposed multi-scale attention model with dense encoder performs better than the state-of-the-art methods. \bibliographystyle{IEEEtran}
\section{Upper Banach density} Let $\ensuremath{ \mathbf N }$, $\ensuremath{ \mathbf N }_0$, and \ensuremath{\mathbf Z}\ denote, respectively, the sets of positive integers, nonnegative integers, and integers. Let $|S|$ denote the cardinality of the set $S$. We define the \emph{interval of integers} \[ [x,y] = \{n \in \ensuremath{ \mathbf N } : x \leq n \leq y\}. \] Let $A$ be a set of positive integers. Let $n \in \ensuremath{ \mathbf N }$. For all $u \in \ensuremath{ \mathbf N }_0$, we have \[ |A \cap [u,u+n-1] | \in [0,n] \] and so \[ f_A(n) = \max_{u \in \ensuremath{ \mathbf N }_0} \left| A \cap [u,u+n-1] \right| \] exists. The \emph{upper Banach density} of $A$ is \[ \delta(A) = \limsup_{n \rightarrow \infty} \frac{f_A(n)}{n}. \] Let $n_1,n_2 \in \ensuremath{ \mathbf N }$. There exists $u^*_1 \in \ensuremath{ \mathbf N }_0$ such that, with $u^*_2 = u^*_1 + n_1$, \begin{align*} f_A(n_1+n_2) & = \left| A \cap [u^*_1, u^*_1+ n_1 + n_2 -1] \right| \\ & = \left| A \cap [u^*_1, u^*_1+ n_1 -1] \right| + \left| A \cap [u^*_1+n_1, u^*_1+ n_1 + n_2 -1] \right| \\ & = \left| A \cap [u^*_1, u^*_1+ n_1 -1] \right| + \left| A \cap [u^*_2, u^*_2 + n_2 -1] \right| \\ & \leq f_A(n_1) + f_A(n_2). \end{align*} It is well known, and proved in the Appendix, that this inequality implies that \[ \delta(A) = \lim_{n \rightarrow \infty} \frac{f_A(n)}{n} = \inf_{n \in \ensuremath{ \mathbf N }} \frac{f_A(n)}{n}. \] \section{An Erd\H os sumset conjecture} About 40 years ago, Erd\H os conjectured that if $A$ is a set of positive integers of positive upper Banach density, then there exist infinite sets $B$ and $C$ of positive integers such that $B+C \subseteq A$. This conjecture has not yet been verified or disproved. The \emph{translation}\index{translation} of the set $X$ by $t$ is the set \[ X+t = \{x + t : x\in X\}. \] Let $B$ and $C$ be sets of integers. For every integer $t$, if $B' = B+t$ and $C' = C - t$, then \[ B' + C' = (B+t) + (C-t) = B+C. 
\] In particular, if $C$ is bounded below and $t = \min(C)$, then $0 = \min(C')$ and $B' \subseteq B' + C'$. It follows that if $B$ and $C$ are infinite sets such that $B+C\subseteq A$, then, by translation, there exist infinite sets $B'$ and $C'$ such that $B' \subseteq A$ and $B' +C' \subseteq A$. However, a set $A$ with positive upper Banach density does not necessarily contain infinite subsets $B$ and $C$ with $B+C \subseteq A$. For example, let $A$ be any set of odd numbers. For all sets $B$ and $C$ of odd numbers, the sumset $B+C$ is a set of even numbers, and so $A \cap (B+C) = \emptyset$. Of course, in this example we have $B+C \subseteq A+1$. In this note we prove that if $A$ is a set of positive integers with upper Banach density $\delta(A) = 1$, then for every $h \geq 2$ there exist pairwise disjoint subsets $B_1,\ldots, B_h$ of $A$ such that $\delta(B_i) = 1$ for all $i=1,\ldots, h$ and \[ B_1 + \cdots + B_h \subseteq A. \] Indeed, Theorem~\ref{sumsets:theorem:I-sequence} states an even stronger result. There are sets $A$ of upper Banach density 1 for which no infinite subset $B$ of $A$ satisfies $2B \subseteq A + t$ for any integer $t$. A simple example is \[ A = \bigcup_{i=1}^{\infty} \left[ 4^{i}, 4^{i} + i - 1\right]. \] The set $A$ is the union of the infinite sequence of pairwise disjoint intervals \[ A_i = \left[ 4^{i}, 4^{i} + i - 1\right]. \] Let $t \in \ensuremath{ \mathbf N }_0$. There exists $i_0(t)$ such that $4^i-i > t$ for all $i \geq i_0(t)$. If $b_i \in A_i$ for some $i \geq i_0(t)$, then \[ 4^i +i + t < 2\cdot 4^{i} \leq 2b_i < 2\cdot 4^{i} + 2i < 4^{i+1} - 2t \leq 4^{i+1} - t \] and so $2b_i \notin A \pm t$. If $B$ is an infinite subset of $A$, then for infinitely many $i$ there exist integers $b_i \in B \cap A_i$, and so $2B \not\subseteq A+t$ for all $t \in \ensuremath{\mathbf Z}$. There are very few results about the Erd\H os conjecture.
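Both properties of this example can be verified numerically for the first few blocks: each $A_i$ is a run of $i$ consecutive integers (so $\delta(A) = 1$), and doubling an element of $A_i$ with $4^i - i > |t|$ never lands in $A \pm t$. The sketch below checks only finitely many blocks and shifts, of course; the general claim is the displayed inequality above.

```python
# build the blocks A_i = [4^i, 4^i + i - 1] for i = 1, ..., 12
blocks = [range(4**i, 4**i + i) for i in range(1, 13)]
A = set().union(*blocks)

# each A_i is a run of i consecutive integers, so f_A(i) = i for i <= 12
runs_ok = all(len(b) == i for i, b in enumerate(blocks, 1))

# for |t| <= 5 and i >= 2 (so that 4^i - i > 5), 2b misses every A + t
escapes = True
for t in range(-5, 6):
    shifted = {a + t for a in A}
    escapes = escapes and all(2 * b not in shifted for b in A if b >= 16)
```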
In 1980, Nathanson~\cite{nath80d} proved that if $\delta(A) > 0$, then for every $n$ there is a finite set $C$ with $|C| = n$ and a subset $B$ of $A$ with $\delta(B) > 0$ such that $B+C\subseteq A$. In 2015, Di Nasso, Goldbring, Jin, Leth, Lupini, and Mahlburg~\cite{dina-gold15} used nonstandard analysis to prove that the Erd\H os conjecture is true for sets $A$ with upper Banach density $\delta(A) > 1/2$. They also proved that if $\delta(A) > 0$, then there exist infinite sets $B$ and $C$ and an integer $t$ such that \[ B+C \subseteq A \cup (A+t). \] It would be of interest to have purely combinatorial proofs of the results of Di Nasso, \emph{et al.} For related work, see Di Nasso~\cite{dina14a,dina14b}, Gromov~\cite{grom15}, Hegyv\' ari~\cite{hegy99,hegy08}, Hindman~\cite{hind79a}, and Jin~\cite{jin04}. \section{Results} The following result is well known. \begin{lemma} \label{sumset:lemma:Banach} A set of positive integers has upper Banach density 1 if and only if, for every $d$, it contains infinitely many pairwise disjoint intervals of $d$ consecutive integers. \end{lemma} \begin{proof} Let $A$ be a set of positive integers. If, for every positive integer $d$, the set $A$ contains an interval of $d$ consecutive integers, then \[ \max_{u \in \ensuremath{ \mathbf N }_0} \left( \frac{ |A \cap [u,u+d-1] |}{d} \right) = 1 \] and so \[ \delta(A) = \lim_{d \rightarrow \infty} \max_{u \in \ensuremath{ \mathbf N }_0} \left( \frac{ |A \cap [u,u+d-1] |}{d} \right) = 1. \] Suppose that, for some integer $d \geq 2$, the set $A$ contains no interval of $d$ consecutive integers. For every $u \in \ensuremath{ \mathbf N }_0$, we consider the interval $I_{u,n} = [u, u+n-1]$. By the division algorithm, there are integers $q$ and $r$ with $0 \leq r < d$ such that \[ |I_{u,n}| = n = qd + r \] and \[ q = \frac{n-r}{d} > \frac{n}{d} - 1. 
\] For $j=1,\ldots, q$, the intervals of integers \[ I_{u,n}^{(j)} = [u+(j-1)d, u + jd - 1 ] \] and \[ I_{u,n}^{(q+1)} = [u+qd, u+n-1] \] are pairwise disjoint subsets of $I_{u,n}$ such that \[ I_{u,n} = \bigcup_{j=1}^{q+1}I_{u,n}^{(j)}. \] We have \[ A \cap I_{u,n} = \bigcup_{j=1}^{q+1} ( A \cap I_{u,n}^{(j)} ). \] If $A$ contains no interval of $d$ consecutive integers, then, for all $j \in [1,q]$, at least one element of the interval $I_{u,n}^{(j)}$ is not an element of $A$, and so \[ | A \cap I_{u,n}^{(j)}| \leq |I_{u,n}^{(j)}| - 1. \] It follows that \begin{align*} | A \cap I_{u,n}| & = \sum_{j=1}^{q+1} \left| A \cap I_{u,n}^{(j)} \right| \leq \sum_{j=1}^q \left( \left| I_{u,n}^{(j)}\right| - 1 \right) + \left| I_{u,n}^{(q+1)}\right| \\ & = \sum_{j=1}^{q+1} \left| I_{u,n}^{(j)}\right| - q = |I_{u,n}| - q = n - q \\ & < n - \frac{n}{d} +1 = \left(1 - \frac{1}{d}\right)n + 1. \end{align*} Dividing by $n = |I_{u,n}|$, we obtain \[ \max_{u \in \ensuremath{ \mathbf N }_0} \frac{ | A \cap I_{u,n}|}{n} \leq 1 - \frac{1}{d} + \frac{1}{n} \] and so \[ \delta(A) = \lim_{n \rightarrow \infty} \max_{u \in \ensuremath{ \mathbf N }_0} \frac{ | A \cap I_{u,n} |}{n} \leq 1 - \frac{1}{d} < 1. \] Therefore, if $\delta(A) = 1$, then $A$ contains an interval of $d$ consecutive integers for every $d \in \ensuremath{ \mathbf N }$. To prove that $A$ contains infinitely many intervals of size $d$, it suffices to prove that if $[u,u+d-1]\subseteq A$, then $[v,v+d-1] \subseteq A$ for some $v \geq u+d$. Let $d' = u+2d$. There exists $u' \in \ensuremath{ \mathbf N }$ such that \[ [u',u'+d'-1] = [u', u' + u + 2d-1] \subseteq A. \] Choosing $v = u'+u+d$, we have $v \geq u+d$ and \[ [v,v+d-1] \subseteq [u', u' + u + 2d-1] \subseteq A. \] This completes the proof. \end{proof} Let $\ensuremath{ \mathcal F}(S)$ denote the set of all finite subsets of the set $S$, and let $\ensuremath{ \mathcal F}^*(S)$ denote the set of all nonempty finite subsets of $S$.
We have the elementary set identity \begin{equation} \label{sumsets:FiniteSubsets} \ensuremath{ \mathcal F}^*([1,n+1] ) = \ensuremath{ \mathcal F}^*([1,n]) \cup \left\{ \{n+1\}\cup J:J\in \ensuremath{ \mathcal F}( [1,n] ) \right\}. \end{equation} \begin{theorem} \label{sumsets:theorem:b-sequence} Let $A$ be a set of positive integers that has upper Banach density 1. For every sequence $(\ell_j)_{j=1}^{\infty}$ of positive integers, there exists a sequence $(b_j)_{j=1}^{\infty}$ of positive integers such that \[ b_{j+1} \geq b_{j} + \ell_j \] for all $j \in \ensuremath{ \mathbf N }$, and \[ \sum_{j \in J} [b_{j}, b_{j} + \ell_j -1] \subseteq A \] for all $J \in \ensuremath{ \mathcal F}^*(\ensuremath{ \mathbf N })$. \end{theorem} \begin{proof} We shall construct the sequence $(b_j)_{j=1}^{\infty}$ by induction. For $n=1$, choose $b_1 \in A$ such that $[b_1,b_1+\ell_1 -1] \subseteq A$. Suppose that $(b_j)_{j=1}^n$ is a finite sequence of positive integers such that $b_{j+1} \geq b_{j} + \ell_j$ for $j \in [1,n-1]$ and \begin{equation} \label{sumsets:FiniteSubsets-2} \sum_{j \in J} [b_{j}, b_{j} + \ell_j -1] \subseteq A \end{equation} for all $J \in \ensuremath{ \mathcal F}^*([1,n])$. By Lemma~\ref{sumset:lemma:Banach}, there exists $b_{n+1} \in A$ such that \[ b_{n+1} \geq b_n + \ell_n \] and \[ \left[ b_{n+1}, \sum_{j=1}^{n+1} (b_j + \ell_j ) -1\right] \subseteq A. \] (Such an integer $b_{n+1}$ exists because this interval has length $\sum_{j=1}^{n} (b_j + \ell_j ) + \ell_{n+1}$, which does not depend on $b_{n+1}$, and Lemma~\ref{sumset:lemma:Banach} provides infinitely many pairwise disjoint intervals of this length in $A$.) It follows that \[ \left[ b_{n+1}, b_{n+1} + \ell_{n+1}-1\right] \subseteq A. \] Let $J \in \ensuremath{ \mathcal F}([1,n])$.
If \begin{align*} a & \in \sum_{j \in \{n+1\} \cup J} [b_{j}, b_{j} + \ell_j -1] \\ & = \left[ b_{n+1}, b_{n+1} + \ell_{n+1}-1\right] + \sum_{j \in J} [b_{j}, b_{j} + \ell_j -1] \end{align*} then \begin{align*} b_{n+1} \leq a & \leq \left( b_{n+1} + \ell_{n+1}-1 \right) + \sum_{j \in J} \left( b_{j} + \ell_j -1 \right) \\ & \leq \sum_{j=1}^{n+1} (b_j + \ell_j ) -1 \end{align*} and so $a \in A$ and \begin{equation} \label{sumsets:FiniteSubsets-3} \sum_{j \in \{n+1\} \cup J} [b_{j}, b_{j} + \ell_j -1] \subseteq \left[ b_{n+1}, \sum_{j=1}^{n+1} (b_j + \ell_j ) -1\right] \subseteq A. \end{equation} Relations~\eqref{sumsets:FiniteSubsets},~\eqref{sumsets:FiniteSubsets-2}, and~\eqref{sumsets:FiniteSubsets-3} imply that \[ \sum_{j \in J} [b_{j}, b_{j} + \ell_j -1] \subseteq A \] for all $J \in \ensuremath{ \mathcal F}^*([1,n+1])$. This completes the induction. \end{proof} \begin{theorem} \label{sumsets:theorem:I-sequence} Every set $A$ of positive integers that has upper Banach density 1 contains an infinite sequence of pairwise disjoint subsets $(B_i)_{i=1}^{\infty}$ such that $B_i$ has upper Banach density 1 for all $i \in \ensuremath{ \mathbf N }$ and \[ \sum_{i\in I} B_i \subseteq A \] for all $I \in \ensuremath{ \mathcal F}^*(\ensuremath{ \mathbf N })$. \end{theorem} \begin{proof} Let $(\ell_j)_{j=1}^{\infty}$ be a sequence of positive integers such that $\lim_{j\rightarrow \infty} \ell_j = \infty$, and let $(b_j)_{j=1}^{\infty}$ be a sequence of positive integers that satisfies Theorem~\ref{sumsets:theorem:b-sequence}. (For simplicity, we can let $\ell_j = j$ for all $j$.) Let $(X_i)_{i =1}^{\infty}$ be a sequence of infinite sets of positive integers that are pairwise disjoint. For $i \in \ensuremath{ \mathbf N }$, let \[ B_i = \bigcup_{j\in X_i} [b_{j}, b_{j} + \ell_j -1]. \] The set $B_i$ contains intervals of $\ell_j$ consecutive integers for infinitely many $\ell_j$, and so $B_i$ has upper Banach density 1. 
Let $I \in \ensuremath{ \mathcal F}^*(\ensuremath{ \mathbf N })$. If \[ a \in \sum_{i\in I} B_i \] then for each $i \in I$ there exists $a_i \in B_i$ such that $a = \sum_{i\in I} a_i$. If $a_i \in B_i$, then there exists $j_i \in X_i$ such that \[ a_i \in \left[ b_{j_i}, b_{j_i} + \ell_{j_i} - 1 \right]. \] We have $J = \{ j_i: i \in I\} \in \ensuremath{ \mathcal F}^*(\ensuremath{ \mathbf N })$ and \[ a \in \sum_{j_i \in J} \left[ b_{j_i}, b_{j_i} + \ell_{j_i} - 1 \right] \subseteq A. \] This completes the proof. \end{proof} \begin{theorem} \label{sumsets:theorem:I-sequence-AP} Let $A$ be a set of integers that contains arbitrarily long finite arithmetic progressions with bounded differences. There exist positive integers $m$ and $r$, and an infinite sequence of pairwise disjoint sets $(B_i)_{i=1}^{\infty}$ such that $B_i$ has upper Banach density 1 for all $i \in \ensuremath{ \mathbf N }$ and \[ m\ast \sum_{i\in I} B_i +r \subseteq A \] for all $I \in \ensuremath{ \mathcal F}^*(\ensuremath{ \mathbf N })$. \end{theorem} \begin{proof} If the differences in the infinite set of finite arithmetic progressions contained in $A$ are bounded by $m_0$, then, by the pigeonhole principle, there is a difference $m \leq m_0$ that occurs for arbitrarily long progressions. Thus, $A$ contains arbitrarily long finite arithmetic progressions with difference $m$. Because there are only finitely many congruence classes modulo $m$, there exists a congruence class $r \pmod{m}$ such that $A$ contains arbitrarily long arithmetic progressions with difference $m$ whose terms lie in the congruence class $r\pmod{m}$. Thus, there exists an infinite set $A'$ such that \[ m\ast A' + r \subseteq A \] and $A'$ contains arbitrarily long sequences of consecutive integers. Equivalently, $A'$ has upper Banach density 1.
By Theorem~\ref{sumsets:theorem:I-sequence}, the set $A'$ contains an infinite sequence of pairwise disjoint subsets $(B_i)_{i=1}^{\infty}$ such that $B_i$ has upper Banach density 1 for all $i \in \ensuremath{ \mathbf N }$ and \[ \sum_{i\in I} B_i \subseteq A' \] for all $I \in \ensuremath{ \mathcal F}^*(\ensuremath{ \mathbf N })$. It follows that \[ m\ast \sum_{i\in I} B_i +r \subseteq m\ast A' + r \subseteq A \] for all $I \in \ensuremath{ \mathcal F}^*(\ensuremath{ \mathbf N })$. This completes the proof. \end{proof}
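The greedy construction in the proof of Theorem~\ref{sumsets:theorem:b-sequence} can be run mechanically on a toy set of upper Banach density 1. In the sketch below, the intervals $[10^i, 10^i + 4\cdot 10^{i-1} - 1]$ and the choice $\ell_j = j$ are illustrative; the code builds four terms $b_j$ and then verifies the sumset containment for every nonempty $J$.

```python
from itertools import combinations

# Toy set of upper Banach density 1: runs of length 4*10^(i-1) starting at 10^i.
A = set()
for i in range(1, 6):
    A.update(range(10**i, 10**i + 4 * 10**(i - 1)))

def has_run(start, length):
    """True if the interval [start, start + length - 1] is contained in A."""
    return all(start + k in A for k in range(length))

def ell(j):
    return j  # the sequence l_j = j (illustrative choice)

# Greedy step from the proof: b_{n+1} >= b_n + l_n, and A must contain
# [b_{n+1}, b_{n+1} + sum_{j<=n}(b_j + l_j) + l_{n+1} - 1], whose length
# does not depend on b_{n+1}.
b, S = [], 0  # S = running sum of b_j + l_j
for n in range(1, 5):
    lo = b[-1] + ell(n - 1) if b else 1
    need = S + ell(n)  # required run length
    x = lo
    while not (x in A and has_run(x, need)):
        x += 1
    b.append(x)
    S += x + ell(n)

# Verify sum_{j in J} [b_j, b_j + l_j - 1] is a subset of A for all nonempty J.
for r in range(1, len(b) + 1):
    for J in combinations(range(1, len(b) + 1), r):
        sums = {0}
        for j in J:
            sums = {s + b[j - 1] + k for s in sums for k in range(ell(j))}
        assert sums <= A
print("b =", b)
```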
\section{Introduction} \label{sec:intro} Recent advances in nanofabrication techniques have enabled the systematic study of the physical behavior of single DNA molecules confined to nanochannels.\cite{dai2016polymer,reisner2012dna} These studies are largely motivated by such diverse applications as DNA sorting,\cite{dorfman2012beyond} DNA denaturation mapping,\cite{reisner2010single,marie2013integrated} and genome mapping,\cite{lam2012genome,dorfman2013fluid} each of which exploits the effects of confinement on the DNA conformational behavior and dynamics. The development of such applications clearly requires a deep understanding of the physical behavior of polymers in nanochannels, and in recent years there has been considerable progress toward this goal. Much of this progress is due to the refinement and application of advanced Monte Carlo simulation techniques such as the Pruned Enriched Rosenbluth Method (PERM), which enables the simulation of very long polymer chains.\cite{hsu2011review,tree2013dna} Such studies have been instrumental in characterizing the various conformational scaling regimes, which are determined by the degree of confinement in relation to the persistence and contour lengths of the polymer.\cite{odijk2008scaling,dai2016polymer} Notable among this work is the confirmation of the existence of the extended de~Gennes regime\cite{dai2014extended} and the backfolded Odijk regime,\cite{muralidhar2014backfolding} which each lie between the classic de~Gennes\cite{deGennes_book} and Odijk regimes\cite{odijk1983statistics} upon variation in the confining channel width. In addition to work characterizing the scaling regimes of polymers confined to nanochannels, some effort has been devoted to elucidating the behavior of confined polymers in out-of-equilibrium folded states.
For example, using fluorescence imaging techniques Levy {\it et al.}\cite{levy2008entropic} examined the behavior of DNA that was electrophoretically driven into a nanochannel in a folded state. They quantified the degree of stretching in the overlapping portion of the molecule and monitored the internal dynamics as the DNA unfolded to its equilibrium linearly ordered state. In a more recent study, Alizadehheidari {\it et al.}\cite{alizadehheidari2015nanoconfined} examined the unfolding dynamics of a circular DNA molecule upon transformation to a linear topology by a light-induced double-strand break, and in addition compared the equilibrium conformational statistics of the linear and circular configurations. These experimental studies have been complemented by molecular dynamics simulation studies of unfolding of flexible polymer chains in cylindrical\cite{ibanez2012entropic} and square\cite{ibanez2013hairpin} nanochannels, where the unfolding time was determined to scale as $DN^2$, where $D$ is the channel width and $N$ is the polymer length. The tendency for a nanochannel-confined polymer to unfold arises from the excluded volume interactions between the portions of the molecule that overlap along the channel. These interactions tend to stretch the overlapping regions and significantly reduce the number of configurations available to the polymer, thus decreasing its conformational entropy. The resulting gradient in the free energy with respect to the degree of overlap provides the effective force that drives the unfolding. This effect is closely related to the segregation of two initially overlapping polymers confined to a narrow channel, a process that is also driven by the increase in conformational entropy as polymer overlap decreases. 
Such entropy-driven polymer separation is thought to be a factor in the process of chromosome segregation of replicating bacteria\cite{jun2006entropy,jun2010entropy} and has been extensively studied using computer simulation methods.\cite{jun2006entropy,jun2007confined,arnold2007time,jung2010overlapping,jung2012ring,jung2012intrachain,liu2012segregation,dorier2013modelling,racko2013segregation,shin2014mixing,minina2014induction,minina2015entropic,chen2015polymer,polson2014polymer} Another related process is the arm retraction and escape transition for channel-confined star polymers.\cite{racko2015arm} Theoretical analyses of unfolding or segregation dynamics obtained from simulations typically employ analytical approximations using scaling arguments for the conformational free energy and its variation with the degree of overlap along the channel. However, such approximations are known to suffer from finite-size effects for the system sizes typically employed in these simulations.\cite{kim2013elasticity} For this reason, it is of interest to calculate the free energy functions directly and quantify any discrepancy with the theoretical predictions. Recently, we employed Monte Carlo methods to measure the free energy functions for segregating polymers in nanotubes and examined the scaling of the functions with respect to polymer contour length, persistence length, and channel dimensions for both infinite-length and finite-length tubes.\cite{polson2014polymer} In the present study, we carry out similar calculations for a single folded polymer confined to a cylindrical channel. We consider several different variations of the system. First, we examine a freely jointed polymer chain similar to that employed in previous MD studies of unfolding.\cite{ibanez2012entropic,ibanez2013hairpin} We also study a confined three-arm star polymer and examine the effect of folding one of the arms on the conformational free energy.
In this case, the results are relevant to the dynamics of arm retraction in star polymers, which was the focus of a recent simulation study by Milchev {\it et al.}\cite{milchev2014arm} In addition to fully-flexible polymers, we also examine semiflexible chains with hairpin folds. Such systems have been the subject of much recent study in the context of the backfolded Odijk regime, for which the persistence length $P$ is of the order of the channel width $D$.\cite{muralidhar2014backfolding,muralidhar2016backfolding,muralidhar2016backfolded,werner2016emergence} In the present case, we consider instead the regime where the condition for the classic Odijk regime, $P\gg D$, is marginally satisfied and, thus, where the presence of a hairpin clearly represents an out-of-equilibrium state. We consider both single hairpins and S-loops composed of two hairpins. Figure~\ref{fig:illust} shows simulation snapshots of the various systems that were examined in this study. The focus in all cases is the measurement of the gradient in the free energy with respect to the degree of internal overlap along the channel, which determines the effective force that drives unfolding. The scaling properties of the free energy are compared with the analytical approximations. While this work is clearly relevant to previous studies in which the free energy of nanochannel-confined polymers was calculated, to our knowledge it is the first to measure the variation of the free energy with the degree of internal overlap of the polymer. This article is organized as follows. First, we briefly describe the model employed in the study, following which we outline the MC method used to calculate the free energy functions. We then present the main results of the study, which are interpreted and discussed in detail. Results for fully flexible linear polymers and star polymers are presented, followed by those for semiflexible chains in the presence of either a single hairpin fold or an S-loop.
In the final section, we summarize the key findings of this study. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.47\textwidth,angle=0]{illust.ps} \end{center} \caption{ Simulation snapshots illustrating the main systems considered in this study: (a) fully flexible linear polymer; (b) fully flexible three-arm star polymer; (c) semi-flexible linear polymer with a hairpin turn; (d) semi-flexible linear polymer with an S-loop. The images were generated using VMD.\cite{HUMP96} } \label{fig:illust} \end{figure} \section{Model} \label{sec:model} We employ a minimal model to describe a polymer confined to a cylindrical tube. The polymer is modeled as a chain of hard spheres, each with diameter $\sigma$. The pair potential for non-bonded monomers is thus $u_{\rm{nb}}(r)=\infty$ for $r\leq\sigma$ and $u_{\rm{nb}}(r)=0$ for $r>\sigma$, where $r$ is the distance between the centers of the monomers. Pairs of bonded monomers interact with a potential $u_{\rm{b}}(r)= 0$ if $0.9\sigma<r<1.1\sigma$ and $u_{\rm{b}}(r)= \infty$ otherwise. Consequently, the bond length fluctuates slightly about its average value. In addition to linear polymers, we also consider a fully flexible three-arm star polymer with arms of equal length, each connected to one core monomer. The interactions are all the same as for the linear polymer case. The polymer is confined to a hard cylindrical tube of diameter $D$. Thus, each monomer interacts with the wall of the tube with a potential $u_{\rm w}(r) = 0$ for $r<D/2$ and $u_{\rm w}(r) = \infty$ for $r\geq D/2$, where $r$ is the distance of a monomer from the central axis of the cylinder. With this choice, $D$ is the diameter of the cylindrical volume accessible to the centers of the monomers. Most of the simulations in this study employ fully flexible polymer chains. However, we also consider the effects of bending stiffness for linear polymers. To do this, we employ a bending potential associated with each consecutive triplet of monomers.
The potential has the form $u_{\rm bend}(\theta_i) = \kappa (1-\cos\theta_i)$, where the angle $\theta_i$ at monomer $i$ satisfies $\cos\theta_i \equiv \hat{u}_{i}\cdot\hat{u}_{i+1}$, with $\hat{u}_i$ a normalized bond vector pointing from monomer $i-1$ to monomer $i$. The bending constant $\kappa$ determines the stiffness of the polymer and is related to the persistence length $P$ by $\exp(-\langle l_{\rm bond}\rangle/P) = \coth(\kappa/k_{\rm B}T) - k_{\rm B}T/\kappa$. For our model, the mean bond length is $\langle l_{\rm bond} \rangle \approx \sigma$. For the large bending stiffness considered in this study ($\kappa/k_{\rm B}T \geq 15$), this leads to $P/\sigma \approx \kappa/k_{\rm B}T - 0.5$. \section{Methods} \label{sec:methods} For the model systems described above, Monte Carlo simulations were used to calculate the free energy as a function of the end-to-end distance of the polymer, $\lambda$, as measured along the axis of the confining tube. In the case of the star polymer, $\lambda$ is defined as the distance from the end of a selected arm to the core monomer. The simulations employed the Metropolis algorithm and the self-consistent histogram (SCH) method.\cite{frenkel2002understanding} The SCH method efficiently calculates the equilibrium probability distribution ${\cal P}(\lambda)$, and thus its corresponding free energy function, $F(\lambda) = -k_{\rm B}T\ln {\cal P}(\lambda)$.
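The large-stiffness approximation for the persistence length quoted in Sec.~\ref{sec:model} can be checked numerically. The sketch below (the $\kappa$ values are illustrative) inverts the relation $\exp(-\langle l_{\rm bond}\rangle/P) = \coth(\kappa/k_{\rm B}T) - k_{\rm B}T/\kappa$ with $\langle l_{\rm bond}\rangle = \sigma$:

```python
import math

def persistence_length(kappa):
    # kappa in units of k_B T; returns P in units of sigma, using
    # exp(-sigma/P) = <cos theta> = coth(kappa) - 1/kappa.
    mean_cos = 1.0 / math.tanh(kappa) - 1.0 / kappa
    return -1.0 / math.log(mean_cos)

for kappa in (15.0, 20.0, 30.0):
    P = persistence_length(kappa)
    # Large-kappa expansion quoted in the text: P/sigma ~ kappa - 1/2.
    assert abs(P - (kappa - 0.5)) < 0.05
    print(f"kappa = {kappa:4.0f}: P/sigma = {P:.3f}")
```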
We have previously used this procedure to measure free energy functions in a study of polymer segregation,\cite{polson2014polymer} as well as in simulation studies of polymer translocation.\cite{polson2013simulation,polson2013polymer,polson2014evaluating,polson2015polymer} To implement the SCH method, we carry out many independent simulations, each of which employs a unique ``window potential'' of the form: \begin{eqnarray} {W_i(\lambda)}=\begin{cases} \infty, \hspace{8mm} \lambda<\lambda_i^{\rm min} \cr 0, \hspace{1cm} \lambda_i^{\rm min}<\lambda<\lambda_i^{\rm max} \cr \infty, \hspace{8mm} \lambda>\lambda_i^{\rm max} \cr \end{cases} \label{eq:winPot} \end{eqnarray} where $\lambda_i^{\rm min}$ and $\lambda_i^{\rm max}$ are the limits that define the range of $\lambda$ for the $i$-th window. Within each window of $\lambda$, a probability distribution $p_i(\lambda)$ is calculated in the simulation. The window potential width, $\Delta \lambda \equiv \lambda_i^{\rm max} - \lambda_i^{\rm min}$, is chosen to be sufficiently small that the variation in $F$ does not exceed a few $k_{\rm B}T$. The windows are chosen to overlap with half of the adjacent window, such that $\lambda^{\rm max}_{i} = \lambda^{\rm min}_{i+2}$. The window width was typically $\Delta \lambda = 2\sigma$. The SCH algorithm was employed to reconstruct the unbiased distribution ${\cal P}(\lambda)$ from the $p_i(\lambda)$ histograms. For further details of the histogram reconstruction algorithm, see Ref.~\citenum{frenkel2002understanding}. Polymer configurations were generated by carrying out single-monomer moves using a combination of translational displacements and crankshaft rotations. Trial moves were accepted with a probability $p_{\rm acc}={\rm min}(1,e^{-\Delta E/k_{\rm B}T})$, where $\Delta E$ is the energy difference between trial and current states. For simulations of semiflexible chains, reptation moves were also employed.
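For the athermal hard-sphere interactions, the acceptance rule above reduces to an overlap test, since $\Delta E$ is either zero or infinite; only the bending energy of the semiflexible chains produces intermediate acceptance probabilities. A minimal sketch of the rule (the function name and seed are illustrative):

```python
import math
import random

def metropolis_accept(delta_E, rng):
    # Accept with probability min(1, exp(-delta_E)); energies in units of k_B T.
    # Hard-core overlaps, bond-length violations, and wall crossings enter
    # as delta_E = inf and are always rejected.
    if delta_E <= 0.0:
        return True
    if math.isinf(delta_E):
        return False
    return rng.random() < math.exp(-delta_E)

rng = random.Random(1)
assert metropolis_accept(0.0, rng)               # allowed hard-sphere move
assert not metropolis_accept(float("inf"), rng)  # overlapping trial move
# For a finite bending-energy change, the long-run acceptance rate
# approaches exp(-delta_E):
rate = sum(metropolis_accept(1.0, rng) for _ in range(20000)) / 20000
assert abs(rate - math.exp(-1.0)) < 0.02
```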
Initial polymer configurations were generated such that $\lambda$ was within the allowed range for a given window potential. Prior to data sampling, the system was equilibrated. As an illustration, for an $N$=500 polymer chain, the system was typically equilibrated for $\sim 10^7$ MC cycles, following which a production run of $\sim 10^8$ MC cycles was carried out. During each MC cycle, one displacement or rotation move per monomer and, where applicable, one reptation move are attempted on average. In the results presented below, lengths are measured in units of $\sigma$ and energies in units of $k_{\rm B}T$. In addition, the free energy functions obtained from $F(\lambda)=-k_{\rm B}T\ln {\cal P}(\lambda)$ are shifted such that $F$=0 at the minimum in all plots of $F$ vs $\lambda$. \section{Results} \label{sec:results} Figure \ref{fig:F.dFdz.N500} shows free energy functions for a fully-flexible linear polymer of length $N$=500 under cylindrical confinement. Results are shown for a variety of tube diameters. By symmetry, the curves all satisfy $F(-\lambda)=F(\lambda)$, though the figure only shows a narrow range of negative $\lambda$. Each curve has a single free energy minimum corresponding to the most probable longitudinal end-to-end distance, which is roughly a measure of the average extension length of the polymer along the tube. As expected, the location of the free energy minimum, $\lambda_{\rm min}$, shifts to higher $\lambda$ as $D$ decreases. For $\lambda > \lambda_{\rm min}$, $F$ rises steeply with increasing $\lambda$ due to the reduction in conformational entropy associated with stretched conformations. In the regime $\lambda < \lambda_{\rm min}$, the increase in $F$ with decreasing $\lambda$ gradually becomes linear. This is evident in Fig.~\ref{fig:F.dFdz.N500}(b), which shows the variation of the derivative $dF/d\lambda$ with $\lambda/\lambda_{\rm min}$, calculated using the functions in Fig.~\ref{fig:F.dFdz.N500}(a).
As $\lambda$ decreases, $dF/d\lambda$ approaches a constant. The magnitude of $dF/d\lambda$ in this regime increases with decreasing confinement tube diameter, $D$, and consequently the height of the free energy barrier, $\Delta F\equiv F(0)-F(\lambda_{\rm min})$, does as well. This linear regime corresponds to the case of intramolecular overlap, which is illustrated in Fig.~\ref{fig:illust}(a). As $\lambda$ decreases and the two end-monomers are brought closer together, portions of the polymer are forced to overlap. This may occur with a single backfold, as illustrated in Fig.~\ref{fig:illust}(a), or with two backfolds. The degree of overlap increases as $\lambda$ decreases, leading to a reduction in conformational entropy and a corresponding increase in $F$. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.45\textwidth]{F.dFdz.N500.eps} \end{center} \caption{ (a) Free energy functions for a flexible polymer of length $N=500$ confined to a cylinder of diameter $D$. The curves show the free energy $F$ as a function of the monomer end distance $\lambda$ along the cylindrical axis. Results for several cylindrical diameters are shown. (b) Derivative of the free energy, $dF/d\lambda$, vs scaled end-monomer separation $\lambda/\lambda_{\rm min}$. The results correspond to the data shown in (a). The vertical line marks the transition between the folded and non-folded regimes predicted by the theory of Ref.~\citenum{milchev2014arm}. } \label{fig:F.dFdz.N500} \end{figure} The observed trends in the free energy functions can be interpreted using a scaling theory developed by Milchev {\it et al.}\cite{milchev2014arm} In that study, an expression was derived for the free energy function of a single chain in a tube and the predictions were used to interpret results for simulations of a confined three-arm star polymer.
The theory employed a free energy function constructed from two terms, one accounting for compression that uses scaling behavior in the semi-dilute regime, and another term of Pincus form that accounts for stretching. Using this functional form in cases with and without a backfold and optimizing the free energy, they predict the transition between a state of uniform compression or expansion about the free energy minimum and a backfolded regime to lie at $\lambda$=0.779$\lambda_{\rm min}$ (using the notation of the present study). However, in Fig.~\ref{fig:F.dFdz.N500}(b), we observe instead a gradual transition between the backfolded regime and the uniformly expanded/compressed regime, with the predicted transition point (labeled with a vertical dashed line in the figure) only roughly marking the region of the transition. Other tests of the accuracy of the theory of Ref.~\citenum{milchev2014arm} can be made using the simulation results. One such test is the scaling of the entropic force in the backfolded regime. In this regime, it was shown that the variation of the free energy with end-to-end distance is $F(\lambda) = F_0 (2^{1/(2\nu)} - p \lambda / \lambda_{\rm min})$, where $p\approx 0.91$ for a Flory exponent of $\nu\approx 0.588$, and where $F_0\sim ND^{-1/\nu}$ is the polymer free energy and $\lambda_{\rm min}\sim ND^{1-1/\nu}$ is the equilibrium extension of the polymer, both obtained from free energy minimization with respect to $\lambda$. (See Eqs.~(2), (3) and (8) in Ref.~\citenum{milchev2014arm}, and note the change in notation to match that used here.) Note that the scaling for $F_0$ and $\lambda_{\rm min}$ is identical to that predicted from the de~Gennes blob model for the free energy and extension lengths of a polymer in a tube.
From this relation for $F(\lambda)$, it can be easily shown that the entropic force, defined as the magnitude of the derivative of the free energy, $f\equiv |dF/d\lambda|$, is constant with respect to $\lambda$ and scales as $f \sim N^0 D^{-1}$. This prediction also follows from a straightforward application of the de~Gennes blob picture in conjunction with an approximation for overlapping confined chains that was suggested in Ref.~\citenum{jung2012ring}. As illustrated in Fig.~\ref{fig:blob_illust}(a), the non-overlapping region is approximately of length $\lambda$ and is composed of $n_{\rm bl}=\lambda/D$ blobs of $g\approx D^{1/\nu}$ monomers. Likewise, in the overlapping region, the monomers may be viewed as two strings of blobs of size $D/\sqrt{2}$, since they effectively occupy half the cross-sectional area of the tube.\cite{jung2012ring} Thus, each blob has $g^\prime\approx (D/\sqrt{2})^{1/\nu}$ monomers, and there are $(N-(\lambda/D)g)/g^\prime$ of these smaller blobs present. Noting that each blob contributes roughly $k_{\rm B}T$ to $F$, it follows that $f \equiv |dF/d\lambda| \sim N^0 D^{-1}$. Figure~\ref{fig:dFdz_vs_D} shows the variation of $f$ with respect to $D$ calculated from linear fits of the free energy functions in the regime $\lambda < 0.5\lambda_{\rm min}$. Results for several chain lengths are shown. Consistent with the prediction, there is no significant dependence on $N$, and the data do satisfy a power law scaling. However, a fit to the data yields a scaling of $f \sim N^0 D^{-1.20\pm 0.01}$. The fitted curve and a curve for the predicted $D^{-1}$ scaling are both overlaid on the data in the figure, and the comparison clearly illustrates that the discrepancy with the prediction is significant.
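The blob-counting argument above can be made concrete in a few lines. The sketch below (prefactors of order unity are dropped, and the values of $N$, $D$, and $\lambda$ are illustrative) shows that the blob estimate for $F(\lambda)$ is linear in $\lambda$ with $f\cdot D = 2^{1/(2\nu)} - 1$, independent of $N$:

```python
NU = 0.588  # Flory exponent

def blob_free_energy(lam, N, D):
    # Blob estimate of F(lambda)/k_B T for a backfolded chain in a tube:
    # each blob contributes ~ k_B T; prefactors of order one are dropped.
    g = D ** (1 / NU)                  # monomers per blob, non-overlapping part
    gp = (D / 2 ** 0.5) ** (1 / NU)    # monomers per blob, overlapping strands
    n_blobs = lam / D                  # blobs in the non-overlapping region
    n_small = (N - n_blobs * g) / gp   # blobs in the overlapping region
    return n_blobs + n_small

# f = |dF/dlambda| should be independent of N and scale as 1/D.
target = 2 ** (1 / (2 * NU)) - 1
for N in (200, 500):
    for D in (4.0, 6.0, 9.0):
        lam, h = 0.3 * N * D ** (1 - 1 / NU), 1e-4
        f = abs(blob_free_energy(lam + h, N, D)
                - blob_free_energy(lam - h, N, D)) / (2 * h)
        assert abs(f * D - target) < 1e-6
print(f"f*D = {target:.3f} for all N and D")
```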
A discrepancy of comparable magnitude was observed for the scaling of the free energy gradient in our previous study of entropy-driven polymer segregation.\cite{polson2014polymer} In that case, the scaling of the gradient in the free energy with respect to the center-of-mass separation distance was $f\sim N D^{-1.93\pm 0.01}$, compared to the predicted scaling of $f \sim N D^{-1.70}$. The physical origins of that discrepancy are likely the same as those for the single folded polymer, which will be discussed below. \begin{figure}[!ht] \begin{center} \vskip 0.1in \includegraphics[width=0.4\textwidth]{blob_illust.eps} \end{center} \caption{Illustration of blob domains for (a) a confined linear polymer with a backfold and (b) a confined star polymer with one backfolded arm.} \label{fig:blob_illust} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.45\textwidth]{dFdz_vs_D.eps} \end{center} \caption{ Variation of $f\equiv |dF/d\lambda|$ with the confining cylinder diameter $D$. Results are shown for different polymer lengths $N$. Note that $f$ is calculated in the linear regime of $F(\lambda)$. The solid line shows the power law scaling that yielded the best fit of the data, i.e. $f \sim D^{-1.20\pm 0.01}$. The dashed line shows the prediction of the scaling theory of Ref.~\citenum {milchev2014arm}. } \label{fig:dFdz_vs_D} \end{figure} In addition to the conformational free energy, we also consider the variation of the mean extension length of the polymer, $L_{\rm ext}$, as a function of $\lambda$. Figure~\ref{n500_len}(a) shows results for $N$=500 for various tube diameters, while Fig.~\ref{n500_len}(b) shows results for various polymer lengths with fixed tube diameter $D$=9. In each graph, the inset shows the derivative $dL_{\rm ext}/d\lambda$ vs $\lambda$ for each data set. As expected, at sufficiently high $\lambda$, the polymer is stretched relative to typical equilibrium conformations and $L_{\rm ext} \approx \lambda$. 
This is evident from the overlapping curves for all $N$ and $D$ and the limiting behavior of $dL_{\rm ext}/d\lambda\rightarrow 1$ at high $\lambda$. In the backfold regime at low $\lambda$, $L_{\rm ext}$ is dependent on both $N$ and $D$. However, independent of $N$ and $D$, the curves in this regime are each linear and parallel with one another with a derivative of $dL_{\rm ext}/d\lambda\approx 0.27$. The transition between the two regimes occurs in the vicinity of $\lambda \approx \lambda_{\rm min}$, which lies at lower $\lambda$ for higher $D$ (as evident in Fig.~\ref{fig:F.dFdz.N500}(a)) and for lower $N$. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.45\textwidth]{dEdl.eps} \end{center} \caption{ (a) Extension length of the polymer $L_{\rm ext}$ vs end-monomer separation along the cylinder axis $\lambda$ for a polymer of length $N=500$. Results for several values of the cylinder diameter $D$ are shown. The inset shows the derivative $dL_{\rm ext}/d\lambda$ vs $\lambda$. (b) As in (a), except with fixed $D$=9 and for various $N$. } \label{n500_len} \end{figure} As for the case of the free energy, the behavior of the extension length curves in the backfold regime can be understood in the context of the scaling theory of Ref.~\citenum{milchev2014arm}. In that case, it was noted that the length $y$ of the overlapping portions of the polymer satisfies $y = 2^{(1-3\nu)/(2\nu)}(\lambda_{\rm min} - \lambda/0.779)$. Approximating $L_{\rm ext}\approx \lambda + y$, it follows that $dL_{\rm ext}/d\lambda \approx 0.18$, where we use a Flory exponent $\nu\approx 0.588$. Thus, the observed independence of $dL_{\rm ext}/d\lambda$ with respect to $N$ and $D$ is consistent with the prediction, though the predicted value of the derivative is somewhat of an underestimate. To summarize, the scaling theory captures the qualitative behavior and some of the quantitative behavior of the free energy and extension length functions.
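The quoted slope follows from a one-line computation; the sketch below simply evaluates $1 - 2^{(1-3\nu)/(2\nu)}/0.779$ with $\nu \approx 0.588$:

```python
# Slope of L_ext(lambda) in the backfold regime, from L_ext ~ lambda + y with
# y = 2^((1-3*nu)/(2*nu)) * (lambda_min - lambda/0.779):
NU = 0.588
slope = 1 - 2 ** ((1 - 3 * NU) / (2 * NU)) / 0.779
assert abs(slope - 0.18) < 0.01
print(f"dL_ext/dlambda = {slope:.3f}")
```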
Unsurprisingly for systems of this size, there are quantitative discrepancies. Finite-size effects have been studied and quantified in other studies of polymers confined to narrow channels and suggest that results consistent with blob model predictions emerge only for diameters of $D>10$.\cite{kim2013elasticity} In the present case, the extra crowding associated with the backfolded polymer decreases the effective size of $D$ for each overlapping strand, thus potentially amplifying the finite-size effect. This is also likely a cause of the discrepancy between the observed and predicted scaling of the free energy gradient with respect to center-of-mass separation for segregating polymers under cylindrical confinement in Ref.~\citenum{polson2014polymer}. Another possible problem concerns the validity of the approximation suggested in Ref.~\citenum{jung2012ring} that overlapping polymers can each be viewed as strings of blobs of size $D/\sqrt{2}$, since they effectively occupy half the cross-sectional area of the tube. In recent work, we have carried out simulations to test this assumption and find significant quantitative discrepancies from this prediction.\cite{polson2017segregation} Together with finite-size effects associated with the de~Gennes blob model, this factor likely contributes to the observed inconsistency. There is a third possible source for the discrepancy, for which we now provide evidence showing that it is not significant. It concerns a point noted in Ref.~\citenum{milchev2014arm} regarding the possible presence of two backfolds. The calculated free energy of the polymer depends on the end-to-end distance, which determines the degree of polymer overlap. However, configurations with two backfolds at the same $\lambda$ can produce the same degree of polymer overlap, and it was suggested that these additional configurations lead to logarithmic corrections to the prediction for the free energy.
It is possible that this effect may contribute to the discrepancies between our measurements and the predictions. To test this possibility, we have carried out a modified simulation that imposes at most a single backfold on the system for any given $\lambda$. To do this, we insert a hard-wall cap in the cylinder, at the center of which a single monomer is fixed. During the simulations, we carry out reptation moves in addition to the regular MC trial moves so that the sequence position of the monomer fixed to the cap, $m$, can take a value from 1 to $N$. Employing the SCH method using $m$ as the fluctuating variable, we measure the free energy function $F(m)$. Clearly, there can only be a single backfold, located at the position of monomer $m$. Furthermore, a given value of $m$ corresponds to a unique value of the degree of internal polymer overlap along the tube. If the results obtained using this model are closer to predictions from the scaling theory, then this would provide evidence that the previous discrepancies were partially due to the presence of multiple backfolds. In addition to helping resolve this question, this calculation has the side benefit of clarifying a particular issue in the context of the related process of polymer translocation. Specifically, when a polymer translocates through a nanopore in a barrier, the likelihood of initially capturing the polymer from one of its ends or at another point along the contour in a folded configuration is expected to be affected by the degree of polymer confinement prior to translocation. The present calculation provides a means to quantify the likelihood of a given capture position along the chain, i.e., higher $F(m)$ corresponds to a lower likelihood of capture at monomer $m$.\cite{ding2016flow} Figure~\ref{fig:F_zak.N350}(a) shows the variation in the free energy with the index of the monomer that is tethered to the cap in the semi-infinite cylinder.
Results for a polymer length of $N$=350 and several different tube diameters are shown. As expected, $F$ is greatest when $m$=$N/2$, in which case the polymer subchains on either side of the anchored monomer are of equal contour length, leading to maximum overlap along the tube. (Note that the extension lengths of the subchains along the channel will differ slightly in this case, as noted in Ref.~\citenum{milchev2014arm}, and thus slightly fewer than $N/2$ monomers of the longer subchain will overlap with those of the other. This leads to a slightly lower free energy maximum at $m$=$N/2$ than would otherwise be the case.) As $m$ increases or decreases from this point, the subchain lengths become increasingly unequal, and the shorter subchain overlaps only partially with the longer subchain. In turn, there is less crowding for a greater number of monomers, leading to a decrease in the free energy. For sufficiently small $D$, the variation is linear over most of each half of the curve, though the functions tend to be slightly less linear with increasing $D$. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.45\textwidth]{F_zak.N350.eps} \end{center} \caption{ (a) Free energy vs monomer index $m$ for a flexible polymer confined to a cylinder of diameter $D$ and semi-infinite length. Monomer $m$ is fixed to a hard cap in the cylinder. Results are shown for a polymer of length $N$=350 for several different values of $D$. (b) Derivative $dF/dm$ vs tube diameter $D$. The derivative was calculated from a fit to the free energy functions in (a) in the linear regime, as well as from data for other polymer lengths. The solid line is a fit of the $N$=350 data to a power-law function, which yielded a scaling of $dF/dm \sim D^{-2.04\pm 0.02}$. } \label{fig:F_zak.N350} \end{figure} To understand this effect, the blob model can once again be used. For $m<N/2$, the shorter subchain is of length $m$, while the longer subchain is of length $N-m$.
Clearly, the portion of the longer subchain that overlaps the shorter subchain is also of length $m$, and thus the remaining (i.e., non-overlapping) portion of the longer subchain is of length $N-2m$. The confined polymer system can then be viewed as two chains of length $m$ and one of length $N-2m$ with a corresponding free energy of $F(m)/k_{\rm B}T \sim 2 m\left(D/\sqrt{2}\right)^{-1/\nu} + (N-2m) D^{-1/\nu}$, from which it follows that $dF/dm\propto N^0D^{-1/\nu} = N^0D^{-1.70}$. By symmetry, the derivative is equal in magnitude and opposite in sign for $m>N/2$. Thus, the decrease in the derivative (and, therefore, in the free energy barrier height) with increasing $D$ is qualitatively consistent with this prediction. Figure~\ref{fig:F_zak.N350}(b) shows the variation in $dF/dm$ with $D$ for several different polymer lengths. Consistent with the prediction, there is no dependence on $N$ except for small finite-size variations at large $D$. However, a fit to the data for $N$=350 yields a scaling of $dF/dm \propto D^{-2.04\pm 0.02}$, which represents a significant discrepancy from the predicted scaling. The magnitude of this discrepancy is essentially the same as that between the predicted and observed scaling of the data in Fig.~\ref{fig:dFdz_vs_D}. We conclude that the presence of two backfolds was not the main source of this disagreement. We now examine the behavior of a confined star polymer. Figure~\ref{F_star_N150} shows free energy functions for a three-arm star polymer of arm length $N_{\rm arm}$=150 for several different values of $D$. In this case, $\lambda$ is the longitudinal distance between the core monomer and the end-monomer of a single chosen arm. We consider values of $N_{\rm arm}$ and $D$ with a sufficiently high free energy barrier that neither of the other arms has an appreciable chance of crossing while the chosen arm is brought to the selected range of $\lambda$ for each simulation used to calculate $F(\lambda)$.
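As an aside, the blob-model estimate for the tethered backfold derived above is simple enough to verify symbolically. The sketch below (using sympy, with $1/\nu = 250/147 \approx 1.70$) confirms that the predicted derivative contains no $m$ or $N$ dependence and scales as $D^{-1/\nu}$:

```python
import math
import sympy as sp

# Blob-model free energy quoted in the text (valid for m < N/2):
# F(m)/kT ~ 2 m (D/sqrt(2))^(-1/nu) + (N - 2m) D^(-1/nu)
m, N, D = sp.symbols('m N D', positive=True)
inv_nu = sp.Rational(250, 147)  # 1/nu for a Flory exponent nu ~ 0.588
F = 2*m*(D/sp.sqrt(2))**(-inv_nu) + (N - 2*m)*D**(-inv_nu)

dFdm = sp.simplify(sp.diff(F, m))
assert not ({m, N} & dFdm.free_symbols)  # derivative has no m or N dependence

# Extract the D exponent by comparing two tube diameters:
ratio = float(dFdm.subs(D, 4) / dFdm.subs(D, 2))
p = math.log2(ratio)
print(round(p, 2))  # -> -1.7 (i.e. -1/nu)
```

The measured exponent of $2.04\pm0.02$ therefore reflects the finite-size effects discussed earlier rather than an algebraic slip in the blob estimate.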
The curves are qualitatively similar to those of Fig.~\ref{fig:F.dFdz.N500}(a). As before, there is a linear regime at sufficiently low $\lambda$, in which the arm that defines $\lambda$ is backfolded. One notable difference from the results for the linear polymer is the presence of two free energy minima separated by a small barrier. This feature has an origin similar to the behavior observed in Ref.~\citenum {milchev2014arm} for two overlapping polymers confined to a cylinder and both tethered to a cap at one end. In that case, the lowest free energy state corresponds to slightly different elongation lengths for the polymers, with a small barrier located at equal extension length separating states where the roles of compressed and elongated polymers are exchanged. In the present case, the two other arms of the star polymer, both connected to the core monomer, play the same role, giving rise to the same effect. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.45\textwidth]{F_star_N150.eps} \end{center} \caption{ Free energy $F$ vs distance $\lambda$ between an end-monomer and the core monomer along the cylinder axis for a three-arm star polymer. Results are shown for an arm length $N_{\rm arm}=150$ for several values of the cylinder diameter $D$. The inset shows the derivative $f\equiv |dF/d\lambda|$ in the linear domain at low $\lambda$ vs tube diameter $D$ for systems with $N_{\rm arm}$=150 and $N_{\rm arm}$=100. The solid lines are power-law fits to each data set, which yielded a scaling of $f \sim D^{-1.35\pm 0.01}$ for $N_{\rm arm}$=100 and $f \sim D^{-1.31\pm 0.01}$ for $N_{\rm arm}$=150. } \label{F_star_N150} \end{figure} The free energy gradient $f$ for the linear portion of the free energy function is plotted as a function of $D$ in the inset of the figure. Results are shown for arm lengths of $N_{\rm arm}$=100 and 150. There is a slight dependence of $f$ on the arm length, with slightly larger values of $f$ for the greater $N_{\rm arm}$.
Fitting the results to a power law yields a scaling of $f \sim D^{-1.35\pm 0.01}$ for $N_{\rm arm}$=100 and $f \sim D^{-1.31\pm 0.01}$ for $N_{\rm arm}$=150. In principle, these results can be understood using a simple scaling analysis employing the de~Gennes blob model. Consider the case where two arms lie on one side of the core monomer, one of which is the arm selected to define $\lambda$. If this arm has a backfold (while the other does not), then there are three distinct regimes for these two arms along the cylinder, which are labeled in Fig.~\ref{fig:blob_illust}(b). In regime~I, the overlapping polymer arms can be viewed as two strings of blobs, each effectively occupying half the cross-sectional area of the tube. Following the approach taken in Ref.~\citenum {jung2012ring}, the blobs can be viewed as being confined to a tube of effective diameter $D/\sqrt{2}$. The span of this regime is approximately $\lambda$. Likewise, in regime~II, where the backfolded chain lies, the polymer subchains can be viewed as three strings of blobs of effective diameter $D/\sqrt{3}$. Finally, in regime~III, there is only one string of blobs for the stretched arm, each of diameter $D$. Varying $\lambda$ will change the number of blobs in each regime, each of which contributes on the order of $k_{\rm B}T$ to the free energy. Varying $\lambda$ does not affect the number of blobs for the arm on the other side of the core. Following the same approach used earlier to derive the scaling for the backfolded linear polymer, it can easily be shown that the predicted scaling is $f \sim N_{\rm arm}^0 D^{-1}$. The poor agreement with the measured scaling is not surprising. In part, it is due to the very small values of $D$ used here, for which the blob model is expected to be inaccurate.
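One way to carry out the blob counting explicitly is sketched below. This is our reconstruction under stated assumptions: the per-strand line densities $s_n \propto (D/\sqrt{n})^{1/\nu-1}$ and the two contour-conservation constraints are not spelled out in the text. A span $\ell$ containing $n$ strands holds $\sim n^{3/2}\ell/D$ blobs, and shifting $\lambda$ redistributes contour among regimes~I, II, and III:

```python
import sympy as sp

D, nu = sp.symbols('D nu', positive=True)

# Assumed line density (monomers per unit channel length) of one strand in
# an effective tube of diameter d: a blob of size d holds ~ d^(1/nu) monomers.
s = lambda d: d**(1/nu - 1)
s1, s2, s3 = s(D), s(D/sp.sqrt(2)), s(D/sp.sqrt(3))

# Assumed contour-conservation constraints as lambda grows by d(lambda):
dw = -s2/(2*s3)    # change in span of regime II (3 strands)
dz = -s2/(2*s1)    # change in span of regime III (1 strand)

# Each span ell with n strands holds n^(3/2) * ell / D blobs (~kT each):
dF_dlam = (2**sp.Rational(3, 2) + 3**sp.Rational(3, 2)*dw + dz)/D

# D * dF/dlambda should be a pure (negative) number, independent of D:
val2 = float((dF_dlam*D).subs({nu: 0.588, D: 2}))
val4 = float((dF_dlam*D).subs({nu: 0.588, D: 4}))
print(round(val2, 2), round(val4, 2))  # -> -0.56 -0.56
assert abs(val2 - val4) < 1e-9
```

Since $D\,dF/d\lambda$ is a pure number, $f=|dF/d\lambda|$ is predicted to scale as $N_{\rm arm}^0 D^{-1}$, independent of the numerical prefactors assumed here.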
In addition, as noted above, the approximation of Ref.~\citenum {jung2012ring} employed here has been shown to suffer quantitative inaccuracies.\cite{polson2017segregation} It is expected that the scaling will improve for larger values of $D$ and $N_{\rm arm}$. However, such calculations are not computationally feasible for us at present. Let us now turn to the folding behavior of longitudinally confined semiflexible polymers in the classic Odijk regime. Figure~\ref{F.N400.D6} shows free energy functions for semiflexible polymers of length $N$=400 in a tube of diameter $D$=6. Results are shown for several different values of the persistence length $P$ with $P/D \geq 3$, thus marginally satisfying the conditions for the Odijk regime. The functions have a form that is qualitatively different from those for the flexible chains shown in Fig.~\ref{fig:F.dFdz.N500}(a). In this case, there are two distinct regimes, one corresponding to a deep free energy well at high $\lambda$, and a second with a linear variation of $F$ with $\lambda$ over a broad range of $\lambda$ to the left of the well, with a sharp boundary between the two. Similar results were observed for other values of $N$ (data not shown). As the chains stiffen, the position of the free energy minimum shifts slightly to higher $\lambda$, and its width narrows. Thus, the polymers become more elongated and the fluctuations in the extension length decrease, in accord with established results for confined semiflexible chains in the classic Odijk regime.\cite{odijk1983statistics} The increase in $F$ as $\lambda$ decreases from its value at the minimum corresponds physically to the formation of a hairpin turn.
The free energy cost of forming the hairpin is dominated by the bending energy of the chain, but there is a significant entropic contribution as well.\cite{odijk2006dna,chen2017conformational} The more gradual change in $F$ in the linear regime corresponds to the change in the degree of overlap of the stiff subchains. As $\lambda$ decreases, the overlap increases, leading to a reduction in conformational entropy, thus increasing $F$. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.45\textwidth]{F.N400.D6.eps} \end{center} \caption{ Free energy vs $\lambda$ for a semiflexible linear polymer of length $N$=400 in a cylindrical tube of diameter $D$=6. Results for several values of the persistence length $P$ are shown. } \label{F.N400.D6} \end{figure} Figure~\ref{dFdk.P.D6}(a) shows $f$ vs $P$ for a fixed tube diameter of $D$=6, while Fig.~\ref{dFdk.P.D6}(b) shows $f$ vs $D$ for a fixed persistence length of $P$=29.5. In each case, results for several polymer lengths are shown. No clear dependence on $N$ is evident. The derivatives were calculated from fits to curves such as those in Fig.~\ref{F.N400.D6} for the linear regime, where a single hairpin turn was present and there was partial intrapolymer overlap. Fits to the data suggest a scaling of $f \sim N^0 P^{-\alpha} D^{-\beta}$, where $\alpha=0.37\pm 0.01$ and $\beta=1.72\pm 0.02$. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.45\textwidth]{dFdk.P.D.eps} \end{center} \caption{ (a) Free energy gradient $f \equiv |dF/d\lambda|$ in the linear overlap regime vs persistence length $P$ for a semi-flexible linear polymer in a cylinder of diameter $D$=6. Results for several polymer lengths are shown. The solid line is a fit to all of the data. (b) $f$ in the linear overlap regime vs tube diameter for a semi-flexible polymer of persistence length $P$=29.5.
The solid line is a fit to all of the data.} \label{dFdk.P.D6} \end{figure} A simple explanation for the scaling behavior of $f$ follows from using arguments similar to those employed by Odijk in a theoretical study of backfolding regimes of DNA in nanochannels.\cite{odijk2008scaling} Recall that the confinement free energy of a single semiflexible chain in this regime is given by $F_1(L) \sim k_{\rm B}T L/l_{\rm def}$, where $L$ is the contour length of the polymer and the Odijk deflection length satisfies $l_{\rm def} \approx D^{2/3}P^{1/3}$. The confinement free energy of a backfolded chain can be approximated as $F_{\rm c} = F_1(L-h) + F_{\rm h} + F_{\rm int}$, where $h$ is the length of the hairpin fold, $F_1(L-h)$ is the Odijk free energy for the two subchains outside the fold, $F_{\rm int}$ is the interaction free energy between the strands, and $F_{\rm h}$ is the free energy of the hairpin. In the regime of interest, only $F_{\rm int}$ depends on $\lambda$, so we neglect the other two terms. To estimate $F_{\rm int}$, we use an approximation for the free energy of a system of long, hard rigid rods. For that system, the interaction free energy in the 2nd virial approximation is given by $F_{\rm int}/k_{\rm B}T = (l^2\sigma N^2/V) \langle |\sin\gamma|\rangle$ for $N$ rods of length $l$ and diameter $\sigma$ confined to a volume $V$, where $\gamma$ is the angle between the rods.\cite{odijk1986theory} When the rods are highly aligned, $\langle|\sin\gamma|\rangle \sim \sqrt{\langle\theta^2\rangle}$, where $\theta$ is the angle between the rod and the alignment direction. To apply these results to the present system, we treat each deflection length $l_{\rm def}$ as a rigid rod and substitute $l\rightarrow l_{\rm def}\sim D^{2/3}P^{1/3}$. In addition, we assume that the alignment arises principally from confinement, such that $\langle\theta^2\rangle \sim (D/l_{\rm def})^2$. Further, we note that $V$ is the volume over which the polymer strands overlap.
This is given by $V\sim l_{\rm ov} D^2$, where the overlap region is $l_{\rm ov} \approx (L-h-\lambda)/2$. Finally, replacing $N$ with the number of deflection lengths of the two overlapping segments, $N\rightarrow 2l_{\rm ov}/l_{\rm def}$, it can be shown that $F\sim (L-h-\lambda)D^{-5/3}P^{-1/3}$ plus terms independent of $\lambda$. Thus, we predict that $f\equiv |dF/d\lambda| \sim N^0 D^{-\beta}P^{-\alpha}$, where $\beta=5/3$ and $\alpha=1/3$. These predictions are close to the measured values of $\beta=1.72\pm 0.02$ and $\alpha=0.37\pm 0.01$. The small quantitative discrepancy may arise from the fact that the system only just marginally satisfies the condition defining the Odijk regime that $P\gg D$, as well as from other approximations that have been employed. By comparison, in our related previous study of the segregation of overlapping confined semiflexible chains, the scaling exponents for the free energy gradient were measured to be $\beta\approx 2$ and $\alpha\approx 0.37$.\cite{polson2014polymer} The larger discrepancy for the scaling with respect to $D$ in that study is likely due to the narrower tube diameters considered, i.e. $2\leq D \leq 5$, in contrast to the range here of $5\leq D \leq 7$. In the present study, the wider tubes correspond to lower packing fraction, for which the second virial approximation is better suited to describe interactions between the chain segments. We now consider the deep wells in the free energy functions. As noted earlier, the wells are associated with the formation of hairpin turns of confined semiflexible polymers. Figure~\ref{fig:delFw} shows the dependence of the free energy well depth on the polymer persistence length for various polymer lengths. As expected, there is no significant dependence of $\Delta F_{\rm w}$ on the polymer length, since the well depth is a measure of the free energy required to form a hairpin turn. 
Over the range of $P$ considered here, we note that $\Delta F_{\rm w}$ increases roughly linearly with $P$. Overlaid on the data are two theoretical predictions. The first is that of a mechanical model developed by Odijk.\cite{odijk2006dna,odijk2008scaling} In that work, it was noted that the effects of entropic depletion on the hairpin make an appreciable contribution to the hairpin free energy in addition to the bending energy. A more recent and far more rigorous theoretical analysis by Chen\cite{chen2017conformational} used the Green's function formalism to determine the hairpin free energy and the global persistence length. Results obtained using the latter were found to be consistent with the simulation results of Muralidhar and Dorfman in their study of the backfolded Odijk regime.\cite{muralidhar2016backfolding} An analytical representation of Chen's numerical solution for the hairpin free energy, $F_{\rm hp}$, is shown in the figure. It is qualitatively similar to Odijk's prediction, though with somewhat lower values of $\Delta F_{\rm w}$. This arises from the failure of Odijk's theory to account for the orientational entropy of the hairpin planes.\cite{chen2017conformational} (Note that the calculations for the Odijk predictions were carried out using a correction to an error that was noted by Chen.\cite{chen2017conformational}) As evident in the figure, the predictions of Chen are much closer to the simulation data than those of Odijk. There appears to be a very slight overestimate of $\Delta F_{\rm w}$ by an amount $\lesssim k_{\rm B}T$. This effect may be due to the discretization of the polymer and may also be associated with the small fluctuating bond length used in our model, neither of which is present in the theoretical model employed in Ref.~\citenum{chen2017conformational}.
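Returning to the gradient in the linear overlap regime, the second-virial estimate assembled earlier amounts to multiplying out the quoted ingredients; the sketch below does so symbolically (numerical prefactors are not meaningful at the scaling level):

```python
import sympy as sp

# Second-virial scaling for the overlap free energy of a backfolded Odijk
# chain, assembling the ingredients quoted in the text.
D, P, l_ov, sigma = sp.symbols('D P l_ov sigma', positive=True)

l_def = D**sp.Rational(2, 3) * P**sp.Rational(1, 3)  # Odijk deflection length
N_rod = 2*l_ov/l_def          # deflection segments in the overlap region
V = l_ov*D**2                 # overlap volume
sin_gamma = D/l_def           # <|sin gamma|> ~ sqrt(<theta^2>) ~ D/l_def

F_int = sp.simplify(l_def**2 * sigma * N_rod**2 / V * sin_gamma)

# F_int ~ sigma * l_ov * D^(-5/3) * P^(-1/3), i.e. beta = 5/3, alpha = 1/3
target = 4*sigma*l_ov*D**sp.Rational(-5, 3)*P**sp.Rational(-1, 3)
assert sp.simplify(F_int - target) == 0
```

Since $l_{\rm ov}\approx(L-h-\lambda)/2$, the gradient $f=|dF_{\rm int}/d\lambda|$ inherits the $N^0 D^{-5/3}P^{-1/3}$ scaling quoted in the text.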
\begin{figure}[!ht] \begin{center} \includegraphics[width=0.45\textwidth]{delFw.eps} \end{center} \caption{(a) Free energy well depth $\Delta F_{\rm w}$ vs persistence length $P$ for a tube diameter of $D$=6. Results for various polymer lengths are shown. The dashed and solid lines are the hairpin turn free energy values predicted by Odijk in Ref.~\citenum {odijk2006dna} and by Chen in Ref.~\citenum {chen2017conformational}, respectively. (b) As in (a), except $\Delta F_{\rm w}$ vs tube diameter $D$ for persistence length $P$=29.5. } \label{fig:delFw} \end{figure} We turn finally to a brief look at the behavior of S-loop configurations in nanotubes. The structure of such a formation is illustrated in Fig.~\ref{fig:illust}(d). In order to prevent the formation of single hairpin turns, the polymer ends are tethered perpendicular to the centers of virtual caps. Note that this has the effect of significantly reducing the conformational freedom of the polymer near the ends, which in turn will affect the results, especially for short polymers. Figure~\ref{F_N400_Sloop} shows free energy functions for a linear semiflexible polymer chain with such constraints imposed at its ends. Since single hairpins are precluded, the polymer forms an S-loop composed of two hairpin turns as the end-to-end distance $\lambda$ decreases. The functions are qualitatively similar to those for polymers with a single hairpin turn in Fig.~\ref{F.N400.D6}. In this case, the linear regime at lower $\lambda$ corresponds to end separations where the S-loop is present. As $\lambda$ decreases and the ends approach each other, the overlap region of the loop increases and conformational entropy decreases, leading to an increase in $F$. The depth of the free energy well located at high $\lambda$ corresponds roughly to the free energy cost of forming the S-loop.
\begin{figure}[!ht] \begin{center} \includegraphics[width=0.45\textwidth]{F_N400_Sloop.eps} \end{center} \caption{ Free energy vs $\lambda$ for a semi-flexible cylindrically confined linear polymer with an S-loop. Results are shown for a polymer of length $N$=400, persistence length $P$=24.5 and for several values of the tube diameter $D$. } \label{F_N400_Sloop} \end{figure} Figure~\ref{dFdkappa.P.R2.5}(a) shows the variation of $f$ with $P$ in the linear regime for $D$=4, while Fig.~\ref{dFdkappa.P.R2.5}(b) shows the variation of $f$ with $D$ for $P$=24.5. In each case, results for $N$=200 and $N$=400 are shown. As in the case of the results for a single hairpin turn, there is no significant dependence on the polymer length. A fit to all of the data suggests a scaling of $f \sim D^{-\beta} P^{-\alpha}$, where $\beta$=$1.91\pm 0.03$ and $\alpha$=$0.36\pm 0.01$. \begin{figure}[!ht] \begin{center} \vskip 0.1in \includegraphics[width=0.45\textwidth]{dFdk.S.P.D.eps} \end{center} \caption{ (a) $f\equiv |dF/d\lambda|$ in the S-loop regime vs persistence length $P$ for a cylindrically confined semi-flexible linear polymer. Results are shown for a confinement tube of diameter $D$=4 and polymer lengths of $N$=200 and $N$=400. The solid line is a fit to both data sets. (b) $f$ in the S-loop regime vs confinement tube diameter $D$ for a semi-flexible linear polymer of persistence length $P$=24.5. Results for $N$=200 and $N$=400 are shown. The solid line is a fit to both data sets.} \label{dFdkappa.P.R2.5} \end{figure} To interpret these scaling results, we employ the same theoretical approach as that used for the single-hairpin data. In this case, we note that the overlap volume $V\sim l_{\rm ov}D^2$ has an overlap length $l_{\rm ov}\approx (L-2h-\lambda)/2$, where $L$ is the polymer contour length and $h$ is the length of each hairpin.
In addition, since there are three polymer segments in the overlap region rather than two in the case of the single hairpin, there are $3l_{\rm ov}/l_{\rm def}$ deflection lengths in the overlap region, in contrast to $2l_{\rm ov}/l_{\rm def}$ for the single hairpin. Otherwise, the analysis is identical, leading to the same predicted scaling exponents of $\alpha$=1/3 and $\beta$=5/3. Moreover, we expect that the ratio of the derivatives for the S-loop and single hairpin is $f_{\rm S}/f_{\rm hp}=(3/2)^2=2.25$. The predicted scaling exponents are somewhat less accurate for the S-loop than for the single hairpin. In addition, we find, for example, that $f_{\rm S}/f_{\rm hp}\approx 3.1$ at $D$=6 and $P$=24.5, which is somewhat larger than the predicted value. The discrepancies likely arise from multiple sources. We note that for fixed $D$, the density of deflection segments is larger for the S-loop, perhaps leading to a reduction in the accuracy of the 2nd virial approximation to describe excluded volume interactions. Further, as noted above, the reduction in orientational freedom of the ends tethered to the virtual end caps likely affects the results. Additional simulations using much longer chains should clarify this matter, but such calculations are not currently feasible. \section{Conclusions} \label{sec:conclusions} In this study, we have used Monte Carlo simulations to study the conformational free energy $F$ of folded polymers in cylindrical channels. Specifically, the simulations were used to measure the variation of $F$ with respect to the end-to-end distance $\lambda$, which determines the degree of internal overlap along the tube. The gradient in the free energy, $f\equiv |dF/d\lambda|$, is the effective force that drives unfolding toward the polymer's equilibrium state and thus is essential for a meaningful analysis of unfolding kinetics measurements in experiments and simulations.
The main goal of this work was to compare the scaling properties of $f$ with predictions from scaling arguments. For fully flexible chains, we find a gradient scaling of $f\sim N^0 D^{-1.20\pm 0.01}$ for chains of up to $N$=500 monomers and cylinder diameters of $D$=$3-18$, in units of the monomer diameter. This differs appreciably from the prediction $f\sim N^0 D^{-1}$, a discrepancy that is likely due to finite-size effects and deficiencies in the approximation used to account for overlap, i.e., that overlapping polymer strands behave as noninteracting polymers in effective tubes of size $D/\sqrt{2}$.\cite{jung2012ring} A similar and even greater discrepancy was observed for the folding free energy of a single arm of a star polymer. The transition from a uniformly compressed linearly ordered polymer to a backfolded polymer was found to be gradual, in contrast to a recent theoretical prediction. In the case of confined semiflexible polymers in the classic Odijk regime, we find a free energy gradient scaling of $f\sim N^0 D^{-1.72\pm 0.02} P^{-0.37\pm 0.01}$, which is close to the prediction $f\sim N^0 D^{-5/3} P^{-1/3}$ obtained by treating interactions between deflection segments at the 2nd virial level. In the case of S-loops, the agreement was somewhat poorer, perhaps due to the higher segment density in the overlap regime and a resulting breakdown in the 2nd virial approximation. Finally, we note that the measured free energy of a hairpin turn was quantitatively consistent with a recent theoretical prediction.\cite{chen2017conformational} The work carried out in this study can be extended in various directions. One straightforward and useful extension is an examination of other channel shapes.
Although the conformational behavior of polymers confined to cylindrical channels has been the subject of a number of theoretical studies, understanding the effects of confinement in {\it rectangular} channels tends to be more directly relevant to experimental studies. Recent studies employing square or rectangular channels using computer simulation\cite{muralidhar2016backfolding,muralidhar2016backfolded} and scaling theories\cite{werner2015scaling} have highlighted the influence of the channel shape and size on polymer scaling regimes. In the future, it will be useful to employ the methods of the present study to characterize the free energy functions for folded polymers in such channels. Among other points of interest, it is expected that the entropic contributions to the hairpin free energy of semiflexible polymers will differ appreciably from those for cylindrical channels. Another useful extension would be an examination of the free energy functions in the case of $P\sim D$. Here, the calculations could be used to quantify the degree of validity of the assumed form of the interaction between overlapping polymer subchains employed in the theory of the backfolded Odijk regime, as well as extend the range of verification of Chen's prediction of the hairpin free energy. Finally, it would be useful to carry out molecular dynamics simulations to examine whether the entropic force obtained from the free energy functions can be used to accurately describe the kinetics of unfolding. \begin{acknowledgement} This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants Program. We are grateful to the Atlantic Computational Excellence Network (ACEnet) and WestGrid for use of their computational resources.
\end{acknowledgement} \providecommand{\latin}[1]{#1} \makeatletter \providecommand{\doi} {\begingroup\let\do\@makeother\dospecials \catcode`\{=1 \catcode`\}=2 \doi@aux} \providecommand{\doi@aux}[1]{\endgroup\texttt{#1}} \makeatother \providecommand*\mcitethebibliography{\thebibliography} \csname @ifundefined\endcsname{endmcitethebibliography} {\let\endmcitethebibliography\endthebibliography}{} \begin{mcitethebibliography}{52} \providecommand*\natexlab[1]{#1} \providecommand*\mciteSetBstSublistMode[1]{} \providecommand*\mciteSetBstMaxWidthForm[2]{} \providecommand*\mciteBstWouldAddEndPuncttrue {\def\unskip.}{\unskip.}} \providecommand*\mciteBstWouldAddEndPunctfalse {\let\unskip.}\relax} \providecommand*\mciteSetBstMidEndSepPunct[3]{} \providecommand*\mciteSetBstSublistLabelBeginEnd[3]{} \providecommand*\unskip.}{} \mciteSetBstSublistMode{f} \mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})} \mciteSetBstSublistLabelBeginEnd {\mcitemaxwidthsubitemform\space} {\relax} {\relax} \bibitem[Dai \latin{et~al.}(2016)Dai, Renner, and Doyle]{dai2016polymer} Dai,~L.; Renner,~C.~B.; Doyle,~P.~S. The polymer physics of single DNA confined in nanochannels. \emph{Adv. Colloid Interface Sci.} \textbf{2016}, \emph{232}, 80--100\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Reisner \latin{et~al.}(2012)Reisner, Pedersen, and Austin]{reisner2012dna} Reisner,~W.; Pedersen,~J.~N.; Austin,~R.~H. DNA confinement in nanochannels: Physics and biological applications. \emph{Rep. Prog. Phys.} \textbf{2012}, \emph{75}, 106601\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Dorfman \latin{et~al.}(2012)Dorfman, King, Olson, Thomas, and Tree]{dorfman2012beyond} Dorfman,~K.~D.; King,~S.~B.; Olson,~D.~W.; Thomas,~J.~D.; Tree,~D.~R. 
\section{Introduction} \emph{Deep Learning} has gained popularity over the last decade due to its ability to learn data representations in an unsupervised manner and generalize to unseen data samples using hierarchical representations. The most recent and best-known \emph{deep learning model} is the \emph{Deep Belief Network}~\cite{Hinton06afast}. Over the last decade, numerous breakthroughs have been made in the field of Deep Learning; a notable one being~\cite{QuocCCDN12}, where a locally connected sparse autoencoder was used to detect objects in the ImageNet dataset~\cite{Deng09imagenet}, producing state-of-the-art results. In~\cite{MohamedDH12}, Deep Belief Networks have been used for modeling acoustic signals and have been shown to outperform traditional approaches using Gaussian Mixture Models for Automatic Speech Recognition (ASR). They have also been found useful in hybrid learning models for noisy handwritten digit classification~\cite{basuesann2015}. Another closely related approach, which has gained much traction over the last decade, is the Convolutional Neural Network~\cite{Lecun98gradient-basedlearning}. It has been shown to outperform Deep Belief Networks in classical object recognition tasks like MNIST~\cite{mnist} and CIFAR~\cite{Krizhevsky09learningmultiple}. A related and equally hard problem is satellite\footnote{Note that we use the terms satellite and airborne interchangeably in this paper because the extracted features and learning algorithms are generic enough to handle both satellite and airborne datasets.} image classification. It involves terabytes of data and significant variations due to conditions in data acquisition, pre-processing and filtering. Traditional supervised learning methods like Random Forests~\cite{Breiman:2001} do not generalize well for such a large-scale learning problem. A novel classification algorithm for detecting roads in aerial imagery using Deep Neural Networks was proposed in~\cite{MnihHinton2010}.
Detecting various land cover classes is, in general, a difficult problem considering the significantly higher intra-class variability in land cover types such as trees, grasslands, barren lands, water bodies, etc.\ as compared to that of roads. Also, in~\cite{MnihHinton2010}, the authors used a window of size 64$\times$64 to derive contextual information. For our general classification problem, a 64$\times$64 window is too big a context, covering a total area of 64m$\times$64m. A tree canopy or a grassy patch can typically be much smaller than this area, and hence we are constrained to use a contextual window having a maximum dimension of 28m$\times$28m. Traditional supervised learning approaches require carefully selected handcrafted features and substantial amounts of labeled data. On the other hand, purely unsupervised approaches are not able to learn the higher-order dependencies inherent in the land cover classification problem. So, we propose a combination of handcrafted features that were first used in~\cite{haralick1973} and an unsupervised learning framework using a Deep Belief Network~\cite{Hinton06afast} that can learn data representations from large amounts of unlabeled data. There has been limited research in the field of satellite image classification due to a dearth of labeled satellite image datasets. The best-known labeled satellite dataset is the NLCD 2006~\cite{wickham2013}, which covers the continental United States and provides a spatial resolution of 30m. However, at this resolution, it becomes extremely difficult to distinguish between various landcover types. A high-resolution dataset acquired at a spatial resolution of 1.2m was used in~\cite{MnihHinton2010}. However, the total area covered by the datasets, namely URBAN1 and URBAN2, was ${\sim}600$ square kilometers, which included both training and testing datasets. The labeling was also available only for roads.
Satellite/airborne image classification at a spatial resolution of 1 m was addressed in \cite{basu2015}. However, they performed tree-cover delineation by training a binary classifier based on Feedforward Backpropagation Neural Networks. The main contributions of our work are twofold -- (1) We first present two labeled datasets of airborne images -- SAT-4 and SAT-6, covering a total area of ${\sim}800$ square kilometers -- which can be used to further the research and investigate the use of various learning models for airborne image classification. Both SAT-4 and SAT-6 are sampled from a much larger dataset~\cite{naip}, which covers the whole of the continental United States and can be used to create labeled landcover maps, which can then be used for various applications such as measuring ground carbon content or estimating the total area of rooftops for solar power generation. (2) Next, we present a framework for the classification of satellite/airborne imagery that a) extracts features from the image, b) normalizes the features, and c) feeds the normalized feature vectors to a Deep Belief Network for classification. On the SAT-4 dataset, our framework outperforms three state-of-the-art object recognition algorithms -- Deep Belief Networks, Convolutional Neural Networks and Stacked Denoising Autoencoders -- by $\sim$11\% and produces an accuracy of 97.95\%. On SAT-6, it produces an accuracy of 93.9\% and outperforms the other algorithms by $\sim$15\%. We also present a statistical analysis based on a Distribution Separability Criterion and Intrinsic Dimensionality Estimation to justify the effectiveness of our feature extraction approach in obtaining better representations for satellite data.
\section[Dataset]{Dataset\footnote{The SAT-4 and SAT-6 datasets are available at the web link \cite{datasets}}} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{sat_img} \caption{Sample images from the SAT-6 dataset} \label{Fig:dataset} \end{figure*} Images were extracted from the National Agriculture Imagery Program (NAIP \cite{naip}) dataset. The NAIP dataset consists of a total of 330,000 scenes spanning the whole of the Continental United States (CONUS). We used the uncompressed digital ortho quarter quad tiles (DOQQs), which are GeoTIFF images whose extents correspond to United States Geological Survey (USGS) topographic quadrangles. On average, the image tiles are $\sim$6000 pixels in width and $\sim$7000 pixels in height, measuring around 200 megabytes each. The entire NAIP dataset for CONUS is $\sim$65 terabytes. The imagery is acquired at a ground sample distance (GSD) of 1 meter. The horizontal accuracy lies within 6 meters of ground control points identifiable from the acquired imagery \cite{modis}. The images consist of 4 bands -- red, green, blue and Near Infrared (NIR). In order to maintain the high variance inherent in the entire NAIP dataset, we sample image patches from a multitude of scenes (a total of 1500 image tiles) covering different landscapes like rural areas, urban areas, densely forested areas, mountainous terrain, small to large water bodies, agricultural areas, etc., covering the whole state of California. An image labeling tool developed as part of this study was used to manually label uniform image patches belonging to a particular landcover class. Once labeled, $28{\times}28$ non-overlapping blocks were extracted from the uniform image patch with a sliding window and saved to the dataset with the corresponding label.
We chose $28{\times}28$ as the window size to maintain a significantly bigger context as pointed out in \cite{MnihHinton2010}, while at the same time not making the window so big as to wash out the relative statistical properties of the target class conditional distributions within the contextual window. Care was taken to avoid interclass overlaps within a selected and labeled image patch. Sample images from the dataset are shown in Figure \ref{Fig:dataset}. \subsection{SAT-4} SAT-4 consists of a total of 500,000 image patches covering four broad land cover classes -- barren land, trees, grassland and a class that consists of all land cover classes other than the above three. 400,000 patches (comprising four-fifths of the total dataset) were chosen for training and the remaining 100,000 (one-fifth) were chosen as the testing dataset. We ensured that the training and test datasets belong to disjoint sets of image tiles. Each image patch is size normalized to $28{\times}28$ pixels. Once generated, both the training and testing datasets were randomized using a pseudo-random number generator. \subsection{SAT-6} SAT-6 consists of a total of 405,000 image patches, each of size $28{\times}28$ and covering 6 landcover classes -- barren land, trees, grassland, roads, buildings and water bodies. 324,000 images (comprising four-fifths of the total dataset) were chosen as the training dataset and 81,000 (one-fifth) were chosen as the testing dataset. Similar to SAT-4, the training and test sets were selected from disjoint NAIP tiles. Once generated, the images in the dataset were randomized in the same way as for SAT-4. The specifications for the various landcover classes of SAT-4 and SAT-6 were adopted from those used in the National Land Cover Data (NLCD) algorithm \cite{nlcd_desc}.
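The patch-extraction step described above can be sketched as follows. This is a minimal NumPy sketch, not the labeling tool itself; the function name and the toy tile dimensions are illustrative:

```python
import numpy as np

def extract_patches(tile, size=28):
    """Split a labeled image tile into non-overlapping size x size blocks.

    tile: (H, W, C) array holding the 4 NAIP bands (R, G, B, NIR).
    Blocks that would extend past the tile boundary are discarded.
    """
    h, w = tile.shape[:2]
    return [
        tile[r:r + size, c:c + size]
        for r in range(0, h - size + 1, size)
        for c in range(0, w - size + 1, size)
    ]

# A toy 60x90 "tile" with 4 bands yields a 2x3 grid of 28x28 patches.
tile = np.zeros((60, 90, 4), dtype=np.uint8)
patches = extract_patches(tile)
assert len(patches) == 6 and patches[0].shape == (28, 28, 4)
```

In the actual pipeline, each extracted block would be saved together with the label of the uniform patch it came from.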
\section{Investigation of various \\ Deep Learning Models} \subsection{Deep Belief Network} \label{sec:DBN} A Deep Belief Network (DBN) consists of multiple layers of stochastic, latent variables trained using an unsupervised learning algorithm followed by a supervised learning phase using feedforward backpropagation Neural Networks. In the unsupervised pre-training stage, each layer is trained using a Restricted Boltzmann Machine (RBM). Unsupervised pre-training is an important step in solving a classification problem with terabytes of data and high variability. A DBN is a graphical model \cite{Koller2009} where neurons of the hidden layer are conditionally independent of one another for a particular configuration of the visible layer and vice versa. A DBN can be trained layer-wise by iteratively maximizing the conditional probability of the input vectors or visible vectors given the hidden vectors and a particular set of layer weights. As shown in \cite{Hinton06afast}, this layer-wise training can help in improving the variational lower bound on the probability of the input training data, which in turn leads to an improvement of the overall generative model. We first provide a formal introduction to the Restricted Boltzmann Machine. The RBM can be described by the energy function: \begin{equation} E(v,h) = -\sum_{i} a_i v_i - \sum_{j} b_j h_j - \sum_{i} \sum_{j} h_j w_{i,j} v_i \end{equation} where the RBM consists of a matrix of layer weights $W=(w_{i,j})$ between the hidden units $h_j$ and the visible units $v_i$. The $a_i$ and $b_j$ are the bias weights for the visible units and the hidden units respectively. The RBM takes the structure of a bipartite graph and hence only has inter-layer connections between the hidden and visible layer neurons but no intra-layer connections within the hidden or visible layers.
So, the activations of the visible unit neurons are mutually independent for a given set of hidden unit activations and vice versa \cite{carreiraperpinan2005contrastive}. Hence, by setting either $h$ or $v$ constant, we can compute the conditional distribution of the other as follows: \begin{equation} P(h_j=1|v) = \sigma(b_j + \sum_{i=1}^{m} w_{i,j} v_{i}) \end{equation} \begin{equation} P(v_i=1|h) = \sigma(a_i + \sum_{j=1}^{n} w_{i,j} h_{j}) \end{equation} where $\sigma$ denotes the log sigmoid function: \begin{equation} f(x) = \frac{1}{1+e^{-x}} \end{equation} The training algorithm maximizes the expected log probability assigned to the training dataset $V$. So, if the training dataset $V$ consists of the visible vectors $v$, then the objective function is as follows: \begin{equation} \argmax_{W} E\Big[\sum_{v \in V} \log{P(v)} \Big] \end{equation} An RBM is trained using the \emph{Contrastive Divergence} algorithm \cite{carreiraperpinan2005contrastive}. Once trained, the DBN can be used to initialize the weights of the Neural Network for the supervised learning phase \cite{Bengio2009}. Next, we investigate the classification accuracy of various architectures of DBN on both the SAT-4 and SAT-6 datasets. \subsubsection{DBN Results on SAT-4 \& SAT-6} To investigate the performance of the DBN, we experiment with both \emph{big} and \emph{deep} neural architectures. This is done by varying the number of neurons per layer as well as the total number of layers in the network. Our objective is to investigate whether the more complex features learned in the deeper layers of the DBN are able to provide the network with the discriminative power required to handle the higher-order texture features typical of satellite imagery data. The results from the DBN for various network architectures for SAT-4 and SAT-6 are enumerated in Table \ref{table:DBN_accuracy_SAT_4_and_SAT_6}.
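The conditional distributions above, together with a single contrastive-divergence (CD-1) weight update, can be sketched as follows. This is a minimal NumPy sketch on a toy binary vector; the layer sizes and learning rate are illustrative, and, as is a common simplification, the hidden probabilities (rather than sampled states) are used in the negative phase:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny RBM: m visible units, n hidden units.
m, n = 6, 4
W = 0.01 * rng.standard_normal((m, n))   # weights w_{i,j}
a = np.zeros(m)                          # visible biases a_i
b = np.zeros(n)                          # hidden biases b_j

v0 = (rng.random(m) < 0.5).astype(float)  # one binary training vector

# Positive phase: P(h_j = 1 | v) = sigma(b_j + sum_i w_{i,j} v_i)
ph0 = sigmoid(b + v0 @ W)
h0 = (rng.random(n) < ph0).astype(float)

# Negative phase: reconstruct v, then recompute hidden probabilities.
pv1 = sigmoid(a + h0 @ W.T)              # P(v_i = 1 | h)
ph1 = sigmoid(b + pv1 @ W)

# CD-1 update: difference of positive and negative associations.
lr = 0.1
W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
a += lr * (v0 - pv1)
b += lr * (ph0 - ph1)
```

Stacking such layers and training them one at a time gives the greedy layer-wise pre-training used by the DBN.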
Each network was trained for a maximum of 500 epochs and the network state with the lowest validation error was used for testing. Regularization is done using the $L_2$ norm. It can be seen from the table that for both SAT-4 and SAT-6, the classifier accuracy initially improves and then falls as more neurons or layers are added to the network. \begin{table}[h] \centering \begin{tabular}{ | c | c | c | } \hline \textbf{Network Arch.} & \textbf{Classifier} & \textbf{Classifier}\\ \textbf{Neurons/layer} & \textbf{Accuracy} & \textbf{Accuracy} \\ \textbf{[Layers]} & \textbf{SAT-4 (\%)} & \textbf{SAT-6 (\%)} \\ \hline 100 [2] & 79.74 & 68.51 \\ \hline \textbf{100 [3]} & \textbf{81.78} & \textbf{76.47} \\ \hline 100 [4] & 79.802 & 74.44 \\ \hline 100 [5] & 62.776 & 63.14 \\ \hline 500 [2] & 68.916 & 60.35 \\ \hline 500 [3] & 71.674 & 61.12 \\ \hline 500 [4] & 65.002 & 57.31 \\ \hline 500 [5] & 64.174 & 55.78 \\ \hline \end{tabular} \caption{Classification Accuracy of DBN with various architectures on SAT-4 and SAT-6} \label{table:DBN_accuracy_SAT_4_and_SAT_6} \end{table} \subsection{Convolutional Neural Network} The Convolutional Neural Network (CNN), first introduced in \cite{fukushima:neocognitronbc}, is a hierarchical model inspired by the human visual cortical system \cite{Hubel:62}. It was significantly improved and applied to document recognition in \cite{Lecun98gradient-basedlearning}. A committee of 35 convolutional neural nets with elastic distortions and width normalization \cite{Ciresan2012} has produced state-of-the-art results on the MNIST handwritten digits dataset. A CNN builds a hierarchical representation using convolutional layers and fully connected layers, with non-linear transformations and feature pooling. It also includes local or global pooling layers. Pooling can be implemented in the form of subsampling, averaging, max-pooling or stochastic pooling.
Each of these pooling architectures has its own advantages and limitations, and numerous studies investigate the effect of different pooling functions on the representational power of the model (\cite{Scherer2010},\cite{ICML2011Saxe_551}). A very important feature of Convolutional Neural Networks is weight sharing in the convolutional layers, so that the same filter bank is applied to all pixels in a particular layer, thereby generating sparse networks that can generalize well to unseen data samples while maintaining the representational power inherent in deep hierarchical architectures. We investigate the use of different CNN architectures for SAT-4 and SAT-6 as detailed below. \subsubsection{CNN Results on SAT-4 \& SAT-6} For CNN, we vary the number of feature maps in each layer as well as the total number of convolutional and subsampling layers. The results from various network configurations with increasing numbers of maps and layers are enumerated in Table \ref{table:CNN_accuracy_SAT_4_and_SAT_6}. For the experiments, we used both $3{\times}3$ and $5{\times}5$ kernels for the convolutional layers and $3{\times}3$ averaging and max-pooling kernels for the sub-sampling layers. We also use overlapping pooling windows with a stride size of $2$ pixels. The last sub-sampling layer is connected to a fully-connected layer with 64 neurons. The output of the fully-connected layer is fed into a 4-way softmax function that generates a probability distribution over the 4 class labels of SAT-4 and a 6-way softmax for the 6 class labels of SAT-6. In Table \ref{table:CNN_accuracy_SAT_4_and_SAT_6}, the ``Ac-Bs(n)'' notation denotes that the network has a convolutional layer with A feature maps followed by a sub-sampling layer with a kernel of size $B{\times}B$. `n' denotes the type of pooling function in the sub-sampling layer: `a' denotes average pooling while `m' denotes max-pooling.
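As a concrete reading of this notation, the feature-map sizes of the 6c-3s(a)-12c-3s(m) network with $5{\times}5$ convolution kernels can be traced layer by layer. This plain-Python sketch assumes valid (unpadded, stride-1) convolutions and the stride-2 overlapping pooling described above:

```python
def conv_out(size, kernel):
    # 'valid' convolution: no padding, stride 1
    return size - kernel + 1

def pool_out(size, kernel=3, stride=2):
    # overlapping 3x3 pooling window, stride 2
    return (size - kernel) // stride + 1

s = 28                 # input patch is 28x28
s = conv_out(s, 5)     # 6c  (5x5): 24x24, 6 maps
s = pool_out(s)        # 3s(a):     11x11
s = conv_out(s, 5)     # 12c (5x5): 7x7, 12 maps
s = pool_out(s)        # 3s(m):     3x3
flat = 12 * s * s      # flattened input to the 64-neuron FC layer
print(s, flat)         # -> 3 108
```

Under these assumptions, the 64-neuron fully-connected layer thus receives a 108-dimensional input before the final softmax.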
From the table, it can be seen that the smallest networks consistently produce the best results. Also, both for SAT-4 and SAT-6, using networks with convolution kernels of size $3{\times}3$ leads to a significant drop in classifier accuracy. The biggest networks with 50 maps per layer also exhibit a significant drop in classifier accuracy. \begin{table}[h!] \centering \begin{tabular}{ | c | c | c |} \hline \textbf{Network Architecture} & \textbf{Accuracy} & \textbf{Accuracy}\\ \textbf{(Convolution kernel size)} & \textbf{SAT-4} & \textbf{SAT-6} \\ &\textbf{(\%)} & \textbf{(\%)}\\ \hline \textbf{6c-3s(a)-12c-3s(m) ($5{\times}5$)} & \textbf{86.827} & \textbf{79.063} \\ \hline 18c-3s(a)-36c-3s(m) ($5{\times}5$) & 82.325 & 78.704\\ \hline 6c-3s(a)-12c-3s(m)-12c & 81.907 & 76.963\\ -3s(m)($5{\times}5$) & &\\ \hline 50c-3s(a)-50c-3s(m)-50c & 73.85 & 75.689 \\ -3s(m)($5{\times}5$) & &\\ \hline 6c-3s(a)-12c-3s(m) ($3{\times}3$) & 73.811 & 54.385 \\ \hline 6c-3s(m)-12c-3s(m) ($5{\times}5$) & 85.612 & 77.636 \\ \hline \end{tabular} \caption{Classification Accuracy of CNN with various architectures on SAT-4 and SAT-6} \label{table:CNN_accuracy_SAT_4_and_SAT_6} \end{table} \subsection{Stacked Denoising Autoencoder} A Stacked Denoising Autoencoder (SDAE)~\cite{Vincent:2010} consists of a combination of multiple sparse autoencoders, which can be trained in a greedy layer-wise fashion similar to that of Restricted Boltzmann Machines in a DBN. Each autoencoder is associated with a set of weights and biases. In the SDAE, each layer can be trained independently of the other layers. Once trained, the parameters of an autoencoder are frozen in place. The training algorithm comprises two phases -- a forward pass and a backward pass. The forward pass, also called the encoding phase, encodes raw image pixels into an increasingly higher-order representation.
The backward pass simply performs the reverse operation by decoding these higher-order features into simpler representations. The encoding step is given as: \begin{equation} a^{(l)} = f(z^{(l)}) \end{equation} \begin{equation} z^{(l+1)}= W^{(l,1)} a^{(l)}+ b^{(l,1)} \end{equation} And the decoding step is as follows: \begin{equation} a^{(n+l)} = f(z^{(n+l)}) \end{equation} \begin{equation} z^{(n+l+1)}= W^{(n-l,2)} a^{(n+l)}+ b^{(n-l,2)} \end{equation} The hidden unit activations of the neurons in the deepest layer are used for classification after a supervised fine-tuning using backpropagation. \subsubsection{SDAE Results on SAT-4 \& SAT-6} Different network configurations were chosen for the SDAE in a manner similar to that described above for DBN and CNN. The results are enumerated in Table \ref{table:SAE_accuracy_SAT_4_and_SAT_6}. Similar to DBN, each network is trained for a maximum of 500 epochs and the lowest test error is considered for evaluation. As highlighted in the Table, networks with 5 layers and 100 neurons in each layer produce the best results on both SAT-4 and SAT-6. It can be seen from the table that on both datasets, the classifier accuracy initially improves and then drops with increasing number of neurons and layers, similar to that of DBN. Also, the biggest networks with 500 and 2352 neurons in each layer exhibit a significant drop in classifier accuracy. 
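The encoding and decoding steps above can be illustrated with a single autoencoder layer. The following is a hedged NumPy sketch with arbitrary layer sizes and random, untrained weights (the actual networks of our experiments are much larger and are trained); it only shows the data flow of one encode/decode pair, including the input corruption that gives the \emph{denoising} autoencoder its name.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation f(z) used in the encoding/decoding steps."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_in, n_hid = 8, 3                               # illustrative sizes
W1 = rng.standard_normal((n_hid, n_in)) * 0.1    # encoder weights W^{(1,1)}
b1 = np.zeros(n_hid)                             # encoder bias b^{(1,1)}
W2 = rng.standard_normal((n_in, n_hid)) * 0.1    # decoder weights W^{(1,2)}
b2 = np.zeros(n_in)                              # decoder bias b^{(1,2)}

x = rng.random(n_in)                             # "raw pixel" input
x_noisy = x + 0.1 * rng.standard_normal(n_in)    # denoising: corrupt the input

a_hidden = sigmoid(W1 @ x_noisy + b1)            # encoding step
x_recon = sigmoid(W2 @ a_hidden + b2)            # decoding step
```

Training would adjust `W1, b1, W2, b2` so that `x_recon` approximates the clean input `x`; stacking repeats this layer by layer, feeding `a_hidden` to the next autoencoder.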
\begin{table}[h] \centering \begin{tabular}{ | c | c | c | } \hline \textbf{Network Arch.} & \textbf{Classifier} & \textbf{Classifier}\\ \textbf{Neurons/layer} & \textbf{Accuracy} & \textbf{Accuracy} \\ \textbf{[Layers]} & \textbf{SAT-4 (\%)} & \textbf{SAT-6 (\%)} \\ \hline 100 [1] & 75.88 & 74.89 \\ \hline 100 [2] & 76.854 & 76.12 \\ \hline 100 [3] & 77.804 & 76.45 \\ \hline 100 [4] & 78.674 & 76.52 \\ \hline \textbf{100 [5]} & \textbf{79.978} & \textbf{78.43} \\ \hline 100 [6] & 75.766 & 76.72 \\ \hline 500 [3] & 63.832 & 54.37 \\ \hline 2352 [2] & 51.766 & 37.121 \\ \hline \end{tabular} \caption{Classification Accuracy of SDAE with various architectures on SAT-4 and SAT-6} \label{table:SAE_accuracy_SAT_4_and_SAT_6} \end{table} \section{DeepSat - A Detailed \\ Architectural Overview} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{CLASSIFICATION_ARCHITECTURE} \caption{Schematic of the DeepSat classification framework} \label{Fig:1} \end{figure*} Figure \ref{Fig:1} schematically describes our proposed classification framework. Instead of the traditional DBN model described in Section \ref{sec:DBN}, which takes as input the multi-channel image pixels reshaped as a linear vector, our classification framework first extracts features from the image, which in turn are fed as input to the DBN after normalizing the feature vectors. \subsection{Feature Extraction} The feature extraction phase computes 150 features from the input imagery. The key features that we use for classification are mean, standard deviation, variance, 2nd moment, discrete cosine transforms, correlation, covariance, autocorrelation, energy, entropy, homogeneity, contrast, maximum probability and sum of variance of the hue, saturation, intensity, and NIR channels as well as those of the color co-occurrence matrices.
These features were shown to be useful descriptors for classification of satellite imagery in previous studies (\cite{haralick1973}, \cite{Soh99textureanalysis}, \cite{Clausi2002}). Since two of the classes in SAT-4 and SAT-6 are trees and grasslands, we incorporate features that are useful determinants for segregation of vegetated areas from non-vegetated ones. The red band already provides a useful feature for discrimination of vegetated and non-vegetated areas based on chlorophyll reflectance; however, we also use derived features (vegetation indices derived from spectral band combinations) that are more representative of vegetation greenness -- these include the Enhanced Vegetation Index (EVI \cite{huete2002}), Normalized Difference Vegetation Index (NDVI \cite{rouse1974}, \cite{Tucker1979127}) and Atmospherically Resistant Vegetation Index (ARVI \cite{kaufman1992}). These indices are expressed as follows: \begin{equation} EVI = G\times\frac{NIR-Red}{NIR+c_{red}\times{Red}-c_{blue}\times{Blue}+L} \end{equation} Here, the coefficients $G$, $c_{red}$, $c_{blue}$ and $L$ are chosen to be 2.5, 6, 7.5 and 1 following those adopted in the MODIS EVI algorithm \cite{modis}. \begin{equation} NDVI = \frac{NIR-Red}{NIR+Red} \end{equation} \begin{equation} ARVI = \frac{NIR-(2\times{Red}-Blue)}{NIR+(2\times{Red}-Blue)} \end{equation} The performance of our learner depends to a large extent on the selected features. Some features contribute more than others towards optimal classification. The 150 features extracted are narrowed down to 22 using a feature-ranking algorithm based on the Distribution Separability Criterion \cite{Boureau10atheoretical}. Details of the feature ranking method along with the ranking for all the 22 features used in our framework are listed in Section \ref{Section:feature_ranking}. \subsection{Data Normalization} The feature vectors extracted from the training and test datasets are separately normalized to lie in the range $[0,1]$.
This is done using the following equation: \begin{equation} F_{normalized} = \frac{F-F_{min}}{F_{max}-F_{min}} \end{equation} where $F_{min}$ and $F_{max}$ are computed for a particular feature type over all images in the dataset. \subsection{Classification} The set of normalized feature descriptors extracted from the input image is fed into the DBN, which is then trained using \emph{Contrastive divergence} in the same way as explained in Section \ref{sec:DBN}. Once trained, the DBN is used to initialize the weights of a feedforward backpropagation neural network. The neural network gives an estimate of the posterior probabilities of the class labels, given the input vectors, which are the feature vectors in our case. As illustrated in \cite{Bishop1995}, the outputs of a neural network obtained by minimizing a sum-of-squares error function approximate the averages of the class conditional distributions of the target variables \begin{equation} \label{eq:1} y_k(x) = \langle t_k|x \rangle = \int t_k p(t_k|x)dt_k \end{equation} Here, $t_k$ are the set of target values that represent the class membership of the input vector $x_k$. For a binary classification problem, in order to map the outputs of the neural network to the posterior probabilities of the labeling, we use a single output $y$ and a target coding that sets $t^n=1$ if $x^n$ is from class $C_1$ and $t^n=0$ if $x^n$ is from class $C_2$.
The target distribution would then be given as \begin{equation} \label{eq:2} p(t_k|x) = \delta{(t-1)}P(C_1|x) + \delta{(t)}P(C_2|x) \end{equation} Here, $\delta$ denotes the Dirac delta function, which has the properties $\delta(x)=0$ if $x \neq 0$ and \begin{equation} \int_{-\infty}^{\infty} \delta(x)\,\mathrm{d}x = 1 \end{equation} From Eqs.~\ref{eq:1} and \ref{eq:2}, we get \begin{equation} y(x)= P(C_1|x) \end{equation} So, the network output $y(x)$ represents the posterior probability of the input vector $x$ having the class membership $C_1$, and the probability of the class membership $C_2$ is given by $P(C_2|x) = 1-y(x)$. This argument can easily be extended to multiple class labels for a generalized multi-class classification problem. The feature extraction phase proves to be a useful dimensionality reduction technique that helps improve the discriminative power of the DBN based classifier significantly. \section{Results and Comparative Studies} The feature vectors extracted from the dataset are fed into DBNs with different configurations. Since the feature vectors create a low-dimensional representation of the data, DeepSat converges to high accuracy even with a much smaller network with fewer layers and very few neurons per layer. This speeds up network training by several orders of magnitude. Various network architectures along with the classification accuracy for DeepSat on the SAT-4 and SAT-6 datasets are listed in Table \ref{table:DeepSat_accuracy_SAT_4_and_6}. For regularization, we again use $L_2$ norm-regularization. From the table, it is evident that the best performing DeepSat network outperforms the best traditional Deep Learning approach (CNN) by ${\sim}$11\% on the SAT-4 dataset and by ${\sim}$15\% on the SAT-6 dataset. We also compare DeepSat with a Random Forest classifier to investigate the advantages gained by unsupervised pre-training in DBN as opposed to the traditional supervised learning in Random Forests.
On SAT-4, the Random Forest classifier produces an accuracy of 69\% while on SAT-6, it produces an accuracy of 54\%. The highest accuracy was obtained for a forest with 100 trees. Further increase in the number of trees did not yield any significant improvement in classifier accuracy. It can easily be seen that the various Deep architectures produce better classification accuracy than the Random Forest classifier, which relies solely on supervised learning. \begin{table}[h!] \centering \begin{tabular}{ | c | c | c | } \hline \textbf{Network Arch.} & \textbf{Classifier} & \textbf{Classifier}\\ \textbf{Neurons/layer} & \textbf{Accuracy} & \textbf{Accuracy} \\ \textbf{[Layers]} & \textbf{SAT-4 (\%)} & \textbf{SAT-6 (\%)} \\ \hline 10 [2] & 96.585 & 91.91\\ \hline 10 [3] & 96.8 & 87.716 \\ \hline 20 [2] & 97.115 & 86.21 \\ \hline 20 [3] & 95.473 & 93.42 \\ \hline \textbf{50 [2]} & \textbf{97.946} &\textbf{93.916} \\ \hline 50 [3] & 97.654 & 92.65 \\ \hline 100 [2] & 97.292 & 89.08 \\ \hline 100 [3] & 95.609 & 91.057 \\ \hline \end{tabular} \caption{Classification Accuracy of DeepSat with various network architectures on SAT-4 and SAT-6} \label{table:DeepSat_accuracy_SAT_4_and_6} \end{table} \begin{figure*} \centering \subfigure[Distribution of NIR on the SAT-4 classes]{\includegraphics[width=0.45\linewidth]{NIR_distributions}} \subfigure[Distribution of a sample DeepSat feature (Autocorrelation of Hue Color co-occurrence matrix) on the SAT-4 classes]{ \includegraphics[width=0.45\linewidth]{feature_distributions}} \caption{Distributions of the raw NIR values for traditional Deep Learning Algorithms and a sample DeepSat feature for various classes on SAT-4 (\emph{Best viewed in color})} \label{fig:distributions} \end{figure*} \section{Why Traditional Deep Architectures are not enough for SAT-4 \& SAT-6?} While traditional Deep Learning approaches have produced state-of-the-art results for various pattern recognition problems like handwritten digit recognition \cite{mnist},
object recognition \cite{Krizhevsky09learningmultiple}, face recognition \cite{DeepFace2013}, etc., satellite datasets have high intra- and inter-class variability and the amount of labeled data is much smaller as compared to the total size of the dataset. Also, higher-order texture features are a very important discriminative parameter for various landcover classes. On the contrary, shape/edge based features, which are predominantly learned by various Deep architectures, are not very useful in learning data representations for satellite imagery. This explains why traditional Deep architectures are not able to converge to the global optimum even for reasonably large and deep networks. Also, spatially contextual information is another important parameter for modeling satellite imagery. In traditional Deep Learning approaches like DBN and SDAE, the relative spatial information of the pixels is lost. As a result, the orderless pool of pixel values that acts as input to the Deep Networks lacks sufficient discriminative power to be well-represented even by very big and/or deep networks. CNN, however, involves feature-pooling from a local spatial neighborhood, which justifies its improved performance over the other two algorithms on both SAT-4 and SAT-6. Even though our approach extracts an orderless pool of feature vectors, the spatial context is already well-represented in the individual feature values themselves. We substantiate our arguments about the effectiveness of our feature extraction approach from a statistical point of view as detailed in the analysis below. \begin{table}[h] \centering \begin{tabular}{ | c | c | c | c | } \hline & & \textbf{Dist.
b/w} & \textbf{Standard} \\ & & \textbf{Means} & \textbf{Deviations} \\ \hline \multirow{2}{*}{\raisebox{-.15in}{\rotatebox{90}{\tiny{SAT-4}}}} & Raw Images & 0.1994 & 0.1166 \\ & DeepSat Features & \textbf{0.8454} & \textbf{0.0435} \\ \hline \multirow{2}{*}{\raisebox{-.15in}{\rotatebox{90}{\tiny{SAT-6}}}} & Raw Images & 0.3247 & 0.1273 \\ & DeepSat Features & \textbf{0.9726} & \textbf{0.0491} \\ \hline \end{tabular} \caption{Distance between Means and Standard Deviations for raw image values and DeepSat feature vectors for SAT-4 and SAT-6} \label{table:Distribution_mean_and_sd} \end{table} \subsection{A Statistical Perspective based on Distribution Separability Criterion}\label{statistical_perspective} Improving classification accuracy can be viewed as maximizing the separability between the class-conditional distributions. Following the analysis presented in \cite{Boureau10atheoretical}, we can view the problem of maximizing distribution separability as maximizing the distance between distribution means and minimizing their standard deviations. Figure \ref{fig:distributions} shows the histograms that represent the class-conditional distributions of the NIR channel and a sample feature extracted in the DeepSat framework. As illustrated in Table \ref{table:Distribution_mean_and_sd}, the features extracted in DeepSat have a higher distance between means and a lower standard deviation as compared to the original image distributions, thereby ensuring better class separability. 
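The two quantities reported in Table \ref{table:Distribution_mean_and_sd} -- the distance between class means and the (average) within-class standard deviation -- can be computed per feature as in the following sketch. The class-conditional samples here are synthetic placeholders, not the actual SAT-4/SAT-6 distributions.

```python
import numpy as np

def separability_stats(values_by_class):
    """Mean pairwise distance between class means, and mean within-class std."""
    means = [np.mean(v) for v in values_by_class]
    stds = [np.std(v) for v in values_by_class]
    dists = [abs(means[i] - means[j])
             for i in range(len(means)) for j in range(i + 1, len(means))]
    return np.mean(dists), np.mean(stds)

# Two toy class-conditional distributions of a single (scalar) feature:
rng = np.random.default_rng(2)
well_separated = [rng.normal(0.2, 0.04, 500), rng.normal(0.8, 0.04, 500)]
overlapping = [rng.normal(0.45, 0.15, 500), rng.normal(0.55, 0.15, 500)]

d_good, s_good = separability_stats(well_separated)
d_bad, s_bad = separability_stats(overlapping)
# A discriminative feature has a large distance between class means
# and small within-class standard deviations.
```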
\subsubsection{Feature Ranking}\label{Section:feature_ranking} Following the analysis proposed in Section \ref{statistical_perspective} above, we can derive a metric for the Distribution Separability Criterion as follows: \begin{equation} D_s = \frac{\overbar{\lVert \delta_{mean} \rVert }}{\overbar{\delta_{\sigma}}} \end{equation} where $\overbar{\lVert \delta_{mean} \rVert }$ indicates the mean of distance between means and $\overbar{\delta_{\sigma}}$ indicates the mean of standard deviations of the class conditional distributions. Maximizing $D_s$ over the feature space, a feature ranking can be obtained. Table \ref{table:Feature_ranking} shows the ranking of the various features used in our framework along with the values of the corresponding distance between means $\overbar{\lVert \delta_{mean} \rVert }$, standard deviation $\overbar{\delta_{\sigma}}$ and Distribution Separability Criterion $D_s$. \begin{table}[h!] \centering \begin{tabular}{ |c|c|c|c|c|} \hline \textbf{Rank} & \textbf{Feature} & $\boldsymbol{\overbar{\lVert \delta_{mean} \rVert }}$ & $\boldsymbol{\overbar{\delta_{\sigma}}}$ & $\boldsymbol{D_s}$ \\[1.5ex] \hline 1&I CCM mean & 0.4031 & 0.1371 & 2.9403 \\ \hline 2 & H CCM sosvh & 0.2359 & 0.0928 & 2.5413 \\ \hline 3 & H CCM autoc & 0.2334 & 0.1090 & 2.1417 \\ \hline 4 & S CCM mean & 0.0952 & 0.0675 & 1.4099 \\ \hline 5 & H CCM mean & 0.0629 & 0.0560 & 1.1237 \\ \hline 6 & SR & 0.0403 & 0.0428 & 0.9424 \\ \hline 7 & S CCM & 0.0260 & 0.0312 & 0.8354 \\ & 2nd moment & & & \\ \hline 8 & I CCM & 0.0260 & 0.0312 & 0.8354 \\ & 2nd moment & & & \\ \hline 9 & I 2nd moment & 0.0260 & 0.0312 & 0.8345 \\ \hline 10 & I variance & 0.0260 & 0.0312 & 0.8345 \\ \hline 11 & NIR std & 0.0251 & 0.0315 & 0.7980 \\ \hline 12 & I std & 0.0251 & 0.0314 & 0.7968 \\ \hline 13 & H std & 0.0252 & 0.0317 & 0.7956 \\ \hline 14 & H mean & 0.0240 & 0.0314 & 0.7632 \\ \hline 15 & I mean & 0.0254 & 0.0336 & 0.7541 \\ \hline 16 & S mean & 0.0232 & 0.0319 & 0.7268 \\ \hline 17 & I 
CCM & 0.0378 & 0.0522 & 0.7228 \\ & covariance & & & \\ \hline 18 & NIR mean & 0.0246 & 0.0351 & 0.6997 \\ \hline 19 & ARVI & 0.0229 & 0.0345 & 0.6622 \\ \hline 20 & NDVI & 0.0215 & 0.0326 & 0.6594 \\ \hline 21 & DCT & 0.0344 & 0.0594 & 0.5792 \\ \hline 22 & EVI & 0.0144 & 0.0450 & 0.3207 \\ \hline \end{tabular} \caption{Ranking of features based on Distribution Separability Criterion for SAT-6} \label{table:Feature_ranking} \end{table} \subsubsection{Distribution Separability and Classifier Accuracy} In order to analyze the improvements achieved in the learning framework due to the feature extraction step, we measured the Distribution Separability of the mean activation of the neurons in each layer of the DBN and that of DeepSat. The results are shown in Figure \ref{dist_sep_neurons}. It can be seen that the mean activation learned by each layer of DeepSat exhibits a significantly higher distribution separability (by several orders of magnitude) than that of the neurons of a DBN. This justifies the significant improvement in performance of DeepSat (using the features) as compared to the DBN based framework (using the raw pixel values as input). Also, a comparison of Figure \ref{dist_sep_neurons} with Table \ref{table:DBN_accuracy_SAT_4_and_SAT_6} and Table \ref{table:DeepSat_accuracy_SAT_4_and_6} shows that the distribution separabilities using the various architectures of the DBN and DeepSat are positively correlated with the final classifier accuracy. This justifies the effectiveness of our distribution separability metric $D_s$ as a measure of the final classifier accuracy.
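A minimal sketch of the feature ranking itself, assuming each feature is scored by $D_s$ (mean pairwise distance between class means divided by the mean of class standard deviations) and features are sorted by decreasing score. The two synthetic features below stand in for the 150 candidate features; the discriminative one should rank first.

```python
import numpy as np

def distribution_separability(values_by_class):
    """D_s = (mean distance between class means) / (mean of class stds)."""
    means = [np.mean(v) for v in values_by_class]
    stds = [np.std(v) for v in values_by_class]
    n = len(means)
    d = np.mean([abs(means[i] - means[j])
                 for i in range(n) for j in range(i + 1, n)])
    return d / np.mean(stds)

def rank_features(feature_table, labels):
    """feature_table: (n_samples, n_features). Returns feature indices, best first."""
    classes = np.unique(labels)
    scores = []
    for f in range(feature_table.shape[1]):
        per_class = [feature_table[labels == c, f] for c in classes]
        scores.append(distribution_separability(per_class))
    return np.argsort(scores)[::-1]

rng = np.random.default_rng(3)
labels = np.repeat([0, 1], 200)
# feature 0: discriminative (well-separated class means); feature 1: pure noise
f0 = np.where(labels == 0,
              rng.normal(0.2, 0.05, 400), rng.normal(0.8, 0.05, 400))
f1 = rng.random(400)
ranking = rank_features(np.column_stack([f0, f1]), labels)
```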
\begin{figure*} \centering \subfigure[Distribution Separability Criterion of DBN]{\includegraphics[width=0.45\linewidth]{DBN_distSep}} \subfigure[Distribution Separability Criterion of DeepSat]{ \includegraphics[width=0.45\linewidth]{Feature_distSep}} \caption{Distribution Separability Criterion of the neurons in the layers of a DBN and DeepSat with various architectures on SAT-6} \label{dist_sep_neurons} \end{figure*} \section{What is the difference between MNIST, CIFAR-10 and SAT-6 in terms of dimensionality?} We argue that handwritten digit datasets like MNIST and object recognition datasets like CIFAR-10 lie on a much lower dimensional manifold than the airborne SAT-6 dataset. Hence, even though Deep Neural Networks can effectively classify the raw feature space of object recognition datasets, the dimensionality of the airborne image datasets is such that Deep Neural Networks cannot classify them. In order to estimate the dimensionality of the datasets, we use the concept of \emph{intrinsic dimension}~\cite{Ceruti20142569}. \subsection{Intrinsic Dimension Estimation using the DANCo algorithm} To estimate the intrinsic dimension of a dataset, we use the DANCo algorithm \cite{Ceruti20142569}. It uses the complementary information provided by the normalized nearest neighbor distances and angles calculated on pairs of neighboring points. Taking 10 rounds of 1000 random samples and averaging, we obtain the intrinsic dimension for the MNIST, CIFAR-10 and SAT-6 datasets and the Haralick features extracted from the SAT-6 dataset. The results are listed in Table \ref{table:Intrinsic_Dimension}.
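DANCo itself combines nearest-neighbor distances and angles and is beyond a short sketch. As a rough, hypothetical stand-in, the following PCA-based proxy counts the principal components needed to retain 95\% of the variance; this is \emph{not} the DANCo estimator and will generally give different numbers, but it illustrates the idea of data concentrating on a lower-dimensional manifold.

```python
import numpy as np

def pca_intrinsic_dim(X, var_threshold=0.95):
    """Crude proxy (NOT DANCo): number of principal components needed
    to retain var_threshold of the total variance."""
    Xc = X - X.mean(axis=0)
    # singular values of the centered data give the component variances
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    ratio = np.cumsum(var) / var.sum()
    return int(np.searchsorted(ratio, var_threshold) + 1)

rng = np.random.default_rng(4)
# 500 points on a 5-dimensional linear subspace embedded in 50 dimensions,
# spanned by an orthonormal basis so each latent direction carries equal variance
basis = np.linalg.qr(rng.standard_normal((50, 5)))[0].T  # 5 orthonormal rows
X = rng.standard_normal((500, 5)) @ basis
dim = pca_intrinsic_dim(X)
```

For data lying exactly on a 5-dimensional subspace, the proxy recovers 5 despite the 50-dimensional embedding.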
\begin{table}[h] \centering \begin{tabular}{ | c | c | } \hline \textbf{Dataset} & \textbf{Intrinsic Dimension} \\ \hline MNIST & 16 \\ \hline CIFAR-10 & 17 \\ \hline SAT-6 & 115 \\ \hline Haralick Features extracted from SAT-6 & \textbf{4.2} \\ \hline \end{tabular} \caption{Intrinsic Dimension estimation using DANCo on the MNIST, CIFAR-10, and SAT-6 datasets and the Haralick features extracted from the SAT-6 dataset.} \label{table:Intrinsic_Dimension} \end{table} It can be seen that the intrinsic dimensionality of the SAT-6 dataset is nearly an order of magnitude higher than that of MNIST, so a deep neural network finds it difficult to classify the SAT-6 dataset because of its intrinsically high dimensionality. However, as seen in Table \ref{table:Intrinsic_Dimension}, the features extracted from SAT-6 have a much lower intrinsic dimensionality and lie on a much lower dimensional manifold than the raw vectors and hence can be classified even by networks with relatively smaller architectures. \subsection{Visualizing Data in an n-dimensional space} We can visualize the data as distributed in an $n$-dimensional unit hypersphere. The volume of this sphere is \begin{equation} {V}_{sphere} = \frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2}+1)}R^{n} = \frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2}+1)} \end{equation} where $R=1$ for the unit sphere in $n$-dimensional Euclidean space and $\Gamma$ is Euler's gamma function.
Now, the total volume of the $n$-dimensional space can be accounted for by the volume of an $n$-dimensional hypercube of side length 2 circumscribing the hypersphere, i.e., the volume of the $n$-cube is \begin{equation} {V}_{cube} = (2R)^{n} = 2^n \end{equation} So, the relative fraction of the volume occupied by the hypersphere as compared to the $n$-dimensional embedding space is given as \begin{equation} {V}_{relative} = \frac{{V}_{sphere}}{{V}_{cube}} = \frac{\pi^{\frac{n}{2}}}{{2}^{n}\Gamma(\frac{n}{2}+1)} \end{equation} \begin{equation} {V}_{relative} \to 0 ~\text{as}~ n \to \infty \end{equation} This means that as the dimensionality of the sample data approaches $\infty$, the spread or scatter of the data points approaches 0 with respect to the total search space. As a result, various classification and clustering algorithms lose their discriminative power in higher dimensional feature spaces. \section{Related Work} Present classification algorithms used for Moderate-resolution Imaging Spectroradiometer (MODIS) (500-m) \cite{Friedl_MCD_Coll5Validation2009} or Landsat (30-m) based land cover maps like NLCD \cite{wickham2013} produce accuracies of 75\% and 78\% respectively. The relatively lower resolution of the datasets makes it difficult to analyze the performance of these algorithms for 1-m imagery. A method based on object detection using a Bayes framework and subsequent clustering of the objects using Latent Dirichlet Allocation was proposed in \cite{Vaduva2012}. However, their approach detects object groups at a higher level of abstraction, like parking lots. Detecting objects like cars or trees is itself not addressed in their work. A deep convolutional hierarchical framework was proposed recently by \cite{romerounsupervised}. However, they report results on the AVIRIS Indiana's Indian Pines test site.
The spatial resolution of the dataset is limited to 20~m and it is difficult to evaluate the performance of their algorithm for object recognition tasks at a higher resolution. An evaluation of various feature learning strategies was done in \cite{Tokarczyk2012}. They evaluated both feature extraction techniques as well as classifiers like DBN and Random Forest for various aerial datasets. However, since the training data was significantly limited, the DBN was not able to produce any improvements over Random Forest even when raw pixel values were fed into the classifier. In contrast, our study shows that DBNs can be better classifiers when there is a significant amount of training data to initialize the neural network at a global error basin. \section{Conclusions and Future Directions} Our semi-supervised learning framework produces accuracies of 97.95\% and 93.9\% on the SAT-4 and SAT-6 datasets and significantly outperforms the state-of-the-art by ${\sim}$11\% and ${\sim}$15\% respectively. The feature extraction phase is inspired by the remote sensing literature and significantly improves the discriminative power of the framework. For satellite datasets, with inherently high variability, traditional deep learning approaches are unable to converge to a global optimum even with significantly large and deep architectures. A statistical analysis based on the Distribution Separability Criterion justifies the effectiveness of our feature extraction approach. We plan to investigate the use of various pooling techniques like SPM \cite{Lazebnik:2006} as well as certain sparse representations like sparse coding \cite{Lee07efficientsparse} and hierarchical representations like the Convolutional DBN \cite{Lee:2009:CDBN} to handle satellite datasets. We believe that SAT-4 and SAT-6 will enable researchers to learn better representations for satellite datasets and create benchmarks for the classification of satellite imagery.
\section{Acknowledgments} The project is supported by NASA Carbon Monitoring System through Grant \#NNH14ZDA001-N-CMS and Army Research Office (ARO) under Grant \#W911NF1010495. We are grateful to the United States Department of Agriculture for providing us the National Agriculture Imagery Program (NAIP) airborne imagery dataset for the Continental United States. This research was partially supported by the Cooperative Agreement Number NASA-NNX12AD05A, CFDA Number 43.001, for the project identified as ``Ames Research Center Cooperative for Research in Earth Science and Technology (ARC-CREST)''. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of NASA, ARO or the United States Government. \bibliographystyle{abbrv}
\section{Introduction} \label{sect1} One of the ongoing debates around the foundations of quantum mechanics is the question to what extent concepts and properties of classical physics continue to apply to quantum physics. The results of measurement processes in classical physics simply reflect properties of one or more elements of reality that exist independently of observers or measurements. These elements of reality, particles or electromagnetic fields for example, have their time dependent values regardless of this measurement or observation ``context'', and their evolution is only influenced by their direct surroundings. This non-contextual nature of reality has never been at issue within classical physics, unlike local causality, which only became firmly established with the development of Maxwell's theory of electromagnetism and Einstein's relativity theories. It replaced the unpalatable non-local action at a distance present in Newton's and Coulomb's theories, where a change in location of a massive or charged body had an instantaneous impact on bodies that were arbitrarily far away. Quantum mechanics in the Copenhagen interpretation denies the existence of a microscopic reality in which particles and fields have properties that are independent of observations or measurements. This notion of reality without (counterfactual) definiteness was challenged in the famous paper by Einstein, Podolsky and Rosen \cite{EPR35}, who pitted locality of the quantum theory against its completeness. It would require non-local action at a distance, they argued, to avoid the conclusion that measurement results exist independently of the act of measuring, and hence a quantum reality with only an evolving or collapsing wave function cannot be complete.
Somewhat ironically, in a reversal of the EPR argument, Bell showed \cite{Bell64} that these more complete, i.e., hidden variable versions of quantum mechanics are necessarily in conflict with local causality if they are to reproduce the results computed with the well-established rules of quantum mechanics. Therefore it follows \cite{Norsen05,BellCh10} that the combined EPR and Bell arguments imply that quantum mechanics, with or without hidden variables, does not support the classical notion of locality. Furthermore, the counterfactual nature of reality was challenged by the almost equally famous theorem by Kochen and Specker \cite{KochenSpecker}, which forbids the existence of a non-contextual value map. That is, in a quantum system, it is not possible to simultaneously assign definite values to all Hermitian operators in the system's Hilbert space (while maintaining the usual functional relationships between these values). Since measurements single out a specific set of (mutually commuting) Hermitian operators, it appears that values of microscopic elements of reality, if these exist, must somehow be dependent on a macroscopic measurement context. This picture is different in Bohm's interpretation of quantum mechanics \cite{deBroglie56,Bohm52}, where he showed that the location of particles can in fact exist as (counterfactually real and non-contextual) elements of reality. Restoring such an observer-independent ontology, which lends itself to numerical simulations of a microscopic quantum world, has subsequently provided valuable insights into the still distinctly non-classical features of quantum physics -- for example in the field of quantum chemistry \cite{QuantChem} and quantum cosmology \cite{PintoNeto13}. However, one may wonder why, in Bohm's interpretation, only particle positions (in non-relativistic quantum mechanics) have this factual existence.
Additional, intrinsic degrees of freedom that are also represented in the system's Hilbert space, such as particle spin, are absent in this quantum world and must remain properties that only manifest themselves indirectly through their impact on particle trajectories \cite{BohmHileyCh10,Norsen14,BricmontGoldsteinHemmick19a}. The consensus, at least among adherents of Bohm-style interpretations, is that it is both sufficient and natural for ``position'' (in contrast to, for example, ``momentum'') to be the only observable with a factual existence. This is typically substantiated by stating that all observations and experimental results are ultimately realized in terms of locations of pointers or dots on a screen or paper. But one could challenge this assertion: if, on a dark night, I see a very faint star, because just a few photons hit the retina in my eye, do I then observe the location of these photons or is it their momentum that triggers a signal that seeds my awareness of the star's faint light, from which I infer its location on the firmament? Or, more importantly, I would argue that my awareness of a measurement result does not only involve a pointer in a specific location, but ultimately consists of a specific, realized state of the visual cortex in my brain. It is not clear that the realized values of {\it only} the positions of the particles in my brain are sufficient to express the mental state of seeing pointer positions or a faint star. Possibly, also particle spin or other internal particle degrees of freedom must assume realized values in order to represent the richness of our conscious minds -- i.e., perhaps also these quantum features must be present as elements of reality.
An additional, possibly more compelling reason to extend the minimal Bohm interpretation presents itself when considering situations where the omission of internal degrees of freedom in the set of elements of reality leads to particle trajectories with remarkable features -- so much so that they have been characterized as ``surreal'' \cite{Englert93,Scully98}. Explaining and justifying these unexpected features of Bohmian trajectories requires careful analysis and invocation of the non-local contextuality of particle properties other than location \cite{Durr93,Dewdney93,Hiley00,Barrett00}. Including internal particle degrees of freedom into the set of elements of reality, on the other hand, results in particle trajectories that naturally conform with one's intuitive expectations, as is further elaborated on in Appendix \ref{Ap_A}. In this paper I will, therefore, explore an extension of the de Broglie-Bohm-Bell approach (dubbed eBBB) which includes particle spin. Because position is indeed such a natural and obvious candidate to be part of reality and since there are additional supporting arguments \cite{Wiseman07} to single out location as a preferred observable, I will not further challenge this assumption. As was shown in ref.~\cite{Vink93}, it is straightforward to apply Bell's (re)formulation of Bohmian dynamics \cite{BellCh19} to generate value trajectories for spin, provided one chooses a specific spin representation (i.e., a preferred basis). However, there is no natural, fixed choice for such a preferred spin representation, since it depends on the specific (experimental) context, i.e., on the orientation in which the spin value is measured. This problem can be overcome through an additional rule for the stochastic dynamics that automatically selects the most natural representation.
In the present paper this is achieved by dynamically adjusting the spin representation through guidance by the wave function and conditionally on the realized values of the system's location variables.\footnote{This approach combines elements from the work by Dewdney et al.~\cite{Dewdney87}, who constructed guidance equations for the Euler angles of a system with two spinning particles, and the Bell-type approach from ref.~\cite{Vink93}. In this early work, the approach to represent quantum spin as a directional (angular momentum-like) vector for each particle was abandoned, because such a collection of 3D spin vectors could not accommodate the exponential growth of the dimension of the spin state space when the number of particles increases (cf.~the discussion in sections 10.2 and 10.3 of ref.~\cite{BohmHileyCh10}).} This shows explicitly that the required context dependence need not be provided by an (external) experiment, but can be supplied by the dynamically evolving system itself. It is a key benefit of the eBBB approach that it allows for computer simulations of this enriched microscopic quantum world. The ``simulatability'' of this formulation is further strengthened in this paper by a rigorous extension of the stochastic dynamics to discrete time. This not only allows performing numerical simulations of quantum systems without the need for further approximations to solve the stochastic trajectory evolution along with the Schr\"odinger equation, it also simplifies the notion of time and suggests a natural definition for the discrete time steps.
It would be interesting to see if similarly detailed and credible simulations of quantum systems as can be performed with the eBBB formulation, for example of Bohm's version of the EPR experiment, can be achieved using other formulations of quantum mechanics that contend not to rely on external observers, e.g., Everett's many-worlds interpretation \cite{Everett57}, the different versions of consistent histories formulations \cite{Griffiths84,Omnes88,Hartle90} and the spontaneous collapse interpretation \cite{GWR86} (see e.g.~refs.~\cite{Goldstein98,Barrett19} for an overview of these approaches). The remainder of the paper is organized as follows. The next section describes the extensions of the eBBB formulation, after which section \ref{sect3} focuses on simulations of two types of quantum spin systems. First, a simulation of Larmor precession of an entangled pair of spin-two particles is described, followed by a description of the set-up and results of a simulation of a simplified version of the EPRB experiment (Bohm's modified version of the EPR experiment). The closing section contains a brief summary and discussion. An elaboration of the different behavior of trajectories in the causal Bohm and eBBB formulations, and technical details of the numerical EPRB experiment are relegated to two appendices. \section{Extended de Broglie-Bohm-Bell Formulation} \label{sect2} The extended de Broglie-Bohm-Bell (eBBB) formulation described below introduces three improvements over the approach of refs.~\cite{BellCh19} and \cite{Vink93}: First, the continuous time formulation of the stochastic dynamics is replaced by a discrete time formulation; second, it is shown that the beable dynamics can be expressed using a basis of the Hilbert space that changes in time; third, a special dynamics is proposed for the basis (or representation) of the internal degrees of freedom of the particles (such as spin).
With this dynamics, the representation for particle spin will, conditionally on the attained location realization, adjust to a representation that is most appropriate to express the physical, for example measured, state of the system. \subsection{The eBBB Formulation in Discrete Time} \label{sect2sub1} The formulation of discrete time eBBB dynamics starts with the fundamental evolution equation for a quantum state, \begin{equation} \ket{\psi}^{t+1} = \hat{U}\ket{\psi}^t. \label{Eq_1} \ee Here, $\ket{\psi}^t$ represents the quantum state at time step $t$, which is propagated to the state at the next time step $t+1$ through the action of the unitary evolution operator $\hat{U}$. This operator follows from the Hamiltonian operator $\hat{H}$, as \begin{equation} \hat{U} = e^{-i \hat{H}\epsilon }. \label{Eq_3} \ee The Hamiltonian may, and the time step size $\epsilon$ will, depend on the time index $t$; these labels are suppressed to avoid cluttering the notation too much. The relevance of a variable time step size will become clear shortly. As in ref.~\cite{Vink93}, the system's Hilbert space is assumed to be finite with dimension $N$, such that operators can be represented as Hermitian matrices and wave functions as vectors.\footnote{This assumes that space is finite and discrete, consisting of a finite number of points; even though quantum states are technically represented as vectors, I will often refer to them as wave functions.} The Schr\"odinger equation (\ref{Eq_1}) and an arbitrary (Hermitian or unitary) operator $\hat{O}$ in the $n$-representation can be written as \begin{equation} \psi_n^{t+1}= \sum_{m=1}^N U_{n,m} \psi_m^t , \;\;\;\; \psi_n^t=\bra{n}\psi\rangle^t, \label{Eq_2b} \ee and \begin{equation} O_{n,m}=\bra{n}\hat{O}\ket{m}, \label{Eq_2} \ee respectively.
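As an illustration, the discrete time step of Eqs.~(\ref{Eq_1})-(\ref{Eq_3}) can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the Hamiltonian is a randomly generated Hermitian matrix standing in for $\hat{H}$, not one of the Hamiltonians used later in the paper.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Hypothetical small system: a random Hermitian matrix stands in for the
# Hamiltonian H on an N-dimensional Hilbert space.
N = 6
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2              # Hermitian by construction
eps = 0.05                            # time step size

U = expm(-1j * eps * H)               # evolution operator U = exp(-i H eps)

psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)            # normalized initial wave function

psi_next = U @ psi                    # one discrete time step

# U is unitary, so the total probability is conserved at every step.
assert np.allclose(U.conj().T @ U, np.eye(N))
assert np.isclose(np.linalg.norm(psi_next), 1.0)
```

The same pattern, with $\hat{U}$ replaced by the stage-specific evolution matrices, underlies the simulations described in section \ref{sect3}.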
In order to achieve the same result as Bell obtained using the differential form of the state evolution \cite{BellCh19}, the discrete time evolution of the probabilities $P_n^t=\psi_n^{*t}\psi_n^t$ should be linked to an anti-symmetric probability current and then expressed in terms of transition probabilities between different values of the eigenvector index $n$. This can be achieved as follows. First, the probabilities for the next time step are written as \begin{equation} P_n^{t+1} = \psi_n^{*t+1}\psi_n^{t+1} = \sum_{m}\psi_n^{*t+1}U_{n,m}\psi_{m}^t. \label{Eq_4} \ee Next, a real-valued matrix $K$ is defined as \begin{equation} K^t_{n,m} = \Re( \psi_n^{*t+1}U_{n,m}\psi_{m}^t ), \label{Eq_5} \ee which can be used to write \be \begin{array}{rcl} P_n^{t+1} & = & \sum_m K^t_{n,m} = {\frac{1}{2}}\sum_m (K^t_{n,m} + K^t_{m,n}) + {\frac{1}{2}} \sum_m ( K^t_{n,m} - K^t_{m,n} ) \\[2mm] & = & {\frac{1}{2}}\sum_{km}( \psi_k^{*t}U_{kn}^{\dagger}U_{n,m}\psi_{m}^t + \psi_k^{*t}U_{km}^{\dagger}U_{m,n}\psi_{n}^t) +{\frac{1}{2}}\sum_m(K^t_{n,m} - K^t_{m,n}) \\[2mm] & = & {\frac{1}{2}} \psi_n^{*t+1}\psi_n^{t+1} + {\frac{1}{2}} \psi_n^{*t}\psi_n^{t} +{\frac{1}{2}}\sum_m (K^t_{n,m} - K^t_{m,n}). \label{Eq_7} \end{array}\ee After defining the anti-symmetric probability current matrix $J^t$ as \begin{equation} J^t_{n,m} = K^t_{n,m} - K^t_{m,n} = \Re (\psi_n^{*t+1}U_{n,m}\psi_{m}^t - \psi_m^{*t+1}U_{m,n}\psi_{n}^t) , \label{Eq_8} \ee equation (\ref{Eq_7}) turns into a discrete-time continuity equation \begin{equation} P_n^{t+1} = P_n^t + \sum_m J^t_{n,m}. 
\label{Eq_9} \ee Following similar steps as in ref.~\cite{BellCh19}, this continuity equation can be worked into the desired master equation, \be \begin{array}{rcl} P_n^{t+1} & = & P_n^t + \sum_m \left( \theta(J^t_{n,m}) J^t_{n,m} - \theta(J^t_{m,n}) J^t_{m,n} \right), \\[2mm] & = & P_n^t + \sum_m( T^t_{n,m} P_m^t - T^t_{m,n} P_n^t), \label{Eq_9b} \end{array}\ee where $\theta(x)$ is the Heaviside step function and transition probabilities are defined as \begin{equation} T^t_{n,m} = \theta(J^t_{n,m}) J^t_{n,m} / P^t_m = {\rm max}\left(0, \Re(\psi^{*t+1}_n U_{n,m} \psi^t_m )- (n \leftrightarrow m) \right)/ P^t_m. \label{Eq_10} \ee It should be noted that this equation for the time evolution of $P_n$ is an exact result, but it only makes sense as a master equation if the transition probabilities satisfy the consistency condition \begin{equation} T^t_{m,m} \equiv 1 - \sum_{n \ne m}T^t_{n,m} \ge 0 \;\;\;\; \forall m, \label{Eq_11} \ee which automatically implies that \begin{equation} T^t_{n,m} \le 1 \;\;\;\; \forall n,m. \label{Eq_12} \ee Eq.~(\ref{Eq_10}) suggests that condition (\ref{Eq_11}) can be violated when the probability $P^t_m\rightarrow 0$ for one or more $m$. A similar consistency condition on transition probabilities in Bell's result is formally satisfied owing to the infinitesimal value of the time step size in his continuous time formulation. This suggests exploring whether the consistency condition for the discrete time formulation will be met for sufficiently small time steps. For small time step size $\epsilon$ the evolution matrix can be approximated as \begin{equation} U_{n,m} = \delta_{n,m} - iH_{n,m} \epsilon + O(\epsilon^2).
\label{Eq_13} \ee Substituting this in Eq.~(\ref{Eq_10}) and collecting terms up to order $\epsilon$, one can verify that the result obtained by Bell \cite{BellCh19} is recovered: \begin{equation} T^t_{n,m} = \tilde{T}^t_{n,m} \epsilon + O(\epsilon^2), \ee where the transition probability {\it rate} matrix $\tilde{T}^t$ is defined as \begin{equation} \tilde{T}^t_{n,m} = {\rm max}\left(0,-2\Im(\psi^{*t}_n H_{n,m}\psi^t_m) \right) / P^t_m . \ee Writing $\psi^t_m=R^t_m e^{iS^t_m}$, it follows that \begin{equation} T^t_{n,m} = {\rm max}\left( 0,-2\Im(H_{n,m}e^{i(S^t_m-S^t_n)} )\right)(\epsilon R^t_n/R^t_m) + O(\epsilon^2). \label{Eq_Teps} \ee This shows that $1-T^t_{m,m}\sim \sum_{n\ne m} c^t_{n,m}\epsilon/R^t_m$ for $\epsilon\rightarrow 0$, where the $c^t_{n,m}$ are positive numbers. Since the quantum system is finite and the state with index $m$ has been realized, the value of $P_m$ is nonzero (i.e.~$R^t_m > \delta^t$ with $\delta^t > 0$) and the value of $ \sum_n c^t_{n,m}\equiv M^t$ is finite. This in turn implies that the constraint (\ref{Eq_11}) can be satisfied with a finite time step size that is chosen sufficiently small, $\epsilon < \delta^t/M^t$. In fact, when this eBBB approach is applied to an isolated quantum system, such as the Universe \cite{Vink92}, the value of $\epsilon$ is unobservable since there is no external clock. Therefore, a natural choice for the discrete, time index dependent time step is to always take it maximally large, such that Eq.~(\ref{Eq_11}) just holds. For a large system, the discrete time steps $\epsilon^t$ will then naturally be very small, since the bounding values $\delta^t$ and $1/M^t$ will be very small.
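The construction of Eqs.~(\ref{Eq_5})-(\ref{Eq_10}) can be made concrete in a short numerical sketch. The Hamiltonian and state below are randomly generated stand-ins, and the step size is tied to the smallest occupied probability in the spirit of the bound $\epsilon < \delta^t/M^t$; this is an illustration, not the paper's production code.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Hypothetical small system: random Hermitian H and normalized state psi.
N = 5
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)

def transition_matrix(psi, H, eps):
    """Transition probabilities T[n, m] for one discrete time step."""
    U = expm(-1j * eps * H)
    psi1 = U @ psi
    # K[n, m] = Re(psi1*_n U_{n,m} psi_m); J is its antisymmetric part.
    K = np.real(psi1.conj()[:, None] * U * psi[None, :])
    J = K - K.T
    P = np.abs(psi) ** 2
    T = np.maximum(0.0, J) / P[None, :]
    # Diagonal fixed by the consistency condition (column sums equal one).
    np.fill_diagonal(T, 1.0 - T.sum(axis=0))
    return T, psi1

P = np.abs(psi) ** 2
eps = 1e-3 * P.min()          # small step, tied to the least occupied state
T, psi1 = transition_matrix(psi, H, eps)

assert (T >= 0).all() and (T <= 1).all()   # valid transition probabilities
# The master equation reproduces the quantum probabilities exactly:
assert np.allclose(T @ P, np.abs(psi1) ** 2)
```

The identity $T P^t = P^{t+1}$ holds algebraically for any $\epsilon$; only the positivity of the diagonal requires the step size to be small, which is where the adaptive choice of $\epsilon$ enters.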
In practical simulations to compute an ensemble of trajectories \cite{Vink93,Vink18}, it has been sufficient to choose a reasonably small default value for $\epsilon$ and (temporarily) reduce this value when a state $m$ with a very small $P_m$ would occur; such a state will then with high probability jump to one that has a much larger probability, which allows restoring the original time step size. \subsection{Trajectories with a Dynamically Changing Beable Representation} \label{sect2sub2} In the discussion above, a fixed $n$-representation was used to obtain a prescription for generating trajectories for the eigenvalue index values in this representation. However, it is possible to generalize this approach to allow using a representation that dynamically changes during the evolution of the trajectories. This is an interesting feature because it can be used to address the Preferred Basis problem \cite{Barrett05,deRonde16}. In particular, the present paper will show how the trajectory dynamics can be extended to generate an optimal spin basis along with spin (and location) values for the particles. Starting from the Schr\"odinger equation in the $n$-representation, Eq.~(\ref{Eq_2b}), a change to a different $\alpha$-representation at time $t$ involves transforming $\psi$ and $U$ according to \begin{equation} \psi^{V^t}_{\alpha} = \sum_n V^t_{\alpha,n}\psi_n \label{VonPsi} \ee and \begin{equation} U^{V^{t+1}V^t}_{\alpha,\beta} = \sum_{nm}V^{t+1}_{\alpha,n}U_{n,m}V^{t\dagger}_{m,\beta} \label{VonU} \ee respectively, with $V^t_{\alpha,n} = \bra{\alpha}n\rangle$ the unitary transformation from the $n$- to the $\alpha$-representation. A different change in representation can be applied at every time step, such that the index values of state vectors $\psi_{\alpha}^t$ and $\psi^{t+1}_{\beta}$ refer to different representations $V^t$ and $V^{t+1}$: \begin{equation} \psi^{V^{t+1} t+1}_{\beta} = \sum_{\alpha}U^{V^{t+1}V^t}_{\beta,\alpha}\psi^{V^t t}_{\alpha}.
\label{Vtrans} \ee Hence, the value of the eigenvalue index is no longer sufficient to describe the trajectory state; also the transformation matrix $V^t$ or another specification of the representation used at each time step must be provided and therefore has become a dynamic element of reality. Since the dimensionality of the system's Hilbert space is very large, there obviously is a lot of freedom to choose basis transformations $V$. To make the challenge of finding a credible basis dynamics tractable, I will make three assumptions: First, the basis transformation must be determined by the system's wave function. External and internal degrees of freedom are handled separately. In this paper, focus will be on basis changes for internal degrees of freedom, for which the wave function that guides basis changes will be restricted to these internal degrees of freedom. As will be shown in more detail below, this implies that a different (spin) basis change may be applied for each different value of the configuration argument in the wave function. Second, in a multi-particle system, the combined basis must be a direct product of single-particle bases. This ensures that the eigenvalue trajectories can be decomposed into independent values, such that each particle in the system has its own well-defined and localized properties. Third, the admissible (single particle) basis sets will be restricted to those that correspond to operator representations that are relevant for, and observed in the macroscopic world (e.g., location, momentum, angular momentum, 3D spin-orientation, etc.). The optimal basis choice will then be the one in which the (reduced) state vector maximally aligns with one of the vectors in this basis. 
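The bookkeeping of Eqs.~(\ref{VonPsi})-(\ref{Vtrans}) amounts to the statement that evolving in a moving basis is equivalent to transforming the evolved state. A minimal numerical check of this, with randomly generated unitaries standing in for $\hat{U}$, $V^t$ and $V^{t+1}$:

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(2)

N = 4
U = unitary_group.rvs(N, random_state=rng)     # fixed-basis evolution operator
V_t = unitary_group.rvs(N, random_state=rng)   # basis transformation at time t
V_t1 = unitary_group.rvs(N, random_state=rng)  # basis transformation at t+1

psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)

# Transformed state and evolution matrix between the two representations:
psi_V = V_t @ psi
U_V = V_t1 @ U @ V_t.conj().T

# Evolving in the moving basis agrees with transforming the evolved state,
# and the norm is preserved, so the probabilities remain well defined.
assert np.allclose(U_V @ psi_V, V_t1 @ (U @ psi))
assert np.isclose(np.linalg.norm(U_V @ psi_V), 1.0)
```

The nontrivial physical content therefore lies entirely in how the transformations $V^t$ are chosen, which is the subject of the assumptions above and of section \ref{sect2sub3}.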
The first assumption to only apply basis transformations to the spin (or other internal) degrees of freedom can be made explicit by writing the generic eigenvalue index $n$ as a pair $xs$, where $x$ refers to the locations of the particles and $s$ to their spin. Using this notation, the detailed expression for basis changing transition probabilities becomes \begin{equation} T^{V^{yt}V^{xt} t}_{ys',xs} ={\rm max}(0,\Re(\psi^{*V^{yt}(t+1)}_{ys'} U_{ys',xs}^{ V^{yt} V^{xt} }\psi^{V^{xt}t}_{xs}) - (xs\leftrightarrow ys') )/ P^{V^{xt}t}_{xs}, \label{Eq_TVx} \ee with \begin{equation} P^{V^{xt}t}_{xs} = \vert \psi^{V^{xt}t}_{xs}\vert^2. \ee The basis transformations $V$ now only work on spin indices, and may depend both on time index $t$ and configuration index $x$. I.e., the transformed wave functions and evolution matrix are \begin{equation} \psi^{V^{xt}t}_{xs} = \sum_{v} V^{xt}_{s,v} \psi^{t}_{xv}, \;\;\;\; U_{ys',xs}^{ V^{yt} V^{xt} } = \sum_{uv} V^{yt}_{s',u}U_{yu,xv} V^{xt\dagger}_{v,s}. \ee Also with these conditional ($x$-dependent) basis transformations the steps in Eqs.~(\ref{Eq_4}-\ref{Eq_10}) can be followed to verify that the transition probabilities (\ref{Eq_TVx}) generate the same time-dependent probability distributions $P^t_{xs}$ as are produced through the Schr\"odinger dynamics in quantum mechanics. I.e., if $P^0_{xs}= \vert \psi^{V^{x0}0}_{xs}\vert^2$ at $t=0$, then $P^t_{xs}$ computed using the transition probabilities (\ref{Eq_TVx}) will be equal to $\vert \psi^{V^{xt}t}_{xs}\vert^2$ for all $t$. The next section describes how appropriate spin basis transformations $V^{xt}$ can be computed from the system's wave function $\psi^t_x$.
\subsection{A Self-Adjusting Spin Basis} \label{sect2sub3} The wave function for $N$ particles with spin can be written in more detail as \begin{equation} \psi^t_{xs} \equiv \psi^t_{x_1s_1,x_2s_2,\dots,x_Ns_N} \ee where the pairs $x_i s_i$ indicate the location (external) and spin (internal) degrees of freedom of particle $i$. In accordance with the first assumption described above, a preferred, dynamically adjusting basis for the spin subspace of this system will be pursued, which should be derived from the spin-content of the system's wave function. For a given configuration $x$, the associated conditional spin-only wave function is \begin{equation} \tilde{\psi}^{xt}_{s_1,s_2,\dots,s_N} = \psi^t_{x_1s_1,\dots,x_Ns_N}/ (\sum_{s_1,\dots,s_N}\vert\psi^t_{x_1s_1,\dots,x_Ns_N}\vert^2)^{1/2}. \label{Eq_PsiS} \ee The preferred basis in this spin subspace will be the one in which the reduced state vector (\ref{Eq_PsiS}) maximally aligns with one of the basis vectors. Next, the third assumption is used to limit candidate basis sets to those that represent different orientations of the spin vector. I.e., the collection of candidate single-spin basis sets are assumed to be eigenvector sets of the $z$-component of an arbitrarily rotated spin operator $\vec{\hat{S}}=(\hat{S}^x, \hat{S}^y, \hat{S}^z)^T$. These rotations can be parameterized with two Euler angles, $\theta$ (the angle between the rotated $z$-axis and the reference $z$-axis) and $\phi$ (the angle of rotation around the $z$-axis). To obtain Euler angles for the spin basis for each separate particle, the second assumption is used, which states that candidate basis vectors of the multi-spin system will be tensor products of the form \begin{equation} v^{m_1}_{\theta^t_1\phi^t_1}\otimes\dots \otimes v^{m_N}_{\theta^t_N\phi^t_N}. \ee Here, the basis vectors $v^m_{\theta^t \phi^t}$ with $m=-s,\dots,s$ are the eigenvectors of $\hat{S}_z$ rotated by (time-dependent) Euler angles $\theta^t$ and $\phi^t$.
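Extracting the conditional spin-only wave function of Eq.~(\ref{Eq_PsiS}) is a simple slice-and-normalize operation on the full wave function. A minimal sketch for two spin-1/2 particles, with a hypothetical flattened configuration index standing in for $x$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical wave function psi[x, s1, s2]: a flattened configuration
# index x and the spin indices s1, s2 of two spin-1/2 particles (n_s = 2).
n_x, n_s = 8, 2
psi = rng.normal(size=(n_x, n_s, n_s)) + 1j * rng.normal(size=(n_x, n_s, n_s))
psi /= np.linalg.norm(psi)

def conditional_spin_state(psi, x):
    """Normalized spin-only wave function, conditional on configuration x."""
    chi = psi[x]
    return chi / np.linalg.norm(chi)

chi = conditional_spin_state(psi, x=3)
assert chi.shape == (n_s, n_s)
assert np.isclose(np.linalg.norm(chi), 1.0)
```

In the trajectory dynamics, the configuration argument $x$ is the currently realized location beable, so each location realization carries its own candidate spin state.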
The optimization that is required to find the basis in which the reduced state vector is maximally aligned with one of the basis vectors can be performed in two steps. First, the multi-spin wave function (\ref{Eq_PsiS}) is approximated by a product of single spin wave functions, by performing the minimization \begin{equation} \min_{\psi^{(1)},\dots,\psi^{(N)}}\sum_{s_1\dots s_N=1}^{n_s} \vert\tilde{\psi}^{xt}_{s_1,s_2,\dots,s_N} - \psi^{(1)}_{s_1}\dots\psi^{(N)}_{s_N}\vert^2 \label{Eq_minPsi} \ee subject to $\vert\psi^{(i)}\vert^2=1$ for all $i$. Second, these individual wave functions $\psi^{(i)}$ are matched against candidate basis vectors. I.e., for all $i$ the optimal rotation angles $\theta^t_i$ and $\phi^t_i$ and best-fitting basis vector index $m_i$ are determined by minimizing \begin{equation} \min_{\theta^t_i,\phi^t_i,m_i}\vert\psi^{(i)} - v^{m_i}_{\theta^t_i \phi^t_i}\vert^2. \label{Eq_bvFit} \ee In general the minimization (\ref{Eq_minPsi}) is difficult to perform analytically, but for $N=2$ it amounts to solving the two eigenvalue problems \begin{equation} \sum_{s_2,s_3} (\tilde{\psi}^*_{s_3,s_1}\tilde{\psi}_{s_3,s_2}) f^{\lambda}_{s_2} = \lambda f^{\lambda}_{s_1} \;\;\;\;{\rm and}\;\;\;\; \sum_{s_2,s_3} (\tilde{\psi}_{s_1,s_3}\tilde{\psi}^*_{s_2,s_3}) g^{\mu}_{s_2} = \mu g^{\mu}_{s_1}. \label{Eq_Ev} \ee The desired wave function factors are the eigenvectors with largest eigenvalue, $\psi^{(1)}=f^{\lambda_{max}}$ and $\psi^{(2)}=g^{\mu_{max}}$. In case of degeneracy of this maximum eigenvalue, an additional condition must be imposed\footnote{A two-fold degeneracy does occur in one of the examples to be discussed below. The ambiguity can be fixed, for example, by choosing the linear combination that maximally aligns with the constant vector $(1,1,\dots,1)^T$.} to ensure finding a well-defined and unambiguous result when repeatedly computing $\psi^{(1)}$ and $\psi^{(2)}$ for successive time steps.
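For $N=2$ the two-step optimization can be sketched concretely: the leading singular vectors of the singular value decomposition are the maximal eigenvectors of the matrices appearing in the two eigenvalue problems, and for spin-1/2 the angle fit can be done in closed form, since a rotated $\hat{S}_z$ eigenvector has the form $(\cos(\theta/2), e^{i\phi}\sin(\theta/2))^T$. The state below is a hypothetical, nearly separable example, not one of the states used later in the paper.

```python
import numpy as np

# Hypothetical nearly-separable state of two spin-1/2 particles, so that
# the product (rank-one) approximation is accurate.
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = 0.99 * np.outer(up, dn) + 0.14 * np.outer(dn, up)
psi /= np.linalg.norm(psi)

# Step 1: best product approximation from the SVD; the leading singular
# vectors are the maximal eigenvectors of psi psi^dagger and psi^dagger psi.
u, s, vh = np.linalg.svd(psi)
psi1, psi2 = u[:, 0], vh[0]

# Step 2: match psi1 against v = (cos(theta/2), e^{i phi} sin(theta/2))^T.
a, b = psi1
phase = np.exp(-1j * np.angle(a))     # remove the irrelevant global phase
a, b = a * phase, b * phase
theta = 2 * np.arctan2(np.abs(b), np.abs(a))
phi = np.angle(b)
v = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# Up to a global phase, psi1 coincides with the fitted basis vector.
assert np.isclose(np.abs(np.vdot(v, psi1)), 1.0)
assert 0.0 <= theta <= np.pi
```

For higher spin or more particles the angle fit of Eq.~(\ref{Eq_bvFit}) has no such closed form and must be done by numerical minimization, but the SVD step carries over unchanged for $N=2$.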
To avoid heavy notation, the superscript $xt$ is not always shown in Eqs.~(\ref{Eq_minPsi}-\ref{Eq_Ev}). It should be noted, however, that the preferred spin basis in general will be time-dependent and may be different for each different particle configuration $x$. \section{Examples of eBBB Trajectories} \label{sect3} In this section the above formulation to make both particle locations and spin values elements of reality is applied to two (simplified) quantum systems. The first example focuses on the spin dynamics and shows trajectories for spin values and orientation angles of the spin vector. The pure spin system mimics two static but entangled spin-two particles in a magnetic field, such that the spin vectors exhibit Larmor precession. The second example illustrates the combined dynamics of particle locations and spin values in a simplified realization of Bohm's version of the EPR experiment. In this example the quantum system consists of the two entangled particle spins {\it as well as} the angles of the Stern-Gerlach magnets that, last-minute, determine the spin orientations of the deflected particles. \begin{figure}[tth] \begin{center} \includegraphics[height=6.5cm]{Fig_1.png} \end{center} \captionW{Euler angles $\theta^t$ (horizontal lines) and $\phi^t$ for the system (\ref{Eq_2Psys}) with two entangled spin-2 particles in a magnetic field with interaction strength $\mu_1=\mu$ for particle one (solid lines) and $\mu_2=3\mu/2$ for particle two (dashed lines); the initial state has Euler angles $\theta_1=\pi/4$ and $\phi_1=\pi/2$ for particle one and $\theta_2=\pi/8$ and $\phi_2=\pi/4$ for particle two.
\label{Fig_2SpAngles}} \end{figure} \begin{figure}[ttt] \begin{centering} \scalebox{0.8}{\includegraphics[height=3.0cm]{Fig_2a.png}}\\[-2mm] \scalebox{0.8}{\includegraphics[height=3.0cm]{Fig_2b.png}}\\[-2mm] \scalebox{0.8}{\includegraphics[height=3.0cm]{Fig_2c.png}}\\[-2mm] \end{centering} \captionW{Three trajectories for spin $s_1$ and $s_2$ for the same system as in Figure~\ref{Fig_2SpAngles}. The solid and dashed lines are for particle one and two respectively. \label{Fig_2SpTraj}} \end{figure} \begin{figure}[ttt] \begin{center} \includegraphics[height=6.5cm]{Fig_3.png} \end{center} \captionW{Analytical (solid lines) and ensemble averages (dots) of the time-dependent spin-index values for the same system as in Figure~\ref{Fig_2SpAngles}. The 8 curves show the time-dependent probabilities $P^t_s$ with $s=s_1+2+5(s_2+2)$ and $s_1,s_2\in[-2,2]$, for which $\max_t{P_s^t}>0.1$. The ensemble size used to compute the averages is 50,000. \label{Fig_2SpProb}} \end{figure} \subsection{Two Entangled Particles with Larmor Precession} \label{sect3sub2} The Hamiltonian for a particle with spin $s$ in a magnetic field pointing in the $z$-direction can be written as \begin{equation} \hat{H}_{\mu} = -\mu \hat{S}_z, \label{Eq_Hmu} \ee with $\mu$ representing the strength of the interaction of the magnetic field and the particle's dipole moment and $\hat{S}_z$ the $z$-component of the spin operator in an $n_s = 2s+1$ dimensional Hilbert space. The particle is assumed to be sufficiently heavy, such that the kinetic term in the Hamiltonian can be ignored. To make this example more interesting, the spin is taken to be relatively large, such that the dimension $n_s$ of the spin Hilbert space is larger than the dimension of the 3D rotation group. A system consisting of two such spin-two particles with different magnetic moments has an $n_s\times n_s=25$ dimensional Hilbert space, with Hamiltonian $\hat{H} = \hat{H}_{\mu_1}+ \hat{H}_{\mu_2}$ and a wave function $\psi_{s_1,s_2}$.
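The 25-dimensional two-particle setup can be written down directly. A minimal sketch of the Hamiltonian construction (with $\hbar=1$ and an illustrative value $\mu=1$; the couplings match those of the example, $\mu_1=\mu$ and $\mu_2=3\mu/2$):

```python
import numpy as np

def spin_z(s):
    """S_z for spin s in the standard basis m = s, s-1, ..., -s (hbar = 1)."""
    return np.diag(np.arange(s, -s - 1, -1, dtype=float))

mu = 1.0                                # illustrative coupling scale
mu1, mu2 = mu, 1.5 * mu                 # the two magnetic coupling strengths
Sz = spin_z(2)                          # 5 x 5, since n_s = 2s + 1 = 5
I5 = np.eye(5)

# Two-particle Hamiltonian H = H_mu1 + H_mu2 on the 25-dimensional space:
H = -mu1 * np.kron(Sz, I5) - mu2 * np.kron(I5, Sz)

assert H.shape == (25, 25)
assert np.allclose(H, H.conj().T)       # Hermitian
# Energies are -(mu1*m1 + mu2*m2), ranging from -(2+3)mu to +(2+3)mu here.
assert np.isclose(H.min(), -5.0) and np.isclose(H.max(), 5.0)
```

Since $\hat{H}$ is diagonal in the unrotated $S_z$ basis, all of the nontrivial dynamics in this example comes from the initial state not being aligned with that basis.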
To test the robustness of the formulation of section \ref{sect2sub3}, the initial wave function is chosen to be a non-separable combination of rotated basis vectors, \begin{equation} \psi^0 = (v^{m_1}_{\theta_1 \phi_1} \otimes v^{m_2}_{\theta_2 \phi_2} - v^{m_2}_{\theta_1 \phi_1} \otimes v^{m_1}_{\theta_2 \phi_2} )/\sqrt{2}, \label{Eq_2Psys} \ee with $\theta_1 = \pi/2$ and $\phi_1 = \pi/4$ for particle one and $\theta_2 = \pi/4$ and $\phi_2 = \pi/8$ for particle two; the (unrotated) spins are maximally aligned with the $z$-axis and have opposite sign: $m_1=2$, $m_2=-2$. Also the magnetic coupling strengths are different: $\mu_1 = \mu$ and $\mu_2 = 3\mu/2$ for particle one and two respectively. The results in Figure~\ref{Fig_2SpAngles} confirm that the microscopic system exhibits the expected dynamics for the Euler angles: the two particle spins precess with the frequency expected from the strength of the magnetic coupling, with a constant tilt angle of the spin vector. These tilt angles $\theta_{1,2}$ and start values $\phi^0_{1,2}$ correspond exactly with the Euler angles of the rotation of the basis vectors that define the initial state. The formulation of section \ref{sect2sub2} was used to compute an ensemble of trajectories with spin values in these continuously changing representations. Three of these trajectories are shown in Figure \ref{Fig_2SpTraj}, where the solid and dashed lines are the trajectories for the spin of particle one and two respectively. Since the evolving state vector of the entangled spins never is exactly aligned with the (rotating) basis vectors, the spin values can jump away from their initial values. Finally, Figure~\ref{Fig_2SpProb} shows that the ensemble averages of these generated spin values exactly reproduce the probabilities expected from the evolving wave function $\psi^{Vt}_s=\sum_{s'}V^t_{s,s'}\psi^t_{s'}$.
The plot shows eight curves (solid lines) of $P^t_s=\vert\psi^{Vt}_s\vert^2$ for which $\max_t{ P^t_s}>0.1$; the dots that overlay these lines are the corresponding values computed from the ensemble of spin trajectories. \subsection{The Einstein-Podolsky-Rosen-Bohm Spin Experiment} \label{sect3sub3} This section describes the set-up and results of simulations of a simplified model for Bohm's version of the EPR experiment (EPRB for short). Here the spin dynamics and representation will need to be conditional on the realized location observables in the system. \subsubsection{Defining a Simulatable EPRB Experiment} \label{sect3sub3sub1} In many discussions of the EPRB experiment the focus is on the entangled spin states of the two particles and the location part of the wave function is ignored. When applying the Bohm interpretation of quantum mechanics to this system, where particle locations are the only observables with realized values in the physical world, it is of course essential to also include these particle locations - as is done in ref.~\cite{Norsen14} (see also \cite{BricmontGoldsteinHemmick19a}). In the system presented here, particle position is also included in the system's state space, albeit in a highly simplified manner. Moreover, and unlike in most discussions of EPRB experiments, the device angles are part of the quantum system as well and will be subject to a last-minute change. I.e., the experiment is fully self-contained and described by a single wave function. To achieve this, the state space for each particle has two location-like observables in addition to spin: a particle position observable $\hat{X}$ and a magnet orientation observable $\hat{\Phi}$. In order to keep the system sufficiently simple such that it can be simulated on a small computer, the particle position and magnet orientation degrees of freedom will be restricted to a small set of discrete values. 
More specifically, the state vector of each particle is written as $\ket{\psi}=\ket{ \phi, x, \theta\pm}$. The position observable $\hat{X}$ has six states, with (eigen)values $x \in \{x_r, x_a, x_{ \alpha+},x_{ \alpha-},x_{\beta+},x_{\beta-} \}$ that represent the initial `ready' state at the start of the experiment, an intermediate `all-set' state at which point the two spin-measuring devices (the Stern-Gerlach magnets) have been set in specific orientations, and four `measured' states; these last four states indicate a position shift up or down in the $ \alpha$ or $\beta$ direction. The magnet orientation observable $\hat{\Phi}$ has three states, with values $\phi \in \{0, \alpha,\beta\}$ that represent, respectively, a ready or zero-degrees state and two states that represent two alternative magnet orientations. The remaining spin subspace is of course two dimensional, and has spin values denoted by $\theta+$ and $\theta-$; here it is sufficient to limit spin rotations to a single angle, such that the spin can align with the angle $\phi$ of the Stern-Gerlach magnet. As discussed above, in the eBBB formulation also the spin orientation angle $\theta$ is dynamic and an element of reality. To provide quantum dynamics for the state vector $\ket{ \phi, x, \theta\pm}$, two evolution operators are defined, $\hat{U}^{(i)}$ and $\hat{U}^{(f)}$, which take the system from `ready' to `all-set' and from `all-set' to `measured' respectively. In the first stage, the unitary evolution operator $\hat{U}^{(i)}$ must transform the location and magnet angle states from the initial `ready' state $\ket{\psi_r}=\ket{\phi_0,x_r,0\pm}$, to states $\ket{ \alpha,x_a,0\pm}$ or $ \ket{\beta,x_a,0\pm}$, at which point the magnets have assumed orientation $ \alpha$ with probability $P_{ \alpha}$ or orientation $\beta$ with probability $P_{\beta}=1-P_{ \alpha}$. 
I.e., the evolution operator transforms the initial `ready' state $\ket{\psi_r}$ into an intermediate `all-set' state $\ket{\psi_a}$: \begin{equation} \ket{\psi_a} = \hat{U}^{(i)}\ket{\psi_r} = \hat{U}^{(i)}\ket{\phi_0,x_r,0\pm} = \gamma_{\alpha}\ket{\alpha,x_a,0\pm} + \gamma_{\beta}\ket{\beta,x_a,0\pm}. \label{Eq_Psi1} \ee The coefficients in Eq.~(\ref{Eq_Psi1}) are such that $\vert\gamma_{\alpha}\vert^2 = P_{\alpha}$ and $\vert\gamma_{\beta}\vert^2 = 1-\vert\gamma_{\alpha}\vert^2=P_{\beta}$. The second stage evolution matrix then evolves this `all-set' state $\ket{\psi_a}$ into the final, `measured' state $\ket{\psi_m}$, which consists of a superposition of four components: \be \begin{array}{rcl} \ket{\psi_m} = \hat{U}^{(f)}\ket{\psi_a} & = & \gamma_{\alpha}\bra{\alpha+}0\pm\rangle \ket{\alpha,x_{\alpha+},\alpha+} + \gamma_{\alpha}\bra{\alpha-} 0\pm\rangle \ket{\alpha,x_{\alpha-},\alpha-} \\ & + & \gamma_{\beta} \bra{\beta+}0\pm\rangle \ket{\beta,x_{\beta+},\beta+} + \gamma_{\beta} \bra{\beta-} 0\pm\rangle \ket{\beta,x_{\beta-},\beta-}, \label{Eq_Uf} \end{array}\ee where \begin{equation} \begin{array}{ll} \bra{\phi+}0+\rangle = \cos(\phi/2), & \bra{\phi+}0-\rangle = \sin(\phi/2), \\ \bra{\phi-}0+\rangle = \sin(\phi/2), & \bra{\phi-}0-\rangle = -\cos(\phi/2). \;\;\;\; (\phi = \alpha,\beta) \label{Eq_as} \end{array} \ee A detailed description of the construction of these evolution matrices $\hat{U}^{(i)}$ and $\hat{U}^{(f)}$ can be found in Appendix \ref{Ap_B}. When these two evolution matrices are used in Eq.~(\ref{Eq_10}) they do not automatically lead to valid transition probabilities. For this, it is required that the matrices are sufficiently close to unity. This is not the case for the first and second stage evolution matrix, since they produce large transitions between the macroscopically different `ready' and `all-set' state, and `all-set' and `measured' state respectively.
However, it is straightforward to introduce a sub-step evolution matrix for each stage, such that the repeated action of this matrix reproduces the action of the full evolution matrix. For any unitary matrix $U_{T}$ that evolves the system over a time period $T$, one can define a sub-step matrix $U_{\epsilon}$ with $\epsilon=T/N_t$, as \begin{equation} U_{\epsilon} =U_{T}^{1/N_t}. \label{Eq_subU} \ee If the number of intermediate time steps $N_t$ is large enough, the sub-step matrix $U_{\epsilon}$ will be sufficiently close to the unit matrix such that the consistency condition (\ref{Eq_11}) is met and Eq.~(\ref{Eq_10}) can be used to define transition probabilities that in turn can be used to create stochastic trajectories for the system's location and spin values. In the simulation results below, the time units will be such that $T=1$ with $T$ the duration of stage one and two of the experiment. \subsubsection{Numerical Simulation of the EPRB Spin Experiment -- Stage 1} \label{sect3sub3sub2} To set the scene and get familiar with the type of results produced through numerical simulation of this system, the time evolution in the initial stage of the experiment will be discussed first. Here, the evolution matrix consists of three independent components for the $x$, $\phi$ and $\sigma$ subsystems, $U^{(i)}= U^{(i,\phi)}U^{(i,x)}U^{(i,\sigma)}$. For the location dynamics, it is sufficient to only consider the `ready' and `all-set' values, reducing the evolution matrix for this subsystem to a $2\times 2$ size. 
Then the three evolution matrices can be written as (cf.~Eqs.~(\ref{Eq_BU1phi}, \ref{Eq_BU1x} and \ref{Eq_BU1spin}) in Appendix \ref{Ap_B}) \begin{equation} U^{(i,\phi)}= \left( \begin{array}{ccc} 0 & 0 & 1 \\ \gamma_{ \alpha} & -\gamma_{\beta}^* & 0 \\ \gamma_{\beta} & \gamma_{ \alpha}^* & 0 \end{array} \right) , \label{Eq_52} \;\;\;\; U^{(i,x)} = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right), \;\;\;\; U^{(i,\sigma)} = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right). \ee The simulation results shown below are obtained using $\gamma_{ \alpha}=\sin(\pi/5)$ and $\gamma_{\beta}=\cos(\pi/5)$. As is implied by the unit matrix for the particle spin evolution, neither the spin values nor the best-matching representation obtained when the particles are created in the `ready' state will change during the subsequent progression in time. Therefore, the spin sector can be ignored and in this initial stage the focus will be on the $x$ and $\phi$ trajectories for one of the particles.\footnote{Even though the spin representation does not change, the best-match approach discussed in section \ref{sect2sub3} will select a preferred representation. This will be discussed in the section describing the stage two simulation results.} With these simplifications, the Schr\"odinger equation for the $\phi$-$x$ system of one of the particles takes the form, \begin{equation} \ket{\phi,x}^{t+1} =( U^{(i,\phi)})^{1/N_t} ( U^{(i,x)})^{1/N_t} \ket{\phi,x}^t, \;\;\;\; \ket{\phi,x}^0= \ket{\phi_0,x_r}. \label{Eq_SE} \ee It is then straightforward to substitute this evolving state and the sub-time step evolution matrix in Eq.~(\ref{Eq_10}) to compute the transition matrix for $\phi$ and $x$ values and use it to produce an ensemble of trajectories for the magnet angle and location beables.
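As a concrete illustration of this stage-one setup, the sketch below (numpy; $\gamma_\alpha=\sin(\pi/5)$, $\gamma_\beta=\cos(\pi/5)$ and $N_t=50$, i.e.\ $\epsilon=1/N_t=0.02$ as in the simulations; the basis orderings $(\phi_0,\alpha,\beta)$ and $(x_r,x_a)$ are an assumed convention) builds $U^{(i,\phi)}$ and $U^{(i,x)}$, forms the sub-step matrix by taking the $N_t$-th root of the eigenphases, and checks the two properties the method relies on: the sub-step is close to the identity, and $N_t$ applications reproduce the full stage-one evolution.

```python
import numpy as np

g_a, g_b = np.sin(np.pi / 5), np.cos(np.pi / 5)   # gamma_alpha, gamma_beta (real here)

# magnet-angle evolution in the assumed basis ordering (phi_0, alpha, beta)
U_phi = np.array([[0.,   0.,  1.],
                  [g_a, -g_b, 0.],
                  [g_b,  g_a, 0.]])
U_x = np.array([[0., 1.],
                [1., 0.]])                        # swap of (x_r, x_a)

def substep(U, N_t):
    """N_t-th root of a unitary: returns U_eps with U_eps^N_t = U."""
    w, V = np.linalg.eig(U)                       # eigenvalues lie on the unit circle
    w_eps = np.exp(1j * np.angle(w) / N_t)        # principal N_t-th root of each phase
    return V @ np.diag(w_eps) @ np.linalg.inv(V)

N_t = 50                                          # epsilon = 1/N_t = 0.02
U_eps = np.kron(substep(U_phi, N_t), substep(U_x, N_t))

# the sub-step matrix is unitary and close to the identity ...
assert np.allclose(U_eps.conj().T @ U_eps, np.eye(6))
assert np.linalg.norm(U_eps - np.eye(6)) < 0.3
# ... and N_t applications reproduce the full one-stage evolution
assert np.allclose(np.linalg.matrix_power(U_eps, N_t), np.kron(U_phi, U_x))
```

The eigendecomposition route works here because both matrices are unitary with non-degenerate spectra; any matrix-root convention that keeps the sub-step near the identity would serve equally well.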
\begin{figure} \begin{center} \includegraphics[height=6.5cm]{Fig_4.png} \end{center} \captionW{Comparison of the analytically computed probabilities (drawn lines) for the six discrete $\phi$,$x$ states (using Eq.~(\ref{Eq_SE})) with the numerically computed relative frequencies (discrete points with error bars that are smaller than the markers) obtained from an ensemble of 50,000 trajectories computed with time step $\epsilon=0.02$. The descending curve shows the probability of the $\ket{ \phi_0,x_r}$ state, the two ascending curves the probabilities of the $\ket{ \alpha,x_a}$ and $ \ket{\beta,x_a}$ states. \label{Fig_St1Prob}} \end{figure} \begin{figure} \begin{center} \includegraphics[height=3.2cm]{Fig_5a.png}\\[-0.2cm] \includegraphics[height=3.73cm]{Fig_5b.png} \end{center} \captionW{Trajectories for the discrete magnet orientation beable $\phi$ (top) and location beable $x$ (bottom). All $x$-trajectories start at $x=x_r$ and end at $x=x_a$; all $\phi$-trajectories start at $\phi=\phi_0$ and end either at $\phi= \alpha$ or $\phi=\beta$. To simplify following individual trajectories, the four paths have been slightly shifted vertically. \label{Fig_St1XF}} \end{figure} The results of such a simulation are presented in Figures \ref{Fig_St1Prob} and \ref{Fig_St1XF}. To verify the accuracy of the discrete time dynamics, the exact probabilities for each of the discrete states, the $P_n$ with $n \in \{(\phi_0,x_r), ( \alpha,x_r),(\beta,x_r),(\phi_0,x_a), ( \alpha,x_a),(\beta,x_a) \}$, computed using the Schr\"odinger equation Eq.~(\ref{Eq_SE}), are compared with the relative frequencies computed from an ensemble of 50,000 $\phi$ and $x$ trajectories with a time step size of $1/N_t = 0.02$. Figure \ref{Fig_St1Prob} shows that the probabilities computed from the ensemble of trajectories reproduce the analytical results within (the very small) statistical errors. 
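The trajectories themselves are produced by repeatedly drawing the next beable value from the columns of the transition matrix. The actual transition probabilities come from Eq.~(\ref{Eq_10}), which is not reproduced here; the sketch below is therefore generic, taking any (possibly time-dependent) column-stochastic matrix, and uses a hypothetical two-state absorbing example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_trajectory(transition_at, n0, N_t):
    """One beable trajectory: transition_at(t)[m, n] = P(jump n -> m) at step t."""
    traj = [n0]
    for t in range(N_t):
        p = transition_at(t)[:, traj[-1]]          # outgoing probabilities from current state
        traj.append(int(rng.choice(len(p), p=p)))
    return traj

# toy two-state example: small constant hopping rate out of state 0
T = np.array([[0.95, 0.0],
              [0.05, 1.0]])        # columns sum to one; state 1 is absorbing
traj = sample_trajectory(lambda t: T, 0, 200)

assert traj[0] == 0
assert all(a <= b for a, b in zip(traj, traj[1:]))  # no jumps back out of state 1
```

Repeating this over many independent trajectories and histogramming the visited states per time step gives the relative frequencies plotted against the exact probabilities above.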
Next, Figure \ref{Fig_St1XF} shows four individual trajectories for magnet orientation $\phi$ (top) and location $x$ (bottom). As expected, all $x$-trajectories start at $x=x_r$ and end at $x=x_a$. The time at which the jump in value occurs is, of course, subject to chance, since the probability for the system to be in the $x_r$ state gradually decreases from $1$ at $t=0$, to $0$ at $t = 1$---as can be seen in Figure \ref{Fig_St1Prob}. Similarly, the trajectories for $\phi$ all start at $\phi=\phi_0$ and end at either $\phi= \alpha$ or $\phi=\beta$, with probability $\vert\gamma_{ \alpha}\vert^2\approx 0.35$ and $\vert\gamma_{\beta}\vert^2 \approx 0.65$ respectively. Most trajectories have a single transition where $\phi$ jumps to either $ \alpha$ or $\beta$. One trajectory has two transitions, first from $\phi_0$ to $\beta$, then to the end-value $\phi= \alpha$. Since the evolution matrix $ U^{(i,\phi)}$ from Eq.~(\ref{Eq_52}) also has matrix elements that connect $ \alpha$ and $\beta$ states, trajectories of this type, in which the magnet angle temporarily takes intermediate values different from its start or end-value, will be present in the ensemble. This is an artifact of the simple representation of a macroscopic magnet orientation using a single microscopic degree of freedom. \subsubsection{Numerical Simulation of the EPRB Spin Experiment -- Stage 2} \label{sect3sub3sub3} In a similar fashion the second stage dynamics can be simulated. Now the evolution is rather more involved as it includes the conditional dynamics of the spin values in their evolving representations. The evolution operator for this stage is $\hat{U}^{(f)}\otimes \hat{U}^{(f)}$, with $\hat{U}^{(f)}$ defined in Eq.~(\ref{Eq_Uf}) (cf.~also Eq.~(\ref{Eq_BU2}) in Appendix \ref{Ap_B}).
This operator acts on the product state of the two particles, each of which has dimension $N_{\phi}\times N_x \times 2 = 20$, where $N_{\phi} = 2$ is the dimension of the state space for the magnet orientation, $N_x = 5$ and the remaining factor $2$ is the dimension of the spin subspace. To keep the dimension of the state space as low as possible, the ready state values $\phi=\phi_0$ and $x=x_r$ have been dropped, since they play no active part in this stage. The combined state space therefore has dimension $400$, which is still quite manageable in a numerical simulation. The initial state for stage two is the `all-set' state shown in Eq.~(\ref{Eq_Psi1}). At this point, the device angle $\phi$ for the first particle has assumed the value $ \alpha$ with probability $\vert\gamma_{ \alpha}\vert^2 \approx 0.35$ and value $\beta$ with probability $\vert\gamma_{\beta}\vert^2\approx 0.65$; for particle two these probabilities are $\vert\gamma'_{ \alpha}\vert^2 \approx 0.79$ and $\vert\gamma'_{\beta}\vert^2\approx 0.21$ respectively. Hence, the stage two initial state represents a superposition of four different experiments in which the two device angles $(\phi_1,\phi_2)$ are $( \alpha, \alpha)$, $(\beta, \alpha)$, $( \alpha,\beta)$ and $(\beta,\beta)$, realized with relative probabilities $0.28$, $0.51$, $0.07$ and $0.14$ respectively. The initial location of both particles is $x=x_a$. In this state, spin components are not correlated with $x$ or $\phi$, hence the spin state conditional on realized values $\phi= \alpha$ or $\beta$ and $x=x_a$ for the two particles is still the singlet state, \begin{equation} \ket{\psi^0_{\sigma}} = (\ket{\theta+}\otimes\ket{\theta-} - \ket{\theta-}\otimes\ket{\theta+})/\sqrt{2}. \ee Following the procedure outlined in section \ref{sect2sub3}, the realized spin representations (i.e., the rotation angles $\theta$ for the two spins) in this singlet state are found to be $\pi/2$ for both particles.
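The defining anti-correlation of this singlet state can be checked directly on the state vector; a minimal numpy sketch, with the assumed basis ordering $\ket{++},\ket{+-},\ket{-+},\ket{--}$:

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

# equal-sign components vanish: the two spins always have opposite sign ...
assert np.isclose(singlet @ np.kron(up, up), 0.0)
assert np.isclose(singlet @ np.kron(dn, dn), 0.0)
# ... and each opposite-sign outcome carries probability 1/2
assert np.isclose((singlet @ np.kron(up, dn)) ** 2, 0.5)
assert np.isclose((singlet @ np.kron(dn, up)) ** 2, 0.5)
```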
The spin values for both particles are, of course, $+1/2$ or $-1/2$ with equal probability, where the spins of the two particles always have opposite sign. In order to support the claim that the eBBB dynamics leads to a physically reasonable behavior of the elements of reality, the $\phi$, $x$, $\sigma$ and $\theta$ trajectories should show the following characteristics: \begin{itemize} \item The device angles must remain at the values they assumed in the `all-set' state. \item The location values of the particles must at some time jump from the `all-set' state to one of the `measured' states: on trajectories with device angle $ \alpha$, $x$ must jump to either $x_{ \alpha+}$ or $x_{ \alpha-}$, and on trajectories with device angle $\beta$, $x$ must jump to either $x_{\beta+}$ or $x_{\beta-}$. \item When $x$ jumps, this must be in accordance with the microscopic spin state: the spin representation angle $\theta$ must jump from $\pi/2$ to $ \alpha$ if the device angle on this trajectory is $ \alpha$, or to $\beta$ if the device angle is $\beta$. \item The spin values of the particles must assume values $\pm1/2$ with equal probability, correlated in accordance with the relative angle between the two measuring devices. \end{itemize} \begin{figure}[ttt] \begin{center} \hspace{1mm}\includegraphics[height=3.cm]{Fig_6a.png}\\[-2mm] \includegraphics[height=3.cm]{Fig_6b.png}\\[-2mm] \includegraphics[height=3.cm]{Fig_6c.png}\\[-2mm] \hspace{-1mm}\includegraphics[height=3.3cm]{Fig_6d.png} \end{center} \captionW{One history of the EPRB experiment simulated using the eBBB approach. The device angles in this experiment can assume values $ \alpha=\pi/5$ and $\beta=3\pi/5$. The top plot shows the $\phi$ trajectory for device one (solid line) and device two (dashed line); the second plot shows the two corresponding particle location trajectories; the third and fourth plots show their spin representations and spin values. The time step size $\epsilon=0.02$.
\label{Fig_EPRB_Traj}} \end{figure} Figure \ref{Fig_EPRB_Traj} shows a set of typical trajectories from a numerical simulation of the EPRB experiment, which indeed confirms the behavior described above. In this specific history, the first device angle evolved in the first stage to $\beta$, the second to $ \alpha$, and both values remain unchanged in the second stage (first sub-plot). The location variable jumps at around $t=0.35$ from $x=x_a$ to $x=x_{\beta+}$ for particle one and to $x=x_{ \alpha-}$ for particle two (second sub-plot). At this same instant the spin representation angle jumps from $\pi/2$ to $3\pi/5=\beta$ for particle one and to $\pi/5= \alpha$ for particle two, while the spin values (which initially had opposite signs) assume the same value $+1/2$ (subplots three and four). A similar consistency between the realized angle, location and spin values holds for all generated trajectories. In particular, the location and spin realizations are fully consistent on all 50,000 generated trajectories: when $x$ jumps to $x_{ \alpha+}$ or $x_{ \alpha-}$ then the spin orientation $\theta$ jumps to $\alpha$ with spin value $+1/2$ or $-1/2$ respectively, and mutatis mutandis when $x$ jumps to $x_{\beta+}$ or $x_{\beta-}$. The next set of results validates the statistical accuracy of the eBBB approach. Figure \ref{Fig_EPRB_Pxs} shows the time dependent probabilities of the combined $\phi$, $x$ and $\sigma$ index values, computed from the system's wave function, i.e., the probabilities $P^{V^{xt}t}_{xs}$ (solid lines) and the same probabilities computed from the ensemble of trajectories (dots with error bars).
The four (degenerate pairs of) descending curves represent the initial states for the four different experiments; the six ascending curves represent 12 final states: since $x$ and spin values are fully correlated, there are two (opposite spin) states for each of the two experiments with equal device angles and four states (with different spin combinations) for each of the two experiments with different device angles. As expected from the discussion in section \ref{sect2sub3}, the averages computed from the ensemble of index trajectories closely match the result computed from the wave function. \begin{figure}[ttt] \begin{center} \includegraphics[height=6.5cm]{Fig_7.png} \end{center} \captionW{Probabilities $P_{xs}$ of the realized location ($\phi$, $x$) and spin values in the EPRB experiment. Solid lines are computed directly from the system's wave function; dotted lines, marked with error bars, are computed from an ensemble of 50,000 trajectories. \label{Fig_EPRB_Pxs}} \end{figure} \begin{figure}[bbh] \begin{center} \includegraphics[height=6.5cm]{Fig_8.png} \end{center} \captionW{Exact (dashed line) and simulated results (dots) of (minus) the correlation between simulated spin values $\pm1$ of the two particles, as expressed through their $x$-values, for seven values of $ \alpha - \beta$. Statistical errors are comparable with the size of the symbols. \label{Fig_corr} } \end{figure} Finally, for completeness and as in an actual EPRB experiment, the results for the average spin and spin-spin correlations can be computed from the ensemble of realized location variables. Spin values and correlations can be computed using the observable $x$-values (as in a real experiment), which reliably represent the not directly observable particle spins $\sigma$. As expected, the ensemble average of the spin of each particle is then found to be zero within statistical errors, irrespective of the orientation angle $\phi$.
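For reference, the quantum-mechanical benchmark for the correlation plot follows from the singlet joint probabilities $P(\pm\pm)=\frac{1}{2}\sin^2((\alpha-\beta)/2)$ and $P(\pm\mp)=\frac{1}{2}\cos^2((\alpha-\beta)/2)$, which give $C(\alpha,\beta)=-\cos(\alpha-\beta)$; a short check:

```python
import numpy as np

def singlet_correlation(alpha, beta):
    """<s1 s2> for spin values +-1, from the singlet joint probabilities."""
    d = alpha - beta
    p_same = 0.5 * np.sin(d / 2) ** 2    # P(++) = P(--)
    p_opp = 0.5 * np.cos(d / 2) ** 2     # P(+-) = P(-+)
    return 2 * p_same - 2 * p_opp        # sum of s1*s2*P over the four outcomes

# reduces to -cos(alpha - beta) for any relative angle
for d in np.linspace(0.0, np.pi, 7):
    assert np.isclose(singlet_correlation(d, 0.0), -np.cos(d))
```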
The correlation $C( \alpha,\beta)$ of the spin values of the two particles, measured at different values of the device angles, also accurately reproduces the expected result, as is illustrated in Figure \ref{Fig_corr}. \section{Discussion} \label{sect4} The main driver for extending the de Broglie-Bohm-Bell approach is to account for an extended content of the physical world where more quantum degrees of freedom beyond location (for example, spin) can be realized as elements of reality. This allows numerical simulations to probe and expose microscopic quantum behavior more deeply; it may be required to support the richness of our daily experiences; and it supports microscopic dynamics that is more closely in line with our common sense expectations. The eBBB approach explored in this paper additionally (but less fundamentally) assumes that the physical world is rigorously finite and discrete, which avoids complexities of dealing with infinite dimensional Hilbert spaces and regulates the infrared and ultraviolet divergences of quantum field theories. Furthermore, its stochastic dynamics can rigorously be formulated in discrete time, which then suggests a natural definition for a minimal time increment. The fundamentally finite and discrete formulation makes the eBBB formulation very ``simulatable'', which allows exploring and elucidating quantum behaviors such as contextuality in a very explicit and unambiguous fashion. Of course, the discrete space breaks the treasured relativistic invariance of classical physics, and singling out a special 3-space in which the universe performs its discrete time stepping further breaks this invariance.
The eBBB formulation shares this reliance on a special space-time foliation with other Bohm-type formulations and shares the assumption that, in extensions of this approach to relativistic quantum field theories \cite{BohmHileyCh11,BellCh19,DurrQFT,Vink18}, this important macroscopic invariance is restored at large scales where space-time discreteness and other microscopic symmetry breaking features are hidden. The quantum world implied by this formulation is counterfactually definite in the generic sense that microscopic elements of reality exist with definite values, irrespective of being measured or observed by humans (or other conscious beings). Experiments and observations are an integral part of the realized world, all driven by the guidance conditions, the transition probabilities prescribed by the system's (or even the entire universe's) wave function. It is contextual, but only in the specific and limited sense required to avoid the constraints of the Kochen-Specker theorem, in that only a single, compatible set of operators has values that exist as elements of reality. This set (or more precisely, the factorized eigenvector basis in the system's Hilbert space) is dynamic and evolves autonomously, also guided by the wave function and conditionally on the realized values of particle locations. Hence, it is the wave function itself, and for internal degrees of freedom the realized particle configuration, that provides the context for the beables to assume their appropriate values. In a world governed by classical physics, particle dynamics is driven by locally acting forces and the particle's spin determines the direction in which its trajectory deflects in an inhomogeneous magnetic field. In quantum physics, as shown explicitly in the causal Bohm interpretation \cite{Norsen14}, spin is a contextual property of the particle and the system it interacts with, such that its dynamics is driven by a (non-localized) wave function.
The eBBB formulation shows that it is nonetheless possible to represent this contextual spin as an element of reality, i.e., as a (localized) property of each individual particle. \section*{Acknowledgments} I would like to thank Jeffrey Barrett for valuable comments on an early draft of the paper. \section*{Appendices}
\section{Introduction} The conformal field theory treatment of quantum spin chains at positive temperature $T$ deals with systems of size $M$ in the limit \begin{equation} M\rightarrow\infty, \qquad T\rightarrow 0,\qquad T M\ {\rm fixed}. \label{cft limit} \end{equation} This limit is discussed in terms of a variable $q=\exp(2 \pi v/ M T)$ where $v$ is the speed of sound. The modular invariant partition function is computed in terms of $q$, and one of the important results~\cite{bcn,affleck} is that as $q\rightarrow 1$, the free energy per site is given as \begin{equation} f=e_{GS}-{c\pi\over 6v}T^2 + o(T^2) \end{equation} where $c$ is the central charge as determined from finite size corrections to the ground state energy \begin{equation} E_{GS}=Me_0-{\pi c v\over 6 M} + o(M^{-1}). \end{equation} However the limit~(\ref{cft limit}) is not the limit of thermodynamics. This limit is \begin{equation} M\rightarrow\infty, \qquad T\ {\rm fixed}. \label{tba limit} \end{equation} Here the low temperature behavior of the specific heat is obtained by letting $T\rightarrow 0$ after the limit~(\ref{tba limit}) is taken. These two limits will give the same result if there are no additional length scales in the problem. In this case the result is obtained~\cite{bcn,affleck} that the specific heat $C$ is \begin{equation} C\sim \frac{\pi c}{3 v} T. \label{spheat} \end{equation} For the anti-ferromagnetic 3-state Potts chain, the low lying order one excitations in the limit~(\ref{cft limit}) were used in~\cite{km} to compute the partition function, and the result~(\ref{spheat}) was obtained. This was accomplished by adding up the order one excitations found from the Bethe equations for the model~\cite{albert,adm2}. The partition function obtained in this manner is the modular invariant partition function of conformal field theory.
The counting of states in~\cite{km} depends on the fact that the momenta of order one excitations obey not only a Fermi exclusion rule $P_j\neq P_k$, but also additional exclusion rules: The number of states near $e(P)=0$ diminishes as the number of order one excitations in the system increases. Correct counting of the states, incorporating these exclusion rules, gives a partition function which has a central charge smaller than that of fermions. In the case of the anti-ferromagnetic 3-state Potts model, there are three kinds of quasi-particle excitations, which, for purely fermionic exclusion rules, would give a central charge of 3/2, whereas the actual central charge of the model is 1. In this paper, we use the thermodynamic limit~(\ref{tba limit}) to obtain the low temperature specific heat of the 3-state Potts chain from Bethe's equations, using the methods of~\cite{yny,gaudin,takahashi}. In section 2, we write the Bethe equations for the finite lattice and introduce the completeness rules~\cite{adm} for $Q=0$. We find it convenient to study the ferromagnetic and antiferromagnetic cases using two different sets of integral equations. In section 3, we write the free energy for the ferromagnetic case in terms of only one integral equation, and compute the linear term in the low temperature specific heat. In section 4, we do the same for the anti-ferromagnetic chain, this time in terms of two integral equations. We obtain the central charge of the conformal limit of both spin chains from the linear term in the specific heat, using equation~(\ref{spheat}). In section 5, we consider the sector $Q=1$. In section 6, we discuss how the counting of states in the finite size system~\cite{km} is incorporated into the discussion in terms of densities in the thermodynamic limit. We find that in the thermodynamic limit, the counting of states is contained in the entropy, written in terms of densities.
The densities are related through the thermodynamic limit of the Bethe equations~(\ref{bae+}),~(\ref{bae-}), and this relationship corresponds to the way the number of available states depends on the number of excitations in the finite size system. These equations become particularly important when evaluating the low temperature specific heat, where we need to consider the equations in precisely the limit which corresponds to $P\rightarrow0$ in order to extract the linear term in the specific heat. It is important to note that the 3-state Potts model is the $D_4$ model in the classification of Pasquier~\cite{pasquier}, which, by orbifold construction~\cite{fg,roche}, has certain sectors of eigenvalues which overlap with the $A_5$ RSOS model. {}From this construction, it is to be expected that the thermodynamic quantities of the two models are equal as long as the sectors which dominate the thermodynamics are common to the sectors which overlap. Indeed, the ground states of both the ferromagnetic and anti-ferromagnetic ends of the $D_4$ model are the same as the critical $A_5$ model at the boundaries of the $III/IV$ and $I/II$ regimes respectively. Thus the central charges computed for $A_5$~\cite{abf,huse,baz-resh} coincide with the central charges of the $D_4$ model~\cite{fl,pearceun}. We further note that the classical two dimensional anti-ferromagnetic 3-state Potts model which is critical at $T=0$ is equivalent~\cite{baxter,saluer,park-widom} to the 3-coloring problem and also has central charge $c=1$~\cite{kim-pearce}. The thermodynamics of the $A_n$ series was studied in~\cite{baz-resh}, where the integral equations for the free energy, the central charges and order one excitations above the ground state were found. The thermodynamics of other affine Lie algebras were also studied in~\cite{kuniba}. 
In this paper, however, we obtain different sets of integral equations, which display a more direct relationship to the modular invariant partition function discussed in~\cite{km} for the anti-ferromagnetic chain and in~\cite{dkm} for the ferromagnetic chain. The fact that there are different sets of integral equations for the model is related to the fact that the modular invariant partition function can be expressed as sums over different sets of quasi-particle excitations~\cite{kkmm}. \section{Formulation} \setcounter{equation}{0} The three state Potts hamiltonian is \begin{equation} H = \pm\frac{2}{\sqrt{3}} \sum_{j=1}^{M}\bigl\{ X_j + X_j^{\dagger} + Z_j Z_{j+1}^{\dagger} + Z_j^\dagger Z_{j+1}\bigr\} \label{ham} \end{equation} with periodic boundary conditions, where $M$ is the number of sites in the chain, and the matrices $X_j$ and $Z_j$ are \begin{eqnarray} X_j &=&I \otimes I \otimes \ldots \otimes X_{j^{th}} \otimes \ldots\otimes I, \nonumber\\ Z_j &=&I \otimes I \otimes \ldots \otimes Z_{j^{th}} \otimes \ldots\otimes I, \end{eqnarray} $I$ being the $3 \times 3$ identity matrix and $X$ and $Z$ are $3\times3$ matrices with entries: \begin{equation} X_{ij} = \delta_{i,j+1}(\hbox{\rm mod}\ 3),\qquad Z_{ij} = \delta_{i,j} \omega^{i-1},\qquad\omega=e^{2 \pi i/3}. \end{equation} The hamiltonian with the $-$ ($+$) sign is referred to as the ferromagnetic (anti-ferromagnetic) Potts chain. It commutes with the spin rotation operator, whose eigenvalue is $e^{2\pi i Q/3}$, $Q=0,\pm1$.
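For a small chain, the hamiltonian~(\ref{ham}) and its symmetry can be verified by direct construction; a numpy sketch with $M=3$ (the overall $\pm$ sign is omitted since hermiticity and the commutation with the $Z_3$ rotation are sign-independent):

```python
import numpy as np
from functools import reduce

w = np.exp(2j * np.pi / 3)
X = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)   # X_ij = delta_{i, j+1 (mod 3)}
Z = np.diag([1.0, w, w ** 2])              # Z_ij = delta_ij * w^{i-1}
I3 = np.eye(3)

def site_op(op, j, M):
    """op acting on site j (0-based) of an M-site chain."""
    return reduce(np.kron, [op if k == j else I3 for k in range(M)])

M = 3
H = (2 / np.sqrt(3)) * sum(
    site_op(X, j, M) + site_op(X.conj().T, j, M)
    + site_op(Z, j, M) @ site_op(Z.conj().T, (j + 1) % M, M)
    + site_op(Z.conj().T, j, M) @ site_op(Z, (j + 1) % M, M)
    for j in range(M))

R = reduce(np.kron, [X] * M)        # spin rotation, eigenvalues exp(2 pi i Q / 3)
assert np.allclose(H, H.conj().T)   # hermitian
assert np.allclose(H @ R, R @ H)    # commutes with the spin rotation
```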
The eigenvalues of the hamiltonian~(\ref{ham}) are derived from functional equations~\cite{abf,baz-resh,bazstrog,bbp,amp} from which we find~\cite{albert}: \begin{equation} E=\sum_{j=1}^{L} \cot{(i \lambda_j + \frac{\pi}{12})} - \frac{2 M}{\sqrt3}, \qquad L=2(M-|Q|),\quad Q=0,\pm1 \label{energy} \end{equation} where the set $\{\lambda_j\}$ satisfies the Bethe equations: \begin{equation} \label{bae1} \Bigl[\frac{\sinh(i \pi/12-\lambda_j)}{\sinh(i\pi/12+\lambda_j)} \Bigr]^{2M} =(-1)^{M+1} \prod_{k=1}^L \frac{\sinh(i \pi/3-(\lambda_j-\lambda_k))} {\sinh(i \pi/3+(\lambda_j-\lambda_k))},\ \ j=1,\ldots,L \end{equation} Not all solutions of the equations~(\ref{bae1}) correspond to eigenvalues of the hamiltonian~(\ref{ham}). The equations do not impose sufficient restrictions on the set $\{\lambda_j\}$. There are additional conditions, which ensure that the energy is real, postulated from finite size studies in~\cite{adm}, where the spectrum of $H$ was classified. We introduce these conditions by writing~(\ref{bae1}) in logarithmic form. The solutions of~(\ref{bae1}) which correspond to eigenstates of the hamiltonian~(\ref{ham}) fall into 5 classes, where below $\lambda_j$ is a real number: \begin{equation} \lambda_j^+ = \lambda_j,\quad \lambda_j^- = \lambda_j + \frac{i\pi}{2},\quad \lambda_j^{2s} = \lambda_j \pm \frac{i\pi}{6},\quad \lambda_j^{-2s} = \lambda_j \pm \frac{i\pi}{3},\quad \lambda_j^{ns} = \lambda_j \pm \frac{i\pi}{4}. \label{roottypes} \end{equation} The last three always occur in complex conjugate pairs. Since we are interested in the infinite lattice limit, the imaginary parts are assumed to be exact. We denote the number of each type of root $\alpha\in\{+,-,2s,-2s,ns\}$ by $m_\alpha$. We rewrite the Bethe equations~(\ref{bae1}) to explicitly display the different types of roots~(\ref{roottypes}). Let \begin{equation} h(\lambda) = \frac{\sinh(\frac{i\pi}{3}-\lambda)} {\sinh(\frac{i\pi}{3}+\lambda)}.
\end{equation} Then~(\ref{bae1}) become, for $\alpha=+$ or $-$: \begin{eqnarray} (-1)^{M+1}\Bigl[\frac{\sinh(\frac{i\pi}{12}-\lambda_j^\alpha)} {\sinh(\frac{i\pi}{12}+\lambda_j^\alpha)}\Bigr]^{2M} & = & \prod_{k=1}^{m_+}h(\lambda_j^\alpha-\lambda_k^+) \prod_{k=1}^{m_-}h(\lambda_j^\alpha-\lambda_k^-) \prod_{k=1}^{m_{2s}} h(\lambda_j^\alpha-\lambda_k^{2s})h(\lambda_j^\alpha-\lambda_k^{2s*}) \nonumber \\ & &\hbox{\hskip-1in}\times \prod_{k=1}^{m_{-2s}} h(\lambda_j^\alpha-\lambda_k^{-2s})h(\lambda_j^\alpha-\lambda_k^{-2s*}) \prod_{k=1}^{m_{ns}} h(\lambda_j^\alpha-\lambda_k^{ns})h(\lambda_j^\alpha-\lambda_k^{ns*}) \label{bae2.1} \end{eqnarray} whereas for $\alpha=2s,-2s$ or $ns$, the equations for each complex conjugate pair are multiplied together: \begin{eqnarray} \Bigl[\frac{\sinh(\frac{i\pi}{12}-\lambda_j^\alpha) \sinh(\frac{i\pi}{12}-\lambda_j^{\alpha*})} {\sinh(\frac{i\pi}{12}+\lambda_j^\alpha) \sinh(\frac{i\pi}{12}+\lambda_j^{\alpha*})}\Bigr]^{2M} &=& \prod_{k=1}^{m_+}h(\lambda_j^{\alpha}-\lambda_k^+)h(\lambda_j^{\alpha*}-\lambda_k^+) \prod_{k=1}^{m_-}h(\lambda_j^{\alpha}-\lambda_k^-)h(\lambda_j^{\alpha*}-\lambda_k^-) \nonumber \\ & &\hbox{\hskip-1in}\times \prod_{k=1}^{m_{2s}}h(\lambda_j^{\alpha}-\lambda_k^{2s}) h(\lambda_j^{\alpha*}-\lambda_k^{2s}) h(\lambda_j^{\alpha}-\lambda_k^{2s*}) h(\lambda_j^{\alpha*}-\lambda_k^{2s*}) \nonumber \\ & &\hbox{\hskip-1in}\times \prod_{k=1}^{m_{-2s}}h(\lambda_j^{\alpha}-\lambda_k^{-2s}) h(\lambda_j^{\alpha*}-\lambda_k^{-2s}) h(\lambda_j^{\alpha}-\lambda_k^{-2s*}) h(\lambda_j^{\alpha*}-\lambda_k^{-2s*}) \nonumber \\ & &\hbox{\hskip-1in}\times \prod_{k=1}^{m_{ns}}h(\lambda_j^{\alpha}-\lambda_k^{ns}) h(\lambda_j^{\alpha*}-\lambda_k^{ns}) h(\lambda_j^{\alpha}-\lambda_k^{ns*}) h(\lambda_j^{\alpha*}-\lambda_k^{ns*}). \label{bae2.2} \end{eqnarray} We follow ref.~\cite{adm} in taking the logarithm of equations~(\ref{bae2.1}),~(\ref{bae2.2}).
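Taking logarithms is safe here because $h$ is a pure phase on the real axis: for real $\lambda$, $\overline{h(\lambda)} = \sinh(-i\pi/3-\lambda)/\sinh(-i\pi/3+\lambda) = 1/h(\lambda)$, so $|h(\lambda)|=1$ and $-i\ln h$ is real up to the branch convention. A quick numerical confirmation:

```python
import numpy as np

def h(lam):
    """h(lambda) = sinh(i pi/3 - lambda) / sinh(i pi/3 + lambda)."""
    z = 1j * np.pi / 3
    return np.sinh(z - lam) / np.sinh(z + lam)

lam = np.linspace(-5.0, 5.0, 101)
assert np.allclose(np.abs(h(lam)), 1.0)   # pure phase on the real axis
```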
To do this, we define the functions $t_\alpha$ and $\Theta_{\alpha,\beta}$ in the following way: \begin{equation} t_\alpha(\lambda_j^\alpha) =\cases{ -2i \ln\Bigl[\alpha\frac{\sinh(i\pi/12-\lambda_j^\alpha)} {\sinh(i\pi/12+\lambda_j^\alpha)}\Bigr] & for $\alpha=\pm$;\cr - 2i f_\alpha\ln\Bigl[\frac{\sinh(i\pi/12-\lambda_j^\alpha)} {\sinh(i\pi/12+\lambda_j^\alpha)} \frac{\sinh(i\pi/12-\lambda_j^{\alpha*})} {\sinh(i\pi/12+\lambda_j^{\alpha*})}\Bigr]& for $\alpha=\pm 2s,ns $\cr} \end{equation} where $f_{\pm2s}=-1$, $f_{ns}=-1/2$ and $f_\pm=1$. For $\alpha,\beta=\pm$: \begin{equation} \label{theta} \Theta_{\alpha\beta}(\lambda_j^\alpha-\lambda_k^\beta)= -i \ln\Bigl[\alpha\beta h(\lambda_j^\alpha-\lambda_k^\beta)\Bigr] \end{equation} For $\alpha=\pm$, $\beta=\pm 2s, ns$ or $\alpha=\pm2s,ns$ $\beta=\pm$, \begin{equation} \Theta_{\alpha\beta}(\lambda_j^\alpha-\lambda_k^\beta)= -i f_\alpha\ln\Bigl[\epsilon_{\alpha\beta}h(\lambda_j^\alpha-\lambda_k^\beta) h(\lambda_j^\alpha-\lambda_k^{\beta*})\Bigr] \end{equation} where $\epsilon_{+,-2s}=\epsilon_{-,2s}=-1$, and $\epsilon_{\alpha\beta}=1$ otherwise. For $\alpha,\beta=\pm 2s,ns$ \begin{equation} \Theta_{\alpha\beta}(\lambda_j^\alpha-\lambda_k^\beta)= -i f_\alpha\ln\Bigl[\epsilon_{\alpha\beta} h(\lambda_j^\alpha-\lambda_k^\beta) h(\lambda_j^\alpha-\lambda_k^{\beta*}) h(\lambda_j^{\alpha*}-\lambda_k^\beta) h(\lambda_j^{\alpha*}-\lambda_k^{\beta*})\Bigr] \end{equation} where $\epsilon_{2s,2s}=\epsilon_{-2s,-2s}=-1$, and $\epsilon_{\alpha\beta}=1$ otherwise. (Note that in~\cite{adm}, the functions $t_\alpha$ and $\Theta_{\alpha\beta}$ for $\alpha=\pm2s,ns$ were defined without the factor $f_\alpha$. This will change the completeness rules somewhat from those presented in~\cite{adm}, but is necessary in order to have positive densities.)
Here, all logarithms are taken so that $|{\rm Im}\ \ln z|\leq \pi$, and the functions $t_\alpha$ and $\Theta_{\alpha\beta}$ are defined so that \begin{eqnarray} t_\alpha(\lambda_j^\alpha) &=& 0 \qquad \hbox{if}\ \ \ {\rm Re}\ (\lambda_j^\alpha)=0,\nonumber\\ \Theta_{\alpha\beta}(\lambda_j^\alpha-\lambda_k^\beta)& =& 0 \qquad \hbox{if}\ \ \ {\rm Re}\ (\lambda_j^\alpha)={\rm Re}\ (\lambda_k^\beta). \end{eqnarray} The logarithmic Bethe equations are written in terms of these functions: \begin{equation} Z(\lambda_j)\equiv \frac{I_j^\alpha}{M}= \frac{1}{2\pi}t_\alpha(\lambda_j^\alpha)- \frac{1}{2\pi M}\sum_{\beta=\pm,\pm2s,ns}\sum_{k=1}^{m_\beta} \Theta_{\alpha\beta}(\lambda_j^\alpha-\lambda_k^\beta), \label{logbae} \end{equation} where $I_j^\alpha$ are (half-) integers. We now present the completeness rules for the integers $I_j^\alpha$. It is only necessary at this point to discuss the completeness rules for $Q=0$. It will be shown in section 5 that the $Q=\pm1$ sectors are identical to this sector in the thermodynamic limit. The completeness rules of~\cite{adm} for $Q=0$ in the notation introduced here become: \begin{enumerate} \item $I_k^+$ and $I_k^{2s}$ are distinct (half-)integers, are chosen from the same set of $m_++m_{2s}$ integers, and $I_k^+=I_k^{2s\ h}$, where $h$ represents a ``hole'' or missing integer. Therefore the set $\{I_j^+\}+\{I_j^{2s}\}$ fills the interval $-1/2(m_++m_{2s})$ to $1/2(m_++m_{2s})$. \item $I_k^-$ and $I_k^{-2s}$ are distinct (half-)integers, are chosen from the same set of $m_-+m_{-2s}$ integers, and $I_k^-=I_k^{-2s\ h}$. Again, the set $\{I_j^-\} + \{I_j^{-2s}\}$ fills the interval. \item $I_k^{ns}$ are distinct (half-)integers chosen from a set of $2m_-+2m_{-2s}+m_{ns}$ (half-)integers. \item The spacing between ``available'' integers, the set of integers $\{I_k^\alpha\}+\{I_k^{\alpha\ h}\}$, is 1. \end{enumerate} We see that $+$ integers correspond to missing $2s$ integers, and the same for $-$ and $-2s$.
In addition to these rules, there is a sum rule for $m_\alpha$: \begin{eqnarray} m_+=2m_{ns}+3m_-+4m_{-2s} \nonumber \\ m_{2s}+2m_{ns}+3m_{-2s}+2m_-=M \label{particle numbers} \end{eqnarray} This sum rule is responsible for restricting the maximum integers $I_{max}^\alpha$ as a function of the number of excitations in the system. We make the assumption at large $M$ that the rules (1) and (2) imply the equality \begin{equation} \lambda_j^+ =\lambda_j^{2s\ h},\qquad \lambda_j^-=\lambda_j^{-2s\ h}. \label{ass} \end{equation} This appears to be true from numerical results, has been proven for order one excitations~\cite{adm2}, and has been shown to be consistent for all excitation densities in the thermodynamic limit. We now take the thermodynamic limit $M\rightarrow\infty$ of the Bethe equations~(\ref{logbae}), with $\lambda$ fixed. When we do this, we lose the information contained in the rules (1)-(3) about the maximum integers. We rewrite the functions $\Theta_{\alpha\beta}$ and $t_\alpha$ in terms of the real part of $\lambda_j^\alpha$, using~(\ref{roottypes}), and take the derivative of $Z(\lambda)$ with respect to $\lambda$ in the thermodynamic limit.
We obtain the following set of equations: \begin{eqnarray} \rho_t^\pm(\lambda) &=& \frac{1}{\pi}K_{\pi/12}^\pm(\lambda) - \frac{1}{2\pi} \Bigl[ K_{\pi/3}^\pm*(\rho_p^+ - \rho_p^{-2s}) +K_{\pi/3}^\mp*(\rho_p^--\rho_p^{2s})+ \{K_{\pi/12}^++K_{\pi/12}^-\}*\rho_p^{ns}\Bigr] \nonumber \\ \rho_t^{\pm2s}(\lambda) &=& \frac{1}{\pi}(K_{\pi/12}^\pm(\lambda)- K_{\pi/4}^{\pm}(\lambda)) - \frac{1}{2\pi} \Bigl[ K_{\pi/3}^\mp*(\rho_p^+ - \rho_p^{-2s}) +K_{\pi/3}^\pm*(\rho_p^--\rho_p^{2s}) \nonumber\\& & \hbox{\hskip1in} + \{K_{\pi/12}^++K_{\pi/12}^-\}*\rho_p^{ns}\Bigr] \nonumber \\ \rho_t^{ns}(\lambda)&=& -\frac{1}{2\pi}(K_{\pi/3}^+(\lambda)+K_{\pi/3}^-(\lambda)) +\frac{1}{2\pi}\Bigl[ \frac{1}{2}(K_{\pi/12}^++K_{\pi/12}^-)*(\rho_p^+-\rho_p^{-2s}+\rho_p^- -\rho_p^{2s}) \nonumber\\ & &\hbox{\hskip1in} + (K_{\pi/3}^++K_{\pi/3}^-)*\rho_p^{ns}\Bigr] \label{lbae} \end{eqnarray} where \begin{equation} \rho_t^\alpha=\lim_{M\rightarrow\infty}\frac{1} {M(\lambda_{I_j+1}-\lambda_{I_j})},\qquad \rho_p^\alpha =\lim_{M\rightarrow\infty}\frac{1} {M(\lambda_{I_{j+1}}-\lambda_{I_j})}, \label{dendef} \end{equation} (here $I_j+1$ is the next available integer, while $I_{j+1}$ is the next occupied one), the convolution $*$ is defined as: \begin{equation} f*g = \int_{-\infty}^{\infty}d\mu f(\lambda-\mu)g(\mu), \end{equation} and the kernels $K_\alpha^\pm(\lambda)$ are \begin{equation} K_\alpha^\pm(\lambda)= \frac{\pm 2 \sin{2\alpha}}{\cosh{2\lambda}\mp\cos{2\alpha}}. \end{equation} In writing equations~(\ref{lbae}), we did not make use of the relationship between holes and particles~(\ref{ass}), which imposes a relationship between the densities in equations~(\ref{lbae}). The assumption~(\ref{ass}) implies that when particle integers of, say, $+$ are equal to the ``hole'' integers of $2s$, their corresponding rapidities are equal.
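As a numerical consistency check between~(\ref{logbae}) and the first driving term of~(\ref{lbae}) (our own sanity check, assuming the type-$+$ rapidity is real as in~(\ref{roottypes})), one can verify that $\frac{1}{2\pi}\,dt_+/d\lambda = \frac{1}{\pi}K_{\pi/12}^+(\lambda)$:

```python
import cmath, math

def t_plus(lam):
    # t_+ of the logarithmic Bethe equations; cmath.log uses the principal
    # branch |Im ln z| <= pi, as specified in the text
    a = 1j * math.pi / 12
    return -2j * cmath.log(cmath.sinh(a - lam) / cmath.sinh(a + lam))

def K(alpha, lam, sign=+1):
    # kernels K_alpha^{pm}(lambda) of the density equations
    return sign * 2 * math.sin(2 * alpha) / (math.cosh(2 * lam) - sign * math.cos(2 * alpha))

h = 1e-6
for lam in (-1.3, -0.4, 0.0, 0.7, 2.1):
    d = (t_plus(lam + h) - t_plus(lam - h)) / (2 * h)
    # driving term of rho_t^+: (1/2pi) t_+' = (1/pi) K_{pi/12}^+
    assert abs(d.real / (2 * math.pi) - K(math.pi / 12, lam) / math.pi) < 1e-6
    assert abs(d.imag) < 1e-6  # t_+ is real for real rapidity
assert abs(t_plus(0.0)) < 1e-12  # normalization t_alpha = 0 at Re(lambda) = 0
```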
Therefore, in light of the definitions~(\ref{dendef}), the total densities of $+$ and $2s$ are equal (and those of $-$ and $-2s$ as well), and the particle densities are related in a simple way: \begin{equation} \rho_t^+(\lambda)=\rho_t^{2s}(\lambda), \quad \rho_t^-(\lambda)=\rho_t^{-2s}(\lambda),\quad \rho_p^{\pm2s}(\lambda)=\rho_t^\pm(\lambda)-\rho_p^\pm(\lambda) \label{den1} \end{equation} This allows us to rewrite the density equations~(\ref{lbae}) in terms of three independent particle densities. It is convenient for further computation to do this separately for the ferromagnetic and anti-ferromagnetic spin chains. \section{Ferromagnetic chain} \setcounter{equation}{0} For the ferromagnetic hamiltonian, we know from~\cite{adm,adm2} that the order one excitations are $+,-$ and $ns$. We therefore choose to rewrite~(\ref{lbae}) using~(\ref{den1}), as \begin{eqnarray} \rho_t^+(\lambda)&=& \frac{6}{\pi\cosh{6\lambda}} + K_1 *(\rho_p^++\rho_p^-)- K_2*\rho_p^{ns},\nonumber\\ \rho_t^-(\lambda)&=& K_1 *(\rho_p^++\rho_p^-)- K_2*\rho_p^{ns},\nonumber\\ \rho_t^{ns}(\lambda)&=& K_2 *(\rho_p^++\rho_p^-), \label{bae+} \end{eqnarray} where the kernels are \begin{equation} K_1(\lambda) = \frac{18}{\pi^2}\frac{\lambda}{\sinh{6\lambda}},\qquad K_2(\lambda) = \frac{3}{\pi\cosh{6\lambda}}. \end{equation} The particle densities in equations~(\ref{bae+}) above are now all independent of each other; there are no additional constraints. In the thermodynamic limit, the sum rule~(\ref{particle numbers}) becomes a relationship between total particle densities $D_\alpha$, \begin{equation} D_\alpha = \lim_{M\rightarrow\infty}\frac{m_\alpha}{M} = \int d\lambda \rho_p^\alpha. \end{equation} However, we find we do not need to impose the sum rule as an additional restriction on the densities in~(\ref{bae+}), as it is contained in those equations already. To see this, we take the Fourier transform of the first two equations in~(\ref{bae+}) and evaluate at $k=0$.
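The $k=0$ values entering this check can be verified numerically (a sketch of our own; the integration range is truncated to $[-10,10]$ since the integrands decay like $e^{-6|\lambda|}$, and SciPy is assumed):

```python
import math
from scipy.integrate import quad

# k = 0 (integrated) values of the kernels and driving term in (bae+)
K1 = lambda x: (18 / math.pi**2) * (x / math.sinh(6 * x)) if x != 0.0 else 3 / math.pi**2
K2 = lambda x: 3 / (math.pi * math.cosh(6 * x))
drive = lambda x: 6 / (math.pi * math.cosh(6 * x))

K1_hat0 = quad(K1, -10, 10)[0]
K2_hat0 = quad(K2, -10, 10)[0]
d_hat0 = quad(drive, -10, 10)[0]

assert abs(K1_hat0 - 1 / 4) < 1e-8
assert abs(K2_hat0 - 1 / 2) < 1e-8
assert abs(d_hat0 - 1.0) < 1e-8

# The second equation of (bae+) at k = 0, with M*int(rho_t^-) = m_- + m_{-2s}
# and M*int(rho_p^a) = m_a, reads m_- + m_{-2s} = (m_+ + m_-)/4 - m_ns/2,
# i.e. m_+ = 2 m_ns + 3 m_- + 4 m_{-2s}: the first sum rule.
```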
This gives exactly the relationship~(\ref{particle numbers}) divided by $M$. In~\cite{km}, the sum rules~(\ref{particle numbers}) were found to give rise to the infrared anomaly, that is, to the diminishing of the number of states near $P=0$ for the anti-ferromagnetic case, and thus to exclusion rules beyond those of fermions. Here, although we lose information about how the maximum integers change as a function of $m_\alpha$ when we take the thermodynamic limit, we still retain a restriction between the densities which contains some of this information. This restriction will allow us to retain the concept of correct counting of states in the thermodynamic limit. The free energy is \begin{equation} F = E-TS \label{fe} \end{equation} evaluated at the stationary point with respect to independent particle densities, where $S$ is the entropy of a state with fixed densities $\rho_p^\alpha$ and $E$ is the total energy of the state. For large $M$, the entropy is: \begin{equation} S = M \sum_{\alpha=+,-,ns} \int_{-\infty}^{\infty} d\lambda \Bigl(\rho_t^\alpha\ln{\rho_t^\alpha} - \rho_p^\alpha\ln{\rho_p^\alpha} - \rho_h^\alpha\ln{\rho_h^\alpha} \Bigr), \label{ent} \end{equation} where $\rho_h=\rho_t-\rho_p$. The energy $E$ is the thermodynamic limit of equation~(\ref{energy}): \begin{equation} E = \sum_{\alpha=+,-,2s,-2s,ns} \int d\lambda \rho_p^\alpha(\lambda) e^\alpha(\lambda) -\frac{2 M}{\sqrt{3}} \label{en1} \end{equation} where $e^\alpha(\lambda)$ is the energy associated with a root of type $\alpha$: \begin{eqnarray} e^\pm(\lambda) &=& \frac{\pm1-2i\sinh2\lambda}{2\cosh2\lambda\mp\sqrt3}, \qquad e^{ns}(\lambda)=\frac{-2\sqrt3-4i\sinh4\lambda}{1+2\cosh4\lambda},\nonumber\\ & &e^{\pm2s}(\lambda) =\frac{\mp1-2i\sinh2\lambda}{2\cosh2\lambda\mp\sqrt3} +\frac{\pm1-i\sinh2\lambda}{\cosh2\lambda}. \end{eqnarray} The energy in~(\ref{en1}) is not manifestly real.
However, using~(\ref{bae+}), we find that the energy can be re-expressed in terms of only the independent particle densities, and depends only on $\rho_p^+$: \begin{equation} E=-\frac{2M}{\sqrt3} + \int_{-\infty}^{\infty} d\lambda \frac{6}{\cosh6\lambda}\rho_p^+(\lambda). \label{en+} \end{equation} This expression for the energy is manifestly real. Minimizing~(\ref{fe}) with respect to the three particle densities $\rho_p^+,\rho_p^-,\rho_p^{ns}$, we obtain the free energy per site, \begin{equation} \label{free energy} f= e_0 - T \int_{-\infty}^{\infty}d\lambda \frac{6}{\pi\cosh{6\lambda}} \ln\bigl(1+e^{-\epsilon^+(\lambda)/T}\bigr), \end{equation} where \begin{equation} e_0 = -\frac{2}{\sqrt3} + \int_{-\infty}^{\infty}d\lambda \frac{6e^{2s}(\lambda)}{\pi\cosh6\lambda} = -\frac{4}{\pi}-\frac{8}{3\sqrt3}=-2.81284\ldots \end{equation} and the functions $\epsilon^\beta$ are defined by $\rho_h^\beta/\rho_p^\beta=\exp{(\epsilon^\beta/T)}$, and satisfy the nonlinear integral equations: \begin{eqnarray} \epsilon^{+}(\lambda)&=&\frac{6}{\cosh6\lambda} - T\Bigl[K_1 * \ln\Bigl[(1+e^{-\epsilon^+/T})(1+e^{-\epsilon^-/T})\Bigr]+ K_2 *\ln(1+e^{-\epsilon^{ns}/T})\Bigr],\nonumber \\ \epsilon^-(\lambda)&=&\epsilon^+(\lambda)-\frac{6}{\cosh6\lambda},\nonumber\\ \epsilon^{ns}(\lambda)&=& T K_2 *\Bigl[\ln\bigl[(1+e^{-\epsilon^+(\mu)/T})(1+e^{-\epsilon^-(\mu)/T})\bigr]\Bigr]. \label{nie} \end{eqnarray} The functions $\epsilon^\alpha$ are also referred to as dressed energies. Note that~(\ref{nie}) represents only one integral equation for $\epsilon^+$, since $\epsilon^-$ is simply related to $\epsilon^+$, and the equation for $\epsilon^{ns}$ is not an integral equation, as $\epsilon^{ns}$ does not appear on the right hand side. At fixed $\lambda$ and $T=0$, we get from~(\ref{nie}) \begin{equation} \epsilon_0^+(\lambda)=\frac{6}{\cosh6\lambda},\qquad \epsilon_0^-=\epsilon_0^{ns}=0, \end{equation} which are the order one excitations found in~\cite{adm2} for the ferromagnetic chain.
The free energy per site in this limit is $f=e_0$, which is the ground state energy found in~\cite{baxter10,auyang,albert}. The linear term in the specific heat $C$ is obtained from the low temperature expansion of the free energy, or the entropy~(\ref{ent}): \begin{equation} \label{sph1} C=-T\frac{\partial^2F}{\partial T^2}=T\frac{\partial S}{\partial T}. \end{equation} We find the linear term in the specific heat by computing the $O(T)$ term in the low temperature entropy~\cite{filyov,babujian,kirillov,baz-resh}. As $T\rightarrow0$, $\epsilon^+(\lambda)/T$ scales as $1/(T \cosh 6\lambda)$, so that $\ln(1+e^{-\epsilon^+/T})$ vanishes and gives no contribution to the integral~(\ref{free energy}) except when $\lambda\sim O(\frac{1}{6}\ln T)$. We rescale the equations~(\ref{nie}) by making the change of variables $\lambda\rightarrow\lambda-\frac{1}{6}\ln T$ and consider the equations~(\ref{nie}) at large $\lambda$ and small $T$. In this range of variables, we define $\phi^\beta(\lambda)=\epsilon^\beta(\lambda-\frac{1}{6}\ln T)/T.$ The integral equations become \begin{eqnarray} \phi^+(\lambda)&\simeq& 12 e^{-6\lambda} - K_1*\ln\Bigl[(1+e^{-\phi^+})(1+e^{-\phi^-})\Bigr] - K_2*\ln(1+e^{-\phi^{ns}}), \nonumber\\ \phi^-(\lambda)&\simeq& -K_1*\ln\Bigl[(1+e^{-\phi^+})(1+e^{-\phi^-})\Bigr] - K_2*\ln(1+e^{-\phi^{ns}}), \nonumber\\ \phi^{ns}(\lambda)&\simeq& K_2*\ln\Bigl[(1+e^{-\phi^+})(1+e^{-\phi^-})\Bigr]. 
\label{phinie} \end{eqnarray} Differentiating~(\ref{phinie}) with respect to $\lambda$, \begin{eqnarray} \frac{d\phi^+}{d\lambda}&\simeq& -72 e^{-6\lambda} + K_1* \Bigl[ \frac{\phi'^+}{1+e^{\phi^+}} +\frac{\phi'^-}{1+e^{\phi^-}}\Bigr] +K_2 *\frac{\phi'^{ns}}{1+e^{\phi^{ns}}}\nonumber\\ \frac{d\phi^-}{d\lambda}&\simeq& K_1* \Bigl[ \frac{\phi'^+}{1+e^{\phi^+}} +\frac{\phi'^-}{1+e^{\phi^-}}\Bigr] +K_2 *\frac{\phi'^{ns}}{1+e^{\phi^{ns}}}\nonumber\\ \frac{d\phi^{ns}}{d\lambda}&\simeq& -K_2* \Bigl[ \frac{\phi'^+}{1+e^{\phi^+}} +\frac{\phi'^-}{1+e^{\phi^-}}\Bigr] \label{dphi} \end{eqnarray} where $\phi'^\beta=d\phi^\beta/d\lambda$. We rescale equations~(\ref{bae+}) in the same way. Let $\tilde\rho_t^\beta(\lambda)=\rho_t^\beta(\lambda-\frac{1}{6}\ln T)$, and recall that $\rho_p^\beta=\rho_t^\beta/(1+e^{\epsilon^\beta/T})$. Then \begin{eqnarray} \tilde\rho_t^+(\lambda)&=& \frac{12 T}{\pi}e^{-6\lambda} + K_1*(\frac{\tilde\rho_t^+}{1+e^{\phi^+}} +\frac{\tilde\rho_t^-}{1+e^{\phi^-}}) -K_2*\frac{\tilde\rho_t^{ns}}{1+e^{\phi^{ns}}}\nonumber\\ \tilde\rho_t^-(\lambda)& = & K_1*(\frac{\tilde\rho_t^+}{1+e^{\phi^+}} +\frac{\tilde\rho_t^-}{1+e^{\phi^-}}) - K_2*\frac{\tilde\rho_t^{ns}}{1+e^{\phi^{ns}}}\nonumber\\ \tilde\rho_t^{ns}(\lambda)&=& K_2*(\frac{\tilde\rho_t^+}{1+e^{\phi^+}}+\frac{\tilde\rho_t^-}{1+e^{\phi^-}}). \label{bae+r} \end{eqnarray} Comparing equation~(\ref{bae+r}) to~(\ref{dphi}), we see that \begin{equation} \tilde\rho_p^\pm=-\frac{T}{6\pi} \frac{d\phi^\pm}{d\lambda} \frac{1}{1+e^{\phi^\pm}},\qquad \tilde\rho_p^{ns}=\frac{T}{6\pi} \frac{d\phi^{ns}}{d\lambda} \frac{1}{1+e^{\phi^{ns}}}. \label{rhoasym} \end{equation} The entropy can be evaluated in this limit. The $\lambda\rightarrow\infty$ and $\lambda\rightarrow-\infty$ limits make the same contribution to $S$ (and $f$).
Therefore we write \begin{eqnarray} S&\simeq&2\sum_{\beta=+,-,ns}\int_{-\infty}^{\infty}d\lambda \Bigl\{ \tilde\rho_p^\beta\ln(1+e^{\phi^\beta}) +\tilde\rho_h^\beta\ln(1+e^{-\phi^\beta})\Bigr\}\nonumber\\ &=&2\times \frac{T}{6\pi}\Bigl[ -\int_{\phi^+(-\infty)}^{\phi^+(\infty)}d\phi g(\phi)- \int_{\phi^-(-\infty)}^{\phi^-(\infty)}d\phi g(\phi) + \int_{\phi^{ns}(-\infty)}^{\phi^{ns}(\infty)}d\phi g(\phi) \Bigr], \label{asyment} \end{eqnarray} where \begin{equation} g(\phi)= \frac{\ln(1+e^\phi)}{1+e^\phi}+\frac{\ln(1+e^{-\phi})}{1+e^{-\phi}}. \label{g} \end{equation} The limits $\phi(\pm\infty)$ are found from equations~(\ref{phinie}). In these limits, the integrals can be performed by taking the log out from under the integral sign, and integrating only the kernel. Let $\tilde\phi$ denote the asymptotic value under consideration, $\phi(\lambda=\pm\infty)$. Then, for $\lambda= \infty$, we obtain a system of equations \begin{eqnarray} \tilde\phi^+ =\tilde\phi^-&=& -\frac{1}{4}\ln[(1+e^{-\tilde\phi^+})(1+e^{-\tilde\phi^-})] -\frac{1}{2}\ln(1+e^{-\tilde\phi^{ns}}),\nonumber\\ \tilde\phi^{ns} &=& \frac{1}{2}\ln[(1+e^{-\tilde\phi^+})(1+e^{-\tilde\phi^-})]. \end{eqnarray} Therefore the upper limits of equation~(\ref{asyment}) are \begin{equation} \phi^+(\infty)=\phi^-(\infty)=-\ln2,\qquad\phi^{ns}(\infty)=\ln3. \end{equation} At $\lambda=-\infty$, $\tilde\phi^+=\infty$, and \begin{eqnarray} \tilde\phi^- &=& -\frac{1}{4}\ln(1+e^{-\tilde\phi^-}) -\frac{1}{2}\ln(1+e^{-\tilde\phi^{ns}}),\nonumber\\ \tilde\phi^{ns}&=& \frac{1}{2}\ln(1+e^{-\tilde\phi^-}), \end{eqnarray} and thus the lower limits in~(\ref{asyment}) are \begin{equation} \phi^+(-\infty) =\infty,\qquad \phi^-(-\infty) =-\ln\Bigl[\frac{1+\sqrt5}{2}\Bigr],\qquad \phi^{ns}(-\infty) =\ln\Bigl[\frac{1+\sqrt5}{2}\Bigr] \end{equation} We show in the appendix how to express the integral~(\ref{asyment}) in terms of dilogarithms.
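The asymptotic values above, and the resulting entropy, can be checked numerically without dilogarithm identities (our own sketch; SciPy is assumed, the upper limit $+\infty$ is truncated at $40$, and the standard CFT relation $S=\pi cT/(3v_F)$ is used for the central charge):

```python
import math
from scipy.integrate import quad

ln, exp = math.log, math.exp
gamma = (1 + math.sqrt(5)) / 2  # golden ratio

# lambda -> +infinity fixed point: phi^+ = phi^- = -ln 2, phi^ns = ln 3
p, ns = -ln(2), ln(3)
assert abs(p + 0.25 * ln((1 + exp(-p)) ** 2) + 0.5 * ln(1 + exp(-ns))) < 1e-12
assert abs(ns - 0.5 * ln((1 + exp(-p)) ** 2)) < 1e-12

# lambda -> -infinity: phi^+ = infinity drops out; phi^- = -ln gamma, phi^ns = ln gamma
m, ns2 = -ln(gamma), ln(gamma)
assert abs(m + 0.25 * ln(1 + exp(-m)) + 0.5 * ln(1 + exp(-ns2))) < 1e-12
assert abs(ns2 - 0.5 * ln(1 + exp(-m))) < 1e-12

def g(phi):
    return ln(1 + exp(phi)) / (1 + exp(phi)) + ln(1 + exp(-phi)) / (1 + exp(-phi))

# entropy (asyment) with the limits above
S_over_T = (quad(g, -ln(2), 40)[0] + quad(g, -ln(2), -ln(gamma))[0]
            + quad(g, ln(gamma), ln(3))[0]) / (3 * math.pi)
assert abs(S_over_T - 4 * math.pi / 45) < 1e-6

c = 3 * 3 * S_over_T / math.pi   # S = pi c T / (3 v_F) with v_F = 3
assert abs(c - 4 / 5) < 1e-6
```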
Using the identities described in the appendix, we find that in the low temperature limit, \begin{equation} S\simeq \frac{4\pi T}{45}, \end{equation} and from~(\ref{sph1}) we see that, with $v_F=3$~\cite{adm2}, the central charge is $c=\frac{4}{5}$. This is the central charge of the well known conformal limit of the 3-state Potts chain~\cite{huse,dotsenko,bpz} computed in the limit~(\ref{cft limit}). This verifies that the limits~(\ref{cft limit}) and~(\ref{tba limit}) connect smoothly, and there are no additional length scales in the problem. In the calculation above, we find that, although in the zero temperature limit the energies $\epsilon^-,\epsilon^{ns}$ vanish, they contribute to the low temperature specific heat, i.e. the functions $\phi^{-}(\lambda),\phi^{ns}(\lambda)$ do not vanish. This is a manifestation of the feature seen in~\cite{adm2}, that although the energy can be expressed, as in~(\ref{en+}), in terms of only $\rho_p^+$, the number of states $e^S$ with energy $E$ depends on $\rho_t^\alpha$, and therefore on $\rho_p^-$ and $\rho_p^{ns}$, as it depends on $m_-,m_{ns}$ in~\cite{adm2}. In our case, the densities $\rho_p^-,\rho_p^{ns}$ enter the expression for the free energy via the entropy $S$, which counts the states. Computation of the specific heat at low temperature depends sensitively on correct counting of states, as was seen in~\cite{km} for the anti-ferromagnetic case. Note that from the counting rules of~\cite{adm}, and from the equations~(\ref{bae+}), the number of states near $P=0$ increases as the number of excitations increases. This causes the central charge to be larger than 1/2, the value we would expect if the excitations $\rho_p^+$ were fermions. \section{Anti-ferromagnetic chain} \setcounter{equation}{0} To find the low temperature behavior of the anti-ferromagnetic chain, we rewrite~(\ref{lbae}) in terms of the $\pm2s,ns$ densities, which we know to be the order one excitations for this hamiltonian~\cite{adm,adm2,km}.
The equations~(\ref{lbae}) become \begin{eqnarray} \rho_t^{\pm2s}(\lambda) &=& \frac{3}{\pi}\frac{1}{\sqrt2\cosh3\lambda\mp1} -\overline{K}_1*(\rho_p^{2s}+\rho_p^{-2s})- \overline{K}_2*\rho_p^{ns},\nonumber\\ \rho_t^{ns}(\lambda)&=&\frac{3}{\pi\cosh3\lambda} - \overline{K}_2*(\rho_p^{2s}+\rho_p^{-2s})-2\overline{K}_1*\rho_p^{ns}, \label{bae-} \end{eqnarray} where the kernels are: \begin{equation} \overline{K}_1(\lambda)=\frac{3}{2\pi\cosh3\lambda},\qquad \overline{K}_2(\lambda)=\frac{6\cosh3\lambda}{\sqrt2\pi\cosh6\lambda}. \label{kernels2} \end{equation} Note that, from these equations, in the antiferromagnetic case the density of available states always diminishes with increasing particle densities. This was seen in~\cite{adm,km}, where the number of available states decreases as a function of $m_{2s},m_{-2s},m_{ns}$. The entropy in terms of these densities looks the same as~(\ref{ent}), but now we sum over the three independent densities $\alpha=\pm2s,ns$. We also express $E$ in terms of the $\pm2s,ns$ densities. 
Minimizing the quantity~(\ref{fe}) with respect to the three particle densities $\rho_p^{2s},\rho_p^{-2s},\rho_p^{ns}$ now gives the free energy in terms of these densities: \begin{equation} f=\tilde{e}_0+ T\int_{-\infty}^{\infty}d\lambda\Bigl\{ \frac{3\ln(1+e^{-\epsilon^{2s}(\lambda)/T})}{\pi(\sqrt2\cosh3\lambda-1)} +\frac{3\ln(1+e^{-\epsilon^{-2s}(\lambda)/T})}{\pi(\sqrt2\cosh3\lambda+1)} +\frac{3\ln(1+e^{-\epsilon^{ns}(\lambda)/T})}{\pi\cosh3\lambda}\Bigr\}, \label{free energy-} \end{equation} where \begin{equation} \tilde{e}_0=e_0+\frac{18}{\pi^2}\int_{-\infty}^{\infty}d\lambda\frac{1}{\cosh6\lambda( \sqrt2\cosh3\lambda-1)} = 3-\frac{8}{3\sqrt3}+\frac{2}{\pi} =2.097\ldots, \end{equation} and $\epsilon^\alpha$ satisfy the integral equations \begin{eqnarray} \epsilon^{\pm2s}(\lambda)&=& \frac{3}{\sqrt2\cosh3\lambda\mp1} + T\Bigl\{\overline{K}_1*\ln\bigl[(1+e^{-\epsilon^{2s}/T})(1+e^{-\epsilon^{-2s}/T})\bigr] +\overline{K}_2*\ln(1+e^{-\epsilon^{ns}/T}) \Bigr\},\nonumber \\ \epsilon^{ns}(\lambda) &=& \frac{3}{\cosh3\lambda} + T\Bigl\{ \overline{K}_2*\ln\Bigl[(1+e^{-\epsilon^{2s}/T})(1+e^{-\epsilon^{-2s}/T})\Bigr] +2\overline{K}_1*\ln(1+e^{-\epsilon^{ns}/T}) \Bigr\}. \label{nie-} \end{eqnarray} At $T=0$, we see from~(\ref{nie-}) that \begin{equation} \epsilon_0^{\pm2s} = \frac{3}{\sqrt2\cosh3\lambda\mp1},\qquad \epsilon_0^{ns} = \frac{3}{\cosh3\lambda}, \label{ooe-} \end{equation} and from~(\ref{free energy-}) the free energy is $f=\tilde e_0$. These are the order one excitations and ground state energy for the anti-ferromagnetic hamiltonian found in~\cite{baxter10,adm2}. Again we compute the low temperature limit of the entropy, but now we rescale the integral equations by changing variables to $\lambda\rightarrow\lambda-\frac{1}{3}\ln T$.
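The closed-form constants for the two ground state energies can be checked against the quoted decimals (a trivial arithmetic check, not from the original):

```python
import math

# ferromagnetic ground state energy per site: e_0 = -4/pi - 8/(3 sqrt 3)
e0 = -4 / math.pi - 8 / (3 * math.sqrt(3))
assert abs(e0 + 2.81284) < 1e-4

# anti-ferromagnetic value: e~_0 = 3 - 8/(3 sqrt 3) + 2/pi
e0_tilde = 3 - 8 / (3 * math.sqrt(3)) + 2 / math.pi
assert abs(e0_tilde - 2.097) < 1e-3
```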
Defining $\phi^\beta(\lambda)=\epsilon^\beta(\lambda-\frac{1}{3}\ln T)/T$ we have \begin{eqnarray} \phi^{2s}=\phi^{-2s}&\simeq& \frac{6}{\sqrt2} e^{-3\lambda} + \overline{K}_1 *\ln\Bigl[(1+e^{-\phi^{2s}})(1+e^{-\phi^{-2s}})\Bigr] +\overline{K}_2 * \ln(1+e^{-\phi^{ns}})\nonumber\\ \phi^{ns}&\simeq& 6 e^{-3\lambda} + \overline{K}_2 *\ln\Bigl[(1+e^{-\phi^{2s}})(1+e^{-\phi^{-2s}})\Bigr] +2\overline{K}_1 * \ln(1+e^{-\phi^{ns}}), \label{phi-} \end{eqnarray} so that $2s$ and $-2s$ are symmetric in this limit, which was a feature seen in~\cite{km}. Again, differentiating~(\ref{phi-}) with respect to $\lambda$ and comparing to the density equations~(\ref{bae-}) rescaled as $\lambda\rightarrow\lambda -\frac{1}{3}\ln T$, we see that \begin{equation} \tilde\rho_p^{\beta} = -\frac{T}{3\pi}\frac{d\phi^\beta}{d\lambda} \frac{1}{1+e^{\phi^\beta}},\ \ \ \beta=2s,-2s,ns. \end{equation} The entropy is calculated as in~(\ref{asyment}). From~(\ref{phi-}) we find the limits $\phi^\alpha(\pm\infty)$ \begin{eqnarray} \phi^{2s}(-\infty)&=&\phi^{-2s}(-\infty)=\phi^{ns}(-\infty)=\infty,\nonumber\\ \phi^{-2s}(\infty)&=&\phi^{2s}(\infty) = \ln 2,\qquad \phi^{ns}(\infty)=\ln 3. \end{eqnarray} The entropy is \begin{equation} S\simeq -\frac{2 T}{3\pi}\Bigl[ 2\int_{\infty}^{\ln2}g(\phi)d\phi + \int_{\infty}^{\ln3}g(\phi)d\phi \Bigr], \end{equation} which, using the dilogarithmic identities in the appendix, gives \begin{equation} S\simeq \frac{2\pi T}{9}. \label{ent-} \end{equation} This, with $v_F=3/2$~\cite{adm2}, gives a central charge $c=1$, which is the central charge of the conformal limit of the model~\cite{park-widom,baz-resh}. Again, this verifies that the limits~(\ref{cft limit}) and~(\ref{tba limit}) commute, and there are no additional length scales. We see that the fact that $\rho_t^\alpha$ tends to decrease with increasing $\rho_p^\alpha$ causes the central charge to be smaller than the value 3/2 one would expect for pure fermions.
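The same kind of numerical check as in the ferromagnetic case can be run here (our own sketch; SciPy assumed, infinities truncated): the $k=0$ kernel values of~(\ref{kernels2}), the $\lambda\rightarrow\infty$ fixed point, and the entropy coefficient.

```python
import math
from scipy.integrate import quad

ln, exp = math.log, math.exp

# k = 0 values of the kernels (kernels2): Kbar1_hat(0) = 1/2, Kbar2_hat(0) = 1
assert abs(quad(lambda x: 3 / (2 * math.pi * math.cosh(3 * x)), -30, 30)[0] - 0.5) < 1e-8
assert abs(quad(lambda x: 6 * math.cosh(3 * x) / (math.sqrt(2) * math.pi * math.cosh(6 * x)),
               -30, 30)[0] - 1.0) < 1e-8

# lambda -> +infinity fixed point of (phi-): phi^{2s} = phi^{-2s} = ln 2, phi^{ns} = ln 3
p, ns = ln(2), ln(3)
assert abs(p - 0.5 * ln((1 + exp(-p)) ** 2) - ln(1 + exp(-ns))) < 1e-12
assert abs(ns - ln((1 + exp(-p)) ** 2) - ln(1 + exp(-ns))) < 1e-12

def g(phi):
    return ln(1 + exp(phi)) / (1 + exp(phi)) + ln(1 + exp(-phi)) / (1 + exp(-phi))

# entropy coefficient and central charge (S = pi c T / (3 v_F), v_F = 3/2)
S_over_T = (2 / (3 * math.pi)) * (2 * quad(g, ln(2), 40)[0] + quad(g, ln(3), 40)[0])
assert abs(S_over_T - 2 * math.pi / 9) < 1e-6
assert abs(3 * 1.5 * S_over_T / math.pi - 1.0) < 1e-6  # c = 1
```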
\section{$\bf Q=\pm1$} In the thermodynamic limit, quantities which are not of order $M$ are irrelevant to the calculation. Therefore, we do not expect the value of $Q$ to affect the thermodynamic equations. This is indeed the case. In~\cite{adm}, the counting rules for $Q=\pm1$ were found. It was shown that the counting rules depended on the value of the numbers $m_{++}$ and $m_{-+}$, where \begin{equation} m_{++}-m_{-+} = 0, \pm1. \label{diff} \end{equation} Again we note that this difference is not of order $M$ and we do not expect it to change the thermodynamic equations. The sum rules~(\ref{particle numbers}) for this sector are changed to: \begin{eqnarray} m_+ &=& 2 m_{ns} + 4 m_{-2s} + m_{-+} + m_{++} \nonumber \\ M-1 &=& m_{2s} + 2 m_{ns} + 3 m_{-2s} + m_{-+} + m_{++}. \end{eqnarray} This is only different from the $Q=0$ sector by a term of order 1, due to equation~(\ref{diff}) and the fact that $m_{-+}=m_-$. Therefore to order $M$, the sum rules are identical to~(\ref{particle numbers}). For $m_{++}-m_{-+}=\pm1$, the completeness rules for $Q=1$ are the same as for $Q=0$. For $m_{++}-m_{-+}=0$, there is spectrum doubling: The integers $I_j^{2s}$ are shifted from those of $+$ by $\pm1/2$, and those of $-2s$ are shifted from $I_j^-$ in the same way, both signs giving the same energy level. The shift does not affect the thermodynamic limit of equations~(\ref{logbae}). The spectrum doubling gives rise to an additive term of order one in the entropy (which counts the number of states). Since the entropy is of order $M$, again this term is not relevant in the thermodynamic limit. We conclude therefore that this sector is identical to $Q=0$. In~\cite{km}, the difference in counting rules for the sectors $Q=\pm1$ gave rise to different branching functions in the modular invariant partition function from those of $Q=0$. However, each term in the modular invariant partition function gives the same specific heat, due to modular invariance.
The specific heat is found from the limit $q\rightarrow1$ of the partition function of~\cite{km}. However, the partition function is invariant under modular transformations: if $q=\exp(2\pi i\tau)$, the transformation $\tau\rightarrow-1/\tau$ leaves it invariant, so the specific heat can equally be obtained from the $q\rightarrow0$ limit of the partition function. The same transformation sends each branching function into a linear combination of all the branching functions. Therefore, each branching function has the same $q\rightarrow1$ behavior. In the thermodynamic calculation we do not see this difference between the sectors $Q=0$ and $Q=\pm1$. \section{Discussion} \setcounter{equation}{0} In~\cite{km}, the order one excitations~(\ref{ooe-}) were used to compute the partition function of the anti-ferromagnetic chain in the limit~(\ref{cft limit}). This is identical to the modular invariant partition function of the conformal limit of the model~\cite{pearceun,gq} and so gives the same specific heat as~(\ref{ent-}). For the computation in~\cite{km}, it was necessary to carefully consider the information about the way the maximum integer $I_{max}^\alpha$ of the finite size system in~(\ref{logbae}) changed with the number of particles in the system. This is because those integers correspond to the energies close to zero, $e(P)\sim0$, or in the language used here, the large $\lambda$ behavior of the energies, which is the region which contributes to the specific heat. The maximum integers decrease, at finite $M$, as more particles are added to the system. In~\cite{km} this phenomenon was referred to as an {\em infrared anomaly}. This represents counting rules for the excitations beyond the fermionic exclusion rule, and is the phenomenon responsible for the central charge being different from that of fermions.
In the case of the anti-ferromagnetic chain discussed in~\cite{km}, the infrared anomaly was repulsive: fewer states were available as the number of excitations was increased than would be available for fermions. In the case of the ferromagnetic chain, both repulsive and attractive infrared anomalies are present, but the total infrared anomaly is attractive. In the thermodynamic limit, we discard the information about the maximum integers $I_j^\alpha$. Nevertheless, the integral equations~(\ref{nie-}) contain the information about the way the density of available states depends on the particle density, represented by the density equations~(\ref{bae-}). This information enables correct counting of states, using the entropy. We see that this information gives the same specific heat as the counting of ref~\cite{km}, but, as we saw in section 5, does not show the difference between the different $Q$ sectors. When computing the low temperature specific heat, the region of $\lambda$ which contributes to the free energy as $T\rightarrow0$ is the $\lambda\sim \frac{1}{6}\ln T$ limit in the ferromagnetic case, and the $\lambda\sim\frac{1}{3}\ln T$ limit in the anti-ferromagnetic case. This limit is the $P\rightarrow0$ limit~\cite{adm2}, which corresponds to the lowest lying order one excitations in~\cite{km}. The careful counting of states there is paralleled in the computation here by the rescaling of the integral equations and the density equations in the limit $T\rightarrow0$ and $\lambda\rightarrow\infty$. Finally, we note that these computations are related to the thermodynamic Bethe ansatz method of~\cite{zamolodchikov}, a point which is discussed in some detail in~\cite{kkmm}. \section*{Acknowledgements} We are pleased to acknowledge useful discussion with Dr. G. Albertini, Dr. S. Dasmahapatra, and E. D. Williams. We are particularly grateful to Prof. B. M. McCoy for his many suggestions and insights. We'd also like to thank Prof. P. A. Pearce and Prof. V. V. 
Bazhanov for their comments. This work was partially supported by the National Science Foundation under grant DMR-9106648. \begin{appendix} \section{Appendix: Expression of entropy via dilogarithms} \setcounter{equation}{0} The Rogers dilogarithm is defined as~\cite{rogers} \begin{equation} L(x) = -\frac{1}{2} \int_0^x df \left( \frac{\ln(1-f)}{f} + \frac{\ln(f)}{1-f} \right). \end{equation} Making the change of variables $f=1/(1+e^\phi)$ in the expression for the entropy~(\ref{asyment}), the entropy is expressed in terms of $L(x)$~\cite{baz-resh}: \begin{equation} S\simeq -\frac{2 T}{3 \pi} \left( 2 L(\frac{1}{3}) + L(\frac{1}{4}) - L(1) - 2 L(\frac{3-\sqrt 5}{2})\right). \label{id1} \end{equation} We use the identity on Rogers dilogarithms~\cite{kirillov}: \begin{equation} \sum_{k=2}^{n-2} L\left(\frac{\sin^2(\pi/n)}{\sin^2(k\pi/n)}\right) = \frac{2(n-3)}{n} L(1), \end{equation} where $L(1) = \pi^2/6$~\cite{lewin}. Using this identity with $n=6$, we find that \begin{equation} 2 L(1/3) + L(1/4) = L(1), \label{id2} \end{equation} and, with $n=5$, we see that~\cite{lewin}: \begin{equation} 2 L\left(\frac{3-\sqrt5}{2}\right) = 2 \frac{\pi^2}{15}. \end{equation} Therefore equation~(\ref{id1}) gives \begin{equation} S\simeq \frac{4 T}{3\pi} L\left(\frac{3-\sqrt 5}{2}\right) = \frac{4 \pi T} {45}. \end{equation} In the anti-ferromagnetic case, we again use~(\ref{id2}) with $n=6$, and the entropy is: \begin{equation} S\simeq \frac{4 T}{3\pi} (2 L(1/3) + L(1/4)) = \frac{4 T}{3\pi}L(1) = \frac{2\pi T}{9}. \end{equation} \end{appendix}
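The identities above are easy to verify numerically, using the closed form $L(x)=\mathrm{Li}_2(x)+\frac12\ln x\ln(1-x)$, which follows from the integral definition (a check of our own; SciPy's `spence` satisfies $\mathrm{spence}(z)=\mathrm{Li}_2(1-z)$):

```python
import math
from scipy.special import spence

def L(x):
    # Rogers dilogarithm: L(x) = Li2(x) + (1/2) ln(x) ln(1-x)
    if x == 0.0:
        return 0.0
    if x == 1.0:
        return math.pi**2 / 6
    return float(spence(1 - x)) + 0.5 * math.log(x) * math.log(1 - x)

# sum identity for n = 5 and n = 6
for n in (5, 6):
    s = sum(L(math.sin(math.pi / n)**2 / math.sin(k * math.pi / n)**2)
            for k in range(2, n - 1))
    assert abs(s - 2 * (n - 3) / n * L(1.0)) < 1e-10

assert abs(2 * L(1 / 3) + L(1 / 4) - L(1.0)) < 1e-10                      # n = 6 case
assert abs(2 * L((3 - math.sqrt(5)) / 2) - 2 * math.pi**2 / 15) < 1e-10   # n = 5 case
```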
\section{Introduction} Special Lagrangian submanifolds in Calabi-Yau manifolds were introduced as special classes of minimal submanifolds in \cite{HL}. Since their importance was pointed out in the context of mirror symmetry in \cite{SYZ}, they have been intensively studied. Lagrangian mean curvature flows are expected to provide a fundamental tool for constructing special Lagrangian submanifolds. In \cite{TY} Thomas-Yau proposed a conjecture, which was recently reformulated by Joyce \cite{J}, that the long-time existence of Lagrangian mean curvature flows is closely related to the stability of Lagrangian submanifolds. This conjecture is a central problem in this area. Main difficulties in studying mean curvature flows come from singularities of the flow. In \cite{H} Huisken showed that, if we rescale the mean curvature flow at a type I singularity, the blow-up limit is a self-shrinker. It is also known that a translating soliton appears as a certain blow-up limit at another type of singularity. Therefore, it is important to study self-shrinkers or translating solitons, which can be considered as local models of singularities of mean curvature flows. In the case of Lagrangian mean curvature flows, there are some examples of self-shrinkers or translating solitons as follows. Anciaux \cite{A}, Lee-Wang \cite{LW1,LW2} constructed examples of Lagrangian self-shrinkers and self-expanders. Later, Joyce-Lee-Tsui \cite{JLT} constructed another type of Lagrangian self-shrinkers and self-expanders as well as Lagrangian translating solitons. Moreover, Castro-Lerma \cite{CL} also constructed another type of Lagrangian translating solitons in $\mathbb{C}^2$. Recently, in \cite{Y}, Yamamoto pointed out that some of these constructions can be interpreted in terms of moment maps for Hamiltonian torus actions on toric Calabi-Yau manifolds and he also constructed examples of generalized Lagrangian mean curvature flows in the sense of \cite{Be} in almost Calabi-Yau manifolds.
In this paper, we construct various examples of Lagrangian mean curvature flows, using moment maps for actions of abelian Lie groups on Calabi-Yau manifolds. Since some parts of Yamamoto's construction rely on toric geometry, we remove this dependence so that our construction can be applied to more general cases. Roughly speaking, we prove the following. See Theorem \ref{construction} for the precise statement. \begin{theorem} \label{construction0} Let $M$ be a Calabi-Yau manifold and $L$ a special Lagrangian submanifold. Suppose there is an isometric and Hamiltonian action of an abelian Lie group $H$ with a moment map $\mu_H \colon M \to \mathfrak{h}^*$. Suppose also that the $H$-orbit through $p \in L$ intersects with $L$ orthogonally for each $p \in L$. Fix $c_0 \in \mathfrak{h}^*$ and set $V_c=\mu_H^{-1}(c) \cap L$ for $c \in \mathfrak{h}^*$. Then there exist $a_H \in \mathfrak{h}^*$ and a vector field $\chi$, which is written in terms of $a_H$ explicitly, on $L$ such that the following holds: \\ $(1)$ Let $\gamma_p \colon [0,T) \to L$ be the integral curve of the vector field $\chi$ with $\gamma_p(0)=p$. If $p \in V_{c_0}$, then $\gamma_p(t) \in V_{c_t}$ holds, where $c_t= c_0 - t a_H \in \mathfrak{h}^*$. \\ $(2)$ Define a family of maps $\{ F_t \colon V_{c_0} \times H \to M \}_{t \in [0,T)}$ by $F_t(p,h)=\gamma_p(t)h$. If it exists, then it is a Lagrangian mean curvature flow. \end{theorem} Using the above theorem, we construct various examples of Lagrangian mean curvature flows. Our basic examples are Lagrangian self-shrinkers, which were already found in \cite{LW2}, as well as Lagrangian translating solitons in the Euclidean spaces, which are higher dimensional generalizations of those given in \cite{CL}. Moreover, our construction can be applied to construct Lagrangian mean curvature flows in non-flat Calabi-Yau manifolds. 4-dimensional Ricci-flat ALE spaces of $A_n$-type are hyperK\"ahler manifolds, on which $S^1$ acts, preserving their hyperK\"ahler structures.
These $S^1$-actions extend to $T^2$-actions, which preserve only their K\"ahler structures. We use these actions not only for constructing Lagrangian mean curvature flows in these spaces, but also for investigating singularities of the flows. As a result, we see that the singularities are of type I, and we determine the blow-up limits at the singularities. The condition in Theorem \ref{construction0}, that the $H$-orbit through $p$ intersects $L$ orthogonally for each $p \in L$, has already appeared in Chapter 9 of \cite{J2}, where Joyce constructed examples of special Lagrangian submanifolds of $\mathbb{C}^m$ by using moment maps. His construction is the case $a_H=0$ of Theorem \ref{construction0}. Thus our theorem can be considered as a generalization of Joyce's construction of special Lagrangian submanifolds to Lagrangian mean curvature flows. The author would like to thank Dominic Joyce for informing him of this. The contents of this paper are as follows. In Section 2, we state and prove our general construction. In Section 3, we apply the construction to the Euclidean spaces to obtain Lagrangian self-shrinkers, self-expanders, and translating solitons. In Section 4, we apply our construction to hyperK\"ahler manifolds which admit isometric, $I_1$-antiholomorphic and $I_2$-holomorphic involutions. As examples of this construction, we describe Lagrangian mean curvature flows in 4-dimensional Ricci-flat ALE spaces of $A_n$-type in Section 5. \section{Constructions of Lagrangian mean curvature flows}\label{BC} Let $(M,I)$ be a complex manifold. Here, $I$ is an integrable almost complex structure, that is, an automorphism of the tangent bundle $TM$ satisfying $I^2=-1$ and the integrability condition. A K\"ahler form $\omega$ on $(M,I)$ is an $I$-invariant symplectic form which induces a Riemannian metric via $g_p(u,v)=\omega_p (u,I_p v)$ for $p \in M$ and $u,v \in T_pM$.
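For example, on $\mathbb{C}^n$ with coordinates $z_j=x_j+\sqrt{-1}y_j$, the standard K\"ahler form is
\begin{align*}
\omega_0=\frac{\sqrt{-1}}{2}\sum_{j=1}^n dz_j \wedge d\overline{z}_j=\sum_{j=1}^n dx_j \wedge dy_j,
\end{align*}
and, since the standard complex structure satisfies $I \frac{\partial}{\partial x_j}=\frac{\partial}{\partial y_j}$, the induced metric is the Euclidean metric $\sum_{j=1}^n (dx_j^2+dy_j^2)$. This is the convention used for the Euclidean examples in Section 3.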
\begin{definition}\label{def-CY} {\rm A} Calabi-Yau $n$-fold {\rm is a quadruple $(M,I,\omega, \Omega)$ such that $(M,I)$ is an $n$-dimensional complex manifold equipped with a K\"ahler form $\omega$ and a holomorphic volume form $\Omega$ which satisfy the following relation:} \begin{align}\label{CY} \frac{\omega^n}{n!}=(-1)^{\frac{n(n-1)}{2}} (\frac{\sqrt{-1}}{2})^n \Omega \wedge \overline{\Omega}. \end{align} \end{definition} It is well known that Calabi-Yau manifolds are Ricci-flat. Moreover, if $L$ is an oriented Lagrangian submanifold of a Calabi-Yau manifold $(M,I,\omega, \Omega)$ with the K\"ahler metric $g$, then there exists a function $\theta \colon L \to \mathbb{R}/2\pi \mathbb{Z}$, which is called the {\it Lagrangian angle}, satisfying \begin{align*} \iota^* \Omega = e^{\sqrt{-1}\theta}\mathrm{vol}_{\iota^* g}, \end{align*} where $\iota \colon L \to M$ is the embedding, and $\mathrm{vol}_{\iota^* g}$ is the volume form of $L$ with respect to the induced metric $\iota^* g$. Moreover, the mean curvature vector $\mathcal{H}_p$ at $p\in L$ is given by \begin{align*} \mathcal{H}_p = I_{\iota(p)} (\iota_{*p} (\mathrm{grad}_{\iota^* g}\theta)_p ) \in (T_{\iota(p)} L)^\perp, \end{align*} where $\mathrm{grad}_{\iota^* g}\theta$ is the gradient of the function $\theta$ with respect to the induced metric $\iota^* g$. Then, special Lagrangian submanifolds are defined as follows. \begin{definition} {\rm Let $(M,I,\omega, \Omega)$ be a Calabi-Yau manifold. An oriented Lagrangian submanifold $L$ of $M$ is a} special Lagrangian submanifold with the phase $\theta_0 \in \mathbb{R}/2\pi \mathbb{Z}$ {\rm if its Lagrangian angle $\theta \colon L \to \mathbb{R}/2\pi \mathbb{Z}$ is the constant function $\theta = \theta_0$.} \end{definition} Next, we fix our notation on Hamiltonian group actions. In this paper, a Lie group $G$ acts on a manifold $M$ from the right. We denote the right translation of $g \in G$ by $R_g \colon M \to M$. 
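The model case of the above definitions is the following standard example: for $L=\mathbb{R}^n \subset \mathbb{C}^n$ with the standard K\"ahler form and $\Omega = dz_1 \wedge \dots \wedge dz_n$, the embedding $\iota \colon \mathbb{R}^n \to \mathbb{C}^n$ satisfies
\begin{align*}
\iota^* \Omega = dx_1 \wedge \dots \wedge dx_n = \mathrm{vol}_{\iota^* g},
\end{align*}
so the Lagrangian angle is $\theta \equiv 0$ and $\mathbb{R}^n$ is a special Lagrangian submanifold with the phase $0$. This example is the starting point of the constructions in Section 3.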
The exponential map is denoted by $\mathrm{Exp}_G \colon \mathfrak{g} \to G$. The Lie group $G$ acts on $\mathfrak{g}^*$ by the coadjoint action. Suppose that a Lie group $G$ with the Lie algebra $\mathfrak{g}$ acts on a symplectic manifold $(M, \omega)$. A $G$-equivariant map $\mu_G \colon M \to \mathfrak{g}^*$ is called a {\it moment map} if $$-i(\xi^\#) \omega = d \langle \mu_G(\cdot), \xi \rangle$$ holds for each $\xi \in \mathfrak{g}$, where $\xi^\#$ is the vector field on $M$ generated by $\xi$. Then we have the following theorem. \begin{theorem}\label{construction} Let $(M,I,\omega, \Omega)$ be a connected Calabi-Yau $n$-fold and $g$ the K\"ahler metric. Let $L$ be a special Lagrangian submanifold with the phase $\theta_0 \in \mathbb{R}/2\pi \mathbb{Z}$. Suppose that an abelian Lie group $H$ acts on $(M, I, \omega)$, preserving $I$ and $\omega$, with a moment map $\mu_H \colon M \to \mathfrak{h}^*$. Suppose also that $\xi^\#_p \ne 0 \in T_pM$ and that $\xi^\#_p \perp T_pL$ for each $p \in L$ and $\xi \in \mathfrak{h} \setminus \{ 0 \}$. Then the following holds. \\ $(1)$ There exists $a_H \in \mathfrak{h}^*$ such that $(R_{\mathrm{Exp}_H \xi})^* \Omega = e^{\sqrt{-1} \langle a_H,\xi \rangle}\Omega$ on $M$ for each $\xi \in \mathfrak{h}$. \\ $(2)$ Set $V_c=\mu_H^{-1}(c) \cap L$ for $c \in \mathfrak{h}^*$. Then $V_c$ is an orientable submanifold of $M$ of dimension $n - \dim H$ if $V_c$ is not empty. Moreover, $T_p V_c \perp I_p \xi^\#_p $ holds for each $p \in V_c$ and $\xi \in \mathfrak{h}$. \\ $(3)$ The map $\phi_c \colon V_c \times H \to M$, which is defined by $\phi_c(p,h)=ph$, is a Lagrangian immersion. Moreover, $V_c \times H$ has a canonical orientation so that $(\phi_c^* \Omega)_{(p, \mathrm{Exp}_H \xi)} = e^{\sqrt{-1} \theta_c} \mathrm{vol}_{\phi_c^* g}$ holds, where $\theta_c(p, \mathrm{Exp}_H \xi) = \langle a_H, \xi \rangle + \theta_0 - \frac{\pi \dim H}{2}$.
\\ $(4)$ Define a map $\tilde{\xi} \colon L \to \mathfrak{h}$ by $g_p((\tilde{\xi}(p))^\#_p, \eta^\#_p)= \langle a_H, \eta \rangle$ for $p \in L$ and $\eta \in \mathfrak{h}$. Define the vector field $\chi$ on $L$ by $\chi_p=I_p\tilde{\xi}(p)^\#_p \in T_pL$ for $p \in L$. Then the mean curvature vector $\mathcal{H}^c \in \Gamma(\phi_c^*TM)$ of the map $\phi_c \colon V_c \times H \to M$ is given by $\mathcal{H}^c_{(p,h)}=R_{h*p}\chi_p \in T_{\phi_c(p,h)}M$. \\ $(5)$ Fix $c_0 \in \mathfrak{h}^*$. Suppose that there exists $T >0$ such that, for each $p \in \mu_H^{-1}(c_0) \cap L$, the integral curve $\gamma_p \colon [0,T) \to L$ of the vector field $\chi$ with $\gamma_p(0)=p$ exists. Then $\gamma_p(t) \in V_{c_t}$ holds for any $p \in V_{c_0}$ and $t \in [0,T)$, where $c_t= c_0 - t a_H \in \mathfrak{h}^*$. Moreover, the family of maps $\{ F_t \colon V_{c_0} \times H \to M \}_{t \in [0,T)}$, which is defined by $F_t(p,h)=\gamma_p(t)h$, is a Lagrangian mean curvature flow. Namely, $\frac{\partial F_t}{\partial t}(p,h)=\mathcal{H}^{c_t}_{(\gamma_p(t),h)}$ holds for $t \in [0,T)$. \end{theorem} {\it Proof.} $(1)$ Fix any $h \in H$. Since $R_h^*\Omega$ is a holomorphic $(n,0)$-form, there exists a holomorphic function $f_h \colon M \to \mathbb{C}$ such that $R_h^*\Omega = f_h \Omega$. By the relation (\ref{CY}), we have $|f_h(p)|=1$ for each $p \in M$. Since $M$ is connected and a holomorphic function of constant modulus is locally constant, we see that $f_h$ is constant, that is, $f_h(p)=c_h$ for each $p \in M$. Moreover, since $R_{h_1 h_2}^* \Omega=R_{h_2}^*(R_{h_1}^*\Omega)$, we have $c_{h_1h_2}=c_{h_1}c_{h_2}$. So, there exists $a_H \in \mathfrak{h}^*$ such that $c_{\mathrm{Exp}_H \xi}=e^{\sqrt{-1} \langle a_H,\xi \rangle}$ for each $\xi \in \mathfrak{h}$. \\ $(2)$ It is easy to see that $\mathrm{grad}_g \langle \mu_H(\cdot), \xi \rangle = - I \xi^\#$ on $M$.
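Indeed, for each $p \in M$, $v \in T_pM$, and $\xi \in \mathfrak{h}$, the defining equation of the moment map and the compatibility $g_p(u,w)=\omega_p(u,I_pw)$ give
\begin{align*}
g_p((\mathrm{grad}_g \langle \mu_H(\cdot), \xi \rangle)_p, v)
=(d \langle \mu_H(\cdot), \xi \rangle)_p(v)
=-\omega_p(\xi^\#_p, v)
=g_p(\xi^\#_p, I_p v)
=-g_p(I_p \xi^\#_p, v),
\end{align*}
which shows the claimed identity.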
Since $L$ is a Lagrangian submanifold of a K\"ahler manifold $(M,I,\omega)$, the complex structure $I_p \colon T_pM \to T_pM$ induces an isomorphism from $T_pL$ to the orthogonal complement $(T_pL)^\perp$ for each $p \in L$. Moreover, since $\xi^\#_p \ne 0$ and $\xi^\#_p \perp T_pL$ for $p \in L$ by the assumption, we have $(\mathrm{grad}_g \langle \mu_H(\cdot), \xi \rangle)_p \in T_pL \setminus \{ 0 \}$ for $p \in L$ and $\xi \in \mathfrak{h} \setminus \{ 0 \}$. Therefore, if we set $\nu= \mu_H|_L \colon L \to \mathfrak{h}^*$, we have $(d \langle \nu, \xi \rangle)_p \ne 0 \in T^*_p L$ for $p \in L$ and $\xi \in \mathfrak{h} \setminus \{ 0 \}$. Since $\langle (d \nu)_p, \xi \rangle = (d \langle \nu, \xi \rangle)_p \ne 0$ for each $\xi \in \mathfrak{h} \setminus \{ 0 \}$, we see that $(d \nu)_p \colon T_pL \to \mathfrak{h}^*$ is surjective for each $p \in L$. That is, every point in $L$ is regular for $\nu$. This implies that, for any $c \in \mathfrak{h}^*$, the level set $\nu^{-1}(c)= \mu_H^{-1}(c) \cap L$ is a submanifold of dimension $n- \dim H$ if $\nu^{-1}(c)$ is not empty. For each $v \in T_p(\mu_H^{-1}(c) \cap L)$ and $\xi \in \mathfrak{h}$, we have $$0= \langle (d\mu_H)_p(v), \xi \rangle = - \omega_p(\xi^\#_p, v)= -g_p(I_p\xi^\#_p, v). $$ Thus we see $T_p(\mu_H^{-1}(c) \cap L) \perp I_p \xi^\#_p$ for $p \in \mu_H^{-1}(c) \cap L$ and $\xi \in \mathfrak{h}$. Since $L$ is oriented, $T_pL$ is an oriented vector space for each $p \in L$. Define an isomorphism \begin{equation}\label{ori-isom} \psi_p \colon T_p(\mu_H^{-1}(c) \cap L) \oplus \mathfrak{h} \to T_pL \end{equation} by $\psi_p(v,\xi)=v + I_p\xi^\#_p$. If we fix an orientation on $\mathfrak{h}$, then we can define an orientation on $T_p(\mu_H^{-1}(c) \cap L)$ so that $\psi_p$ is orientation preserving. Thus we see that $\mu_H^{-1}(c) \cap L$ is orientable. 
\\ $(3)$ Define an orientation of $T_{(p,h)}(V_c \times H) \cong T_pV_c \oplus \mathfrak{h}$ for each $(p,h) \in V_c \times H$ so that the map $\psi_p$ in (\ref{ori-isom}) is orientation preserving. Thus $V_c \times H$ has the canonical orientation. Moreover, we note that, for each $(p,h) \in V_c \times H$ and $(v,\xi) \in T_pV_c \oplus \mathfrak{h} \cong T_{(p,h)}(V_c \times H)$, the following holds: \begin{align*} (\phi_c)_{*(p,h)}(v,\xi) = (R_h)_{*p} (v + \xi^\#_p) \in T_{\phi_c(p,h)}M. \end{align*} Firstly, we show that $\phi_c$ is an immersion. Suppose that $ (\phi_c)_{*(p,h)}(v,\xi) = 0$. Then we have $(R_h)_{*p} (v + \xi^\#_p ) =0 \in T_{ph}M$. Since $(R_h)_{*p} \colon T_p M \to T_{ph}M$ is an isomorphism, we have $v + \xi^\#_p =0 \in T_p M$. Since $v \perp \xi^\#_p$, we see that $v= \xi^\#_p= 0 \in T_p M$. Thus we see that $(v, \xi) = 0 \in T_pV_c \times \mathfrak{h}$, which implies that $\phi_c \colon V_c \times H \to M$ is an immersion. Secondly, we show $\phi_c^*\omega=0$. Since $R_h^* \omega=\omega$, we have \begin{align*} (\phi_c^* &\omega)_{(p,h)}((v_1, \xi_1), (v_2, \xi_2))\\ &= \omega_{\phi_c(p,h)}((\phi_c)_{*(p,h)}(v_1, \xi_1), (\phi_c)_{*(p,h)}(v_2, \xi_2)) \\ &=(R_h^* \omega)_p(v_1 + (\xi_1)^\#_p, v_2 + (\xi_2)^\#_p) \\ &= \omega_p(v_1,v_2)+\omega_p((\xi_1)^\#_p,(\xi_2)^\#_p) +\omega_p(v_1,(\xi_2)^\#_p)+\omega_p((\xi_1)^\#_p,v_2). \end{align*} The first term is zero, because $v_i \in T_p L$ for $i=1,2$ and $L$ is a Lagrangian submanifold. The second term is zero, because $H$ is abelian. For the third term, we have $$ \omega_p(v_1,(\xi_2)^\#_p) = (d \langle \mu_H(\cdot), \xi_2 \rangle)_p(v_1)=0, $$ because $v_1 \in T_pV_c$ and $\mu_H$ is constant on $V_c$. Similarly, the fourth term is zero. Thus we have $\phi_c^* \omega =0$. Thirdly, we compute $\phi_c^*\Omega$. Fix an arbitrary $(p,h) \in V_c \times H$ and take $\xi_h \in \mathfrak{h}$ such that $h = \mathrm{Exp}_H \xi_h$. 
Since $(\phi_c)_{*(p,h)}(v,0) = (R_h)_{*p}v \in T_{ph}M$ is perpendicular to $(\phi_c)_{*(p,h)}(0,\xi)= (R_h)_{*p}\xi^\#_p \in T_{ph}M$ for each $v \in T_pV_c$ and $\xi \in \mathfrak{h}$, there exist $v_1, \dots, v_{n-m} \in T_pV_c$ and $\xi_1, \dots, \xi_m \in \mathfrak{h}$, where $m= \dim H$, such that the collection $$(v_1, 0), \dots, (v_{n-m},0), (0, \xi_1), \dots, (0,\xi_m) \in T_{(p,h)}(V_c \times H)$$ is an oriented orthonormal basis of $T_{(p,h)}(V_c \times H)$ with respect to $(\phi_c^*g)_{(p,h)}$. Then, the collection $$v_1, \dots, v_{n-m}, I_p(\xi_1)^\#_p, \dots,I_p (\xi_m)^\#_p$$ is an oriented orthonormal basis of $T_p L$. Recall that $R_h^*\Omega=e^{\sqrt{-1}\langle a_H, \xi_h \rangle} \Omega$ and $\iota_L^* \Omega =e^{\sqrt{-1}\theta_0} \mathrm{vol}_{\iota_L^* g}$, where $\iota_L \colon L \to M$ is the embedding. Moreover, since $\xi^\# + \sqrt{-1} I \xi^\#$ is a $(0,1)$-vector field and $\Omega$ is an $(n,0)$-form on $(M,I)$, we have $$i(\xi^\#)\Omega=-\sqrt{-1} i( I \xi^\#)\Omega.$$ So we have \begin{align*} &(\phi_c^* \Omega)_{(p,h)}((v_1, 0), \dots, (v_{n-m},0), (0, \xi_1), \dots, (0,\xi_m))\\ &=\Omega_{ph}((R_h)_{*p} v_1, \dots, (R_h)_{*p} v_{n-m}, (R_h)_{*p}(\xi_1)^\#_p, \dots, (R_h)_{*p}(\xi_m)^\#_p) \\ &=(R_h^*\Omega)_p(v_1, \dots, v_{n-m}, (\xi_1)^\#_p, \dots, (\xi_m)^\#_p) \\ &=e^{\sqrt{-1}\langle a_H, \xi_h \rangle} \Omega_p(v_1, \dots, v_{n-m}, (\xi_1)^\#_p, \dots, (\xi_m)^\#_p) \\ &=e^{\sqrt{-1}\langle a_H, \xi_h \rangle}(-\sqrt{-1})^m \Omega_p(v_1, \dots,v_{n-m}, I_p(\xi_1)^\#_p, \dots,I_p (\xi_m)^\#_p). \end{align*} Since the collection $v_1, \dots,v_{n-m}, I_p(\xi_1)^\#_p, \dots,I_p (\xi_m)^\#_p$ is an oriented orthonormal basis of $T_pL$, we have \begin{align*} &(\phi_c^* \Omega)_{(p,h)}((v_1, 0), \dots, (v_{n-m},0), (0, \xi_1), \dots, (0,\xi_m)) \\ & \hspace*{60mm}= e^{\sqrt{-1}\langle a_H, \xi_h \rangle}(-\sqrt{-1})^m e^{\sqrt{-1}\theta_0}.
\end{align*} Since the collection $(v_1, 0), \dots, (v_{n-m},0), (0, \xi_1), \dots, (0,\xi_m)$ is an oriented orthonormal basis of $T_{(p,h)}(V_c \times H)$, we have \begin{align*} \phi_c^* \Omega = e^{\sqrt{-1}\langle a_H, \xi_h \rangle}(-\sqrt{-1})^m e^{\sqrt{-1}\theta_0} \mathrm{vol}_{\phi_c^* g}. \end{align*} $(4)$ Let $\mathrm{grad}_{\phi_c^*g}\theta_c$ be the gradient of $\theta_c$ with respect to the metric $\phi_c^*g$. Firstly, we show that $(\mathrm{grad}_{\phi_c^*g}\theta_c)_{(p,h)}=(0,\tilde{\xi}(p)) \in T_pV_c \times \mathfrak{h} \cong T_{(p,h)}(V_c \times H)$. For each $(v,\eta) \in T_pV_c \times \mathfrak{h} \cong T_{(p,h)}(V_c \times H)$ we have \begin{align*} (\phi_c^*g)_{(p,h)}((\mathrm{grad}_{\phi_c^*g}\theta_c)_{(p,h)}, (v,\eta)) &=(d \theta_c)_{(p,h)}(v,\eta) \\ &=\langle a_H, \eta \rangle \\ &=g_p((\tilde{\xi}(p))^\#_p, \eta^\#_p) \\ &=g_{ph}((R_h)_{*p}\tilde{\xi}(p)^\#_p, (R_h)_{*p}(v+\eta^\#_p))\\ &=g_{\phi_c (p,h)}((\phi_c)_{*(p,h)}(0,\tilde{\xi}(p)), (\phi_c)_{*(p,h)}(v,\eta))\\ &=(\phi_c^* g)_{(p,h)}((0,\tilde{\xi}(p)), (v,\eta)). \end{align*} Thus we see $(\mathrm{grad}_{\phi_c^*g}\theta_c)_{(p,h)}=(0,\tilde{\xi}(p))$. Therefore the mean curvature vector $\mathcal{H}^c$ is computed as follows. \begin{align*} \mathcal{H}^c_{(p,h)} &=I_{\phi_c(p,h)}(\phi_c)_{*(p,h)}(\mathrm{grad}_{\phi_c^*g}\theta_c)_{(p,h)} \\ &=I_{ph}(\phi_c)_{*(p,h)}(0,\tilde{\xi}(p)) \\ &=I_{ph}(R_h)_{*p}(\tilde{\xi}(p)^\#_p) \\ &=(R_h)_{*p}I_{p}(\tilde{\xi}(p)^\#_p) \\ &=(R_h)_{*p}\chi_p. \end{align*} $(5)$ Firstly, we show $\gamma_p(t) \in V_{c_t}=\mu_H^{-1}(c_t) \cap L$. Since $\gamma_p(t) \in L$ is obvious, it is enough to show $\gamma_p(t) \in \mu_H^{-1}(c_t)$. Let $\eta_1, \dots, \eta_m$ be a basis of $\mathfrak{h}$ and $\eta^1, \dots, \eta^m \in \mathfrak{h}^*$ the dual basis.
Since $\frac{d\gamma_p}{dt}(t)=\chi_{\gamma_p(t)}= I_{\gamma_p(t)}(\tilde{\xi}(\gamma_p(t))^\#_{\gamma_p(t)})$, we have \begin{align*} \frac{d}{dt} \mu_H(\gamma_p(t)) &=\frac{d}{dt} \sum_{i=1}^m \langle \mu_H(\gamma_p(t)),\eta_i \rangle \eta^i \\ &=- \sum_{i=1}^m \omega_{\gamma_p(t)}( (\eta_i)^\#_{\gamma_p(t)},\frac{d\gamma_p}{dt}(t)) \eta^i \\ &=- \sum_{i=1}^m \omega_{\gamma_p(t)}( (\eta_i)^\#_{\gamma_p(t)}, I_{\gamma_p(t)}(\tilde{\xi}(\gamma_p(t))^\#_{\gamma_p(t)})) \eta^i \\ &=- \sum_{i=1}^m g_{\gamma_p(t)}( (\eta_i)^\#_{\gamma_p(t)}, \tilde{\xi}(\gamma_p(t))^\#_{\gamma_p(t)}) \eta^i \\ &=- \sum_{i=1}^m \langle a_H, \eta_i \rangle \eta^i =-a_H. \end{align*} Therefore we have $\mu_H(\gamma_p(t))=\mu_H(\gamma_p(0))-ta_H=c_t$. Thus we see that the image of the map $F_t \colon V_{c_0} \times H \to M$ is contained in the image of the map $\phi_{c_t} \colon V_{c_t} \times H \to M$. Since both maps are immersions, the mean curvature vector of the immersion $F_t$ at $(p,h) \in V_{c_0} \times H$ is $\mathcal{H}^{c_t}_{(\gamma_p(t),h)} \in T_{\gamma_p(t)h}M$. On the other hand, we have $$ \frac{\partial F_t}{\partial t}(p,h) =\frac{\partial}{\partial t}(\gamma_p(t)h) =(R_h)_{* \gamma_p(t)}\frac{d \gamma_p}{dt}(t) =(R_h)_{* \gamma_p(t)}\chi_{\gamma_p(t)} =\mathcal{H}^{c_t}_{(\gamma_p(t),h)}. $$ Thus we finish the proof of Theorem \ref{construction}. \hfill $\Box$ \section{Basic Examples} In this section we apply Theorem \ref{construction} and give basic examples of Lagrangian mean curvature flows in the Euclidean spaces. \subsection{Self-shrinkers and self-expanders}\label{3-1} Let $M=\mathbb{C}^d$ be equipped with the standard complex structure $I$, the standard K\"ahler form $\omega$ and the standard holomorphic volume form $\Omega$. The K\"ahler metric is denoted by $g$. Let $H$ be $S^1= \{ \zeta \in \mathbb{C}~|~|\zeta|=1\}$ and $\mathfrak{h}$ its Lie algebra. Take $\xi_0 \in \mathfrak{h}$ such that $\mathrm{Exp}_H t \xi_0 =e^{\sqrt{-1}t} \in H$.
Take $\xi^0 \in \mathfrak{h}^*$ such that $\langle \xi^0, \xi_0 \rangle =1$. The action of $H$ on $M$ is defined by $$ z\mathrm{Exp}_H t \xi_0 =(z_1e^{\sqrt{-1}\lambda_1 t}, \dots, z_d e^{\sqrt{-1}\lambda_d t}), $$ where $z=(z_1, \dots, z_d) \in M$ and $\lambda_i \in \mathbb{Z} \setminus \{ 0 \}$ for $i= 1, \dots, d$. The moment map $\mu_H \colon M \to \mathfrak{h}^*$ is given by $\mu_H(z)= \frac{1}{2} (\sum_{i=1}^d \lambda_i |z_i|^2)\xi^0$. We also have $$ R_{\mathrm{Exp}_H \xi}^* \Omega=e^{\sqrt{-1} \langle a_H, \xi \rangle}\Omega, \quad \text{where $\xi \in \mathfrak{h}$ and $a_H=(\lambda_1 + \dots + \lambda_d)\xi^0 \in \mathfrak{h}^*$.} $$ Note that $(\xi_0)^\#_x=(\sqrt{-1}\lambda_1 x_1, \dots, \sqrt{-1}\lambda_d x_d) \in \mathbb{C}^d \cong T_xM$ for each $x =(x_1, \dots, x_d) \in \mathbb{R}^d$. So, if we set $L= \mathbb{R}^d \setminus \{ (0, \dots, 0) \}$, then $L$ is a special Lagrangian submanifold of $(M,I,\omega,\Omega)$, and $(\xi_0)^\#_x \ne 0 \in T_xM$ and $(\xi_0)^\#_x \perp T_xL$ hold for each $x \in L$. \begin{lemma}\label{chi1} Under the above setting, the vector field $\chi$ on $L$ in Theorem \ref{construction} $(4)$ is given by $$ \chi_x=\frac{- \sum_{i=1}^d \lambda_i}{\sum_{i=1}^d \lambda_i^2 x_i^2}(\lambda_1 x_1, \dots, \lambda_d x_d) \in \mathbb{R}^d \cong T_xL, $$ where $x=(x_1, \dots, x_d) \in L$. In particular, $\lim_{|x| \to \infty}|\chi_x| = 0$ holds. \end{lemma} {\it Proof.} Note that $(\xi_0)^\#_x=(\sqrt{-1}\lambda_1 x_1, \dots, \sqrt{-1}\lambda_d x_d)$ for each $x \in L$ and that $\tilde{\xi}(x) \in \mathfrak{h}$ is defined by $g_x((\tilde{\xi}(x))^\#_x, (\xi_0)^\#_x)= \langle a_H, \xi_0 \rangle$. If we set $\tilde{\xi}(x)=\alpha_x \xi_0$, where $\alpha_x \in \mathbb{R}$, then we have $g_x(\alpha_x(\xi_0)^\#_x,(\xi_0)^\#_x)= \langle a_H, \xi_0 \rangle$. So we have $$ \alpha_x = \frac{\langle a_H, \xi_0 \rangle}{g_x((\xi_0)^\#_x,(\xi_0)^\#_x)}= \frac{\sum_{i=1}^d \lambda_i}{\sum_{i=1}^d \lambda_i^2 x_i^2}.
$$ Since $\chi_x= I_x (\tilde{\xi}(x))^\#_x = \sqrt{-1} \alpha_x (\xi_0)^\#_x$, we finish the proof. \hfill $\Box$ Then we have the following. \begin{proposition}\label{prop-existence} Fix $c_0 \in \mathfrak{h}^* \setminus \{ 0 \}$ and set $c_t=c_0 -ta_H$. Suppose $c_t \ne 0$ for $t \in [0,T)$. Then the family of maps $\{ F_t \colon (\mu_H^{-1}(c_0) \cap L) \times H \to M \}_{t \in [0,T)}$, which is defined by $F_t(p,h)=\gamma_p(t)h$, is a Lagrangian mean curvature flow. \end{proposition} {\it Proof.} First we show that, for each $p \in \mu_H^{-1}(c_0) \cap L$, the integral curve $\{ \gamma_p(t) \}$ of the vector field $\chi$ with $\gamma_p(0)=p$ exists for $t \in [0,T)$. Assume that, for some $p_0 \in \mu_H^{-1}(c_0) \cap L$, $\gamma_{p_0}(t)$ exists only for $t \in [0,t_0)$, where $0<t_0 < T$. By Lemma \ref{chi1}, we see that $| \frac{d \gamma_{p_0}}{dt}(t)|$ is uniformly bounded for $t \in [0,t_0)$. Since $c_{t_0} \ne 0$, $\mu_H^{-1}(c_{t_0}) \cap L$ is closed in $\mathbb{R}^d$. So it is easy to see that $\lim_{t \to t_0}\gamma_{p_0}(t)$ exists in $\mu_H^{-1}(c_{t_0}) \cap L$ and that $\gamma_{p_0}(t)$ extends to $t \in [0,t_0 + \epsilon)$ for some $\epsilon > 0$. This contradicts the assumption. Thus we see that, for each $p \in \mu_H^{-1}(c_0) \cap L$, the integral curve $\{ \gamma_p(t) \}$ of the vector field $\chi$ with $\gamma_p(0)=p$ exists for $t \in [0,T)$. Then the claim follows from Theorem \ref{construction} $(5)$. \hfill $\Box$ The above examples are already known. In \cite{LW2} Lee and Wang found them as examples of Lagrangian self-shrinkers or self-expanders. Here we reprove their results in our context. \begin{proposition}{\rm\cite{LW2}} Fix $c \in \mathfrak{h}^* \setminus \{ 0 \}$ and suppose $\mu_H^{-1}(c) \cap L \ne \emptyset$. Define a map $\phi_c \colon (\mu_H^{-1}(c)\cap L) \times H \to M$ by $\phi_c(x,h)=xh$. Let $\phi_c(x,h)^\perp$ be the normal component of the position vector $\phi_c(x,h)$ in $T_{\phi_c(x,h)}M$.
Then the mean curvature vector $\mathcal{H}^c \in \Gamma(\phi_c^*TM)$ is given by $$ \mathcal{H}^c_{(x,h)}= \alpha_c \phi_c(x,h)^\perp \in T_{\phi_c(x,h)}M, \quad \text{where $\alpha_c= \frac{-\sum_{i=1}^d \lambda_i}{2 \langle c,\xi_0 \rangle}$.} $$ That is, $\phi_c$ is a self-shrinker if $\alpha_c<0$, and a self-expander if $\alpha_c >0$. \end{proposition} {\it Proof.} Recall $\langle \mu_H(x), \xi_0 \rangle = \frac{1}{2} \sum_{i=1}^d \lambda_i x_i^2$ for $x=(x_1, \dots, x_d) \in L$. Since the gradient of $\langle \mu_H( \cdot ), \xi_0 \rangle|_L$ is a normal vector field of $\mu_H^{-1}(c) \cap L$ in $L$, the unit normal vector field $\nu$ of $\mu_H^{-1}(c)\cap L$ in $L$ is given by $$ \nu_x=\frac{1}{\sqrt{\sum_{i=1}^d \lambda_i^2 x_i^2}}(\lambda_1 x_1, \dots, \lambda_d x_d). $$ For $x \in \mu_H^{-1}(c) \cap L$, fix an orthonormal basis $v_1, \dots, v_d$ of $(T_x \mathrm{Im}\phi_c)^\perp$ satisfying $v_1=\nu_x$. Then we have $v_i \perp T_x L$, in particular $g_x(x,v_i)=0$, for $i=2, \dots, d$. Moreover, $(R_h)_{*x}v_1, \dots, (R_h)_{*x}v_d$ is an orthonormal basis of $(T_{\phi_c(x,h)} \mathrm{Im}\phi_c)^\perp$. So we have, for $(x,h) \in (\mu_H^{-1}(c)\cap L) \times H$, \begin{align*} \phi_c(x,h)^\perp &=\sum_{i=1}^d g_{xh}(\phi_c(x,h), (R_h)_{*x}v_i)(R_h)_{*x}v_i \\ &=\sum_{i=1}^d g_{x}(x, v_i)(R_h)_{*x}v_i \\ &=(R_h)_{*x}(\sum_{i=1}^d g_{x}(x, v_i)v_i) \\ &=(R_h)_{*x}(g_{x}(x, v_1)v_1) \\ &=(R_h)_{*x}(\frac{\sum_{i=1}^d \lambda_i x_i^2}{\sqrt{\sum_{i=1}^d \lambda_i^2 x_i^2}}\frac{1}{\sqrt{\sum_{i=1}^d \lambda_i^2 x_i^2}}(\lambda_1 x_1, \dots, \lambda_d x_d)) \\ &=(R_h)_{*x}(\frac{2\langle c, \xi_0 \rangle}{\sum_{i=1}^d \lambda_i^2 x_i^2}(\lambda_1 x_1, \dots, \lambda_d x_d)) .
\end{align*} Together with Lemma \ref{chi1} and Theorem \ref{construction} $(4)$, we have $$\alpha_c \phi_c(x,h)^\perp =(R_h)_{*x}\chi_x=\mathcal{H}^c_{(x,h)}.$$ \hfill $\Box$ Set $L_t = \{ xh ~|~ x \in \mu_H^{-1}(c_t) \cap \mathbb{R}^d, h \in H\}$ for $t \in \mathbb{R}$, where $c_t=c_0 -ta_H \in \mathfrak{h}^*$ as above. Note that $L_t= \mathrm{Im} \phi_{c_t}$ if $c_t \ne 0$. Lee and Wang also proved that $\{ L_t \}_{t \in \mathbb{R}}$ forms an eternal solution of Brakke flow in \cite{LW1,LW2}. \subsection{Translating solitons} Let $M=\mathbb{C}^{d+1}$ be equipped with the standard complex structure $I$, the standard K\"ahler form $\omega$ and the standard holomorphic volume form $\Omega$. The K\"ahler metric is denoted by $g$. Let $H=\mathbb{R}$ and $\mathfrak{h}$ its Lie algebra. Take $\xi_0 \in \mathfrak{h}$ such that $\mathrm{Exp}_H t \xi_0 =t \in H$. Take $\xi^0 \in \mathfrak{h}^*$ such that $\langle \xi^0, \xi_0 \rangle =1$. The action of $H$ on $M$ is defined by $$ z\mathrm{Exp}_H t \xi_0 =(z_1e^{\sqrt{-1}\lambda_1 t}, \dots, z_d e^{\sqrt{-1}\lambda_d t}, z_{d+1}+ \sqrt{-1}t), $$ where $z=(z_1, \dots, z_{d+1}) \in M$ and $\lambda_i \in \mathbb{R}$ for $i= 1, \dots, d$. The moment map $\mu_H \colon M \to \mathfrak{h}^*$ is given by $\mu_H(z)= (\frac{1}{2} \sum_{i=1}^d \lambda_i |z_i|^2 + \mathrm{Re}\, z_{d+1})\xi^0$, where $\mathrm{Re}\, z_{d+1}$ is the real part of $z_{d+1} \in \mathbb{C}$. We also have $$ R_{\mathrm{Exp}_H \xi}^* \Omega=e^{\sqrt{-1} \langle a_H, \xi \rangle}\Omega, \quad \text{where $\xi \in \mathfrak{h}$ and $a_H=(\lambda_1 + \dots + \lambda_d)\xi^0 \in \mathfrak{h}^*$.} $$ Set $L= \mathbb{R}^{d+1}$. Then $L$ is a special Lagrangian submanifold of $(M, I,\omega,\Omega)$. Since $(\xi_0)^\#_x=(\sqrt{-1}\lambda_1 x_1, \dots, \sqrt{-1}\lambda_d x_d, \sqrt{-1}) \in \mathbb{C}^{d+1} \cong T_xM$ for each $x \in L$, we have $(\xi_0)^\#_x \ne 0 \in T_xM$ and $(\xi_0)^\#_x \perp T_xL$.
\begin{lemma}\label{chi2} Under the above setting, the vector field $\chi$ on $L$ in Theorem \ref{construction} $(4)$ is given by $$ \chi_x=\frac{- \sum_{i=1}^d \lambda_i}{1+\sum_{i=1}^d \lambda_i^2 x_i^2}(\lambda_1 x_1, \dots, \lambda_d x_d,1) \in \mathbb{R}^{d+1} \cong T_xL, $$ where $x=(x_1, \dots, x_{d+1}) \in L$. In particular, $|\chi_x|$ is uniformly bounded on $L$. \end{lemma} {\it Proof.} Note that $(\xi_0)^\#_x=(\sqrt{-1}\lambda_1 x_1, \dots, \sqrt{-1}\lambda_d x_d, \sqrt{-1})$ for each $x \in L$ and that $\tilde{\xi}(x) \in \mathfrak{h}$ is defined by $g_x((\tilde{\xi}(x))^\#_x, (\xi_0)^\#_x)= \langle a_H, \xi_0 \rangle$. If we set $\tilde{\xi}(x)=\alpha_x \xi_0$, where $\alpha_x \in \mathbb{R}$, then we have $g_x(\alpha_x(\xi_0)^\#_x,(\xi_0)^\#_x)= \langle a_H, \xi_0 \rangle$. So we have $$ \alpha_x = \frac{\langle a_H, \xi_0 \rangle}{g_x((\xi_0)^\#_x,(\xi_0)^\#_x)}= \frac{\sum_{i=1}^d \lambda_i}{1+\sum_{i=1}^d \lambda_i^2 x_i^2}. $$ Since $\chi_x= I_x (\tilde{\xi}(x))^\#_x = \sqrt{-1} \alpha_x (\xi_0)^\#_x$, we finish the proof. \hfill $\Box$ In contrast to the case of Proposition \ref{prop-existence}, $\mu_H^{-1}(c_t) \cap L$ is closed in $\mathbb{R}^{d+1}$ for any $t \in \mathbb{R}$ in this case. So, due to the above lemma, by the same argument as in the proof of Proposition \ref{prop-existence}, we see that, for any $p \in L$, the integral curve $\{ \gamma_p(t) \}$ of the vector field $\chi$ with $\gamma_p(0)=p$ exists for all $t \in \mathbb{R}$. Theorem \ref{construction} implies the following. \begin{proposition} Fix $c_0 \in \mathfrak{h}^*$ and set $c_t=c_0 -ta_H$ for $t \in \mathbb{R}$. Then the family of maps $\{ F_t \colon (\mu_H^{-1}(c_0) \cap L) \times H \to M \}_{t \in \mathbb{R}}$, which is defined by $F_t(p,h)=\gamma_p(t)h$, is a Lagrangian mean curvature flow. \end{proposition} The next proposition shows that the above examples are Lagrangian translating solitons. These are higher-dimensional generalizations of the examples given in \cite{CL}.
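For instance, in the lowest-dimensional case $d=1$ and $\lambda_1=1$ (a direct specialization of the formulas above, recorded for illustration), we have
\begin{align*}
\mu_H(z)=\Bigl(\frac{1}{2}|z_1|^2+\mathrm{Re}\, z_2\Bigr)\xi^0, \qquad a_H=\xi^0, \qquad \chi_x=\frac{-1}{1+x_1^2}(x_1,1),
\end{align*}
and $\mu_H^{-1}(c) \cap L$ is the parabola $\{ (x_1,x_2) \in \mathbb{R}^2 ~|~ \frac{1}{2}x_1^2+x_2=c \}$. Since $c_t=c_0-ta_H$, the level sets are translated by $(0,-t)$, and hence the Lagrangian surfaces $\mathrm{Im}\, \phi_{c_t} = \{ (x_1 e^{\sqrt{-1}s}, x_2+\sqrt{-1}s) ~|~ \frac{1}{2}x_1^2+x_2=c_t,~ s \in \mathbb{R} \} \subset \mathbb{C}^2$ are translated by $(0,-t) \in \mathbb{C}^2$ under the flow.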
\begin{proposition} Fix $c \in \mathfrak{h}^*$. Define a map $\phi_c \colon (\mu_H^{-1}(c)\cap L) \times H \to M$ by $\phi_c(x,h)=xh$. Set $u=(0, \dots, 0,-\sum_{i=1}^d \lambda_i) \in \mathbb{R}^{d+1}$. Let $u^\perp_{(x,h)}$ be the normal component of the vector $u$ in $T_{\phi_c(x,h)}M$. Then the mean curvature vector is given by $\mathcal{H}^c_{(x,h)}= u^\perp_{(x,h)} \in T_{\phi_c(x,h)}M$. That is, $\phi_c$ is a translating soliton. \end{proposition} {\it Proof.} Recall $\langle \mu_H(x), \xi_0 \rangle = \frac{1}{2} \sum_{i=1}^d \lambda_i x_i^2 + x_{d+1}$ for $x=(x_1, \dots, x_{d+1}) \in L$. Since the gradient of $\langle \mu_H( \cdot ), \xi_0 \rangle|_L$ is a normal vector field of $\mu_H^{-1}(c) \cap L$ in $L$, the unit normal vector field $\nu$ of $\mu_H^{-1}(c) \cap L$ in $L$ is given by $$ \nu_x=\frac{1}{\sqrt{1+\sum_{i=1}^d \lambda_i^2 x_i^2}}(\lambda_1 x_1, \dots, \lambda_d x_d,1). $$ For $x \in \mu_H^{-1}(c) \cap L$, fix an orthonormal basis $v_1, \dots, v_{d+1}$ of $(T_x \mathrm{Im}\phi_c)^\perp$ satisfying $v_1=\nu_x$. Then we have $v_i \perp T_x L$, in particular $g_x(u, v_i)=0$, for $i=2, \dots, d+1$. Moreover, $(R_h)_{*x}v_1, \dots, (R_h)_{*x}v_{d+1}$ is an orthonormal basis of $(T_{\phi_c(x,h)} \mathrm{Im}\phi_c)^\perp$. If we note $(R_h)_{*x}u=u$, we have, for $(x,h) \in (\mu_H^{-1}(c)\cap L) \times H$, \begin{align*} u^\perp_{(x,h)} &=\sum_{i=1}^{d+1} g_{xh}(u, (R_h)_{*x}v_i)(R_h)_{*x}v_i \\ &=\sum_{i=1}^{d+1} g_{x}(u, v_i)(R_h)_{*x}v_i \\ &=(R_h)_{*x}(\sum_{i=1}^{d+1} g_{x}(u, v_i)v_i) \\ &=(R_h)_{*x}(g_{x}(u, v_1)v_1) \\ &=(R_h)_{*x}(\frac{-\sum_{i=1}^d \lambda_i}{\sqrt{1+\sum_{i=1}^d \lambda_i^2 x_i^2}}\frac{1}{\sqrt{1+\sum_{i=1}^d \lambda_i^2 x_i^2}}(\lambda_1 x_1, \dots, \lambda_d x_d,1)) \\ &=(R_h)_{*x}\chi_x=\mathcal{H}^c_{(x,h)}, \end{align*} where the last two equalities follow from Lemma \ref{chi2} and Theorem \ref{construction} $(4)$, respectively.
\hfill $\Box$ \section{Lagrangian mean curvature flows in hyperK\"ahler manifolds} In this section we give examples of special Lagrangian submanifolds, which satisfy the conditions in Theorem \ref{construction}, in non-flat Calabi-Yau manifolds. \begin{lemma}\label{K-involution} Let $(M,I,\omega)$ be a K\"ahler manifold. Let $\sigma \colon M \to M$ be an isometric, anti-holomorphic involution, whose fixed point set is denoted by $M^\sigma$. Then the following holds. \\ $(1)$ $\sigma^* \omega = - \omega.$ \\ $(2)$ $M^\sigma$ is a Lagrangian submanifold. \\ $(3)$ For $p \in M^\sigma$, the set of eigenvalues of $\sigma_{*p} \colon T_pM \to T_pM$ is $\{1,-1 \}$. The eigenspaces $V_p(\lambda)$, corresponding to the eigenvalues $\lambda=\pm 1$, are given by $V_p(1)= T_p(M^\sigma)$ and $V_p(-1)= T_p(M^\sigma)^\perp$, respectively. \end{lemma} {\it Proof.} $(1)$ Let $g$ be the K\"ahler metric on $(M,I,\omega)$. For $p \in M$ and $v,w \in T_pM$, we have \begin{align*} (\sigma^* \omega)_p&(v,w)=\omega_{\sigma(p)}(\sigma_{*p}v, \sigma_{*p}w) =g_{\sigma(p)}(I_{\sigma(p)} \sigma_{*p}v, \sigma_{*p}w) \\ &=-g_{\sigma(p)}(\sigma_{*p}I_p v, \sigma_{*p}w) =-(\sigma^*g)_p(I_p v, w)=-g_p(I_p v, w)=-\omega_p(v,w). \end{align*} $(3)$ By the slice theorem, we see that $M^\sigma$ is a submanifold. Since $\sigma_{*p} \colon T_pM \to T_pM$ is an isometric involution for $p \in M^\sigma$, we have an orthogonal decomposition $T_pM = V_p(1) \oplus V_p(-1)$ and $T_p (M^\sigma)=V_p(1)$ for $p \in M^\sigma$. Then $(3)$ follows. \\ $(2)$ If $p \in M^\sigma$ and $v,w \in V_p(\lambda)$, where $\lambda = \pm 1$, then we have \begin{align*} \omega_p(v,w)=-(\sigma^* \omega)_p(v,w)=-\omega_p(\sigma_{*p}v, \sigma_{*p}w) =-\omega_p(\lambda v, \lambda w)=-\omega_p(v,w). \end{align*} So we have $\omega_p(v,w)=0$ for $v,w \in V_p(\lambda)$. Therefore we have $\dim V_p(\lambda) \le n$, where $n$ is the complex dimension of $(M,I)$.
Since $T_pM = V_p(1) \oplus V_p(-1)$, we have $\dim V_p(1) = \dim V_p(-1) = n$. Thus we see that $M^\sigma$ is a Lagrangian submanifold. \hfill $\Box$ Let us recall the notion of hyperK\"ahler manifolds. \begin{definition} {\em A} hyperK\"ahler manifold {\em is a collection $(M,g, I_1,I_2,I_3)$ such that $(M,g)$ is a $4n$-dimensional Riemannian manifold with three complex structures $I_1,I_2,I_3$ which satisfy the following properties: \\ $(i)$ $I_1,I_2,I_3$ satisfy the quaternion relation, that is, $I_1 I_2=-I_2 I_1=I_3$. \\ $(ii)$ $g$ is a K\"ahler metric with respect to each complex structure $I_1,I_2,I_3$.} \end{definition} Let $\omega_j$ be the K\"ahler form of $(M,g,I_j)$ for $j=1,2,3$. Then $\omega_\mathbb{C}= \omega_2+ \sqrt{-1}\omega_3$ is a holomorphic symplectic form and $\displaystyle \Omega=\frac{\omega_\mathbb{C}^n}{n!}$ is a holomorphic volume form of $(M,I_1)$. Moreover, $(M,I_1, \omega_1, \Omega)$ is a Calabi-Yau $2n$-fold in the sense of Definition \ref{def-CY}. \begin{proposition}\label{construction2} Let $(M,g,I_1,I_2,I_3)$ be a $4n$-dimensional hyperK\"ahler manifold admitting an isometric involution $\sigma \colon M \to M$ which is anti-holomorphic with respect to $I_1$ and holomorphic with respect to $I_2$. Denote the set of fixed points of $\sigma$ by $M^\sigma$, and the embedding by $\iota \colon M^\sigma \to M$. Let $\omega_j$ be the K\"ahler form for $j=1,2,3$. Set $\omega_\mathbb{C}= \omega_2+ \sqrt{-1}\omega_3$ and $\displaystyle \Omega=\frac{\omega_\mathbb{C}^n}{n!}$. Then the following holds. \\ $(1)$ $\sigma^* \omega_1 = - \omega_1$, $\sigma^* \omega_2 = \omega_2$ and $\sigma^* \omega_3 = - \omega_3$. In particular, $M^\sigma$ is a Lagrangian submanifold of $(M, \omega_j)$ for $j=1,3$. \\ $(2)$ $M^\sigma$ is a complex submanifold of $(M, I_2)$. In particular, $M^\sigma$ is oriented. \\ $(3)$ $\iota^*\Omega = \mathrm{vol}_{\iota^* g}$ holds.
In particular, $M^\sigma$ is a special Lagrangian submanifold of $(M, I_1, \omega_1, \Omega)$. \\ $(4)$ In addition, suppose that an abelian Lie group $H$ acts on $M$, preserving $I_1$ and $\omega_1$, with a moment map $\mu_H \colon M \to \mathfrak{h}^*$, and that $\sigma(ph)=\sigma(p)h^{-1}$ holds for each $p \in M$ and $h \in H$. Set $L= \{ p \in M^\sigma~|~ \xi^\#_p \ne 0 ~\text{for each $\xi \in \mathfrak{h} \setminus \{0 \}$} \}$. Then $\xi^\#_p \perp T_pL$ holds for each $p \in L$ and $\xi \in \mathfrak{h} \setminus \{0 \}$. That is, $L$ satisfies the conditions in Theorem \ref{construction}. \end{proposition} {\it Proof.} $(1)$ Note that $\sigma \colon M \to M$ is an anti-holomorphic involution with respect to $I_3=I_1 I_2$. The claim follows from Lemma \ref{K-involution}. \\ $(2)$ Since $(I_2)_p \colon T_pM \to T_pM$ commutes with $\sigma_{*p} \colon T_pM \to T_pM$ for $p \in M^\sigma$, $(I_2)_p$ preserves the eigenspaces $V_p(\lambda)$ of $\sigma_{*p}$ for $\lambda= \pm 1$. Since $T_p(M^\sigma)=V_p(1)$ by Lemma \ref{K-involution}, the claim follows. \\ $(3)$ Since $\iota^*\omega_2$ is the K\"ahler form of $(M^\sigma, I_2)$ by $(2)$, we see that $\frac{(\iota^*\omega_2)^n}{n!}$ is the volume form $\mathrm{vol}_{\iota^* g}$ of $M^\sigma$. On the other hand, since $\iota^*\omega_\mathbb{C}= \iota^*\omega_2$ by $(1)$, we have $$\iota^*\Omega=\iota^*\Big(\frac{\omega_\mathbb{C}^n}{n!}\Big)=\frac{(\iota^*\omega_\mathbb{C})^n}{n!}=\frac{(\iota^*\omega_2)^n}{n!}=\mathrm{vol}_{\iota^* g}. $$ $(4)$ Since $\sigma(ph)=\sigma(p)h^{-1}$ holds for each $p \in M$ and $h \in H$, we have $\sigma_{*p}\xi^\#_p = - \xi^\#_p$ for each $p \in M^\sigma$ and $\xi \in \mathfrak{h}$. By Lemma \ref{K-involution}, we have $\xi^\#_p \perp T_p(M^\sigma)$ for each $p \in M^\sigma$ and $\xi \in \mathfrak{h}$.
\hfill $\Box$ \section{Lagrangian mean curvature flows in Ricci-flat ALE spaces of type $A_n$} In this section, as an application of Proposition \ref{construction2}, we construct Lagrangian mean curvature flows in 4-dimensional Ricci-flat ALE spaces of type $A_n$ and study their properties. \subsection{Ricci-flat ALE spaces of type $A_n$} In this subsection we construct 4-dimensional Ricci-flat ALE spaces of type $A_n$. Our construction seems to be slightly different from the original one due to \cite{Kr}, but is essentially the same. In the following, we identify the quaternionic vector space $\mathbb{H}^{n+1}$ with the product $\mathbb{C}^{n+1}\times \mathbb{C}^{n+1}$. The standard hyperK\"ahler structure $(g,I_1,I_2,I_3)$ on $\mathbb{H}^{n+1}$ is defined by \begin{align*} &g((z,w),(z^\prime,w^\prime))=\mathrm{Re}(^t\!z\overline{z^\prime}+^t\!w\overline{w^\prime}),\\ &I_1(z,w)=(\sqrt{-1}z,\sqrt{-1}w), I_2(z,w)=(-\overline{w},\overline{z}),I_3(z,w)=(-\sqrt{-1}\overline{w},\sqrt{-1}\overline{z}), \end{align*} where $z,z^\prime, w,w^\prime \in \mathbb{C}^{n+1}$ are column vectors. The K\"ahler forms $\omega_i$ corresponding to the complex structure $I_i$ for $i=1,2,3$ are given by \begin{align*} &\omega_1((z,w),(z^\prime, w^\prime))=\mathrm{Re}\{ \sqrt{-1}(^t\!z\overline{z^\prime}+ \! ^t\!w\overline{w^\prime})\},\\ &(\omega_2 + \sqrt{-1} \omega_3)((z,w),(z^\prime, w^\prime))= \! ^t\!z w^\prime- \! ^t\!z^\prime w. \end{align*} The right action of the torus $T^{n+1} = \{ \zeta=(\zeta_0, \cdots, \zeta_n) \in \mathbb{C}^{n+1}~|~ |\zeta_i|=1\quad \text{for}~~ i= 0, \dots, n \}$ on $\mathbb{H}^{n+1}=\mathbb{C}^{n+1}\times \mathbb{C}^{n+1}$ is defined by \begin{align}\label{equ-action} ( \begin{pmatrix}z_0 \\ \vdots \\ z_n \end{pmatrix} , \begin{pmatrix}w_0 \\ \vdots \\ w_n \end{pmatrix}) \zeta = ( \begin{pmatrix}z_0 \zeta_0 \\ \vdots \\ z_n \zeta_n \end{pmatrix} , \begin{pmatrix}w_0 \zeta_0^{-1} \\ \vdots \\ w_n \zeta_n^{-1} \end{pmatrix}) .
\end{align} Define a group homomorphism $\rho \colon T^{n+1} \to T^1$ by $\rho(\zeta)=\zeta_0 \dots \zeta_n$. Denote the kernel of $\rho$ by $K$, and let $\iota \colon K \to T^{n+1}$ be the embedding. Then we have the following exact sequence of abelian Lie groups: \begin{align}\label{exact} 1 \longrightarrow K \overset{\iota}{\longrightarrow} T^{n+1} \overset{\rho}{\longrightarrow}T^1 \longrightarrow 1. \end{align} We also have the corresponding exact sequences of the Lie algebras and their dual spaces: \begin{align*} 0 \longrightarrow \mathfrak{k} \overset{\iota_*}{\longrightarrow} \mathfrak{t}^{n+1} \overset{\rho_*}{\longrightarrow}\mathfrak{t}^1 \longrightarrow 0, \quad 0 \longleftarrow \mathfrak{k}^* \overset{\iota^*}{\longleftarrow} (\mathfrak{t}^{n+1})^* \overset{\rho^*}{\longleftarrow}(\mathfrak{t}^1)^* \longleftarrow 0. \end{align*} Let $e_0, \dots, e_n$ be the standard basis of $\mathfrak{t}^{n+1}$ and $e^0, \dots, e^n \in (\mathfrak{t}^{n+1})^*$ the dual basis. We also denote the standard basis of $\mathfrak{t}^1$ by $p_1$ and the dual basis by $p^1 \in (\mathfrak{t}^1)^*$. Then we have \begin{align*} \rho_*e_0= \cdots = \rho_*e_n= p_1, \quad \rho^* p^1=e^0 + \cdots + e^n . \end{align*} If we set $f_i=e_i -e_{i-1}$ for $i=1,\dots, n$, then $f_1, \dots, f_n$ is a basis of $\mathfrak{k}$. The dual basis $f^1, \dots, f^n \in \mathfrak{k}^*$ satisfies the following: \begin{align}\label{equ-f} \iota^*e^0= -f^1, \quad \iota^*e^1 = f^1 -f^2, \quad \cdots, \quad \iota^*e^{n-1}=f^{n-1}-f^n, \quad \iota^*e^n=f^n.
\end{align} The induced $K$-action on $\mathbb{H}^{n+1}$ admits a hyperK\"ahler moment map $$\mu_K=(\mu_K^1,\mu_K^2, \mu_K^3) \colon \mathbb{H}^{n+1} \to \mathfrak{k}^* \otimes \mathrm{Im}\mathbb{H},$$ which is given by \begin{align*} &\mu_K^1(z,w)=\frac{1}{2} \sum_{i=0}^n(|z_i|^2 -|w_i|^2)\iota^*e^i=\frac{1}{2} \sum_{i=1}^n (|z_i|^2 -|w_i|^2-|z_{i-1}|^2 +|w_{i-1}|^2)f^i, \\ & (\mu_K^2 + \sqrt{-1}\mu_K^3)(z,w)=-\sqrt{-1} \sum_{i=0}^n z_i w_i\iota^*e^i= -\sqrt{-1} \sum_{i=1}^n (z_i w_i -z_{i-1} w_{i-1} )f^i. \end{align*} Define the codimension one subspaces $W_{i,j}$ of $\mathfrak{k}^*$ for $0 \le i < j \le n$ by \begin{align*} W_{i,j} &= \mathrm{span}\{ \iota^*e^0, \dots, \widehat{\iota^*e^i}, \dots, \widehat{\iota^*e^j}, \dots, \iota^*e^n \} \\ &= \{ \alpha \in \mathfrak{k}^*~|~ \langle \alpha, \sum_{k=i+1}^j f_k \rangle =0 \}, \end{align*} where the second equality follows from $\displaystyle \sum_{k=i+1}^j f_k =e_j -e_i$. \begin{proposition}\label{prop-rv} The set of regular values of the hyperK\"ahler moment map \\ $\mu_K \colon \mathbb{H}^{n+1} \to \mathfrak{k}^* \otimes \mathrm{Im}\mathbb{H}$ is \begin{align*} \mathfrak{k}^* \otimes \mathrm{Im}\mathbb{H} \setminus \bigcup_{0 \le i < j \le n} W_{i,j}\otimes \mathrm{Im}\mathbb{H}. \end{align*} \end{proposition} {\it Proof.} In general, let us define a map $\nu \colon \mathbb{C}^2 \to \mathbb{R} \times \mathbb{C}$ by $\nu(u,v)=(|u|^2 - |v|^2, uv)$. Then it is easy to observe that $\nu$ is surjective, and that the set of regular points of $\nu$ is $\mathbb{C}^2 \setminus \{ (0,0)\}$. By this observation, the image of the differential of $\mu_K$ at $(z,w) \in \mathbb{H}^{n+1}$ is \begin{align*} \sum_{j \in J_{(z,w)}}\iota^* e^j \otimes \mathrm{Im}\mathbb{H}, \quad \text{where $J_{(z,w)}= \{ j ~|~ (z_j,w_j) \ne (0,0) \}$.} \end{align*} Therefore, the differential of $\mu_K$ at $(z,w)$ is surjective if and only if $\{ \iota^* e^j ~|~ j \in J_{(z,w)} \}$ spans $\mathfrak{k}^*$. 
Since $\{ \iota^* e^0, \dots, \widehat{\iota^* e^k}, \dots, \iota^* e^n \}$ forms a basis of $\mathfrak{k}^*$ for $k=0, \dots, n$, the proposition follows. \hfill $\Box$ If $(\alpha,\beta) \in \mathfrak{k}^* \oplus (\mathfrak{k}^* \!\otimes\! \mathbb{C})= \mathfrak{k}^* \otimes \mathrm{Im}\mathbb{H}$ is a regular value of $\mu_K$, then we have a smooth hyperK\"ahler quotient \begin{align*} M(\alpha,\beta)= (\mu_K^{-1}(\alpha,\beta)/K, g,I_1,I_2,I_3), \end{align*} on which $T^1=T^{n+1}/K$ acts, preserving the hyperK\"ahler structure. If $(\alpha,\beta)$ is a critical value, then the hyperK\"ahler quotient $M(\alpha,\beta)= \mu_K^{-1}(\alpha,\beta)/K$ has singularities. A point in $M(\alpha,\beta)$, which is represented by $(z,w) \in \mu_K^{-1}(\alpha,\beta)$, is denoted by $[z,w]_K$. The hyperK\"ahler structure $(g,I_1,I_2,I_3)$ on $\mu_K^{-1}(\alpha,\beta)/K$, where $(\alpha,\beta)$ is a regular value of $\mu_K$, is defined as follows. Consider the natural projection $$\pi \colon \mu_K^{-1}(\alpha,\beta) \to M(\alpha,\beta)$$ as a principal $K$-bundle. For $(z,w) \in \mu_K^{-1}(\alpha,\beta)$, the vertical subspace $V_{(z,w)}$ is defined to be the kernel of the differential \begin{align*} \pi_{*(z,w)} \colon T_{(z,w)}\mu_K^{-1}(\alpha,\beta) \to T_{[z,w]_K}M(\alpha,\beta). \end{align*} Its orthogonal complement $H_{(z,w)}$ in $T_{(z,w)}\mu_K^{-1}(\alpha,\beta)$ is called the horizontal subspace. So we have the orthogonal decomposition \begin{align}\label{o-decomp} T_{(z,w)}\mu_K^{-1}(\alpha,\beta)=H_{(z,w)} \oplus V_{(z,w)}. \end{align} It is easy to see that $V_{(z,w)}$ is the tangent space of the $K$-orbit through $(z,w)$ and that $H_{(z,w)}$ is a quaternionic subspace of $T_{(z,w)}\mathbb{H}^{n+1}$. The hyperK\"ahler structure on $M(\alpha,\beta)$ is defined so that the map $\pi_{*(z,w)}$ induces a hyperK\"ahler isometry from $H_{(z,w)}$ to $T_{[z,w]_K}M(\alpha,\beta)$.
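The linear-algebraic identities underlying this quotient construction are easy to sanity-check numerically. The following sketch (not part of the proofs; it assumes Python with \texttt{numpy}, and all names in it are ours) verifies, at random tangent vectors, the quaternion relation $I_1I_2=-I_2I_1=I_3$ and the compatibilities $\omega_j(v,v^\prime)=g(I_jv,v^\prime)$ for the standard hyperK\"ahler structure on $\mathbb{H}^{n+1}$ recalled at the beginning of this subsection.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # vectors live in C^{n+1}; any n works

def rand_tangent():
    # a tangent vector of H^{n+1}, written as a pair (z, w) in C^{n+1} x C^{n+1}
    return (rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1),
            rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1))

def I1(v):
    z, w = v
    return (1j * z, 1j * w)

def I2(v):
    z, w = v
    return (-np.conj(w), np.conj(z))

def I3(v):
    z, w = v
    return (-1j * np.conj(w), 1j * np.conj(z))

def g(v, vp):
    (z, w), (zp, wp) = v, vp
    return float(np.real(z @ np.conj(zp) + w @ np.conj(wp)))

def omega1(v, vp):
    (z, w), (zp, wp) = v, vp
    return float(np.real(1j * (z @ np.conj(zp) + w @ np.conj(wp))))

def omegaC(v, vp):  # omega_2 + sqrt(-1) omega_3
    (z, w), (zp, wp) = v, vp
    return complex(z @ wp - zp @ w)

def eq(v, vp):
    return np.allclose(v[0], vp[0]) and np.allclose(v[1], vp[1])

v, vp = rand_tangent(), rand_tangent()
# quaternion relation: I1 I2 = I3 and I2 I1 = -I3; each I_j squares to -id
assert eq(I1(I2(v)), I3(v))
assert eq(I2(I1(v)), (-I3(v)[0], -I3(v)[1]))
assert eq(I1(I1(v)), (-v[0], -v[1])) and eq(I2(I2(v)), (-v[0], -v[1]))
# Kaehler-form compatibility: omega_j(v, v') = g(I_j v, v')
assert np.isclose(omega1(v, vp), g(I1(v), vp))
assert np.isclose(omegaC(v, vp).real, g(I2(v), vp))
assert np.isclose(omegaC(v, vp).imag, g(I3(v), vp))
```

All assertions hold for every pair of tangent vectors, which is exactly the statement that $(g,I_1,I_2,I_3)$ is a hyperK\"ahler structure at the linear level.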
Define the action of $\mathbb{Z}_{n+1}= \{ \gamma \in \mathbb{C}~|~ \gamma^{n+1}=1 \}$ on $\mathbb{C}^2$ by \begin{align*} (u,v)\gamma=(u \gamma, v \gamma^{-1}). \end{align*} Since this action preserves the hyperK\"ahler structure of $\mathbb{C}^2$, the quotient space $\mathbb{C}^2/\mathbb{Z}_{n+1}$ is an orbifold which has the standard flat metric. A point in $\mathbb{C}^2/\mathbb{Z}_{n+1}$, which is represented by $(u,v) \in \mathbb{C}^2$, is denoted by $[u,v]_{\mathbb{Z}_{n+1}}$. \begin{proposition}\label{prop-orb} The map $\phi \colon \mathbb{C}^2/\mathbb{Z}_{n+1} \to M(0,0)$, which is defined by \begin{align*} \phi([u,v]_{\mathbb{Z}_{n+1}}) =[\frac{1}{\sqrt{n+1}}\begin{pmatrix}u \\ \vdots \\ u \end{pmatrix}, \frac{1}{\sqrt{n+1}}\begin{pmatrix}v \\ \vdots \\ v \end{pmatrix}]_K, \end{align*} is an isometry. \end{proposition} {\it Proof.} Define the map $\widetilde{\phi} \colon \mathbb{C}^2 \to \mathbb{H}^{n+1}$ by \begin{align*} \widetilde{\phi}(u,v) =(\frac{1}{\sqrt{n+1}}\begin{pmatrix}u \\ \vdots \\ u \end{pmatrix}, \frac{1}{\sqrt{n+1}}\begin{pmatrix}v \\ \vdots \\ v \end{pmatrix}). \end{align*} Note that $(z,w) \in \mu_K^{-1}(0,0)$ if and only if \begin{align}\label{0-0} |z_0|^2-|w_0|^2= \cdots = |z_n|^2-|w_n|^2, \quad z_0w_0= \cdots = z_n w_n. \end{align} Therefore, we see that $\widetilde{\phi}(u,v) \in \mu_K^{-1}(0,0)$ for each $(u,v) \in \mathbb{C}^2$. Moreover, it is easy to see that $[\widetilde{\phi}(u,v)]_K=[\widetilde{\phi}(u^\prime,v^\prime)]_K$ if and only if there exists $\gamma \in \mathbb{Z}_{n+1}$ such that $(u^\prime, v^\prime)=(u,v)\gamma$. This implies that the map $\phi$ is well-defined and injective. To see that $\phi$ is surjective, fix an arbitrary $(z,w) \in \mu_K^{-1}(0,0)$. 
By (\ref{0-0}), there exists $\zeta_k \in \mathbb{C}$ such that \begin{align*} |\zeta_k|=1, \quad (z_0,w_0)=(z_k \zeta_k, w_k \zeta_k^{-1}) \quad \text{for $k=1, \dots, n$.} \end{align*} If we set $\xi=(\zeta_1\cdots \zeta_n)^{-\frac{1}{n+1}}$, for some choice of the $(n+1)$-st root, then we have $\zeta=(\xi, \zeta_1 \xi, \dots, \zeta_n \xi) \in K$ and \begin{align*} [z,w]_K=[(z,w)\zeta]_K= \phi([z_0 \xi, w_0 \xi^{-1}]_{\mathbb{Z}_{n+1}}) . \end{align*} Thus we see that $\phi$ is surjective. The differential $\widetilde{\phi}_{*(u,v)} \colon T_{(u,v)}\mathbb{C}^2 \to T_{\widetilde{\phi}(u,v)}\mathbb{H}^{n+1}$ is an isometric embedding for $(u,v) \in \mathbb{C}^2$. It is easy to see that the image of $\widetilde{\phi}_{*(u,v)}$ is perpendicular to the tangent space of the $K$-orbit through $\widetilde{\phi}(u,v)$ for $(u,v) \in \mathbb{C}^2 \setminus \{(0,0) \}$. This implies that $\widetilde{\phi}_{*(u,v)}$ induces an isometry from $T_{(u,v)}\mathbb{C}^2 $ to the horizontal subspace $H_{\widetilde{\phi}(u,v)}$ in (\ref{o-decomp}) for $(u,v) \in \mathbb{C}^2 \setminus \{(0,0) \}$. So we see that $\phi$ is an isometry. Thus we finish the proof. \hfill $\Box$ Thus we see that $(M(0,0),I_1)$ is isomorphic to $\mathbb{C}^2/\mathbb{Z}_{n+1}$. Moreover, it is well-known that $(M(\alpha,\beta), I_1)$ is a minimal resolution of $(M(0,0),I_1)$. \subsection{Torus actions on Ricci-flat ALE spaces of type $A_n$} From now on, we fix \begin{align}\label{alpha} \alpha = \sum_{i=1}^n \alpha_i f^i \in \mathfrak{k}^*, \quad \text{where $\alpha_i>0$ for $i=1, \dots, n$} \end{align} and $h \in (\mathfrak{t}^{n+1})^*$ such that $\alpha= \iota^* h$, that is, \begin{align}\label{h} h = \sum_{i=0}^n h_i e^i , \quad \text{where $h_0 \in \mathbb{R}$ is arbitrary and $h_i=h_{i-1}+\alpha_i$ for $i=1, \dots, n$}. \end{align} If we set $h_{-1}=- \infty$ and $h_{n+1}=\infty$, then, by (\ref{alpha}) and (\ref{h}), we have \begin{align}\label{h2} - \infty=h_{-1} <h_0 < h_1 < \dots < h_n < h_{n+1}=\infty.
\end{align} By Proposition \ref{prop-rv}, $(\alpha,0) \in \mathfrak{k}^* \oplus (\mathfrak{k}^* \otimes \mathbb{C})= \mathfrak{k}^* \otimes \mathrm{Im}\mathbb{H}$ is a regular value of $\mu_K$. Therefore $M(\alpha,0)= (\mu_K^{-1}(\alpha,0)/K, g,I_1,I_2,I_3)$ is a smooth manifold, on which $T^1=T^{n+1}/K$ acts, preserving the hyperK\"ahler structure. The K\"ahler form corresponding to $I_j$ is denoted by $\omega_j$ for $j=1,2,3$. Then $\omega_\mathbb{C}=\omega_2+\sqrt{-1}\omega_3$ is a holomorphic symplectic form on $(M(\alpha,0),I_1)$. Note that $(z,w) \in \mu_K^{-1}(\alpha,0)$ if and only if \begin{align}\label{alpha-0} &\frac{1}{2}(|z_0|^2-|w_0|^2)-h_0= \dots =\frac{1}{2}(|z_n|^2-|w_n|^2)-h_n, \nonumber \\ &z_0 w_0= \dots =z_n w_n. \end{align} Define the action of a torus $G= \{ (\gamma_0,\gamma_1) \!\in\! \mathbb{C}^2~|~ |\gamma_0 |=|\gamma_1|=1 \}$ on $M(\alpha,0)$ by \begin{align}\label{G-action} [ \begin{pmatrix}z_0 \\z_1\\ \vdots \\z_n\end{pmatrix}, \begin{pmatrix}w_0 \\w_1\\ \vdots \\w_n\end{pmatrix}]_K (\gamma_0,\gamma_1) =[ \begin{pmatrix}z_0\gamma_0\gamma_1 \\z_1\gamma_0\\ \vdots \\z_n\gamma_0\end{pmatrix}, \begin{pmatrix}w_0\gamma_1^{-1} \\w_1\\ \vdots \\w_n\end{pmatrix}]_K . \end{align} It is easy to see that the $G$-action is well-defined. Note that the action of $(1, \gamma_1) \in G$ is the same as the action of $\rho(\gamma_1,1,\dots,1) \in T^1$, where $\rho \colon T^{n+1} \to T^1$ is the group homomorphism in (\ref{exact}). Denote the standard basis of $\mathfrak{g}$ by $p_0,p_1$ and the dual basis by $p^0,p^1 \in \mathfrak{g}^*$. Then we have the following. Since the proof is straightforward, we omit the proof. \begin{proposition}\label{prop-moment} $(1)$ The $G$-action on $M(\alpha,0)$ preserves the complex structure $I_1$ and the corresponding K\"ahler form $\omega_1$. 
Moreover, this action admits a moment map $\mu_G \colon M(\alpha,0) \to \mathfrak{g}^*$, which is given by \begin{align}\label{moment} \mu_G([z,w]_K)=\frac{1}{2}(\sum_{i=0}^n |z_i|^2)p^0 + \{ \frac{1}{2}(|z_k|^2-|w_k|^2)-h_k \} p^1. \end{align} $(2)$ $R_{\mathrm{Exp}_G\xi}^*\omega_\mathbb{C}=e^{\sqrt{-1}\langle p^0, \xi \rangle}\omega_\mathbb{C}$ holds for $\xi \in \mathfrak{g}$, where $\mathrm{Exp}_G \colon \mathfrak{g} \to G$ is the exponential map for $G$. \end{proposition} We remark that, in (\ref{moment}), the term $\frac{1}{2}(|z_k|^2-|w_k|^2)-h_k$ is independent of $k=0,1, \dots ,n$ due to (\ref{alpha-0}). We note that, due to (\ref{h2}), for each $y \in \mathbb{R}$, there exists a unique $k_0 \in \{ 0, \dots, n+1 \}$ such that $-h_{k_0 -1} >y \ge -h_{k_0}$ holds. \begin{proposition}\label{prop-image} $(1)$ The image $\mathrm{Im}\mu_G$ of the moment map $\mu_G \colon M(\alpha,0) \to \mathfrak{g}^*$ is given by \begin{align}\label{image} \mathrm{Im}\mu_G = \{ xp^0 + yp^1 \in \mathfrak{g}^*~|~ x \ge \sum_{i=k_0}^n(y+ h_i)\quad\text{if $-h_{k_0 -1} >y \ge -h_{k_0}$}\}. \end{align} $(2)$ For each $xp^0 + yp^1 \in \mathrm{Im}\mu_G$, $\mu_G^{-1}(xp^0 + yp^1)$ consists of a single $G$-orbit in $M(\alpha,0)$. \end{proposition} {\it Proof.} $(1)$ Denote the right hand side of (\ref{image}) by $\Delta$. First we show $\mathrm{Im}\mu_G \subset \Delta$. Fix any $xp^0 + yp^1=\mu_G([z,w]_K) \in \mathrm{Im}\mu_G$. Then there exists a unique $k_0 \in \{ 0, \dots, n+1 \}$ such that $-h_{k_0 -1} >y \ge -h_{k_0}$ holds. Due to (\ref{alpha-0}), there exists $d_0 \in \mathbb{C}$ such that \begin{align}\label{zw2} \frac{1}{2}(|z_i|^2-|w_i|^2)-h_i=y, \quad z_i w_i=d_0 \quad \text{for $i=0, \dots, n$}. \end{align} Since $\{ \frac{1}{2}(|z_i|^2+|w_i|^2) \}^2= \{ \frac{1}{2}(|z_i|^2-|w_i|^2) \}^2 + |z_i w_i|^2=(y+ h_i)^2+|d_0|^2$, we have \begin{align}\label{zw} |z_i|^2=\sqrt{(y+ h_i)^2+|d_0|^2}+ (y+ h_i), \quad |w_i|^2=\sqrt{(y+ h_i)^2+|d_0|^2}- (y+ h_i).
\end{align} So we have \begin{align} x = \frac{1}{2}\sum_{i=0}^n |z_i|^2 &= \frac{1}{2}\sum_{i=0}^n\{ \sqrt{(y+ h_i)^2+|d_0|^2} +(y+ h_i) \} \label{ineq}\\ &\ge \frac{1}{2}\sum_{i=0}^n\{ |y+ h_i| +(y+ h_i) \} =\sum_{i=k_0}^n(y+ h_i). \end{align} Thus we see $\mathrm{Im}\mu_G \subset \Delta$. Next we show $\mathrm{Im}\mu_G \supset \Delta$. Fix any $xp^0 + yp^1 \in \Delta$. Then there exists a unique $k_0 \in \{ 0, \dots, n+1 \}$ such that $-h_{k_0 -1} >y \ge -h_{k_0}$ holds. We want to show that there exists $[z,w]_K \in M(\alpha,0)$ such that $\mu_G([z,w]_K)=xp^0 +yp^1$. By (\ref{ineq}), it is equivalent to show that there exist $(z,w) \in \mathbb{C}^{n+1} \times \mathbb{C}^{n+1}$ and $d_0 \in \mathbb{C}$ such that (\ref{zw2}) holds and $f_y(d_0)=x$, where $f_y \colon \mathbb{C} \to \mathbb{R}$ is a function defined by \begin{align}\label{def-f} f_y(d)= \frac{1}{2}\sum_{i=0}^n\{ \sqrt{(y+ h_i)^2+|d|^2} +(y+ h_i) \}. \end{align} On the other hand, since $xp^0 + yp^1 \in \Delta$, we have \begin{align}\label{prop-f} f_y(0)= \frac{1}{2}\sum_{i=0}^n\{ |y+ h_i| +(y+ h_i) \} =\sum_{i=k_0}^n(y+ h_i) \le x. \end{align} Since $f_y$ is continuous, increasing in $|d|$, and $f_y(d) \to \infty$ as $|d| \to \infty$, there exists $d_0 \in \mathbb{C}$ such that $f_y(d_0)=x$. Moreover, by (\ref{zw}), there exists $(z,w) \in \mathbb{C}^{n+1} \times \mathbb{C}^{n+1}$ such that (\ref{zw2}) holds. So we see that there exists $[z,w]_K \in M(\alpha,0)$ such that $\mu_G([z,w]_K)=xp^0 +yp^1$. Thus we see that $\mathrm{Im}\mu_G \supset \Delta$. \\ $(2)$ Fix any $xp^0 + yp^1 \in \mathrm{Im}\mu_G$. In the proof of $(1)$, we constructed $[z,w]_K \in \mu_G^{-1}(xp^0 + yp^1)$. In the construction, there are some ambiguities in the choice of $d_0 \in \mathbb{C}$ and $(z,w) \in \mathbb{C}^{n+1} \times \mathbb{C}^{n+1}$. It is easy to see that the ambiguity for $[z,w]_K$ corresponds to the $G$-action.
\hfill$\Box$ Set \begin{align}\label{equ-lk} l_k= \{ xp^0 + yp^1| -h_{k-1} \ge y \ge -h_k, ~x= \sum_{i=k}^n(y+ h_i) \} \end{align} for $k=0,\dots, n+1$, and set $v_k=l_k \cap l_{k+1}$ for $k=0,\dots, n$. Then the image of the moment map $\mu_G$ is decomposed in the following way (Fig.~1): \begin{align}\label{decomposition} \mathrm{Im} \mu_G = \mathrm{int}(\mathrm{Im} \mu_G) \cup ~\bigcup_{k=0}^{n+1}\mathrm{int}(l_k) ~ \cup \{ v_0, \dots, v_n \}, \end{align} where $\mathrm{int}(\mathrm{Im} \mu_G)$ and $\mathrm{int}(l_k)$ are the interior of $\mathrm{Im} \mu_G$ and $l_k$, respectively. Note that $\bigcup_{k=1}^n \mu_G^{-1}(l_k)$ is the exceptional divisor for the minimal resolution of $\mathbb{C}^2/\mathbb{Z}_{n+1}$. Next we determine the isotropy subgroup $G_{[z,w]_K}$ of $G$ at $[z,w]_K \in M(\alpha,0)$ as follows. \begin{proposition}\label{prop-isotropy} $(1)$ If $\mu_G([z,w]_K) \in \mathrm{int}(\mathrm{Im} \mu_G)$, then $G_{[z,w]_K}$ is the trivial subgroup and $z_i \ne 0$, $w_i \ne 0$ for $i=0,\dots, n$. \\ $(2)$ If $\mu_G([z,w]_K) \in \mathrm{int}(l_{k_0})$, then $G_{[z,w]_K}$ is the subtorus $H_{1,-(n+1-k_0)}$, whose Lie algebra is generated by $p_0 -(n+1-k_0)p_1$. Moreover, the following holds. \begin{align}\label{zw3} &z_i=0 \quad\text{for $i=0, \dots, k_0-1$},\quad w_i \ne 0 \quad\text{for $i=0, \dots, k_0-1$}\nonumber \\ &z_i \ne 0 \quad\text{for $i=k_0, \dots, n$},\hspace{9mm} w_i=0 \quad\text{for $i=k_0, \dots, n$}. \end{align} $(3)$ If $\mu_G([z,w]_K) =v_{k_0}$, then $G_{[z,w]_K}=G$ and \begin{align}\label{zw4} &z_i=0 \quad\text{for $i=0, \dots, k_0$},\hspace{9.7mm} w_i \ne 0 \quad\text{for $i=0, \dots, k_0-1$}\nonumber \\ &z_i \ne 0 \quad\text{for $i=k_0+1, \dots, n$},\quad w_i=0 \quad\text{for $i=k_0, \dots, n$}. \end{align} In particular, $\mu_G^{-1}(v_k)$ is a single point in $M(\alpha,0)$ for $k=0,\dots, n$.
\end{proposition} {\it Proof.} By the proof of Proposition \ref{prop-image}, the condition $xp^0+yp^1 \in \mathrm{Im} \mu_G$ holds if and only if there exist $(z,w) \in \mathbb{C}^{n+1} \times \mathbb{C}^{n+1}$ and $d_0 \in \mathbb{C}$ such that (\ref{zw2}) holds and $f_y(d_0)=x$, where the function $f_y$ is defined by (\ref{def-f}). \\ $(1)$ $\mu_G([z,w]_K) \in \mathrm{int}(\mathrm{Im} \mu_G)$ if and only if there exists $k_0 \in \{ 0, \dots, n+1\}$ such that $-h_{k_0 -1} >y \ge -h_{k_0}$ and $x > \sum_{i=k_0}^n(y+ h_i)$, where $xp^0+yp^1=\mu_G([z,w]_K)$. Since $x > \sum_{i=k_0}^n(y+ h_i)$, (\ref{prop-f}) implies $d_0 \ne 0$. Then (\ref{zw2}) implies the claim. \\ $(2)$ $\mu_G([z,w]_K) \in \mathrm{int}(l_{k_0})$ if and only if $-h_{k_0 -1} >y > -h_{k_0}$ and $x = \sum_{i=k_0}^n(y+ h_i)$, where $xp^0+yp^1=\mu_G([z,w]_K)$. Since $x = \sum_{i=k_0}^n(y+ h_i)$, (\ref{prop-f}) implies $d_0 = 0$. Then (\ref{zw2}) implies (\ref{zw3}). Moreover, the isotropy subgroup can be easily determined by the following. \begin{align*} [\begin{pmatrix} 0 \\ \vdots \\ 0 \\ z_{k_0} \\ z_{k_0 +1} \\ \vdots \\ z_n \end{pmatrix}\!\!,\begin{pmatrix} w_0 \\ \vdots \\ w_{k_0 -1} \\ 0 \\ 0 \\ \vdots \\ 0\end{pmatrix}]_K (e^{\sqrt{-1}s}, e^{\sqrt{-1}t}) =[\begin{pmatrix} 0 \\ \vdots \\ 0 \\ z_{k_0}e^{\sqrt{-1}\{(n+1-k_0)s+ t \}} \\ z_{k_0 +1} \\ \vdots \\ z_n \end{pmatrix}\!\!,\begin{pmatrix} w_0 \\ \vdots \\ w_{k_0 -1} \\ 0 \\ 0 \\ \vdots \\ 0\end{pmatrix}]_K. \end{align*} $(3)$ $\mu_G([z,w]_K) =v_{k_0}$ if and only if $y = -h_{k_0}$ and $x = \sum_{i=k_0}^n(y+ h_i)$, where $xp^0+yp^1=\mu_G([z,w]_K)$. By the same argument as in the proof of $(2)$, the claim follows. \hfill $\Box$ Next we define a good local coordinate around a fixed point. Let $v_{k_0} \in \mathrm{Im}\mu_G$ be as in (\ref{decomposition}) for $k_0 =0, 1, \dots, n$. Since $\mu_G^{-1}(v_{k_0})$ is a single point, we set $P_{k_0}=\mu_G^{-1}(v_{k_0})$.
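The explicit reconstruction of points from moment-map values, used in the proofs of Propositions \ref{prop-image} and \ref{prop-isotropy}, can also be checked numerically. The following sketch (illustrative only; it assumes Python with \texttt{numpy}, and the sample values $n=2$, $(h_0,h_1,h_2)=(0,1,2)$, $y=-1/2$, $x=3$ are ours) solves $f_y(d_0)=x$ by bisection and recovers a representative $(z,w)$ satisfying (\ref{zw2}) and (\ref{zw}).

```python
import numpy as np

h = np.array([0.0, 1.0, 2.0])  # h_0 < h_1 < h_2, i.e. n = 2 (illustrative choice)
y, x = -0.5, 3.0               # here f_y(0) = 2 < x, so x p^0 + y p^1 lies in int(Im mu_G)

def f_y(d):
    # f_y(d) = (1/2) sum_i { sqrt((y+h_i)^2 + |d|^2) + (y+h_i) }
    return 0.5 * np.sum(np.sqrt((y + h) ** 2 + d ** 2) + (y + h))

# f_y is continuous and increasing in d >= 0, so solve f_y(d0) = x by bisection
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f_y(mid) < x else (lo, mid)
d0 = lo

# recover a representative (z, w) with real, non-negative entries
z = np.sqrt(np.sqrt((y + h) ** 2 + d0 ** 2) + (y + h))
w = np.sqrt(np.sqrt((y + h) ** 2 + d0 ** 2) - (y + h))

# (z, w) satisfies the level-set equations: (1/2)(|z_i|^2 - |w_i|^2) - h_i = y,
# the products z_i w_i are all equal to d0, and (1/2) sum |z_i|^2 returns x
assert np.allclose(0.5 * (z ** 2 - w ** 2) - h, y)
assert np.allclose(z * w, d0)
assert np.isclose(0.5 * np.sum(z ** 2), x)
```

Choosing $x$ equal to the boundary value $\sum_{i\ge k_0}(y+h_i)$ instead forces $d_0=0$, reproducing the vanishing pattern of Proposition \ref{prop-isotropy}~$(2)$.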
Define an open set $U_{k_0}$ in $M(\alpha,0)$ by \begin{align*} U_{k_0}= \{ [z,w]_K ~|~ \text{$w_i \ne 0$ for $i=0, \dots, k_0 -1$, $z_j \ne 0$ for $j=k_0 +1, \dots, n$}\}. \end{align*} Define a map $\varphi_{k_0} \colon U_{k_0} \to \mathbb{C}^2$ by \begin{align}\label{equ-coord} \varphi_{k_0}([z,w]_K)= \Big( z_{k_0} \prod_{i=0}^{k_0 -1}&\frac{\sqrt{2(h_{k_0}-h_i)}}{w_i}\prod_{j=k_0 +1}^{n}\frac{z_j}{\sqrt{2(h_j-h_{k_0})}}, \nonumber \\ & w_{k_0} \prod_{i=0}^{k_0 -1}\frac{w_i}{\sqrt{2(h_{k_0}-h_i)}}\prod_{j=k_0 +1}^{n}\frac{\sqrt{2(h_j-h_{k_0})}}{z_j}\Big). \end{align} It is easy to see that $\varphi_{k_0} \colon U_{k_0} \to \mathbb{C}^2$ is well-defined and that $P_{k_0} \in U_{k_0}$ and $\varphi_{k_0}(P_{k_0})=(0,0)$. Moreover, we have the following. \begin{proposition}\label{prop-coord} $(1)$ $\varphi_{k_0} \colon U_{k_0} \to \mathbb{C}^2$ is a holomorphic local coordinate of $M(\alpha,0)$ with respect to $I_1$. \\ $(2)$ If we write $\varphi_{k_0}(q)=(u_1(q),u_2(q))$ for $q \in U_{k_0}$ and $u_i=x_i + \sqrt{-1}y_i$, where $x_i, y_i \in \mathbb{R}$, for $i=1,2$, then $(\frac{\partial}{\partial x_1})_{P_{k_0}}, (\frac{\partial}{\partial y_1})_{P_{k_0}}, (\frac{\partial}{\partial x_2})_{P_{k_0}}, (\frac{\partial}{\partial y_2})_{P_{k_0}}$ is an orthonormal basis of $T_{P_{k_0}}(M(\alpha,0))$. \end{proposition} {\it Proof.} $(1)$ To prove the claim, we need to describe $(M(\alpha,0), I_1)$ as a quotient in geometric invariant theory. Note that $M(\alpha,0)$ is an example of toric hyperK\"ahler varieties, which were introduced in \cite{BD}. We refer the readers to \cite{Ko} for basic properties of toric hyperK\"ahler varieties. Define the action of $T^{n+1}_\mathbb{C} = \{ \zeta=(\zeta_0, \cdots, \zeta_n) ~|~ \zeta_i \in \mathbb{C}^\times \quad \text{for $i=0, \dots, n$} \}$, where $\mathbb{C}^\times = \mathbb{C} \setminus \{ 0 \}$, on $\mathbb{H}^{n+1} = \mathbb{C}^{n+1} \times \mathbb{C}^{n+1}$ by (\ref{equ-action}). 
Define a group homomorphism $\rho_\mathbb{C} \colon T^{n+1}_\mathbb{C} \to \mathbb{C}^\times$ by $\rho_\mathbb{C}(\zeta)=\zeta_0 \dots \zeta_n$. Let $K_\mathbb{C}$ be the kernel of $\rho_\mathbb{C}$, which is the complexification of the torus $K$. If we set $$\mu_{K, \mathbb{C}}=\mu_K^2+\sqrt{-1}\mu_K^3 \colon \mathbb{C}^{n+1} \times \mathbb{C}^{n+1} \to \mathfrak{k}^*_\mathbb{C},~~\text{where $\mathfrak{k}^*_\mathbb{C}=\mathfrak{k}^* \otimes _\mathbb{R} \mathbb{C}$, }$$ then we have \begin{align*} \mu_{K, \mathbb{C}}^{-1}(0)= \{ (z,w) \in \mathbb{C}^{n+1} \times \mathbb{C}^{n+1}~|~ z_0w_0= \dots=z_n w_n \}. \end{align*} So $\mu_{K, \mathbb{C}}^{-1}(0)$ is an algebraic variety, on which $K_\mathbb{C}$ acts. Then we have a notion of stability for the action of $K_\mathbb{C}$ on $\mu_{K, \mathbb{C}}^{-1}(0)$ in geometric invariant theory according to $\alpha \in \mathfrak{k}^*$, which we have fixed in (\ref{alpha}). Since the torus $K$ acts on $\mu_K^{-1}(\alpha,0)$ freely in our case, $\alpha$-semistability is equivalent to $\alpha$-stability. So, if we denote the set of $\alpha$-stable points by $\mu_{K, \mathbb{C}}^{-1}(0)^{\alpha-st}$, then we have \begin{align*} (M(\alpha,0), I_1)=\mu_{K, \mathbb{C}}^{-1}(0)^{\alpha-st}/K_\mathbb{C}, \end{align*} where the right hand side is a geometric quotient. A point in $\mu_{K, \mathbb{C}}^{-1}(0)^{\alpha-st}/K_\mathbb{C}$, which is represented by $(z,w) \in \mu_{K, \mathbb{C}}^{-1}(0)^{\alpha-st}$, is denoted by $[z,w]_{K_\mathbb{C}}$. \begin{claim} Set \begin{align*} \tilde{U}_{k_0}\!=\! \{ (z,w) \in \mu_{K, \mathbb{C}}^{-1}(0) ~|~ \text{$w_i \!\ne\! 0$ for $i=0, \dots, k_0 \!-\!1$, $z_j \!\ne\! 0$ for $j=k_0 \!+\!1, \dots, n$}\}. \end{align*} Then $\tilde{U}_{k_0} \subset \mu_{K, \mathbb{C}}^{-1}(0)^{\alpha-st}$ holds.
\end{claim} {\it Proof.} Due to Lemma 3.6 in \cite{Ko}, a point $(z,w) \in \mu_{K, \mathbb{C}}^{-1}(0)$ is $\alpha$-stable if and only if \begin{align}\label{equ-stable} \alpha \in \sum_{j=0}^n \mathbb{R}_{>0}|z_j|^2 \iota^* e^j - \sum_{i=0}^n \mathbb{R}_{>0}|w_i|^2 \iota^* e^i. \end{align} On the other hand, by (\ref{alpha}) and (\ref{equ-f}), we have \begin{align}\label{equ-alpha2} \alpha \in \sum_{j=k_0 +1}^n \mathbb{R}_{>0} \iota^* e^j - \sum_{i=0}^{k_0 -1} \mathbb{R}_{>0} \iota^* e^i. \end{align} So the claim follows from (\ref{equ-stable}) and (\ref{equ-alpha2}). \hfill $\Box$ So we have $\tilde{U}_{k_0} / K_\mathbb{C} = U_{k_0}$. Moreover, $\varphi_{k_0}([z,w]_{K_\mathbb{C}})$ is also given by the right hand side of (\ref{equ-coord}) for $(z,w) \in \tilde{U}_{k_0}$. So $\varphi_{k_0} \colon U_{k_0} \to \mathbb{C}^2$ is holomorphic with respect to $I_1$. If we write $\varphi_{k_0}([z,w]_{K_\mathbb{C}})=(u_1(z,w),u_2(z,w))$ for $(z,w) \in \tilde{U}_{k_0}$, then we have \begin{align}\label{equ-holo} [z,w]_{K_\mathbb{C}} = [\begin{pmatrix} \vdots \\ \frac{z_i w_i}{\sqrt{2(h_{k_0}-h_i)}} \\ \vdots \\ u_1(z,w) \\ \vdots \\ \sqrt{2(h_j-h_{k_0})} \\ \vdots \end{pmatrix},\begin{pmatrix} \vdots \\ \sqrt{2(h_{k_0}-h_i)} \\ \vdots \\ u_2(z,w) \\ \vdots \\ \frac{z_j w_j}{\sqrt{2(h_j-h_{k_0})}} \\ \vdots \end{pmatrix}]_{K_\mathbb{C}}, \end{align} where $u_1(z,w), u_2(z,w)$ in the right hand side are the $k_0$-th components. Since $z_0 w_0 = \dots = z_n w_n = u_1(z,w)u_2(z,w)$, the right hand side in (\ref{equ-holo}) depends only on $u_1(z,w),~u_2(z,w)$. So it can be seen that $ \varphi_{k_0} \colon U_{k_0} \to \mathbb{C}^2$ is bijective and, moreover, a homeomorphism. Thus we see that $\varphi_{k_0} \colon U_{k_0} \to \mathbb{C}^2$ is a holomorphic local coordinate.
\\ $(2)$ Define smooth curves $\tilde{\gamma}_i \colon (-\epsilon, \epsilon) \to \mu_K^{-1}(\alpha,0)$ for $i=1,2$ by \begin{align*} &\tilde{\gamma}_1(t) = \Big(\begin{pmatrix} \vdots \\ 0 \\ \vdots \\ t \\ \vdots \\ \sqrt{2(h_j-h_{k_0})+t^2} \\ \vdots \end{pmatrix},\begin{pmatrix} \vdots \\ \sqrt{2(h_{k_0}-h_i)-t^2} \\ \vdots \\ 0 \\ \vdots \\ 0 \\ \vdots \end{pmatrix}\Big), \\ &\tilde{\gamma}_2(t) = \Big(\begin{pmatrix} \vdots \\ 0 \\ \vdots \\ 0 \\ \vdots \\ \sqrt{2(h_j-h_{k_0})-t^2} \\ \vdots \end{pmatrix},\begin{pmatrix} \vdots \\ \sqrt{2(h_{k_0}-h_i)+t^2} \\ \vdots \\ t \\ \vdots \\ 0 \\ \vdots \end{pmatrix}\Big), \end{align*} for small $\epsilon > 0$. Then $\tilde{\gamma}_i$ induces smooth curves $\gamma_i \colon (-\epsilon, \epsilon) \to M(\alpha,0)$ for $i=1,2$. Then we have $\gamma_1(0)=\gamma_2(0)=P_{k_0}$ and $\frac{d(\varphi_{k_0} \circ \gamma_i)}{dt}(0)=(\frac{\partial}{\partial x_i})_{P_{k_0}}$. Moreover, $\frac{d \tilde{\gamma}_1}{dt}(0)$ is perpendicular to $\frac{d \tilde{\gamma}_2}{dt}(0)$, and $\frac{d \tilde{\gamma}_i}{dt}(0)$ are also perpendicular to the $K$-orbit through $\tilde{\gamma}_1(0)=\tilde{\gamma}_2(0)$. Then the claim follows. \hfill $\Box$ \subsection{Involutions on Ricci-flat ALE spaces of type $A_n$}\label{sec-involution} Define a map $\sigma \colon M(\alpha,0) \to M(\alpha,0)$ by \begin{align*} \sigma([z,w]_K)=[\overline{z},\overline{w}]_K. \end{align*} It is easy to see that this map is well-defined. Then the following holds. \begin{proposition}\label{prop-involution} $(1)$ $\sigma \colon M(\alpha,0) \to M(\alpha,0)$ is an isometric involution, which is anti-holomorphic with respect to $I_1$ and holomorphic with respect to $I_2$. \\ $(2)$ $\sigma([z,w]_K (\gamma_0,\gamma_1))=\sigma([z,w]_K) (\gamma_0,\gamma_1)^{-1}$ holds for any $[z,w]_K \in M(\alpha,0)$ and $(\gamma_0,\gamma_1) \in G$. 
\end{proposition} {\it Proof.} $(1)$ Define a map $\widetilde{\sigma} \colon \mathbb{H}^{n+1} \to \mathbb{H}^{n+1}$ by $\widetilde{\sigma}(z,w)=(\overline{z},\overline{w})$ for $(z,w) \in \mathbb{C}^{n+1}\times \mathbb{C}^{n+1}=\mathbb{H}^{n+1}$. It is easy to see that $\widetilde{\sigma}$ is an isometric involution, which is anti-holomorphic with respect to $I_1$ and holomorphic with respect to $I_2$. Moreover, it is easy to see that the differential $\widetilde{\sigma}_{*(z,w)} \colon T_{(z,w)}\mathbb{H}^{n+1} \to T_{\widetilde{\sigma}(z,w)}\mathbb{H}^{n+1}$ induces an isometry from the horizontal subspace $H_{(z,w)}$ to $H_{\widetilde{\sigma}(z,w)}$ in (\ref{o-decomp}) for $(z,w) \in \mu_K^{-1}(\alpha,0)$. Therefore the induced map $\sigma$ is also an isometric involution, which is anti-holomorphic with respect to $I_1$ and holomorphic with respect to $I_2$. \\ $(2)$ is obvious. \hfill $\Box$ To describe the set of fixed points of the involution $\sigma \colon M(\alpha,0) \to M(\alpha,0)$, we set $G_\mathbb{R}=G \cap \mathbb{R}^2$, which consists of four elements $(1,1)$, $(-1,1)$, $(1,-1)$, $(-1,-1)$. \begin{proposition}\label{prop-fixedpts} Let $M^\sigma$ be the set of fixed points of $\sigma$. Then the following holds. \\ $(1)$ $M^\sigma= \{ \mu_K^{-1}(\alpha,0) \cap (\mathbb{R}^{n+1} \times \mathbb{R}^{n+1}) \}/(K \cap \mathbb{R}^{n+1}).$ \\ $(2)$ For any $xp^0 +yp^1 \in \mathrm{Im} \mu_G$, $\mu_G^{-1}(xp^0 +yp^1) \cap M^\sigma$ is a single $G_\mathbb{R}$-orbit. \\ $(3)$ Take a subset $M^\sigma_{++}$ of $M^\sigma$ so that $\mu_G \colon M(\alpha,0) \to \mathfrak{g}^*$ induces a homeomorphism from $M^\sigma_{++}$ to $\mathrm{Im} \mu_G$. Set \begin{align*} &M^\sigma_{-+}= \{ [z,w]_K(-1, 1) ~|~ [z,w]_K \in M^\sigma_{++} \}, \\ &M^\sigma_{+-}= \{ [z,w]_K(1, -1) ~|~ [z,w]_K \in M^\sigma_{++} \}, \\ &M^\sigma_{--}= \{ [z,w]_K(-1, -1) ~|~ [z,w]_K \in M^\sigma_{++} \}. \end{align*} Then $M^\sigma = M^\sigma_{++} \cup M^\sigma_{-+} \cup M^\sigma_{+-} \cup M^\sigma_{--}$ holds.
Moreover, for $k=0, \dots, n+1$, \begin{align*} &\text{$M^\sigma_{++}$ and $M^\sigma_{-+}$ are glued along $l_k$ if and only if $n-k$ is odd,}\\ &\text{$M^\sigma_{+-}$ and $M^\sigma_{--}$ are glued along $l_k$ if and only if $n-k$ is odd,}\\ &\text{$M^\sigma_{++}$ and $M^\sigma_{--}$ are glued along $l_k$ if and only if $n-k$ is even,}\\ &\text{$M^\sigma_{+-}$ and $M^\sigma_{-+}$ are glued along $l_k$ if and only if $n-k$ is even.} \end{align*} Therefore, $M^\sigma$ is homeomorphic to \begin{align*} &\text{a two-holed orientable surface of genus $\frac{n-1}{2}$ if $n$ is odd,}\\ &\text{a one-holed orientable surface of genus $\frac{n}{2}$ if $n$ is even.} \end{align*}\end{proposition} {\it Proof.} $(1)$ Set $M_\mathbb{R}=\{ \mu_K^{-1}(\alpha,0) \cap (\mathbb{R}^{n+1} \times \mathbb{R}^{n+1}) \}/(K \cap \mathbb{R}^{n+1})$. Since $M^\sigma \supset M_\mathbb{R}$ is obvious, we show that $M^\sigma \subset M_\mathbb{R}$. Fix $[z,w]_K \in M^\sigma$. Then there exists $\zeta \in K$ such that $(\overline{z},\overline{w})=(z\zeta, w\zeta^{-1})$. So we have $(\overline{z}\zeta^{-\frac{1}{2}},\overline{w}\zeta^\frac{1}{2})=(z\zeta^\frac{1}{2},w\zeta^{-\frac{1}{2}})$. This implies that $(z\zeta^\frac{1}{2},w\zeta^{-\frac{1}{2}}) \in \mu_K^{-1}(\alpha,0) \cap (\mathbb{R}^{n+1} \times \mathbb{R}^{n+1})$. Thus we see that $[z,w]_K = [z\zeta^\frac{1}{2},w\zeta^{-\frac{1}{2}}]_K \in M_\mathbb{R}$. \\ $(2)$ Fix $xp^0 + yp^1 \in \mathrm{Im} \mu_G$. Since $\mu_G^{-1}(xp^0 + yp^1)$ is a single $G$-orbit by Proposition \ref{prop-image} $(2)$, it is enough to show that $\mu_G^{-1}(xp^0 + yp^1) \cap M^\sigma \ne \emptyset$. As in the proof of Proposition \ref{prop-image} $(1)$, it is equivalent to show that there exist $(z,w) \in \mathbb{R}^{n+1} \times \mathbb{R}^{n+1}$ and $d_0 \in \mathbb{R}$ such that (\ref{zw2}) holds and $f_y(d_0)=x$, where $f_y \colon \mathbb{C} \to \mathbb{R}$ is a function defined by (\ref{def-f}).
In fact, such $(z,w)$ and $d_0$ exist by the same argument as in the proof of Proposition \ref{prop-image} $(1)$. \\ $(3)$ By Proposition \ref{prop-isotropy}, for $k=0,\dots,n+1$, we have $G_{[z,w]_K}=H_{1,-(n+1-k)}$ if $\mu_G([z,w]_K) \in \mathrm{int}(l_k)$, where $\mathrm{int}(l_k)$ is the interior of $l_k$. On the other hand, for $k=0,\dots,n+1$, we have \begin{align*} & \text{$(-1,1) \in H_{1,-(n+1-k)}$ if and only if $n-k$ is odd,} \\ & \text{$(-1,-1) \in H_{1,-(n+1-k)}$ if and only if $n-k$ is even,} \\ & \text{$(1,-1) \not\in H_{1,-(n+1-k)}$ for any $k$.} \end{align*} So the claim follows. \hfill $\Box$ \subsection{Lagrangian mean curvature flows in Ricci-flat ALE spaces of type $A_n$} Suppose that $a,b \in \mathbb{Z}$ are coprime. Let $H_{a,b}$ be the subtorus of $G$, whose Lie algebra $\mathfrak{h}_{a,b}$ is generated by $ap_0+bp_1 \in \mathfrak{g}$. Set $w_{a,b} =ap_0+bp_1 \in \mathfrak{h}_{a,b}$. Then there exists $w^{a,b} \in \mathfrak{h}_{a,b}^*$ such that $\langle w^{a,b}, w_{a,b} \rangle =1$. Let $\iota_{a,b} \colon H_{a,b} \to G$ be the embedding, and $\iota_{a,b*} \colon \mathfrak{h}_{a,b} \to \mathfrak{g}$, $\iota_{a,b}^* \colon \mathfrak{g}^* \to \mathfrak{h}_{a,b}^*$ the induced maps. By Propositions \ref{prop-moment} and \ref{prop-involution}, we have the following. \begin{lemma}\label{lemma-hab} $(1)$ $\iota_{a,b}^*p^0=aw^{a,b}$, $\iota_{a,b}^*p^1=b w^{a,b}$. \\ $(2)$ The moment map $\mu_{H_{a,b}} = \iota_{a,b}^* \circ \mu_G\colon M(\alpha,0) \to \mathfrak{h}_{a,b}^*$ for the action of $H_{a,b}$ on $(M(\alpha,0), \omega_1)$ is given by \begin{align*} \mu_{H_{a,b}}([z,w]_K)=\{\frac{a}{2}(\sum_{i=0}^n |z_i|^2) + \frac{b}{2}(|z_k|^2-|w_k|^2)-b h_k \} w^{a,b}. \end{align*} \\ $(3)$ $R_{\mathrm{Exp}_{H_{a,b}}\xi}^*\omega_\mathbb{C}=e^{\sqrt{-1}a \langle w^{a,b}, \xi \rangle}\omega_\mathbb{C}$ holds for $\xi \in \mathfrak{\mathfrak{h}}_{a,b}$, where $\mathrm{Exp}_{H_{a,b}} \colon \mathfrak{h}_{a,b} \to H_{a,b}$ is the exponential map for $H_{a,b}$. 
\\ $(4)$ $\sigma([z,w]_K h)=\sigma([z,w]_K) h^{-1}$ holds for any $[z,w]_K \in M(\alpha,0)$ and $h \in H_{a,b}$. \end{lemma} Denote the sets of fixed points of the action of $H_{a,b}$, $G$ on $M(\alpha,0)$ by $M(\alpha,0)^{H_{a,b}}$, $M(\alpha,0)^G$, respectively. Then we have the following. \begin{lemma}\label{chi3} Suppose that $a,b \in \mathbb{Z}$ are coprime and that $b \ne- la$ for $l=0,-1, \dots, \\-(n+1)$. Then the following holds. \\ $(1)$ $M(\alpha,0)^{H_{a,b}}=M(\alpha,0)^G$ holds. In particular, $(w_{a,b})^\#_p \ne 0$ holds for each $p \in L=M^\sigma \setminus M(\alpha,0)^G$. \\ $(2)$ Let $\chi$ be the vector field on $L=M^\sigma \setminus M(\alpha,0)^G$, which is defined in Theorem \ref{construction} $(4)$. Then there exist constants $C_1 >0$ and $C_2 >0$ such that, if $p=[z,w]_K \in L$ satisfies $|p| \ge C_1$, then $|\chi_p| \le \frac{C_2}{|p|}$ holds, where $|p|= \sqrt{|z|^2 + |w|^2}$. \end{lemma} {\it Proof.} $(1)$ follows from Proposition \ref{prop-isotropy}. \\ $(2)$ The vector field $\chi$ is defined by $\chi_p=I_p \tilde{\xi}(p)^\#_p$, where $\tilde{\xi}(p) \in \mathfrak{h}_{a,b}$ satisfies $g_p(\tilde{\xi}(p)^\#_p, \eta^\#_p)= \langle aw^{a,b}, \eta \rangle$ for any $\eta \in \mathfrak{h}_{a,b}$. If we set $\tilde{\xi}(p)=\alpha_p w_{a,b}$, where $\alpha_p \in \mathbb{R}$, then we have $g_p(\alpha_p (w_{a,b})^\#_p, (w_{a,b})^\#_p)= a$. So we have $\alpha_p= \frac{a}{g_p((w_{a,b})^\#_p, (w_{a,b})^\#_p)}$. Thus we have $|\chi_p|= |I_p \tilde{\xi}(p)^\#_p|= |\alpha_p (w_{a,b})^\#_p|=\frac{|a|}{|(w_{a,b})^\#_p|}$. Therefore it is enough to show the following claim. \begin{claim}\label{wab} There exist constants $C_1 >0$ and $C_3 >0$ such that, if $p \in M(\alpha,0)$ satisfies $|p| \ge C_1$, then $|(w_{a,b})^\#_p| \ge C_3|p|$ holds. \end{claim} {\it Proof.} Set $\widetilde{S}= \{ (z,w) \in \mathbb{C}^{n+1} \times \mathbb{C}^{n+1}~|~ |z|^2 + |w|^2=1 \}$. 
Since the action of $K$ on $ \mathbb{C}^{n+1} \times \mathbb{C}^{n+1}$ preserves $\widetilde{S}$, we can define $S(M(\alpha^\prime,0))=(\mu_K^{-1}(\alpha^\prime,0) \cap \widetilde{S})/K$, which is a compact subset of $M(\alpha^\prime,0)$ for each $\alpha^\prime \in \mathfrak{k}^*$. Since $b \ne 0, -(n+1)a$, it is easy to see that $(w_{a,b})^\#_p \ne 0$ for each $p \in S(M(0,0))$. So there exist a constant $C_3 >0$ and an open neighborhood $U_O$ of the origin $O \in \mathfrak{k}^*$ such that $|(w_{a,b})^\#_p| \ge C_3$ holds for each $\alpha^\prime \in U_O$ and $p \in S(M(\alpha^\prime,0))$. On the other hand, recall that we have fixed $\alpha \in \mathfrak{k}^*$. Note that, if $p =[z,w]_K \in M(\alpha,0)$, then $\frac{p}{|p|}=[\frac{z}{|p|},\frac{w}{|p|}]_K \in S(M(\frac{\alpha}{|p|^2},0))$ and $|(w_{a,b})^\#_{\frac{p}{|p|}}|=\frac{1}{|p|}|(w_{a,b})^\#_p|$. There exists a constant $C_1 >0$ such that, if $p \in M(\alpha,0)$ satisfies $|p| \ge C_1$, then $\frac{\alpha}{|p|^2} \in U_O$. Then we have $\frac{1}{|p|}|(w_{a,b})^\#_p|=|(w_{a,b})^\#_{\frac{p}{|p|}}| \ge C_3$. So Claim \ref{wab} follows. \hfill $\Box$ \\ Thus we finish the proof of Lemma \ref{chi3}. \hfill $\Box$ So we have the following theorem. \begin{theorem}\label{LMCFALE} Suppose that $a,b \in \mathbb{Z}$ are coprime and that $b \ne -la$ for $l=0,-1, \dots, \\ -(n+1)$. Fix $c_0 \in \mathfrak{h}_{a,b}^*$ and set $c_t= c_0 - t a w^{a,b} \in \mathfrak{h}_{a,b}^*$. Suppose that $\mu_{H_{a,b}}^{-1}(c_t) \cap M(\alpha,0)^G = \emptyset$ for each $t \in [0,T)$. Then the following holds. \\ $(1)$ Set $V_{c_t} = \mu_{H_{a,b}}^{-1}(c_t) \cap L$, where $L=M^\sigma \setminus M(\alpha,0)^G$. Let $\chi$ be the vector field on $L$, which is defined in Theorem \ref{construction} $(4)$. Then, for each $p \in V_{c_0}$, the integral curve $\{ \gamma_p(t) \}$ of the vector field $\chi$ with $\gamma_p(0)=p$ exists for $t \in [0,T)$. Moreover, $\gamma_p(t) \in V_{c_t}$ holds.
\\ $(2)$ $\{ F_t \colon V_{c_0} \times H_{a,b} \to M(\alpha,0) \}_{t \in [0,T)}$, defined by $F_t(p,h)=\gamma_p(t)h$, is a Lagrangian mean curvature flow. \end{theorem} {\it Proof.} $(1)$ By the same arguments as in the proof of Proposition \ref{prop-existence}, the first claim follows from Lemma \ref{chi3}. The second claim follows from Theorem \ref{construction} $(5)$. \\ $(2)$ Theorem \ref{construction} $(5)$, together with Lemma \ref{lemma-hab} and Proposition \ref{construction2}, implies the claim. \hfill $\Box$ Let us describe the set $V_{c_0}=\mu_{H_{a,b}}^{-1}(c_0) \cap L$. If we set $r_{c_0}=(\iota_{a,b}^*)^{-1}(c_0) \cap \mathrm{Im}\mu_G$, then $r_{c_0}$ is a segment or a ray in $\mathfrak{g}^*$. $V_{c_0}$ is compact if and only if $r_{c_0}$ is bounded. Since $\mu_{H_{a,b}}^{-1}(c_0)= \mu_G^{-1}(r_{c_0})$, we have \begin{align}\label{equ-decomp2} V_{c_0}&=\mu_{H_{a,b}}^{-1}(c_0) \cap L \nonumber \\ &= \mu_G^{-1}(r_{c_0}) \cap M^\sigma \nonumber \\ &= (\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{++}) \cup (\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{-+}) \nonumber \\ & \hspace*{20mm}\cup (\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{+-}) \cup (\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{--}) . \end{align} Thus $V_{c_0}$ is decomposed into four pieces \begin{align*} \text{$\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{++}$, $\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{-+}$, $\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{+-}$, $\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{--}$.} \end{align*} Note that each piece is homeomorphic to the set $r_{c_0}$. By Proposition \ref{prop-fixedpts} $(3)$, we see how these pieces are connected to each other. Thus we can determine whether $V_{c_0}$ is connected or not. Note that the four pieces in (\ref{equ-decomp2}) are transformed into themselves or into other pieces by the action of $$G_\mathbb{R}= \{ (1,1), (1,-1),(-1,1), (-1,-1) \}.
$$ Since we have assumed that $a,b \in \mathbb{Z}$ are coprime, it is easy to see that \begin{align}\label{hab} & \text{$(1,-1) \in H_{a,b}$ if and only if $a$ is even and $b$ is odd,} \nonumber \\ & \text{$(-1,1) \in H_{a,b}$ if and only if $a$ is odd and $b$ is even,} \\ & \text{$(-1,-1) \in H_{a,b}$ if and only if $a$ is odd and $b$ is odd.} \nonumber \end{align} So, in any case, if we set $V_{c_0}^0=V_{c_0} \cap \mu_G^{-1}(\mathrm{int}(\mathrm{Im}\mu_G))$, we see that the restriction $F_0|_{V_{c_0}^0 \times H_{a,b}} \colon V_{c_0}^0 \times H_{a,b} \to M(\alpha,0)$ is a two-to-one map onto its image. Thus we see that the map $F_t \colon V_{c_0} \times H_{a,b} \to M(\alpha,0)$ is generically a two-to-one map onto its image for any $t \in [0,T)$. Let us describe the above Lagrangian mean curvature flow $\{ F_t \colon V_{c_0} \times H_{a,b} \to M(\alpha,0) \}_{t \in [0,T)}$ in more detail. If $a=0$, the flow is static, that is, $F_t=F_0 \colon V_{c_0} \times H_{a,b} \to M(\alpha,0)$ for any $t \in [0,T)$. So we may assume $a>0$. We describe the flow in the following three cases: \\ $(1)$ the case $a>0,~\frac{b}{a}>0$, \\ $(2)$ the case $a>0,~0 > \frac{b}{a} > -(n+1),~ \frac{b}{a} \not\in \mathbb{Z}$, \\ $(3)$ the case $a>0,~-(n+1) > \frac{b}{a}$. \\ Since the third case is similar to the first one, we will discuss the first and second cases. \subsection{The case $a>0,~\frac{b}{a}>0$}\label{case1} Recall the decomposition of $\mathrm{Im}\mu_G$ in (\ref{decomposition}). Define $t_i \in \mathbb{R}$ by $\iota_{a,b}^*(v_i)=c_0 - t_i a w^{a,b}(=c_{t_i})$ for $i=0, \dots, n$. In the case $a>0,~\frac{b}{a}>0$, we have $$ -\infty = t_{-1} < t_0< t_1 < \dots < t_n < t_{n+1}=\infty. $$ Since $\mu_{H_{a,b}}^{-1}(c_0) \cap M(\alpha,0)^G = \emptyset$, there exists a unique $k_0 \in \{ 0, 1, \dots, n+1 \}$ such that $t_{k_0 -1} < 0 < t_{k_0}$. Then $(\iota_{a,b}^*)^{-1}(c_0)$ intersects $\mathrm{int}(l_{k_0})$.
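As an aside, the membership conditions (\ref{hab}) above admit a quick machine check. Writing $H_{a,b}$ as the weighted circle $s \mapsto (e^{\sqrt{-1}as}, e^{\sqrt{-1}bs})$ (this explicit parametrization is an assumption of the sketch below, consistent with Lemma \ref{lemma-hab}), coprimality of $a,b$ forces any element with both coordinates in $\{\pm 1\}$ to come from $s \in \pi\mathbb{Z}$, so only the parity of $s/\pi$ matters:

```python
from math import gcd
from itertools import product

def real_elements(a, b):
    """Elements of H_{a,b} with both coordinates in {+1, -1}.

    For gcd(a, b) = 1, such an element forces s to be an integer
    multiple of pi, and only the parity of s/pi matters, so the set is
    {(1, 1), ((-1)^a, (-1)^b)}.
    """
    return {(1 - 2 * ((a * k) % 2), 1 - 2 * ((b * k) % 2)) for k in (0, 1)}

for a, b in product(range(-5, 6), repeat=2):
    if gcd(a, b) != 1:
        continue
    elems = real_elements(a, b)
    # the three conditions stated in (hab):
    assert ((1, -1) in elems) == (a % 2 == 0 and b % 2 == 1)
    assert ((-1, 1) in elems) == (a % 2 == 1 and b % 2 == 0)
    assert ((-1, -1) in elems) == (a % 2 == 1 and b % 2 == 1)
print("conditions (hab) verified for all coprime pairs with |a|, |b| <= 5")
```

In other words, the only nontrivial element of $G_\mathbb{R} \cap H_{a,b}$ is $((-1)^a,(-1)^b)$, which is exactly what (\ref{hab}) records.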
Moreover, the set $r_{c_0}=(\iota_{a,b}^*)^{-1}(c_0) \cap \mathrm{Im}\mu_G$ is a ray in $\mathfrak{g}^*$. Each piece $\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{++}$, $\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{-+}$, $\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{+-}$ and $\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{--}$ in (\ref{equ-decomp2}) is homeomorphic to the ray $r_{c_0}$. By Proposition \ref{prop-fixedpts}, we see that $V_{c_0}=\mu_{H_{a,b}}^{-1}(c_0) \cap L$ consists of two connected components $V_{c_0}^{(1)}$, $V_{c_0}^{(2)}$ as follows: \\ \hspace*{10mm}$(a)$ If $n-k_0$ is odd, \\ \hspace*{20mm}$V_{c_0}^{(1)}=(\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{++}) \cup (\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{-+})$, \\ \hspace*{20mm}$V_{c_0}^{(2)}=(\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{+-}) \cup (\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{--})$. \\ \hspace*{20mm}Each component is diffeomorphic to $\mathbb{R}$. \\ \hspace*{10mm}$(b)$ If $n-k_0$ is even, \\ \hspace*{20mm}$V_{c_0}^{(1)}=(\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{++}) \cup (\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{--})$, \\ \hspace*{20mm}$V_{c_0}^{(2)}=(\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{+-}) \cup (\mu_G^{-1}(r_{c_0}) \cap M^\sigma_{-+})$. \\ \hspace*{20mm}Each component is diffeomorphic to $\mathbb{R}$. Note that $\mu_{H_{a,b}}^{-1}(c_t) \cap M(\alpha,0)^G = \emptyset$ for each $t \in [0,t_{k_0})$. By Theorem \ref{LMCFALE}, we have a Lagrangian mean curvature flow $\{ F_t \colon V_{c_0} \times H_{a,b} \to M(\alpha,0) \}_{t \in [0,t_{k_0})}$. By Proposition \ref{prop-fixedpts}, for $t \in (t_{k_0}, t_{k_0 +1} )$, the four pieces $\mu_G^{-1}(r_{c_t}) \cap M^\sigma_{++}$, $\mu_G^{-1}(r_{c_t}) \cap M^\sigma_{-+}$, $\mu_G^{-1}(r_{c_t}) \cap M^\sigma_{+-}$ and $\mu_G^{-1}(r_{c_t}) \cap M^\sigma_{--}$ are connected to each other in a different way from the case $t \in [0, t_{k_0})$. Therefore, it is impossible to extend the flow $\{ F_t \colon V_{c_0} \times H_{a,b} \to M(\alpha,0) \}_{t \in [0,t_{k_0})}$ continuously to $t \in [0,t_{k_0}+\epsilon)$ for any $\epsilon >0$. 
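The component count in cases $(a)$ and $(b)$ can also be double-checked combinatorially: in the ray case the four pieces are glued along the single wall $l_{k_0}$ only, following the parity rules of Proposition \ref{prop-fixedpts} $(3)$. The following Python sketch (an illustration only; the piece labels are ad hoc) confirms that either parity yields exactly two components, paired as stated:

```python
def gluings(k, n):
    """Pairs of pieces glued along the wall l_k, per the parity rules
    of Proposition prop-fixedpts (3)."""
    if (n - k) % 2 == 1:
        return [("++", "-+"), ("+-", "--")]
    return [("++", "--"), ("+-", "-+")]

def components(edges):
    """Connected components of the four pieces under the given gluings
    (an elementary union-find)."""
    parent = {p: p for p in ("++", "-+", "+-", "--")}
    def find(p):
        while parent[p] != p:
            p = parent[p]
        return p
    for p, q in edges:
        parent[find(p)] = find(q)
    return {frozenset(x for x in parent if find(x) == r)
            for r in {find(p) for p in parent}}

# ray case: gluing happens only along l_{k0}
for n in range(1, 8):
    for k0 in range(n + 2):
        comps = components(gluings(k0, n))
        assert len(comps) == 2
        if (n - k0) % 2 == 1:
            assert frozenset({"++", "-+"}) in comps   # case (a)
        else:
            assert frozenset({"++", "--"}) in comps   # case (b)
print("ray case: V_{c_0} has exactly two components for either parity")
```

That the gluing pattern genuinely changes across the wall $l_{k_0}$ is precisely the obstruction to continuing the flow past $t_{k_0}$.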
This implies that the Lagrangian mean curvature flow $\{ F_t \colon V_{c_0} \times H_{a,b} \to M(\alpha,0) \}_{t \in [0,t_{k_0})}$ develops a singularity at $P_{k_0}=\mu_G^{-1}(v_{k_0})$ when $t$ goes to $t_{k_0}$. Let us describe the singularity of the flow $\{ F_t \}_{t \in [0,t_{k_0})}$. In Proposition \ref{prop-coord} we have constructed the holomorphic local coordinate $\varphi_{k_0} \colon U_{k_0} \to \mathbb{C}^2$ around $P_{k_0}$. We write $\varphi_{k_0}(q)=(u_1(q),u_2(q))$ for $q \in U_{k_0}$ and $u_i=x_i + \sqrt{-1}y_i$, where $x_i, y_i \in \mathbb{R}$, for $i=1,2$. By (\ref{equ-coord}), we have \begin{align*} (u_1,u_2)\mathrm{Exp}_{H_{a,b}}s w_{a,b} & = (u_1 e^{\sqrt{-1} \lambda^{(k_0)}_1 s}, u_2 e^{\sqrt{-1} \lambda^{(k_0)}_2 s}), \end{align*} where \begin{align}\label{lambda} \lambda^{(k_0)}_1= a(n +1 -k_0)+b,~\lambda^{(k_0)}_2= -a(n-k_0)-b. \end{align} Since we have assumed that $a>0$ and $\frac{b}{a}>0$, we have $\lambda^{(k_0)}_1 >0$ and $\lambda^{(k_0)}_2 <0$ for $k_0=0, \dots, n$. Thus we have \begin{align*} (w_{a,b})^\# & = (\sqrt{-1} \lambda^{(k_0)}_1 u_1, \sqrt{-1} \lambda^{(k_0)}_2 u_2) \\ & = -\lambda^{(k_0)}_1 y_1 \frac{\partial}{\partial x_1} + \lambda^{(k_0)}_1 x_1 \frac{\partial}{\partial y_1} -\lambda^{(k_0)}_2 y_2 \frac{\partial}{\partial x_2} + \lambda^{(k_0)}_2 x_2 \frac{\partial}{\partial y_2}. \end{align*} If we set $\mu=\langle \mu_{H_{a,b}}(\cdot), w_{a,b} \rangle \colon M(\alpha,0) \to \mathbb{R}$, then we have $i((w_{a,b})^\#)\omega_1=-d \mu$, where $\omega_1$ is the K\"ahler form of $(M(\alpha,0),I_1)$. Define $\omega_1^0, \omega_1^1 \in \Omega^2(U_{k_0})$ and $\mu^0,\mu^1 \colon U_{k_0} \to \mathbb{R}$ by \begin{align*} &\omega_1|_{U_{k_0}}=\omega_1^0 + \omega_1^1, ~~~~\omega_1^0 =dx_1 \wedge dy_1 + dx_2 \wedge dy_2, \\ &\mu|_{U_{k_0}}=\mu^0 + \mu^1, ~~~~\mu^0(u_1,u_2)=\mu(P_{k_0})+\frac{\lambda^{(k_0)}_1}{2}|u_1|^2 + \frac{\lambda^{(k_0)}_2}{2}|u_2|^2. \end{align*} By definition of $\mu^1$, we have $\mu^1(0,0)=0$. 
Since $i((w_{a,b})^\#)\omega_1^0=-d \mu^0$ holds, we have $i((w_{a,b})^\#)\omega_1^1=-d \mu^1$. Since $(w_{a,b})^\#_{(0,0)}=0$, we have $(d \mu^1)_{(0,0)}=0$. Moreover, by Proposition \ref{prop-coord} $(2)$, we have $(\omega_1^1)_{(0,0)}=0 $. So we have \begin{align*} \lim_{(u_1,u_2) \to (0,0)}\frac{|(d\mu^1)_{(u_1,u_2)}|}{\sqrt{|u_1|^2 + |u_2|^2}} =\lim_{(u_1,u_2) \to (0,0)}\frac{|\{ i((w_{a,b})^\#)\omega_1^1 \}_{(u_1,u_2)}|}{\sqrt{|u_1|^2 + |u_2|^2}} =0. \end{align*} Therefore we have \begin{align}\label{equ-limmu} \lim_{(u_1,u_2) \to (0,0)}\frac{\mu^1(u_1,u_2)}{|u_1|^2 + |u_2|^2}=0. \end{align} Since $c_t=c_0-taw^{a,b}=c_{t_{k_0}}+(t_{k_0}-t)aw^{a,b}$ holds, we have $\langle c_t, w_{a,b} \rangle = \mu(P_{k_0})+a(t_{k_0} -t)$. Then we have \begin{align}\label{equ-local} (u_1,u_2) &\in \mu_{H_{a,b}}^{-1}(c_t) \cap U_{k_0} \nonumber \\ &\Longleftrightarrow \mu(u_1,u_2)=\mu(P_{k_0})+a(t_{k_0}-t) \nonumber \\ &\Longleftrightarrow \frac{\lambda^{(k_0)}_1}{2}|u_1|^2 + \frac{\lambda^{(k_0)}_2}{2}|u_2|^2 +\mu^1(u_1,u_2) = a(t_{k_0}-t). \end{align} If we consider the rescaling \begin{align}\label{equ-rescaling} (v_1,v_2) =(\frac{u_1}{\sqrt{t_{k_0}-t}},\frac{u_2}{\sqrt{t_{k_0}-t}}), \end{align} then (\ref{equ-local}) is equivalent to \begin{align*} \frac{\lambda^{(k_0)}_1}{2}|v_1|^2 + \frac{\lambda^{(k_0)}_2}{2}|v_2|^2 = a - \nu_t(v_1,v_2), \end{align*} where \begin{align*} \nu_t(v_1,v_2) =\frac{\mu^1(\sqrt{t_{k_0}-t}~v_1, \sqrt{t_{k_0}-t}~v_2)}{t_{k_0}-t}. \end{align*} By (\ref{equ-limmu}), as $t$ goes to $t_{k_0}$, $\nu_t \colon \mathbb{C}^2 \to \mathbb{R}$ converges to $0$ in the $C^\infty$-topology on any compact subset of $\mathbb{C}^2$. Therefore, as $t$ goes to $t_{k_0}$, we have the following convergence \begin{align*} \lim_{t \to t_{k_0}} \frac{1}{\sqrt{t_{k_0}-t}} \big(\mu_{H_{a,b}}^{-1}(c_t) \cap U_{k_0}\big) = \{ (v_1,v_2) \in \mathbb{C}^2~|~\frac{\lambda^{(k_0)}_1}{2}|v_1|^2 + \frac{\lambda^{(k_0)}_2}{2}|v_2|^2 = a \}.
\end{align*} Note that $U_{k_0}\cap M^\sigma= \{ (u_1,u_2) \in \mathbb{C}^2~|~u_1,u_2 \in \mathbb{R} \}$ and that the rescaling procedure (\ref{equ-rescaling}) is $H_{a,b}$-equivariant. So there exist $\epsilon >0$, $R>0$ and $C>0$ such that the following $(i)$, $(ii)$ hold: \\ $(i)$ If $0 < t_{k_0} -t < \epsilon$, then $B(P_{k_0};\sqrt{t_{k_0} -t}~\!R) \cap \mathrm{Im}F_t \ne \emptyset$, where $B(P; r)$ is the geodesic ball of radius $r>0$ centered at $P$ in $M(\alpha,0)$, \\ $(ii)$ $\sup \{|A_t(P)|~|~0 < t_{k_0} -t < \epsilon,~ P \in B(P_{k_0};\sqrt{t_{k_0} -t}~\!R) \cap \mathrm{Im}F_t \} \le \frac{C}{\sqrt{t_{k_0}-t}}$, where $A_t$ is the second fundamental form of $F_t \colon V_{c_0} \times H_{a,b} \to M(\alpha,0)$. Thus we have the following. \begin{proposition}\label{prop-sing} The Lagrangian mean curvature flow $$\{ F_t \colon V_{c_0} \times H_{a,b} \to M(\alpha,0) \}_{t \in [0,t_{k_0})}$$ develops a type I singularity at $P_{k_0}$ when $t$ goes to $t_{k_0}$. The blow-up limit at the singularity is the self-shrinker in Subsection \ref{3-1} with $d=2$, $\lambda_1= a(n-k_0 +1)+b$ and $\lambda_2= -a(n-k_0)-b$. \end{proposition} As we already mentioned, in \cite{LW1,LW2}, Lee and Wang proved that $\{ L_t \}_{t \in \mathbb{R}}$ in the last paragraph of Subsection \ref{3-1} forms an eternal solution for Brakke flow. Set $L_t^{(a,b)} = \{ ph \in M(\alpha,0)~|~ p \in \mu_{H_{a,b}}^{-1}(c_t) \cap M^\sigma, h \in H_{a,b} \}$ for $t \in \mathbb{R}$. By Proposition \ref{prop-sing}, we see that our example $\{ L_t^{(a,b)} \}_{t \in \mathbb{R}}$ becomes closer to $\{ L_t \}_{t \in \mathbb{R}}$, where $d=2$, $\lambda_1= a(n-k_0 +1)+b$ and $\lambda_2= -a(n-k_0)-b$, around $P_{k_0}$ as $t$ goes to $t_{k_0}$. So, by modifying the argument of Lee and Wang, it is not difficult to see that $\{ L_t^{(a,b)} \}_{t \in \mathbb{R}}$ forms an eternal solution for Brakke flow.
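The type I rescaling can be illustrated numerically. The sketch below works in the flat model where the correction term $\mu^1$ is dropped, and checks that points on the level set $\mu^0 = \mu(P_{k_0}) + a(t_{k_0}-t)$, rescaled by $1/\sqrt{t_{k_0}-t}$ as in (\ref{equ-rescaling}), satisfy the limit equation $\frac{\lambda_1}{2}|v_1|^2 + \frac{\lambda_2}{2}|v_2|^2 = a$ exactly, independently of $t$; the sample values of $a,b,n,k_0$ are arbitrary choices for the illustration:

```python
def lambdas(a, b, n, k0):
    """lambda_1 and lambda_2 as in equation (lambda) of the text."""
    return a * (n + 1 - k0) + b, -a * (n - k0) - b

a, b, n, k0 = 2, 3, 4, 1          # arbitrary sample with a > 0, b/a > 0
l1, l2 = lambdas(a, b, n, k0)
assert l1 > 0 and l2 < 0          # the sign pattern used in this subsection

t_k0 = 1.0
for t in (0.9, 0.99, 0.999):
    s = t_k0 - t
    # a point on the flat-model level set (l1/2)|u1|^2 + (l2/2)|u2|^2 = a*s:
    u2sq = 0.7 * s                                   # choose |u2|^2 freely ...
    u1sq = (a * s - 0.5 * l2 * u2sq) / (0.5 * l1)    # ... and solve for |u1|^2
    assert u1sq > 0
    v1sq, v2sq = u1sq / s, u2sq / s                  # rescaling (equ-rescaling)
    assert abs(0.5 * l1 * v1sq + 0.5 * l2 * v2sq - a) < 1e-9
print("rescaled level sets satisfy (lambda1/2)|v1|^2 + (lambda2/2)|v2|^2 = a")
```

With $\lambda_1 > 0 > \lambda_2$ this limit quadric is noncompact, matching the self-shrinker of Proposition \ref{prop-sing}.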
\subsection{The case $a>0,~0 > \frac{b}{a} > -(n+1),~ \frac{b}{a} \not\in \mathbb{Z}$}\label{case2} Define $t_k \in \mathbb{R}$ by $\iota_{a,b}^*(v_k)=c_0-t_k a w^{a,b}(=c_{t_k})$ for $k=0,1,\dots, n$ as in Subsection \ref{case1}. Then, in the case $a>0,~0 > \frac{b}{a} > -(n+1),~ \frac{b}{a} \not\in \mathbb{Z}$, there exists $m_0 \in \{ 0, \dots, n \}$ such that \begin{align}\label{mzero} -\infty=t_{-1} < t_0 <t_1 < \dots < t_{m_0} > t_{m_0 +1} > \dots >t_n >t_{n+1}= -\infty. \end{align} If $0 > t_{m_0}$, then $\mu_{H_{a,b}}^{-1}(c_0) \cap L = \emptyset$ and we have nothing to discuss. So we may assume $t_{m_0} \ge 0$. Since we have assumed $\mu_{H_{a,b}}^{-1}(c_0) \cap M(\alpha,0)^G = \emptyset$, there exist $i_0 \in \{0, \dots, m_0 \}$ and $j_0 \in \{ m_0 +1, \dots, n+1 \}$ such that $t_{i_0 -1} < 0 < t_{i_0}$ and $t_{j_0 -1} > 0 > t_{j_0}$, and such that $(\iota_{a,b}^*)^{-1}(c_0)$ intersects $\mathrm{int}(l_{i_0})$ and $\mathrm{int}(l_{j_0})$. Therefore, in this case, the set $r_{c_0}=(\iota_{a,b}^*)^{-1}(c_0) \cap \mathrm{Im}\mu_G$ is a segment in $\mathfrak{g}^*$. By Proposition \ref{prop-fixedpts}, we have the following. \\ \hspace*{10mm}$(a)$ If $j_0 -i_0$ is odd, \\ \hspace*{20mm}$V_{c_0}$ is connected and diffeomorphic to $S^1$. \\ \hspace*{10mm}$(b)$ If $j_0 -i_0$ is even, \\ \hspace*{20mm}$V_{c_0}$ consists of two connected components. \\ \hspace*{20mm}Each component is diffeomorphic to $S^1$. If we set $T= \min\{t_{i_0}, t_{j_0 -1} \}$, then we see that $\mu_{H_{a,b}}^{-1}(c_t) \cap M(\alpha,0)^G = \emptyset$ for each $t \in [0,T)$. By Theorem \ref{LMCFALE}, we have a Lagrangian mean curvature flow $\{ F_t \colon V_{c_0} \times H_{a,b} \to M(\alpha,0) \}_{t \in [0,T)}$. It is impossible to extend the flow continuously to $t \in [0,T+\epsilon)$ for any $\epsilon >0$ as in Subsection \ref{case1}.
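The dichotomy $(a)$/$(b)$ can again be checked combinatorially: in the segment case the four pieces are glued at both walls $l_{i_0}$ and $l_{j_0}$. When $j_0 - i_0$ is odd the parities of $n-i_0$ and $n-j_0$ differ, so the two matchings close up into a single cycle (one $S^1$); when $j_0-i_0$ is even the same matching is used at both ends and two circles result. A Python sketch (labels ad hoc, for illustration only):

```python
def glue_pairs(k, n):
    """Pieces glued along l_k, by the parity rules of Proposition prop-fixedpts (3)."""
    if (n - k) % 2 == 1:
        return [("++", "-+"), ("+-", "--")]
    return [("++", "--"), ("+-", "-+")]

def n_components(edges):
    """Number of connected components of the four pieces (union-find)."""
    parent = {p: p for p in ("++", "-+", "+-", "--")}
    def find(p):
        while parent[p] != p:
            p = parent[p]
        return p
    for p, q in edges:
        parent[find(p)] = find(q)
    return len({find(p) for p in parent})

# segment case: gluings occur at both walls l_{i0} and l_{j0}
for n in range(2, 8):
    for i0 in range(n + 2):
        for j0 in range(i0 + 1, n + 2):
            edges = glue_pairs(i0, n) + glue_pairs(j0, n)
            expected = 1 if (j0 - i0) % 2 == 1 else 2
            assert n_components(edges) == expected
print("segment case: one S^1 if j0 - i0 is odd, two copies of S^1 if even")
```

As in the ray case, it is the change of gluing pattern across $t=T$ that obstructs any continuous extension of the flow.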
So the Lagrangian mean curvature flow $\{ F_t \colon V_{c_0} \times H_{a,b} \to M(\alpha,0) \}_{t \in [0,T)}$ develops a singularity when $t$ goes to $T$ at $P_{i_0}$ if $T=t_{i_0}$ or at $P_{j_0 -1}$ if $T=t_{j_0 -1}$. The structure of the singularity at $P_{k_0}$ is investigated in the same way as in Subsection \ref{case1} and described in Proposition \ref{prop-sing}. Here we should note the signs of $\lambda^{(k_0)}_1$ and $\lambda^{(k_0)}_2$ in (\ref{lambda}), as follows. \begin{proposition}\label{prop-sign} Suppose that $a,b \in \mathbb{Z}$ are coprime, $a>0$, $0 > \frac{b}{a} > -(n+1)$ and $\frac{b}{a} \not\in \mathbb{Z}$. Then the following holds. \\ $(1)$ $\lambda^{(m_0)}_1 >0$ and $\lambda^{(m_0)}_2 >0$, where $m_0$ is the same as in (\ref{mzero}). \\ $(2)$ If $k_0 \ne m_0$, then $\lambda^{(k_0)}_1 \lambda^{(k_0)}_2 < 0$. \end{proposition} {\it Proof.} For $k_0=0,\dots, n$, we have \begin{align*} & \text{$\lambda^{(k_0)}_1 \underset{(\mathrm{resp.}=)}{>}0$ ~~~if and only if~~~ $n+1+\frac{b}{a} \underset{(\mathrm{resp.}=)}{>} k_0$,} \\ & \text{$\lambda^{(k_0)}_2 \underset{(\mathrm{resp.}=)}{>}0$ ~~~if and only if~~~ $k_0 \underset{(\mathrm{resp.}=)}{>} n+\frac{b}{a}$.} \end{align*} Since $\frac{b}{a} \not\in \mathbb{Z}$, we have $\lambda^{(k_0)}_1 \ne 0$ and $ \lambda^{(k_0)}_2 \ne 0$. So it is enough to show that \begin{align}\label{slope} n+1+\frac{b}{a} > m_0 > n+\frac{b}{a}. \end{align} In fact, since $\ker \iota_{a,b}^* = \mathrm{span}\{-b p^0 + a p^1 \}$, we see that the slope of the segment $r_{c_0}=(\iota_{a,b}^*)^{-1}(c_0) \cap \mathrm{Im}\mu_G$ is $-\frac{a}{b}$. On the other hand, by (\ref{equ-lk}), the slopes of $l_{m_0}$, $l_{m_0 +1}$ are $\frac{1}{n+1-m_0}$, $\frac{1}{n-m_0}$, respectively. Then, by (\ref{mzero}), we have \begin{align*} \frac{1}{n+1-m_0} < -\frac{a}{b} < \frac{1}{n-m_0}, \end{align*} which is equivalent to (\ref{slope}). Thus we finish the proof.
\hfill$\Box$ Set $L_t^{(a,b)} = \{ ph \in M(\alpha,0)~|~ p \in \mu_{H_{a,b}}^{-1}(c_t) \cap M^\sigma, h \in H_{a,b} \}$ for $t \in \mathbb{R}$. By Proposition \ref{prop-sing}, it is not difficult to see that $\{ L_t^{(a,b)} \}_{t \in \mathbb{R}}$ forms an eternal solution for Brakke flow as we explained at the end of Subsection \ref{case1}. Proposition \ref{prop-sign} implies that $L_t^{(a,b)}$ shrinks to the point $P_{m_0}$ as $t$ goes to $t_{m_0}$ and that $L_t^{(a,b)} = \emptyset$ for $t >t_{m_0}$.
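The sign pattern of Proposition \ref{prop-sign} can also be verified directly from (\ref{lambda}) together with the characterization $n + \frac{b}{a} < m_0 < n+1+\frac{b}{a}$ in (\ref{slope}), which pins down $m_0 = \lfloor n+1+\frac{b}{a} \rfloor$ when $\frac{b}{a}\not\in\mathbb{Z}$. A short Python check over sample coprime pairs (the choice $n=5$ is arbitrary):

```python
from math import gcd, floor

def lam(a, b, n, k):
    """lambda_1 and lambda_2 as in equation (lambda) of the text."""
    return a * (n + 1 - k) + b, -a * (n - k) - b

n = 5
for a in range(2, 7):                       # a >= 2: coprimality then forces b/a not integer
    for b in range(-a * (n + 1) + 1, 0):    # 0 > b/a > -(n+1)
        if gcd(a, b) != 1:
            continue
        m0 = floor(n + 1 + b / a)           # unique integer in (n + b/a, n+1 + b/a)
        assert n + b / a < m0 < n + 1 + b / a
        l1, l2 = lam(a, b, n, m0)
        assert l1 > 0 and l2 > 0            # Proposition prop-sign (1)
        for k in range(n + 1):
            if k != m0:
                l1, l2 = lam(a, b, n, k)
                assert l1 * l2 < 0          # Proposition prop-sign (2)
print("signs of lambda_1, lambda_2 match Proposition prop-sign for n =", n)
```

This matches the geometric picture: at $k_0 = m_0$ both weights are positive, so $L_t^{(a,b)}$ shrinks to the point $P_{m_0}$, while at every other wall exactly one weight is negative and a noncompact self-shrinker appears in the blow-up.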
\section{#1}} \newtheorem{dfn}{Definition}[section] \newtheorem{thm}[dfn]{Theorem} \newtheorem{lmma}[dfn]{Lemma} \newtheorem{ppsn}[dfn]{Proposition} \newtheorem{crlre}[dfn]{Corollary} \newtheorem{xmpl}[dfn]{Example} \newtheorem{rmrk}[dfn]{Remark} \newtheorem{nota}[dfn]{Notation} \newtheorem{assu}[dfn]{Assumption} \newcommand{\begin{dfn}}{\begin{dfn}} \newcommand{\begin{thm}}{\begin{thm}} \newcommand{\begin{lmma}}{\begin{lmma}} \newcommand{\begin{ppsn}}{\begin{ppsn}} \newcommand{\begin{crlre}}{\begin{crlre}} \newcommand{\begin{xmpl}}{\begin{xmpl}} \newcommand{\begin{rmrk}}{\begin{rmrk}} \newcommand{\end{dfn}}{\end{dfn}} \newcommand{\end{thm}}{\end{thm}} \newcommand{\end{lmma}}{\end{lmma}} \newcommand{\end{ppsn}}{\end{ppsn}} \newcommand{\end{crlre}}{\end{crlre}} \newcommand{\end{xmpl}}{\end{xmpl}} \newcommand{\end{rmrk}}{\end{rmrk}} \newcommand{{I\! \! A}}{{I\! \! A}} \newcommand{{I\! \! B}}{{I\! \! B}} \newcommand{{I\! \! \!\! C}}{\mathbb{C}} \newcommand{{I\! \! D}}{{I\! \! D}} \newcommand{{I\! \! E}}{{I\! \! E}} \newcommand{{I\! \! F}}{{I\! \! F}} \newcommand{{I\! \! G}}{{I\! \! G}} \newcommand{{I\! \! H}}{{I\! \! H}} \newcommand{{I\! \! I}}{{I\! \! I}} \newcommand{{I\! \! K}}{{I\! \! K}} \newcommand{{I\! \! L}}{{I\! \! L}} \newcommand{{I\! \! M}}{{I\! \! M}} \newcommand{{I\! \! N}}{{I\! \! N}} \newcommand{{I\! \! O}}{{I\! \! O}} \newcommand{{I\! \! P}}{{I\! \! P}} \newcommand{{I\! \! Q}}{\mathbb{Q}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{{I\! \! S}}{{I\! \! S}} \newcommand{\mathbb{T}}{\mathbb{T}} \newcommand{{I\! \! U}}{{I\! \! U}} \newcommand{{I\! \! V}}{{I\! \! V}} \newcommand{{I\! \! W}}{{I\! \! W}} \newcommand{{I\! \! X}}{{I\! \! X}} \newcommand{{I\! \! Y}}{{I\! \! Y}} \newcommand{{\ \! \! 
Z}}{\mathbb{Z}} \newcommand{B^{N-n}_\epsilon(0)}{B^{N-n}_\epsilon(0)} \newcommand{\Omega^1 ( M )}{\Omega^1 ( M )} \newcommand{{\Omega}^1 ( \mathcal{A} )}{{\Omega}^1 ( \mathcal{A} )} \newcommand{\widetilde{{\Omega}^1_D ( \mathcal{A}_\theta )} }{\widetilde{{\Omega}^1_D ( \mathcal{A}_\theta )} } \newcommand{\widetilde{{\Omega}^2_D ( \mathcal{A}_\theta )} }{\widetilde{{\Omega}^2_D ( \mathcal{A}_\theta )} } \newcommand{{\Omega}^2( \mathcal{A} )}{{\Omega}^2( \mathcal{A} )} \newcommand{\otimes_{\mathcal{A}}}{\otimes_{\mathcal{A}}} \newcommand{\otimes^{{\rm sym}}_{\mathcal{A}}}{\otimes^{{\rm sym}}_{\mathcal{A}}} \newcommand{\otimes_{\mathbb{C}}}{\otimes_{\mathbb{C}}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{A}_{\theta}}{\mathcal{A}_{\theta}} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\mathcal{F}}{\mathcal{F}} \newcommand{\mathcal{Z}}{\mathcal{Z}} \newcommand{\mathcal{Z}( \mathcal{A} )}{\mathcal{Z}( \mathcal{A} )} \newcommand{\mathcal{Z}( \mathcal{E} )}{\mathcal{Z}( \mathcal{E} )} \newcommand{\mathcal{Z}( \mathcal{F} )}{\mathcal{Z}( \mathcal{F} )} \newcommand{\mathcal{A}^{\prime}}{\mathcal{A}^{\prime}} \newcommand{\pi_{g,g_0}}{\pi_{g,g_0}} \newcommand{S_{\omega,\eta}}{S_{\omega,\eta}} \newcommand{P_{\rm sym}}{P_{\rm sym}} \newcommand{{\rm Hom}}{{\rm Hom}} \newcommand{\omega_{(0)}}{\omega_{(0)}} \newcommand{\omega_{(1)}}{\omega_{(1)}} \newcommand{\eta_{(0)}}{\eta_{(0)}} \newcommand{\eta_{(1)}}{\eta_{(1)}} \newcommand{{}_{(0)} \omega}{{}_{(0)}\omega} \newcommand{{}_{(1)} \omega}{{}_{(1)}\omega} \newcommand{{}_{(0)} \eta}{{}_{(0)}\eta} \newcommand{{}_{(1)} \eta}{{}_{(1)}\eta} \newcommand{{}_{(0)} \theta}{{}_{(0)}\theta} \newcommand{{}_{(1)} \theta}{{}_{(1)}\theta} \newcommand{\omega_{L(0)}}{\omega_{L(0)}} \newcommand{\omega_{L(1)}}{\omega_{L(1)}} \newcommand{\eta_{L(0)}}{\eta_{L(0)}} \newcommand{\eta_{L(1)}}{\eta_{L(1)}} \newcommand{\theta_{L(0)}}{\theta_{L(0)}} \newcommand{\theta_{L(1)}}{\theta_{L(1)}} \newcommand{V_{g^{(2)}}}{V_{g^{(2)}}} 
\newcommand{\mathcal{Z}(A_N)}{\mathcal{Z}(A_N)} \newcommand{\otimes_{A_N}}{\otimes_{A_N}} \newcommand{\alpha}{\alpha} \newcommand{\beta}{\beta} \newcommand{\gamma}{\gamma} \newcommand{\Delta}{\Delta} \newcommand{\delta}{\delta} \newcommand{\varepsilon}{\varepsilon} \newcommand{\epsilon}{\epsilon} \newcommand{\kappa}{\kappa} \newcommand{\lambda}{\lambda} \newcommand{\Lambda}{\Lambda} \newcommand{\omega}{\omega} \newcommand{\Omega}{\Omega} \newcommand{\hat{\pi}}{\hat{\pi}} \newcommand{\sigma}{\sigma} \newcommand{\Sigma}{\Sigma} \newcommand{\theta}{\theta} \newcommand{\Theta}{\Theta} \newcommand{\vartheta}{\vartheta} \newcommand{\zeta}{\zeta} \newcommand{\partial}{\partial} \newcommand{\Gamma}{\Gamma} \newcommand{{\cal A}}{{\cal A}} \newcommand{{\cal B}}{{\cal B}} \newcommand{{\cal C}}{{\cal C}} \newcommand{{\cal D}}{{\cal D}} \newcommand{{\cal E}}{{\cal E}} \newcommand{{\cal F}}{{\cal F}} \newcommand{{\cal G}}{{\cal G}} \newcommand{{\cal H}}{{\cal H}} \newcommand{{\cal I}}{{\cal I}} \newcommand{{\cal J}}{{\cal J}} \newcommand{{\cal K}}{{\cal K}} \newcommand{{\cal L}}{{\cal L}} \newcommand{{\cal M}}{{\cal M}} \newcommand{{\cal N}}{{\cal N}} \newcommand{{\cal O}}{{\cal O}} \newcommand{{\cal P}}{{\cal P}} \newcommand{{\cal Q}}{{\cal Q}} \newcommand{{\cal R}}{{\cal R}} \newcommand{{\cal S}}{{\cal S}} \newcommand{{\cal T}}{{\cal T}} \newcommand{{\cal U}}{{\cal U}} \newcommand{{\cal V}}{{\cal V}} \newcommand{{\cal W}}{{\cal W}} \newcommand{{\cal X}}{{\cal X}} \newcommand{{\cal Y}}{{\cal Y}} \newcommand{{\cal Z}}{{\cal Z}} \newcommand{\cln_\epsilon X}{{\cal N}_\epsilon X} \newcommand{\widehat{\otimes}}{\widehat{\otimes}} \def\widetilde{A}{\widetilde{A}} \def\widetilde{B}{\widetilde{B}} \def\widetilde{C}{\widetilde{C}} \def\widetilde{D}{\widetilde{D}} \def\widetilde{E}{\widetilde{E}} \def\widetilde{F}{\widetilde{F}} \def\widetilde{G}{\widetilde{G}} \def\widetilde{H}{\widetilde{H}} \def\widetilde{I}{\widetilde{I}} \def\widetilde{J}{\widetilde{J}} 
\def\widetilde{K}{\widetilde{K}} \def\widetilde{L}{\widetilde{L}} \def\widetilde{M}{\widetilde{M}} \def\widetilde{N}{\widetilde{N}} \def\widetilde{O}{\widetilde{O}} \def\widetilde{P}{\widetilde{P}} \def\widetilde{Q}{\widetilde{Q}} \def\widetilde{R}{\widetilde{R}} \def\widetilde{S}{\widetilde{S}} \def\widetilde{T}{\widetilde{T}} \def\widetilde{U}{\widetilde{U}} \def\widetilde{V}{\widetilde{V}} \def\widetilde{W}{\widetilde{W}} \def\widetilde{X}{\widetilde{X}} \def\widetilde{Y}{\widetilde{Y}} \def\widetilde{Z}{\widetilde{Z}} \def\widehat{\widehat} \def{\cal A}_h{{\cal A}_h} \def\a*{{\cal A}_{h,*}} \def{\cal B}(h){{\cal B}(h)} \def{\cal B}_1(h){{\cal B}_1(h)} \def{\cal B}^{\rm s.a.}(h){{\cal B}^{\rm s.a.}(h)} \def{\cal B}^{\rm s.a.}_1(h){{\cal B}^{\rm s.a.}_1(h)} \def{\cal A}^{\perp}_{h}{{\cal A}^{\perp}_{h}} \def{\cal A}^{\perp}{{\cal A}^{\perp}} \newcommand{\innerl\innerl}{\langle\innerl} \newcommand{\innerr\innerr}{\rangle\innerr} \newcommand{\int \limits}{\int \limits} \newcommand{\widehat}{\widehat} \newcommand{\Re}{\Re} \newcommand{\otimes}{\otimes} \newcommand{\dagger}{\dagger} \newcommand{\bigotimes}{\bigotimes} \newcommand{\rightarrow}{\rightarrow} \newcommand{\Rightarrow}{\Rightarrow} \newcommand{\Longrightarrow}{\Longrightarrow} \newcommand{\subset}{\subset} \newcommand{\subseteq}{\subseteq} \newcommand{\Longleftrightarrow}{\Longleftrightarrow} \newcommand{\underline}{\underline} \newcommand{\overline}{\overline} \newcommand{\langle}{\langle} \newcommand{\rangle}{\rangle} \newcommand{\nonumber}{\nonumber} \newcommand{\tnsr}{\mbox{$\bigcirc\hspace{-0.89em}\mbox{\raisebox% {-.43ex}{$\top$}}\;$}} \newcommand{\gtreqqless}{\gtreqqless} \newcommand{\lesseqqgtr}{\lesseqqgtr} \newcommand{\mbox{id}}{\mbox{id}} \newcommand{\tfrac{1}{2}}{\tfrac{1}{2}} \newcommand{1\!\!1}{1\!\!1} \newcommand{\mbox{{\boldmath $\eta$}}}{\mbox{{\boldmath $\eta$}}} \newcommand{\noindent}{\noindent} \newcommand {\CC}{\centerline} \def \mbox{}\hfill $\sqare$\vspace{1ex} {$\Box$} 
\def\widehat{\widehat} \def{\cal A}_h{{\cal A}_h} \def\a*{{\cal A}_{h,*}} \def{\cal B}(h){{\cal B}(h)} \def{\cal B}_1(h){{\cal B}_1(h)} \def{\cal B}^{\rm s.a.}(h){{\cal B}^{\rm s.a.}(h)} \def{\cal B}^{\rm s.a.}_1(h){{\cal B}^{\rm s.a.}_1(h)} \def{\cal A}^{\perp}_{h}{{\cal A}^{\perp}_{h}} \def{\cal A}^{\perp}{{\cal A}^{\perp}} \newcommand{\A_\Omega}{\mathcal{A}_\Omega} \newcommand{M_\Omega}{M_\Omega} \newcommand{\ast_\Omega}{\ast_\Omega} \newcommand{{}_{\mathcal{E}} \Delta}{{}_{\mathcal{E}} \Delta} \newcommand{\Delta_{\mathcal{E}}}{\Delta_{\mathcal{E}}} \newcommand{\Delta_\Omega}{\Delta_\Omega} \newcommand{\Delta_M}{\Delta_M} \newcommand{\Delta_{M_\Omega}}{\Delta_{M_\Omega}} \newcommand{{}_{M_\Omega} \Delta}{{}_{M_\Omega} \Delta} \newcommand{{}_M \Delta}{{}_M \Delta} \newcommand{{}_{M_\Omega} \Delta}{{}_{M_\Omega} \Delta} \newcommand{\sigma^{{\rm can}}}{\sigma^{{\rm can}}} \newcommand{\triangleright}{\triangleright} \newcommand{\triangleleft}{\triangleleft} \newcommand{a_{(1)}}{a_{(1)}} \newcommand{a_{(2)}}{a_{(2)}} \newcommand{a_{(1)(1)}}{a_{(1)(1)}} \newcommand{a_{(1)(2)}}{a_{(1)(2)}} \newcommand{a_{(2)(1)}}{a_{(2)(1)}} \newcommand{a_{(2)(2)}}{a_{(2)(2)}} \newcommand{b_{(1)}}{b_{(1)}} \newcommand{b_{(2)}}{b_{(2)}} \newcommand{b_{(1)(1)}}{b_{(1)(1)}} \newcommand{b_{(1)(2)}}{b_{(1)(2)}} \newcommand{b_{(2)(1)}}{b_{(2)(1)}} \newcommand{b_{(2)(2)}}{b_{(2)(2)}} \newcommand{\omega_{(2)}}{\omega_{(2)}} \newcommand{{}_{(2)} \eta}{{}_{(2)} \eta} \newcommand{{}_{(1)} m}{{}_{(1)} m} \newcommand{{}_{(2)} m}{{}_{(2)} m} \newcommand{m_{(1)}}{m_{(1)}} \newcommand{m_{(2)}}{m_{(2)}} \newcommand{{}_0 M}{{}_0 M} \newcommand{M_0}{M_0} \newcommand{g_0}{g_0} \newcommand{g^{(2)}_0}{g^{(2)}_0} \newcommand{\theta_{(0)}}{\theta_{(0)}} \newcommand{\theta_{(1)}}{\theta_{(1)}} \newcommand{\displaystyle}{\displaystyle} \newcommand{\RNum}[1]{\uppercase\expandafter{\romannumeral #1\relax}} \newcommand{\vskip 1em}{\vskip 1em} \begin{document} \begin{center} {\Large{\bf On the Koszul formula in 
noncommutative geometry}}\\ \vspace{0.5in} {\large Jyotishman Bhowmick, Debashish Goswami, Giovanni Landi}\\ \end{center} \date{May 2020} \title{} \maketitle \begin{abstract} We prove a Koszul formula for the Levi-Civita connection for any pseudo-Riemannian bilinear metric on a class of centered bimodules of noncommutative one-forms. As an application of the Koszul formula, we show that our Levi-Civita connection is a bimodule connection. We construct a spectral triple on a fuzzy sphere and compute the scalar curvature for the Levi-Civita connection associated to a canonical metric. \end{abstract} \vspace{0.5in} \tableofcontents \parskip = 1 ex \section{Introduction} The relatively recent paper \cite{Connes_moscovici} (following earlier work in \cite{scalar_3}) on the computation of the scalar curvature for a class of metrics on noncommutative tori, and related Gauss--Bonnet theorems, has led to a flourishing of what one might call noncommutative Riemannian geometry. An approach to this, for which we refer to the expository paper \cite{khalkhali} and references therein, is via the spectral properties of the Laplacian (or Dirac) operator. On the other hand, an algebraic approach was taken by a number of authors (\cite{rosenberg,sheu,article1,article2,pseudo,cylinder,tiger}) whereby one computes the curvature in terms of the Levi-Civita connection. A somewhat related approach is in \cite{majid_1,majid_2,majidkoszul2}. Some earlier studies of the metric approach and the Levi-Civita connection for noncommutative spaces are in \cite{LaMa88,frolich,LNW94,dubois,dubois2,DHLS96,heckenberger}. The proof of the uniqueness of Levi-Civita connections in classical differential geometry yields the Koszul formula, and it turns out that this formula actually defines a connection which is torsionless and compatible with the metric.
The goal of this article is to demonstrate a noncommutative analogue of this proof under some reasonable assumptions (see Theorem \ref{koszul21stjuly} and Theorem \ref{existenceuniqueness}). Connections in noncommutative geometry have been studied from several viewpoints. In \cite{dubois,dubois2}, covariant derivatives were studied on a certain class of modules of derivations of a noncommutative algebra, as well as the notion of bimodule connections (see for these also \cite{beggsmajidbook} and references therein). Now in classical differential geometry, the Riemannian metric and connections live more naturally at the level of vector fields. However, in the context of noncommutative geometry, it seems more natural to work at the level of differential forms, and that is what we do in the present paper. Thus, for a (possibly) noncommutative algebra $\mathcal{A}$ and the bimodule of one-forms $\mathcal{E}$ coming from a differential calculus, a (right) connection on $ \mathcal{E} $ will be a $\mathbb{C}$-linear map $ \nabla: \mathcal{E} \rightarrow \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ satisfying the Leibniz rule for the right multiplication of elements in $\mathcal{A}$. Our approach nevertheless allows us to make contact with both of these viewpoints. In Section \ref{section7}, we prove that the Levi-Civita connection that we obtain in Theorem \ref{existenceuniqueness} is a bimodule connection with respect to a canonical symmetrization map obtained from natural assumptions. In a companion article \cite{article4}, we prove that our assumptions allow us to have a sufficiently large $ \mathcal{Z}( \mathcal{A} ) $-bimodule of derivations on the $\mathcal{A}$-bimodule $\mathcal{E}$ of forms so that we can define covariant derivatives and recover a Koszul formula on this $\mathcal{Z}( \mathcal{A} )$-bimodule. Let us discuss the plan of the article. In Section \ref{13thjuly20191}, we recall the definitions of a differential calculus and of connections on it.
In Section \ref{section3}, we show that if the bimodule $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ ( $ \mathcal{E} $ being the space of one-forms coming from a differential calculus ) admits a splitting into symmetric and antisymmetric $2$-tensors, then $ \mathcal{E} $ admits a torsionless connection which is canonically related to the Grassmann connection. As a result, we have a symmetrization map $ \sigma $ on the bimodule $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $. This allows us to define the notion of a pseudo-Riemannian metric and study its properties in Section \ref{section4}. Consequently, in Section \ref{section5}, we define the metric-compatibility condition of a connection on the center of the module $\mathcal{E}$ and prove a Koszul formula for a torsionless and metric-compatible (on the center) connection for a bilinear pseudo-Riemannian metric. Under an additional assumption (see Theorem \ref{existenceuniqueness}) made in Section \ref{section6}, we prove the existence and uniqueness of a torsionless and metric-compatible (on the whole of $\mathcal{E}$) connection as an application of the Koszul formula proved in Section \ref{section5}. In Section \ref{section7}, as a genuine application of the Koszul formula, we prove that our Levi-Civita connection is indeed a bimodule connection. Finally, in Section \ref{section8}, we construct a spectral triple ( see \cite[Def.~2, page 546]{connes} ) for the fuzzy sphere, prove the existence of the Levi-Civita connection, and compute the scalar curvature for a canonical bilinear pseudo-Riemannian metric. We fix some notation. Throughout the article, $ \mathcal{A} $ will denote a complex algebra and $ \mathcal{Z}( \mathcal{A} ) $ will denote its center. The tensor product over the complex numbers $ \mathbb{C} $ is denoted by $ \otimes_{\mathbb{C}} $ while the notation $\otimes_{\mathcal{A}}$ will denote the tensor product over the algebra $ \mathcal{A}$.
For a subset $ S $ of a right $ \mathcal{A} $-module $ \mathcal{E}$, $S\mathcal{A} $ will denote its right $\mathcal{A}$-linear span: $ S\mathcal{A} = {\rm span} \{ sa: s \in S, ~ a \in \mathcal{A} \}$. We will say that a subset $ S $ of a right $ \mathcal{A} $-module $ \mathcal{E} $ is right $ \mathcal{A} $-total in $ \mathcal{E} $ if the right $\mathcal{A}$-linear span of $ S $ equals $ \mathcal{E}$. For $\mathcal{A}-\mathcal{A}$-bimodules $ \mathcal{E} $ and $ \mathcal{F}, $ the symbol $ {\rm Hom}_\mathcal{A} ( \mathcal{E}, \mathcal{F} ) $ will denote the set of all right $ \mathcal{A} $-linear maps from $ \mathcal{E} $ to $ \mathcal{F} $. Similarly, $ {}_\mathcal{A} {\rm Hom} ( \mathcal{E}, \mathcal{F} ) $ will denote the set of all left $\mathcal{A}$-linear maps from $\mathcal{E}$ to $\mathcal{F}.$ In particular, we will use the shorthand notation $ \mathcal{E}^* = {\rm Hom}_\mathcal{A} ( \mathcal{E}, \mathcal{A} )$. For $\mathcal{A}-\mathcal{A}$-bimodules $ \mathcal{F} $ and $ \mathcal{F}^\prime, $ let us spell out the left and right $ \mathcal{A} $-module structures for $ {\rm Hom}_\mathcal{A} ( \mathcal{F}, \mathcal{F}^\prime ) $ and $ {}_\mathcal{A} {\rm Hom} ( \mathcal{F}, \mathcal{F}^\prime ) $. The bimodule multiplications on $ {\rm Hom}_\mathcal{A} ( \mathcal{F}, \mathcal{F}^\prime ) $ and on $ {}_\mathcal{A} {\rm Hom} ( \mathcal{F}, \mathcal{F}^\prime ) $ are respectively: \begin{align} ( a. T ) ( f ) &= a T ( f ) \in \mathcal{F}^\prime, \quad ( T. a ) ( f ) = T ( a f ), \qquad a \in \mathcal{A}, f \in \mathcal{F}, T \in {\rm Hom}_\mathcal{A} ( \mathcal{F}, \mathcal{F}^\prime ); \nonumber \\ ( a. T ) ( f ) &= T ( f a ), \quad ( T. a ) ( f ) = T ( f ). a, \qquad a \in \mathcal{A}, f \in \mathcal{F}, T \in {}_\mathcal{A} {\rm Hom} ( \mathcal{F}, \mathcal{F}^\prime ).
\label{22ndjuly20191} \end{align} \section{Differential calculus and connections on one-forms} \label{13thjuly20191} As already mentioned, in the context of noncommutative geometry, it is more natural to work at the level of differential forms and that is what we do here. In this section, we recall the definition of connections on the space of one-forms coming from a differential calculus. This is followed by the notion of torsion. Then we define the notion of pseudo-Riemannian metric and compatibility of a connection on one-forms under some assumptions on the differential calculus and the pseudo-Riemannian metric. \begin{dfn} Suppose $\mathcal{A}$ is an algebra over $\mathbb{C}.$ A differential calculus on $\mathcal{A}$ is a pair $ ( \Omega ( \mathcal{A} ), d ) $ such that the following conditions hold: \begin{itemize} \item[1.] $\Omega ( \mathcal{A} )$ is graded: $ \Omega ( \mathcal{A} ) = \oplus_{ j \geq 0 } \Omega^j ( \mathcal{A} ), $ where $ \Omega^0 ( \mathcal{A} ) = \mathcal{A} $ and $ \Omega^j ( \mathcal{A} ) $ are $\mathcal{A}-\mathcal{A}$-bimodules. Thus, $ \Omega ( \mathcal{A} ) $ is an $ \mathcal{A} - \mathcal{A} $-bimodule. \item[2.] We have a bimodule map $ \wedge: \Omega ( \mathcal{A} ) \otimes_{\mathcal{A}} \Omega ( \mathcal{A} ) \rightarrow \Omega ( \mathcal{A} ) $ such that $$ \wedge ( \Omega^j ( \mathcal{A} ) \otimes_{\mathcal{A}} \Omega^k ( \mathcal{A} ) ) \subseteq \Omega^{j + k} ( \mathcal{A} ) .$$ \item[3.] We have a map $ d: \Omega^j ( \mathcal{A} ) \rightarrow \Omega^{j + 1} ( \mathcal{A} ) $ such that $$ d^2 = 0 \qquad \textup{and} \qquad d ( \omega \wedge \eta ) = d \omega \wedge \eta + ( - 1 )^{{\rm deg} ( \omega ) } \omega \wedge d \eta .$$ \item[4.] $ \Omega^j ( \mathcal{A} ) $ is the right $ \mathcal{A} $-linear span of elements of the form $ d a_0 \wedge d a_1 \wedge \cdots \wedge d a_{j - 1} $.
\end{itemize} \end{dfn} \begin{assu} \textup{ Throughout the present paper, the notation $ \mathcal{E} $ will stand for the space of one-forms ${\Omega}^1 ( \mathcal{A} )$ of a differential calculus. It will also be assumed that $ \mathcal{E} $ is a finitely generated projective right $ \mathcal{A} $-module. } \end{assu} \begin{dfn} \label{rLr} Let $ ( \Omega ( \mathcal{A} ), d ) $ be a differential calculus on $\mathcal{A}.$ A (right) connection on $\mathcal{E} := {\Omega}^1 ( \mathcal{A} )$ is a ${\mathbb C}$-linear map $\nabla :\mathcal{E} \rightarrow \mathcal{E} \otimes_{\mathcal{A}} {\Omega}^1 ( \mathcal{A} )$ satisfying the Leibniz rule $$ \nabla(\omega a)=\nabla(\omega)a + \omega \otimes_{\mathcal{A}} da$$ for all $ \omega \in \mathcal{E}, a \in \mathcal{A} $. \end{dfn} The assumption that $\mathcal{E}$ is finitely generated and projective is crucial in the present paper. In fact, by Corollary 8.2 of \cite{cuntz}, a (right) module admits a connection if and only if it is projective. Let us recall the construction of the Grassmann connection $ \nabla^{Gr} $. Since $ \mathcal{E} $ is finitely generated and projective as a right $ \mathcal{A} $-module, there exists a natural number $ n $ and an idempotent $ p \in M_n ( \mathcal{A} ) $ such that $ p ( \mathcal{A}^n ) = \mathcal{E} $. If $ \{ e_j: j = 1, \cdots, n \} $ is a basis of the free right $ \mathcal{A} $-module $ \mathcal{A}^n, $ then the elements $ \{ \Phi_j := p ( e_j ) : j = 1, \cdots, n \} $ form a ``frame'' ( in the sense of Rieffel, \cite{rieffelgh} ) of $ \mathcal{E} $. In particular, $ {\rm Span}_{\mathbb{C}} \{ \Phi_j \} $ is right $\mathcal{A}$-total in $ \mathcal{E} $. Let $ \eta $ be an element in $ \mathcal{E} $.
Then there exist elements $ \{ a_j: j = 1, \cdots, n \} $ in $ \mathcal{A} $ such that $ \eta = \sum\nolimits_j \Phi_j a_j $ and the Grassmann connection $ \nabla^{Gr} $ is defined to be: $$ \nabla^{Gr} ( \eta ) = \sum\nolimits_j \Phi_j \otimes_{\mathcal{A}} d a_j .$$ It is well-known that the set of all connections on $ \mathcal{E} $ is an affine space: any two right connections on $ \mathcal{E} $ differ by an element of $ {\rm Hom}_\mathcal{A} ( \mathcal{E}, \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ) $. \section{Existence of a torsionless connection} \label{section3} We next recall the notion of the torsion of a connection and show the existence of a torsionless connection on a finitely generated projective module $ \mathcal{E} $ in the presence of a splitting of the right $ \mathcal{A} $-module $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $. \begin{dfn} The torsion of a connection $ \nabla: \mathcal{E} \rightarrow \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ is the right $\mathcal{A}$-linear map $$ T_\nabla:= \wedge \circ \nabla + d: \mathcal{E} \rightarrow {\Omega}^2( \mathcal{A} ) .$$ A connection $\nabla$ is called torsionless if $ T_\nabla = 0.$ \end{dfn} The torsion of the Grassmann connection $ \nabla^{Gr} $ defined in Section \ref{13thjuly20191} is in general non-zero. Indeed, if $ \eta = \sum\nolimits_j \Phi_j a_j $ as in Section \ref{13thjuly20191}, then \begin{align*} T_{\nabla^{Gr}} ( \eta ) & = \wedge \nabla^{Gr} ( \eta ) + d ( \eta ) \\ &= \sum\nolimits_j \Phi_j \wedge d a_j + \sum\nolimits_j d ( \Phi_j ) a_j - \sum\nolimits_j \Phi_j \wedge d a_j\\ &= \sum\nolimits_j d ( \Phi_j ) a_j. \end{align*} \begin{rmrk} Let $ M $ be a manifold and $\mathcal{A}$ the algebra $ C^\infty ( M ) $. Let us consider the classical differential calculus $ ( \Omega ( \mathcal{A} ), d ) $ where $d$ is the de Rham differential and $ \Omega ( \mathcal{A} ) $ the usual space of forms. Thus, $\mathcal{E}$ is the usual space of one-forms.
It can be easily checked that the differential calculus $ ( \Omega ( \mathcal{A} ), d ) $ is tame in the sense of Definition 2.2 of \cite{article4}. Thus, we can apply Proposition 5.1 of \cite{article4} to conclude that our definition coincides with the usual definition of a torsionless connection in the classical case. \end{rmrk} \begin{thm} \label{torsionless} Suppose the short exact sequence of right $\mathcal{A}$-modules $$ 0 \rightarrow {\rm Ker} ( \wedge ) \rightarrow \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} \rightarrow {\rm Ran}(\wedge) = {\Omega}^2( \mathcal{A} ) \rightarrow 0 $$ splits. Then there exists a torsionless connection $ \nabla_0 $ on $ \mathcal{E} $. \end{thm} \noindent {\bf Proof:} By our assumption, there is a right $\mathcal{A}$-submodule $ \mathcal{F} $ of $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ and a right $ \mathcal{A} $-module isomorphism $ Q: \mathcal{F} \rightarrow {\Omega}^2( \mathcal{A} ) $ such that $ Q ( \beta ) = \wedge ( \beta ) $ for all $ \beta \in \mathcal{F}.$ We define $ \nabla_0: \mathcal{E} \rightarrow \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ by the formula: $$ \nabla_0 = \nabla^{Gr} - Q^{-1} ( T_{\nabla^{Gr}} ). $$ Then $ \nabla_0 $ is a connection since for all $ \eta \in \mathcal{E} $ and for all $ a \in \mathcal{A}, $ we have \begin{align*} \nabla_0 ( \eta a ) &= \nabla^{Gr} ( \eta a ) - Q^{-1} ( T_{\nabla^{Gr}} ( \eta a ) ) = \nabla^{Gr} ( \eta ) a + \eta \otimes_{\mathcal{A}} da - Q^{-1} ( T_{\nabla^{Gr}} ( \eta ) ) a\\ &= ( \nabla^{Gr} ( \eta ) - Q^{-1} ( T_{\nabla^{Gr}} ( \eta ) ) ) a + \eta \otimes_{\mathcal{A}} da = \nabla_0 ( \eta ) a + \eta \otimes_{\mathcal{A}} da, \end{align*} where we have used the fact that $ Q $ is a right $ \mathcal{A} $-linear map. Finally, $ \nabla_0 $ is torsionless as \begin{align*} \wedge \nabla_0 ( \eta ) + d \eta &= \wedge \nabla^{Gr} ( \eta ) - \wedge Q^{-1} ( T_{\nabla^{Gr}} ( \eta ) ) + d \eta\\ &= T_{\nabla^{Gr}} ( \eta ) - T_{\nabla^{Gr}} ( \eta ) = 0.
\end{align*} This finishes the proof of the theorem. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{dfn} \label{16thjuly201923} Suppose $ \mathcal{E} $ satisfies the hypothesis of Theorem \ref{torsionless}. We will denote by the symbol $ P_{\rm sym} $ the idempotent in $ {\rm Hom}_\mathcal{A} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}, \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ) $ with image $ {\rm Ker} ( \wedge ) $ and kernel $ \mathcal{F} $. Moreover, $ \sigma $ will be the map $$ \sigma = 2 P_{\rm sym} - 1 .$$ \end{dfn} Let us also note that $ \sigma^2 = \mbox{id}_{\mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}}$. Thus, we have $$ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} = {\rm Ker} ( \wedge ) \oplus \mathcal{F} $$ where $ {\rm Ker} ( \wedge ) = {\rm Ran} ( P_{\rm sym} ) $ and $ \mathcal{F} = {\rm Ran} ( 1 - P_{\rm sym} ) $. Also, $ \mathcal{F} $ is isomorphic to $ {\Omega}^2( \mathcal{A} ) $ as right $ \mathcal{A} $-modules via a right $ \mathcal{A} $-linear isomorphism $ Q: \mathcal{F} \rightarrow {\Omega}^2( \mathcal{A} ) $. In fact, $ Q = \wedge $ on $ \mathcal{F} $. We will need to define the action of $ \mathcal{E}^* \otimes_{\mathcal{A}} \mathcal{E}^* $ on the space of two-forms. For that, let us recall that the map $ \wedge $ is an isomorphism from $ \mathcal{F} = {\rm Ran} ( 1 - P_{\rm sym} ) $ onto $ \Omega^2 $. \begin{dfn} \label{29thjandefn} Suppose $ \phi, \psi $ are elements of $ \mathcal{E}^* $ and let $ W $ be an element of $ \Omega^2 $. We define $$ ( \phi \otimes_{\mathcal{A}} \psi ) W = 2 ( \phi \otimes_{\mathcal{A}} \psi ) \beta ,$$ where $ \beta $ is the unique element in $ \mathcal{F} = {\rm Ran} ( 1 - P_{\rm sym} ) $ such that $ W = \wedge \beta $. \end{dfn} Here the factor $2$ is just a normalization factor in the product $\wedge$.
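As an illustration ( in the classical calculus on a manifold, where $ \sigma $ is the flip map and $ P_{\rm sym} = \frac{1}{2} ( 1 + \sigma ) $ ), consider $ W = dx \wedge dy $. The unique antisymmetric representative is $ \beta = \frac{1}{2} ( dx \otimes_{\mathcal{A}} dy - dy \otimes_{\mathcal{A}} dx ) $, since $ \wedge \beta = \frac{1}{2} ( dx \wedge dy - dy \wedge dx ) = dx \wedge dy $, and therefore
$$ ( \phi \otimes_{\mathcal{A}} \psi ) ( dx \wedge dy ) = 2 ( \phi \otimes_{\mathcal{A}} \psi ) ( \beta ) = \phi ( dx ) \psi ( dy ) - \phi ( dy ) \psi ( dx ) , $$
the familiar determinant-type pairing; the factor $ 2 $ exactly compensates the $ \frac{1}{2} $ appearing in $ \beta $.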
Let us note the following consequence of the definition: \begin{lmma} \label{29thjanlemma1} Suppose $ \phi, \psi, W, \beta $ are as in Definition \ref{29thjandefn} and $ \gamma $ an element of $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ such that $ \wedge \gamma = W $. Then $$ ( \phi \otimes_{\mathcal{A}} \psi ) W = 2 ( \phi \otimes_{\mathcal{A}} \psi ) ( 1 - P_{\rm sym} ) ( \gamma ). $$ \end{lmma} \noindent {\bf Proof:} Since $ \wedge ( \gamma - \beta ) = 0$, we have $\gamma - \beta ~ \in ~ {\rm Ker} ( \wedge ) = {\rm Ran} ( P_{\rm sym} ) $, so that $$ ( 1 - P_{\rm sym} ) ( \gamma - \beta ) = 0 .$$ Therefore, \begin{align*} 2 ( \phi \otimes_{\mathcal{A}} \psi ) ( 1 - P_{\rm sym} ) ( \gamma ) - 2 ( \phi \otimes_{\mathcal{A}} \psi ) ( \beta ) &= 2 ( \phi \otimes_{\mathcal{A}} \psi ) [ ( 1 - P_{\rm sym} ) ( \gamma ) - ( 1 - P_{\rm sym} ) ( \beta ) ]\\ &= 2 ( \phi \otimes_{\mathcal{A}} \psi ) ( 1 - P_{\rm sym} ) ( \gamma - \beta ) = 0. \end{align*} This proves the lemma. \mbox{}\hfill $\sqare$\vspace{1ex} \section{Pseudo-Riemannian metrics on centered bimodules} \label{section4} We now recall the notion of metric on a bimodule and work out some additional properties on a class of bimodules that we shall use in the rest of the paper. \begin{dfn} \label{metricdefn} Suppose $\mathcal{E}$ is an $\mathcal{A} - \mathcal{A} $-bimodule satisfying the hypothesis of Theorem \ref{torsionless} and let $\sigma$ be the corresponding map as defined in Definition \ref{16thjuly201923}. A pseudo-Riemannian metric $ g $ on $ \mathcal{E} $ is an element of $ {\rm Hom}_{\mathcal{A}} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}, \mathcal{A} ) $ such that \begin{itemize} \item[(i)] $g$ is symmetric, that is, $g \sigma = g$. \item[(ii)] The map $ \mathcal{E} \rightarrow \mathcal{E}^*, ~ e \mapsto g ( e \otimes_{\mathcal{A}} - ) $ is an isomorphism of right $ \mathcal{A}$-modules.
\end{itemize} We say that a pseudo-Riemannian metric $g$ is a pseudo-Riemannian bilinear metric if, in addition, $ g $ is an $ \mathcal{A} - \mathcal{A}$-bimodule map. In this case, the map $ g ( e \otimes_{\mathcal{A}} - ) $ is bilinear as well. \end{dfn} When $M$ is a manifold and $\mathcal{A} = C^\infty ( M )$ is the algebra of smooth complex-valued functions on $M,$ a metric $g$ is a choice of a smooth positive definite symmetric bilinear form on the tangent ( or cotangent ) bundle. There are two equivalent ways to extend this map to the complexified spaces. The first way is to extend $g$ as a sesquilinear pairing on the module of one-forms, thus linear in one variable and conjugate-linear in the other. In this paper, we have taken the second path, namely, we extend $g$ as a complex bilinear form, i.e., a $C^\infty ( M ) $-bilinear map on $ \Omega^1 ( M ) \otimes_{C^\infty ( M )} \Omega^1 ( M )$. In the noncommutative case, these two approaches do not remain equivalent. Since we do not deal with $\ast$-algebras in this paper, we do not need any other compatibility with the $\ast$-structure as in \cite[Sect.~3]{pseudo}, or \cite[Def.~8.30]{beggsmajidbook}, or \cite{frolich}. We also mention the work \cite{heckenberger} which assumes the left $\mathcal{A}$-linearity of $g$ ( as opposed to right $\mathcal{A}$-linearity ) and derives the existence and uniqueness of Levi-Civita connections on three families of Hopf algebras. In \cite[App.~B]{heckenberger} there is an example where the uniqueness of the Levi-Civita connection is lost when working with a sesquilinear metric. Our main result ( Theorem \ref{existenceuniqueness} ) assumes both left and right $\mathcal{A}$-linearity of $g$, a condition which is restrictive. The conformally deformed metrics studied following the work of Connes and Moscovici are only right $\mathcal{A}$-linear.
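To make the difference concrete in the simplest possible case, take $ M = \mathbb{R} $ with coordinate $ x $ and normalize $ g ( dx \otimes dx ) = 1 $ ( a choice made only for this illustration ). For smooth complex-valued functions $ f, h $ one gets
$$ g_{\rm bil} ( f \, dx \otimes h \, dx ) = f h , \qquad g_{\rm sesq} ( f \, dx \otimes h \, dx ) = \overline{f} \, h , $$
which agree on real forms but differ on complexified ones; in the noncommutative setting the two extensions lead to genuinely inequivalent notions of metric.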
The machinery developed in the present paper ( as well as \cite{article1} ) acts as a stepping stone for the existence and uniqueness ( for which we refer to \cite{article2} ) of Levi-Civita connections for right $\mathcal{A}$-linear pseudo-Riemannian metrics on a differential calculus, satisfying the hypotheses of Theorem \ref{existenceuniqueness}. In the present paper we shall be interested in a particular kind of bimodule, called centered. Now, the center of an $ \mathcal{A} - \mathcal{A} $-bimodule $ \mathcal{E} $ is defined to be the set $$ \mathcal{Z} ( \mathcal{E} ) = \{ e \in \mathcal{E}: e a = a e ~ \forall ~ a ~ \in \mathcal{A} \} .$$ It is easy to see that $ \mathcal{Z} ( \mathcal{E} ) $ is a $ \mathcal{Z} ( \mathcal{A} ) $-bimodule. The bimodule $ \mathcal{E} $ is called centered if $ \mathcal{Z} ( \mathcal{E} ) $ is right $ \mathcal{A} $-total in $ \mathcal{E} $, that is, the right $\mathcal{A}$-linear span of $ \mathcal{Z} ( \mathcal{E} ) $ equals $ \mathcal{E}$. Let us clarify that the property of being a centered bimodule is actually stronger than being a central bimodule in the sense of \cite{dubois}, from which we have the following: \begin{dfn} Suppose $ \mathcal{A} $ is a unital algebra and $\mathcal{F}$ is an $\mathcal{A}$-$\mathcal{A}$-bimodule. Then $\mathcal{F}$ is called a central bimodule if $ e. a = a. e $ for all $e$ in $\mathcal{F}$ and for all $a$ in $ \mathcal{Z}( \mathcal{A} ) $. \end{dfn} It is easy to see that a centered bimodule is a central bimodule. Indeed, if $ \mathcal{E} $ is a centered bimodule, then for any $e \in \mathcal{E}$ there exists a natural number $ n, $ elements $ f_j \in \mathcal{Z}( \mathcal{E} ) $ and $ b_j \in \mathcal{A} $ such that $ e = \sum\nolimits_j f_j b_j $. Then \begin{equation} \label{16thjuly201921}a. e = \sum\nolimits_j a . f_j. b_j = \sum\nolimits_j f_j. a. b_j = \sum\nolimits_j f_j. b_j. a = e. a \end{equation} for all $e \in \mathcal{E}$ and $a \in \mathcal{Z}( \mathcal{A} )$.
Thus, $ \mathcal{E} $ is central. \begin{lmma} \label{lemma2} Suppose $g$ is a pseudo-Riemannian metric on a centered $\mathcal{A} - \mathcal{A} $-bimodule $\mathcal{E}$ satisfying the hypothesis of Theorem \ref{torsionless} with $\sigma$ the corresponding map as defined in Definition \ref{16thjuly201923}. Moreover, assume $$ \sigma ( \omega \otimes_{\mathcal{A}} \eta ) = \eta \otimes_{\mathcal{A}} \omega \qquad \forall ~ \omega, \eta \in \mathcal{Z}( \mathcal{E} ) .$$ Then we have the following: \begin{itemize} \item[1.] If either $ \omega $ or $ \eta $ belongs to $ \mathcal{Z}( \mathcal{E} ), $ then $ \sigma ( \omega \otimes_{\mathcal{A}} \eta ) = \eta \otimes_{\mathcal{A}} \omega $. \item[2.] If either of $ \omega $ or $ \eta $ belongs to $ \mathcal{Z}( \mathcal{E} ),$ then \begin{equation} \label{gsigmaisg} g ( \omega \otimes_{\mathcal{A}} \eta ) = g ( \eta \otimes_{\mathcal{A}} \omega ). \end{equation} \item[3.] If $ g $ is a pseudo-Riemannian bilinear metric, then $ g ( \omega \otimes_{\mathcal{A}} \eta ) \in \mathcal{Z} ( \mathcal{A} ) $ if both $ \omega $ and $ \eta $ are in $ \mathcal{Z} ( \mathcal{E} ).$ \item[4.] If $ a $ is an element of $ \mathcal{Z}( \mathcal{A} ), $ then $ da \in \mathcal{Z}( \mathcal{E} ) $. In particular, if $ \omega, \eta \in \mathcal{Z}( \mathcal{E} ) $ and $ g $ is a pseudo-Riemannian bilinear metric, then \begin{equation} \label{22ndjuly20192} d \, g ( \omega \otimes_{\mathcal{A}} \eta ) \in \mathcal{Z}( \mathcal{E} ). \end{equation} \end{itemize} \end{lmma} \noindent {\bf Proof:} The first three assertions were proved in Lemma 2.8 of \cite{article1}. As for 4., for all $ b \in \mathcal{A}, $ we have $$ 0 = d ( a. b ) - d ( b. a ) = ( da. b + a. db ) - ( db. a + b. da ) = da. b - b. da, $$ where we have used \eqref{16thjuly201921} to cancel $ a. db $ with $ db. a $. This proves that $ da \in \mathcal{Z}( \mathcal{E} ) $.
By part 3., $ g ( \omega \otimes_{\mathcal{A}} \eta ) \in \mathcal{Z}( \mathcal{A} ) $ and hence $ d ( g ( \omega \otimes_{\mathcal{A}} \eta ) ) \in \mathcal{Z}( \mathcal{E} ) $. This proves \eqref{22ndjuly20192}. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{rmrk} \label{21stjuly201910} It is easy to see that for $ \xi \in \mathcal{E}, $ the condition $ g ( \theta \otimes_{\mathcal{A}} \xi ) = 0 $ for all $ \theta \in \mathcal{Z}( \mathcal{E} ) $ implies that $ \xi = 0 $. \end{rmrk} \section{The Koszul formula on one-forms on the center of the module} \label{section5} Throughout this section, we will assume that $ g $ is a bilinear pseudo-Riemannian metric so that the assertions of Lemma \ref{lemma2} are valid. Moreover, $ \nabla_0 $ will denote the torsionless connection of Theorem \ref{torsionless}. \subsection{Metric compatibility of a connection on the center} Let $ g $ be a pseudo-Riemannian metric on $\mathcal{E}$ and $ \nabla $ a connection on $ \mathcal{E} $. It can be checked that the map \begin{align*} & \mathcal{Z}( \mathcal{E} ) \otimes_{\mathcal{Z}( \mathcal{A} )} \mathcal{Z}( \mathcal{E} ) \rightarrow \mathcal{E}, \\ & \omega \otimes_{\mathcal{Z}( \mathcal{A} )} \eta \, \mapsto \, (g \otimes_{\mathcal{A}} \mbox{id} ) \Big\{ \big[\sigma_{23}(\nabla(\omega)\otimes_{\mathcal{A}} \eta )\big] + \omega \otimes_{\mathcal{A}} \nabla ( \eta ) \Big\} \end{align*} is well defined. Indeed, for $ a \in \mathcal{Z}( \mathcal{A} ), $ we have $\sigma_{23}(\omega\otimes_{\mathcal{A}} da\otimes_{\mathcal{A}} \eta)=\omega\otimes_{\mathcal{A}} \eta\otimes_{\mathcal{A}} da$, $ a \eta = \eta a $ and $ a \nabla ( \eta ) = \nabla ( \eta ) a $ ( by \eqref{16thjuly201921} ). 
Using these, we get \begin{align*} \sigma_{23} (\nabla(\omega)a\otimes_{\mathcal{A}} \eta & + \omega \otimes_{\mathcal{A}} da \otimes_{\mathcal{A}} \eta ) + \omega a \otimes_{\mathcal{A}} \nabla ( \eta ) \\ & = \sigma_{23}(\nabla(\omega) \otimes_{\mathcal{A}} a \eta + \omega \otimes_{\mathcal{A}} da \otimes_{\mathcal{A}} \eta ) + \omega \otimes_{\mathcal{A}} a \nabla ( \eta ) \\ & = \sigma_{23}(\nabla(\omega)\otimes_{\mathcal{A}} a \eta ) + \sigma_{23} ( \omega \otimes_{\mathcal{A}} d a \otimes_{\mathcal{A}} \eta ) + \omega \otimes_{\mathcal{A}} \nabla( \eta ) a \\ & = \sigma_{23}(\nabla(\omega) \otimes_{\mathcal{A}} a \eta ) + \omega \otimes_{\mathcal{A}} \eta \otimes_{\mathcal{A}} da + \omega \otimes_{\mathcal{A}} \nabla ( \eta a ) - \omega \otimes_{\mathcal{A}} \eta \otimes_{\mathcal{A}} da \\ & = \sigma_{23}(\nabla(\omega)\otimes_{\mathcal{A}} a \eta ) + \omega \otimes_{\mathcal{A}} \nabla ( a \eta ). \end{align*} This proves the well-definedness. \begin{dfn} \label{compatibilitycenter} We say that a connection $\nabla$ on $\mathcal{E}$ is compatible with $g$ on $ \mathcal{Z}( \mathcal{E} ) $ if for all $ \omega, \eta \in \mathcal{Z}( \mathcal{E} ), $ the following equation holds: $$ (g \otimes_{\mathcal{A}} {\rm id} ) \Big\{ \big[\sigma_{23}(\nabla(\omega)\otimes_{\mathcal{A}} \eta )\big] + \omega \otimes_{\mathcal{A}} \nabla ( \eta ) \Big\} = d ( g ( \omega \otimes_{\mathcal{A}} \eta ) ) . $$ \end{dfn} \begin{rmrk} Given that $\nabla(\omega) \otimes_{\mathcal{A}} \eta \in \mathcal{E} \otimes_{\mathcal{A}} \Omega^1 \otimes_{\mathcal{A}} \mathcal{E}$ and $\omega \otimes_{\mathcal{A}} \nabla ( \eta ) \in \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} \otimes_{\mathcal{A}} \Omega^1, $ we can write the metric compatibility condition on the center in another equivalent way.
The condition is: $$( {\rm id} \otimes_{\mathcal{A}} g )\sigma_{12}(\nabla(\omega) \otimes_{\mathcal{A}} \eta ) + ( g \otimes_{\mathcal{A}} {\rm id} ) ( \omega \otimes_{\mathcal{A}} \nabla ( \eta ) ) = d ( g ( \omega \otimes_{\mathcal{A}} \eta ) ) ~ \forall \omega, \eta \in \mathcal{Z}( \mathcal{E} ) .$$ Here, we have used the fact that $ g $ is bilinear so that $ ( {\rm id} \otimes_{\mathcal{A}} g ) $ is well-defined. The proof of this equation is a straightforward application of the facts that $ \sigma ( e \otimes_{\mathcal{A}} f ) = f \otimes_{\mathcal{A}} e $ if either $e$ or $ f $ belongs to $\mathcal{Z}( \mathcal{E} )$ and that we can write $\nabla ( e ) = \sum\nolimits_{j} f_j \otimes_{\mathcal{A}} \omega_j $ with $ \omega_j \in \mathcal{Z}( \mathcal{E} ) $ (see Part 3. in Lemma \ref{20thaugust20191}). \end{rmrk} \subsection{Consequences of the zero-torsion condition} \begin{nota}\label{notasweed} \textup{ For $ \nabla $ a torsionless connection on a centered bimodule $\mathcal{E}$ we shall use Sweedler-like notation to write \begin{equation}\label{sweedler} \nabla ( \omega ) = \sum\nolimits_j \omega_{(0)}{}_j \otimes_{\mathcal{A}} \omega_{(1)}{}_j =: \omega_{(0)} \otimes_{\mathcal{A}} \omega_{(1)} . \end{equation} Likewise, for the torsionless connection $ \nabla_0 $ of Theorem \ref{torsionless}, we will write \begin{equation}\label{sweedler0} \nabla_0 ( \omega ) = \sum\nolimits_j {}_{(0)} \omega{}_j \otimes_{\mathcal{A}} {}_{(1)} \omega{}_j =: {}_{(0)} \omega \otimes_{\mathcal{A}} {}_{(1)} \omega. \end{equation} One can always arrange that both $\omega_{(1)} $ and $ {}_{(1)} \omega $ belong to $\mathcal{Z}( \mathcal{E} )$ ( see Parts 3. and 4. of Lemma \ref{20thaugust20191} ).
} \end{nota} \begin{lmma} \label{13thjuly20192} Suppose $ \nabla $ is a torsionless connection on a centered bimodule $\mathcal{E}$ and $ \nabla_0 $ is the torsionless connection of Theorem \ref{torsionless}, for which we use Notation \ref{notasweed} with both $\omega_{(1)} $ and $ {}_{(1)} \omega $ in $\mathcal{Z}( \mathcal{E} )$. Then for all $ \omega, \eta, \theta $ in $ \mathcal{E}, $ the following equations hold: \begin{multline*} g ( \eta \otimes_{\mathcal{A}} \omega_{(0)} ) \, g ( \theta \otimes_{\mathcal{A}} \omega_{(1)} ) - g ( \eta \otimes_{\mathcal{A}} \omega_{(1)} ) \, g ( \theta \otimes_{\mathcal{A}} \omega_{(0)} ) \\ = g ( \eta \otimes_{\mathcal{A}} {}_{(0)} \omega ) \, g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \omega ) - g ( \eta \otimes_{\mathcal{A}} {}_{(1)} \omega ) \, g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \omega ), \end{multline*} \begin{multline*} g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} ) + g ( \omega \otimes_{\mathcal{A}} \eta_{(1)} ) g ( \theta \otimes_{\mathcal{A}} \eta_{(0)} ) \\ = 2 g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} ) - g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \eta ) g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \eta )\\ + g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \eta ) g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \eta ), \end{multline*} \begin{multline*} g ( \eta \otimes_{\mathcal{A}} \theta_{(0)} ) g ( \omega \otimes_{\mathcal{A}} \theta_{(1)} ) - g ( \eta \otimes_{\mathcal{A}} \theta_{(1)} ) g ( \omega \otimes_{\mathcal{A}} \theta_{(0)} ) \\ = g ( \eta \otimes_{\mathcal{A}} {}_{(0)} \theta ) g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \theta ) - g ( \eta \otimes_{\mathcal{A}} {}_{(1)} \theta ) g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \theta ). \end{multline*} \end{lmma} \noindent {\bf Proof:} We will use Lemma \ref{29thjanlemma1}. Suppose $ \omega \in \mathcal{E} $.
Since $ \nabla $ and $ \nabla_0 $ are both torsionless, we have $$ 0 = \wedge ( \nabla ( \omega ) - \nabla_0 ( \omega ) ) .$$ Using Lemma \ref{29thjanlemma1}, this implies that for all $ \phi, \psi \in \mathcal{E}^*, $ we have \begin{align*} 0 &= ( \phi \otimes_{\mathcal{A}} \psi ) \, \wedge \big( \nabla ( \omega ) - \nabla_0 ( \omega ) \big)\\ &= 2 ( \phi \otimes_{\mathcal{A}} \psi ) \, ( 1 - P_{\rm sym} ) \, \big( \nabla ( \omega ) - \nabla_0 ( \omega ) \big)\\ &= ( \phi \otimes_{\mathcal{A}} \psi ) \, ( 1 - \sigma ) \, \big( \nabla ( \omega ) - \nabla_0 ( \omega ) \big)\\ &= ( \phi \otimes_{\mathcal{A}} \psi ) ( \omega_{(0)} \otimes_{\mathcal{A}} \omega_{(1)} - \omega_{(1)} \otimes_{\mathcal{A}} \omega_{(0)} ) - ( \phi \otimes_{\mathcal{A}} \psi ) ( {}_{(0)} \omega \otimes_{\mathcal{A}} {}_{(1)} \omega - {}_{(1)} \omega \otimes_{\mathcal{A}} {}_{(0)} \omega ). \end{align*} Therefore, we obtain $$ ( \phi \otimes_{\mathcal{A}} \psi ) ( \omega_{(0)} \otimes_{\mathcal{A}} \omega_{(1)} - \omega_{(1)} \otimes_{\mathcal{A}} \omega_{(0)} ) = ( \phi \otimes_{\mathcal{A}} \psi ) ( {}_{(0)} \omega \otimes_{\mathcal{A}} {}_{(1)} \omega - {}_{(1)} \omega \otimes_{\mathcal{A}} {}_{(0)} \omega ).$$ Putting $ \phi = g ( \eta \otimes_{\mathcal{A}} - )$ and $\psi = g ( \theta \otimes_{\mathcal{A}} - )$, we obtain the first equation. The other two equations are obtained similarly. \mbox{}\hfill $\sqare$\vspace{1ex} \subsection{A Koszul formula for the Levi-Civita connection on the center} Let $ \nabla $ be a torsionless connection on $ \mathcal{E} $ which is compatible with a pseudo-Riemannian metric on $ \mathcal{Z}( \mathcal{E} ) $ as in Definition \ref{compatibilitycenter}. 
We will still use the Sweedler-type notation $ \nabla ( \omega ) = \omega_{(0)} \otimes_{\mathcal{A}} \omega_{(1)}$ and $\nabla_0 ( \omega ) = {}_{(0)} \omega \otimes_{\mathcal{A}} {}_{(1)} \omega, $ where $ \omega_{(1)}$ and ${}_{(1)} \omega $ belong to $\mathcal{Z}( \mathcal{E} ).$ \begin{thm} \label{koszul21stjuly} Let $ \omega, \eta, \theta \in \mathcal{Z}( \mathcal{E} ) $ and let $ \nabla $ be a torsionless connection on $ \mathcal{E} $ which is compatible, on $ \mathcal{Z}( \mathcal{E} ) $, with a bilinear pseudo-Riemannian metric $ g $. Then the following formula holds: \begin{align} \label{17thjuly20191} 2 g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) & g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} ) \nonumber \\ & = g ( \omega \otimes_{\mathcal{A}} dg ( \eta \otimes_{\mathcal{A}} \theta ) ) - g ( \eta \otimes_{\mathcal{A}} dg ( \theta \otimes_{\mathcal{A}} \omega ) ) + g ( \theta \otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \eta ) ) \nonumber \\ & \quad - g ( \eta \otimes_{\mathcal{A}} {}_{(0)} \omega ) \,\, g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \omega ) + g ( \eta \otimes_{\mathcal{A}} {}_{(1)} \omega ) \,\, g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \omega ) \nonumber \\ & \quad + g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \eta ) \,\, g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \eta ) - g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \eta ) \,\, g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \eta ) \nonumber \\ & \quad - g ( \eta \otimes_{\mathcal{A}} {}_{(0)} \theta ) \,\, g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \theta ) + g ( \eta \otimes_{\mathcal{A}} {}_{(1)} \theta ) \,\, g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \theta ).
\end{align} \end{thm} \noindent {\bf Proof:} Since $ \nabla $ is compatible with $ g $ on $ \mathcal{Z}( \mathcal{E} ), $ ( Definition \ref{compatibilitycenter} ) we have $$ g ( \omega_{(0)} \otimes_{\mathcal{A}} \eta ) \omega_{(1)} + g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) \eta_{(1)} = d ( g ( \omega \otimes_{\mathcal{A}} \eta ) ) .$$ In turn, since $ \omega_{(1)}, \eta_{(1)} \in \mathcal{Z}( \mathcal{E} ), $ this implies $$ \omega_{(1)} g ( \omega_{(0)} \otimes_{\mathcal{A}} \eta ) + \eta_{(1)} g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) = d ( g ( \omega \otimes_{\mathcal{A}} \eta ) ). $$ Applying $ g ( \theta \otimes_{\mathcal{A}} - ) $ to the above equation, we get \begin{equation} \label{13thjuly20194} g ( \theta \otimes_{\mathcal{A}} \omega_{(1)} ) g ( \omega_{(0)} \otimes_{\mathcal{A}} \eta ) + g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} ) g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) = g ( \theta \otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \eta ) ). \end{equation} Replacing $ \omega, \eta, \theta $ by $ \eta, \theta, \omega $ respectively in \eqref{13thjuly20194}, we get \begin{equation} \label{13thjuly20195} g ( \omega \otimes_{\mathcal{A}} \eta_{(1)} ) g ( \eta_{(0)} \otimes_{\mathcal{A}} \theta ) + g ( \omega \otimes_{\mathcal{A}} \theta_{(1)} ) g ( \eta \otimes_{\mathcal{A}} \theta_{(0)} ) = g ( \omega \otimes_{\mathcal{A}} dg ( \eta \otimes_{\mathcal{A}} \theta ) ). \end{equation} Replacing $ \omega, \eta, \theta $ by $ \theta, \omega, \eta $ respectively in \eqref{13thjuly20194}, we get \begin{equation} \label{13thjuly20196} g ( \eta \otimes_{\mathcal{A}} \theta_{(1)} ) g ( \theta_{(0)} \otimes_{\mathcal{A}} \omega ) + g ( \eta \otimes_{\mathcal{A}} \omega_{(1)} ) g ( \theta \otimes_{\mathcal{A}} \omega_{(0)} ) = g ( \eta \otimes_{\mathcal{A}} dg ( \theta \otimes_{\mathcal{A}} \omega ) ). 
\end{equation} By \eqref{13thjuly20194} + \eqref{13thjuly20195} - \eqref{13thjuly20196}, using $ g \sigma = g$ and the fact that $ g (\theta \otimes_{\mathcal{A}} \omega_{(1)} )$, $g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} )$ and $g ( \omega \otimes_{\mathcal{A}} \theta_{(1)} )$, $g ( \eta \otimes_{\mathcal{A}} \theta_{(1)} )$ all belong to $\mathcal{Z}( \mathcal{A} ) $ ( Lemma \ref{lemma2} ), we obtain \begin{align*} & [ g ( \eta \otimes_{\mathcal{A}} \omega_{(0)} ) g ( \theta \otimes_{\mathcal{A}} \omega_{(1)} ) - g ( \eta \otimes_{\mathcal{A}} \omega_{(1)} ) g ( \theta \otimes_{\mathcal{A}} \omega_{(0)} ) ] + [ g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} )\\ & + g ( \omega \otimes_{\mathcal{A}} \eta_{(1)} ) g ( \theta \otimes_{\mathcal{A}} \eta_{(0)} ) ] + [ g ( \eta \otimes_{\mathcal{A}} \theta_{(0)} ) g ( \omega \otimes_{\mathcal{A}} \theta_{(1)} ) - g ( \eta \otimes_{\mathcal{A}} \theta_{(1)} ) g ( \omega \otimes_{\mathcal{A}} \theta_{(0)} ) ]\\ & \quad = g ( \theta \otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \eta ) ) + g ( \omega \otimes_{\mathcal{A}} dg ( \eta \otimes_{\mathcal{A}} \theta ) ) - g ( \eta \otimes_{\mathcal{A}} dg ( \theta \otimes_{\mathcal{A}} \omega ) ).
\end{align*} By Lemma \ref{13thjuly20192}, the left hand side of the above equation coincides with \begin{align*} 2 g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} ) & + g ( \eta \otimes_{\mathcal{A}} {}_{(0)} \omega ) g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \omega ) - g ( \eta \otimes_{\mathcal{A}} {}_{(1)} \omega ) g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \omega )\\ & - g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \eta ) g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \eta ) + g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \eta ) g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \eta )\\ & + g ( \eta \otimes_{\mathcal{A}} {}_{(0)} \theta ) g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \theta ) - g ( \eta \otimes_{\mathcal{A}} {}_{(1)} \theta ) g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \theta ). \end{align*} This proves the theorem. \mbox{}\hfill $\sqare$\vspace{1ex} Let us end this section by rewriting the Koszul formula in an alternative form. With $ \nabla_0 $ the torsionless connection of Theorem \ref{torsionless}, for all $ \omega, \eta, \theta \in \mathcal{Z}( \mathcal{E} ), $ the following identity holds: \begin{align} \label{16thjuly2019koszul} 2 ( g ( \omega \otimes_{\mathcal{A}} - ) & \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( \nabla ( \eta ) ) \nonumber \\ &= g ( \omega \otimes_{\mathcal{A}} dg ( \eta \otimes_{\mathcal{A}} \theta ) ) - g ( \eta \otimes_{\mathcal{A}} dg ( \theta \otimes_{\mathcal{A}} \omega ) ) + g ( \theta \otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \eta ) ) \nonumber \\ & \quad - ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \omega ) \nonumber \\ & \quad + ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \eta ) \nonumber \\ & \quad - ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}}
- ) ) ( 1 - \sigma ) \nabla_0 ( \theta ). \end{align} \section{Levi-Civita connection on a class of centered bimodules: existence} \label{section6} The goal of this section is to prove the existence and uniqueness of Levi-Civita connections for a class of centered bimodules. Let us now state the main result precisely. \begin{thm} \label{existenceuniqueness} Suppose $ ( \Omega^\cdot ( \mathcal{A} ), d ) $ is a differential calculus on $ \mathcal{A} $ such that the following conditions are satisfied: \begin{itemize} \item[1.] The space of one-forms $ \mathcal{E} := {\Omega}^1 ( \mathcal{A} ) $ satisfies $ \mathcal{E} = \mathcal{Z}( \mathcal{E} ) \otimes_{\mathcal{Z}( \mathcal{A} )} \mathcal{A} $. \item[2.] We have $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} = {\rm Ker} ( \wedge ) \oplus \mathcal{F} $ where $ Q: \mathcal{F} \rightarrow {\Omega}^2( \mathcal{A} ) $ is a right $\mathcal{A}$-linear isomorphism as in Theorem \ref{torsionless}. \item[3.] The map $ \sigma $ in Definition \ref{16thjuly201923} satisfies $ \sigma ( \omega \otimes_{\mathcal{A}} \eta ) = \eta \otimes_{\mathcal{A}} \omega $ for all $ \omega, \eta \in \mathcal{Z}( \mathcal{E} ) $. \end{itemize} If $ g $ is a pseudo-Riemannian bilinear metric on $ \mathcal{E}, $ then there exists a unique connection on $ \mathcal{E} $ which is torsionless and compatible with $ g $ on $ \mathcal{Z}( \mathcal{E} ) $ as in Definition \ref{compatibilitycenter}. \end{thm} Throughout this section, we will work under the assumptions of Theorem \ref{existenceuniqueness}. \subsection{A class of centered bimodules and examples} For proving the existence and uniqueness of the Levi-Civita connection, we will need some additional results which we spell out here. \begin{lmma} \label{20thaugust20191} Let $\mathcal{E} = \mathcal{Z}( \mathcal{E} ) \otimes_{\mathcal{Z}( \mathcal{A} )} \mathcal{A} $. Then 1. $ \mathcal{E} $ is centered. 2. $ \mathcal{Z}(\mathcal{E}) $ is both left and right $ \mathcal{A} $-total in $ \mathcal{E} $. 3.
We have a right identification, $$ \mathcal{E} \otimes_\mathcal{A} \mathcal{E} \simeq \mathcal{E} \otimes_{\mathcal{Z}(\mathcal{A})} \mathcal{Z}(\mathcal{E}) , $$ 4. as well as a left one, $$ \mathcal{E} \otimes_\mathcal{A} \mathcal{E} \simeq \mathcal{Z}(\mathcal{E}) \otimes_{\mathcal{Z}(\mathcal{A})} \mathcal{E} . $$ 5. The collection $ \{ \omega \otimes_{\mathcal{A}} \eta : \omega, \eta \in \mathcal{Z}( \mathcal{E} ) \} $ is right $ \mathcal{A} $-total in $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $. 6. The collection $ \{ \omega \otimes_{\mathcal{A}} \eta : \omega, \eta \in \mathcal{Z}( \mathcal{E} ) \} $ is left $\mathcal{A}$-total in $\mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $. \end{lmma} \noindent {\bf Proof:} Most of the statements in the lemma follow from Proposition 2.4 of \cite{article1}. Indeed, that proposition implies that the equality $ \mathcal{E} = \mathcal{Z}( \mathcal{E} ) \otimes_{\mathcal{Z}( \mathcal{A} )} \mathcal{A} $ forces $ \mathcal{E} $ to be centered. Moreover, we have right $\mathcal{A}$-module isomorphisms \begin{equation} \label{20thagust20192} \mathcal{E} \cong \mathcal{A} \otimes_{\mathcal{Z}( \mathcal{A} )} \mathcal{Z}( \mathcal{E} ) \cong \mathcal{Z}( \mathcal{E} ) \otimes_{\mathcal{Z}( \mathcal{A} )} \mathcal{A} \end{equation} via the canonical multiplication maps. The isomorphisms in particular imply that $ \mathcal{Z}( \mathcal{E} ) $ is both left and right $\mathcal{A}$-total in $ \mathcal{E} $. Thus, we have proved parts 1, 2, 3 and 4 of the lemma, and we are left to prove the last two assertions. We only prove part 5 since the proof of part 6 is similar.
If $ e,f $ are elements of $ \mathcal{E}, $ then \eqref{20thagust20192} allows us to conclude that there exist elements $ \omega_j, \eta_k, \theta_l \in \mathcal{Z}( \mathcal{E} ) $ and $ a_j, b_k, c_l \in \mathcal{A}$ such that $$ e = \sum\nolimits_j a_j \omega_j, \quad f = \sum\nolimits_k b_k \eta_k, \quad \omega_j b_k = \sum\nolimits_l c_l \theta_l .$$ Therefore, $$ e \otimes_{\mathcal{A}} f = \sum\nolimits_{j, k} a_j \omega_j \otimes_{\mathcal{A}} b_k \eta_k = \sum\nolimits_{j,k} a_j ( \omega_j b_k ) \otimes_{\mathcal{A}} \eta_k = \sum\nolimits_{j,k,l} a_j c_l \theta_l \otimes_{\mathcal{A}} \eta_k , $$ which belongs to the left $ \mathcal{A} $-linear span of $ \{ \omega \otimes_{\mathcal{A}} \eta: \omega, \eta \in \mathcal{Z}( \mathcal{E} ) \} $. This proves the fifth assertion. \mbox{}\hfill $\sqare$\vspace{1ex} We will also need the following results about the maps $ \sigma, P_{\rm sym} $ and $ Q $. \begin{ppsn} Suppose $ \mathcal{E} $ is an $ \mathcal{A} - \mathcal{A} $-bimodule satisfying the hypotheses of Theorem \ref{existenceuniqueness}. Then we have the following: 1. The map $ \sigma $, and hence $ P_{\rm sym} $, is $\mathcal{A} - \mathcal{A} $-bilinear. 2. The map $ Q: \mathcal{F} \rightarrow {\Omega}^2( \mathcal{A} ) $ is an $\mathcal{A} - \mathcal{A} $-bilinear isomorphism. \end{ppsn} \noindent {\bf Proof:} Since $ \mathcal{E} $ is centered by part 1. of Lemma \ref{20thaugust20191}, Theorem 6.10 of \cite{Skd} implies that there exists a unique $\mathcal{A}-\mathcal{A}$-bimodule isomorphism $\sigma^{{\rm can}}: \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} \rightarrow \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}$ such that $\sigma^{{\rm can}}(\omega \otimes_{\mathcal{A}} \eta)=\eta \otimes_{\mathcal{A}} \omega$ for all $\omega,\eta \in \mathcal{Z}( \mathcal{E} )$. Thus, it suffices to show that $ \sigma = \sigma^{{\rm can}}.$ This follows once we show that $ \sigma $ is left $ \mathcal{A} $-linear.
Let $ \omega, \eta \in \mathcal{Z}( \mathcal{E} ) $ and $ a, b \in \mathcal{A} $. Then $$ a \sigma ( \omega \otimes_{\mathcal{A}} \eta b ) = a \sigma ( \omega \otimes_{\mathcal{A}} \eta ) b = a \eta \otimes_{\mathcal{A}} \omega b = \eta \otimes_{\mathcal{A}} \omega a b = \sigma ( a \omega \otimes_{\mathcal{A}} \eta b ) .$$ Since $ \{ \omega \otimes_{\mathcal{A}} \eta : \omega, \eta \in \mathcal{Z}( \mathcal{E} ) \} $ is right $ \mathcal{A} $-total in $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ ( part 5. of Lemma \ref{20thaugust20191} ), this proves that $ \sigma $ is left $ \mathcal{A} $-linear and completes the proof of part 1. In particular, this implies that $ \mathcal{F} = {\rm Ran} ( 1 - P_{\rm sym} ) $ is an $ \mathcal{A} - \mathcal{A} $-bimodule. Since $ \wedge = Q $ on $ \mathcal{F} $ and $ \wedge $ is bilinear, so is $ Q $, thus establishing part 2. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{lmma} \label{16thjuly20192} We have the following: a. Let $ \omega \in \mathcal{Z}( \mathcal{E} ) $. Then $ d \omega \in \mathcal{Z} ( {\Omega}^2( \mathcal{A} ) ) $. b. The element $ ( 1 - \sigma ) \nabla_0 ( \omega ) \in \mathcal{Z} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ) $ if $ \omega \in \mathcal{Z}( \mathcal{E} ) $. \end{lmma} \noindent {\bf Proof:} For part a., we start by observing that \begin{align*} \omega \wedge da + da \wedge \omega & = \wedge ( \omega \otimes_{\mathcal{A}} da + da \otimes_{\mathcal{A}} \omega )\\ & = \wedge ( 1 - P_{\rm sym} ) ( \omega \otimes_{\mathcal{A}} da + da \otimes_{\mathcal{A}} \omega ) = \tfrac{1}{2} \wedge ( 1 - \sigma ) ( \omega \otimes_{\mathcal{A}} da + da \otimes_{\mathcal{A}} \omega )\\ & = \tfrac{1}{2} \wedge ( \omega \otimes_{\mathcal{A}} da + da \otimes_{\mathcal{A}} \omega - da \otimes_{\mathcal{A}} \omega - \omega \otimes_{\mathcal{A}} da ) = 0. \end{align*} Hence, $$ 0 = d ( \omega . a ) - d ( a. \omega ) = ( d \omega . a - a. d \omega ) - ( \omega \wedge da + da \wedge \omega ) = d \omega . a - a. d \omega.
$$ This proves part a. For part b., since $ \nabla_0 $ is a torsionless connection, we have $ \wedge \nabla_0 ( \omega ) = - d \omega $. By applying the map $ ( 1 - \sigma ) Q^{-1}, $ we get $$ ( 1 - \sigma ) \nabla_0 ( \omega ) = - ( 1 - \sigma ) Q^{-1} ( d \omega ) .$$ Since $ Q^{-1} ( d \omega ) \in {\rm Ran} ( 1 - P_{\rm sym} ), $ we have $$ - ( 1 - \sigma ) Q^{-1} ( d \omega ) = - 2 ( 1 - P_{\rm sym} ) ( Q^{-1} d \omega ) = - 2 Q^{-1} ( d \omega ) .$$ Now, by part a., $ d \omega \in \mathcal{Z} ( {\Omega}^2( \mathcal{A} ) )$ since $\omega \in \mathcal{Z}( \mathcal{E} ) $; then, since the map $ Q $ is bilinear, for any $ a \in \mathcal{A} $, $$ a. Q^{-1} ( d \omega ) = Q^{-1} ( a. d \omega ) = Q^{-1} ( d \omega. a ) = Q^{-1} ( d \omega ).a , $$ and thus $Q^{-1} ( d \omega )$ belongs to $ \mathcal{Z} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ) $. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{rmrk} \label{22ndjuly20193} More generally, if $ \mathcal{F} $ and $ \mathcal{F}^\prime $ are centered $\mathcal{A}-\mathcal{A}$-bimodules and $ T: \mathcal{F} \rightarrow \mathcal{F}^\prime $ is an $ \mathcal{A} - \mathcal{A} $-bilinear map, then $ T ( \mathcal{Z}( \mathcal{F} ) ) \subseteq \mathcal{Z} ( \mathcal{F}^\prime ) $. Indeed, if $ \omega \in \mathcal{Z}( \mathcal{F} ) $ and $ a \in \mathcal{A}, $ a simple computation yields $ a. T ( \omega ) = T ( a. \omega ) = T ( \omega. a ) = T ( \omega ).a $ and thus, $ T ( \omega ) \in \mathcal{Z}( \mathcal{F}^\prime ) $. \end{rmrk} \subsection{The existence and uniqueness of Levi-Civita connection} \begin{ppsn} \label{vg2nondegenerate} Let $ g $ be a pseudo-Riemannian bilinear metric on $\mathcal{E}$ and define a map \begin{align*} & g^{(2)} :(\mathcal{E}\otimes_{\mathcal{A}} \mathcal{E})\otimes_{\mathcal{A}} (\mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}) \rightarrow \mathcal{A}, \\ & g^{(2)}((e\otimes_{\mathcal{A}} f)\otimes_{\mathcal{A}} (e'\otimes_{\mathcal{A}} f')) = g(e \otimes_{\mathcal{A}} g(f \otimes_{\mathcal{A}} e') f').
\end{align*} Then we have the following: \begin{itemize} \item[1.] the map from $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ to $ ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} )^* $ defined by $$ e\otimes_{\mathcal{A}} f \mapsto g^{(2)}((e \otimes_{\mathcal{A}} f) \otimes_{\mathcal{A}} - )$$ is $\mathcal{A}-\mathcal{A}$-bilinear and an isomorphism of right $ \mathcal{A} $-modules. \item[2.] For an element $ \xi \in \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}, $ the map $ g^{(2)} ( - \otimes_{\mathcal{A}} \xi ) $ is an element of $ {}_\mathcal{A} {\rm Hom} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}, \mathcal{A} ) $. Moreover, the map from $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ to $ {}_\mathcal{A} {\rm Hom} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}, \mathcal{A} ) $ defined by $$ \xi \mapsto g^{(2)} ( - \otimes_{\mathcal{A}} \xi ) $$ is an isomorphism of right $ \mathcal{A} $-modules. \end{itemize} \end{ppsn} \noindent {\bf Proof:} We only need to prove 2. since the assertion 1. was already proved in Proposition 3.7 of \cite{article1}. The assertion 2. follows exactly as in the proof of Proposition 3.7 of \cite{article1}, using the bimodule structure of $ {}_\mathcal{A} {\rm Hom} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}, \mathcal{A} ) $ as spelled out in \ref{22ndjuly20191}.
\mbox{}\hfill $\sqare$\vspace{1ex} \begin{rmrk} \label{11thaugust2019} From the proof of Proposition 3.7 of \cite{article1}, it is easy to see that for $ \omega, \eta \in \mathcal{Z}( \mathcal{E} ) $ and $ e, f \in \mathcal{E}, $ $$ ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \eta \otimes_{\mathcal{A}} - ) ) ( e \otimes_{\mathcal{A}} f ) = g^{(2)} ( ( \eta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} ( e \otimes_{\mathcal{A}} f ) ) .$$ \end{rmrk} \begin{lmma} \label{21stjuly20192} For $ \omega, \eta, \theta \in \mathcal{Z}( \mathcal{E} ), $ let us define $ \psi_{\omega, \theta} ( \eta ) $ by the expression on the right hand side of \eqref{17thjuly20191} in Theorem \ref{koszul21stjuly}. Thus we have \begin{align}\label{21stjuly20197} \psi_{\omega, \theta} ( \eta ) &= g ( \omega \otimes_{\mathcal{A}} dg ( \eta \otimes_{\mathcal{A}} \theta ) ) - g ( \eta \otimes_{\mathcal{A}} dg ( \theta \otimes_{\mathcal{A}} \omega ) ) + g ( \theta \otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \eta ) ) \nonumber \\ & \quad - ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \omega ) \nonumber \\ & \quad + ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \eta ) \nonumber \\ & \quad - ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \theta ). \end{align} Then, for $ \omega, \eta, \theta \in \mathcal{Z}( \mathcal{E} ), $ the element $ \psi_{\omega, \theta} ( \eta ) $ belongs to $ \mathcal{Z}( \mathcal{A} ) $.
\end{lmma} \noindent {\bf Proof:} Let us observe that, since $ \omega, \eta, \theta \in \mathcal{Z}( \mathcal{E} ), $ the one-forms $ d g ( \eta \otimes_{\mathcal{A}} \theta ), d g ( \theta \otimes_{\mathcal{A}} \omega ) $ and $ d g ( \omega \otimes_{\mathcal{A}} \eta ) $ belong to $ \mathcal{Z}( \mathcal{E} ) $; hence, by Lemma \ref{lemma2}, $ g ( \omega \otimes_{\mathcal{A}} dg ( \eta \otimes_{\mathcal{A}} \theta ) ), g ( \eta \otimes_{\mathcal{A}} dg ( \theta \otimes_{\mathcal{A}} \omega ) ) $ and $ g ( \theta \otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \eta ) ) $ are in $ \mathcal{Z}( \mathcal{A} ) $. \noindent Moreover, by part b. of Lemma \ref{16thjuly20192}, the element $ ( 1 - \sigma ) \nabla_0 ( \omega ) \in \mathcal{Z} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ) $. Therefore, for all $ a \in \mathcal{A}, $ \begin{align*} ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \omega ). a &= g^{(2)} ( \theta \otimes_{\mathcal{A}} \eta \otimes_{\mathcal{A}} - ) ( a. ( 1 - \sigma ) \nabla_0 ( \omega ) )\\ &= ( g^{(2)} ( \theta \otimes_{\mathcal{A}} \eta \otimes_{\mathcal{A}} - ) . a ) ( ( 1 - \sigma ) \nabla_0 ( \omega ) )\\ &= a. g^{(2)} ( \theta \otimes_{\mathcal{A}} \eta \otimes_{\mathcal{A}} - ) ( ( 1 - \sigma ) \nabla_0 ( \omega ) ) \end{align*} as the map $ g^{(2)} ( \theta \otimes_{\mathcal{A}} \eta \otimes_{\mathcal{A}} - ) $ is bilinear by Proposition \ref{vg2nondegenerate}. Hence, the element $$ ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \omega ) $$ belongs to $ \mathcal{Z}( \mathcal{A} ) $.
Similarly, the elements $$ ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \eta ) \qquad \mbox{and} \qquad ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \theta ) $$ also belong to $ \mathcal{Z}( \mathcal{A} ) $. This completes the proof that $ \psi_{\omega, \theta} ( \eta ) $ belongs to $ \mathcal{Z}( \mathcal{A} ) $. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{lmma} \label{20thjuly20191} Let $ \eta \in \mathcal{Z}( \mathcal{E} ) $. There exists a right $ \mathcal{Z}( \mathcal{A} ) $-linear map $$ \phi_\eta: \mathcal{Z}( \mathcal{E} ) \otimes_{\mathcal{Z}( \mathcal{A} )} \mathcal{Z}( \mathcal{E} ) \rightarrow \mathcal{Z}( \mathcal{A} ) \qquad \textup{such that} \qquad \phi_\eta ( \theta \otimes_{\mathcal{Z}( \mathcal{A} )} \omega ) = \psi_{\omega, \theta} ( \eta ) .$$ \end{lmma} \noindent {\bf Proof:} We need to check that $ \phi_\eta $ is well-defined, i.e., that for all $ a^\prime \in \mathcal{Z}( \mathcal{A} ), $ $$ \phi_\eta ( \theta a^\prime \otimes_{\mathbb{C}} \omega ) = \phi_\eta ( \theta \otimes_{\mathbb{C}} a^\prime \omega ).$$ Since $ ( 1 - \sigma ) ( \theta \otimes_{\mathcal{A}} d a^\prime ) = 0, $ we get \begin{align*} \phi_\eta ( \theta a^\prime \otimes_{\mathbb{C}} \omega ) &= \psi_{\omega, \theta a^\prime} ( \eta )\\ &= ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \theta a^\prime )\\ &= ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \theta ) a^\prime \\ &= \psi_{\omega,\theta} ( \eta ) a^\prime\\ &= a^\prime \psi_{\omega,\theta} ( \eta ) \qquad {\rm ( } {\rm Lemma} ~ \ref{21stjuly20192} {\rm )}\\ &= a^\prime ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \theta ).
\end{align*} Now, since $ \eta \in \mathcal{Z}( \mathcal{E} ) $ and $ a^\prime \in \mathcal{Z}( \mathcal{A} ), $ $$ a^\prime. g ( \eta \otimes_{\mathcal{A}} - ) = g ( \eta \otimes_{\mathcal{A}} - ). a^\prime $$ and so \begin{align*} \phi_\eta ( \theta a^\prime \otimes_{\mathbb{C}} \omega ) &= ( g ( \eta \otimes_{\mathcal{A}} - ) a^\prime \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \theta )\\ &= ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} a^\prime g ( \omega \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \theta )\\ &= ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( a^\prime \omega \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \theta ) ~ {\rm (} ~ {\rm as} ~ g ~ {\rm is} ~ {\rm bilinear} ~ {\rm )} \\ &= \psi_{a^\prime \omega, \theta} ( \eta )\\ &= \phi_\eta ( \theta \otimes_{\mathbb{C}} a^\prime \omega ). \end{align*} This proves the lemma. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{crlre} \label{21stjuly20191} Fix $ \eta \in \mathcal{Z}( \mathcal{E} ) $; then there is a right $ \mathcal{A} $-linear map $ \widetilde{\phi}_\eta: \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} \rightarrow \mathcal{A} $ defined, for all $ a \in \mathcal{A}, \theta, \omega \in \mathcal{Z}( \mathcal{E} ) $, by $$ \widetilde{\phi}_\eta ( \theta \otimes_{\mathcal{Z}( \mathcal{A} )} \omega \otimes_{\mathcal{Z}( \mathcal{A} )} a ) = \phi_\eta ( \theta \otimes_{\mathcal{Z}( \mathcal{A} )} \omega ) a. $$ \noindent Moreover, $ \widetilde{\phi}_\eta $ is a left $ \mathcal{A} $-linear map from $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ to $ \mathcal{A}, $ i.e., $ \widetilde{\phi}_\eta \in {}_\mathcal{A} {\rm Hom} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}, \mathcal{A} ) $. Hence, by part 2.
of Proposition \ref{vg2nondegenerate}, there exists a unique element $ \nabla ( \eta ) \in \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ so that \begin{equation}\label{intlambda} \tfrac{1}{2} \widetilde{\phi}_\eta ( \xi ) = g^{(2)} ( \xi \otimes_{\mathcal{A}} \nabla ( \eta ) ) \end{equation} for all $ \xi \in \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $. \end{crlre} \noindent {\bf Proof:} We only prove the statement that $ \widetilde{\phi}_\eta \in {}_\mathcal{A} {\rm Hom} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}, \mathcal{A} ) $. For this it is enough to prove that for all $ \omega, \theta \in \mathcal{Z}( \mathcal{E} ) $ and $ a, b \in \mathcal{A}, $ we have $$ \widetilde{\phi}_\eta ( a ( \omega \otimes_{\mathcal{A}} \theta b ) ) = a. \widetilde{\phi}_\eta ( \omega \otimes_{\mathcal{A}} \theta b ) .$$ As $ \widetilde{\phi}_\eta $ is right $ \mathcal{A} $-linear by construction, it follows that \begin{align*} \widetilde{\phi}_\eta ( a ( \omega \otimes_{\mathcal{A}} \theta b ) ) &= \widetilde{\phi}_\eta ( a ( \omega \otimes_{\mathcal{A}} \theta ) b ) = \widetilde{\phi}_\eta ( a ( \omega \otimes_{\mathcal{A}} \theta ) ) b = \widetilde{\phi}_\eta ( \omega \otimes_{\mathcal{A}} \theta a ) b \\ &= \widetilde{\phi}_\eta ( \omega \otimes_{\mathcal{A}} \theta ) a b = a. \widetilde{\phi}_\eta ( \omega \otimes_{\mathcal{A}} \theta ) b = a. \widetilde{\phi}_\eta ( \omega \otimes_{\mathcal{A}} \theta b ). 
\end{align*} In the above we have used that $ \widetilde{\phi}_\eta ( \omega \otimes_{\mathcal{A}} \theta ) \in \mathcal{Z}( \mathcal{A} ) $ ( Lemma \ref{21stjuly20192} ) and $ \omega \otimes_{\mathcal{A}} \theta ~ \in ~ \mathcal{Z} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ) .$ \mbox{}\hfill $\sqare$\vspace{1ex} \begin{lmma} \label{17thjuly20193} For $ \omega, \theta, \eta \in \mathcal{Z}( \mathcal{E} ) $ and $ a^\prime \in \mathcal{Z}( \mathcal{A} ), $ one has \begin{equation} \psi_{\omega, \theta} ( \eta a^\prime ) = \psi_{\omega, \theta} ( \eta ) a^\prime + 2 g ( \omega \otimes_{\mathcal{A}} \eta ) g ( \theta \otimes_{\mathcal{A}} d a^\prime ). \end{equation} \end{lmma} \noindent {\bf Proof:} The proof of this Lemma follows by a computation using the facts ( from Lemma \ref{lemma2} ) that $ da^\prime \in \mathcal{Z}( \mathcal{E} ) $ and $ g ( \alpha \otimes_{\mathcal{A}} \beta ) \in \mathcal{Z}( \mathcal{A} ) $ for all $ \alpha, \beta \in \mathcal{Z}( \mathcal{E} ) $. Moreover, we also use the statement proved in Lemma \ref{16thjuly20192} that $ ( 1 - \sigma ) \nabla_0 ( e ) \in \mathcal{Z} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ) $ for all $ e \in \mathcal{Z}( \mathcal{E} ).$ We compute \begin{align*} &\psi_{\omega, \theta} ( \eta a^\prime )\\ &= g ( \omega \otimes_{\mathcal{A}} d g ( \eta a^\prime \otimes_{\mathcal{A}} \theta ) ) - g ( \eta a^\prime \otimes_{\mathcal{A}} d g ( \theta \otimes_{\mathcal{A}} \omega ) ) + g ( \theta \otimes_{\mathcal{A}} d g ( \omega \otimes_{\mathcal{A}} \eta ) ) a^\prime \\ & \quad + g ( \theta \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}} \eta ) d a^\prime ) - ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( ( 1 - \sigma ) \nabla_0 ( \omega ) ) a^\prime \\ & \quad+ ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( ( 1 - \sigma ) \nabla_0 ( \eta ) ) a^\prime\\ & \quad + ( g ( \omega \otimes_{\mathcal{A}} - )
\otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( ( 1 - \sigma ) ( \eta \otimes_{\mathcal{A}} d a^\prime ) ) - ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}} - ) ) ( ( 1 - \sigma ) \nabla_0 ( \theta ) ) a^\prime\\ &= g ( \omega \otimes_{\mathcal{A}} dg ( \eta \otimes_{\mathcal{A}} \theta ) ) a^\prime + g ( \omega \otimes_{\mathcal{A}} g ( \eta \otimes_{\mathcal{A}} \theta ) d a^\prime ) - g ( \eta \otimes_{\mathcal{A}} d g ( \theta \otimes_{\mathcal{A}} \omega ) ) a^\prime \\ & \quad + g ( \theta \otimes_{\mathcal{A}} d g ( \omega \otimes_{\mathcal{A}} \eta ) ) a^\prime - ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( ( 1 - \sigma ) \nabla_0 ( \omega ) ) a^\prime \\ &\quad + ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( ( 1 - \sigma ) \nabla_0 ( \eta ) ) a^\prime - ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}} - ) ) ( ( 1 - \sigma ) \nabla_0 ( \theta ) ) a^\prime \\ &\quad + g ( \omega \otimes_{\mathcal{A}} \eta ) g ( \theta \otimes_{\mathcal{A}} d a^\prime ) - g ( \omega \otimes_{\mathcal{A}} d a^\prime ) g ( \theta \otimes_{\mathcal{A}} \eta ) + g ( \omega \otimes_{\mathcal{A}} \eta ) g ( \theta \otimes_{\mathcal{A}} d a^\prime )\\ & = \psi_{\omega, \theta} ( \eta ) a^\prime + 2 g ( \omega \otimes_{\mathcal{A}} \eta ) g ( \theta \otimes_{\mathcal{A}} d a^\prime ). \end{align*} This proves the lemma. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{lmma} \label{20thjuly20193} For $ e \in \mathcal{Z}( \mathcal{E} ), $ let $ \nabla ( e ) $ be the element in $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ as defined in Corollary \ref{21stjuly20191}. Then for all $ a \in \mathcal{Z}( \mathcal{A} ), $ we have \begin{equation} \label{21stjuly20193} \nabla ( e a ) = \nabla ( e ) a + e \otimes_{\mathcal{A}} da.
\end{equation} \end{lmma} \noindent {\bf Proof:} It suffices to prove that for all $ \omega, \theta \in \mathcal{Z}( \mathcal{E} ), $ we have \begin{equation} \label{20thjuly20192} g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} ( \nabla ( e a ) - \nabla ( e ) a - e \otimes_{\mathcal{A}} da ) ) = 0. \end{equation} Indeed, since $ \{ \omega \otimes_{\mathcal{A}} \theta: \omega, \theta \in \mathcal{Z}( \mathcal{E} ) \} $ is left $ \mathcal{A} $-total in $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ and $ g^{(2)} $ is left $ \mathcal{A} $-linear ( Proposition \ref{vg2nondegenerate} ), formula \eqref{20thjuly20192} will imply $$ g^{(2)} ( \xi \otimes_{\mathcal{A}} ( \nabla ( e a ) - \nabla ( e ) a - e \otimes_{\mathcal{A}} da ) ) = 0 $$ for all $ \xi \in \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ and hence Proposition \ref{vg2nondegenerate} will imply that $$ \nabla ( e a ) - \nabla ( e ) a - e \otimes_{\mathcal{A}} da = 0.$$ For $ a \in \mathcal{Z}( \mathcal{A} ), $ we have $$ g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} \nabla ( e a ) ) = \tfrac{1}{2} \psi_{\omega, \theta} ( e a ). $$ Therefore, \begin{align*} g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} ( \nabla ( e a ) & - \nabla ( e ) a - e \otimes_{\mathcal{A}} da ) ) \\ &= \tfrac{1}{2} \psi_{\omega, \theta} ( e a ) - \tfrac{1}{2} \psi_{\omega, \theta} ( e ) a - g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} ( e \otimes_{\mathcal{A}} da ) )\\ &= \tfrac{1}{2} \psi_{\omega, \theta} ( e ) a + g ( \omega \otimes_{\mathcal{A}} e ) g ( \theta \otimes_{\mathcal{A}} da ) \\ & \quad - \tfrac{1}{2} \psi_{\omega, \theta} ( e ) a - g ( \theta \otimes_{\mathcal{A}} da ) g ( \omega \otimes_{\mathcal{A}} e ) \\ &= 0 \end{align*} where we have used that $ g ( \omega \otimes_{\mathcal{A}} e ) \in \mathcal{Z}( \mathcal{A} ) $ ( Lemma \ref{lemma2} ) and Lemma \ref{17thjuly20193}.
\mbox{}\hfill $\sqare$\vspace{1ex} \begin{ppsn} \label{22ndjuly20194} Given the map $\nabla$ defined implicitly by formula \eqref{intlambda} and the connection $\nabla_0$ in Theorem~\ref{torsionless}, the map $$ L := \nabla - \nabla_0: \mathcal{Z}( \mathcal{E} ) \rightarrow \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $$ is right $ \mathcal{Z}( \mathcal{A} ) $-linear and so $ L $ extends to a right $ \mathcal{A} $-linear map $$ \widetilde{L}: \mathcal{E} = \mathcal{Z}( \mathcal{E} ) \otimes_{\mathcal{Z}( \mathcal{A} )} \mathcal{A} \rightarrow \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E}; ~ \omega \otimes_{\mathcal{Z}( \mathcal{A} )} a \mapsto L ( \omega ) a .$$ \end{ppsn} \noindent {\bf Proof:} For $ \omega \in \mathcal{Z}( \mathcal{E} ) $ and $ a \in \mathcal{Z}( \mathcal{A} ), $ we have $$ L ( \omega a ) = ( \nabla - \nabla_0 ) ( \omega a ) = \nabla ( \omega ) a + \omega \otimes_{\mathcal{A}} da - \nabla_0 ( \omega ) a - \omega \otimes_{\mathcal{A}} da = L ( \omega ) a , $$ since $ \nabla_0 $ is a connection on $ \mathcal{E} $ and we have used Lemma \ref{20thjuly20193}. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{crlre} \label{21stjuly20194} Consider the map $ \nabla: \mathcal{E} \rightarrow \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ given by the formula $$ \nabla = \nabla_0 + \widetilde{L} $$ where $ \widetilde{L} $ is the map in Proposition \ref{22ndjuly20194}. Then $ \nabla $ is a connection on $\mathcal{E}$ which extends the map $ \nabla $ given in Corollary \ref{21stjuly20191}. \end{crlre} \noindent {\bf Proof:} The map $ \nabla $ is a connection, being the sum of the connection $ \nabla_0 $ and the right $ \mathcal{A} $-linear map $ \widetilde{L} $. \mbox{}\hfill $\sqare$\vspace{1ex} \medskip We are finally ready for the \noindent {\bf Proof of Theorem \ref{existenceuniqueness}} \\ The uniqueness follows from Theorem \ref{koszul21stjuly} and the definition of $\nabla$ in Corollary \ref{21stjuly20194}. So we are left with proving the existence.
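Before doing so, let us record, for ease of reference, the identity that combines \eqref{intlambda} with Lemma \ref{20thjuly20191} and Corollary \ref{21stjuly20191} (this is merely a restatement of what has already been proved): for all $ \omega, \theta, \eta \in \mathcal{Z}( \mathcal{E} ), $
$$ g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} \nabla ( \eta ) ) = \tfrac{1}{2} \, \psi_{\omega, \theta} ( \eta ). $$
This identity will be used repeatedly in the computations below.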
We start by proving that the connection $ \nabla $ defined in Corollary \ref{21stjuly20194} is torsionless. Let $ \widetilde{L} = \nabla - \nabla_0 $ as in Corollary \ref{21stjuly20194}. Then it suffices to prove that $ \wedge \widetilde{L} = 0 $ since this implies $$ \wedge \nabla = \wedge \nabla_0 = - d $$ as $ \nabla_0 $ is torsionless. By right $ \mathcal{A} $-linearity of $ \widetilde{L}, $ it suffices to check that for all $ \eta \in \mathcal{Z}( \mathcal{E} ), $ $$ \wedge \widetilde{L} ( \eta ) = 0 .$$ However, $$ \wedge \widetilde{L} ( \eta ) = \wedge ( \eta_{(0)} \otimes_{\mathcal{A}} \eta_{(1)} - {}_{(0)} \eta \otimes_{\mathcal{A}} {}_{(1)} \eta ) $$ and so we need to prove that \begin{align*} \sigma ( \eta_{(0)} \otimes_{\mathcal{A}} \eta_{(1)} -{}_{(0)} \eta \otimes_{\mathcal{A}} {}_{(1)} \eta ) &= \eta_{(0)} \otimes_{\mathcal{A}} \eta_{(1)} -{}_{(0)} \eta \otimes_{\mathcal{A}} {}_{(1)} \eta, \\ \mbox{that is,} \qquad \eta_{(1)} \otimes_{\mathcal{A}} \eta_{(0)} - {}_{(1)} \eta \otimes_{\mathcal{A}} {}_{(0)} \eta &= \eta_{(0)} \otimes_{\mathcal{A}} \eta_{(1)} -{}_{(0)} \eta \otimes_{\mathcal{A}} {}_{(1)} \eta. \end{align*} By bilinearity of $ g^{(2)}, $ the fact that $ \{ \theta \otimes_{\mathcal{A}} \omega ; \theta, \omega \in \mathcal{Z}( \mathcal{E} ) \} $ is left $ \mathcal{A}$-total in $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ and Proposition \ref{vg2nondegenerate}, it suffices to prove that for all $ \omega, \theta \in \mathcal{Z}( \mathcal{E} ), $ the following equation holds: $$ g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} ( \eta_{(1)} \otimes_{\mathcal{A}} \eta_{(0)} - {}_{(1)} \eta \otimes_{\mathcal{A}} {}_{(0)} \eta ) ) = g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} ( \eta_{(0)} \otimes_{\mathcal{A}} \eta_{(1)} -{}_{(0)} \eta \otimes_{\mathcal{A}} {}_{(1)} \eta ) ).
$$ By a simple computation using the facts that $ g ( \omega \otimes_{\mathcal{A}} \eta_{(1)} ), g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} ) \in \mathcal{Z}( \mathcal{A} ), $ the previous expression is seen to be equivalent to: \begin{multline} g ( \theta \otimes_{\mathcal{A}} \eta_{(0)} ) g ( \omega \otimes_{\mathcal{A}} \eta_{(1)} ) - g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} ) \\ = g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \eta ) g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \eta ) - g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \eta ) g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \eta ). \end{multline} Now, using the expressions $ g ( \theta \otimes_{\mathcal{A}} \eta_{(0)} ) g ( \omega \otimes_{\mathcal{A}} \eta_{(1)} ) = \tfrac{1}{2} \psi_{\theta, \omega} ( \eta ) $ and $ g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} ) = \tfrac{1}{2} \psi_{\omega, \theta} ( \eta ) $ ( see \eqref{21stjuly20197} ) and using the facts that $ g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \eta ), g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \eta ) \in \mathcal{Z}( \mathcal{A} ) $ and $ g \sigma = g, $ the left hand side of the previous expression reduces to the right hand side by a straightforward simplification. Next we prove that $ \nabla $ is compatible with $g.$ We claim that for all $ \omega, \eta \in \mathcal{Z}( \mathcal{E} ), $ we have $$ g ( \omega_{(0)} \otimes_{\mathcal{A}} \eta ) \omega_{(1)} + g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) \eta_{(1)} = d ( g ( \omega \otimes_{\mathcal{A}} \eta ) ).
$$ By virtue of Remark \ref{21stjuly201910}, this is equivalent to having for all $ \theta \in \mathcal{Z}( \mathcal{E} ), $ $$ g ( \eta \otimes_{\mathcal{A}} \omega_{(0)} ) g ( \theta \otimes_{\mathcal{A}} \omega_{(1)} ) + g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} ) = g ( \theta\otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \eta ) ) $$ having used the fact that $ g ( \theta \otimes_{\mathcal{A}} - ) $ is left $ \mathcal{A} $-linear. Using the definition of $ \nabla, $ we have \begin{align*} 2 g ( \eta \otimes_{\mathcal{A}} \omega_{(0)} ) & g ( \theta \otimes_{\mathcal{A}} \omega_{(1)} ) + 2 g ( \omega \otimes_{\mathcal{A}} \eta_{(0)} ) g ( \theta \otimes_{\mathcal{A}} \eta_{(1)} )\\ &= g ( \eta \otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \theta ) ) - g ( \omega \otimes_{\mathcal{A}} dg ( \theta \otimes_{\mathcal{A}} \eta ) ) + g ( \theta \otimes_{\mathcal{A}} dg ( \eta \otimes_{\mathcal{A}} \omega ) )\\ & \quad - g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \eta ) g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \eta ) + g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \eta ) g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \eta ) \\ & \quad + g ( \eta \otimes_{\mathcal{A}} {}_{(0)} \omega ) g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \omega ) - g ( \eta \otimes_{\mathcal{A}} {}_{(1)} \omega ) g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \omega ) \\ & \quad - g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \theta ) g ( \eta \otimes_{\mathcal{A}} {}_{(1)} \theta ) + g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \theta ) g ( \eta \otimes_{\mathcal{A}} {}_{(0)} \theta )\\ & \quad + g ( \omega \otimes_{\mathcal{A}} dg ( \eta \otimes_{\mathcal{A}} \theta ) ) - g ( \eta \otimes_{\mathcal{A}} dg ( \theta \otimes_{\mathcal{A}} \omega ) ) + g ( \theta \otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \eta ) ) \\ & \quad - g ( \eta \otimes_{\mathcal{A}} {}_{(0)} \omega ) g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \omega ) + g ( \eta
\otimes_{\mathcal{A}} {}_{(1)} \omega ) g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \omega ) \\ & \quad + g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \eta ) g ( \theta \otimes_{\mathcal{A}} {}_{(1)} \eta ) - g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \eta ) g ( \theta \otimes_{\mathcal{A}} {}_{(0)} \eta ) \\ & \quad - g ( \eta \otimes_{\mathcal{A}} {}_{(0)} \theta ) g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \theta ) + g ( \eta \otimes_{\mathcal{A}} {}_{(1)} \theta ) g ( \omega \otimes_{\mathcal{A}} {}_{(0)} \theta )\\ &= 2 g ( \theta \otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \eta ) ), \end{align*} using $ g ( \omega \otimes_{\mathcal{A}} {}_{(1)} \theta ), g ( \eta \otimes_{\mathcal{A}} {}_{(1)} \theta ) \in \mathcal{Z}( \mathcal{A} ) $ and $ g ( \alpha \otimes_{\mathcal{A}} \beta ) = g ( \beta \otimes_{\mathcal{A}} \alpha ) $ for all $ \alpha, \beta \in \mathcal{Z}( \mathcal{E} ) $. Therefore, $ \nabla $ is compatible with $ g $ on $\mathcal{Z}( \mathcal{E} )$ ( as in Definition \ref{compatibilitycenter} ). This completes the proof. \mbox{}\hfill $\sqare$\vspace{1ex} \medskip We finish this section by comparing this result with that in \cite{article1}. For this it will be useful to adopt the notation: \begin{align*} & \Pi_g^0(\nabla):\mathcal{Z}( \mathcal{E} )\otimes_{\mathbb{C}}\mathcal{Z}( \mathcal{E} ) \rightarrow \mathcal{E}, \\ & \Pi_g^0(\nabla)(\omega \otimes_{\mathbb{C}} \eta) = (g \otimes_{\mathcal{A}} \mbox{id} )\sigma_{23}(\nabla(\omega)\otimes_{\mathcal{A}} \eta ) + ( g \otimes_{\mathcal{A}} \mbox{id} ) ( \omega \otimes_{\mathcal{A}} \nabla ( \eta ) ).
\end{align*} Thus, Definition \ref{compatibilitycenter} can be rephrased by saying that $ \nabla $ is compatible with $ g $ on $ \mathcal{Z}( \mathcal{E} ) $ if $$ \Pi_g^0 ( \nabla ) ( \omega \otimes_{\mathcal{Z}( \mathcal{A} )} \eta ) = d ( g ( \omega \otimes_{\mathcal{A}} \eta ) ) ~ \forall ~ \omega, \eta \in \mathcal{Z}( \mathcal{E} ) .$$ In Subsection 4.1 of \cite{article1}, it was shown that the assumption $ \mathcal{E} = \mathcal{Z}( \mathcal{E} ) \otimes_{\mathcal{Z}( \mathcal{A} )} \mathcal{A} $ allows one to define a canonical extension $ \Pi_g ( \nabla ) : \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} \rightarrow \mathcal{E} $ of the map $ \Pi^0_g ( \nabla ) $. More precisely, for all $ \omega, \eta \in \mathcal{Z}( \mathcal{E} ) $ and $ a \in \mathcal{A}, $ one has \begin{equation} \label{22ndjuly20195} \Pi_g ( \nabla ) ( \omega \otimes_{\mathcal{A}} \eta a ) = \Pi^0_g ( \nabla ) ( \omega \otimes_{\mathcal{Z}( \mathcal{A} )} \eta ) a + g ( \omega \otimes_{\mathcal{A}} \eta ) da. \end{equation} We say that a connection $ \nabla $ is compatible with $ g $ on the whole of $ \mathcal{E} $ if for all $e, f$ in $\mathcal{E},$ \begin{equation} \label{10thaugust20191} \Pi_g ( \nabla ) ( e \otimes_{\mathcal{A}} f ) = d ( g ( e \otimes_{\mathcal{A}} f ) ). \end{equation} It was also shown that for any connection $ \nabla_1 $ on $ \mathcal{E}, $ the map from $ \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ to $ \mathcal{E},$ defined by $$ ~ e \otimes_{\mathcal{A}} f \mapsto \Pi_g ( \nabla_1 ) ( e \otimes_{\mathcal{A}} f ) - d ( g ( e \otimes_{\mathcal{A}} f ) ) $$ is right $ \mathcal{A} $-linear. We have the following result which recovers the main result of \cite{article1}: \begin{crlre} Suppose $ \mathcal{E}:= {\Omega}^1 ( \mathcal{A} ) $ satisfies the assumptions of Theorem \ref{existenceuniqueness} and $ g $ is a pseudo-Riemannian bilinear metric on $ \mathcal{E} $.
Then there exists a unique connection on $ \mathcal{E} $ which is torsionless and compatible with $ g $ on the whole of $\mathcal{E}.$ \end{crlre} \noindent {\bf Proof:} From Theorem \ref{existenceuniqueness}, we know that there exists a unique connection $ \nabla $ which is torsionless and compatible with $ g$ on $ \mathcal{Z}( \mathcal{E} ).$ Thus, for all $ \omega, \eta \in \mathcal{Z}( \mathcal{E} ), $ $$ \Pi^0_g ( \nabla ) ( \omega \otimes_{\mathcal{Z}( \mathcal{A} )} \eta ) = d ( g ( \omega \otimes_{\mathcal{A}} \eta ) ) .$$ Therefore, \eqref{22ndjuly20195} implies that for all $ a \in \mathcal{A}, $ $$ \Pi_g ( \nabla ) ( \omega \otimes_{\mathcal{A}} \eta a ) = d ( g ( \omega \otimes_{\mathcal{A}} \eta ) ) a + g ( \omega \otimes_{\mathcal{A}} \eta ) da = d ( g ( \omega \otimes_{\mathcal{A}} \eta a ) ), $$ i.e., $ \nabla $ is compatible with $ g $ on the whole of $ \mathcal{E} $. Uniqueness is clear from Theorem \ref{existenceuniqueness}. \mbox{}\hfill $\sqare$\vspace{1ex} \section{Levi-Civita connections as bimodule connections} \label{section7} In this section, we make contact with bimodule connections. A considerable amount of literature on Levi-Civita connections in the context of noncommutative geometry has been devoted to bimodule connections. We refer to the book \cite{beggsmajidbook} for the details. We show that in our setup ( the assumptions of Theorem \ref{existenceuniqueness} ), the Levi-Civita connection is a bimodule connection in a very natural way. This section is a genuine application of the Koszul formula of Theorem \ref{koszul21stjuly}. Let us recall the definition of bimodule connections. \begin{dfn}\label{lLr} Suppose $ \mathcal{E} = {\Omega}^1 ( \mathcal{A} ) $ for a differential calculus and let $ \sigma^\prime: \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} \rightarrow \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} $ be a bimodule map.
A right connection $\nabla_1$ on $ \mathcal{E} $ is said to be a bimodule connection for the pair $ ( \mathcal{E}, \sigma^\prime ) $ if, in addition to the right Leibniz rule as in Definition \ref{rLr}, there is also a $\sigma^\prime$-left Leibniz rule, that is, for all $ a \in \mathcal{A} $ and for all $ e \in \mathcal{E}, $ it holds that $$ \nabla_1 ( a e ) = a \nabla_1 ( e ) + \sigma^\prime ( da \otimes_{\mathcal{A}} e ) .$$ \end{dfn} Throughout this section, we will work under the assumptions of Theorem \ref{existenceuniqueness} and so we have a canonical choice of $ \sigma^\prime = \sigma $ as defined in Definition \ref{16thjuly201923}. We will show that the Levi-Civita connection of Theorem \ref{existenceuniqueness} is a bimodule connection for the pair $ ( \mathcal{E}, \sigma ) $. We start by proving a necessary and sufficient condition for a connection to be a bimodule connection for the pair $ ( \mathcal{E}, \sigma ) $. \begin{ppsn} \label{16thjuly20193} Suppose $ \mathcal{E} = {\Omega}^1 ( \mathcal{A} ) $ as in Theorem \ref{existenceuniqueness}. If $ \nabla_1 $ is a connection on $ \mathcal{E}, $ then it is a bimodule connection for $ ( \mathcal{E}, \sigma ) $ if and only if $ \nabla_1 ( \mathcal{Z}( \mathcal{E} ) ) \subseteq \mathcal{Z} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ).$ \end{ppsn} \noindent {\bf Proof:} Suppose $ \nabla_1 $ is a bimodule connection for the pair $ ( \mathcal{E}, \sigma ) $. Then for all $ \omega \in \mathcal{Z}( \mathcal{E} ) $ and $ a \in \mathcal{A}, $ we get \begin{align*} \nabla_1 ( \omega ) a + \omega \otimes_{\mathcal{A}} da &= \nabla_1 ( \omega a ) = \nabla_1 ( a \omega )\\ &= a \nabla_1 ( \omega ) + \sigma ( da \otimes_{\mathcal{A}} \omega ) = a \nabla_1 ( \omega ) + \omega \otimes_{\mathcal{A}} da, \end{align*} since $ \omega \in \mathcal{Z}( \mathcal{E} ) $. 
Thus, for all $ \omega \in \mathcal{Z}( \mathcal{E} ) $ and $ a \in \mathcal{A}, $ we have $$ \nabla_1 ( \omega ) a = a \nabla_1 ( \omega ), $$ that is, $ \nabla_1 ( \mathcal{Z}( \mathcal{E} ) ) \subseteq \mathcal{Z} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ) $. Conversely, suppose $ \nabla_1 $ is a connection such that $ \nabla_1 ( \mathcal{Z}( \mathcal{E} ) ) \subseteq \mathcal{Z} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ) $. Then for all $ \omega \in \mathcal{Z}( \mathcal{E} ) $ and for all $ a \in \mathcal{A}, $ \begin{equation} \label{12thaugust20191} \nabla_1 ( a \omega ) = \nabla_1 ( \omega a ) = \nabla_1 ( \omega ) a + \omega \otimes_{\mathcal{A}} da = a \nabla_1 ( \omega ) + \sigma ( da \otimes_{\mathcal{A}} \omega ).\end{equation} Now let $ e \in \mathcal{E} $ and $ a \in \mathcal{A} $. Since $ \mathcal{E} $ is centered, we can write $ e = \sum\nolimits_j f_j b_j $ for some $ f_j \in \mathcal{Z}( \mathcal{E} ) $ and $ b_j \in \mathcal{A} $. Then \begin{align*} \nabla_1 ( a e ) &= \sum\nolimits_j \nabla_1 ( a f_j b_j ) = \sum\nolimits_j [ \nabla_1 ( a f_j ) b_j + a f_j \otimes_{\mathcal{A}} d b_j ]\\ &= \sum\nolimits_j [ a \nabla_1 ( f_j ) b_j + \sigma ( da \otimes_{\mathcal{A}} f_j ) b_j + a f_j \otimes_{\mathcal{A}} d b_j ] \qquad {\rm (} ~ {\rm by} ~ \eqref{12thaugust20191} ~ {\rm )}\\ &= \sum\nolimits_j [ a \nabla_1 ( f_j b_j ) + \sigma ( da \otimes_{\mathcal{A}} f_j ) b_j ]\\ &= a \nabla_1 ( e ) + \sigma ( da \otimes_{\mathcal{A}} e ). \end{align*} Therefore, $ \nabla_1 $ is a bimodule connection. \mbox{}\hfill $\sqare$\vspace{1ex} Now we use the Koszul formula to prove the main result of this subsection. \begin{thm} \label{16thjuly20194} Suppose $ \mathcal{E} = {\Omega}^1 ( \mathcal{A} ) $ satisfies the assumptions of Theorem \ref{existenceuniqueness} and $ g $ is a pseudo-Riemannian bilinear metric on $\mathcal{E}$. 
Then the Levi-Civita connection for $ ( \mathcal{E}, g ) $ obtained in Theorem \ref{existenceuniqueness} is a bimodule connection for the pair $ ( \mathcal{E}, \sigma ) $. \end{thm} \noindent {\bf Proof:} The proof follows from Proposition \ref{16thjuly20193} and the Koszul formula as expressed in equation \eqref{16thjuly2019koszul}. We claim that it is enough to show that for all $ \omega, \eta, \theta \in \mathcal{Z}( \mathcal{E} ), $ \begin{equation} \label{23rdjuly20191} 2 ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( \nabla ( \eta ) ) \in \mathcal{Z}( \mathcal{A} ). \end{equation} Indeed, by virtue of Remark \ref{11thaugust2019}, we have \begin{align*} ( g ( \omega \otimes_{\mathcal{A}} - ) & \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( \nabla ( \eta ). a - a. \nabla ( \eta ) ) \\ & = g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} ( \nabla ( \eta ). a - a. \nabla ( \eta ) ) )\\ &= g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} \nabla ( \eta ) ). a - g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ). a \otimes_{\mathcal{A}} \nabla ( \eta) )\\ &= g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} \nabla ( \eta ) ). a - g^{(2)} ( a. ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} \nabla ( \eta) )\\ &= g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} \nabla ( \eta ) ). a - a. g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \omega ) \otimes_{\mathcal{A}} \nabla ( \eta) )\\ &= ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( \nabla ( \eta ) ). a - a. ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( \nabla ( \eta ) ), \end{align*} which is equal to zero when \eqref{23rdjuly20191} holds. Since this is true for all $ \theta, \omega \in \mathcal{Z}( \mathcal{E} ), $ a combination of part 4. 
of Lemma \ref{20thaugust20191}, Remark \ref{11thaugust2019} and the left $\mathcal{A}$-linearity of $ g^{(2)} $ imply that $$ g^{(2)} ( \xi \otimes_{\mathcal{A}} ( \nabla ( \eta ). a - a. \nabla ( \eta ) ) ) = 0 ~ \forall \xi \in \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} .$$ This proves our claim by Proposition \ref{vg2nondegenerate}. Now, by Lemma \ref{lemma2}, we have that the elements $ g ( \omega \otimes_{\mathcal{A}} dg ( \eta \otimes_{\mathcal{A}} \theta ) )$, $g ( \eta \otimes_{\mathcal{A}} dg ( \theta \otimes_{\mathcal{A}} \omega ) )$ and $g ( \theta \otimes_{\mathcal{A}} dg ( \omega \otimes_{\mathcal{A}} \eta ) )$ are all in $\mathcal{Z}( \mathcal{A} ) $. Next, we observe that for all $ a $ in $ \mathcal{A}, $ \begin{align*} ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \omega ). a &= g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \eta ) \otimes_{\mathcal{A}} ( ( 1 - \sigma ) \nabla_0 ( \omega ). a ) )\\ &= g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \eta ) \otimes_{\mathcal{A}} ( a. ( 1 - \sigma ) \nabla_0 ( \omega ) ) )\\ &= a. g^{(2)} ( ( \theta \otimes_{\mathcal{A}} \eta ) \otimes_{\mathcal{A}} ( ( 1 - \sigma ) \nabla_0 ( \omega ) ) )\\ &= a. ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \omega ). \end{align*} Here we have used the fact that $ ( 1 - \sigma ) \nabla_0 ( \omega ) \in \mathcal{Z} ( \mathcal{E} \otimes_{\mathcal{A}} \mathcal{E} ) $ by Lemma \ref{16thjuly20192}. This proves that $ ( g ( \eta\otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \omega ) \in \mathcal{Z}( \mathcal{A} ) $.
Similarly, one finds that also $ ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \eta ) $ and $ ( g ( \eta \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \omega \otimes_{\mathcal{A}} - ) ) ( 1 - \sigma ) \nabla_0 ( \theta )$ belong to $\mathcal{Z}( \mathcal{A} )$. Hence, by the Koszul formula given in \eqref{16thjuly2019koszul}, $ ( g ( \omega \otimes_{\mathcal{A}} - ) \otimes_{\mathcal{A}} g ( \theta \otimes_{\mathcal{A}} - ) ) \nabla ( \eta ) \in \mathcal{Z}( \mathcal{A} ), $ which completes the proof of the theorem. \mbox{}\hfill $\sqare$\vspace{1ex} \section{The example of the fuzzy sphere} \label{section8} This section concerns the example of a spectral triple on the fuzzy sphere. Our goal is to prove the existence and uniqueness of Levi-Civita connections ( on the corresponding module of one-forms ) for any bilinear pseudo-Riemannian metric. Our spectral triple is a truncated version of the spectral triple constructed in \cite{frolich} for a fuzzy $3$-sphere. It turns out that the module of one-forms is free of rank $3.$ We compute the connection forms ( and the Christoffel symbols ) of the Levi-Civita connection for the canonical pseudo-Riemannian metric coming from the spectral triple ( see \cite{frolich} ). The computations in this section are similar to those in Section 3 of \cite{frolich} and those in Section 5 of \cite{article1}. However, we provide all the details for the sake of completeness. Let us set up some notation. Firstly, the Lie algebra $so(3) \simeq su(2)$ is generated by three elements $J_k, k=1,2,3,$ with commutation relations \begin{equation}\label{23rdjuly20193} [ J_k, J_l ] = \sum\nolimits_{m=1}^3 \epsilon_{klm} J_m. \end{equation} Here $\epsilon_{klm} $ is the completely antisymmetric Levi-Civita tensor of rank $3$ with $\epsilon_{123} = 1$.
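As an aside, the relations \eqref{23rdjuly20193} are easy to check numerically. The following snippet is purely illustrative ( it is not part of the formal development ) and realizes the $J_k$ in the $3$-dimensional adjoint representation, with the assumed concrete matrices $ ( L_k )_{ij} = - \epsilon_{kij} $:

```python
import numpy as np

def eps(i, j, k):
    # Levi-Civita symbol with 0-based indices, eps(0, 1, 2) = +1
    return (i - j) * (j - k) * (k - i) / 2

# Adjoint representation of so(3): (L_k)_{ij} = -eps(k, i, j);
# these are real skew-symmetric 3x3 matrices.
L = [np.array([[-eps(k, i, j) for j in range(3)] for i in range(3)])
     for k in range(3)]

# Verify [L_k, L_l] = sum_m eps_{klm} L_m for all pairs (k, l).
for k in range(3):
    for l in range(3):
        comm = L[k] @ L[l] - L[l] @ L[k]
        rhs = sum(eps(k, l, m) * L[m] for m in range(3))
        assert np.allclose(comm, rhs)
```

Any other choice of skew-hermitian generators satisfying the same structure constants would pass the identical check.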
For a natural number $n,$ let $ \rho_{\frac{n}{2}} $ denote the $(n + 1)$-dimensional unitary irreducible representation of the Lie algebra $su(2)$. The vector space $ \mathbb{C}^{n + 1} $ is the carrier vector space of the representation $ \rho_{\frac{n}{2}}$, and $ \mathbb{C} $ will be viewed as the trivial representation space. In particular, for the fundamental, $n=1$, representation we have $$ J_k = - \tfrac{1}{2} \sqrt{-1} \, \tau_k, $$ with Hermitian Pauli matrices $$ \tau_1 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \tau_2 = \begin{bmatrix} 0 & - \sqrt{-1} \\ \sqrt{-1} & 0 \end{bmatrix}, \quad \tau_3 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, $$ which also generate the Clifford algebra $Cl(2,0)$, that is, they satisfy \begin{equation} \label{23rdjuly20192} [\tau_k, \tau_l ] = 2 \sqrt{-1}\, \sum\nolimits_{m=1}^3 \epsilon_{klm} \tau_m , \qquad \tau_j \tau_k + \tau_k \tau_j = 2 \delta_{j k} . \end{equation} Our spectral triple $ ( A_N, H_N, D_N ) $ has Hilbert space $$ H_N = K_N \otimes_{\mathbb{C}} \mathbb{C}^2, $$ where $ K_N = \oplus_{n=0}^{N} \mathbb{C}^{n+1}$. The algebra $ A_N $ is the full matrix algebra $ B ( K_N ) $. We have the canonical action $ \pi^\prime $ of $ A_N $ on $ K_N $. Then the algebra $ A_N $ is represented on $ H_N $ by the formula $ a \mapsto \pi ( a ) $ where $$ \pi ( a ) ( h_1 \otimes_{\mathbb{C}} h_2 ) = \pi^\prime ( a ) h_1 \otimes_{\mathbb{C}} h_2 $$ for all $ h_1 $ in $ K_N $ and $ h_2 $ in $ \mathbb{C}^2.$ Next we have the Dirac operator $ D_N $. Firstly, for $k = 1,2,3, $ we define operators $ X_k \in B ( K_N ) $ by the formula $$ X_k = \oplus^{N}_{n = 0} \,\, \rho_{\frac{n}{2}} ( J_k ), $$ and denote $\sigma_k = \sqrt{-1} \, \tau_k$, for $k = 1,2,3$. Then the Dirac operator $ D_N $ is defined as $$ D_N = \sum^3_{k = 1} X_k \otimes_{\mathbb{C}} \sigma_k. $$ Since $ X_k $ and $ \sigma_k $ are all skew-Hermitian, the operator $ D_N $ is self-adjoint.
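The Pauli relations \eqref{23rdjuly20192} and the skew-Hermiticity of the matrices $ \sigma_k = \sqrt{-1} \, \tau_k $ entering $D_N$ also admit a direct numerical check; the snippet below is an illustrative sketch:

```python
import numpy as np

# The Pauli matrices tau_1, tau_2, tau_3 as displayed in the text.
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def eps(k, l, m):
    # Levi-Civita symbol, 0-based indices, eps(0, 1, 2) = +1
    return (k - l) * (l - m) * (m - k) / 2

I2 = np.eye(2, dtype=complex)
for k in range(3):
    for l in range(3):
        # [tau_k, tau_l] = 2 sqrt(-1) sum_m eps_{klm} tau_m
        comm = tau[k] @ tau[l] - tau[l] @ tau[k]
        assert np.allclose(comm,
                           2j * sum(eps(k, l, m) * tau[m] for m in range(3)))
        # tau_j tau_k + tau_k tau_j = 2 delta_{jk}
        anti = tau[k] @ tau[l] + tau[l] @ tau[k]
        assert np.allclose(anti, 2 * (k == l) * I2)

# sigma_k = sqrt(-1) tau_k is skew-Hermitian, so sums X_k (x) sigma_k of
# tensor products of skew-Hermitian factors are self-adjoint.
for t in tau:
    s = 1j * t
    assert np.allclose(s.conj().T, -s)
```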
By omitting the notation $ \pi $ while viewing an element of $ A_N $ as an operator on $ H_N $, it can be easily checked that for all $ a \in A_N, $ \begin{equation} \label{23rdjuly20194} [ D_N, a ] = \sum^3_{k = 1} \, [ X_k, a ] \otimes_{\mathbb{C}} \sigma_k, \end{equation} as elements of $ A_N \otimes_{\mathbb{C}} M_{2} ( \mathbb{C})$ acting on $H_N$. For $ k = 1,2, 3, $ we define derivations $ \delta_k $ on $ A_N $ by $\delta_k ( a ) = [ X_k, a ]$. Then \begin{equation} \label{28thjuly20194} [ D, a ] = \sum^3_{k = 1} \, \delta_k ( a ) \otimes_{\mathbb{C}} \sigma_k. \end{equation} By use of \eqref{23rdjuly20193}, one has the following commutation relations between the derivations $\delta_k $: \begin{equation} \label{28thjuly20197} [ \delta_1, \delta_2 ] = \delta_3 , \quad [ \delta_2, \delta_3 ] = \delta_1, \quad [ \delta_3, \delta_1 ] = \delta_2. \end{equation} We will denote the space of one-forms of this spectral triple by the symbol $ \mathcal{E} $. Our goal is to prove that $ \mathcal{E} $ satisfies all the conditions of Theorem \ref{existenceuniqueness}. We will repeatedly use the fact that $A_{N} = B (K_N)$ has no two-sided ideals other than $ \{ 0 \} $ and itself. Let us start by identifying the space of one-forms $\mathcal{E}$ as well as the space of two-forms. We have the following proposition: \begin{ppsn} \label{28thjuly20192} The module $\mathcal{E} = \Omega^1(A_N) $ is free of rank $ 3 $ generated by the central elements $ 1 \otimes_{\mathbb{C}} \sigma_1, 1 \otimes_{\mathbb{C}} \sigma_2, 1 \otimes_{\mathbb{C}} \sigma_3 $. \end{ppsn} \noindent {\bf Proof:} We use \eqref{23rdjuly20193} and \eqref{23rdjuly20194} to write: \begin{align*} [ D, J_1 ] & = J_2 \otimes_{\mathbb{C}} \sigma_3 - J_3 \otimes_{\mathbb{C}} \sigma_2 , \\ [ D, J_2 ] & = J_3 \otimes_{\mathbb{C}} \sigma_1 - J_1 \otimes_{\mathbb{C}} \sigma_3, \\ [ D, J_3 ] & = J_1 \otimes_{\mathbb{C}} \sigma_2 - J_2 \otimes_{\mathbb{C}} \sigma_1 .
\end{align*} Hence, we get \begin{equation} \label{23rdjuly20195} [ J_1, [ D, J_1 ] ] = J_2 \otimes_{\mathbb{C}} \sigma_2 + J_3 \otimes_{\mathbb{C}} \sigma_3, \end{equation} \begin{equation} \label{23rdjuly20196} [ J_2, [ D, J_2 ] ] = J_1 \otimes_{\mathbb{C}} \sigma_1 + J_3 \otimes_{\mathbb{C}} \sigma_3, \end{equation} \begin{equation} \label{23rdjuly20197} [ J_3, [ D, J_3 ] ] = J_1 \otimes_{\mathbb{C}} \sigma_1 + J_2 \otimes_{\mathbb{C}} \sigma_2. \end{equation} Subtracting \eqref{23rdjuly20197} from \eqref{23rdjuly20195} and adding \eqref{23rdjuly20196}, we get $$ [ J_1, [ D, J_1 ] ] - [ J_3, [ D, J_3 ] ] + [ J_2, [ D, J_2 ] ] = 2 J_3 \otimes_{\mathbb{C}} \sigma_3 .$$ Therefore, $J_3 \otimes_{\mathbb{C}} \sigma_3 $ belongs to $ \mathcal{E} .$ Since $ \mathcal{E} $ is a bimodule and the ideal generated by the non-zero element $ J_3 $ is equal to $ A_N, $ we can conclude that $ 1 \otimes_{\mathbb{C}} \sigma_3 $ belongs to $ \mathcal{E} $. Similarly, the elements $ 1 \otimes_{\mathbb{C}} \sigma_1 $ and $ 1 \otimes_{\mathbb{C}} \sigma_2 $ also belong to $ \mathcal{E} $. Thus, $ \mathcal{E} \simeq A_N \otimes_{\mathbb{C}} \mathbb{C}^3 $ is a free module with a basis consisting of the central elements $ 1 \otimes_{\mathbb{C}} \sigma_k, \, k = 1,2,3 $. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{crlre} \label{28thjuly20191} Let us denote by $ \mathcal{E}. \mathcal{E} $ the linear span of the set $ \{ e. f : e, f \in \mathcal{E} \} $ in $ A_N \otimes_{\mathbb{C}} M_2( \mathbb{C} )$. Then $ \mathcal{E}. \mathcal{E} = A_N \otimes_{\mathbb{C}} M_2( \mathbb{C} ) $. \end{crlre} \noindent {\bf Proof:} By Proposition \ref{28thjuly20192}, the elements $ 1 \otimes_{\mathbb{C}} \sigma_k $ belong to $ \mathcal{E} $ for all $k = 1,2, 3.$ Thus, the elements $ 1 \otimes_{\mathbb{C}} \sigma_k \sigma_j $ belong to $ \mathcal{E}. \mathcal{E} $. Since \eqref{23rdjuly20192} holds and $ \{ 1, \sigma_k : k = 1,2, 3 \} $ is a basis of $ B ( \mathbb{C}^2 ) = M_2( \mathbb{C} )$, we see that the subspace $ \{ 1 \otimes_{\mathbb{C}} X : X \in M_2( \mathbb{C} ) \} $ is contained in $ \mathcal{E}. \mathcal{E} $. Therefore, $ A_N \otimes_{\mathbb{C}} M_2( \mathbb{C} )$ is contained in $ \mathcal{E}. \mathcal{E} $. Hence, $ \mathcal{E}. \mathcal{E} = A_N \otimes_{\mathbb{C}} M_2( \mathbb{C} ).$ \mbox{}\hfill $\sqare$\vspace{1ex} \begin{ppsn} \label{28thjuly20193} The space of junk forms of the spectral triple, viewed inside $ A_N \otimes_{\mathbb{C}} M_2( \mathbb{C} ) $, is equal to the subspace $ \{ Y \otimes_{\mathbb{C}} 1 : Y \in A_N \} $. \end{ppsn} \noindent {\bf Proof:} Suppose $ a_k, b_k, \, k = 1, \cdots, n $, are elements in $ A_N $ such that $ \sum\nolimits_k a_k [ D, b_k ] = 0.$ Then by \eqref{28thjuly20194}, we get $$ \sum\nolimits_{j,k} a_j \, \delta_k ( b_j ) \otimes_{\mathbb{C}} \sigma_k = 0 .$$ Therefore, for all $k = 1,2,3,$ we have \begin{equation} \label{28thjuly20195} \sum\nolimits_j a_j \, \delta_k ( b_j ) = 0. \end{equation} We apply $ \delta_l $ to \eqref{28thjuly20195} to obtain $$ \sum\nolimits_j \, [ \delta_l ( a_j ) \delta_k ( b_j ) + a_j \delta_l \delta_k ( b_j ) ] = 0 \qquad \forall ~ k,l,$$ where we have used the fact that $ \delta_l $ is a derivation. Hence, for all $k,l,$ we get \begin{equation} \label{28thjuly20196} \sum\nolimits_j \delta_l ( a_j ) \delta_k ( b_j ) = - \sum\nolimits_j a_j \delta_l \delta_k ( b_j ), \qquad \sum\nolimits_j \delta_k ( a_j ) \delta_k ( b_j ) = - \sum\nolimits_j a_j \delta^2_k ( b_j ).
\end{equation} Now, \eqref{28thjuly20196} implies that \begin{align*} \sum\nolimits_j [ D, a_j ] [ D, b_j ] & = \sum\nolimits_j \sum^3_{k,l = 1} \delta_k ( a_j ) \delta_l ( b_j ) \otimes_{\mathbb{C}} \sigma_k \sigma_l\\ & = - \sum\nolimits_j \sum^3_{k = 1} \delta_k ( a_j ) \delta_k ( b_j ) \otimes_{\mathbb{C}} 1 \\ & \quad + \sum\nolimits_j \sum\nolimits_{k < l} [ \delta_k ( a_j ) \delta_l ( b_j ) - \delta_l ( a_j ) \delta_k ( b_j ) ] \otimes_{\mathbb{C}} \sigma_k \sigma_l\\ & = \sum\nolimits_{j,k} a_j \delta^2_k ( b_j ) \otimes_{\mathbb{C}} 1 - \sum\nolimits_{j, k < l} a_j [ \delta_k, \delta_l ] ( b_j ) \otimes_{\mathbb{C}} \sigma_k \sigma_l, \end{align*} where we have used $ \sigma_k^2 = - 1 $ and $ \sigma_l \sigma_k = - \sigma_k \sigma_l $ for $ k \neq l $. However, by virtue of \eqref{28thjuly20197}, we have \begin{align*} \sum\nolimits_{j, k < l} a_j [ \delta_k, \delta_l ] ( b_j ) \otimes_{\mathbb{C}} \sigma_k \sigma_l & = \sum\nolimits_j a_j \delta_3 ( b_j ) \otimes_{\mathbb{C}} \sigma_1 \sigma_2 \\ & \quad + \sum\nolimits_j a_j \delta_1 ( b_j ) \otimes_{\mathbb{C}} \sigma_2 \sigma_3 - \sum\nolimits_j a_j \delta_2 ( b_j ) \otimes_{\mathbb{C}} \sigma_1 \sigma_3\\ &= 0 \end{align*} by \eqref{28thjuly20195}. Thus, we have \begin{equation} \label{28thjuly20198} \sum\nolimits_j [ D, a_j ] [ D, b_j ] = \sum\nolimits_{j,k} a_j \delta^2_k ( b_j ) \otimes_{\mathbb{C}} 1. \end{equation} We claim that there exist elements $a, b \in A_N$ such that $ a \delta_k ( b ) = 0 $ for all $k,$ but $ a \sum\nolimits_k \delta^2_k ( b ) \neq 0 $. Indeed, if our claim is true, then by the arguments made above, the set of all junk forms will be of the form $ \{ X \otimes_{\mathbb{C}} 1: X \in \mathcal{I} \}, $ where $ \mathcal{I} $ is a non-zero ideal of $ A_N $. Therefore, the space of junk forms is equal to $\{ X \otimes_{\mathbb{C}} 1: X \in A_N \} $. So we are left to prove our claim.
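Before proving the claim, we note that the relations \eqref{28thjuly20197} used above can be checked numerically block by block. The sketch below is illustrative only: the $3$-dimensional adjoint matrices stand in for one irreducible block of the $X_k$, and $ \delta_k ( a ) = [ X_k, a ] $ is tested on a random matrix $a$:

```python
import numpy as np

def eps(i, j, k):
    # Levi-Civita symbol, 0-based indices, eps(0, 1, 2) = +1
    return (i - j) * (j - k) * (k - i) / 2

# One 3-dimensional irreducible block of the X_k: the adjoint
# representation of so(3), satisfying [X_k, X_l] = sum_m eps_{klm} X_m.
X = [np.array([[-eps(k, i, j) for j in range(3)] for i in range(3)])
     for k in range(3)]

def delta(k, a):
    # The derivation delta_k(a) = [X_k, a]
    return X[k] @ a - a @ X[k]

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3))

# [delta_1, delta_2] = delta_3 and the cyclic analogues
for (k, l, m) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    lhs = delta(k, delta(l, a)) - delta(l, delta(k, a))
    assert np.allclose(lhs, delta(m, a))
```

The check works in any dimension: by the Jacobi identity, $ [ \delta_k, \delta_l ] $ is the commutator with $ [ X_k, X_l ] $, so only the structure constants of the chosen block matter.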
Let us define the bilinear form on $ \mathbb{C}^{N + 1} $ $$ \left\langle \left\langle v_1, v_2 \right\rangle\right\rangle = {\rm Re} ( \left\langle v_1, v_2 \right\rangle ), $$ where $ \left\langle \cdot, \cdot \right\rangle $ is the complex inner product on $ \mathbb{C}^{N + 1} $ with respect to which $ J_1, J_2, J_3 $ are skew-hermitian elements of $ {\rm Hom}_{\mathbb{C}} ( \mathbb{C}^{N + 1}, \mathbb{C}^{N + 1} ).$ Moreover, let $ v $ be a vector in $ \mathbb{C}^{N + 1} $ such that $ J_1 ( v ) \neq 0 $. We claim that $ v $ does not belong to ${\rm Span} \{ J_1 v, J_2 v, J_3 v \}.$ Indeed, since $ J_k $ is skew-hermitian, we have $$ \left\langle J_k v, v \right\rangle = - \left\langle v,J_k v \right\rangle = - \overline{\left\langle J_k v, v \right\rangle} $$ and hence, $ \left\langle \left\langle J_k v, v \right\rangle\right\rangle = 0 $. From here, it is straightforward to verify our claim. We construct a basis $ \{ v, v_1, \cdots, v_N \} $ of $ \mathbb{C}^{N + 1} $ in such a way that $v$ is the first element and $ {\rm Span} \{ J_1 v, J_2 v, J_3 v \} \subseteq {\rm Span} \{ v_1, \cdots, v_N \} $. We define $$ a ( \lambda. 1 + x + c_0 v + \sum^N_{i = 1} c_i v_i ) = c_0 v, ~ b ( \lambda. 1 + x + c_0 v + \sum^N_{i = 1} c_i v_i ) = \lambda. v$$ for all $ \lambda $ in $ \mathbb{C} $ and for all $ x $ in $ \mathbb{C}^2 \oplus \mathbb{C}^3 \oplus \cdots \oplus \mathbb{C}^N.$ Then \begin{align*} a \delta_k ( b ) ( \lambda. 1 + x + c_0 v + \sum^N_{i = 1} c_i v_i ) &= a ( X_k b - b X_k ) ( \lambda. 1 + x + c_0 v + \sum^N_{i = 1} c_i v_i )\\ &= a ( X_k. b ) ( \lambda. 1 ) = \lambda a X_k ( v ) = \lambda a J_k ( v ) = 0.
\end{align*} However, \begin{align*} a ( \sum\nolimits_k \delta^2_k ( b ) ) ( 1 ) &= a \sum\nolimits_k [ X_k, [ X_k, b ] ] ( 1 )\\ &= a \sum\nolimits_k [ X_k, X_k b - b X_k ] ( 1 ) = a ( \sum\nolimits_k X^2_k b - X_k b X_k - X_k b X_k + b X^2_k ) ( 1 )\\ &= a ( \sum\nolimits_k X^2_k b ) ( 1 ) = a ( \sum\nolimits_k X^2_k ) ( v )\\ &= a ( \sum\nolimits_k J^2_k ) ( v ) = - \tfrac{N}{2} ( \tfrac{N}{2} + 1 ) a ( v ) \neq 0. \end{align*} This finishes the proof of our claim about the description of the junk forms. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{crlre} \label{17thaugust20191} The space of two-forms is $$ \Omega^2(A_N) = \{ X \otimes_{\mathbb{C}} \sigma_1 \sigma_2 + Y \otimes_{\mathbb{C}} \sigma_2 \sigma_3 + Z \otimes_{\mathbb{C}} \sigma_1 \sigma_3: X, Y, Z \in A_N \} . $$ \end{crlre} \noindent {\bf Proof:} The proof is an immediate consequence of Corollary \ref{28thjuly20191} and Proposition \ref{28thjuly20193}. \mbox{}\hfill $\sqare$\vspace{1ex} Let us write $ e_k = 1 \otimes_{\mathbb{C}} \sigma_k$, for $k =1,2,3$. Then, from Proposition \ref{28thjuly20192}, the elements $ e_1, e_2, e_3 \in \mathcal{Z}( \mathcal{E} ) $ form a basis of the free right $A_N$-module $\mathcal{E}.$ We are now in a position to prove the main theorem of this section. \begin{thm} The differential calculus coming from the spectral triple on the fuzzy sphere satisfies the hypotheses of Theorem \ref{existenceuniqueness} and hence there exists a unique Levi-Civita connection for any pseudo-Riemannian bilinear metric on $\mathcal{E} = \Omega^1(A_N) $. \end{thm} \noindent {\bf Proof:} Firstly, the description of $ \mathcal{E} $ in Proposition \ref{28thjuly20192} implies that $ \mathcal{Z}( \mathcal{E} ) $ is the complex linear span of $ \{ e_1, e_2, e_3 \} $. Thus, the equality $ \mathcal{Z}( \mathcal{E} ) \otimes_{\mathcal{Z}(A_N)} A_N = \mathcal{E} $ easily follows from the description of one-forms, observing that $ \mathcal{Z}(A_N) = \mathbb{C}.1 $.
Next, Corollary \ref{17thaugust20191} implies that $ \Omega^2(A_N) = {\rm Span}\{ a_{kj} \, e_k \wedge e_j: k \neq j, a_{kj} = - a_{jk} \} $ and $ {\rm Ker} ( \wedge ) = {\rm Span}\{ a_{kj} \, e_k \otimes_{A_N} e_j: a_{kj} = a_{jk} \} $. If $ \mathcal{F} = {\rm Span}\{ a_{kj} \, e_k \otimes_{A_N} e_j: k \neq j, a_{kj} = - a_{jk} \}, $ then $ \mathcal{F} $ is isomorphic to $ \Omega^2(A_N) $ as right $A_N$-modules and $$\mathcal{E} \otimes_{A_N} \mathcal{E} = {\rm Ker} ( \wedge ) \oplus \mathcal{F}.$$ Finally, it is easy to check that for all $\omega, \eta \in \mathcal{Z}( \mathcal{E} ), $ $$ P_{\rm sym} ( \omega \otimes_{A_N} \eta ) = \tfrac{1}{2} ( \omega \otimes_{A_N} \eta + \eta \otimes_{A_N} \omega ) $$ and therefore, $\sigma ( \omega \otimes_{A_N} \eta ) = \eta \otimes_{A_N} \omega $ for all $ \omega, \eta \in \mathcal{Z}( \mathcal{E} ) $. This verifies all the hypotheses of Theorem \ref{existenceuniqueness}. \mbox{}\hfill $\sqare$\vspace{1ex} By construction, the bimodule of 2-forms is free and we take as generators the elements \begin{equation} \label{7thaugust20192nn} f_m = \tfrac{1}{2} \, \sum\nolimits_{j,k} \epsilon_{mjk} e_j \wedge e_k , \qquad m=1,2,3. \end{equation} It is then easy to deduce that \begin{equation} \label{7thaugust20192n} e_j \wedge e_k = \sum\nolimits_{m}\, \epsilon_{jkm} f_m , \qquad j,k=1,2,3. \end{equation} As for the action of the differential, by definition $d ( a ) = \sum\nolimits_k [J_k, a] \, e_k$, for $a \in A_N$. On the other hand, starting from $0=\delta^2 (J_k)$, a direct computation leads to \begin{equation} \label{7thaugust20192} d e_m = - \tfrac{1}{2} \, \sum\nolimits_{j,k} \epsilon_{mjk} \, e_j \wedge e_k = - f_m . \end{equation} \begin{thm}\label{lc-fuzzy} The connection $1$-forms of the Levi-Civita connection for the canonical pseudo-Riemannian bilinear metric for the spectral triple are given by \begin{equation} \label{16thaugust20191n} \omega_{jk} = - \tfrac{1}{2} \sum\nolimits_l \, \epsilon_{jkl} \, e_l \, .
\end{equation} \end{thm} \noindent {\bf Proof:} It is easy to see (as in equation (3.49) of \cite{frolich}) that the canonical pseudo-Riemannian bilinear metric $g$ is defined as the right $A_N$-linear extension of the map $$ g ( e_k \otimes_{A_N} e_j ) = \delta_{kj} .$$ We are going to compute the Levi-Civita connection for this $g.$ Let $ \nabla $ be a connection on $\mathcal{E}$ which is both torsionless and compatible with $g.$ Since $ \mathcal{E} $ is a free centered module with a basis $ \{ e_1, e_2, e_3 \} \subseteq \mathcal{Z}( \mathcal{E} ), $ we can write \begin{equation} \label{16thaugust20191} \nabla ( e_k ) = \sum\nolimits_{j} e_j \otimes_{A_N} \omega_{jk}. \end{equation} Now, since the basis $e_k$ is orthonormal, the metric compatibility condition reduces to $$ (g \otimes_{A_N} {\rm id})\sigma_{23}(\nabla(e_k)\otimes_{A_N} e_j ) + ( g \otimes_{A_N} {\rm id} ) ( e_k \otimes_{A_N} \nabla ( e_j ) ) = 0. $$ By inserting \eqref{16thaugust20191}, this just gives that for all $k, j$ we have the antisymmetry \begin{equation} \label{15thaugust20192} \omega_{kj} = - \omega_{jk}. \end{equation} Next, the torsion zero condition $d e_k + \wedge \circ \nabla ( e_k ) = 0$ becomes \begin{equation} d e_k + \sum\nolimits_{j} e_j \wedge \omega_{jk} = 0. \end{equation} Using \eqref{7thaugust20192} and the antisymmetry \eqref{15thaugust20192} one infers that $\omega_{jk} = - \tfrac{1}{2} \sum\nolimits_l \, \epsilon_{jkl} \, e_l$ as stated in \eqref{16thaugust20191n}. This finishes the proof. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{rmrk} If we write the connection $1$-forms on the basis $e_k$, $$ \omega_{jk} = \sum\nolimits_{l} e_l \, \Gamma_{k j l}, $$ we get for the Christoffel symbols $\Gamma_{ljk}$ of the Levi-Civita connection the expressions $$ \Gamma_{ljk} = \tfrac{1}{2} \epsilon_{ljk}.
$$ \end{rmrk} \subsection{Computation of the curvature} Let us recall the Sweedler-like notation in \eqref{sweedler} for a general torsionless connection $ \nabla $ on a centered $A$-bimodule $\mathcal{E}$: $\nabla ( \omega ) = \omega_{(0)} \otimes_{A} \omega_{(1)}$. We define the right $A_N$-linear map $$ R_\nabla := \nabla^2 : \, \mathcal{E} \rightarrow \mathcal{E} \otimes_{A_N} \Omega^2(A_N) $$ by a Leibniz rule: \begin{align} R_\nabla(\omega) &= \nabla (\omega_{(0)} \otimes_{A_N} \omega_{(1)}) = \nabla (\omega_{(0)}) \wedge \omega_{(1)} + \omega_{(0)} \otimes_{A_N} d \omega_{(1)} \nonumber \\ & = (\omega_{(0)})_{(0)} \otimes_{A_N} (\omega_{(0)})_{(1)} \wedge \omega_{(1)} + \omega_{(0)} \otimes_{A_N} d \omega_{(1)}. \end{align} Then, let us specialise this to the fuzzy sphere with free bimodule of 1-forms $\mathcal{E}$ and basis $(e_k, k=1,2,3)$ as before. We have the following proposition. \begin{ppsn} \label{curv-fuzzy2n} Consider the Levi-Civita connection of Theorem \ref{lc-fuzzy} with connection $1$-forms $\omega_{jk} = - \frac{1}{2} \sum\nolimits_l \, \epsilon_{jkl} \, e_l$. Then \begin{align}\label{curv} R_\nabla(e_j) &= - \tfrac{1}{4} \sum\nolimits_{p,q} \, \varepsilon_{jpq} \ e_p \otimes_{A_N} f_q \nonumber \\ &= \tfrac{1}{4} \sum\nolimits_{p} \, e_p \otimes_{A_N} ( e_p \wedge e_j ) \, , \end{align} where the $e_p$'s are the basis one-forms and the $f_q$'s the basis two-forms.
\end{ppsn} \noindent {\bf Proof:} Writing as before $\nabla ( e_k ) = \sum\nolimits_{j} e_j \otimes_{A_N} \omega_{jk}$ for the connection, one finds $$ R_\nabla (e_j) = \sum\nolimits_{p} \, e_p \otimes_{A_N} \big( \sum\nolimits_{k} \omega_{p k} \wedge \omega_{k j} + d ( \omega_{p j} ) \big) \, . $$ Then, a direct computation using \eqref{16thaugust20191n} gives $$ \sum\nolimits_{k} \omega_{p k} \wedge \omega_{k j} = \tfrac{1}{4} \sum\nolimits_{q} \, \varepsilon_{jpq} \, f_q \quad \mbox{and} \quad d \omega_{p j} = - \tfrac{1}{2} \sum\nolimits_{q} \, \varepsilon_{jpq} \, f_q , $$ which, when inserted in the previous expression, lead to the first expression in \eqref{curv}. Using equation \eqref{7thaugust20192nn} one gets the second equality. \mbox{}\hfill $\sqare$\vspace{1ex} Since the bimodule $\mathcal{E}$ is free with basis of generators $(e_k, k=1,2,3)$, so is the dual bimodule $\mathcal{E}^*$, whose dual basis we shall denote $(\psi_j, j=1,2,3)$, with $\psi_j(e_k ) = \delta_{jk}$. Then, by using the canonical isomorphism $$ {\rm Hom}_{A_N}(\mathcal{E}, \mathcal{E} \otimes_{A_N} \Omega^2(A_N)) \simeq \mathcal{E} \otimes_{A_N} \Omega^2(A_N) \otimes_{A_N} \mathcal{E}^* $$ we can think of the curvature $R_\nabla$ as a map $R_\nabla \in \mathcal{E} \otimes_{A_N} \Omega^2(A_N) \otimes_{A_N} \mathcal{E}^*$. In particular, for the Levi-Civita connection of the fuzzy sphere with curvature in \eqref{curv} one easily finds $$ R_\nabla = \tfrac{1}{4} \sum\nolimits_{p j} \, e_p \otimes_{A_N} ( e_p \wedge e_j ) \, \otimes_{A_N} \psi_j \, .
$$ Since $\mathcal{E}$ and $\mathcal{E}^*$ are centered bimodules, we have a well-defined $ A_N $-$ A_N $-bilinear map \begin{align} \label{16thaugust2019n} \widehat{{\rm ev}} : \Omega^2(A_N) \otimes_{A_N} \mathcal{E}^* \, \to \, \mathcal{E} \, & , \nonumber \\ \widehat{{\rm ev}} \big( (e_j \wedge e_k) \otimes_{A_N} \psi_m \big) &= \tfrac{1}{2} \, \widehat{{\rm ev}} \big( (e_j \otimes_{A_N} e_k - e_k \otimes_{A_N} e_j) \otimes_{A_N} \psi_m \big) \nonumber \\ & = \tfrac{1}{2} \, \big( e_j \, \psi_m(e_k) - e_k \, \psi_m(e_j) \big) = \tfrac{1}{2} \, \big( e_j \, \delta_{m k} - e_k \, \delta_{m j} \big). \end{align} We are now ready to define the Ricci and scalar curvature of a connection. \begin{dfn} The Ricci curvature $ {\rm Ric} $ is defined as the element in $ \mathcal{E} \otimes_{A_N} \mathcal{E} $ given by \begin{equation} \label{23rdjan1} {\rm Ric}:= ( {\rm id}_\mathcal{E} \otimes_{A_N} \widehat{{\rm ev}} ) ( R_\nabla) \in \mathcal{E} \otimes_{A_N} \mathcal{E} \, . \end{equation} Furthermore, if $ {\rm Ric} = \sum\nolimits_{k j} r_{kj} \, e_k \otimes_{A_N} e_j$ for elements $ r_{kj} \in \mathcal{Z}(A_N)$, the scalar curvature ${\rm Scal}$ is defined as: \begin{equation} \label{23rdjan2} {\rm Scal} := \sum\nolimits_{k j} r_{kj} \, g ( e_k \otimes_{A_N} e_j ) \, \in \, A_N. \end{equation} \end{dfn} \begin{ppsn} The scalar curvature for the fuzzy sphere is $\frac{3}{4}$. \end{ppsn} \noindent {\bf Proof:} By applying formula \eqref{16thaugust2019n} for the 2-forms $e_p \wedge e_j$ we first compute $$ \sum\nolimits_{j} \widehat{{\rm ev}} ( (e_p \wedge e_j) \otimes_{A_N} \psi_j ) = \tfrac{1}{2} \, ( 3 e_p - e_p ) = e_p . $$ When inserting in \eqref{curv}, this yields \begin{align*} {\rm Ric} &:= ( \mbox{id} \otimes_{A_N} \widehat{{\rm ev}} ) [ \, \tfrac{1}{4} \sum\nolimits_{p j} \, e_p \otimes_{A_N} ( e_p \wedge e_j ) \, \otimes_{A_N} \psi_j \, ] \\ &= \tfrac{1}{4} \, \sum\nolimits_{p} e_{p} \otimes_{A_N} e_p . \end{align*} Then, \begin{align*} {\rm Scal} & = \tfrac{1}{4}\, \sum\nolimits_p \, g ( e_p \otimes_{A_N} e_p ) \\ & = \tfrac{3}{4} \, . \end{align*} \mbox{}\hfill $\sqare$\vspace{1ex} \vspace{4mm} \noindent {\bf Acknowledgement:} JB and DG were funded by a ``Research in Pairs'' INDAM grant at ICTP, Trieste, and also by a ``Dipartimento di Eccellenza'' fund of the Department of Mathematics and Geosciences, University of Trieste. DG was partially supported by JC Bose Fellowship and Grant from the Department of Science and Technology, Govt. of India. GL acknowledges partial support from INFN, Iniziativa Specifica GAST and from INDAM GNSAGA. Finally, JB and DG want to thank GL for the very kind hospitality during their stay in Trieste.
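The epsilon-tensor manipulations of this section can be checked mechanically. The following script is an editorial sketch, not part of the paper: array indices $0,1,2$ stand for the labels $1,2,3$, forms are represented by their coefficient arrays with respect to the bases $e_l$ and $f_q$, and the evaluation map is taken with the overall factor $\tfrac{1}{2}$, the convention under which the scalar curvature comes out as $\tfrac{3}{4}$, as stated in the last proposition.

```python
import numpy as np

# Levi-Civita symbol eps[j,k,l]; indices 0..2 stand for the labels 1..3.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Connection 1-forms omega_{jk} = -1/2 sum_l eps_{jkl} e_l, stored as W[j,k,l].
W = -0.5 * eps

# Zero torsion: d e_k = -f_k and e_j ^ e_l = sum_m eps_{jlm} f_m, so the
# condition d e_k + sum_j e_j ^ omega_{jk} = 0 amounts to
# sum_{j,l} W[j,k,l] eps[j,l,m] = delta_{km}.
T = np.einsum('jkl,jlm->km', W, eps)
assert np.allclose(T, np.eye(3))

# Curvature: R_nabla(e_j) = sum_{p,q} R[p,j,q] e_p (x) f_q, where R[p,j,q] is
# the f_q-coefficient of sum_k omega_{pk} ^ omega_{kj} + d omega_{pj},
# with d omega_{pj} = -sum_q W[p,j,q] f_q.
R = np.einsum('pkl,kjm,lmq->pjq', W, W, eps) - W
assert np.allclose(R, -0.25 * np.transpose(eps, (1, 0, 2)))  # = -1/4 eps_{jpq}

# Ricci: with ev((e_a ^ e_b) (x) psi_j) = (e_a d_{jb} - e_b d_{ja})/2 one gets
# ev(f_q (x) psi_j) = 1/2 sum_a eps_{qaj} e_a.
ric = 0.5 * np.einsum('pjq,qaj->pa', R, eps)
assert np.allclose(ric, 0.25 * np.eye(3))    # Ric = 1/4 sum_p e_p (x) e_p

# Scalar curvature, with the orthonormal metric g(e_p (x) e_a) = delta_{pa}:
scal = np.trace(ric)
print(scal)  # 0.75
```

The script only manipulates coefficient arrays, so it checks the combinatorics of the epsilon identities used in the proofs, not the operator-algebraic statements.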
\section{Introduction} Separation Logic ($\mathsf{SL}$) \cite{IshtiaqOHearn01,Reynolds02} is widely used to reason about programs manipulating recursively linked data structures, being at the core of several industrial-scale static program analysis techniques \cite{DBLP:conf/nfm/CalcagnoDDGHLOP15,DBLP:conf/cav/BerdineCI11,DBLP:conf/cav/DudkaPV11}. Given an integer $\mathfrak{K} \geq 1$, denoting the number of fields in a record datatype, and an infinite set $\mathbb{L}$ of memory locations (addresses), the assertions in this logic describe {\em heaps}: finite partial functions mapping locations to records, i.e., $\mathfrak{K}$-tuples of locations. A location $\ell$ in the domain of the heap is said to be {\em allocated} and the \emph{points-to} atom $x \mapsto (y_1,\dots,y_\mathfrak{K})$ states that the location associated with $x$ refers to the tuple of locations associated with $(y_1,\dots,y_\mathfrak{K})$. The {\em separating conjunction} $\phi * \psi$ states that the formul{\ae} $\phi$ and $\psi$ hold in non-overlapping parts of the heap, i.e., parts with disjoint domains. This connective allows for modular program analyses, because the formul{\ae} specifying the behaviour of a program statement refer only to the small (local) set of locations that are manipulated by that statement, with no concern for the rest of the program's state. Formul{\ae} consisting of points-to atoms connected with separating conjunctions describe heaps of bounded size only. To reason about recursive data structures of unbounded sizes (lists, trees, etc.), the base logic is enriched by predicate symbols, with a semantics specified by user-defined inductive rules.
For instance, the rules: $\mathsf{excls}(x,y) \Leftarrow \exists z~.~ x\mapsto (z,y) * z \not \bumpeq \mathsf{c}$ and $\mathsf{excls}(x,y) \Leftarrow \exists z \exists v~.~ x\mapsto (z,v) * \mathsf{excls}(v,y) * z \not \bumpeq \mathsf{c}$ describe a non-empty list segment, whose elements are records with two fields: the first is a data field, holding a location distinct from the one assigned to the global constant $\mathsf{c}$, and the second is used to link the records in a list whose head and tail are pointed to by $x$ and $y$, respectively. An important problem in program verification, arising during construction of Hoare-style correctness proofs, is the discharge of verification conditions, i.e., entailments of the form $\phi \vdash \psi_1, \ldots, \psi_n$, where $\phi$ and $\psi_1, \ldots, \psi_n$ are separating conjunctions of points-to atoms, predicates and (dis-)equalities, also known as {\em symbolic heaps}. The \emph{entailment problem} then asks whether \emph{every model of $\phi$ is a model of some $\psi_i$}. In general, the entailment problem is undecidable and becomes decidable when the inductive rules used to interpret the predicates satisfy three restrictions \cite{IosifRogalewiczSimacek13}: \begin{inparaenum}[(1)] \item {\em progress}, stating that each rule allocates {\em exactly} one memory cell, \item {\em connectivity}, ensuring that the allocated memory cells form a tree-shaped structure, and \item {\em establishment}, stating that all existentially quantified variables introduced by an inductive rule must be assigned to some allocated memory cell, in every structure defined by that rule.
\end{inparaenum} For instance, the above rules are progressing and connected but not established, because the $\exists z$ variables are not explicitly assigned an allocated location, unlike the $\exists v$ variables, which are passed as the first parameter of the $\mathsf{excls}(x,y)$ predicate and are thus always allocated by the points-to atoms $x \mapsto (z,y)$ or $x \mapsto (z,v)$, from the first and second rule defining $\mathsf{excls}(x,y)$, respectively. The argument behind the decidability of a progressing, connected and established entailment problem is that every model of the left-hand side is encoded by a graph whose treewidth\footnote{The treewidth of a graph is a parameter measuring how close the graph is to a tree, see \cite[Ch. 11]{flum-grohe-book} for a definition.} is bounded by the size of the largest symbolic heap that occurs in the problem \cite{IosifRogalewiczSimacek13}. Moreover, the progress and connectivity conditions ensure that the set of models of a symbolic heap can be represented by a Monadic Second Order ($\mathsf{MSO}$) logic formula interpreted over graphs, which can be effectively built from the symbolic heap and the set of rules of the problem. The decidability of entailments then follows from the decidability of the satisfiability problem for $\mathsf{MSO}$\ over graphs of bounded treewidth (Courcelle's Theorem) \cite{Courcelle90}. Initially, no upper bound better than elementary recursive was known to exist. Recently, a $2$-$\mathsf{EXPTIME}$\ algorithm was proposed \cite{KatelaanMathejaZuleger19,PMZ20} for sets of rules satisfying these three conditions, and, moreover, this bound was shown to be tight \cite{DBLP:conf/lpar/EchenimIP20}. Several natural questions arise: are the progress, connectivity and establishment conditions really necessary for the decidability of entailments? How much can these restrictions be relaxed without jeopardizing the complexity of the problem?
Can one decide entailments that involve sets of heaps of unbounded treewidth? In this paper, we answer these questions by showing that entailments are still $2$-$\mathsf{EXPTIME}$-complete when the establishment condition is replaced by a condition on the (dis-)equations occurring in the symbolic heaps of the problem. Informally, such (dis-)equations must be of the form $x \bumpeq \mathsf{c}$ ($x \not\seq \mathsf{c}$), where $\mathsf{c}$ ranges over some finite and fixed set of globally visible constants (including special symbols such as $\mathsf{nil}$, which denotes a non-allocated address, but also any free variable occurring on the left-hand side of the entailment). We also slightly relax the progress and connectivity conditions, by allowing forest-like heap structures (instead of just trees), provided that every root is mapped to a constant symbol. These entailment problems are called \emph{equationally restricted} (\emph{e-restricted\xspace}, for short). For instance, the entailment problem $\mathsf{excls}(x,y) * \mathsf{excls}(y,z) \vdash \mathsf{excls}(x,z)$, with the above rules, falls in this category. We prove that the e-restricted\xspace condition loses no generality compared to establishment, because any established entailment problem can be transformed into an equivalent e-restricted\xspace entailment problem. E-restricted\xspace problems allow reasoning about structures that contain dangling pointers, which frequently occur in practice, especially in the context of modular program analysis. Moreover, the set of structures considered in an e-restricted\xspace entailment problem may contain infinite sequences of heaps of strictly increasing treewidths, which are beyond the scope of established problems \cite{IosifRogalewiczSimacek13}. The decision procedure for e-restricted\xspace problems proposed in this paper is based on a similar idea as the one given, for established problems, in \cite{PMZ20,PZ20}.
We build a suitable abstraction of the set of structures satisfying the left-hand side of the entailment bottom-up, starting from points-to and predicate atoms, using abstract operators to compose disjoint structures, to add and remove variables, and to unfold the inductive rules associated with the predicates. The abstraction is precise enough to allow checking that all the models of the left-hand side fulfill the right-hand side of the entailment and also general enough to ensure termination of the entailment checking algorithm. Although both procedures are similar, there are essential differences between our work and \cite{PMZ20,PZ20}. First, we show that, instead of using a specific language for describing those abstractions, the considered sets of structures can themselves be defined in $\mathsf{SL}$, by means of formul{\ae} of some specific pattern called {\em core formul{\ae}}. Second, the fact that the systems are not established makes the definition of the procedure much more difficult, due to the fact that the considered structures can have an unbounded treewidth. This is problematic because, informally, this boundedness property is essential to ensure that the abstractions can be described using a finite set of variables, denoting the {\em frontier} of the considered structures, namely the locations that can be shared with other structures. In particular, the fact that disjoint heaps may share unallocated (or ``unnamed'') locations complicates the definition of the composition operator. This problem is overcome by considering a specific class of structures, called {\em normal structures}, of bounded treewidth, and proving that the validity of an entailment can be decided by considering only normal structures.
In terms of complexity, we show that the running time of our algorithm is doubly exponential w.r.t.\ the maximal size among the symbolic heaps occurring in the input entailment problem (including those in the rules) and simply exponential w.r.t.\ the number of such symbolic heaps (hence w.r.t.\ the number of rules). This means that the $2$-$\mathsf{EXPTIME}$\ upper bound is preserved by any reduction increasing exponentially the number of rules, but increasing only polynomially the size of the rules. On the other hand, the $2$-$\mathsf{EXPTIME}$-hard lower bound is proved by a reduction from the membership problem for exponential-space bounded Alternating Turing Machines \cite{DBLP:conf/lpar/EchenimIP20}. \shortVersionOnly{ Due to space restrictions, most proofs are shifted to an appendix. } \longVersionOnly{ The remainder of the paper is structured as follows. In Section \ref{sec:prel}, all the necessary notions concerning Separation Logic are recalled, and in Section \ref{sec:restricted}, we define the fragment of entailment problems that we are considering. In particular, we formally define the notions of connected, established and e-restricted\xspace problems. In Section \ref{sec:normalize} we introduce a preprocessing step, which transforms any problem into an equivalent {\em normalized} one, satisfying many properties that will be essential in the following. In Section \ref{sec:established} we show that e-restricted\xspace problems are, in a sense to be specified formally, strictly more general than established ones. In Section \ref{sec:injective-normal} we show that the considered entailments can be tested by focusing on a specific class of structures, called {\em normal structures}.
In Section \ref{sec:core} we define {\em core formul{\ae}}, which are SL formul{\ae} of specific patterns used to describe suitable abstractions of structures, and we define an algorithm to test entailment based on these abstractions. Each structure is represented by its {\em profile}, defined as the set of core formul{\ae} it satisfies, along with some additional conditions. In Section \ref{sec:coreabs}, we show how such profiles can be effectively constructed and in Section \ref{sec:complexity} the complexity of the procedure is analyzed and the main result of the paper is stated. } \section{Separation Logic with Inductive Definitions} \label{sec:prel} Let ${\bf \mathbb{N}}$ denote the set of natural numbers. For a countable set $S$, we denote by $\card{S} \in {\bf \mathbb{N}} \cup \set{\infty}$ its cardinality. For a partial mapping $f : A \rightharpoonup B$, let $\mathrm{dom}(f) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{x \in A \mid f(x) \in B}$ and $\mathrm{img}(f) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{f(x) \mid x \in \mathrm{dom}(f)}$ be its domain and range, respectively. We say that $f$ is \emph{total} if $\mathrm{dom}(f) = A$, written $f : A \rightarrow B$ and \emph{finite}, written $f : A \rightharpoonup_{\mathit{fin}} B$ if $\card{\mathrm{dom}(f)} < \infty$. Given integers $n$ and $m$, we denote by $\interv{n}{m}$ the set $\set{n,n+1, \ldots, m}$, so that $\interv{n}{m} = \emptyset$ if $n > m$. For a relation $\lhd \subseteq A \times A$, we denote by $\lhd^*$ its reflexive and transitive closure. For an integer $n\geq0$, let $A^n$ be the set of $n$-tuples with elements from $A$. Given a tuple $\vec{a} = (a_1, \ldots, a_n)$ and $i \in \interv{1}{n}$, we denote by $\vec{a}_i$ the $i$-th element of $\vec{a}$ and by $\len{\vec{a}} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} n$ its length. By $f(\vec{a})$ we denote the tuple obtained by the pointwise application of $f$ to the elements of $\vec{a}$. 
\shortVersionOnly{If multiplicity and order of the elements are not important, we blur the distinction between tuples and sets, using the set-theoretic notations $x \in \vec{a}$, $\vec{a} \cup \vec{b}$, $\vec{a} \cap \vec{b}$ and $\vec{a} \setminus \vec{b}$.} \longVersionOnly{By a slight abuse of notation, we write $a \in \vec{a}$ if $a=\vec{a}_i$, for some $i \in \interv{1}{n}$. Given tuples $\vec{a}$ and $\vec{b}$, we slightly abuse notations by defining the sets $\vec{a} \cup \vec{b} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{x \mid x \in \vec{a} \text{ or } x \in \vec{b}}$, $\vec{a} \cap \vec{b} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{x \mid x \in \vec{a} \text{ and } x \in \vec{b}}$ and $\vec{a} \setminus \vec{b} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{x \mid x \in \vec{a} \text{ and } x \not\in \vec{b}}$.} Let $\mathbb{V} = \set{x,y,\ldots}$ be an infinite countable set of logical first-order variables and $\mathbb{P} = \set{p,q,\ldots}$ be an infinite countable set (disjoint from $\mathbb{V}$) of relation symbols, called \emph{predicates}, where each predicate $p$ has arity $\#p \geq 0$. We also consider a finite set $\mathbb{C}$ of \emph{constants}, of known bounded cardinality, disjoint from both $\mathbb{V}$ and $\mathbb{P}$. Constants will play a special r\^ole in the upcoming developments and the fact that $\mathbb{C}$ is bounded is of particular importance. A \emph{term} is either a variable or a constant and we denote by $\mathbb{T} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \mathbb{V} \cup \mathbb{C}$ the set of terms. Throughout this paper we consider an integer $\mathfrak{K}\geq1$ that, intuitively, denotes the number of fields in a record datatype. Although we do not assume $\mathfrak{K}$ to be a constant in any of the algorithms presented in the following, considering that every record has exactly $\mathfrak{K}$ fields simplifies the definitions.
The logic $\seplogk{\mathfrak{K}}$ is the set of formul{\ae} generated inductively by the syntax: \vspace*{-.5\baselineskip} \[\begin{array}{rcl} \phi & := & \mathsf{emp} \mid t_0 \mapsto (t_1, \ldots, t_\mathfrak{K}) \mid p(t_1, \ldots, t_{\#p}) \mid t_1 \approx t_2 \mid \phi_1 * \phi_2 \mid \phi_1 \wedge \phi_2 \mid \neg\phi_1 \mid \exists x ~.~ \phi_1 \\[-4mm] \end{array}\] where $p \in \mathbb{P}$, $t_i \in \mathbb{T}$ and $x \in \mathbb{V}$. Atomic propositions of the form $t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$ are called \emph{points-to atoms} and those of the form $p(t_1, \ldots, t_{\#p})$ are \emph{predicate atoms}. If $\mathfrak{K}=1$, we write $t_0 \mapsto t_1$ for $t_0 \mapsto (t_1)$. The connective $*$ is called \emph{separating conjunction}, in contrast with the classical conjunction $\wedge$. The \emph{size} of a formula $\phi$, denoted by $\size{\phi}$, is the number of occurrences of symbols in it. We write $\fv{\phi}$ for the set of \emph{free} variables in $\phi$ and $\term{\phi} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \fv{\phi} \cup \mathbb{C}$. A formula is \emph{predicate-free} if it has no predicate atoms. As usual, $\phi_1 \vee \phi_2 \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \neg(\neg\phi_1 \wedge \neg\phi_2)$ and $\forall x ~.~ \phi \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \neg\exists x ~.~ \neg\phi$. For a set of variables $\vec{x} = \set{x_1,\ldots, x_n}$ and a quantifier $Q \in \set{\exists, \forall}$, we write $Q \vec{x} ~.~ \phi \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} Q x_1 \ldots Q x_n ~.~ \phi$. By writing $t_1 = t_2$ ($\phi_1 = \phi_2$) we mean that the terms (formul{\ae}) $t_1$ and $t_2$ ($\phi_1$ and $\phi_2$) are syntactically the same. A \emph{substitution} is a partial mapping $\sigma : \mathbb{V} \rightharpoonup \mathbb{T}$ that maps variables to terms. 
We denote by $[t_1/x_1, \ldots, t_n/x_n]$ the substitution that maps the variable $x_i$ to $t_i$, for each $i \in \interv{1}{n}$ and is undefined elsewhere. By $\phi\sigma$ we denote the formula obtained from $\phi$ by substituting each variable $x \in \fv{\phi}$ by $\sigma(x)$ (we assume that bound variables are renamed to avoid collisions if needed). By abuse of notation, we sometimes write $\sigma(x)$ for $x$, when $x \not\in \mathrm{dom}(\sigma)$. To interpret $\seplogk{\mathfrak{K}}$ formul{\ae}, we consider an infinite countable set $\mathbb{L}$ of \emph{locations}. The semantics of $\seplogk{\mathfrak{K}}$ formul{\ae} is defined in terms of \emph{structures} $(\mathfrak{s}, \mathfrak{h})$, where:\begin{compactitem} \item $\mathfrak{s} : \mathbb{T} \rightharpoonup \mathbb{L}$ is a partial mapping of terms into locations, called a \emph{store}, that interprets at least all the constants, i.e.\ $\mathbb{C} \subseteq \mathrm{dom}(\mathfrak{s})$ for every store $\mathfrak{s}$, and \item $\mathfrak{h} : \mathbb{L} \rightharpoonup_{\mathit{fin}} \mathbb{L}^\mathfrak{K}$ is a finite partial mapping of locations into $\mathfrak{K}$-tuples of locations, called a \emph{heap}. \end{compactitem} Given a heap $\mathfrak{h}$, let $\mathrm{loc}(\mathfrak{h}) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{\ell_0, \ldots, \ell_\mathfrak{K} \mid \ell_0 \in \mathrm{dom}(\mathfrak{h}),~ \mathfrak{h}(\ell_0) = (\ell_1, \ldots, \ell_\mathfrak{K})}$ be the set of locations that occur in the heap $\mathfrak{h}$. Two heaps $\mathfrak{h}_1$ and $\mathfrak{h}_2$ are \emph{disjoint} iff $\mathrm{dom}(\mathfrak{h}_1) \cap \mathrm{dom}(\mathfrak{h}_2) = \emptyset$, in which case their \emph{disjoint union} is denoted by $\mathfrak{h}_1 \uplus \mathfrak{h}_2$, otherwise undefined. 
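As an illustration of these definitions (an editorial sketch, not part of the formal development), finite heaps with $\mathfrak{K}=2$ can be prototyped as Python dictionaries mapping locations to pairs of locations; the sketch below shows in particular that heaps with disjoint domains may still share locations.

```python
def disjoint_union(h1, h2):
    """h1 (+) h2: defined only when dom(h1) and dom(h2) are disjoint."""
    if set(h1) & set(h2):
        raise ValueError('domains overlap: disjoint union undefined')
    return {**h1, **h2}

def locations(h):
    """loc(h): every location occurring in h, allocated or not."""
    return set(h) | {l for tup in h.values() for l in tup}

# Two single-cell heaps (K = 2 fields per record):
h1 = {1: (5, 2)}   # location 1 is allocated and refers to the pair (5, 2)
h2 = {2: (6, 3)}

h = disjoint_union(h1, h2)            # defined: dom(h1) = {1}, dom(h2) = {2}
print(locations(h1) & locations(h2))  # {2}: shared, although domains are disjoint
```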
The \emph{frontier between $\mathfrak{h}_1$ and $\mathfrak{h}_2$} is the set of common locations $\mathrm{Fr}(\mathfrak{h}_1,\mathfrak{h}_2) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \mathrm{loc}(\mathfrak{h}_1) \cap \mathrm{loc}(\mathfrak{h}_2)$. Note that disjoint heaps may have nonempty frontier. The \emph{satisfaction relation} $\models$ between structures $(\mathfrak{s},\mathfrak{h})$ and predicate-free $\seplogk{\mathfrak{K}}$ formul{\ae} $\phi$ is defined recursively on the structure of formul{\ae}: \vspace*{-.5\baselineskip} \[\begin{array}{rclcl} (\mathfrak{s}, \mathfrak{h}) & \models & t_1 \approx t_2 & \iff & t_1, t_2 \in \mathrm{dom}(\mathfrak{s}) \text{ and } \mathfrak{s}(t_1) = \mathfrak{s}(t_2) \\ (\mathfrak{s}, \mathfrak{h}) & \models & \mathsf{emp} & \iff & \mathfrak{h} = \emptyset \\ (\mathfrak{s}, \mathfrak{h}) & \models & t_0 \mapsto (t_1, \ldots, t_\mathfrak{K}) & \iff & t_0, \ldots, t_\mathfrak{K} \in \mathrm{dom}(\mathfrak{s}),\ \mathrm{dom}(\mathfrak{h}) = \set{\mathfrak{s}(t_0)} \text{ and } \mathfrak{h}(\mathfrak{s}(t_0)) = (\mathfrak{s}(t_1), \ldots, \mathfrak{s}(t_\mathfrak{K})) \\ (\mathfrak{s}, \mathfrak{h}) & \models & \phi_1 \wedge \phi_2 & \iff & (\mathfrak{s}, \mathfrak{h}) \models \phi_i,~ i = 1,2 \\ (\mathfrak{s}, \mathfrak{h}) & \models & \neg\phi_1 & \iff & \fv{\phi_1} \subseteq \mathrm{dom}(\mathfrak{s}) \text{ and } (\mathfrak{s}, \mathfrak{h}) \not\models \phi_1 \\ (\mathfrak{s}, \mathfrak{h}) & \models & \phi_1 * \phi_2 & \iff & \text{there exist heaps $\mathfrak{h}_1$, $\mathfrak{h}_2$ such that $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$} \text{ and $(\mathfrak{s}, \mathfrak{h}_i) \models \phi_i$, $i=1,2$} \\ (\mathfrak{s}, \mathfrak{h}) & \models & \exists x ~.~ \phi & \iff & (\mathfrak{s}[x \leftarrow \ell], \mathfrak{h}) \models \phi, \text{ for some location $\ell \in \mathbb{L}$} \\[-2mm] \end{array}\] where $\mathfrak{s}[x \leftarrow \ell]$ is the store, with domain $\mathrm{dom}(\mathfrak{s}) 
\cup \set{x}$, that maps $x$ to $\ell$ and behaves like $\mathfrak{s}$ over $\mathrm{dom}(\mathfrak{s}) \setminus \set{x}$. For a tuple of variables $\vec{x} = (x_1,\ldots,x_n)$ and locations $\overline{\ell} = (\ell_1,\ldots,\ell_n)$, we call the store $\mathfrak{s}[\vec{x} \leftarrow \overline{\ell}] \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \mathfrak{s}[x_1 \leftarrow \ell_1] \ldots [x_n \leftarrow \ell_n]$ an \emph{$\vec{x}$-associate} of $\mathfrak{s}$. A structure $(\mathfrak{s},\mathfrak{h})$ such that $(\mathfrak{s},\mathfrak{h}) \models \phi$, is called a \emph{model} of $\phi$. Note that $(\mathfrak{s},\mathfrak{h}) \models \phi$ only if $\fv{\phi} \subseteq \mathrm{dom}(\mathfrak{s})$. The fragment of \emph{symbolic heaps} is obtained by confining the negation and conjunction to the formul{\ae} $t_1 \bumpeq t_2 \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} t_1 \approx t_2 \wedge \mathsf{emp}$ and $t_1 \not\seq t_2 \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \neg t_1 \approx t_2 \wedge \mathsf{emp}$, called \emph{equational atoms}, by abuse of language. We denote by $\shk{\mathfrak{K}}$ the set of symbolic heaps, formally defined below: \vspace*{-.5\baselineskip} \[\begin{array}{rcl} \phi & := & \mathsf{emp} \mid t_0 \mapsto (t_1, \ldots, t_\mathfrak{K}) \mid p(t_1, \ldots, t_{\#p}) \mid t_1 \bumpeq t_2 \mid t_1 \not\seq t_2 \mid \phi_1 * \phi_2 \mid \exists x ~.~ \phi_1 \\[-4mm] \end{array}\] Given quantifier-free symbolic heaps $\phi_1, \phi_2 \in \shk{\mathfrak{K}}$, it is not hard to check that $\exists x ~.~ \phi_1 * \exists y ~.~ \phi_2$ and $\exists x \exists y ~.~ \phi_1 * \phi_2$ have the same models. Consequently, each symbolic heap can be written in prenex form, as $\phi = \exists x_1 \ldots \exists x_n ~.~ \psi$, \shortVersionOnly{where $\psi$ is a quantifier-free separating conjunction of points-to atoms and (dis-)equalities. 
A variable $x \in \fv{\psi}$ is \emph{allocated} in $\phi$ iff there exists a (possibly empty) sequence of equalities $x \bumpeq \ldots \bumpeq t_0$ and a points-to atom $t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$ in $\psi$. } \longVersionOnly{where: \begin{equation}\label{eq:symbolic-heap} \psi = \Asterisk_{i=1}^\alpha t_0^i \mapsto (t_1^i, \ldots, t_\mathfrak{K}^i) ~*~ \Asterisk_{j=\alpha+1}^\beta p_j(t_1^j, \ldots, t_{\#p}^j) ~*~ \Asterisk_{k=\beta+1}^{\gamma} t_1^k \bumpeq t_2^k ~*~ \Asterisk_{\ell=\gamma+1}^{\delta} t_1^\ell \not\seq t_2^\ell \end{equation} for some integers $0 \leq \alpha \leq \beta \leq \gamma \leq \delta$. A variable $x \in \fv{\phi}$ is \emph{allocated} in a symbolic heap $\phi$ if, using the notations from (\ref{eq:symbolic-heap}), either $x \in \{t_0^1,\ldots,t_0^\alpha\}$ or there exists a sequence of terms $(t_1, \ldots, t_j)$ such that $j\geq 2$, $x = t_1$, $\set{t_i, t_{i+1}} = \{t_1^k, t_2^k\}$ for some $k \in \interv{\beta+1}{\gamma}$, for every $i \in \interv{1}{j-1}$, and $t_j \in \{t_0^1,\ldots,t_0^\alpha\}$. Clearly, if $\phi$ is satisfiable and predicate-free then $x$ is allocated in $\phi$ if and only if $\mathfrak{s}'(x) \in \mathrm{dom}(\mathfrak{h})$ holds for every $(x_1,\ldots,x_n)$-associate $\mathfrak{s}'$ of $\mathfrak{s}$ such that $(\mathfrak{s}',\mathfrak{h}) \models \psi$. } The predicates from $\mathbb{P}$ are interpreted by a given set $\mathcal{S}$ of \emph{rules} $p(x_1, \ldots, x_{\#p}) \Leftarrow \rho$, where $\rho$ is a symbolic heap, such that $\fv{\rho} \subseteq \{x_1, \ldots, x_{\#p}\}$. We say that $p(x_1, \ldots, x_{\#p})$ is the \emph{head} and $\rho$ is the \emph{body} of the rule. For conciseness, we write $p(x_1, \ldots, x_{\#p}) \Leftarrow_\mathcal{S} \rho$ instead of $p(x_1, \ldots, x_{\#p}) \Leftarrow \rho \in \mathcal{S}$. In the following, we shall often refer to a given set of rules $\mathcal{S}$.
\begin{definition}[Unfolding]\label{def:unfolding} A formula $\psi$ is a \emph{step-unfolding} of a formula $\phi \in \seplogk{\mathfrak{K}}$, written $\phi \unfold{\mathcal{S}} \psi$, if $\psi$ is obtained by replacing an occurrence of an atom $p(t_1, \ldots, t_{\#p})$ in $\phi$ with $\rho[t_1/x_1, \ldots, t_{\#p}/x_{\#p}]$, for a rule $p(x_1, \ldots, x_{\#p}) \Leftarrow_\mathcal{S} \rho$. An \emph{unfolding} of $\phi$ is a formula $\psi$ such that $\phi \unfold{\mathcal{S}}^* \psi$. \end{definition} It is easily seen that any unfolding of a symbolic heap is again a symbolic heap. We implicitly assume that all bound variables are $\alpha$-renamed throughout an unfolding, to avoid name clashes. Unfolding extends the semantics from predicate-free to arbitrary $\seplogk{\mathfrak{K}}$ formul{\ae}: \begin{definition}\label{def:unfolding-semantics} Given a structure $(\mathfrak{s},\mathfrak{h})$ and a formula $\phi \in \seplogk{\mathfrak{K}}$, we write $(\mathfrak{s},\mathfrak{h}) \models_\mathcal{S} \phi$ iff there exists a predicate-free unfolding $\phi \unfold{\mathcal{S}}^* \psi$ such that $(\mathfrak{s},\mathfrak{h}) \models \psi$. In this case, $(\mathfrak{s},\mathfrak{h})$ is an \emph{$\mathcal{S}$-model of $\phi$}. For two formul{\ae} $\phi, \psi \in \seplogk{\mathfrak{K}}$, we write $\phi \models_\mathcal{S} \psi$ iff every $\mathcal{S}$-model of $\phi$ is an $\mathcal{S}$-model of $\psi$. \end{definition} Note that, if $(\mathfrak{s},\mathfrak{h}) \models_\mathcal{S} \phi$, then $\mathrm{dom}(\mathfrak{s})$ might have to contain constants that do not occur in $\phi$. For instance if $p(x) \Leftarrow_\mathcal{S} x \mapsto \mathsf{a}$ is the only rule with head $p(x)$, then any $\mathcal{S}$-model $(\mathfrak{s},\mathfrak{h})$ must map $\mathsf{a}$ to some location, which is taken care of by the assumption $\mathbb{C} \subseteq \mathrm{dom}(\mathfrak{s})$, that applies to any store. 
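Step-unfolding replaces one predicate atom with an instantiated rule body. A minimal Python sketch of this operation, restricted to quantifier-free rule bodies whose bound variables are already renamed apart; the tuple representation and the example rule set are ours.

```python
def subst_atom(a, sub):
    """Apply a variable-to-term substitution to one atom."""
    if a[0] == 'pto':
        return ('pto', sub.get(a[1], a[1]),
                tuple(sub.get(t, t) for t in a[2]))
    if a[0] == 'pred':
        return ('pred', a[1], tuple(sub.get(t, t) for t in a[2]))
    return (a[0], sub.get(a[1], a[1]), sub.get(a[2], a[2]))  # (dis)equality

def step_unfoldings(phi, rules):
    """All step-unfoldings of phi, a list of atoms where a predicate
    atom is ('pred', p, args).  rules maps each predicate name to a
    list of (params, body) pairs, body being again a list of atoms."""
    out = []
    for i, a in enumerate(phi):
        if a[0] != 'pred':
            continue
        _, p, args = a
        for params, body in rules[p]:
            sub = dict(zip(params, args))
            inst = [subst_atom(b, sub) for b in body]
            out.append(phi[:i] + inst + phi[i + 1:])
    return out

# example rules: p(x) <= x -> nil  |  x -> v1 * p(v1)   (v1 renamed apart)
RULES = {'p': [(('x',), [('pto', 'x', ('nil',))]),
               (('x',), [('pto', 'x', ('v1',)),
                         ('pred', 'p', ('v1',))])]}
```

Iterating `step_unfoldings` until no predicate atom remains yields the predicate-free unfoldings used by the semantics above.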
\begin{definition}[Entailment]\label{def:entailment} Given symbolic heaps $\phi, \psi_1, \ldots, \psi_n$, such that $\phi$ is quantifier-free and $\fv{\phi} = \fv{\psi_1} = \ldots = \fv{\psi_n} = \emptyset$, the \emph{sequent} $\phi \vdash \psi_1, \ldots, \psi_n$ is \emph{valid for $\mathcal{S}$} iff $\phi \models_\mathcal{S} \bigvee_{i=1}^n \psi_i$. An \emph{entailment problem} $\mathcal{P} = (\mathcal{S},\Sigma)$ consists of a set of rules $\mathcal{S}$ and a set $\Sigma$ of sequents, asking whether each sequent in $\Sigma$ is valid for $\mathcal{S}$. \end{definition} Note that we consider entailments between formul{\ae} without free variables. This is not restrictive, since any free variable can be replaced by a constant from $\mathbb{C}$, with no impact on the validity status or the computational complexity of the problem. We silently assume that $\mathbb{C}$ contains enough constants to allow this replacement. For conciseness, we write $\phi \vdash_\mathcal{P} \psi_1,\ldots,\psi_n$ for $\phi \vdash \psi_1,\ldots,\psi_n \in \Sigma$, where $\Sigma$ is the set of sequents of $\mathcal{P}$.
The following example shows an entailment problem asking whether the concatenation of two acyclic lists is again an acyclic list: \begin{example}\label{ex:acyclic-lists} The entailment problem below consists of four rules, defining the predicates $\mathsf{ls}(x,y)$ and $\mathsf{sls}(x,y,z)$, respectively, and two sequents: \vspace*{-.5\baselineskip} \[\begin{array}{rcl} \mathsf{ls}(x,y) & \Leftarrow & x \mapsto y * x \not\seq y \mid \exists v ~.~ x \mapsto v * \mathsf{ls}(v,y) * x \not\seq y \\ \mathsf{sls}(x,y,z) & \Leftarrow & x \mapsto y * x \not\seq y * x \not\seq z \mid \exists v ~.~ x \mapsto v * \mathsf{sls}(v,y,z) * x \not\seq y * x \not\seq z \\ \mathsf{ls}(a,b) * \mathsf{ls}(b,c) & \vdash & \exists x ~.~ a \mapsto x * \mathsf{ls}(x,c) * a \not\seq c \hspace*{5mm} \mathsf{sls}(a,b,c) * \mathsf{ls}(b,c) \vdash \exists x ~.~ a \mapsto x * \mathsf{ls}(x,c) * a \not\seq c \\[-2mm] \end{array}\] Here $\mathsf{ls}(x,y)$ describes non-empty acyclic list segments with head and tail pointed to by $x$ and $y$, respectively. The first sequent is invalid, because $c$ can be allocated within the list segment defined by $\mathsf{ls}(a,b)$, in which case the entire list has a cycle starting and ending with the location associated with $c$. To avoid the cycle, the left-hand side of the second sequent uses the predicate $\mathsf{sls}(x,y,z)$ describing an acyclic list segment from $x$ to $y$ that skips the location pointed to by $z$. The second sequent is valid. \hfill$\blacksquare$ \end{example} The complexity analysis of the decision procedure described in this paper relies on two parameters. First, the \emph{width} of an entailment problem $\mathcal{P}=(\mathcal{S},\Sigma)$ is (roughly) the maximum among the sizes of the symbolic heaps occurring in $\mathcal{P}$ and the number of constants in $\mathbb{C}$. 
Second, the \emph{size} of the entailment problem is (roughly) the number of symbols needed to represent it, namely: \vspace*{-0.5\baselineskip} \[\begin{array}{rcl} \probwidth{\mathcal{P}} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \max \big(\{\size{\rho} + \#p \mid p(x_1,\ldots,x_{\#p}) \Leftarrow_\mathcal{S} \rho\} \cup \{\size{\psi_i} \mid \psi_0 \vdash_\mathcal{P} \psi_1,\ldots,\psi_n \} \cup \set{\card{\mathbb{C}}} \big) \\ \size{\mathcal{P}} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \sum_{p(x_1,\ldots,x_{\#p}) ~\Leftarrow_\mathcal{S} ~\rho} (\size{\rho}+\#p) + \sum_{\psi_0 ~\vdash_\mathcal{P}~ \psi_1,\ldots,\psi_n} \sum_{i=1}^n \size{\psi_i} \end{array}\] In the next section we give a transformation of entailment problems whose time complexity is bounded by the product of the size and a simple exponential of the width of the input and which, moreover, increases the width of the problem by a polynomial factor only. The latter is instrumental in proving the final $2$-$\mathsf{EXPTIME}$\ upper bound on the complexity of the entailment problem. To alleviate the upcoming technical details, we make the following assumption: \begin{assumption}\label{ass:dist-const} Distinct constants are always associated with distinct locations: for all stores $\mathfrak{s}$, and for all $c,d \in \mathbb{C}$, we have $c \not = d$ only if $\mathfrak{s}(c) \neq \mathfrak{s}(d)$. \end{assumption} This assumption loses no generality, because one can enumerate all the equivalence relations on $\mathbb{C}$ and test the entailments separately for each of these relations, by replacing all the constants in the same class by a unique representative\footnote{The replacement must be performed also within the inductive rules, not only in the considered formul{\ae}.}, while assuming that constants in distinct classes are mapped to distinct locations.
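The enumeration of equivalence relations on $\mathbb{C}$ amounts to enumerating set partitions, of which there are Bell-number many. A small self-contained Python sketch of this enumeration (a helper of our own, not part of the paper's procedure):

```python
def partitions(elems):
    """Yield all partitions of the list elems, each as a list of blocks.
    Every partition induces one equivalence relation on the constants:
    constants in the same block are merged into one representative,
    while distinct blocks are assumed to denote distinct locations."""
    if not elems:
        yield []
        return
    head, rest = elems[0], elems[1:]
    for p in partitions(rest):
        # put head into each existing block in turn ...
        for i in range(len(p)):
            yield p[:i] + [[head] + p[i]] + p[i + 1:]
        # ... or into a fresh singleton block
        yield p + [[head]]
```

The number of partitions of an $n$-element set is the Bell number $B_n$ (e.g. $B_3 = 5$), which is indeed within the $2^{\bigO(n \log n)}$ bound used in the complexity argument.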
The overall complexity of the procedure is still doubly exponential, since the number of such equivalence relations is bounded by the number of partitions of $\mathbb{C}$, that is $2^{\bigO(\card{\mathbb{C}}\cdot\log\card{\mathbb{C}})} = 2^{\bigO(\probwidth{\mathcal{P}}\cdot\log\probwidth{\mathcal{P}})}$, for any entailment problem $\mathcal{P}$. Thanks to Assumption \ref{ass:dist-const}, the considered symbolic heaps can be, moreover, safely assumed not to contain atoms $c \bowtie d$, with $\bowtie \in \{ \bumpeq,\not\seq \}$ and $c,d \in \mathbb{C}$, since these atoms are either unsatisfiable or equivalent to $\mathsf{emp}$. \section{Decidable Classes of Entailments} \label{sec:dec-entailments} In general, the entailment problem (Definition \ref{def:entailment}) is undecidable and we refer the reader to \cite{DBLP:conf/atva/IosifRV14,AntonopoulosGorogiannisHaaseKanovichOuaknine14} for two different proofs. A first attempt to define a naturally expressive class of formul{\ae} with a decidable entailment problem was reported in \cite{IosifRogalewiczSimacek13}. The entailments considered in \cite{IosifRogalewiczSimacek13} involve sets of rules restricted by three conditions, recalled below in a slightly generalized form. First, the \emph{progress} condition requires that each rule adds to the heap exactly one location, associated either with a constant or with a designated parameter. Formally, we consider a mapping $\mathsf{root} : \mathbb{P} \rightarrow {\bf \mathbb{N}} \cup \mathbb{C}$, such that $\mathsf{root}(p) \in \interv{1}{\#p} \cup \mathbb{C}$, for each $p \in \mathbb{P}$. The term $\mathsf{root}(p(t_1,\ldots,t_{\#p}))$ denotes either $t_i$ if $\mathsf{root}(p) = i \in \interv{1}{\#p}$, or the constant $\mathsf{root}(p)$ itself if $\mathsf{root}(p) \in \mathbb{C}$.
The notation $\mathsf{root}(\alpha)$ is extended to points-to atoms $\alpha$ as $\mathsf{root}(t_0 \mapsto (t_1,\dots,t_\mathfrak{K})) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} t_0$. Second, the \emph{connectivity} condition requires that all locations added during an unfolding of a predicate atom form a set of connected trees (a forest) rooted in locations associated either with a parameter of the predicate or with a constant. \begin{definition}[Progress \& Connectivity]\label{def:progress-connectivity} A set of rules $\mathcal{S}$ is \emph{progressing} if each rule in $\mathcal{S}$ is of the form $p(x_1, \ldots, x_{\#p}) \Leftarrow \exists z_1 \ldots \exists z_m ~.~ \mathsf{root}(p(x_1,\ldots,x_{\#p})) \mapsto (t_1, \ldots, t_\mathfrak{K}) * \psi$, where $\psi$ contains no occurrences of points-to atoms. Moreover, $\mathcal{S}$ is \emph{connected} if $\mathsf{root}(q(u_1,\ldots,u_{\#q})) \in \{t_1, \ldots, t_\mathfrak{K}\} \cup \mathbb{C}$, for each predicate atom $q(u_1,\ldots,u_{\#q})$ occurring in $\psi$. An entailment problem $\mathcal{P} = (\mathcal{S},\Sigma)$ is progressing (connected) if $\mathcal{S}$ is progressing (connected). \end{definition} The progress and connectivity conditions can be checked in polynomial time by a syntactic inspection of the rules in $\mathcal{S}$, even if the $\mathsf{root}(.)$ function is not known a~priori. Note that this definition of connectivity is less restrictive than the definition from \cite{IosifRogalewiczSimacek13}, which asked for $\mathsf{root}(q(u_1,\ldots,u_{\#q})) \in \{t_1, \ldots, t_\mathfrak{K}\}$.
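With a given $\mathsf{root}(.)$ function, both conditions reduce to one pass over the rule bodies. A minimal Python sketch over our own tuple encoding of rules (quantifier prefixes are left implicit):

```python
def progressing_and_connected(rules, root, consts):
    """Syntactic check of progress and connectivity.  rules maps each
    predicate name to a list of (params, body); body atoms are
    ('pto', t0, fields), ('pred', q, args) or equational atoms.
    root maps each predicate to a parameter index or a constant;
    consts is the set of constants."""
    for p, defs in rules.items():
        for params, body in defs:
            ptos = [a for a in body if a[0] == 'pto']
            # progress: exactly one points-to atom per rule body,
            # allocating the root of the head
            if len(ptos) != 1:
                return False
            r = root[p]
            head_root = params[r] if isinstance(r, int) else r
            if ptos[0][1] != head_root:
                return False
            # connectivity: the root of every predicate atom in the
            # body is a field of that points-to atom, or a constant
            fields = set(ptos[0][2])
            for a in body:
                if a[0] == 'pred':
                    rq = root[a[1]]
                    qroot = a[2][rq] if isinstance(rq, int) else rq
                    if qroot not in fields and qroot not in consts:
                        return False
    return True

# the example set of rules discussed in the text:
# p(x) <= exists y . x -> y * p(y) * p(c)   |   p(x) <= x -> nil
EX_RULES = {'p': [(('x',), [('pto', 'x', ('y',)),
                            ('pred', 'p', ('y',)),
                            ('pred', 'p', ('c',))]),
                  (('x',), [('pto', 'x', ('nil',))])]}
EX_ROOT = {'p': 0}
```

In a full implementation the root mapping would itself be searched for, which remains polynomial since there are only $\#p + \card{\mathbb{C}}$ candidates per predicate.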
For instance, the set of rules $\{ \mathsf{p}(x) \Leftarrow \exists y~.~ x \mapsto y * \mathsf{p}(y) * \mathsf{p}(\mathsf{c}), \mathsf{p}(x) \Leftarrow x \mapsto \mathsf{nil} \}$, where $\mathsf{c} \in \mathbb{C}$, is progressing and connected (with $\mathsf{root}(\mathsf{p}) = 1$) in the sense of Definition \ref{def:progress-connectivity}, but not connected in the sense of \cite{IosifRogalewiczSimacek13}, because $\mathsf{c} \not\in \{y\}$. Note also that nullary predicate symbols are allowed, for instance $\mathsf{q}() \Leftarrow \mathsf{c} \mapsto \mathsf{nil}$ is progressing and connected (with $\mathsf{root}(\mathsf{q}) = \mathsf{c}$). Further, the entailment problem from Example \ref{ex:acyclic-lists} is both progressing and connected. Third, the \emph{establishment} condition is given below, in a form slightly extended from its original statement \cite{IosifRogalewiczSimacek13}: \begin{definition}[Establishment]\label{def:establishment} Given a set of rules $\mathcal{S}$, a symbolic heap $\exists x_1 \ldots \exists x_n ~.~ \phi$, where $\phi$ is quantifier-free, is $\mathcal{S}$-\emph{established} iff every $x_i$ for $i \in \interv{1}{n}$ is allocated in each predicate-free unfolding $\phi \unfold{\mathcal{S}}^* \varphi$. A set of rules $\mathcal{S}$ is \emph{established} if the body $\rho$ of each rule $p(x_1,\ldots,x_{\#p}) \Leftarrow_\mathcal{S} \rho$ is $\mathcal{S}$-established. An entailment problem $\mathcal{P} = (\mathcal{S},\Sigma)$ is established if $\mathcal{S}$ is established, and \emph{strongly established} if, moreover, $\phi_i$ is $\mathcal{S}$-established, for each sequent $\phi_0 \vdash_\mathcal{P} \phi_1,\ldots,\phi_n$ and each $i \in \interv{0}{n}$. \end{definition} For example, the entailment problem from Example \ref{ex:acyclic-lists} is strongly established.
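Establishment quantifies over all predicate-free unfoldings, but the set of parameters allocated in every unfolding of a predicate can be computed as a greatest fixpoint over the rules. A rough Python sketch of such a computation, under our own encoding; it ignores equality chains, so it only under-approximates allocatedness, and is meant as an illustration rather than the paper's decision procedure:

```python
def must_alloc(rules):
    """For every predicate p (with at least one rule), the set of
    parameter indices allocated in all predicate-free unfoldings of p,
    ignoring equality chains for brevity.  Start from 'all parameters'
    and shrink to the greatest fixpoint; an existential variable is
    established only if it always ends up allocated, which can be
    checked on top of this information."""
    alloc = {p: set(range(len(defs[0][0]))) for p, defs in rules.items()}
    changed = True
    while changed:
        changed = False
        for p, defs in rules.items():
            new = None
            for params, body in defs:
                got = set()  # parameters allocated via this rule body
                for a in body:
                    if a[0] == 'pto' and a[1] in params:
                        got.add(params.index(a[1]))
                    elif a[0] == 'pred':
                        for j in alloc[a[1]]:
                            if a[2][j] in params:
                                got.add(params.index(a[2][j]))
                new = got if new is None else new & got
            if new != alloc[p]:
                alloc[p], changed = new, True
    return alloc

# ls(x,y) <= x -> y   |   exists v . x -> v * ls(v,y)
LS = {'ls': [(('x', 'y'), [('pto', 'x', ('y',))]),
             (('x', 'y'), [('pto', 'x', ('v',)),
                           ('pred', 'ls', ('v', 'y'))])]}
```

For the list-segment rules above, only the first parameter is allocated in every unfolding, matching the intuition that `y` points past the end of the segment.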
\longVersionOnly{ \subsection*{E-restricted\xspace Entailments } \label{sec:restricted} } In this paper, we replace establishment with a new condition that, as we show, preserves the decidability and computational complexity of progressing, connected and established entailment problems. The new condition can be checked in time linear in the size of the problem. This condition, called \emph{equational restrictedness} (\emph{e-restrictedness\xspace}, for short), requires that each equational atom occurring in a formula involves at least one constant. We will show that the e-restrictedness\xspace condition is more general than establishment, in the sense that every established problem can be reduced to an equivalent e-restricted\xspace problem (Theorem \ref{thm:established-restricted}). Moreover, the class of structures defined using e-restricted\xspace symbolic heaps is a strict superset of the one defined by established symbolic heaps. \begin{definition}[E-restrictedness\xspace]\label{def:restricted} A symbolic heap $\phi$ is \emph{e-restricted\xspace} if, for every equational atom $t \bowtie u$ from $\phi$, where $\bowtie \in \set{\bumpeq,\not\seq}$, we have $\set{t,u} \cap \mathbb{C} \neq \emptyset$. A set of rules $\mathcal{S}$ is \emph{e-restricted\xspace} if the body $\rho$ of each rule $p(x_1,\ldots,x_{\#p}) \Leftarrow_\mathcal{S} \rho$ is e-restricted\xspace. An entailment problem $\mathcal{P}=(\mathcal{S},\Sigma)$ is \emph{e-restricted\xspace} if $\mathcal{S}$ is e-restricted\xspace and $\phi_i$ is e-restricted\xspace, for each sequent $\phi_0 \vdash_\mathcal{P} \phi_1,\ldots,\phi_n$ and each $i \in \interv{0}{n}$. \end{definition} For instance, the entailment problem from Example \ref{ex:acyclic-lists} is not e-restricted\xspace, because several rule bodies have disequalities between parameters, e.g.\ $\mathsf{ls}(x,y) \Leftarrow x \mapsto y * x \not\seq y$. 
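Unlike establishment, e-restrictedness\xspace is a purely local syntactic property: one pass over the equational atoms of each symbolic heap suffices. A minimal Python sketch over our tuple encoding of atoms:

```python
def e_restricted(body, consts):
    """Linear-time check of e-restrictedness on one symbolic heap,
    given as a list of atoms: every equational atom ('eq', t, u) or
    ('neq', t, u) must involve at least one constant from consts.
    Other atoms (points-to, predicate) are irrelevant to the check."""
    return all(a[1] in consts or a[2] in consts
               for a in body if a[0] in ('eq', 'neq'))
```

Applied to every rule body and every symbolic heap of a sequent, this yields the advertised linear-time check for the whole problem.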
However, the set of rules $\{ \mathsf{ls_c}(x) \Leftarrow x \mapsto \mathsf{c} * x \not\seq \mathsf{c}, \mathsf{ls_c}(x) \Leftarrow \exists y ~.~ x \mapsto y * \mathsf{ls_c}(y) * x \not\seq \mathsf{c} \}$, where $\mathsf{c} \in \mathbb{C}$ and $\mathsf{ls_c}$ is a new predicate symbol, denoting an acyclic list ending with $\mathsf{c}$, is e-restricted\xspace. Note that any atom $\mathsf{ls}(x,y)$ can be replaced by $\mathsf{ls_y}(x)$, provided that $y$ occurs free in a sequent and can be viewed as a constant. We show next that every established entailment problem (Definition \ref{def:establishment}) can be reduced to an e-restricted\xspace entailment problem (Definition \ref{def:restricted}). The transformation incurs an exponential blowup; however, as we show, the blowup is exponential only in the width and polynomial in the size of the input problem. This is to be expected, because checking e-restrictedness\xspace of a problem can be done in linear time, in contrast with checking establishment, which is at least co-$\mathsf{NP}$-hard \cite{JansenKatelaanMathejaNollZuleger17}. \longVersionOnly{ \section{Pre-Processing Step: Normalizing Entailments} \label{sec:normalize} } We begin by showing that each problem can be translated into an equivalent \emph{normalized} problem: \begin{definition}[Normalization]\label{def:normalized}\hfill \begin{compactenum}[(1)] % \item A symbolic heap $\exists \vec{x} ~.~ \psi \in \shk{\mathfrak{K}}$, where $\psi$ is quantifier-free, is \emph{normalized} iff for every atom $\alpha$ in $\psi$:\begin{compactenum}[a.]
% \item\label{it1:normalized} if $\alpha$ is an equational atom, then it is of the form $x \not\seq t$ ($t \not\seq x$), where $x \in \vec{x}$, % \item\label{it1bis:normalized} every variable $x \in \fv{\psi}$ occurs in a points-to or predicate atom of $\psi$, % \item\label{it2:normalized} if $\alpha$ is a predicate atom $q(t_1,\ldots,t_{\#q})$, then $\{t_1, \ldots, t_{\#q}\} \cap \mathbb{C} = \emptyset$ and $t_i \neq t_j$, for all $i \neq j \in \interv{1}{\#q}$. \end{compactenum} % \item A set of rules $\mathcal{S}$ is \emph{normalized} iff for each rule $p(x_1,\ldots,x_{\#p}) \Leftarrow_\mathcal{S} \rho$, the symbolic heap $\rho$ is normalized and, moreover: \begin{compactenum}[a.] % \item\label{it3:normalized} For every $i \in \interv{1}{\#p}$ and every predicate-free unfolding $p(x_1,\ldots,x_{\#p}) \unfold{\mathcal{S}}^* \varphi$, $\varphi$ contains a points-to atom $t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$, such that $x_i \in \{t_0,\ldots,t_\mathfrak{K}\}$. % \item\label{it4:normalized} There exist sets $\allocpar{p}{\mathcal{S}} \subseteq \interv{1}{\#p}$ and $\alloconst{p}{\mathcal{S}} \subseteq \mathbb{C}$ such that, for each predicate-free unfolding $p(x_1,\ldots,x_{\#p}) \unfold{\mathcal{S}}^* \varphi$: \begin{compactitem} \item $i \in \allocpar{p}{\mathcal{S}}$ iff $\varphi$ contains an atom $x_i \mapsto (t_1,\ldots,t_\mathfrak{K})$, for every $i\in \interv{1}{\#p}$, % \item $c \in \alloconst{p}{\mathcal{S}}$ iff $\varphi$ contains an atom $c \mapsto (t_1,\ldots,t_\mathfrak{K})$, for every $c\in \mathbb{C}$. % \end{compactitem} \item\label{it5:normalized} For every predicate-free unfolding $p(x_1,\ldots,x_{\#p}) \unfold{\mathcal{S}}^* \varphi$, if $\varphi$ contains an atom $t_0 \mapsto (t_1,\dots,t_\mathfrak{K})$ such that $t_0\in \mathbb{V} \setminus \{ x_1,\dots,x_{\#p} \}$, then $\varphi$ also contains atoms $t_0 \not \bumpeq c$, for every $c \in \mathbb{C}$. 
\end{compactenum} % \item An entailment problem $\mathcal{P} = (\mathcal{S},\Sigma)$ is \emph{normalized} if $\mathcal{S}$ is normalized and, for each sequent $\phi_0 \vdash_\mathcal{P} \phi_1,\ldots,\phi_n$, the symbolic heap $\phi_i$ is normalized, for each $i \in \interv{0}{n}$. \end{compactenum} \end{definition} The intuition behind Condition (\ref{it3:normalized}) is that no term can ``disappear'' while unfolding an inductive definition. Condition (\ref{it4:normalized}) states that the set of terms eventually allocated by a predicate atom is the same in all unfoldings. This allows us to define the set of symbols that occur freely in a symbolic heap $\phi$ and are necessarily allocated in every unfolding of $\phi$, provided that the set of rules is normalized: \begin{definition}\label{def:alloc} Given a normalized set of rules $\mathcal{S}$ and a symbolic heap $\phi \in \shk{\mathfrak{K}}$, the set $\alloc{\phi}{\mathcal{S}}$ is defined recursively on the structure of $\phi$: \vspace*{-.5\baselineskip} \[\begin{array}{rclcrcl} \alloc{t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})}{\mathcal{S}} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \set{t_0} && \alloc{p(t_1,\ldots,t_{\#p})}{\mathcal{S}} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \set{t_i \mid i \in \allocpar{p}{\mathcal{S}}} \\ \alloc{t_1 \bowtie t_2}{\mathcal{S}} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \emptyset,~ \bowtie \in \{\bumpeq,\not\seq\} &&&& \cup~ \alloconst{p}{\mathcal{S}} \\ \alloc{\phi_1 * \phi_2}{\mathcal{S}} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \alloc{\phi_1}{\mathcal{S}} \cup \alloc{\phi_2}{\mathcal{S}} && \alloc{\exists x ~.~ \phi_1}{\mathcal{S}} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \alloc{\phi_1}{\mathcal{S}} \setminus \set{x} \end{array}\] \end{definition} \begin{example} The rules $p(x,y) \Leftarrow \exists z ~.~ x \mapsto z * p(z,y) * x \not \bumpeq y$ and $p(x,y) \Leftarrow \exists z~.~ x \mapsto z$ are not
normalized, because they contradict Conditions (\ref{it1:normalized}) and (\ref{it3:normalized}) of Definition \ref{def:normalized}, respectively. A set $\mathcal{S}$ containing the rules $q(x,y) \Leftarrow \exists z~.~ x \mapsto y * q(y,z)$ and $q(x,y) \Leftarrow x \mapsto y$ is not normalized, because it is not possible to find a set $\allocpar{q}{\mathcal{S}}$ satisfying Condition (\ref{it4:normalized}). Indeed, if $2 \in \allocpar{q}{\mathcal{S}}$ then the required equivalence does not hold for the second rule (because it does not allocate $y$), and if $2 \not \in \allocpar{q}{\mathcal{S}}$ then it fails for the first one (since the predicate $q(y,z)$ allocates $y$). On the other hand, $\mathcal{S}' = \{ p(x,y) \Leftarrow \exists z~.~ x \mapsto z * p(z,y) * z \not \bumpeq x * z \not \bumpeq \mathsf{nil}, p(x,y) \Leftarrow x \mapsto y, q(x,y) \Leftarrow \exists z~.~ x \mapsto y * q(y,z) * z \not \bumpeq \mathsf{nil}$ , $q(x,y) \Leftarrow x \mapsto y * r(y), r(x) \Leftarrow x \mapsto \mathsf{nil}\}$ is normalized (assuming $\mathbb{C} = \{ \mathsf{nil} \}$), with $\allocpar{p}{\mathcal{S}'} = \allocpar{r}{\mathcal{S}'} =\{ 1 \}$, $\allocpar{q}{\mathcal{S}'} = \{ 1,2 \}$ and $\alloconst{\pi}{\mathcal{S}'} = \emptyset$, for all $\pi \in \set{p,q,r}$. Then $\alloc{p(x_1,x_2) * q(x_3,x_4) * r(x_5)}{\mathcal{S}'} = \{ x_1,x_3,x_4,x_5 \}$. \hfill$\blacksquare$ \end{example} The following lemma states that every entailment problem can be transformed into a normalized entailment problem, by a transformation that preserves e-restricted\xspace-ness and (strong) establishment. \begin{lemma}\label{lemma:normalized} An entailment problem $\mathcal{P}$ can be translated to an equivalent normalized problem $\mathcal{P}_n$, such that $\probwidth{\mathcal{P}_n} = \bigO(\probwidth{\mathcal{P}}^2)$ in time $\size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$. 
Further, $\mathcal{P}_n$ is e-restricted\xspace and (strongly) established if $\mathcal{P}$ is e-restricted\xspace and (strongly) established. \end{lemma} \optionalProof{Lemma \ref{lemma:normalized}}{sec:dec-entailments}{ Let $\mathcal{P} = (\mathcal{S},\Sigma)$ be an input entailment problem. We transform $\mathcal{P}$ in order to meet points (\ref{it1:normalized}), (\ref{it2:normalized}), (\ref{it3:normalized}), (\ref{it4:normalized}) and (\ref{it5:normalized}) of Definition \ref{def:normalized}, as follows. \noindent(\ref{it1:normalized}) First, we apply exhaustively, to each symbolic heap occurring in $\mathcal{P}$, the following transformations, for each term $t \in \mathbb{T}$: \begin{eqnarray} \exists x ~.~ x \bumpeq t * \phi & \leadsto & \phi[t/x] \label{eq:exists-subst} \\ t \bumpeq t * \phi & \leadsto & \phi \label{eq:equal-remove} \end{eqnarray} Note that, at this point, there are no equality atoms involving an existentially quantified variable (recall that equalities between constants can be dismissed since they are either trivially false or equivalent to $\mathsf{emp}$). We apply the following transformations, that introduce disequalities between the remaining existential variables and the rest of the terms. 
\begin{eqnarray} p(\vec{x}) & \Leftarrow & \exists x ~.~ \rho \leadsto \set{\begin{array}{rcl} p(\vec{x}) & \Leftarrow & \rho[t/x] \\ p(\vec{x}) & \Leftarrow & \exists x ~.~ \rho * x \not\seq t \end{array}} \label{eq:rule-split} \\[-1mm] && \begin{array}{l} \text{for all $t \in (\fv{\rho} \setminus \set{x}) \cup \mathbb{C}$, where $x \not\seq t$ does not occur in $\rho$} \end{array}\nonumber \\[2mm] \phi \vdash \psi_1,\ldots, \exists x ~.~ \psi_i, \ldots, \psi_n & \leadsto & \phi \vdash \psi_1,\ldots,\psi_{i-1},\psi_i[t/x],\exists x ~.~ x \not\seq t * \psi_i,\ldots,\psi_n\label{eq:rhs-split} \\[-1mm] && \begin{array}{l} \text{for all $t \in \mathbb{C}$, such that $x \not\seq t$ does not occur in $\psi_i$} \end{array}\nonumber \end{eqnarray} Let $\mathcal{P}_1 = (\mathcal{S}_1,\Sigma_1)$ be the result of applying the transformations (\ref{eq:exists-subst}-\ref{eq:rhs-split}) exhaustively. Because every transformation preserves the equivalence of rules and sequents, $\mathcal{P}_1$ is valid iff $\mathcal{P}$ is valid. Note that, by Definition \ref{def:entailment}, there are no free variables occurring in a sequent from $\Sigma$. Then the only remaining equality atoms $t \bumpeq u$ occurring in $\mathcal{P}_1$ must occur in a rule $p(x_1,\ldots,x_{\#p}) \Leftarrow_{\mathcal{S}_1} \rho$ and neither $t$ nor $u$ can be an existentially quantified variable, hence $t,u \in \{x_1,\ldots,x_{\#p}\} \cup \mathbb{C}$. Before proceeding further with Condition (\ref{it1:normalized}), we make sure that Condition (\ref{it2:normalized}) is satisfied. \noindent(\ref{it2:normalized}) Let $q(t_1,\ldots,t_{\#q})$ be a predicate atom occurring in a rule or a sequent from $\mathcal{P}_1$, where $t_1, \ldots, t_{\#q} \in \mathbb{T}$, and let $(t_{i_1}, \ldots, t_{i_m})$ be the subsequence obtained by removing the terms from the set $\{t_i \mid i \in \interv{1}{\#q},~ \exists j < i ~.~ t_i=t_j\} \cup \mathbb{C}$ from $(t_1, \ldots, t_{\#q})$. 
We consider a fresh predicate symbol $q_{i_1,\ldots,i_m}$, of arity $m$, with the new rules $q_{i_1,\ldots,i_m}(x_1,\ldots,x_m) \Leftarrow \rho\sigma$, for each rule $q(x_1,\ldots,x_{\#q}) \Leftarrow_\mathcal{S} \rho$, where the substitution $\sigma$ is defined such that, for all $j \in \interv{1}{\#q}$: \begin{compactitem} \item $\sigma(x_j) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} x_{i_\ell}$ if $t_j = t_{i_\ell}$, for some $\ell \in \interv{1}{m}$, % \item $\sigma(x_j) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} t_j$ if $t_j \in \mathbb{C}$, and % \item $\sigma(x_j) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} x_j$, otherwise. \end{compactitem} Note that the definition of the sequence $(t_{i_1}, \ldots, t_{i_m})$ guarantees that such a substitution exists and is unique. If the rule body obtained by applying the substitution $\sigma$ contains a disequality $t \not\seq t$, for some $t \in \mathbb{T}$, we eliminate the rule. Otherwise, we apply transformation (\ref{eq:equal-remove}) to the newly obtained rule to eliminate trivial equalities. Finally, we replace each occurrence of $q(t_1,\ldots,t_{\#q})$ in $\mathcal{P}_1$ with $q_{i_1,\ldots,i_m}(t_{i_1}, \ldots, t_{i_m})$. Because $q(t_1,\ldots,t_{\#q})$ and $q_{i_1,\ldots,i_m}(t_{i_1},\ldots,t_{i_m})$ have the same step unfoldings, they have the same predicate-free unfoldings and this transformation preserves equivalence, yielding a problem that satisfies condition (\ref{it2:normalized}). Let $\mathcal{P}_2 = (\mathcal{S}_2, \Sigma_2)$ be the outcome of this transformation, where $\mathcal{S}_2$ is the set of newly introduced rules and $\Sigma_2$ is obtained from $\Sigma_1$ by the replacement of each predicate atom $q(t_1,\ldots,t_{\#q})$ with $q_{i_1,\ldots,i_m}(t_{i_1}, \ldots, t_{i_m})$. It is easy to check that $\mathcal{P}_2$ and $\mathcal{P}_1$ have the same validity status, which is that of $\mathcal{P}$.
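The subsequence $(t_{i_1},\ldots,t_{i_m})$ and the substitution $\sigma$ used in this step can be computed in a single pass over the argument tuple. A minimal Python sketch, where the encoding and names are ours:

```python
def specialize(params, args, consts):
    """Given the formal parameters of q and the actual arguments of an
    atom q(t1,...,t_#q), return (kept_args, sigma):
      kept_args -- the subsequence of args with duplicates and
                   constants removed (the arguments of q_{i1..im}),
      sigma     -- the substitution applied to each rule body of q,
                   mapping every parameter either to the parameter
                   standing for the first occurrence of its argument,
                   or to the constant itself."""
    kept, first = [], {}   # first occurrence index of each argument
    for i, t in enumerate(args):
        if t not in consts and t not in first:
            first[t] = i
            kept.append(t)
    sigma = {}
    for i, t in enumerate(args):
        sigma[params[i]] = t if t in consts else params[first[t]]
    return kept, sigma
```

For instance, the atom $q(a, a, \mathsf{nil})$ keeps only the argument $a$, and $\sigma$ collapses the second parameter onto the first and replaces the third by the constant.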
\noindent(\ref{it1:normalized}) We will now finish the proof of Condition (\ref{it1:normalized}). Since the transformation (\ref{eq:exists-subst}) removes equalities involving an existentially quantified variable and the equalities between constants can be eliminated as explained above, the only equalities that occur in the body of a rule $p(x_1, \ldots, x_{\#p}) \Leftarrow_{\mathcal{S}_2} \rho$ are of the form $x_i \bumpeq t$, where $i \in \interv{1}{\#p}$ and $t \in \{x_j \mid j \in \interv{1}{\#p}, j \neq i\} \cup \mathbb{C}$. We show that if such an equality occurs in the body of a rule, then this rule can safely be removed because any unfolding involving it generates an unsatisfiable symbolic heap. Let $p(u_1, \ldots, u_{\#p})$ be a predicate atom that occurs in some unfolding of a symbolic heap from $\mathcal{P}$ and assume a step-unfolding that substitutes $p(u_1, \ldots, u_{\#p})$ with $\rho[u_1/x_1, \ldots, u_{\#p}/x_{\#p}]$. We distinguish two cases: \begin{compactenum}[(i)] \item\label{it1:1:normalized} $t = x_j$, for some $j \in \interv{1}{\#p} \setminus \set{i}$: by point (\ref{it2:normalized}), $u_i$ and $u_j$ must be distinct terms. If $u_i, u_j \in \mathbb{C}$, then $u_i \not\seq u_j$ necessarily holds, by Assumption \ref{ass:dist-const}, thus the equality $x_i \bumpeq t$ is false when $x_i,x_j$ are instantiated by $u_i,u_j$. Otherwise, if $u_i \in \mathbb{V}$ (the case $u_j \in \mathbb{V}$ is symmetric) then $u_i$ and $u_j$ were necessarily introduced by existential quantifiers, in which case the disequality $u_i \not\seq u_j$ has been asserted by transformations (\ref{eq:rule-split}) or (\ref{eq:rhs-split}), thus $x_i \bumpeq t$ is false when $x_i$ is replaced by $u_i$. % \item\label{it1:2:normalized} $t \in \mathbb{C}$: by a similar argument we show that all the relevant instances of the equality $x_i \bumpeq t$ are unsatisfiable. \end{compactenum} Consequently, if an equality occurs in a rule, then the rule can safely be removed.
\noindent(\ref{it1bis:normalized}) To ensure that all variables occur within a points-to or predicate atom, we apply exhaustively the following transformation to each symbolic heap in the problem: \begin{equation}\label{eq:remove-diseq} \exists x ~.~ \Asterisk_{i=1}^n x \not\seq t_i * \psi \leadsto \psi \text{, if $x \not\in \fv{\psi}$} \end{equation} Let $\mathcal{P}_3=(\mathcal{S}_3,\Sigma_2)$ be the outcome of this transformation. Because $\mathbb{L}$ is infinite, any formula $\exists x ~.~ \Asterisk_{i=1}^n x \not\seq t_i$ is equivalent to $\mathsf{emp}$. Consequently, $\mathcal{P}_3$ and $\mathcal{P}_2$ have the same validity status as $\mathcal{P}$ and $\mathcal{P}_3$ satisfies conditions (\ref{it1:normalized}), (\ref{it1bis:normalized}) and (\ref{it2:normalized}). \noindent(\ref{it3:normalized}+\ref{it4:normalized}) For each predicate symbol $p$ that occurs in $\mathcal{S}_3$, we consider the predicate symbols $p_{X,Y,Z,A,B,C}$, of arities $\#p$ each, where $(X,Y,Z)$ is a partition of $\interv{1}{\#p}$ and $(A,B,C)$ is a partition of $\mathbb{C}$, along with the following rules: $p_{X,Y,Z,A,B,C}(x_1,\ldots,x_{\#p}) \Leftarrow \rho'$ if and only if $p(x_1, \ldots, x_{\#p}) \Leftarrow_{\mathcal{S}_3} \rho$ and $\rho'$ is obtained from $\rho$ by replacing each predicate atom $q(t_1,\ldots,t_{\#q})$ by a predicate atom $q_{X',Y',Z',A',B',C'}(t_1,\ldots,t_{\#q})$, for some partition $(X',Y',Z')$ of $\interv{1}{\#q}$ and some partition $(A',B',C')$ of $\mathbb{C}$, such that the following holds. 
For each $i \in \interv{1}{\#p}$: \begin{compactitem} % \item $i \in X$ iff either a points-to atom $x_i \mapsto (t_1,\ldots,t_\mathfrak{K})$ occurs in $\rho$, or $\rho$ contains a predicate atom $r_{X'', Y'', Z'', A'', B'', C''}(t_1, \ldots, t_{\#r})$ such that $x_i=t_j$ and $j \in X''$, % \item $i \in Y$ iff either $x_i \in \{t_1,\ldots,t_\mathfrak{K}\}$ for a points-to atom $t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$ occurring in $\rho$, or $\rho$ contains a predicate atom $r_{X'', Y'', Z'', A'', B'', C''}(t_1, \ldots, t_{\#r})$ such that $x_i=t_j$ and $j \in Y''$. \end{compactitem} Further, for each constant $c \in \mathbb{C}$: \begin{compactitem} % \item $c \in A$ iff a points-to atom $c \mapsto (t_1, \ldots, t_\mathfrak{K})$ occurs in $\rho$ or $\rho$ contains a predicate atom $r_{X'', Y'', Z'', A'', B'', C''}(t_1, \ldots, t_{\#r})$ such that $c \in A''$, % \item $c \in B$ iff either $c \in \{t_1, \ldots, t_\mathfrak{K}\}$, for a points-to atom $t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$ occurring in $\rho$ or $\rho$ contains a predicate atom $r_{X'', Y'', Z'', A'', B'', C''}(t_1, \ldots, t_{\#r})$ such that $c \in B''$. \end{compactitem} Let $\Sigma_4$ (resp.\ $\mathcal{S}_4$) be the set of sequents (resp.\ rules) obtained by replacing each predicate atom $p(t_1,\ldots,t_{\#p})$ with $p_{X,Y,Z,A,B,C}(t_1,\ldots,t_{\#p})$, for some partition $(X,Y,Z)$ of $\interv{1}{\#p}$ and some partition $(A,B,C)$ of $\mathbb{C}$.
For each predicate symbol $p_{X,Y,Z,A,B,C}$ we consider a fresh predicate symbol $\overline{p}_{X,Y,A,B}$, of arity $\#\overline{p} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \#p - \card{Z}$, and each predicate atom $p_{X,Y,Z,A,B,C}(t_1,\ldots,t_{\#p})$ occurring in either $\mathcal{S}_4$ or $\Sigma_4$ is replaced by $\overline{p}_{X,Y,A,B}(t_{i_1}, \ldots, t_{i_m})$, where $t_{i_1}, \ldots, t_{i_m}$ is the subsequence of $t_1,\ldots,t_{\#p}$ obtained by removing the terms from $\set{t_i \mid i \in Z}$ and each atom involving these terms is removed from $\mathcal{S}_4$ and $\Sigma_4$. Let the result of this transformation be denoted by $\mathcal{P}_n = (\mathcal{S}_n,\Sigma_n)$, with $\allocpar{\overline{p}_{X,Y,A,B}}{\mathcal{S}_n} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} X$ and $\alloconst{\overline{p}_{X,Y,A,B}}{\mathcal{S}_n} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} A$. Properties \ref{it3:normalized} and \ref{it4:normalized} follow from the definition of the rules of $\overline{p}_{X,Y,A,B}$ by an easy induction on the length of the unfolding. The equivalence between the validity of $\mathcal{P}_n$ and the validity of $\mathcal{P}_4$ is based on the following: \begin{fact}\label{fact:exist-redundant} Let $\phi$ be a symbolic heap occurring in a sequent from $\Sigma_4$, $\phi \unfold{\mathcal{S}_4}^* \psi$ be a predicate-free unfolding of $\phi$ and $p_{X,Y,Z,A,B,C}(t_1, \ldots, t_{\#p})$ be a predicate atom that occurs at some intermediate step of this predicate-free unfolding. Then each variable $t_i \in \fv{\psi}$, such that $i \in Z$, occurs existentially quantified in a subformula $\exists t_i ~.~ \Asterisk_{j=1}^n t_i \not\seq u_j$ of $\psi$ and nowhere else. \end{fact} \proof{ Since $\fv{\phi} = \emptyset$, it must be the case that $t_i$ has been introduced as an existentially quantified variable by an intermediate unfolding step.
We show, by induction on the length of the unfolding from the point where the variable was introduced, that $t_i$ cannot occur in a points-to atom. $\Box$} Since $\mathbb{L}$ is infinite, any formula $\exists x ~.~ \Asterisk_{j=1}^n x \not\seq u_j$ is trivially satisfied in any structure $(\mathfrak{s},\mathfrak{h})$, such that $\{u_1,\ldots,u_n\} \subseteq \mathrm{dom}(\mathfrak{s})$. By Fact \ref{fact:exist-redundant}, it follows that eliminating the terms $\set{t_i \mid i \in Z}$ from each predicate atom $p_{X,Y,Z,A,B,C}(t_1, \ldots, t_{\#p})$ preserves equivalence. \noindent(\ref{it5:normalized}) The exhaustive application of rules (\ref{eq:rule-split}) and (\ref{eq:rhs-split}), which add all possible disequalities between existentially quantified variables and constants, ensures that Condition (\ref{it5:normalized}) is satisfied. Consequently, $\mathcal{P}_n$ is normalized. Assume now that $\mathcal{P}$ is e-restricted\xspace, namely that each equational atom $t \bowtie u$ occurring in $\mathcal{P}$ is such that $\set{t,u} \cap \mathbb{C} \neq \emptyset$. Note that the transformations (\ref{eq:rule-split}) and (\ref{eq:rhs-split}) may introduce disequalities $x \not\seq t'$, where $x$ is an existentially quantified variable. In the case where $\mathcal{P}$ is e-restricted\xspace, we apply these rules only for $t' \in \mathbb{C}$. Suppose that, after applying rules (\ref{eq:exists-subst}-\ref{eq:equal-remove}) exhaustively, there exists some equality $t \bumpeq u$ in a rule, such that neither $t$ nor $u$ is an existentially quantified variable. But since $\mathcal{P}$ is e-restricted\xspace, $\set{t,u} \cap \mathbb{C} \neq \emptyset$ and this rule will be eliminated by the disequalities introduced by the modified versions of the transformations (\ref{eq:rule-split}) and (\ref{eq:rhs-split}).
Finally, if $\mathcal{P}$ is (strongly) established then $\mathcal{P}_n$ is (strongly) established, because the transformation does not introduce new existential quantifiers and preserves equivalence. Let us now compute the time complexity of the normalization procedure and the width of the output entailment problem. Observe that transformations (\ref{eq:exists-subst}--\ref{eq:rhs-split}) either instantiate existentially quantified variables or add or remove equalities, thus they can be applied $\bigO(\size{\mathcal{P}})$ times, increasing the width of the problem by at most $\bigO(\size{\mathcal{P}})$. After the exhaustive application of transformations (\ref{eq:exists-subst}--\ref{eq:rhs-split}), the number of rules in $\mathcal{S}$ and the number of sequents in $\Sigma$ have increased by a factor of $2^{\probwidth{\mathcal{P}}}$ and the width of the problem by a linear factor. Then $\size{\mathcal{P}_1} = \bigO(\size{\mathcal{P}}\cdot 2^{\probwidth{\mathcal{P}}})$ and $\probwidth{\mathcal{P}_1} = \bigO(\probwidth{\mathcal{P}})$. The transformation of step (\ref{it2:normalized}) increases the number of rules in $\mathcal{S}_1$ by a factor of $2^\alpha = 2^{\bigO(\probwidth{\mathcal{P}_1})} = 2^{\bigO(\probwidth{\mathcal{P}}^2)}$, where $\alpha = \max\{\#p \mid p(x_1,\ldots,x_{\#p}) \Leftarrow_{\mathcal{S}_1} \rho\} \leq \probwidth{\mathcal{P}}$ and does not change the width of the problem, i.e.\ $\size{\mathcal{P}_2} = \size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$ and $\probwidth{\mathcal{P}_2} = \bigO(\probwidth{\mathcal{P}}^2)$. Next, going from $\mathcal{P}_2$ to $\mathcal{P}_3$ does not increase the bounds on the size or width of the problem and we trivially obtain $\size{\mathcal{P}_3} = \size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$ and $\probwidth{\mathcal{P}_3} = \bigO(\probwidth{\mathcal{P}}^2)$.
Finally, going from $\mathcal{P}_3$ to $\mathcal{P}_4$ increases the size of the problem by a factor of $2^{3\alpha}\cdot 2^{3\card{\mathbb{C}}}$ and, because $\card{\mathbb{C}} \leq \probwidth{\mathcal{P}}$, by the definition of $\probwidth{\mathcal{P}}$, we obtain $\size{\mathcal{P}_n} = \size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$ and $\probwidth{\mathcal{P}_n} = \bigO(\probwidth{\mathcal{P}}^2)$. Moreover, the entire procedure has to be repeated for each partition $\mathfrak{C}$ of the set of constants $\mathbb{C}$. Since the number of partitions is $2^{\bigO(\card{\mathbb{C}} \cdot \log_2\card{\mathbb{C}})} = 2^{\bigO(\probwidth{\mathcal{P}} \cdot \log_2\probwidth{\mathcal{P}})}$, we obtain that the size of the result is $\size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$. Since the increase in the size of the output problem is mirrored by the time required to obtain it, the execution of the procedure takes time $\size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$.
$\Box$} \begin{example} The entailment problem $\mathcal{P} = (\mathcal{S},~ \set{p(\mathsf{a},\mathsf{b}) \vdash \exists x,y~.~ q(x,y)})$ with: \vspace*{-.5\baselineskip} \[ \mathcal{S} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \left\{\begin{tabular}{rclcrcl} $p(x,y)$ & $\Leftarrow$ & $\exists z ~.~ x \mapsto z * p(z,y) * x \not \bumpeq y$ & \quad & $q(x,y)$ & $\Leftarrow$ & $\exists z~.~ x \mapsto y * q(y,z) * z \not \bumpeq \mathsf{a} * z\not \bumpeq \mathsf{b}$ \\ $p(x,y)$ & $\Leftarrow$ & $\exists z~.~ x \mapsto z$ & \quad & $q(x,y)$ & $\Leftarrow$ & $x \mapsto y$ \\[-2mm] \end{tabular}\right\} \] may be transformed into $(\mathcal{S}',~ \set{p_1() \vdash \exists x,y~.~ q_1(x,y), \exists x,y~.~ q_2(x,y)})$, where: \vspace*{-.5\baselineskip} \[ \mathcal{S}' \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \left\{\begin{tabular}{rclcrcl} $p_1()$ & $\Leftarrow$ & $\exists z ~.~ \mathsf{a} \mapsto z * p_2(z) * z \not \bumpeq \mathsf{a} * z\not \bumpeq \mathsf{b}$ & $p_1()$ & $\Leftarrow$ & $\mathsf{a} \mapsto \mathsf{b} * p_3()$ \\ $p_1()$ & $\Leftarrow$ & $\exists z~.~ \mathsf{a} \mapsto z$ & $p_2(x)$ & $\Leftarrow$ & $x \mapsto \mathsf{b} * p_3()$ \\ $p_2(x)$ & $\Leftarrow$ & $\exists z ~.~ x \mapsto z * p_2(z) * z \not \bumpeq \mathsf{a} * z \not \bumpeq \mathsf{b}$ & $p_2(x)$ & $\Leftarrow$ & $\exists z~.~ x \mapsto z$ \\ $p_3()$ & $\Leftarrow$ & $\exists z~.~\mathsf{b} \mapsto z$ & $q_1(x,y)$ & $\Leftarrow$ & $\exists z~.~ x \mapsto y * q_1(y,z) * z \not \bumpeq \mathsf{a} * z\not \bumpeq \mathsf{b}$ \\ $q_1(x,y)$ & $\Leftarrow$ & $\exists z~.~ x \mapsto y * q_2(y,z) * z \not \bumpeq \mathsf{a} * z\not \bumpeq \mathsf{b}$ & $q_2(x,y)$ & $\Leftarrow$ & $x \mapsto y$ \\[-2mm] \end{tabular}\right\} \] The predicate atoms $p_1(), p_2(x)$ and $p_3()$ are equivalent to $p(\mathsf{a},\mathsf{b})$, $p(x,\mathsf{b})$ and $p(\mathsf{b},\mathsf{b})$, respectively. $q(x,y)$ is equivalent to $q_1(x,y) \vee q_2(x,y)$. 
Note that $p_2(x)$ is only used in contexts where $x \not\bumpeq \mathsf{b}$ holds, thus the disequality $x \not\bumpeq \mathsf{b}$ may be omitted from the rules of $p_2$. Recall that $\mathsf{a}$ and $\mathsf{b}$ are mapped to distinct locations, by Assumption \ref{ass:dist-const}. \hfill$\blacksquare$ \end{example} \longVersionOnly{ \section{Comparing E-restricted\xspace and Established Problems} \label{sec:established} } We show that every established problem $\mathcal{P}$ can be reduced to an e-restricted\xspace problem in time linear in the size and exponential in the width of the input, at the cost of a polynomial increase of its width: \additionalMaterial{Proof of Theorem \ref{thm:established-restricted}}{sec:dec-entailments}{app:established}{ \longVersionOnly{ First, we show that every established entailment problem can be reduced to a strongly established entailment problem (Definition \ref{def:establishment}) that is, moreover, normalized: } \begin{lemma}\label{lemma:strongly-established} Every established entailment problem $\mathcal{P} = (\mathcal{S},\Sigma)$ can be reduced in time $\size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$ to a normalized and strongly established entailment problem $\mathcal{P}_e$, such that $\probwidth{\mathcal{P}_e} = \bigO(\probwidth{\mathcal{P}}^2)$. \end{lemma} \proof{ First, we use Lemma \ref{lemma:normalized} to reduce $\mathcal{P}$ to an established normalized problem $\mathcal{P}_n = (\mathcal{S}_n,\Sigma_n)$ in time $\size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$, such that $\size{\mathcal{P}_n}=\size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$ and $\probwidth{\mathcal{P}_n}=\bigO(\probwidth{\mathcal{P}}^2)$.
Second, given a symbolic heap $\phi$ and a variable $x$, we define the set of symbolic heaps $\allocates{\phi}{x}$ recursively on the structure of $\phi$, as follows: \[\begin{array}{rcll} \allocates{t_1 \bowtie t_2}{x} & \ \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=}\ & \emptyset & \\ \allocates{t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})}{x} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \set{t_0 \mapsto (t_1,\ldots,t_\mathfrak{K}) * x \bumpeq t_0} \\ \allocates{p(t_1,\ldots,t_{\#p})}{x} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \set{\overline{p}(x,t_1,\ldots,t_{\#p})} \\ \allocates{\phi_1 * \phi_2}{x} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \bigcup_{i=1,2} \set{\phi_i * \psi \mid \psi \in \allocates{\phi_{3-i}}{x}} \end{array}\] where $\overline{p}$ is a fresh predicate symbol not occurring in $\mathcal{P}$, of arity $\#\overline{p} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \#p+1$ and the set of inductive rules is updated by replacing each rule $p(x_1,\ldots,x_{\#p}) \Leftarrow_\mathcal{S} \rho$ by the set of rules $\{\overline{p}(x_0,x_1,\ldots,x_{\#p}) \Leftarrow \psi \mid \psi \in \allocates{\rho}{x_0}\}$. It is straightforward to show by induction that if $(\mathfrak{s},\mathfrak{h})$ is a structure such that $(\mathfrak{s},\mathfrak{h}) \models_\mathcal{S} \psi$ for some $\psi \in \allocates{\phi}{x}$, then we have $\mathfrak{s}(x) \in \mathrm{dom}(\mathfrak{h})$. Observe that $\card{\allocates{\phi}{x}} \leq 2^{\size{\phi}}$ and $\size{\psi} = \bigO(\size{\phi})$, for each $\psi \in \allocates{\phi}{x}$. Let $\phi_0 \vdash_{\mathcal{P}_n} \phi_1, \ldots, \phi_n$ be a sequent from $\mathcal{P}_n$ and $(\mathfrak{s},\mathfrak{h})$ be a structure such that $(\mathfrak{s},\mathfrak{h}) \models_{\mathcal{S}_n} \phi_0$. By Definition \ref{def:entailment}, $\phi_0$ is quantifier-free. 
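The recursive definition of $\allocates{\phi}{x}$ above is directly executable; a minimal sketch, assuming symbolic heaps are encoded as tagged tuples with a binary \texttt{sep} node for the separating conjunction (a hypothetical encoding, not from the paper):

```python
def allocates(phi, x):
    """All symbolic heaps obtained from `phi` by forcing `x` to be allocated."""
    kind = phi[0]
    if kind in ("eq", "neq"):            # equational atoms contribute nothing
        return set()
    if kind == "pto":                    # t0 |-> (t1..tK): add the equality x = t0
        _, t0, _ts = phi
        return {("sep", phi, ("eq", x, t0))}
    if kind == "pred":                   # p(t1..tn) becomes pbar(x, t1..tn)
        _, p, ts = phi
        return {("pred", "bar_" + p, (x,) + tuple(ts))}
    if kind == "sep":                    # distribute over the separating conjunction
        _, p1, p2 = phi
        return ({("sep", p1, q) for q in allocates(p2, x)} |
                {("sep", p2, q) for q in allocates(p1, x)})
    raise ValueError("unknown atom: %r" % (kind,))
```

Each spatial atom of $\phi$ contributes one variant, consistent with the bound $\card{\allocates{\phi}{x}} \leq 2^{\size{\phi}}$ observed above.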
Assume that $\phi_1 = \exists x ~.~ \psi_1$ (the argument is repeated for all existential quantifiers occurring in $\phi_1, \ldots, \phi_n$). Note that, since $\mathcal{P}_n$ is normalized, $x$ occurs in a points-to or a predicate atom in $\phi_1$. This implies that $x$ necessarily occurs in a points-to atom in each symbolic heap $\varphi_1$ obtained by a predicate-free unfolding $\phi_1 \unfold{\mathcal{S}_n}^* \varphi_1$, by point (\ref{it3:normalized}) of Definition \ref{def:normalized}. Thus, $\mathfrak{s}'(x) \in \mathrm{loc}(\mathfrak{h})$, for each $x$-associate $\mathfrak{s}'$ of $\mathfrak{s}$ such that $(\mathfrak{s}',\mathfrak{h}) \models \psi_1$. Since $\mathcal{S}_n$ is established, each location from $\mathrm{loc}(\mathfrak{h})$ belongs to $\mathfrak{s}(\mathbb{C}) \cup \mathrm{dom}(\mathfrak{h})$, thus $\mathfrak{s}'(x) \in \mathfrak{s}(\mathbb{C}) \cup \mathrm{dom}(\mathfrak{h})$. Hence $\phi_1$ can safely be replaced by the set of symbolic heaps $\set{\psi_1[t/x] \mid t \in \mathbb{C}} \cup \set{\exists x ~.~ \varphi \mid \varphi \in \allocates{\psi_1}{x}}$. Applying this transformation to each existentially quantified variable occurring in a sequent from $\mathcal{P}_n$ yields a strongly established problem $\mathcal{P}'$. Moreover, the reduction of $\mathcal{P}_n$ to $\mathcal{P}'$ requires $\size{\mathcal{P}_n} \cdot 2^{\bigO(\probwidth{\mathcal{P}_n})} = \size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$ time and the width of the outcome is $\probwidth{\mathcal{P}'} = \bigO(\probwidth{\mathcal{P}_n}) = \bigO(\probwidth{\mathcal{P}}^2)$. $\Box$}} \begin{theorem}\label{thm:established-restricted} Every established entailment problem $\mathcal{P} = (\mathcal{S}, \Sigma)$ can be reduced in time $\size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$ to a normalized and e-restricted\xspace problem $\mathcal{P}_r$, such that $\probwidth{\mathcal{P}_r} = \bigO(\probwidth{\mathcal{P}}^2)$.
\end{theorem} \optionalProof{Theorem \ref{thm:established-restricted}}{sec:dec-entailments}{ By Lemma \ref{lemma:strongly-established}, we can reduce $\mathcal{P}$ to a normalized strongly established entailment problem $\mathcal{P}_e = (\mathcal{S}_e, \Sigma_e)$ in time $\size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$, such that $\probwidth{\mathcal{P}_e} = \bigO(\probwidth{\mathcal{P}}^2)$. Let $\phi \unfold{\mathcal{S}_e}^* \varphi$ be an arbitrary predicate-free unfolding of a symbolic heap $\phi$ on the right-hand side of a sequent in $\Sigma_e$, where $\varphi = \exists x_1 \ldots \exists x_n ~.~ \psi$ and $\psi$ is quantifier-free. Because $\mathcal{P}_e$ is normalized, there are no equalities in $\psi$. Let $x \not\seq y$ be a disequality from $\psi$, where $\set{x,y} \cap \mathbb{C} = \emptyset$. By Definition \ref{def:entailment}, all variables from $\mathcal{P}_e$ are existentially quantified, thus it must be the case that $x,y \in \set{x_1,\ldots,x_n}$. Because $\mathcal{P}_e$ is strongly established, $\phi$ is $\mathcal{S}_e$-established, thus both $x$ and $y$ are allocated in $\psi$. Moreover, since there are no equalities in $\psi$, there must exist two distinct points-to atoms $x \mapsto (t_1,\ldots,t_\mathfrak{K})$ and $y \mapsto (u_1,\ldots,u_\mathfrak{K})$ in $\psi$ such that $(\mathfrak{s},\mathfrak{h}) \models_{\mathcal{S}_e} \phi$ implies $(\mathfrak{s}',\mathfrak{h}') \models_{\mathcal{S}_e} x \mapsto (t_1,\ldots,t_\mathfrak{K}) * y \mapsto (u_1,\ldots,u_\mathfrak{K})$, for any structure $(\mathfrak{s},\mathfrak{h})$, for some heap $\mathfrak{h}' \subseteq \mathfrak{h}$ and some $(x_1,\ldots,x_n)$-associate $\mathfrak{s}'$ of $\mathfrak{s}$. But then $(\mathfrak{s}',\emptyset) \models_{\mathcal{S}_e} x \not\seq y$ and, since the choice of the structure $(\mathfrak{s},\mathfrak{h})$ was arbitrary, we can remove any disequality $x \not\seq y$ such that $\set{x,y} \cap \mathbb{C} = \emptyset$ from $\mathcal{P}_e$.
This transformation takes time $\bigO(\size{\mathcal{P}_e}) = \size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$ and does not increase the width of the problem. The outcome is an e-restricted\xspace entailment problem. $\Box$} \shortVersionOnly{ \noindent The class of e-restricted\xspace problems is more general than the class of established problems, in the following sense: for each established problem $\mathcal{P}=(\mathcal{S},\Sigma)$, the treewidth of each $\mathcal{S}$-model of an $\mathcal{S}$-established symbolic heap $\phi$ is bounded by $\probwidth{\mathcal{P}}$ \cite{IosifRogalewiczSimacek13}, while e-restricted\xspace symbolic heaps may have infinite sequences of models with strictly increasing treewidth: \begin{example}\label{ex:grids} Consider the set of rules $\{\mathsf{lls}(x,y) \Leftarrow x \mapsto (y,\mathsf{nil}),~ \mathsf{lls}(x,y) \Leftarrow \exists z \exists v ~.~ x \mapsto (z,v) * \mathsf{lls}(z,y)\}$. The existentially quantified variable $v$ in the second rule is never allocated in any predicate-free unfolding of $\mathsf{lls}(a,b)$, thus the set of rules is not established. However, it is trivially e-restricted\xspace, because no equational atoms occur within the rules. Among the models of $\mathsf{lls}(a,b)$, there are all $n\times n$-square grid structures, known to have treewidth $n$, for $n > 1$ \cite{RobertsonSeymour84}. \hfill$\blacksquare$ \end{example} } \longVersionOnly{ As a concluding remark, we show that the class of e-restricted\xspace problems is more general than the class of established entailment problems, in the following sense. Let $\mathcal{P} = (\mathcal{S},\Sigma)$ be an established entailment problem. Each structure $(\mathfrak{s},\mathfrak{h})$ can be associated with a unique integer $\mathrm{tw}(\mathfrak{s},\mathfrak{h}) \geq 0$, called its \emph{treewidth}. The formal definition of the treewidth is given below, for reasons of self-containment, however the argument can be followed without it.
A \emph{labeled tree} is a graph $(N,E,\lambda)$, where $N$ is a finite set of nodes, $E \subseteq N \times N$ is an undirected edge relation and $\lambda : N \rightarrow 2^\mathbb{L}$ is a labeling function. Moreover, there is a unique node $r \in N$, such that for each node $n \in N \setminus \set{r}$ there exists a unique path from $r$ to $n$. A set of nodes $M \subseteq N$ is said to be \emph{connected} if there is a path between any two nodes in the set. \begin{definition}\label{def:tw} Given a structure $(\mathfrak{s},\mathfrak{h})$, a tree decomposition of $(\mathfrak{s},\mathfrak{h})$ is a labeled tree $T=(N,E,\lambda)$, such that: \begin{compactenum} \item for each $\ell \in \mathrm{loc}(\mathfrak{h})$, the set $\set{n \in N \mid \ell \in \lambda(n)}$ is nonempty and connected, % \item for each $\ell_1 \in \mathrm{dom}(\mathfrak{h})$ and $\ell_2 \in \mathfrak{h}(\ell_1)$, we have $\ell_1,\ell_2 \in \lambda(n)$, for some $n \in N$. \end{compactenum} The \emph{treewidth} of $T$ is $\mathrm{tw}(T) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \max\set{\card{\lambda(n)} \mid n \in N} - 1$ and the \emph{treewidth} of $(\mathfrak{s},\mathfrak{h})$ is $\mathrm{tw}(\mathfrak{s},\mathfrak{h}) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \min\{\mathrm{tw}(T) \mid T \text{ is a tree decomposition of } (\mathfrak{s},\mathfrak{h})\}$. \end{definition} As shown in \cite{IosifRogalewiczSimacek13}, the treewidth of each $\mathcal{S}$-model of an $\mathcal{S}$-established symbolic heap $\phi$ is bounded by $\probwidth{\mathcal{P}}$.
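The two conditions of Definition \ref{def:tw} can be checked mechanically on a candidate decomposition; a minimal sketch, assuming heaps are dictionaries from locations to tuples of successor locations (the tree-ness of $(N,E)$ itself is not verified here):

```python
def connected(nodes, edges):
    """Is the subgraph induced by `nodes` connected? (plain BFS)"""
    seen, todo = {nodes[0]}, [nodes[0]]
    while todo:
        n = todo.pop()
        for m in nodes:
            if m not in seen and frozenset((n, m)) in edges:
                seen.add(m)
                todo.append(m)
    return len(seen) == len(nodes)

def is_tree_decomposition(heap, nodes, edges, label):
    locs = set(heap) | {l for succ in heap.values() for l in succ}
    # condition 1: every location labels a nonempty, connected set of nodes
    for l in locs:
        holders = [n for n in nodes if l in label[n]]
        if not holders or not connected(holders, edges):
            return False
    # condition 2: every heap edge (l1, l2) is covered by some node label
    return all(any({l1, l2} <= label[n] for n in nodes)
               for l1, succ in heap.items() for l2 in succ)

def tw(nodes, label):
    """Width of the decomposition: maximal label size minus one."""
    return max(len(label[n]) for n in nodes) - 1
```

On a three-cell list segment, the obvious two-node decomposition passes the check with width $1$, whereas an $n \times n$ grid admits no decomposition of width below $n$.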
However, if $\mathcal{P}$ is e-restricted\xspace but not established, there can be infinitely many $\mathcal{S}$-models $(\mathfrak{s}_1, \mathfrak{h}_1), (\mathfrak{s}_2, \mathfrak{h}_2), \ldots$ of an e-restricted\xspace symbolic heap, such that $\mathrm{tw}(\mathfrak{s}_1, \mathfrak{h}_1) < \mathrm{tw}(\mathfrak{s}_2, \mathfrak{h}_2) < \ldots$, as shown by the example below: \begin{example}\label{ex:grids} Consider the following set of rules: \[\begin{array}{rcl} \mathsf{lls}(x,y) & \Leftarrow & x \mapsto (y,\mathsf{nil}) \\ \mathsf{lls}(x,y) & \Leftarrow & \exists z \exists v ~.~ x \mapsto (z,v) * \mathsf{lls}(z,y) \end{array}\] \begin{figure}[htb] \centerline{\input{grids.pdf_t}} \caption{\label{fig:grids}$n \times n$-square grid models of $\mathsf{lls}(a,b)$} \end{figure} The existentially quantified variable $v$ in the second rule is never allocated in any predicate-free unfolding of $\mathsf{lls}(a,b)$, thus the set of rules is not established. However, it is trivially e-restricted\xspace, because no equational atoms occur within the rules. Among the models of $\mathsf{lls}(a,b)$, there is an infinite set of $n\times n$-square grid structures depicted in Figure \ref{fig:grids}, for all $n > 1$. It is known that the treewidth of an $n \times n$-square grid is $n$ \cite{RobertsonSeymour84}.
\hfill$\blacksquare$ \end{example} } \section{Normal Structures} \label{sec:injective-normal} The decidability of e-restricted\xspace entailment problems relies on the fact that, to prove the validity of a sequent, it is sufficient to consider only a certain class of structures, called \emph{normal}, which require all variables that are not mapped to the location of a constant to be mapped to pairwise distinct locations: \begin{definition}\label{def:normal} A structure $(\mathfrak{s},\mathfrak{h})$ is a \emph{normal $\mathcal{S}$-model} of a symbolic heap $\phi$ iff there exist:\begin{compactenum} \item\label{it1:normal} a predicate-free unfolding $\phi \unfold{\mathcal{S}} \exists \vec{x} ~.~ \psi$, where $\psi$ is quantifier-free, and % \item\label{it2:normal} an $\vec{x}$-associate $\overline{\astore}$ of $\mathfrak{s}$, such that $(\overline{\astore},\mathfrak{h}) \models_\mathcal{S} \psi$ and $\overline{\astore}(x)=\overline{\astore}(y) \wedge x \neq y \Rightarrow \overline{\astore}(x) \in \mathfrak{s}(\mathbb{C})$, for all $x,y \in \fv{\psi}$. \end{compactenum} \end{definition} \begin{example} \label{ex:normal} Consider the formula $\varphi = p(x_1) * p(x_2)$, with $p(x) \Leftarrow_{\mathcal{S}} \exists z~.~ x \mapsto z$ and $\mathbb{C} = \{ \mathsf{a} \}$. Then the structures $(\mathfrak{s},\mathfrak{h})$ and $(\mathfrak{s},\mathfrak{h}')$, with $\mathfrak{s} = \{ (x_1,\ell_1),(x_2,\ell_2), (\mathsf{a},\ell_3) \}$, $\mathfrak{h} = \{ (\ell_1,\ell_3), (\ell_2,\ell_3) \}$ and $\mathfrak{h}' = \{ (\ell_1,\ell_4), (\ell_2,\ell_5) \}$, are normal models of $\varphi$. On the other hand, if $\mathfrak{h}'' = \{ (\ell_1,\ell_4), (\ell_2,\ell_4) \}$ (with $\ell_4 \not = \ell_3$) then $(\mathfrak{s},\mathfrak{h}'')$ is a model of $\varphi$ but it is not normal, because any associate of $\mathfrak{s}$ will map the existentials from the predicate-free unfolding of $p(x_1) * p(x_2)$ into the same location, different from $\mathfrak{s}(\mathsf{a})$.
\hfill$\blacksquare$ \end{example} Since the left-hand side symbolic heap $\phi$ of each sequent $\phi \vdash \psi_1, \ldots, \psi_n$ is quantifier-free and has no free variables (Definition \ref{def:entailment}) and moreover, by Assumption \ref{ass:dist-const}, every constant is associated with a distinct location, to check the validity of a sequent it is enough to consider only structures with injective stores. We say that a structure $(\dot{\astore},\mathfrak{h})$ is \emph{injective} if the store $\dot{\astore}$ is injective. As a syntactic convention, by stacking a dot on the symbol denoting the store, we mean that the store is injective. \longVersionOnly{Moreover, we denote by $\phi \imodels{\mathcal{S}} \psi$ the fact that each injective $\mathcal{S}$-model of $\phi$ is a model of $\psi$.} \additionalMaterial{Additional Material for Normal Structures}{sec:injective-normal}{app:injective-normal}{ \longVersionOnly{ A natural question is: are normal models closed under the composition induced by the separating conjunction? If $(\mathfrak{s},\mathfrak{h})$ is a normal $\mathcal{S}$-model of $\phi_1 * \phi_2$, there exist normal $\mathcal{S}$-models $(\mathfrak{s}, \mathfrak{h}_i)$ of $\phi_i$, for $i=1,2$, such that $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$.
The converse is not true, in general, and requires further conditions: } \begin{definition}\label{def:companions} Given symbolic heaps $\phi_1,\phi_2 \in \shk{\mathfrak{K}}$, a pair of structures $\tuple{(\mathfrak{s}_1, \mathfrak{h}_1),(\mathfrak{s}_2, \mathfrak{h}_2)}$ is a \emph{normal $\mathcal{S}$-companion for $(\phi_1,\phi_2)$} iff $(\mathfrak{s}_i, \mathfrak{h}_i)$ is a normal $\mathcal{S}$-model of $\phi_i$, for $i=1,2$ and: \begin{compactenum} % \item\label{it1:def:companions} $\overline{\astore}_1(t) = \overline{\astore}_2(t)$, for each term $t \in \fv{\psi_1} \cap \fv{\psi_2} \cup \mathbb{C}$, % \item\label{it2:def:companions} $\overline{\astore}_i(\vec{x}_i) \cap \overline{\astore}_{3-i}(\fv{\psi_{3-i}}) \subseteq \overline{\astore}_i(\mathbb{C})$, for $i=1,2$, \end{compactenum} where $\phi_i \unfold{\mathcal{S}}^* \exists \vec{x}_i ~.~ \psi_i$ are the predicate-free unfoldings and $\overline{\astore}_i$ is the $\vec{x}_i$-associate of $\mathfrak{s}_i$ satisfying conditions (\ref{it1:normal}) and (\ref{it2:normal}) of Definition \ref{def:normal}, for $i=1,2$, respectively. The normal $\mathcal{S}$-companion $\tuple{(\mathfrak{s}_1, \mathfrak{h}_1),(\mathfrak{s}_2, \mathfrak{h}_2)}$ is, moreover, \emph{injective} iff $\mathfrak{s}_1$ and $\mathfrak{s}_2$ are injective and $\mathfrak{s}_1(\fv{\phi_1} \setminus \fv{\phi_2}) \cap \mathfrak{s}_2(\fv{\phi_2} \setminus \fv{\phi_1}) = \emptyset$. \end{definition} \begin{lemma}\label{lemma:companions} Given symbolic heaps $\phi_1,\phi_2 \in \shk{\mathfrak{K}}$, a structure $(\mathfrak{s},\mathfrak{h})$ is a (injective) normal $\mathcal{S}$-model of $\phi_1 * \phi_2$ iff there exists a (injective) normal $\mathcal{S}$-companion $\tuple{(\mathfrak{s}_1, \mathfrak{h}_1), (\mathfrak{s}_2,\mathfrak{h}_2)}$ for $(\phi_1, \phi_2)$, such that $\mathfrak{h}=\mathfrak{h}_1\uplus\mathfrak{h}_2$. \end{lemma} \proof{``$\Rightarrow$'' Let $(\mathfrak{s},\mathfrak{h})$ be a normal $\mathcal{S}$-model of $\phi_1 * \phi_2$. 
Then there exists a predicate-free unfolding $\phi_1 * \phi_2 \unfold{\mathcal{S}}^* \exists \vec{x}_1 ~.~ \psi_1 * \exists \vec{x}_2 ~.~ \psi_2$ such that $\psi_1$ and $\psi_2$ are quantifier-free and $(\mathfrak{s},\mathfrak{h}) \models \exists \vec{x}_1 ~.~ \psi_1 * \exists \vec{x}_2 ~.~ \psi_2$. By $\alpha$-renaming if necessary, we can assume that $\vec{x}_i \cap \fv{\psi_{3-i}} = \emptyset$, for $i=1,2$, thus $(\mathfrak{s},\mathfrak{h}) \models \exists \vec{x}_1 \exists \vec{x}_2 ~.~ \psi_1 * \psi_2$. Hence there exist an $(\vec{x}_1 \cup \vec{x}_2)$-associate $\overline{\astore}$ of $\mathfrak{s}$ and two disjoint heaps $\mathfrak{h}_1$ and $\mathfrak{h}_2$, such that $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$ and $(\overline{\astore},\mathfrak{h}_i) \models \psi_i$, for $i=1,2$. Let $\mathfrak{s}_i \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \mathfrak{s}$, for $i=1,2$, so that $\mathfrak{s}=\mathfrak{s}_1 \cup \mathfrak{s}_2$. By considering the $\vec{x}_i$-associate of $\mathfrak{s}$ defined as the restriction of $\overline{\astore}$ to $\vec{x}_i \cup \mathrm{dom}(\mathfrak{s})$ and using the fact that $(\mathfrak{s},\mathfrak{h})$ is a normal $\mathcal{S}$-model of $\phi_1 * \phi_2$, it is easy to check that $(\mathfrak{s}_i,\mathfrak{h}_i)$ is a normal $\mathcal{S}$-model of $\phi_i$. Further, points (\ref{it1:def:companions}) and (\ref{it2:def:companions}) of Definition \ref{def:companions} are easy checks. Finally, if $\mathfrak{s}$ is injective then trivially $\mathfrak{s}_1$ and $\mathfrak{s}_2$ are injective and $\mathfrak{s}_1(\term{\phi_1} \setminus \term{\phi_2}) \cap \mathfrak{s}_2(\term{\phi_2} \setminus \term{\phi_1}) = \mathfrak{s}(\term{\phi_1} \setminus \term{\phi_2}) \cap \mathfrak{s}(\term{\phi_2} \setminus \term{\phi_1}) = \emptyset$. 
\noindent``$\Leftarrow$'' If $(\mathfrak{s}_i, \mathfrak{h}_i)$ is a normal $\mathcal{S}$-model of $\phi_i$, then there exist predicate-free unfoldings $\phi_i \unfold{\mathcal{S}}^* \exists \vec{x}_i ~.~ \psi_i$ and $\vec{x}_i$-associates $\overline{\astore}_i$ of $\mathfrak{s}_i$, which satisfy points (\ref{it1:normal}) and (\ref{it2:normal}) of Definition \ref{def:normal}. By an $\alpha$-renaming if necessary, we assume that $\vec{x}_1 \cap \vec{x}_2 = \emptyset$. Then $\phi_1 * \phi_2 \unfold{\mathcal{S}}^* \exists \vec{x}_1 ~.~ \psi_1 * \exists \vec{x}_2 ~.~ \psi_2$ is a predicate-free unfolding. Let $\mathfrak{s}'_i$ and $\overline{\astore}'_i$ be the restrictions of $\mathfrak{s}_i$ and $\overline{\astore}_i$ to $\term{\phi_i}$ and $\term{\psi_i}$ for $i=1,2$, respectively. By point (\ref{it1:def:companions}) of Definition \ref{def:companions}, $\mathfrak{s} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \mathfrak{s}'_1 \cup \mathfrak{s}'_2$ is a well-defined store and, since $\vec{x}_1 \cap \vec{x}_2 = \emptyset$, we obtain that $\overline{\astore} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \overline{\astore}'_1 \cup \overline{\astore}'_2$ is a well-defined $(\vec{x}_1 \cup \vec{x}_2)$-associate of $\mathfrak{s}$. To show that $(\mathfrak{s},\mathfrak{h}_1 \uplus \mathfrak{h}_2)$ is a normal $\mathcal{S}$-model of $\phi_1 * \phi_2$, let $t_1, t_2 \in \term{\psi_1} \cup \term{\psi_2}$ be distinct terms such that $\overline{\astore}(t_1) = \overline{\astore}(t_2)$ and suppose, for a contradiction, that $\overline{\astore}(t_1) \not \in \overline{\astore}(\mathbb{C})$. Since $(\mathfrak{s}_i, \mathfrak{h}_i)$ is a normal $\mathcal{S}$-model of $\phi_i$, for $i=1,2$, the only interesting cases are $t_i \in \term{\psi_i} \setminus \term{\psi_{3-i}}$ and $t_i \in \term{\psi_{3-i}} \setminus \term{\psi_{i}}$. Assume $t_i \in \term{\psi_i} \setminus \term{\psi_{3-i}}$ for $i=1,2$, the other case is symmetric.
Since $t_i \not\in \cst{\psi_1 * \psi_2}$, it must be the case that $t_i \in \vec{x}_i$, for $i=1,2$. Then $\overline{\astore}_1(t_1)=\overline{\astore}(t_1)=\overline{\astore}(t_2)=\overline{\astore}_2(t_2)$, which contradicts point (\ref{it2:def:companions}) of Definition \ref{def:companions}. Finally, it is easy to check that $\mathfrak{s} = \mathfrak{s}'_1 \cup \mathfrak{s}'_2$ is injective, provided that $\mathfrak{s}_1$ and $\mathfrak{s}_2$ are injective and that $\mathfrak{s}_1(\term{\phi_1} \setminus \term{\phi_2}) \cap \mathfrak{s}_2(\term{\phi_2} \setminus \term{\phi_1}) = \mathfrak{s}'_1(\term{\phi_1} \setminus \term{\phi_2}) \cap \mathfrak{s}'_2(\term{\phi_2} \setminus \term{\phi_1}) = \emptyset$. $\Box$} The following lemma states an important property of normal $\mathcal{S}$-models, which will be used to build abstract composition operators, needed to define a finite-range abstraction of an infinite set of normal structures\longVersionOnly{ (see \S\ref{sec:composition})}. \begin{lemma}\label{lemma:frontier} Given symbolic heaps $\phi_1, \phi_2 \in \shk{\mathfrak{K}}$ and $\tuple{(\dot{\astore},\mathfrak{h}_1), (\dot{\astore},\mathfrak{h}_2)}$ an injective normal $\mathcal{S}$-companion for $(\phi_1,\phi_2)$, we have $\mathrm{Fr}(\mathfrak{h}_1,\mathfrak{h}_2) \subseteq \dot{\astore}\left(\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C}\right)$. \end{lemma} \proof{Let $\ell \in \mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) = \mathrm{loc}(\mathfrak{h}_1) \cap \mathrm{loc}(\mathfrak{h}_2)$ be a location, $\phi_i \unfold{\mathcal{S}}^* \exists \vec{x}_i ~.~ \psi_i$ be predicate-free unfoldings and $\overline{\astore}_i$ be the $\vec{x}_i$-associates of $\dot{\astore}$ that satisfy points (\ref{it1:def:companions}) and (\ref{it2:def:companions}) of Definition \ref{def:companions}, such that $(\overline{\astore}_i, \mathfrak{h}_i) \models \psi_i$, for $i=1,2$. By $\alpha$-renaming, if necessary, we assume w.l.o.g.
that $\vec{x}_i \cap \fv{\psi_{3-i}} = \emptyset$, for $i = 1,2$. Because $\ell \in \mathrm{loc}(\mathfrak{h}_i)$, there exist points-to atoms $t^i_0 \mapsto (t^i_1,\ldots,t^i_\mathfrak{K})$ in $\psi_i$, such that $\ell = \overline{\astore}_1(t^1_{i_1}) = \overline{\astore}_2(t^2_{i_2})$, for some $i_1, i_2 \in \interv{0}{\mathfrak{K}}$ and all $i=1,2$. We distinguish two cases: \begin{compactitem} % \item if $t^1_{i_1} \in \term{\phi_1}$ and $t^2_{i_2} \in \term{\phi_2}$, since $\overline{\astore}_i$ is a $\vec{x}_i$-associate of $\dot{\astore}$, $\overline{\astore}_i$ and $\dot{\astore}$ agree over $\term{\phi_i}$, for $i=1,2$, we obtain $\dot{\astore}(t^1_{i_1}) = \overline{\astore}_1(t^1_{i_1}) = \overline{\astore}_2(t^2_{i_2}) = \dot{\astore}(t^2_{i_2})$, thus $t^1_{i_1} = t^2_{i_2}$, because $\dot{\astore}$ is injective, hence $\ell \in \dot{\astore}(\term{\phi_1} \cap \term{\phi_2}) \subseteq \dot{\astore}(\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C})$. % \item else $t^1_{i_1} \in \term{\psi_1} \setminus \term{\phi_1} = \vec{x}_1 \cup \mathbb{C}$ (the case $t^2_{i_2} \in \term{\psi_2} \setminus \term{\phi_2}$ is symmetric). If $t^1_{i_1} \in \mathbb{C}$, we obtain $\ell = \overline{\astore}_1(t^1_{i_1}) = \dot{\astore}(t^1_{i_1}) \in \dot{\astore}(\mathbb{C})$, because $\mathbb{C} \subseteq \mathrm{dom}(\dot{\astore})$ and $\overline{\astore}$ agrees with $\dot{\astore}$ over $\mathbb{C}$. Else $t^1_{i_1} \in \vec{x}_1$ and we distinguish two cases: \begin{compactitem} \item if $t^2_{i_2} \in \cst{\psi_2}$, we obtain $\ell=\dot{\overline{\astore}}_2(t^2_{i_2})=\dot{\astore}(t^2_{i_2}) \in \dot{\astore}(\mathbb{C})$, by the above argument. % \item else $t^2_{i_2} \in \fv{\psi_2}$ and $\overline{\astore}_1(t^1_{i_1}) = \overline{\astore}_2(t^2_{i_2}) \in \overline{\astore}(\mathbb{C})$ by point (\ref{it2:def:companions}) of Definition \ref{def:companions}. 
\end{compactitem} \end{compactitem} $\Box$} \begin{example} Consider the structures defined in Example \ref{ex:normal}. The structure $(\mathfrak{s},\mathfrak{h})$ is a normal model of $p(x_1) * p(x_2)$: we have $(\mathfrak{s},\mathfrak{h}_i) \models p(x_i)$ with $\mathfrak{h}_i = (\ell_i \mapsto \ell_3)$ (for $i = 1,2$), $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$ and $\mathrm{Fr}(\mathfrak{h}_1,\mathfrak{h}_2) = \{ \ell_3 \} \subseteq \mathfrak{s}(\mathbb{C})$. Similarly, $(\mathfrak{s},\mathfrak{h}')$ is a normal model of $p(x_1) * p(x_2)$, $(\mathfrak{s},\mathfrak{h}_i') \models p(x_i)$ with $\mathfrak{h}_i' = (\ell_i \mapsto \ell_{3+i})$ (for $i = 1,2$), $\mathfrak{h}' = \mathfrak{h}_1' \uplus \mathfrak{h}_2'$ and $\mathrm{Fr}(\mathfrak{h}_1',\mathfrak{h}_2') = \emptyset$. On the other hand, $(\mathfrak{s},\mathfrak{h}'')$ is not normal: we have $(\mathfrak{s},\mathfrak{h}_i'') \models p(x_i)$ with $\mathfrak{h}_i'' = (\ell_i \mapsto \ell_{4})$ (for $i = 1,2$), $\mathfrak{h}'' = \mathfrak{h}_1'' \uplus \mathfrak{h}_2''$ and $\mathrm{Fr}(\mathfrak{h}_1'',\mathfrak{h}_2'') = \{ \ell_4 \} \not \subseteq \mathfrak{s}\left(\fv{p(x_1)} \cap \fv{p(x_2)} \cup \mathbb{C}\right)= \{ \ell_3\}$. \end{example} } The key property of normal structures is that validity of e-restricted\xspace entailment problems can be checked considering only (injective) normal structures. The intuition is that, since the (dis-)equalities occurring in the considered formula involve a constant, it is sufficient to assume that all the existential variables not equal to a constant are mapped to pairwise distinct locations, as all other structures can be obtained from such structures by applying a morphism that preserves the truth value of the considered formul{\ae}\shortVersionOnly{\footnote{See Appendices \ref{app:injective-normal} and \ref{app:normalproof} for more details.}}.
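Condition (\ref{it2:normal}) of Definition \ref{def:normal} reduces to a direct check on the associate produced by the unfolding; a minimal sketch following Example \ref{ex:normal}, with stores and associates as plain dictionaries (an illustrative encoding, not from the paper):

```python
def shares_only_constant_locations(assoc, store, constants, fvs):
    """Two distinct free variables of the unfolded formula may be mapped to
    the same location only if that location is the image of a constant."""
    const_locs = {store[c] for c in constants}
    fvs = list(fvs)
    for i, x in enumerate(fvs):
        for y in fvs[i + 1:]:
            if assoc[x] == assoc[y] and assoc[x] not in const_locs:
                return False
    return True

# Example-style data: two cells pointing to the constant's location is normal,
# two cells pointing to a shared non-constant location is not.
store = {"x1": 1, "x2": 2, "a": 3}
ok = dict(store, z1=3, z2=3)    # both existentials land on s(a)
bad = dict(store, z1=4, z2=4)   # shared location that is not s(a)
```

Here `z1`, `z2` stand for the existentials introduced by unfolding $p(x_1) * p(x_2)$; the `ok` associate satisfies the condition while `bad` violates it, mirroring the heaps $\mathfrak{h}$ and $\mathfrak{h}''$ of Example \ref{ex:normal}.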
\putInAppendix{ The proof of this result (Lemma \ref{lemma:normal-entailment}) relies on the following definition and lemmas. \begin{definition}\label{def:compatible} A total function $\gamma : \mathbb{L} \rightarrow \mathbb{L}$ is \emph{compatible} with a structure $(\mathfrak{s},\mathfrak{h})$ if and only if, for all $\ell_1, \ell_2 \in \mathbb{L}$ such that either $\ell_1,\ell_2 \in \mathrm{dom}(\mathfrak{h})$ or $\ell_1 \in \mathfrak{s}(\mathbb{C})$, if $\gamma(\ell_1) = \gamma(\ell_2)$ then $\ell_1=\ell_2$. We define $\gamma(\mathfrak{h}) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{\langle\gamma(\ell), (\gamma(\ell_1), \ldots, \gamma(\ell_\mathfrak{K}))\rangle \mid \mathfrak{h}(\ell) = (\ell_1, \ldots, \ell_\mathfrak{K})}$, whenever $\gamma$ is compatible with $(\mathfrak{s},\mathfrak{h})$. \end{definition} \begin{lemma}\label{lemma:restricted-unfolding} Let $\mathcal{S}$ be an e-restricted\xspace (resp.\ normalized) set of rules and $\phi$ be an e-restricted\xspace formula. Then, each unfolding $\psi$ of $\phi$ is e-restricted\xspace (resp.\ normalized). \end{lemma} \proof{The proof is by induction on the length of the unfolding sequence $\phi \unfold{\mathcal{S}}^* \psi$. $\Box$} \begin{lemma}\label{lemma:compatible} If $\mathcal{S}$ is an e-restricted\xspace set of rules, $\phi$ is an e-restricted\xspace formula and $(\mathfrak{s},\mathfrak{h})$ is an $\mathcal{S}$-model of $\phi$, then for any total function $\gamma$ compatible with $(\mathfrak{s},\mathfrak{h})$, the following hold: \begin{inparaenum}[(1)] % \item\label{it2:compatible} $\gamma(\mathfrak{h})$ is a heap, % \item\label{it3:compatible} $(\gamma\circ\mathfrak{s},\gamma(\mathfrak{h})) \models_\mathcal{S} \phi$. \end{inparaenum} \end{lemma} \proof{ (\ref{it2:compatible}) The set $\set{\gamma(\ell) \mid \ell \in \mathrm{dom}(\mathfrak{h})}$ is finite, because $\mathrm{dom}(\mathfrak{h})$ is finite. 
Consider two tuples $\langle\gamma(\ell), (\gamma(\ell_1), \ldots, \gamma(\ell_\mathfrak{K}))\rangle$ and $\langle\gamma(\ell'), (\gamma(\ell_1'), \ldots, \gamma(\ell_\mathfrak{K}'))\rangle \in \gamma(\mathfrak{h})$ and assume that $\gamma(\ell) = \gamma(\ell')$. Then, since $\gamma$ is compatible with $(\mathfrak{s},\mathfrak{h})$, necessarily $\ell = \ell'$. Since $\mathfrak{h}$ is a partial function, we have $(\ell_1, \ldots, \ell_\mathfrak{K}) = (\ell_1', \ldots, \ell_\mathfrak{K}')$, so that $\gamma(\mathfrak{h})$ is also a finite partial function. \noindent(\ref{it3:compatible}) If $(\mathfrak{s},\mathfrak{h}) \models_\mathcal{S} \phi$ then there exists a predicate-free unfolding $\phi \unfold{\mathcal{S}}^* \psi = \exists \vec{x} ~.~ \Asterisk_{i=1}^n t_i \bumpeq u_i * \Asterisk_{i=1}^m t'_i \not\seq u'_i * \Asterisk_{i=1}^k x_i \mapsto (t^i_1, \ldots, t^i_\mathfrak{K})$, such that $(\overline{\astore},\mathfrak{h}) \models \psi$, for an $\vec{x}$-associate $\overline{\astore}$ of $\mathfrak{s}$. Note that $\gamma\circ\overline{\astore}$ is an $\vec{x}$-associate of $\gamma\circ\mathfrak{s}$, because $\gamma$ is total. Moreover, because $\phi$ and $\mathcal{S}$ are both e-restricted\xspace, by Lemma \ref{lemma:restricted-unfolding}, $\psi$ is e-restricted\xspace, thus we can assume that $t_i \in \mathbb{C}$, for all $i \in \interv{1}{n}$ and that $t'_i \in \mathbb{C}$, for all $i \in \interv{1}{m}$. We consider the three types of atoms from $\psi$ below: \begin{compactitem} % \item For any $i \in \interv{1}{n}$, since $(\overline{\astore},\emptyset) \models t_i \bumpeq u_i$, we have $\overline{\astore}(t_i)=\overline{\astore}(u_i)$, thus $\gamma(\overline{\astore}(t_i))=\gamma(\overline{\astore}(u_i))$, leading to $(\gamma\circ\overline{\astore},\emptyset) \models t_i \bumpeq u_i$. % \item For any $i \in \interv{1}{m}$, since $(\overline{\astore},\emptyset) \models t'_i \not\seq u'_i$, we have $\overline{\astore}(t'_i) \neq \overline{\astore}(u'_i)$.
Because $t'_i \in \mathbb{C}$ and $(\mathfrak{s},\mathfrak{h}) \models \phi$, we have $t'_i \in \mathrm{dom}(\mathfrak{s})$ and $\overline{\astore}(t'_i) = \mathfrak{s}(t'_i) \in \mathfrak{s}(\mathbb{C})$. By Definition \ref{def:compatible}, we obtain $\gamma(\overline{\astore}(t'_i)) \neq \gamma(\overline{\astore}(u'_i))$, thus $(\gamma\circ\overline{\astore},\emptyset) \models t'_i \not\seq u'_i$. % \item If $(\overline{\astore},\mathfrak{h}) \models \Asterisk_{i=1}^k x_i \mapsto (t^i_1, \ldots, t^i_\mathfrak{K})$ then $\overline{\astore}(x_1), \ldots, \overline{\astore}(x_k)$ are pairwise distinct and $\mathrm{dom}(\mathfrak{h}) = \{\overline{\astore}(x_1), \ldots, \overline{\astore}(x_k)\}$. Since $\overline{\astore}(x_1), \ldots, \overline{\astore}(x_k) \in \mathrm{dom}(\mathfrak{h})$, by Definition \ref{def:compatible}, we obtain that $\gamma(\overline{\astore}(x_1)), \ldots, \gamma(\overline{\astore}(x_k))$ are pairwise distinct and $\mathrm{dom}(\gamma(\mathfrak{h})) = \set{\gamma(\overline{\astore}(x_1)), \ldots, \gamma(\overline{\astore}(x_k))}$. We have $\mathfrak{h}(\overline{\astore}(x_i)) = (\overline{\astore}(t^i_1), \ldots, \overline{\astore}(t^i_\mathfrak{K}))$, thus $\gamma(\mathfrak{h})(\gamma(\overline{\astore}(x_i))) = (\gamma(\overline{\astore}(t^i_1)), \ldots, \gamma(\overline{\astore}(t^i_\mathfrak{K})))$, for each $i \in \interv{1}{k}$, by Definition \ref{def:compatible}, hence $(\gamma\circ\overline{\astore},\gamma(\mathfrak{h})) \models \Asterisk_{i=1}^k x_i \mapsto (t^i_1, \ldots, t^i_\mathfrak{K})$. $\Box$ \end{compactitem}} } \begin{lemma}\label{lemma:normal-entailment} Let $\mathcal{P} = (\mathcal{S}, \Sigma)$ be a normalized and e-restricted\xspace entailment problem and let $\phi \vdash_\mathcal{P} \psi_1, \ldots, \psi_n$ be a sequent.
Then $\phi \vdash_\mathcal{P} \psi_1, \ldots, \psi_n$ is valid for $\mathcal{S}$ iff $(\dot{\astore}, \mathfrak{h}) \models_\mathcal{S} \bigvee_{i=1}^n \psi_i$, for each normal injective $\mathcal{S}$-model $(\dot{\astore},\mathfrak{h})$ of $\phi$. \end{lemma} \optionalProof{Lemma \ref{lemma:normal-entailment}}{sec:injective-normal}{ \label{app:normalproof} ``$\Rightarrow$'' This direction is trivial. ``$\Leftarrow$'' Let $(\dot{\astore},\mathfrak{h})$ be an injective $\mathcal{S}$-model of $\phi$. Then by Lemma \ref{lemma:restricted-unfolding}, there exists a predicate-free unfolding $\phi \unfold{\mathcal{S}}^* \exists \vec{x} ~.~ \varphi$, where $\varphi = \Asterisk_{i=1}^m t_i \not\seq u_i * \Asterisk_{i=1}^k x_i \mapsto (t^i_1, \ldots, t^i_\mathfrak{K})$ is e-restricted\xspace and normalized, and an $\vec{x}$-associate $\overline{\astore}$ of $\dot{\astore}$ such that $(\overline{\astore},\mathfrak{h}) \models \varphi$. Note that $\varphi$ contains no equalities since it is normalized and, since it is e-restricted\xspace, we can assume that $t_i \in \mathbb{C}$, for all $i \in \interv{1}{m}$. We consider a store $\mathfrak{s}' : \mathrm{dom}(\overline{\astore}) \rightarrow \mathbb{L}$ that satisfies the following conditions: \begin{compactenum}[(a)] \item\label{it1:normal-entailment} $\mathfrak{s}'(t) = \overline{\astore}(t)$, for each $t \in \mathrm{dom}(\overline{\astore})$ such that $\overline{\astore}(t) \in \overline{\astore}(\mathbb{C})$, % \item\label{it2:normal-entailment} $\mathfrak{s}'(t) \neq \mathfrak{s}'(u)$, for all terms $t \neq u \in \mathrm{dom}(\overline{\astore})$ such that $\overline{\astore}(t) \not\in \overline{\astore}(\mathbb{C})$ or $\overline{\astore}(u) \not\in \overline{\astore}(\mathbb{C})$.
\end{compactenum} Note that such a store exists because $\mathbb{L}$ is infinite, thus all terms that are not already mapped by $\overline{\astore}$ into locations from $\overline{\astore}(\mathbb{C})$ can be mapped to pairwise distinct locations, not occurring in $\overline{\astore}(\mathbb{C})$. Then we define the heap $\mathfrak{h}' \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \{\langle\mathfrak{s}'(x_i), (\mathfrak{s}'(t^i_1), \ldots, \mathfrak{s}'(t^i_\mathfrak{K}))\rangle \mid i \in \interv{1}{k}\}$. To prove that $\mathfrak{h}'$ is a well-defined heap, first note that the set $\set{\mathfrak{s}'(x_i) \mid i \in \interv{1}{k}}$ is finite and suppose, for a contradiction, that $\mathfrak{s}'(x_i)=\mathfrak{s}'(x_j)$, for some $i \neq j \in \interv{1}{k}$. By point (\ref{it2:normal-entailment}), it must be the case that $\overline{\astore}(x_i), \overline{\astore}(x_j) \in \overline{\astore}(\mathbb{C})$, in which case we obtain $\overline{\astore}(x_i)=\mathfrak{s}'(x_i)=\mathfrak{s}'(x_j)=\overline{\astore}(x_j)$, by point (\ref{it1:normal-entailment}), thus contradicting the fact that $(\overline{\astore},\mathfrak{h}) \models \Asterisk_{i=1}^k x_i \mapsto (t^i_1, \ldots, t^i_\mathfrak{K})$. Hence the locations $\set{\mathfrak{s}'(x_i) \mid i \in \interv{1}{k}}$ are pairwise distinct and $\mathfrak{h}'$ is a finite partial function. We prove next that $(\mathfrak{s}',\mathfrak{h}') \models \varphi$, considering each type of atom in $\varphi$: \begin{compactitem} % \item for any $i \in \interv{1}{m}$, since $(\overline{\astore},\emptyset) \models t_i \not\seq u_i$, we have $\overline{\astore}(t_i) \neq \overline{\astore}(u_i)$ and, since $t_i \in \mathbb{C}$, we obtain $\mathfrak{s}'(t_i)=\overline{\astore}(t_i) \in \overline{\astore}(\mathbb{C})$.
We distinguish the following cases: \begin{compactitem} \item if $\overline{\astore}(u_i) \in \overline{\astore}(\mathbb{C})$ then $\mathfrak{s}'(u_i)=\overline{\astore}(u_i) \neq \overline{\astore}(t_i)=\mathfrak{s}'(t_i)$, by point (\ref{it1:normal-entailment}), % \item otherwise, $\overline{\astore}(u_i) \not\in \overline{\astore}(\mathbb{C})$ and $\mathfrak{s}'(t_i) \neq \mathfrak{s}'(u_i)$, by point (\ref{it2:normal-entailment}). \end{compactitem} In both cases, we have $(\mathfrak{s}',\emptyset) \models t_i \not\seq u_i$. % \item $(\mathfrak{s}',\mathfrak{h}') \models \Asterisk_{i=1}^k x_i \mapsto (t^i_1, \ldots, t^i_\mathfrak{K})$, by the definition of $\mathfrak{h}'$. \end{compactitem} Let $\mathfrak{s}''$ be the restriction of $\mathfrak{s}'$ to $\mathrm{dom}(\dot{\astore})$. By point (\ref{it2:normal-entailment}), $(\mathfrak{s}'',\mathfrak{h}')$ is an injective normal $\mathcal{S}$-model of $\phi$, according to Definition \ref{def:normal} (simply let $\mathfrak{s}'$ be its $\vec{x}$-associate). Because $\mathfrak{s}''$ is injective, by the assumption of the Lemma, we obtain $(\mathfrak{s}'',\mathfrak{h}') \models_\mathcal{S} \psi_i$, for some $i \in \interv{1}{n}$, and it remains to prove that $(\dot{\astore},\mathfrak{h})\models_\mathcal{S}\psi_i$. To this end, consider the function $\gamma : \mathbb{L}\rightarrow\mathbb{L}$, defined as: \begin{compactitem} \item $\gamma(\mathfrak{s}''(x))=\overline{\astore}(x)$, for all $x \in \mathrm{dom}(\mathfrak{s}'')$, % \item $\gamma(\ell)=\ell$, for all $\ell \in \mathbb{L}\setminus\mathrm{img}(\mathfrak{s}'')$. \end{compactitem} Observe that $\gamma$ is well-defined, since by definition of $\mathfrak{s}'$, $\mathfrak{s}'(x)=\mathfrak{s}'(x') \Rightarrow \overline{\astore}(x)=\overline{\astore}(x')$. Below we check that $\gamma$ is compatible with $(\mathfrak{s}'',\mathfrak{h}')$.
Let $\ell_1, \ell_2 \in \mathbb{L}$ be two locations such that $\gamma(\ell_1) = \gamma(\ell_2)$: \begin{compactitem} \item if $\ell_1, \ell_2 \in \mathrm{dom}(\mathfrak{h}')$ then $\ell_1 = \mathfrak{s}''(x_i)$ and $\ell_2 = \mathfrak{s}''(x_j)$, for some $i, j \in \interv{1}{k}$, by definition of $\mathfrak{h}'$. Suppose, for a contradiction, that $i \neq j$. Then $\overline{\astore}(x_i) = \gamma(\mathfrak{s}''(x_i)) = \gamma(\mathfrak{s}''(x_j)) = \overline{\astore}(x_j)$, which contradicts the fact that $(\overline{\astore},\mathfrak{h}) \models \Asterisk_{i=1}^k x_i \mapsto (t^i_1, \ldots, t^i_\mathfrak{K})$. Hence $i = j$, leading to $\ell_1 = \ell_2$. % \item if $\ell_1 \in \mathfrak{s}''(\mathbb{C})$, then let $c \in \mathbb{C}$ be a constant such that $\ell_1 = \mathfrak{s}''(c)$, so that $\gamma(\ell_1) = \overline{\astore}(c)$. Suppose, for a contradiction, that $\ell_2 \not\in \mathrm{img}(\mathfrak{s}'')$. Then $\gamma(\ell_2) = \ell_2 = \overline{\astore}(c)$, hence $\ell_2 \in \overline{\astore}(\mathbb{C})$. But since $\overline{\astore}$ and $\mathfrak{s}''$ agree over $\mathbb{C}$, we have $\overline{\astore}(c) \in \mathfrak{s}''(\mathbb{C})$. Hence $\ell_2 = \overline{\astore}(c) = \mathfrak{s}''(c)$, which contradicts $\ell_2 \not\in \mathrm{img}(\mathfrak{s}'')$. Thus $\ell_2 \in \mathrm{img}(\mathfrak{s}'')$ and let $\ell_2 = \mathfrak{s}''(t)$, for some term $t$. We have $\gamma(\mathfrak{s}''(t)) = \overline{\astore}(t)$, thus $\overline{\astore}(c) = \gamma(\ell_2) = \gamma(\ell_1) = \overline{\astore}(t)$. By point (\ref{it1:normal-entailment}), we obtain $\ell_2 = \mathfrak{s}'(t) = \overline{\astore}(t) = \overline{\astore}(c) = \mathfrak{s}''(c) = \ell_1$. \end{compactitem} Moreover, it is easy to check that $(\overline{\astore}, \mathfrak{h}) = (\gamma\circ\mathfrak{s}'', \gamma(\mathfrak{h}'))$.
Since $\dot{\astore}$ is the restriction of $\overline{\astore}$ to $\term{\phi}$, by Lemma \ref{lemma:compatible}, we obtain $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \psi_i$. $\Box$} \shortVersionOnly{ Another important property of injective normal structures is that the frontier $\mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2)$ of a heap decomposition $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$ such that $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \phi_1 * \phi_2$ and $(\dot{\astore},\mathfrak{h}_i) \models_\mathcal{S} \phi_i$, for each $i = 1,2$, is contained in the image of the common free variables and constants via $\dot{\astore}$, i.e.\ $\mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \subseteq \dot{\astore}(\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C})$\footnote{See Lemma \ref{lemma:frontier} in Appendix \ref{app:injective-normal} for details.}. } \section{Core Formul{\ae}} \label{sec:core} Given an e-restricted\xspace entailment problem $\mathcal{P} = (\mathcal{S}, \Sigma)$, the idea of the entailment checking algorithm is to compute, for each symbolic heap $\phi$ that occurs as the left-hand side of a sequent $\phi \vdash_\mathcal{P} \psi_1, \ldots, \psi_n$, a finite set of sets of formul{\ae} $\mathcal{F}(\phi) = \set{F_1, \ldots, F_m}$, of some specific pattern, called \emph{core formul{\ae}}. The set $\mathcal{F}(\phi)$ defines an equivalence relation, of finite index, on the set of injective normal $\mathcal{S}$-models of $\phi$, such that each set $F \in \mathcal{F}(\phi)$ encodes an equivalence class. Because the validity of each sequent can be checked by testing whether every (injective) normal model of its left-hand side is a model of some symbolic heap on the right-hand side (Lemma \ref{lemma:normal-entailment}), an equivalent check is that each set $F \in \mathcal{F}(\phi)$ contains a core formula entailing some formula $\psi_i$, for some $i \in \interv{1}{n}$.
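The validity check just described is a finite two-level search. The following Python fragment is a schematic rendering of it, in which `entails` stands for an abstract oracle deciding whether a core formula entails a symbolic heap; all names are ours and are not part of the formal development.

```python
def sequent_valid(abstraction, rhs, entails):
    """abstraction: the finite family F(phi) = {F_1, ..., F_m}, each F_i a
    set of core formulae; rhs: the symbolic heaps psi_1, ..., psi_n.
    The sequent is valid iff every equivalence class F contains a core
    formula entailing some psi_i."""
    return all(
        any(entails(c, psi) for c in F for psi in rhs)
        for F in abstraction)
```

For instance, with a toy oracle `entails = lambda c, psi: c == psi`, the family `[{"a", "b"}, {"b"}]` is valid against the right-hand side `["b"]` but not against `["a"]`.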
To improve the presentation, we first formalize the notions of core formul{\ae} and abstractions by sets of core formul{\ae}, while deferring the effective construction of $\mathcal{F}(\phi)$, for a symbolic heap $\phi$, to the next section (\S\ref{sec:coreabs}). In the following, we refer to a given entailment problem $\mathcal{P} = (\mathcal{S}, \Sigma)$. First, we define core formul{\ae} as a fragment of $\seplogk{\mathfrak{K}}$. Consider the formula $\mathsf{loc}(x) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \exists y_0 \ldots \exists y_\mathfrak{K} ~.~ y_0 \mapsto (y_1,\ldots,y_\mathfrak{K}) * \bigvee_{i=0}^\mathfrak{K} x \approx y_i$. Note that a structure is a model of $\mathsf{loc}(x)$ iff the variable $x$ is assigned to a location from the domain or the range of the heap. We define also the following bounded quantifiers: \vspace*{-.5\baselineskip} \[\begin{array}{rclcrcl} \dot\exists x ~.~ \phi & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \exists x ~.~ \bigwedge_{t \in (\fv{\phi} \setminus \set{x}) \cup \mathbb{C}} \neg x \approx t \wedge \phi & \hspace*{1cm} & \heapex x ~.~ \phi & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \dot\exists x ~.~ \mathsf{loc}(x) \wedge \phi \\ \nheapex x ~.~ \phi & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \dot\exists x ~.~ \neg\mathsf{loc}(x) \wedge \phi && \nheapall x ~.~ \phi & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \neg\nheapex x ~.~ \neg \phi \\[-2mm] \end{array}\] In the following, we shall be extensively using the $\heapex x ~.~ \phi$ and $\nheapall x ~.~ \phi$ quantifiers. The formula $\heapex x ~.~ \phi$ states that there exists a location $\ell$ which occurs in the domain or range of the heap and is distinct from the locations associated with the constants and free variables, such that $\phi$ holds when $x$ is associated with $\ell$. 
Similarly, $\nheapall x ~.~ \phi$ states that $\phi$ holds if $x$ is associated with any location $\ell$ that is outside of the heap and distinct from all the constants and free variables. The use of these special quantifiers will allow us to restrict ourselves to injective stores (since all variables and constants are mapped to distinct locations), which greatly simplifies the handling of equalities. \additionalMaterial{Additional Material on Core Formul{\ae}}{sec:core}{app:core}{ The formal semantics of the bounded quantifiers is stated below: \begin{lemma}\label{lemma:bound-quant} Given a $\seplogk{\mathfrak{K}}$ formula $\phi$ and $x \in \fv{\phi}$, the following hold, for any structure $(\mathfrak{s},\mathfrak{h})$: \begin{compactenum} \item\label{it:heapex} $(\mathfrak{s},\mathfrak{h}) \models_\mathcal{S} \heapex x ~.~ \phi$ iff $(\mathfrak{s}[x\leftarrow \ell],\mathfrak{h}) \models_\mathcal{S} \phi$, for some $\ell \in \mathrm{loc}(\mathfrak{h}) \setminus \mathfrak{s}((\fv{\phi} \setminus \set{x}) \cup \mathbb{C})$, % \item\label{it:nheapall} $(\mathfrak{s},\mathfrak{h}) \models_\mathcal{S} \nheapall x ~.~ \phi$ iff $(\mathfrak{s}[x\leftarrow \ell],\mathfrak{h}) \models_\mathcal{S} \phi$, for all $\ell \in \mathbb{L} \setminus \left[\mathrm{loc}(\mathfrak{h}) \cup \mathfrak{s}((\fv{\phi} \setminus \set{x}) \cup \mathbb{C})\right]$. \end{compactenum} \end{lemma} \proof{ First, for any structure $(\mathfrak{s},\mathfrak{h})$, we have $(\mathfrak{s},\mathfrak{h}) \models \mathsf{loc}(x) \iff \mathfrak{s}(x) \in \mathrm{loc}(\mathfrak{h})$. \noindent(\ref{it:heapex}) By definition, $\heapex x ~.~ \phi$ is equivalent to $\exists x ~.~ \bigwedge_{t \in (\fv{\phi} \setminus \set{x}) \cup \mathbb{C}} \neg x \approx t \wedge \mathsf{loc}(x) \wedge \phi$.
\noindent(\ref{it:nheapall}) By definition, $\nheapall x ~.~ \phi$ is equivalent to $\forall x ~.~ (\bigwedge_{t \in (\fv{\phi} \setminus \set{x}) \cup \mathbb{C}} \neg x \approx t \wedge \neg\mathsf{loc}(x)) \rightarrow \phi$. $\Box$}} \shortVersionOnly{ \begin{wrapfigure}{R}{0.5\textwidth} \vspace*{-0.5\baselineskip} \centerline{\input{context.pdf_t}} \caption{Inductive Definition of Context Predicates} \label{fig:context} \vspace*{-\baselineskip} \end{wrapfigure} } \longVersionOnly{ \begin{figure}[htb] \centerline{\input{context.pdf_t}} \caption{Inductive Definition of Context Predicates} \label{fig:context} \end{figure} } The main ingredient used to define core formul{\ae} are \emph{context predicates}. Given a tuple of predicate symbols $(p, q_1, \ldots, q_n) \in \mathbb{P}^{n+1}$, where $n\geq 0$, we consider a {context} predicate symbol $\context{p,q_1,\ldots,q_n}$ of arity $\#p+\sum_{i=1}^n \#q_i$. \longVersionOnly{ The rules defining the semantics of the context predicate atoms are given below: \begin{eqnarray} \context{p,p}(\vec{x},\vec{y}) & \Leftarrow & \vec{x} \bumpeq \vec{y} \text{, where $\vec{x} \cap \vec{y} = \emptyset$ and $\len{\vec{x}} = \len{\vec{y}} = \#p$}, \label{rule:emp} \\ \context{p,q_1,\ldots,q_n}(\vec{x},\vec{y}_1,\ldots,\vec{y}_n) & \Leftarrow & \exists \vec{v} ~.~ \psi\sigma * \Asterisk_{j=1}^m \context{p_j,q_{i^j_1},\ldots, q_{i^j_{k_j}}}(\sigma(\vec{w}_j),\vec{y}_{i^j_1}, \ldots, \vec{y}_{i^j_{k_j}}\!\!\!\!),~ \label{rule:notemp} \end{eqnarray} for each rule $p(\vec{x}) \Leftarrow_\mathcal{S} \exists \vec{z} ~.~ \psi * \Asterisk_{j=1}^m p_j(\vec{w}_j)$ where $\vec{x},\vec{z},\vec{y}_1,\ldots,\vec{y}_n$ are pairwise disjoint tuples of variables; each substitution $\sigma : \vec{z} \rightharpoonup \vec{x} \cup \bigcup_{i=1}^n \vec{y}_i$ where $\vec{v} = \vec{z} \setminus \mathrm{dom}(\sigma)$ and pairwise disjoint (and possibly empty) sets $\setof{\{i^j_1, \ldots, i^j_{k_j}\}}{j \in \interv{1}{m}}$ with a union equal to 
$\interv{1}{n}$. Let $\coreset{\mathcal{S}}$ be the set of rules for contexts (\ref{rule:emp}) and (\ref{rule:notemp}), induced by the rules from $\mathcal{S}$. The satisfaction relation $(\mathfrak{s},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$ is defined as usual, w.r.t. the set of rules $\coreset{\mathcal{S}}$ (Definition \ref{def:unfolding-semantics}).} The informal intuition of a context predicate atom $\context{p,q_1,\ldots,q_n}(\vec{t},\vec{u}_1,\ldots,\vec{u}_n)$ is the following: a structure $(\mathfrak{s},\mathfrak{h})$ is a model of this atom if there exist models $(\mathfrak{s},\mathfrak{h}_i)$ of $q_i(\vec{u}_i)$, $i\in\interv{1}{n}$ respectively, with mutually disjoint heaps, an unfolding $\psi$ of $p(\vec{t})$ in which the atoms $q_i(\vec{u}_i)$ occur, and an associate $\mathfrak{s}'$ of $\mathfrak{s}$ such that $(\mathfrak{s}',\mathfrak{h} \uplus \biguplus_{i=1}^n \mathfrak{h}_i)$ is a model of $\psi$. For readability's sake, we adopt a notation close in spirit to $\mathsf{SL}$'s separating implication (known as the magic wand), and we write $\Asterisk_{i=1}^n q_i(\vec{y}_i) \swand p(\vec{x})$ for $\context{p,q_1,\ldots,q_n}(\vec{x},\vec{y}_1,\ldots,\vec{y}_n)$ and $\mathsf{emp} \swand p(\vec{x})$, when $n=0$\footnote{Context predicates are similar to the {\em strong magic wand} introduced in \cite{NTKY2018}. A context predicate $\alpha \swand \beta$ is also related to the usual separating implication $\alpha \wand \beta$ of separation logic, but it is not equivalent. Intuitively, $\wand$ represents a difference between two heaps, whereas $\swand$ removes some atoms in an unfolding. For instance, if $p$ and $q$ are defined by the same inductive rules, up to a renaming of predicates, then $p(x) \wand q(x)$ always holds in a structure with an empty heap, whereas $p(x) \swand q(x)$ holds if, moreover, $p(x)$ and $q(x)$ are the same atom.}. 
The set of rules defining the interpretation of context predicates is the least set defined by the inference rules below, denoted $\coreset{\mathcal{S}}$: \begin{equation} \tag{$\mathsf{I}$}\label{infrule:emp} \infer[\vec{x} \cap \vec{y} = \emptyset]{ p(\vec{x}) \swand p(\vec{y}) \Leftarrow_{\coreset{\mathcal{S}}} \vec{x} \bumpeq \vec{y} }{} \end{equation} \vspace*{-\baselineskip} \begin{equation}\tag{$\mathsf{II}$}\label{infrule:notemp} \infer[\begin{array}{l} \vec{x}, \vec{z}, \vec{y}_1, \ldots, \vec{y}_n \text{ pairwise disjoint} \\ \sigma : \vec{z} \rightharpoonup \vec{x} \cup \bigcup_{i=1}^n \vec{y}_i \\ \vec{v} = \vec{z} \setminus \mathrm{dom}(\sigma) \end{array}]{ \Asterisk_{i=1}^n q_i(\vec{y}_i) \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{v} ~.~ \psi\sigma * \Asterisk_{j=1}^m \left(\gamma_j \swand p_j(\sigma(\vec{w}_j))\right) }{ p(\vec{x}) \Leftarrow_{\mathcal{S}} \exists \vec{z} ~.~ \psi * \Asterisk_{j=1}^m p_j(\vec{w}_j) \hspace*{4mm} \Asterisk_{i=1}^n q_i(\vec{y}_i) = \Asterisk_{j=1}^m \gamma_j } \end{equation} Note that $\coreset{\mathcal{S}}$ is not progressing, since the rule for $p(\vec{x}) \swand p(\vec{y})$ does not allocate any location. However, if $\mathcal{S}$ is progressing, then the set of rules obtained by applying (\ref{infrule:notemp}) only is also progressing. Rule (\ref{infrule:emp}) says that each predicate atom $p(\vec{t}) \swand p(\vec{u})$, such that $\vec{t}$ and $\vec{u}$ are mapped to the same tuple of locations, is satisfied by the empty heap. To understand rule (\ref{infrule:notemp}), let $(\mathfrak{s},\mathfrak{h})$ be an $\mathcal{S}$-model of $p(\vec{t})$ and assume there are a predicate-free unfolding $\psi$ of $p(\vec{t})$ and an associate $\mathfrak{s}'$ of $\mathfrak{s}$, such that $q_1(\vec{u}_1) , \ldots, q_n(\vec{u}_n)$ occur in $\psi$ and $(\mathfrak{s}',\mathfrak{h}) \models_{\mathcal{S}} \psi$ (Fig. \ref{fig:context}). 
If the first unfolding step is an instance of a rule $p(\vec{x}) \Leftarrow_\mathcal{S} \exists \vec{z} ~.~ \psi * \Asterisk_{j=1}^m p_j(\vec{w}_j)$ then there exist a $\vec{z}$-associate $\overline{\astore}$ of $\mathfrak{s}$ and a split of $\mathfrak{h}$ into disjoint heaps $\mathfrak{h}_0, \ldots, \mathfrak{h}_m$ such that $(\overline{\astore},\mathfrak{h}_0) \models \psi[\vec{t}/\vec{x}]$ and $(\overline{\astore},\mathfrak{h}_j) \models_\mathcal{S} p_j(\vec{w}_j)[\vec{t}/\vec{x}]$, for all $j \in \interv{1}{m}$. Assume, for simplicity, that $\vec{u}_1 \cup \ldots \cup \vec{u}_n \subseteq \mathrm{dom}(\overline{\astore})$ and let $\overline{\aheap}_1, \ldots, \overline{\aheap}_n$ be disjoint heaps such that $(\overline{\astore},\overline{\aheap}_i) \models_\mathcal{S} q_i(\vec{u}_i)$. Then there exists a partition $\big\{\{i_{j,1}, \ldots, i_{j,k_j}\} \mid j\in \interv{1}{m}\big\}$ of $\interv{1}{n}$, such that $\overline{\aheap}_{i_{j,1}}, \ldots, \overline{\aheap}_{i_{j,k_j}} \subseteq \mathfrak{h}_j$, for all $j \in \interv{1}{m}$. Let $\gamma_j \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \Asterisk_{\ell=1}^{k_j} q_{i_{j,\ell}}(\vec{u}_{i_{j,\ell}})$, then $(\overline{\astore}, \mathfrak{h}_j \setminus (\overline{\aheap}_{i_{j,1}} \cup \ldots \cup \overline{\aheap}_{i_{j,k_j}})) \models_{\coreset{\mathcal{S}}} \gamma_j \swand p_j(\vec{w}_j)[\vec{t}/\vec{x}]$, for each $j \in \interv{1}{m}$.
This observation leads to the inductive definition of the semantics for $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$, by the rule that occurs in the conclusion of (\ref{infrule:notemp}), where the substitution $\sigma : \vec{z} \rightharpoonup \vec{x} \cup \bigcup_{i=1}^n \vec{y}_i$ is used to instantiate\footnote{Note that this instantiation is, in principle, redundant (i.e.\ the same rules are obtained if $\mathrm{dom}(\sigma)=\emptyset$ by choosing appropriate $\vec{z}$-associates) but we keep it to simplify the related proofs.} some of the existentially quantified variables from the original rule $p(\vec{x}) \Leftarrow_\mathcal{S} \exists \vec{z} ~.~ \psi * \Asterisk_{j=1}^m p_j(\vec{w}_j)$. \putInAppendix{ Below we prove the equivalence between the atoms $p(\vec{t})$ and $\mathsf{emp} \swand p(\vec{t})$. \begin{lemma}\label{lemma:pred-equiv} A structure $(\mathfrak{s},\mathfrak{h})$ is an $\mathcal{S}$-model of $p(\vec{t})$ if and only if $(\mathfrak{s},\mathfrak{h})$ is a $\coreset{\mathcal{S}}$-model of $\mathsf{emp}\swand p(\vec{t})$. \end{lemma} \proof{ ``$\Rightarrow$'' For each rule $p(\vec{x}) \Leftarrow_\mathcal{S} \exists \vec{z} ~.~ \psi * \Asterisk_{i=1}^n q_i(\vec{y}_i)$, there exists a rule $\mathsf{emp} \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{z} ~.~ \psi * \Asterisk_{i=1}^n \mathsf{emp} \swand q_i(\vec{y}_i)$, corresponding to the case where the substitution $\sigma$ is empty. The proof follows by a simple induction on the length of the predicate-free unfolding of $p(\vec{t})$. ``$\Leftarrow$'' We prove the other direction by induction on the length of the predicate-free unfolding of $\mathsf{emp} \swand p(\vec{t})$. Assume $(\mathfrak{s}, \mathfrak{h})$ is a $\coreset{\mathcal{S}}$-model of $\mathsf{emp} \swand p(\vec{t})$.
Then there exist a rule $\mathsf{emp} \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{v} ~.~ \psi\sigma * \Asterisk_{j=1}^m \left(\mathsf{emp} \swand p_j(\sigma(\vec{w}_j))\right)$ in $\coreset{\mathcal{S}}$ and a $\vec{v}$-associate $\mathfrak{s}'$ of $\mathfrak{s}$ such that $(\mathfrak{s}', \mathfrak{h}) \models \psi\sigma\theta * \Asterisk_{j=1}^m \left(\mathsf{emp} \swand p_j(\theta\circ\sigma(\vec{w}_j))\right)$, where $\theta = [\vec{t}/\vec{x}]$. By definition of $\coreset{\mathcal{S}}$, this entails that $p(\vec{t})$ can be unfolded into $\exists \vec{z}~.~\psi\theta * \Asterisk_{j=1}^m p_j(\theta(\vec{w}_j))$ using the rules in $\mathcal{S}$. The heap $\mathfrak{h}$ can be decomposed into $\mathfrak{h}_0\uplus \cdots\uplus \mathfrak{h}_m$, where $(\mathfrak{s}', \mathfrak{h}_j) \models \mathsf{emp}\swand p_j(\theta\circ\sigma(\vec{w}_j))$, for $j\in \interv{1}{m}$. By the induction hypothesis, $(\mathfrak{s}', \mathfrak{h}_j)$ is an $\mathcal{S}$-model of $p_j(\theta\circ\sigma(\vec{w}_j))$, and we deduce that $(\mathfrak{s}, \mathfrak{h})$ is an $\mathcal{S}$-model of $\exists \vec{z}~.~ \psi\theta * \Asterisk_{j=1}^m p_j(\theta(\vec{w}_j))$. $\Box$} Another property of context predicate atoms is stated by the lemma below: \begin{lemma}\label{lemma:context-emp} If $\mathcal{S}$ is progressing, then for each store (resp. injective store) $\mathfrak{s}$, we have $(\mathfrak{s},\emptyset) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$ if and only if $n=1$, $p = q_1$ and $\mathfrak{s}(\vec{t}) = \mathfrak{s}(\vec{u}_1)$ (resp. $\vec{t} = \vec{u}_1$).
\end{lemma} \proof{ ``$\Rightarrow$'' If $(\mathfrak{s},\emptyset) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$ then there exists a rule $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t}) \Leftarrow_{\coreset{\mathcal{S}}} \phi$ and a substitution $\sigma$ such that $(\mathfrak{s},\emptyset) \models_{\coreset{\mathcal{S}}} \phi\sigma$, where $\sigma = [\vec{t}/\vec{x}, \vec{u}_1/\vec{y}_1, \ldots, \vec{u}_n/\vec{y}_n]$. If the rule is an instance of (\ref{infrule:emp}) then $n = 1$, $p = q_1$ and $(\mathfrak{s},\emptyset) \models \vec{t} \bumpeq \vec{u}_1$, leading to $\mathfrak{s}(\vec{t}) = \mathfrak{s}(\vec{u}_1)$. If, moreover $\mathfrak{s}$ is injective, we get $\vec{t} = \vec{u}_1$. Otherwise, if the rule is an instance of (\ref{infrule:notemp}), then since $\mathcal{S}$ is progressing, $\phi\sigma$ must contain exactly one points-to atom, hence $(\mathfrak{s},\emptyset) \models_{\coreset{\mathcal{S}}} \phi\sigma$ cannot be the case. ``$\Leftarrow$'' This is a simple application of rule (\ref{infrule:emp}). $\Box$}} \begin{example} \label{ex:contexts} Consider the set \(\mathcal{S} = \{p(x) \Leftarrow \exists z_1,z_2 ~.~ x \mapsto (z_1,z_2) * q(z_1) * q(z_2), q(x) \Leftarrow x \mapsto (x,x)\}\). We have $(\mathfrak{s},\mathfrak{h}) \models_{\mathcal{S}} p(x)$ with $\mathfrak{s} = \{ (x,\ell_1) \}$ and $\mathfrak{h} = \{ (\ell_1,\ell_2,\ell_3), (\ell_2,\ell_2,\ell_2), (\ell_3,\ell_3,\ell_3) \}$. 
The atom $q(y) \swand p(x)$ is defined by the following non-progressing rules: \shortVersionOnly{ \vspace*{-.5\baselineskip} \[\begin{tabular}{rclrcl} $q(y) \swand p(x)$ & $\Leftarrow$ & $\exists z_1,z_2 ~.~ x \mapsto (z_1,z_2) * q(y) \swand q(z_1) * \mathsf{emp} \swand q(z_2)$ & $q(y) \swand q(x)$ & $\Leftarrow$ & $x \bumpeq y$ \\ $q(y) \swand p(x)$ & $\Leftarrow$ & $\exists z_1,z_2 ~.~ x \mapsto (z_1,z_2) * \mathsf{emp} \swand q(z_1) * q(y) \swand q(z_2)$ & $\mathsf{emp} \swand q(x)$ & $\Leftarrow$ & $x \mapsto (x,x)$ \\[-2mm] \end{tabular}\] } \longVersionOnly{\[ \begin{tabular}{rcl} $q(y) \swand p(x)$ & $\Leftarrow$ & $\exists z_1,z_2 ~.~ x \mapsto (z_1,z_2) * q(y) \swand q(z_1) * \mathsf{emp} \swand q(z_2)$ \\ $q(y) \swand p(x)$ & $\Leftarrow$ & $\exists z_1,z_2 ~.~ x \mapsto (z_1,z_2) * \mathsf{emp} \swand q(z_1) * q(y) \swand q(z_2)$ \\ $q(y) \swand q(x)$ & $\Leftarrow$ & $x \bumpeq y$ \\ $\mathsf{emp} \swand q(x)$ & $\Leftarrow$ & $x \mapsto (x,x)$ \\ \end{tabular} \]} The two rules for $q(y) \swand p(x)$ correspond to the two ways of distributing $q(y)$ over $q(z_1)$, $q(z_2)$. We have $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$, with $\mathfrak{h}_1 = \{ (\ell_1,\ell_2,\ell_3), (\ell_2,\ell_2,\ell_2)\}$ and $\mathfrak{h}_2 = \{ (\ell_3,\ell_3,\ell_3) \}$. It is easy to check that $(\mathfrak{s}[y \leftarrow \ell_3],\mathfrak{h}_1) \models_{\coreset{\mathcal{S}}} q(y) \swand p(x)$, and $(\mathfrak{s}[y \leftarrow \ell_3],\mathfrak{h}_2) \models_{\coreset{\mathcal{S}}} q(y)$. Note that we also have $(\mathfrak{s}[y \leftarrow \ell_2],\mathfrak{h}_1') \models_{\coreset{\mathcal{S}}} q(y) \swand p(x)$, with $\mathfrak{h}_1' = \{ (\ell_1,\ell_2,\ell_3), (\ell_3,\ell_3,\ell_3)\}$. 
\hfill$\blacksquare$ \end{example} Having introduced context predicates, we can now define the pattern of core formul{\ae}: \begin{definition}\label{def:core-formulae} A \emph{core formula} $\varphi$ is an instance of the pattern: \[\heapex \vec{x} \nheapall \vec{y} ~.~ \Asterisk_{i=1}^n \left(\Asterisk_{j=1}^{k_i} q_j^i(\vec{u}^i_j) \swand p_i(\vec{t}_i)\right) * \Asterisk_{i=n+1}^m t^i_0 \mapsto (t^i_1,\ldots,t^i_\mathfrak{K}) \text{\quad such that:}\] \vspace*{-\baselineskip} \begin{compactenum}[(i)] \item\label{core-formulae:no_useless_var} each variable occurring in $\vec{y}$ also occurs in an atom in $\varphi$; % \item\label{core-formulae:exists} for every variable $x \in \vec{x}$, either $x \in \vec{t}_i \setminus \bigcup_{j=1}^{k_i} \vec{u}^i_j$ for some $i \in \interv{1}{n}$, or $x = t^i_j$, for some $i \in \interv{n+1}{m}$ and some $j \in \interv{0}{\mathfrak{K}}$; % \item\label{core-formulae:roots_are_distinct} each term $t$ occurs at most once as $t = \mathsf{root}(\alpha)$, where $\alpha$ is an atom of $\varphi$. % \end{compactenum} We define moreover the set of terms $\roots{\varphi} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \lroots{\varphi} \cup \rroots{\varphi}$, where $\lroots{\varphi} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \{\mathsf{root}(q_j^i(\vec{u}^i_j)) \mid i \in \interv{1}{n}, j \in \interv{1}{k_i}\}$ and $\rroots{\varphi} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \{\mathsf{root}(p_i(\vec{t}_i)) \mid i \in \interv{1}{n}\} \cup \{t^i_0 \mid i \in \interv{n+1}{m}\}$. \end{definition} Note that an unfolding of a core formula using the rules in $\coreset{\mathcal{S}}$ is not necessarily a core formula, because of the unbounded existential quantifiers and equational atoms that occur in the rules from $\coreset{\mathcal{S}}$.
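Condition (\ref{core-formulae:roots_are_distinct}) is a simple syntactic check. The following Python sketch is ours: atoms are encoded as tuples, and we assume the usual convention for progressing systems that $\mathsf{root}(p(t_1,\ldots,t_n)) = t_1$:

```python
from collections import Counter

def unique_roots(atoms):
    """Check the unique-roots condition: every term occurs at most once
    as the root of an atom. An atom is either ('pto', t0, succs), whose
    root is t0, or ('wand', contexts, head), where each context q(u)
    contributes root(q(u)) to lroots and the head p(t) contributes
    root(p(t)) to rroots. Predicate atoms are encoded as (name, args),
    with root = first argument (our assumption)."""
    occurrences = []
    for atom in atoms:
        if atom[0] == "pto":
            occurrences.append(atom[1])
        else:
            _, contexts, head = atom
            occurrences += [q[1][0] for q in contexts]  # lroots
            occurrences.append(head[1][0])              # rroots
    return all(n == 1 for n in Counter(occurrences).values())

# q(y) -* p(x) alone is fine; p(t) -* p(t) repeats the root t
assert unique_roots([("wand", [("q", ("y",))], ("p", ("x",)))])
assert not unique_roots([("wand", [("p", ("t",))], ("p", ("t",)))])
```

In particular, the check rejects atoms of the form $p(\vec{t}) \swand p(\vec{t})$, whose left- and right-hand roots coincide.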
Note also that a core formula cannot contain an occurrence of a predicate of the form $p(\vec{t}) \swand p(\vec{t})$ because otherwise, Condition (\ref{core-formulae:roots_are_distinct}) of Definition \ref{def:core-formulae} would be violated. \putInAppendix{ The following lemma states a technical result about core formul{\ae}, that will be used in the proof of Lemma \ref{lemma:coresep}: \begin{lemma}\label{lemma:lhs-root} For each quantifier-free core formula $\varphi$, each injective $\coreset{\mathcal{S}}$-model $(\dot{\astore},\mathfrak{h})$ of $\varphi$ such that $\card{\mathfrak{h}} \geq 1$, and each term $t \in \lroots{\varphi}$, we have $\dot{\astore}(t) \in \mathrm{loc}(\mathfrak{h}) \cup \dot{\astore}(\mathbb{C})$. \end{lemma} \proof{ Let $\varphi$ be a quantifier-free core formula of the following form (cf. Definition \ref{def:core-formulae}): \begin{equation}\label{eq:qf-core} \Asterisk_{i=1}^n \left(\Asterisk_{j=1}^{k_i} q_j^i(\vec{u}^i_j) \swand p_i(\vec{t}_i)\right) * \Asterisk_{i=n+1}^m x_i \mapsto (t^i_1,\ldots,t^i_\mathfrak{K}) \end{equation} The proof goes by induction on $\card{\mathfrak{h}}$. In the base case, $\card{\mathfrak{h}}=1$, we prove first that the formula contains exactly one points-to or predicate atom. Suppose, for a contradiction, that it contains two or more atoms, i.e.\ $\varphi = \alpha_1 * \ldots * \alpha_m$, for $m \geq 2$. If $\alpha_1$ and $\alpha_2$ are both points-to atoms, then $(\dot{\astore},\mathfrak{h})$ cannot be a $\coreset{\mathcal{S}}$-model of $\varphi$, since each points-to atom requires a distinct cell and $\card{\mathfrak{h}} = 1$; thus we distinguish two cases: \begin{compactitem} % \item If $\alpha_1=\Asterisk_{j=1}^{k} q_j(\vec{u}_j) \swand p(\vec{t})$ and $\alpha_2$ is a points-to atom then, since $\card{\mathfrak{h}} = 1$, we must have $(\dot{\astore}, \emptyset) \models_{\coreset{\mathcal{S}}} \alpha_1$ and $(\dot{\astore},\mathfrak{h}) \models \alpha_2$.
By Lemma \ref{lemma:context-emp}, we obtain $k=1$ and $q_1(\vec{u}_1) = p(\vec{t})$, which violates the condition on the uniqueness of roots in $q_1(\vec{u}_1) \swand p(\vec{t})$, in Definition \ref{def:core-formulae}. % \item Otherwise, $\alpha_1$ and $\alpha_2$ are both predicate atoms; we assume that $(\dot{\astore}, \emptyset) \models_{\coreset{\mathcal{S}}} \alpha_1$ (the case $(\dot{\astore}, \emptyset) \models_{\coreset{\mathcal{S}}} \alpha_2$ is identical). We obtain a contradiction by the argument used at the previous point. \end{compactitem} If $\varphi$ consists of a single points-to atom, then $\lroots{\varphi} = \emptyset$ and there is nothing to prove. Otherwise, $\varphi$ is of the form $\alpha_1 = \Asterisk_{i=1}^{k} q_i(\vec{u}_i) \swand p(\vec{t})$. By Lemma \ref{lemma:context-emp}, since $\mathcal{S}$ is progressing and $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^{k} q_i(\vec{u}_i) \swand p(\vec{t})$, either $k>1$ or $k=1$ and $q_1(\vec{u}_1) \neq p(\vec{t})$. By Condition (\ref{infrule:notemp}), there exists: \begin{compactenum}[(a)] % \item\label{it2:lhs-root} a rule $p(\vec{x}) \Leftarrow_\mathcal{S} \exists \vec{z} ~.~ \psi * \Asterisk_{j=1}^m p_j(\vec{w}_j)$, % \item\label{it3:lhs-root} separating conjunctions of predicate atoms $\gamma_1, \ldots, \gamma_m$, such that $\Asterisk_{j=1}^m \gamma_j = \Asterisk_{i=1}^k q_i(\vec{y}_i)$, % \item\label{it1:lhs-root} a substitution $\tau : \vec{z} \rightharpoonup \vec{x} \cup \bigcup_{i=1}^k \vec{y}_i$, \end{compactenum} that induce the rule: \[\Asterisk_{i=1}^{k} q_i(\vec{y}_i) \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{v} ~.~ \psi\tau * \Asterisk_{j=1}^m \gamma_j \swand p_j(\tau(\vec{w}_j)),\] where $\vec{v} = \vec{z} \setminus \mathrm{dom}(\tau)$. Assume w.l.o.g.
that $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^{k} q_i(\vec{u}_i) \swand p(\vec{t})$ is the consequence of the above rule, meaning that: \[(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \left(\exists \vec{v} ~.~ \psi\tau * \Asterisk_{j=1}^m \left(\gamma_j \swand p_j(\tau(\vec{w}_j))\right)\right)\sigma \text{, where } \sigma = [\vec{t}/\vec{x}, \vec{u}_1/\vec{y}_1, \ldots, \vec{u}_k/\vec{y}_k].\] Let $\overline{\astore}$ be the $\vec{v}$-associate of $\dot{\astore}$ such that $(\overline{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi\tau\sigma * \Asterisk_{j=1}^m \left(\gamma_j\sigma \swand p_j(\sigma(\tau(\vec{w}_j)))\right)$. Since $\mathcal{S}$ is progressing, $\psi$ contains a points-to atom $t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$, such that $(\overline{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \left(t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})\right)\tau\sigma$ and $(\overline{\astore},\emptyset) \models_{\coreset{\mathcal{S}}} \Asterisk_{j=1}^m \left(\gamma_j\sigma \swand p_j(\sigma(\tau(\vec{w}_j)))\right)$. Now consider $t \in \lroots{\varphi}$; then $t = \mathsf{root}(q_i(\vec{u}_i))$, for some $i \in \interv{1}{k}$. Since $\Asterisk_{j=1}^m \gamma_j\sigma = \Asterisk_{i=1}^k q_i(\vec{u}_i)$ by Condition (\ref{it3:lhs-root}), we have $t \in \term{\gamma_j\sigma}$, for some $j \in \interv{1}{m}$. Since $(\overline{\astore},\emptyset) \models_{\coreset{\mathcal{S}}} \gamma_j\sigma \swand p_j(\sigma(\tau(\vec{w}_j)))$, by Lemma \ref{lemma:context-emp}, we have $\overline{\astore}(t) = \overline{\astore}(\sigma(\tau(r)))$, where $r = \mathsf{root}(p_j(\vec{w}_j))$. Since $\mathcal{S}$ is connected, either $r \in \set{t_1, \ldots, t_\mathfrak{K}}$ or $r \in \mathbb{C}$, by Definition \ref{def:progress-connectivity}.
Since $t \in \lroots{\varphi}$, we have $\overline{\astore}(t) = \dot{\astore}(t)$, and we conclude that $\dot{\astore}(t) \in \mathrm{loc}(\mathfrak{h}) \cup \dot{\astore}(\mathbb{C})$. \noindent For the induction step $\card{\mathfrak{h}} > 1$, let $t = \mathsf{root}(q_j^i(\vec{u}_j^i))$, for some $i \in \interv{1}{n}$ and some $j \in \interv{1}{k_i}$. If $n > 1$ or $m > n$ in Equation (\ref{eq:qf-core}), we have $(\dot{\astore},\mathfrak{h}') \models_{\coreset{\mathcal{S}}} \Asterisk_{j=1}^{k_i} q_j^i(\vec{u}^i_j) \swand p_i(\vec{t}_i)$, for some heap $\mathfrak{h}' \subset \mathfrak{h}$, such that $\card{\mathfrak{h}'} \geq 1$ and, by the inductive hypothesis, we obtain $\dot{\astore}(t) \in \mathrm{loc}(\mathfrak{h}') \cup \dot{\astore}(\mathbb{C}) \subseteq \mathrm{loc}(\mathfrak{h}) \cup \dot{\astore}(\mathbb{C})$. Otherwise, $m = n = 1$ and the argument is similar to the one used in the base case. $\Box$} \begin{lemma}\label{lemma:varrightonly} Let $\phi = \Asterisk_{j=1}^{k} q_j(\vec{u}_j) \swand p(\vec{t})$ be a core formula and let $(\dot{\astore},\mathfrak{h})$ be an injective structure. If $\mathcal{S}$ is progressing and normalized, $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi$ and $x \in \vec{t} \setminus (\bigcup_{j=1}^k \vec{u}_j)$ then $\dot{\astore}(x) \in \mathrm{loc}(\mathfrak{h})$. \end{lemma} \proof{ We reason by induction on $\card{\mathfrak{h}}$. If $\mathfrak{h} = \emptyset$ then by Lemma \ref{lemma:context-emp}, we must have $k = 1$ and $\vec{u}_1 = \vec{t}$, thus $\vec{t} \setminus (\bigcup_{j=1}^k \vec{u}_j)$ is empty, which contradicts our hypothesis.
Otherwise, by definition of the rules in $\coreset{\mathcal{S}}$, there exists a rule $p(\vec{x}) \Leftarrow_\mathcal{S} \exists \vec{z} ~.~ \psi * \Asterisk_{j=1}^{m} p_j(\vec{w}_j)$, an associate $\dot{\overline{\astore}}$ of $\dot{\astore}$ and a substitution $\sigma: \vec{z} \rightharpoonup \vec{x} \cup \bigcup_{j=1}^k \vec{y}_j$ such that $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi\sigma\theta * \Asterisk_{j=1}^{m} \left(\gamma_j \swand p_j(\sigma(\vec{w}_j))\right)\theta$, where $\Asterisk_{j=1}^{m} \gamma_j = \Asterisk_{j=1}^{k} q_j(\vec{y}_j)$ and $\theta = [\vec{t}/\vec{x}, \vec{u}_1/\vec{y}_1, \ldots, \vec{u}_k/\vec{y}_k]$. Since $\mathcal{S}$ is normalized, by Condition \ref{it3:normalized} in Definition \ref{def:normalized}, $x$ occurs in all unfoldings of $p(\vec{t})$. Thus either $x$ occurs in $\psi\theta$ (hence also in $\psi\sigma\theta$), or $x$ occurs in $\vec{w}_j\theta$ (hence in $\vec{w}_j\sigma\theta$) for some $j \in \interv{1}{m}$. In the former case, necessarily $\dot{\astore}(x) \in \mathrm{loc}(\mathfrak{h})$, because $\psi$ is a points-to atom, since $\mathcal{S}$ is progressing. In the latter case, we have $(\dot{\astore},\mathfrak{h}') \models_{\coreset{\mathcal{S}}} \gamma_j \swand p_j(\sigma(\vec{w}_j))\theta$, for some subheap $\mathfrak{h}'$ of $\mathfrak{h}$, with $\card{\mathfrak{h}'} < \card{\mathfrak{h}}$. Since $x\not \in \bigcup_{j=1}^k \vec{u}_j$ by hypothesis and $\Asterisk_{j=1}^{m} \gamma_j = \Asterisk_{j=1}^{k} q_j(\vec{y}_j)$ we have $x \not \in \fv{\gamma_j\theta}$, thus $x \in (\vec{w}_j \setminus \fv{\gamma_j})\theta$. By the induction hypothesis, we deduce that $\dot{\astore}(x) \in \mathrm{loc}(\mathfrak{h}')$, hence $\dot{\astore}(x) \in \mathrm{loc}(\mathfrak{h})$. $\Box$}} Lemma \ref{lemma:coretrans} shows that any symbolic heap is equivalent to an effectively computable finite disjunction of core formul{\ae}, when the interpretation of formul{\ae} is restricted to injective structures.
For a symbolic heap $\phi \in \shk{\mathfrak{K}}$, we define the set $\coretrans{\phi}$, recursively on the structure of $\phi$, implicitly assuming w.l.o.g. that $\mathsf{emp} * \phi = \phi * \mathsf{emp} = \phi$: \[\begin{array}{lcll} \coretrans{\mathsf{emp}} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \set{\mathsf{emp}} & \hspace*{-1cm}\coretrans{t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})} \\ \coretrans{p(\vec{t})} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \set{\mathsf{emp} \swand p(\vec{t})} & \hspace*{-1cm}\coretrans{\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})} \\ \coretrans{t_1 \bumpeq t_2} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \left\{\begin{array}{cc} \set{\mathsf{emp}} & \mbox{ if $t_1 = t_2$} \\ \emptyset & \mbox{ if $t_1 \neq t_2$} \end{array}\right. & \hspace*{-1cm}\coretrans{t_1 \not\seq t_2} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \left\{\begin{array}{cc} \emptyset & \mbox{ if $t_1 = t_2$} \\ \set{\mathsf{emp}} & \mbox{ if $t_1 \neq t_2$} \end{array}\right. \\ \coretrans{\phi_1 * \phi_2} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \set{\psi_1 * \psi_2 \mid \psi_i \in \coretrans{\phi_i},~ i = 1,2} \\ \coretrans{\exists x ~.~ \phi_1} & \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} & \{\heapex x ~.~ \psi \mid \psi \in \coretrans{\phi_1}\} \cup \{\psi \mid \psi \in \coretrans{\phi_1[t/x]}, & t \in (\fv{\phi_1} \setminus \set{x}) \cup \mathbb{C}\} \end{array}\] For instance, if $\phi = \exists x~.~ p(x,y) * x \not \bumpeq y$ and $\mathbb{C} = \{ \mathsf{c} \}$, then $\coretrans{\phi} = \{ \heapex x~.~ \mathsf{emp} \swand p(x,y),\ \mathsf{emp} \swand p(\mathsf{c},y) \}$. 
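Since $\coretrans{\cdot}$ is purely syntactic, it can be prototyped directly. The following Python sketch is ours and not part of the formal development: formul{\ae} are encoded as nested tuples, with `("wand", EMP, a)` standing for $\mathsf{emp} \swand a$ and `("hexists", x, psi)` for $\heapex x ~.~ \psi$; it mirrors the recursive clauses above, restricted to symbolic heaps without context predicate atoms:

```python
# Hypothetical encoding (ours): terms are strings, a formula is a nested tuple.
EMP = ("emp",)

def fv(phi):
    # all terms occurring in phi (constants are filtered out later,
    # since (fv \ {x}) | consts absorbs them anyway)
    tag = phi[0]
    if tag == "emp":
        return set()
    if tag == "pto":
        return {phi[1], *phi[2]}
    if tag in ("eq", "neq"):
        return {phi[1], phi[2]}
    if tag == "pred":
        return set(phi[2])
    if tag == "star":
        return fv(phi[1]) | fv(phi[2])
    if tag == "exists":
        return fv(phi[2]) - {phi[1]}

def subst(phi, x, t):
    # replace free occurrences of x by t
    r = lambda u: t if u == x else u
    tag = phi[0]
    if tag == "emp":
        return phi
    if tag == "pto":
        return ("pto", r(phi[1]), tuple(r(u) for u in phi[2]))
    if tag in ("eq", "neq"):
        return (tag, r(phi[1]), r(phi[2]))
    if tag == "pred":
        return ("pred", phi[1], tuple(r(u) for u in phi[2]))
    if tag == "star":
        return ("star", subst(phi[1], x, t), subst(phi[2], x, t))
    if tag == "exists":
        return phi if phi[1] == x else ("exists", phi[1], subst(phi[2], x, t))

def star(a, b):
    # implicitly emp * phi = phi * emp = phi
    return b if a == EMP else a if b == EMP else ("star", a, b)

def coretrans(phi, consts):
    tag = phi[0]
    if tag in ("emp", "pto"):
        return {phi}
    if tag == "pred":                  # p(t) becomes emp -* p(t)
        return {("wand", EMP, phi)}
    if tag == "eq":
        return {EMP} if phi[1] == phi[2] else set()
    if tag == "neq":
        return set() if phi[1] == phi[2] else {EMP}
    if tag == "star":
        return {star(a, b) for a in coretrans(phi[1], consts)
                           for b in coretrans(phi[2], consts)}
    if tag == "exists":                # quantify x, or instantiate it
        x, body = phi[1], phi[2]
        res = {("hexists", x, psi) for psi in coretrans(body, consts)}
        for t in (fv(body) - {x}) | consts:
            res |= coretrans(subst(body, x, t), consts)
        return res

# the example above: exists x . p(x, y) * x != y, with C = {c}
phi = ("exists", "x", ("star", ("pred", "p", ("x", "y")), ("neq", "x", "y")))
assert coretrans(phi, {"c"}) == {
    ("hexists", "x", ("wand", EMP, ("pred", "p", ("x", "y")))),
    ("wand", EMP, ("pred", "p", ("c", "y"))),
}
```

On the example above, the prototype returns exactly the two core formul{\ae} listed, the instantiation $x \mapsto y$ being pruned by the clause for disequalities.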
\putInAppendix{ \begin{proposition}\label{prop:coretrans-subs} Consider a quantifier-free symbolic heap $\varphi$ and an injective substitution $\sigma$. If $\phi \in \coretrans{\varphi}$ then $\phi\sigma \in \coretrans{\varphi\sigma}$. \end{proposition} The following lemmas relate a symbolic heap $\phi$ with the core formul{\ae} $\psi \in \coretrans{\phi}$, by considering separately the cases where $\phi$ is quantifier-free, or existentially quantified. In the latter case, we require moreover that the set of rules providing the interpretation of predicates be normalized. \begin{lemma}\label{lemma:coretrans-qf} Given a quantifier-free symbolic heap $\phi \in \shk{\mathfrak{K}}$, containing only predicate atoms that are contexts, an injective structure $(\dot{\astore}, \mathfrak{h})$ is a $\coreset{\mathcal{S}}$-model of $\phi$ iff $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$, for some $\psi \in \coretrans{\phi}$. \end{lemma} \proof{ ``$\Rightarrow$'' By induction on the structure of $\phi$. We consider the following cases: \begin{compactitem} \item $\phi=\mathsf{emp}$, $\phi=t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$ and $\phi=\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$: in these cases, the only element in $\coretrans{\phi}$ is $\phi$ itself and we have the result. % \item $\phi=t_1 \bumpeq t_2$: since $(\dot{\astore}, \mathfrak{h}) \models t_1 \bumpeq t_2$, we have $\dot{\astore}(t_1) = \dot{\astore}(t_2)$ and $\mathfrak{h} = \emptyset$. Since $\dot{\astore}$ is injective, we obtain $t_1 = t_2$, $\coretrans{\phi} = \set{\mathsf{emp}}$ and $(\dot{\astore},\mathfrak{h}) \models \mathsf{emp}$, because $\mathfrak{h} = \emptyset$. 
% \item $\phi=t_1 \not\seq t_2$: since $(\dot{\astore}, \mathfrak{h}) \models t_1 \not\seq t_2$, we have $\dot{\astore}(t_1) \neq \dot{\astore}(t_2)$ and $\mathfrak{h} = \emptyset$, therefore $t_1 \neq t_2$, $\coretrans{t_1 \not\seq t_2} = \set{\mathsf{emp}}$ and $(\dot{\astore}, \mathfrak{h}) \models \mathsf{emp}$, because $\mathfrak{h} = \emptyset$. % \item $\phi=\phi_1 * \phi_2$: since $(\dot{\astore}, \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi_1 * \phi_2$, there exist heaps $\mathfrak{h}_1$ and $\mathfrak{h}_2$, such that $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$ and $(\dot{\astore}, \mathfrak{h}_i) \models_{\coreset{\mathcal{S}}} \phi_i$, for $i = 1,2$. By the inductive hypothesis, there exists $\psi_i \in \coretrans{\phi_i}$ such that $(\dot{\astore}, \mathfrak{h}_i) \models_{\coreset{\mathcal{S}}} \psi_i$, for $i=1,2$. Then $(\dot{\astore}, \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi_1 * \psi_2$, where $\psi_1 * \psi_2 \in \coretrans{\phi_1 * \phi_2}$. \end{compactitem} \noindent``$\Leftarrow$'' By induction on the structure of $\phi$; we consider only the equational atoms below, the proofs in the remaining cases being straightforward: \begin{compactitem} % \item $\phi=t_1 \bumpeq t_2$: since there exists $\psi \in \coretrans{\phi}$ such that $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$, necessarily $\coretrans{\phi} = \set{\mathsf{emp}}$, which implies that $t_1 = t_2$. Since $(\dot{\astore},\mathfrak{h}) \models \mathsf{emp}$, $\mathfrak{h} = \emptyset$ and $(\dot{\astore}, \mathfrak{h}) \models t_1 \bumpeq t_2$. % \item $\phi=t_1 \not\seq t_2$: since there exists $\psi \in \coretrans{\phi}$ such that $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$, necessarily $\coretrans{\phi} = \set{\mathsf{emp}}$, which implies that $t_1 \neq t_2$. Since $(\dot{\astore}, \mathfrak{h}) \models \mathsf{emp}$, $\mathfrak{h} = \emptyset$ and $(\dot{\astore}, \mathfrak{h}) \models t_1 \not\seq t_2$, by injectivity of $\dot{\astore}$.
$\Box$ \end{compactitem}}} \begin{lemma}\label{lemma:coretrans} Assume $\mathcal{S}$ is normalized. Consider an e-restricted\xspace normalized symbolic heap $\phi \in \shk{\mathfrak{K}}$ with no occurrences of context predicate symbols, and an injective structure $(\dot{\astore}, \mathfrak{h})$, such that $\mathrm{dom}(\dot{\astore}) = \fv{\phi} \cup \mathbb{C}$. We have $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \phi$ iff $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$, for some $\psi \in \coretrans{\phi}$. \end{lemma} \optionalProof{Lemma \ref{lemma:coretrans}}{sec:core}{ ``$\Rightarrow$'' By induction on $\size{\phi}$. We consider the following cases: \begin{compactitem} \item $\phi=\mathsf{emp}$, $\phi=t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$, $\phi=t_1 \bumpeq t_2$, $\phi=t_1 \not\seq t_2$ and $\phi=\phi_1 * \phi_2$: the proof is the same as the one in Lemma \ref{lemma:coretrans-qf}. % \item $\phi=p(\vec{t})$: in this case $\coretrans{\phi} = \set{\mathsf{emp} \swand p(\vec{t})}$ and the conclusion follows by an application of Lemma \ref{lemma:pred-equiv}. % \item $\phi=\exists x ~.~ \phi_1$: since $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \exists x ~.~ \phi_1$, there exists $\ell \in \mathbb{L}$ such that $(\dot{\astore}[x \leftarrow \ell], \mathfrak{h}) \models_\mathcal{S} \phi_1$ and we distinguish the following cases. \begin{compactitem} % \item If $\ell \not \in \dot{\astore}(\fv{\phi}\cup\mathbb{C})$, since $\phi$ is normalized, by Definition \ref{def:normalized} (\ref{it1:normalized}), $x$ occurs in a points-to or in a predicate atom of $\phi_1$. Since $\mathcal{S}$ is normalized, by Definition \ref{def:normalized} (\ref{it3:normalized}), we have that $\ell \in \mathrm{loc}(\mathfrak{h})$.
Since $\mathrm{dom}(\dot{\astore}) = \fv{\phi}\cup\mathbb{C}$, the store $\dot{\astore}[x \leftarrow \ell]$ is necessarily injective, hence $(\dot{\astore}[x \leftarrow \ell], \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi_1$, for some $\psi_1 \in \coretrans{\phi_1}$, by the inductive hypothesis and $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \heapex x ~.~ \psi_1$, by Lemma \ref{lemma:bound-quant}. % \item Otherwise, $\ell \in \dot{\astore}(\fv{\phi} \cup \mathbb{C})$ and let $t \in \fv{\phi} \cup \mathbb{C}$ be a term such that $\ell = \dot{\astore}(t)$. Then $(\dot{\astore}, \mathfrak{h}) \models_\mathcal{S} \phi_1[t/x]$ and $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi_1$, for some $\psi_1 \in \coretrans{\phi_1[t/x]}$, by the inductive hypothesis. \end{compactitem} \end{compactitem} \noindent``$\Leftarrow$'' By induction on $\size{\phi}$, considering the following cases: \begin{compactitem} \item $\phi=\mathsf{emp}$, $\phi=t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$, $\phi=t_1 \bumpeq t_2$, $\phi=t_1 \not\seq t_2$ and $\phi=\phi_1 * \phi_2$: the proof is the same as the one in Lemma \ref{lemma:coretrans-qf}. % \item $\phi=p(\vec{t})$: in this case $\psi = \mathsf{emp} \swand p(\vec{t})$ is the only possibility and the conclusion follows by an application of Lemma \ref{lemma:pred-equiv}. % \item $\phi=\exists x ~.~ \phi_1$: by the definition of $\coretrans{\phi}$, we distinguish the following cases: \begin{compactitem} \item If $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \heapex x ~.~ \psi_1$, for some $\psi_1 \in \coretrans{\phi_1}$, then $(\dot{\astore}[x \leftarrow \ell],\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi_1$, for some $\ell \in \mathrm{loc}(\mathfrak{h}) \setminus \dot{\astore}((\fv{\psi_1} \setminus \set{x}) \cup \mathbb{C})$. 
By the definition of $\coretrans{\phi_1}$, we have $\fv{\psi_1} \subseteq \fv{\phi_1}$ and suppose, for a contradiction, that there exists a variable $y \in \fv{\phi_1} \setminus \fv{\psi_1}$. Then $y$ can only occur either in an equality atom $y \bumpeq y$ or in some disequality $y \not\seq t$, for some term $t \neq y$, and nowhere else. Both cases are impossible, because $\phi$ is normalized, thus by Condition (\ref{it1bis:normalized}) of Definition \ref{def:normalized}, $y$ necessarily occurs in a points-to or predicate atom. Hence, $\fv{\phi_1} = \fv{\psi_1}$ and consequently, we obtain $\ell \in \mathrm{loc}(\mathfrak{h}) \setminus \dot{\astore}((\fv{\phi_1} \setminus \set{x}) \cup \mathbb{C})$. Since $\mathrm{dom}(\dot{\astore}) = (\fv{\phi_1} \setminus \set{x}) \cup \mathbb{C}$, by the hypothesis of the Lemma, $\dot{\astore}[x \leftarrow \ell]$ is injective and, by the induction hypothesis, we obtain $(\dot{\astore}[x \leftarrow \ell], \mathfrak{h}) \models_\mathcal{S} \phi_1$, thus $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \phi$. % \item Otherwise $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$, for some $\psi \in \coretrans{\phi_1[t/x]}$ and some $t \in \fv{\phi} \cup \mathbb{C}$. By the induction hypothesis, we have $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \phi_1[t/x]$, thus $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \exists x ~.~ \phi_1$. 
$\Box$ \end{compactitem} \end{compactitem}} Next, we give an equivalent condition for the satisfaction of a context predicate atom\longVersionOnly{ (Lemma \ref{lemma:simple-unfolding-sat})}, which relies on an unfolding of a symbolic heap into a core formula: \begin{definition}\label{def:simple-formulae} A formula $\varphi$ is a \emph{core unfolding} of a predicate atom $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$, written $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} \varphi$, iff there exists: \begin{compactenum} % \item\label{it1:simple-formulae} a rule $\Asterisk_{i=1}^n q_i(\vec{y}_i) \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{z} ~.~ \phi$, where $\phi$ is quantifier-free, and % \item\label{it2:simple-formulae} a substitution $\sigma = [\vec{t}/\vec{x}, \vec{u}_1/\vec{y}_1, \ldots, \vec{u}_n/\vec{y}_n] \cup \zeta$, $\zeta \subseteq \{(z,t) \mid z \in \vec{z},~ t \in \vec{t} \cup \bigcup_{i=1}^n \vec{u}_i\}$, such that $\varphi \in \coretrans{\phi\sigma}$. \end{compactenum} \end{definition} A core unfolding of a predicate atom is always a quantifier-free formula, obtained from the translation (into a disjunctive set of core formul{\ae}) of the quantifier-free matrix of the body of a rule, in which some of the existentially quantified variables in the rule occur instantiated by the substitution $\sigma$. For instance, the rule $\mathsf{emp} \swand p(x) \Leftarrow_{\coreset{\mathcal{S}}} \exists y ~.~ x \mapsto y$ induces the core unfoldings $\mathsf{emp} \swand p(a) \sunfold{\coreset{\mathcal{S}}} a \mapsto a$ and $\mathsf{emp} \swand p(a) \sunfold{\coreset{\mathcal{S}}} a \mapsto u$, via the substitutions $[a/x, a/y]$ and $[a/x, u/y]$, respectively.
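The substitutions $\zeta$ in Condition (\ref{it2:simple-formulae}) range over a finite set, so the core unfoldings of an atom can be enumerated effectively. A possible Python sketch (ours) of this enumeration:

```python
from itertools import product

def zeta_substs(existentials, actuals):
    """All partial maps zeta from the existential variables of a rule into
    the actual parameters: each variable is either left unbound (None) or
    mapped to one of the actuals."""
    options = [[None] + list(actuals) for _ in existentials]
    for choice in product(*options):
        yield {z: t for z, t in zip(existentials, choice) if t is not None}

# for the rule emp -* p(x) <= exists y . x -> y, instantiated with a:
# zeta = {y: a} yields the unfolding a -> a, while zeta = {} leaves y
# uninstantiated (the unfolding a -> u of the example)
assert list(zeta_substs(["y"], ["a"])) == [{}, {"y": "a"}]
```

The number of candidate substitutions is $(1+|\vec{t} \cup \bigcup_i \vec{u}_i|)^{|\vec{z}|}$, which is finite for each rule, as required for the effectiveness claims that follow.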
\additionalMaterial{Additional Material for Core Formul{\ae}}{sec:core}{app:core-unfoldings}{ \begin{lemma}\label{lemma:simple-unfolding-sat} Given an injective structure $(\dot{\astore},\mathfrak{h})$ and a context predicate atom $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$, we have $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$ iff $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$, for some core unfolding $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} \varphi$ and some injective extension $\dot{\overline{\astore}}$ of $\dot{\astore}$. \end{lemma} \proof{We assume w.l.o.g. a total well-founded order $\preceq$ on the set of terms $\mathbb{T}$ and, for a set $T \subseteq \mathbb{T}$, we denote by $\min_\preceq T$ the minimal term from $T$ with respect to this order. In the following, let $\theta \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} [\vec{t}/\vec{x},\vec{u}_1/\vec{y}_1,\ldots,\vec{u}_n/\vec{y}_n]$. \noindent ``$\Rightarrow$'' If $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$ then there exists a rule $\Asterisk_{i=1}^n q_i(\vec{y}_i) \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{z} ~.~ \phi$, where $\phi$ is quantifier-free, such that $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \exists \vec{z} ~.~ \phi\theta$. Let $\overline{\astore}$ be a (not necessarily injective) $\vec{z}$-associate of $\dot{\astore}$ such that $(\overline{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi\theta$.
We define a substitution $\tau$, such that $\mathrm{dom}(\tau)\stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=}\term{\phi\theta} \subseteq \mathrm{dom}(\overline{\astore})$ and for each $x \in \mathrm{dom}(\tau)$: \begin{compactitem} \item if $x \in \mathrm{dom}(\dot{\astore})$ then $\tau(x)\stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} x$, % \item else, if $x \not\in \mathrm{dom}(\dot{\astore})$ and $\overline{\astore}(x)=\dot{\astore}(y)$, for some $y \in \mathrm{dom}(\dot{\astore})$, then $\tau(x) \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \min_\preceq \{z \in \mathrm{dom}(\dot{\astore}) \mid \dot{\astore}(z)=\dot{\astore}(y)\}$, % \item otherwise, if $x \not\in \mathrm{dom}(\dot{\astore})$ and $\overline{\astore}(x)\neq\dot{\astore}(y)$, for all $y \in \mathrm{dom}(\dot{\astore})$, then $\tau(x)\stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \min_\preceq \{y \in \mathrm{dom}(\overline{\astore}) \mid \overline{\astore}(y)=\overline{\astore}(x)\}$. \end{compactitem} Let \(E \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \setof{\set{y \in \mathrm{dom}(\overline{\astore}) \mid \overline{\astore}(y)=\overline{\astore}(x)}}{x \in \mathrm{dom}(\overline{\astore})}\); by construction, the sets in $E$ are pairwise disjoint. Let $\dot{\overline{\astore}}$ be the restriction of $\overline{\astore}$ to the set $\mathrm{dom}(\dot{\astore}) \cup \set{\min_\preceq K \mid K \in E,~ K \cap \mathrm{dom}(\dot{\astore}) = \emptyset}$. Because $\dot{\astore}$ is injective, $\dot{\overline{\astore}}$ is easily shown to also be injective, thus it is an injective extension of $\dot{\astore}$. Moreover, because $(\overline{\astore}, \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi\theta$ and $\overline{\astore}$ agrees with $\dot{\overline{\astore}} \circ \tau$ on $\mathrm{dom}(\dot{\overline{\astore}})$, we deduce that $(\dot{\overline{\astore}}, \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi(\tau \circ \theta)$. 
We conclude by noticing that $(\dot{\overline{\astore}}, \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$, for some $\varphi \in \coretrans{\phi(\tau\circ\theta)}$, by an application of Lemma \ref{lemma:coretrans-qf}, because $\phi(\tau\circ\theta)$ is quantifier-free. \noindent ``$\Leftarrow$'' If $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} \varphi$, by Definition \ref{def:simple-formulae}, we have $\varphi \in \coretrans{\phi\theta}$, for some rule $\Asterisk_{i=1}^n q_i(\vec{y}_i) \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{z} ~.~ \phi$, where $\phi$ is quantifier-free, and some substitution $\theta = [\vec{t}/\vec{x}, \vec{u}_1/\vec{y}_1, \ldots, \vec{u}_n/\vec{y}_n]\cup\zeta$, where $\zeta \subseteq \{(z,t) \mid z \in \vec{z},~ t \in \vec{t} \cup \bigcup_{i=1}^n \vec{u}_i\}$. Since $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$ and $\phi\theta$ is quantifier-free, by Lemma \ref{lemma:coretrans-qf}, we obtain $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi\theta$, hence $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} (\exists \vec{z} ~.~ \phi)\theta$ and $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$ follows. $\Box$} \begin{lemma}\label{lemma:bijective-unfolding-sat} Given a bijective structure $(\dot{\astore},\mathfrak{h})$ and a context predicate atom $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$, we have $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$ if and only if $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$, for some core unfolding $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} \varphi$.
\end{lemma} \proof{``$\Rightarrow$'' Let $\dot{\astore}'$ be the restriction of $\dot{\astore}$ to $\vec{t} \cup \bigcup_{i=1}^n \vec{u}_i$. Clearly, we have $(\dot{\astore}', \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$. By Lemma \ref{lemma:simple-unfolding-sat}, there exists a core unfolding $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} \phi$ and an injective extension $\dot{\overline{\astore}}$ of $\dot{\astore}'$, such that $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi$. Let $\tau$ be the substitution defined by $\tau(t) = u$ if and only if $\dot{\overline{\astore}}(t) = \dot{\astore}(u)$, for all $t \in \term{\phi}$. Note that, since $\dot{\astore}$ is bijective, for each $t \in \mathrm{dom}(\dot{\overline{\astore}})$, there exists a unique $u \in \mathbb{T}$, such that $\dot{\overline{\astore}}(t) = \dot{\astore}(u)$, hence $\tau$ is well-defined. Furthermore, since $\dot{\overline{\astore}}$ is injective, $\tau$ is also injective. We have $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi\tau$ and we are left with proving that $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} \phi\tau$ is a core unfolding. By Proposition \ref{prop:coretrans-subs} we have $\phi\tau \in \coretrans{\varphi\sigma\tau}$, hence the result. ``$\Leftarrow$'' This is a consequence of Lemma \ref{lemma:simple-unfolding-sat}, using the fact that $\dot{\astore}$ is an injective extension of itself. $\Box$} } \putInAppendix{ \label{ap:witness} The following property of core formul{\ae} leads to a necessary and sufficient condition for their satisfiability (Lemma \ref{lemma:witness}). The idea is that the particular identity of locations outside of the heap, assigned by the $\nheapall$ quantifier, is not important when considering a model of a core formula. 
\begin{definition} For a set of locations $L \subseteq \mathbb{L}$, we define $\dot{\astore} \istoreq{L} \dot{\astore}'$ if and only if $\mathrm{dom}(\dot{\astore}) = \mathrm{dom}(\dot{\astore}')$ and, for each term $t \in \mathrm{dom}(\dot{\astore})$, if $\{\dot{\astore}(t), \dot{\astore}'(t)\} \cap L \neq \emptyset$ then $\dot{\astore}(t)=\dot{\astore}'(t)$. \end{definition} It is easy to check that $\istoreq{L}$ is an equivalence relation, for each set $L \subseteq \mathbb{L}$. \begin{lemma}\label{lemma:istore-eq} Let $\dot{\astore}$ and $\dot{\astore}'$ be two injective stores and $\mathfrak{h}$ be a heap, such that $\dot{\astore} \istoreq{\mathrm{loc}(\mathfrak{h})} \dot{\astore}'$. If $\mathcal{S}$ is progressing, then for every core formula $\varphi$, we have $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$ if and only if $(\dot{\astore}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$. \end{lemma} \proof{ We assume that $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$ and show that $(\dot{\astore}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$; the proof in the other direction is identical since $\istoreq{\mathrm{loc}(\mathfrak{h})}$ is symmetric. The proof is carried out by nested induction on $\card{\mathfrak{h}}$ and $\size{\varphi}$. We assume, w.l.o.g., that $\mathrm{dom}(\dot{\astore}) = \mathrm{dom}(\dot{\astore}') = \fv{\varphi} \cup \mathbb{C}$. This is without loss of generality since the truth value of $\varphi$ in $(\dot{\astore},\mathfrak{h})$ and $(\dot{\astore}',\mathfrak{h})$ depends only on the restriction of $\dot{\astore}$ (resp.\ $\dot{\astore}'$) to $\fv{\varphi}\cup \mathbb{C}$. For the base case assume that $\card{\mathfrak{h}}=0$. 
By hypothesis, $\varphi = \heapex \vec{x} \nheapall \vec{y} ~.~ \Asterisk_{i=1}^n \left(\Asterisk_{j=1}^{k_i} q_j^i(\vec{u}^i_j) \swand p_i(\vec{t}_i)\right) * \Asterisk_{i=n+1}^m x_i \mapsto (t^i_1,\ldots,t^i_\mathfrak{K})$ and since $\mathfrak{h} = \emptyset$, necessarily, $\vec{x} = \emptyset$ and $m = 0$. Let $\dot{\astore}_1$ be an injective $\vec{y}$-associate of $\dot{\astore}$, where, for all $y\in \vec{y}$, we have $\dot{\astore}_1(y) \in \mathbb{L} \setminus\left[\dot{\astore}((\fv{\varphi} \cup \vec{y}) \cup \mathbb{C})\right]$. Note that such a store exists because $\mathbb{L}$ is infinite, whereas $\mathrm{dom}(\dot{\astore})$ and $\vec{y}$ are both finite. By Lemma \ref{lemma:bound-quant} we have $(\dot{\astore}_1, \emptyset)\models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n \left(\Asterisk_{j=1}^{k_i} q_j^i(\vec{u}^i_j) \swand p_i(\vec{t}_i)\right)$. Thus for $i\in \interv{1}{n}$ we have $(\dot{\astore}_1, \emptyset) \models_{\coreset{\mathcal{S}}} \Asterisk_{j=1}^{k_i}q_j^i(\vec{u}^i_j) \swand p_i(\vec{t}_i)$, and by Lemma \ref{lemma:context-emp}, we deduce that $k_i = 1$, $q_1^i = p_i$ and $\dot{\astore}_1(\vec{u}^i_1) = \dot{\astore}_1(\vec{t}_i)$. Since $\dot{\astore}_1$ is injective, we deduce that $\vec{u}^i_1 = \vec{t}_i$, but this is impossible because, by hypothesis, the roots of a core formula are pairwise distinct. Hence $(\dot{\astore},\mathfrak{h}) \not\models_{\coreset{\mathcal{S}}} \varphi$ and the implication holds vacuously. For the induction step, assume that $\card{\mathfrak{h}}>0$; we consider the following cases: \begin{compactitem} \item $\varphi=\mathsf{emp}$: since $\card{\mathfrak{h}}>0$, we cannot have $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \mathsf{emp}$.
% \item $\varphi=t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$: in this case $\mathfrak{h} = \{(\dot{\astore}(t_0), (\dot{\astore}(t_1), \ldots, \dot{\astore}(t_\mathfrak{K})))\}$ and since $\dot{\astore}(t_0), \dot{\astore}(t_1), \ldots, \dot{\astore}(t_\mathfrak{K}) \in \mathrm{loc}(\mathfrak{h})$ and $\dot{\astore}\istoreq{\mathrm{loc}(\mathfrak{h})}\dot{\astore}'$, we also have $\mathfrak{h} = \{(\dot{\astore}'(t_0), (\dot{\astore}'(t_1), \ldots, \dot{\astore}'(t_\mathfrak{K})))\}$, thus $(\dot{\astore}',\mathfrak{h}) \models t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$. % \item $\varphi=\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$: since $\card{\mathfrak{h}} > 0$, $\varphi$ cannot be $p(\vec{t}) \swand p(\vec{t})$. Thus the first unfolding step is an instance of a rule obtained from \ref{infrule:notemp}. By Lemma \ref{lemma:simple-unfolding-sat}, there exists an injective extension $\dot{\overline{\astore}}$ of $\dot{\astore}$ such that $(\dot{\overline{\astore}}, \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$ where $\varphi \sunfold{\coreset{\mathcal{S}}} \psi$, and because $\mathcal{S}$ is progressing, $\psi$ is of the form $t_0 \mapsto (t_1,\ldots,t_\mathfrak{K}) * \psi'$. Since the truth value of $\psi$ in $(\dot{\overline{\astore}}, \mathfrak{h})$ depends only on the restriction of $\dot{\overline{\astore}}$ to $\fv{\varphi} \cup \mathbb{C}$, we assume, w.l.o.g., that $\mathrm{dom}(\dot{\overline{\astore}})$ is finite. The heap $\mathfrak{h}$ can thus be decomposed into $\mathfrak{h}_0 \uplus \mathfrak{h}'$, where $(\dot{\overline{\astore}}, \mathfrak{h}_0) \models_{\coreset{\mathcal{S}}} t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$ and $(\dot{\overline{\astore}}, \mathfrak{h}') \models_{\coreset{\mathcal{S}}} \psi'$. 
Consider the store $\mathfrak{s}_1 \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \{(x, \dot{\overline{\astore}}(x)) \mid x\in \mathrm{dom}(\dot{\overline{\astore}}) \setminus \mathrm{dom}(\dot{\astore})\wedge \dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h})\}$ and let $\dot{\astore}_1 \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \dot{\astore}' \cup \mathfrak{s}_1$. Since $\mathrm{dom}(\dot{\astore}) = \mathrm{dom}(\dot{\astore}')$ by hypothesis, $\dot{\astore}_1$ is well-defined. It is also injective because $\dot{\astore}'$ and $\dot{\overline{\astore}}$ are both injective, and if $\dot{\astore}_1(x) = \dot{\astore}_1(y)$, where $x\in \mathrm{dom}(\dot{\astore}')$ and $y\in \mathrm{dom}(\mathfrak{s}_1)$, then $\dot{\astore}_1(y) = \dot{\overline{\astore}}(y) \in \mathrm{loc}(\mathfrak{h})$, hence we also have $\dot{\astore}_1(x) = \dot{\astore}'(x) \in \mathrm{loc}(\mathfrak{h})$. By hypothesis $\dot{\astore} \istoreq{\mathrm{loc}(\mathfrak{h})} \dot{\astore}'$, hence $\dot{\astore}'(x) = \dot{\astore}(x) = \dot{\overline{\astore}}(x)$, so that $\dot{\overline{\astore}}(x) = \dot{\overline{\astore}}(y)$. Since $\dot{\overline{\astore}}$ is injective, we deduce that $x=y$. Now let $\dot{\astore}_2$ be an injection from $\mathrm{dom}(\dot{\overline{\astore}}) \setminus \mathrm{dom}(\dot{\astore}_1)$ into $\mathbb{L} \setminus \left(\mathrm{img}(\dot{\overline{\astore}}) \cup \mathrm{img}(\dot{\astore}') \cup \mathrm{loc}(\mathfrak{h})\right)$. Note that such an injection necessarily exists since $\mathrm{dom}(\dot{\overline{\astore}})$, $\mathrm{dom}(\dot{\astore}')$ and $\mathrm{loc}(\mathfrak{h})$ are all finite whereas $\mathbb{L}$ is infinite. Let $\dot{\overline{\astore}}' \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \dot{\astore}_1 \cup \dot{\astore}_2$; it is straightforward to verify that $\dot{\overline{\astore}}'$ is injective and that $\dot{\overline{\astore}} \istoreq{\mathrm{loc}(\mathfrak{h})} \dot{\overline{\astore}}'$.
By the inductive hypothesis we have $(\dot{\overline{\astore}'}, \mathfrak{h}_0) \models_{\coreset{\mathcal{S}}} t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$ and $(\dot{\overline{\astore}'}, \mathfrak{h}') \models_{\coreset{\mathcal{S}}} \psi'$, and by Lemma \ref{lemma:simple-unfolding-sat} we deduce that $(\dot{\astore}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$. % \item $\varphi=\heapex x ~.~ \psi$: by Lemma \ref{lemma:bound-quant}, there exists an $x$-associate $\overline{\astore}$ of $\dot{\astore}$, such that $\overline{\astore}(x) \in \mathrm{loc}(\mathfrak{h}) \setminus \dot{\astore}((\fv{\psi} \setminus \set{x}) \cup \mathbb{C})$ and $(\overline{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$. We distinguish two cases. \begin{compactitem} % \item If $\overline{\astore}(x)=\dot{\astore}(y)$ for some $y \in \mathrm{dom}(\dot{\astore})$ then $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi[y/x]$ and, by the induction hypothesis, we have $(\dot{\astore}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi[y/x]$. Since $\dot{\astore} \istoreq{\mathrm{loc}(\mathfrak{h})} \dot{\astore}'$ and $\dot{\astore}(y) \in \mathrm{loc}(\mathfrak{h})$, we have $\dot{\astore}(y) = \dot{\astore}'(y)$. Furthermore, since $\overline{\astore}(x) \not \in \dot{\astore}((\fv{\psi} \setminus \set{x}) \cup \mathbb{C})$, necessarily $y \not\in (\fv{\psi} \setminus \set{x}) \cup \mathbb{C}$ and, because $\dot{\astore}'$ is injective, $\dot{\astore}'(y) \not\in \dot{\astore}'((\fv{\psi} \setminus \set{x}) \cup \mathbb{C})$. Since $\dot{\astore}'(y) = \overline{\astore}(x) \in \mathrm{loc}(\mathfrak{h})$ and $(\dot{\astore}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi[y/x]$, we deduce that $(\dot{\astore}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \heapex x ~.~ \psi$. 
% \item Otherwise we have $\overline{\astore}(x) \neq \dot{\astore}(y)$ for all $y \in \mathrm{dom}(\dot{\astore})$ and $\overline{\astore}$ is therefore injective. Let $\overline{\astore}' \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \dot{\astore}'[x \leftarrow \overline{\astore}(x)]$. Suppose that $\overline{\astore}(x)=\dot{\astore}'(y)$, for some $y \in \mathrm{dom}(\dot{\astore}')$. Since $\dot{\astore} \istoreq{\mathrm{loc}(\mathfrak{h})} \dot{\astore}'$ we have $\mathrm{dom}(\dot{\astore}')=\mathrm{dom}(\dot{\astore})$, hence $y \in \mathrm{dom}(\dot{\astore})$ and, since $\dot{\astore}'(y) = \overline{\astore}(x) \in \mathrm{loc}(\mathfrak{h})$, we obtain $\dot{\astore}(y)=\dot{\astore}'(y)=\overline{\astore}(x)$, in contradiction with the assumption of this case. Thus $\overline{\astore}'$ is injective and, using the fact that $\overline{\astore} \istoreq{\mathrm{loc}(\mathfrak{h})} \overline{\astore}'$, we deduce that $(\overline{\astore}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$ by the induction hypothesis. Since $\overline{\astore}'(x) = \overline{\astore}(x) \not\in \mathrm{img}(\dot{\astore}')$, we have $\overline{\astore}'(x) \not\in \dot{\astore}'((\fv{\psi}\setminus\set{x}) \cup \mathbb{C})$. Moreover, $\overline{\astore}'(x) = \overline{\astore}(x) \in \mathrm{loc}(\mathfrak{h})$, thus $(\dot{\astore}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \heapex x ~.~ \psi$ by Lemma \ref{lemma:bound-quant}. \end{compactitem} % \item $\varphi=\nheapall x ~.~ \psi$: by Lemma \ref{lemma:bound-quant}, $(\dot{\astore},\mathfrak{h}) \models \nheapall x ~.~ \psi$ iff $(\dot{\astore}[x \leftarrow \ell], \mathfrak{h}) \models \psi$ holds for all locations $\ell \in \mathbb{L}$ such that $\ell \not \in \mathrm{loc}(\mathfrak{h}) \cup \dot{\astore}(\fv{\nheapall x ~.~ \psi})$. Let $\ell \in \mathbb{L}\setminus \left[\mathrm{loc}(\mathfrak{h})\cup \mathrm{img}(\dot{\astore})\right]$ be an arbitrary location.
Since $\mathbb{L}$ is infinite and $\mathrm{loc}(\mathfrak{h})\cup\mathrm{img}(\dot{\astore})$ is finite, such a location exists. By definition of $\nheapall$, we have $(\dot{\astore}[x\leftarrow \ell], \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$. Now let $\ell' \in \mathbb{L}\setminus \left[\mathrm{loc}(\mathfrak{h})\cup\mathrm{img}(\dot{\astore}')\right]$ be an arbitrary location. Clearly $\dot{\astore}[x\leftarrow\ell]$ and $\dot{\astore}'[x\leftarrow\ell']$ are injective stores and $\dot{\astore}[x\leftarrow\ell] \istoreq{\mathrm{loc}(\mathfrak{h})} \dot{\astore}'[x\leftarrow\ell']$, since $\ell,\ell' \not \in \mathrm{loc}(\mathfrak{h})$. By the induction hypothesis, we have $(\dot{\astore}'[x\leftarrow\ell'],\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$ and, since the choice of $\ell' \in \mathbb{L} \setminus \left[\mathrm{img}(\dot{\astore}') \cup \mathrm{loc}(\mathfrak{h})\right] = \mathbb{L} \setminus \left[\dot{\astore}'(\fv{\varphi} \cup \mathbb{C}) \cup \mathrm{loc}(\mathfrak{h})\right]$ was arbitrary, $(\dot{\astore}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \nheapall x ~.~ \psi$, by definition of $\nheapall$. $\Box$ \end{compactitem} } The following lemma gives an alternative condition for the satisfiability of core formul{\ae}. Intuitively, it is sufficient to instantiate the bounded universal quantifiers with arbitrary locations that are not in the image of the store, nor in the range of the heap.
\begin{lemma}\label{lemma:witness} Given a core formula $\varphi = \heapex \vec{x} \nheapall \vec{y} ~.~ \psi$, where $\psi$ is quantifier-free, and an injective structure $(\dot{\astore},\mathfrak{h})$, such that $\mathrm{dom}(\dot{\astore})=\fv{\varphi}\cup\mathbb{C}$, we have $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$ if and only if $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$, for some injective $(\vec{x}\cup\vec{y})$-associate $\dot{\overline{\astore}}$ of $\dot{\astore}$, such that $\dot{\overline{\astore}}(\vec{x}) \subseteq \mathrm{loc}(\mathfrak{h})$ and $\dot{\overline{\astore}}(\vec{y}) \cap \mathrm{loc}(\mathfrak{h}) = \emptyset$. \end{lemma} \proof{``$\Rightarrow$'' Since $\mathbb{L}$ is infinite and $\mathrm{dom}(\dot{\astore}) \cup \mathrm{loc}(\mathfrak{h})$ is finite, there exists an injective $(\vec{x}\cup\vec{y})$-associate $\dot{\overline{\astore}}$ of $\dot{\astore}$, such that $\dot{\overline{\astore}}(\vec{x}) \subseteq \mathrm{loc}(\mathfrak{h})$, $\dot{\overline{\astore}}(\vec{y}) \cap \mathrm{loc}(\mathfrak{h}) = \emptyset$ and $(\dot{\overline{\astore}},\mathfrak{h})\models_{\coreset{\mathcal{S}}} \psi$, by the semantics of the bounded quantifiers $\heapex$ and $\nheapall$ (see Lemma \ref{lemma:bound-quant}). ``$\Leftarrow$'' Let $\vec{x} = \set{x_1,\ldots,x_n}$, $\vec{y} = \set{y_1,\ldots,y_m}$ and let $\ell_1, \ldots, \ell_n \in \mathrm{loc}(\mathfrak{h}) \setminus \dot{\astore}\left((\fv{\psi}\setminus(\vec{x}\cup\vec{y})) \cup \mathbb{C}\right)$ and $\ell_{n+1}, \ldots, \ell_{n+m} \in \mathbb{L} \setminus \left(\mathrm{loc}(\mathfrak{h})\cup\dot{\astore}((\fv{\psi}\setminus\vec{y})\cup\mathbb{C})\right)$ be arbitrary locations; since $\mathbb{L}$ is infinite and $\fv{\psi}\cup \mathbb{C} \cup \mathrm{loc}(\mathfrak{h})$ is finite, such locations necessarily exist. Let $\dot{\overline{\astore}} = \dot{\astore}[x_1\leftarrow\ell_1,\ldots,x_{n}\leftarrow\ell_{n}]$.
Then $\dot{\overline{\astore}}[y_1 \leftarrow \ell_{n+1}, \ldots, y_{m} \leftarrow \ell_{n+m}] \istoreq{\mathrm{loc}(\mathfrak{h})} \dot{\overline{\astore}}$, thus $(\dot{\overline{\astore}}[y_1 \leftarrow \ell_{n+1}, \ldots, y_{m} \leftarrow \ell_{n+m}],\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$, by Lemma \ref{lemma:istore-eq}. Since the choice of $\ell_{n+1},\ldots,\ell_{n+m}$ is arbitrary, we deduce that $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \nheapall \vec{y} ~.~ \psi$ and that $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \heapex \vec{x} \nheapall \vec{y} ~.~ \psi$. $\Box$}} We now define an equivalence relation of finite index on the set of injective structures. Intuitively, an equivalence class is defined by the set of core formul{\ae} that are satisfied by all structures in the class (with some additional conditions). First, we introduce the overall set of core formul{\ae}, over which these equivalence classes are defined: \begin{definition}\label{def:allvars} Let $\allvars{\mathcal{P}} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \allvars{\mathcal{P}}^1 \cup \allvars{\mathcal{P}}^2$, such that $\allvars{\mathcal{P}}^1 \cap \allvars{\mathcal{P}}^2 = \emptyset$ and $\card{\allvars{\mathcal{P}}^i} = \probwidth{\mathcal{P}}$, for $i = 1,2$ and denote by $\core{\mathcal{P}}$ the set of core formul{\ae} $\varphi$ such that $\roots{\varphi} \cap \fv{\varphi} \subseteq \allvars{\mathcal{P}}^1$, $\roots{\varphi} \setminus \fv{\varphi} \subseteq \allvars{\mathcal{P}}^2 \cup \mathbb{C}$ and no variable in $\allvars{\mathcal{P}}^1$ is bound in $\varphi$. \end{definition} Note that $\core{\mathcal{P}}$ is a finite set, because both $\allvars{\mathcal{P}}$ and $\mathbb{C}$ are finite.
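For instance, assuming $\allvars{\mathcal{P}}^1 = \set{u,v}$ and $\allvars{\mathcal{P}}^2 = \set{z}$, with $p$ unary and $q$ binary (an illustrative choice of predicates and variables only), we have:
\[
\nheapall z ~.~ q(v,z) \swand p(u) \;\in\; \core{\mathcal{P}}
\qquad\text{whereas}\qquad
\nheapall v ~.~ q(v,z) \swand p(u) \;\not\in\; \core{\mathcal{P}},
\]
since, in the first formula, the roots $u$ and $v$ are free and belong to $\allvars{\mathcal{P}}^1$ and the only bound variable is $z \in \allvars{\mathcal{P}}^2$, whereas the second formula binds the variable $v \in \allvars{\mathcal{P}}^1$.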
Intuitively, $\allvars{\mathcal{P}}^1$ will denote ``local'' variables introduced by unfolding the definitions on the left-hand sides of the entailments, whereas $\allvars{\mathcal{P}}^2$ will denote existential variables occurring on the right-hand sides. Second, we characterize an injective structure by the set of core formul{\ae} it satisfies: \begin{definition}\label{def:coreabs} For a core formula $\varphi = \heapex \vec{x} \nheapall \vec{y} ~.~ \psi$, we denote by $\witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\varphi}$ the set of stores $\dot{\overline{\astore}}$ that are injective $(\vec{x}\cup\vec{y})$-associates of $\dot{\astore}$, and such that:\begin{inparaenum}[(1)] \item\label{it1:witset} $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$, % \item\label{it2:witset} $\dot{\overline{\astore}}(\vec{x}) \subseteq \mathrm{loc}(\mathfrak{h})$, and % \item\label{it3:witset} $\dot{\overline{\astore}}(\vec{y}) \cap \mathrm{loc}(\mathfrak{h}) = \emptyset$. \end{inparaenum} The elements of this set are called \emph{witnesses for $(\dot{\astore},\mathfrak{h})$ and $\varphi$}. The \emph{core abstraction} of an injective structure $(\dot{\astore},\mathfrak{h})$ is the set $\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$ of core formul{\ae} $\varphi \in \core{\mathcal{P}}$ for which there exists a witness $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\varphi}$ such that $\dot{\overline{\astore}}(\lroots{\varphi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$. \end{definition} An injective structure $(\dot{\astore},\mathfrak{h})$ satisfies each core formula $\varphi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$\footnote{An easy consequence of Lemma \ref{lemma:witness}\shortVersionOnly{ in Appendix \ref{ap:witness}}.}, a fact witnessed by an extension of the store assigning the universally quantified variables arbitrary locations outside of the heap.
Further, any core formula $\varphi$ such that $(\dot{\astore},\mathfrak{h})\models \varphi$ and $\lroots{\varphi} = \emptyset$ occurs in $\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$. Our entailment checking algorithm relies on the definition of the \emph{profile} of a symbolic heap. Since each symbolic heap is equivalent to a finite disjunction of existential core formul{\ae}, when interpreted over injective normal structures, it is sufficient to consider only profiles of core formul{\ae}: \begin{definition}\label{def:profile} A \emph{profile} for an entailment problem $\mathcal{P} = (\mathcal{S}, \Sigma)$ is a relation $\mathcal{F} \subseteq \core{\mathcal{P}} \times 2^{\core{\mathcal{P}}}$ such that, for any core formula $\phi \in \core{\mathcal{P}}$ and any set of core formul{\ae} $F \in 2^{\core{\mathcal{P}}}$, we have $(\phi, F) \in \mathcal{F}$ iff $F = \coreabs{\dot{\astore}, \mathfrak{h}}{\mathcal{P}}$, for some injective normal $\coreset{\mathcal{S}}$-model $(\dot{\astore},\mathfrak{h})$ of $\phi$, with $\mathrm{dom}(\dot{\astore}) = \fv{\phi}\cup \mathbb{C}$. \end{definition} Assuming the existence of a profile, the effective construction of which will be given in Section \ref{sec:coreabs}, the following lemma provides an algorithm that decides the validity of $\mathcal{P}$: \begin{lemma}\label{lemma:entailment} Let $\mathcal{P} = (\mathcal{S}, \Sigma)$ be a normalized e-restricted\xspace entailment problem and $\mathcal{F} \subseteq \core{\mathcal{P}} \times 2^{\core{\mathcal{P}}}$ be a profile for $\mathcal{P}$. Then $\mathcal{P}$ is valid iff, for each sequent $\phi \vdash_\mathcal{P} \psi_1,\ldots,\psi_n$, each core formula $\varphi \in \coretrans{\phi}$ and each pair $(\varphi,F) \in \mathcal{F}$, we have $F \cap \coretrans{\psi_i} \neq \emptyset$, for some $i \in \interv{1}{n}$. 
\end{lemma} \optionalProof{Lemma \ref{lemma:entailment}}{sec:core}{``$\Rightarrow$'' Let $\phi \vdash_\mathcal{P} \psi_1, \ldots, \psi_n$ be a sequent and $\varphi \in \coretrans{\phi}$ be a core formula. Since $\phi$ is quantifier-free and $\fv{\phi} = \emptyset$ (Definition \ref{def:entailment}), we deduce that $\varphi$ is quantifier-free and $\roots{\varphi} \subseteq \term{\phi} \subseteq \mathbb{C}$, hence $\varphi \in \core{\mathcal{P}}$, by Definition \ref{def:allvars}. If there is no set of core formul{\ae} $F \in 2^{\core{\mathcal{P}}}$ such that $(\varphi, F) \in \mathcal{F}$, then there is nothing to prove. Otherwise, let $F \in 2^{\core{\mathcal{P}}}$ be a set of core formul{\ae}, such that $(\varphi, F) \in \mathcal{F}$. By Definition \ref{def:profile}, there exists an injective normal $\coreset{\mathcal{S}}$-model $(\dot{\astore},\mathfrak{h})$ of $\varphi$, such that $F = \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$. Since $\mathcal{P}$ is valid, $\phi \models_\mathcal{S} \bigvee_{i=1}^n \psi_i$, hence there exists $i \in \interv{1}{n}$, such that $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \psi_i$. Since $\mathrm{dom}(\dot{\astore})=\mathbb{C}=\fv{\psi_i} \cup \mathbb{C}$, by Lemma \ref{lemma:coretrans}, we obtain $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \zeta$, for some $\zeta \in \coretrans{\psi_i}$. Since $\fv{\zeta} \subseteq \fv{\psi_i} = \emptyset$, we also have that $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \zeta$. We show that $\zeta \in \core{\mathcal{P}}$. First, all predicate atoms in $\zeta$ are of the form $\mathsf{emp} \swand p(\vec{t})$, and if $\zeta$ contains two distinct occurrences of atoms $\mathsf{emp} \swand p(\vec{t})$ and $\mathsf{emp} \swand q(\vec{s})$ with $\roots{p(\vec{t})} = \roots{q(\vec{s})}$ then $\zeta$ cannot be satisfiable, because the same location cannot be allocated in two disjoint parts of the heap. 
Second, since $\mathcal{P}$ is normalized, all existential variables must occur in a predicate or points-to atom. Thus all the conditions of Definition \ref{def:core-formulae} are satisfied. Finally, since $\card{\allvars{\mathcal{P}}^2} = \probwidth{\mathcal{P}} \geq \size{\psi_i}$, we may assume up to an $\alpha$-renaming that all the bound variables in $\psi_i$ are in $\allvars{\mathcal{P}}^2$, hence the same holds for $\zeta$. Since any predicate atom that occurs in a core formula in $\coretrans{\psi_i}$ is of the form $\mathsf{emp} \swand p(\vec{t})$, we have $\lroots{\zeta} = \emptyset$. By Definition \ref{def:coreabs}, we have $\zeta \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}} = F$, thus $F \cap \coretrans{\psi_i} \neq \emptyset$. \noindent``$\Leftarrow$'' Let $\phi \vdash_\mathcal{P} \psi_1, \ldots, \psi_n$ be a sequent. Let $(\mathfrak{s},\mathfrak{h})$ be an $\mathcal{S}$-model of $\phi$ and let $\dot{\astore}$ be the restriction of $\mathfrak{s}$ to $\mathbb{C}$; the store $\dot{\astore}$ is injective (by Assumption \ref{ass:dist-const}, all constants are mapped to pairwise distinct locations) and, since $\fv{\phi} = \fv{\psi_1} = \dots = \fv{\psi_n} = \emptyset$, we have $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \phi$. It is sufficient to prove that $(\dot{\astore},\mathfrak{h}) \models_\mathcal{S} \psi_i$, for some $i \in \interv{1}{n}$, because in this case we also have $(\mathfrak{s}, \mathfrak{h}) \models_\mathcal{S} \psi_i$. By Lemma \ref{lemma:normal-entailment}, it is sufficient to show that any injective normal $\mathcal{S}$-model of $\phi$ is an $\mathcal{S}$-model of $\psi_i$, for some $i \in \interv{1}{n}$, so let us assume that $(\dot{\astore},\mathfrak{h})$ is also a normal $\mathcal{S}$-model of $\phi$. Since $\fv{\phi} = \emptyset$, by Lemma \ref{lemma:coretrans-qf}, we have $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$, for some $\varphi \in \coretrans{\phi}$.
By Definition \ref{def:profile}, we have $(\varphi, \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}) \in \mathcal{F}$, hence $\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}} \cap \coretrans{\psi_i} \neq \emptyset$, for some $i \in \interv{1}{n}$. Then there exists a core formula $\zeta \in \coretrans{\psi_i}$, such that $(\dot{\astore}, \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \zeta$, by Definition \ref{def:coreabs} and, since $\mathrm{dom}(\dot{\astore}) = \mathbb{C} = \fv{\psi_i} \cup \mathbb{C}$, by Lemma \ref{lemma:coretrans}, we obtain $(\dot{\astore}, \mathfrak{h}) \models_{\mathcal{S}} \psi_i$. Since the choice of $(\dot{\astore},\mathfrak{h})$ is arbitrary, each injective normal $\mathcal{S}$-model of $\phi$ is a model of $\psi_i$, for some $i \in \interv{1}{n}$. $\Box$ } \shortVersionOnly{ The proof relies on Lemma \ref{lemma:normal-entailment}, according to which entailments can be tested by considering only normal models. As one expects, Lemma \ref{lemma:coretrans} is used in this proof to ensure that the translation $\coretrans{.}$ of symbolic heaps into core formul{\ae} preserves the injective models. } \section{Construction of the Profile Function} \label{sec:coreabs} \longVersionOnly{\subsection{Construction Rules}} For a given normalized entailment problem $\mathcal{P} = (\mathcal{S}, \Sigma)$, we describe the construction of a profile $\mathcal{F}_\mathcal{P} \subseteq \core{\mathcal{P}} \times 2^{\core{\mathcal{P}}}$, recursively on the structure of core formul{\ae}. We assume that the set of rules $\mathcal{S}$ is progressing, connected and e-restricted\xspace. The relation $\mathcal{F}_\mathcal{P}$ is the least set satisfying the recursive constraints (\ref{eq:pto-core}), (\ref{eq:pred-core}), (\ref{eq:sep-core}) and (\ref{eq:ex-core}), given in this section. Since these recursive definitions are monotonic, the least fixed point exists and is unique.
We shall prove later (Theorem \ref{thm:main}) that the least fixed point can, moreover, be attained in a finite number of steps by a standard Kleene iteration. \additionalMaterial{Additional Material for the Construction of Profiles}{sec:coreabs}{app:coreabs}{} \paragraph{Points-to Atoms} For a points-to atom $t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$, such that $t_0, \ldots, t_\mathfrak{K} \in \allvars{\mathcal{P}}^1 \cup \mathbb{C}$, we have: \begin{eqnarray}\label{eq:pto-core} &&(t_0 \mapsto (t_1,\ldots,t_\mathfrak{K}),~ F) \in \mathcal{F}_\mathcal{P} \text{, iff $F$ is the set containing $t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$ and all core formul{\ae} }\nonumber\\[-1mm] && \text{of the form } \nheapall \vec{z} ~.~ \Asterisk_{i=1}^n ~q_i(\vec{u}_i) \swand p(\vec{t}) \in \core{\mathcal{P}}, \text{ where } \vec{z}=(\vec{t} \cup \vec{u}_1 \cup \ldots \cup \vec{u}_n) \setminus \left(\set{t_0,\ldots,t_\mathfrak{K}} \cup \mathbb{C}\right)\nonumber\\[-1mm] &&\text{ such that } \mathsf{emp} \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} t_0 \mapsto (t_1,\ldots,t_\mathfrak{K}) * \Asterisk_{i=1}^n ~\mathsf{emp} \swand q_i(\vec{u}_i) \end{eqnarray} For instance, if $\mathcal{S} = \{ p(x) \Leftarrow \exists y,z~.~ x \mapsto y * q(y,z),\ q(x,y) \Leftarrow x \mapsto y \}$, with $\allvars{\mathcal{P}}^1 = \{ u,v \}$ and $\allvars{\mathcal{P}}^2 = \{ z \}$, then $\mathcal{F}_\mathcal{P}$ contains the pair $(u \mapsto v, F)$ with $F = \{ u \mapsto v, \mathsf{emp} \swand q(u,v), \nheapall z~.~ q(v,z) \swand p(u) \}$. 
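To illustrate how constraint (\ref{eq:pto-core}) yields the last formula of $F$ in this example, observe that (sketching the core unfolding; the intermediate rule applications are elided) we have:
\[
\mathsf{emp} \swand p(u) ~\sunfold{\coreset{\mathcal{S}}}~ u \mapsto v * \mathsf{emp} \swand q(v,z)
\]
which instantiates the last condition of (\ref{eq:pto-core}) with $n = 1$, $(t_0,t_1) = (u,v)$, $q_1(\vec{u}_1) = q(v,z)$ and $p(\vec{t}) = p(u)$; the quantified tuple is then $\vec{z} = (\set{u} \cup \set{v,z}) \setminus (\set{u,v} \cup \mathbb{C}) = \set{z}$, yielding $\nheapall z ~.~ q(v,z) \swand p(u) \in F$.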
\putInAppendix{ \longVersionOnly{We prove that constraint (\ref{eq:pto-core}) indeed defines the profile of a points-to atom:} \begin{lemma}\label{lemma:pto-core} If $\mathcal{S}$ is progressing, then for all terms $t_0, \ldots, t_\mathfrak{K} \in \allvars{\mathcal{P}}^1 \cup \mathbb{C}$ and all sets of core formul{\ae} $F \in 2^{\core{\mathcal{P}}}$, we have $(t_0 \mapsto (t_1, \ldots, t_\mathfrak{K}), F) \in \mathcal{F}_\mathcal{P}$ if and only if $F=\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$, for some injective $\mathcal{S}$-model $(\dot{\astore},\mathfrak{h})$ of $t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$, such that $\mathrm{dom}(\dot{\astore})=\set{t_0, \ldots, t_\mathfrak{K}} \cup \mathbb{C}$. \end{lemma} \proof{ Let $(\dot{\astore},\mathfrak{h})$ be an arbitrary injective model of $t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$ where $\mathrm{dom}(\dot{\astore}) = \set{t_0, \ldots, t_\mathfrak{K}} \cup \mathbb{C}$ and $\mathfrak{h} = \set{(\dot{\astore}(t_0), (\dot{\astore}(t_1), \ldots, \dot{\astore}(t_\mathfrak{K})))}$. We show $F = \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$ below, where $F$ is defined by (\ref{eq:pto-core}): \noindent ``$\subseteq$'' Let $\phi \in F$ and consider the following cases: \begin{compactitem} \item If $\phi = t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$ then $(\dot{\astore},\mathfrak{h}) \models t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$ and $\lroots{\phi}=\emptyset$, thus $\phi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$ (see Definition \ref{def:coreabs}). % \item Otherwise, $\phi = \nheapall \vec{z} ~.~ \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$, where $\vec{z} = \left(\bigcup_{i=1}^n \vec{u}_i \cup \vec{t}\right) \setminus (\{t_0,\ldots,t_\mathfrak{K}\} \cup \mathbb{C})$ and $\mathsf{emp} \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} t_0 \mapsto (t_1,\ldots,t_\mathfrak{K}) * \Asterisk_{i=1}^n \mathsf{emp} \swand q_i(\vec{u}_i)$. 
Note that, by the progressivity condition, we have $t_0 = \mathsf{root}(p(\vec{t}))$. By Definition \ref{def:simple-formulae}, there exists a rule: \[\mathsf{emp} \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{v} ~.~ \psi * \Asterisk_{i=1}^n (\mathsf{emp} \swand q_i(\vec{y}_i))~~(\dagger)\] such that $t_0 \mapsto (t_1,\ldots,t_\mathfrak{K}) \in \coretrans{\psi\sigma}$ and $\sigma$ is an extension of $[\vec{t}/\vec{x},\vec{u}_1/\vec{y}_1,\ldots,\vec{u}_n/\vec{y}_n]$ with pairs $(z,t)$, where $z \in \vec{v}$ and $t \in \vec{t} \cup \bigcup_{i=1}^n \vec{u}_i$. By (\ref{infrule:notemp}), the rule ($\dagger$) occurs because of the existence of a rule \[p(\vec{x}) \Leftarrow_\mathcal{S} \exists \vec{w} ~.~ \varphi * \Asterisk_{i=1}^n q_i(\vec{z}_i)~~(\dagger\dagger)\] and a substitution $\tau : \vec{w} \rightharpoonup \vec{x}$, such that $\psi = \varphi\tau$, $\vec{v} = \vec{w} \setminus \mathrm{dom}(\tau)$ and $\vec{y}_i = \tau(\vec{z}_i)$, for all $i \in \interv{1}{n}$. Applying $\tau$ to $(\dagger\dagger)$, by (\ref{infrule:notemp}), we obtain the rule: \[\Asterisk_{i=1}^n q_i(\vec{y}_i) \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{v} ~.~ \psi * \Asterisk_{i=1}^n (q_i(\vec{y}_i) \swand q_i(\vec{y}_i))~~(\ddagger)\] Let $\dot{\overline{\astore}}$ be an injective $\vec{v}$-associate of $\dot{\astore}$. Such an associate necessarily exists: for instance, $\dot{\overline{\astore}}$ can map $\vec{v}$ to pairwise distinct locations that are also distinct from those in $\mathrm{img}(\dot{\astore})$; since $\mathbb{L}$ is infinite and $\mathrm{dom}(\dot{\astore})$ is finite, such locations always exist. By $\alpha$-renaming if necessary, we can assume that $\vec{v} \cap \set{t_0,\ldots,t_\mathfrak{K}} = \emptyset$, thus $\dot{\astore}$ and $\dot{\overline{\astore}}$ agree on $\{t_0,\ldots,t_\mathfrak{K}\}$ and we obtain $(\dot{\overline{\astore}},\mathfrak{h}) \models t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$.
Since $t_0 \mapsto (t_1, \ldots, t_\mathfrak{K}) \in \coretrans{\psi\sigma}$, by Lemma \ref{lemma:coretrans-qf}, we have $(\dot{\overline{\astore}},\mathfrak{h}) \models \psi\sigma$. By Lemma \ref{lemma:context-emp}, we have $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi\sigma * \Asterisk_{i=1}^n (q_i(\vec{u}_i) \swand q_i(\vec{u}_i))$ and, by rule ($\ddagger$), we obtain $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$. It remains to prove that $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\phi}$. Since there are no existentially quantified variables in $\phi$, it suffices to show that $\dot{\overline{\astore}}(\vec{z}) \cap \mathrm{loc}(\mathfrak{h}) = \dot{\overline{\astore}}(\vec{z}) \cap \dot{\overline{\astore}}(\set{t_0,\ldots,t_\mathfrak{K}}) = \dot{\overline{\astore}}(\vec{z} \cap \set{t_0,\ldots,t_\mathfrak{K}}) = \emptyset$, because $\dot{\overline{\astore}}$ agrees with $\dot{\astore}$ on $\set{t_0,\ldots,t_\mathfrak{K}}$, $\dot{\overline{\astore}}$ is injective and $\vec{z} \cap \set{t_0,\ldots,t_\mathfrak{K}} = \emptyset$, by (\ref{eq:pto-core}). Finally, we prove the condition of Definition \ref{def:coreabs}, namely that $\dot{\overline{\astore}}(\lroots{\nheapall\vec{z} ~.~ \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})}) \cap \mathrm{dom}(\mathfrak{h}) = \{\dot{\overline{\astore}}(\mathsf{root}(q_i(\vec{u}_i))) \mid i \in \interv{1}{n}\} \cap \{\dot{\overline{\astore}}(t_0)\} = \emptyset$. Suppose, for a contradiction, that this set is not empty, thus $\dot{\overline{\astore}}(t_0)=\dot{\overline{\astore}}(\mathsf{root}(q_i(\vec{u}_i)))$, for some $i \in \interv{1}{n}$. Because $\dot{\overline{\astore}}$ is injective, we have $t_0=\mathsf{root}(q_i(\vec{u}_i))$.
However, this contradicts the condition $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t}) \in \core{\mathcal{P}}$, which, by Definition \ref{def:core-formulae}, requires that $\mathsf{root}(p(\vec{t})) \not = \mathsf{root}(q_i(\vec{u}_i))$, i.e., $t_0 \neq \mathsf{root}(q_i(\vec{u}_i))$. \end{compactitem} \noindent``$\supseteq$'' Let $\phi = \heapex \overline{\vec{x}} \nheapall \overline{\vec{y}} ~.~ \psi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$ be a core formula, where $\psi$ is quantifier-free. Note that, since $\phi \in \core{\mathcal{P}}$, we have $(\overline{\vec{x}} \cup \overline{\vec{y}}) \cap \allvars{\mathcal{P}}^1 =\emptyset$ because no variable in $\allvars{\mathcal{P}}^1$ can be bound in $\phi$; thus, since $\set{t_0,\ldots,t_\mathfrak{K}} \subseteq \allvars{\mathcal{P}}^1 \cup \mathbb{C}$ by hypothesis, we have: \[(\overline{\vec{x}}\cup\overline{\vec{y}}) \cap \set{t_0,\ldots,t_\mathfrak{K}} = \emptyset~~ (\dagger).\] By Definition \ref{def:coreabs}, we have $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$, for some injective witness $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\phi}$, such that $\dot{\overline{\astore}}(\overline{\vec{x}}) \subseteq \mathrm{loc}(\mathfrak{h})$ and $\dot{\overline{\astore}}(\overline{\vec{y}}) \cap \mathrm{loc}(\mathfrak{h}) = \emptyset$. Since $(\dot{\astore},\mathfrak{h}) \models t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$, it must be the case that $\card{\mathfrak{h}}=1$, hence $\psi$ must have one of the following forms: \begin{compactitem} % \item $v_0 \mapsto (v_1,\ldots,v_\mathfrak{K})$: in this case $\mathrm{dom}(\mathfrak{h})=\{\dot{\overline{\astore}}(v_0)\}$ and $\mathfrak{h}(\dot{\overline{\astore}}(v_0))=(\dot{\overline{\astore}}(v_1), \ldots, \dot{\overline{\astore}}(v_\mathfrak{K}))$, thus $\mathrm{loc}(\mathfrak{h}) = \{\dot{\overline{\astore}}(v_0), \ldots, \dot{\overline{\astore}}(v_\mathfrak{K})\}$.
By ($\dagger$), $\dot{\overline{\astore}}$ and $\dot{\astore}$ must agree over $t_0, \ldots, t_\mathfrak{K}$, hence we have $\dot{\overline{\astore}}(t_i) = \dot{\astore}(t_i)$, for all $i \in \interv{0}{\mathfrak{K}}$. Since $(\dot{\astore},\mathfrak{h}) \models t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$, we obtain $\mathrm{dom}(\mathfrak{h}) = \{\dot{\astore}(t_0)\} = \{\dot{\overline{\astore}}(t_0)\}$ and $\mathfrak{h}(\dot{\astore}(t_0)) = (\dot{\astore}(t_1), \ldots, \dot{\astore}(t_\mathfrak{K})) = (\dot{\overline{\astore}}(t_1), \ldots, \dot{\overline{\astore}}(t_\mathfrak{K}))$. Since $\dot{\overline{\astore}}$ is injective, we obtain $v_i = t_i$, for all $i \in \interv{0}{\mathfrak{K}}$. By Definition \ref{def:core-formulae}, we have $\overline{\vec{x}} \cup \overline{\vec{y}} \subseteq \set{t_0, \ldots, t_\mathfrak{K}}$, thus $\overline{\vec{x}} = \overline{\vec{y}} = \emptyset$, by ($\dagger$). Then we obtain $\phi = t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$ and $\phi \in F$ follows, by (\ref{eq:pto-core}). % \item $\Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$: Since $(\dot{\astore},\mathfrak{h}) \models t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$, we have $\mathrm{loc}(\mathfrak{h}) = \{\dot{\astore}(t_0), \ldots, \dot{\astore}(t_\mathfrak{K})\}$. Since $\dot{\astore}$ and $\dot{\overline{\astore}}$ agree over $\set{t_0,\ldots,t_\mathfrak{K}}$, we have $\mathrm{loc}(\mathfrak{h}) = \{\dot{\overline{\astore}}(t_0), \ldots, \dot{\overline{\astore}}(t_\mathfrak{K})\}$ and, since $\dot{\overline{\astore}}(\overline{\vec{x}}) \subseteq \mathrm{loc}(\mathfrak{h})$ and $\dot{\overline{\astore}}$ is injective, the only possibility is $\overline{\vec{x}} = \emptyset$, so that $\phi = \nheapall \overline{\vec{y}} ~.~ \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$.
Since $\nheapall \overline{\vec{y}} ~.~ \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$ is a core formula, by Definition \ref{def:core-formulae}, we have $\overline{\vec{y}} \subseteq \vec{t} \cup \bigcup_{i=1}^n \vec{u}_i$ and therefore $\overline{\vec{y}} \subseteq \left(\vec{t} \cup \bigcup_{i=1}^n \vec{u}_i\right) \setminus (\set{t_0, \ldots, t_\mathfrak{K}} \cup \mathbb{C})$. Since $\mathrm{dom}(\dot{\astore})=\set{t_0, \ldots, t_\mathfrak{K}} \cup \mathbb{C}$ and $\phi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$, we have $\fv{\phi} = \set{t_0, \ldots, t_\mathfrak{K}}$ and thus $\overline{\vec{y}} = \left(\vec{t} \cup \bigcup_{i=1}^n \vec{u}_i\right) \setminus (\set{t_0, \ldots, t_\mathfrak{K}} \cup \mathbb{C})$. Indeed, all variables $y \in \vec{t} \cup \bigcup_{i=1}^n \vec{u}_i$ not occurring in $\overline{\vec{y}}$ necessarily occur in $\mathrm{dom}(\dot{\overline{\astore}}) \setminus \overline{\vec{y}} = \mathrm{dom}(\dot{\astore})$. By (\ref{infrule:notemp}), for each rule \[p(\vec{x}) \Leftarrow_\mathcal{S} \exists \vec{w} ~.~ \psi * \Asterisk_{j=1}^m p_j(\vec{z}_j)~~(\dagger\dagger)\] and each substitution $\tau : \vec{w} \rightharpoonup \vec{x} \cup \bigcup_{i=1}^n \vec{y}_i$, there exists a rule \[\Asterisk_{i=1}^n q_i(\vec{y}_i) \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{v} ~.~ \psi\tau * \Asterisk_{j=1}^m \gamma_j \swand p_j(\tau(\vec{z}_j))~~(\ddagger)\] where $\Asterisk_{j=1}^m\gamma_j = \Asterisk_{i=1}^n q_i(\vec{y}_i)$ and $\vec{v} = \vec{w} \setminus \mathrm{dom}(\tau)$. Assume w.l.o.g.
that $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t})$ is a consequence of the above rule, i.e., that there exists a $\vec{v}$-associate $\mathfrak{s}'$ of $\dot{\overline{\astore}}$ such that $(\mathfrak{s}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi\tau\sigma * \Asterisk_{j=1}^m \gamma_j\sigma \swand p_j(\sigma(\tau(\vec{z}_j)))$, where $\sigma\stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=}[\vec{t}/\vec{x}, \vec{u}_1/\vec{y}_1, \ldots, \vec{u}_n/\vec{y}_n]$. Since $\mathcal{S}$ is progressing, there is exactly one points-to atom in $\psi$ and, because $\card{\mathfrak{h}}=1$, it must be the case that $(\mathfrak{s}',\mathfrak{h}) \models \psi\tau\sigma$ and $(\mathfrak{s}',\emptyset) \models_{\coreset{\mathcal{S}}} \gamma_j\sigma \swand p_j(\sigma(\tau(\vec{z}_j)))$, for each $j \in \interv{1}{m}$. To prove that $\phi \in F$, it is sufficient to show the existence of a core unfolding $\mathsf{emp} \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} t_0 \mapsto (t_1,\ldots,t_\mathfrak{K}) * \Asterisk_{i=1}^n \mathsf{emp} \swand q_i(\vec{u}_i)$. To this end, we first prove the two points of Definition \ref{def:simple-formulae}: \noindent(\ref{it1:simple-formulae}) Since $(\mathfrak{s}',\emptyset) \models_{\coreset{\mathcal{S}}} \gamma_j\sigma \swand p_j(\sigma(\tau(\vec{z}_j)))$, for each $j \in \interv{1}{m}$, by Lemma \ref{lemma:context-emp}, we obtain $\gamma_j\sigma = p_j(\vec{w}_j)$, for a tuple of variables $\vec{w}_j \in \mathrm{dom}(\mathfrak{s}')$, such that $\mathfrak{s}'(\vec{w}_j) = \mathfrak{s}'(\sigma(\tau(\vec{z}_j)))$. Since, moreover $\Asterisk_{j=1}^m \gamma_j\sigma = \Asterisk_{i=1}^n q_i(\vec{u}_i)$, we deduce that $n = m$ and, for each $i \in \interv{1}{n}$, we have $q_i = p_{j_i}$, for some $j_i \in \interv{1}{m}$. 
Then, by applying (\ref{infrule:notemp}) to the rule ($\dagger\dagger$), using the substitution $\tau$, we obtain the rule: \[\mathsf{emp} \swand p(\vec{x}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{v} ~.~ \psi\tau * \Asterisk_{i=1}^n \mathsf{emp} \swand q_i(\tau(\vec{z}_{j_i}))~~(\ddagger\ddagger)\] \noindent(\ref{it2:simple-formulae}) Let $\mu$ be the extension of $\sigma$ with the pairs $(z,u)$ such that $z \in \vec{v}$ and one of the following holds: \begin{compactitem} \item if $\mathfrak{s}'(z) = \dot{\overline{\astore}}(t_i)$, for some $i \in \interv{0}{\mathfrak{K}}$, then $u = t_i$, % \item if $\mathfrak{s}'(z) = \dot{\overline{\astore}}((\vec{u}_i)_\ell)$, for some $i \in \interv{1}{n}$ and $\ell \in \interv{1}{\#q_i}$, then $u = (\vec{u}_i)_\ell$, % \item otherwise, $u = \min_\preceq\set{v \in \vec{v} \mid \mathfrak{s}'(v) = \mathfrak{s}'(z)}$, where $\preceq$ is a total order on $\mathbb{V}$. \end{compactitem} Note that, since $\dot{\overline{\astore}}$ is injective, for each $z \in \vec{v}$ there exists at most one pair $(z,u) \in \mu$, thus $\mu$ is well-defined. Moreover, we have $\mu(\tau(\vec{z}_{j_i})) = \vec{u}_i$, because $\mathfrak{s}'(\sigma(\tau(\vec{z}_{j_i}))) = \mathfrak{s}'(\vec{u}_i) = \dot{\overline{\astore}}(\vec{u}_i)$, for all $i \in \interv{1}{n}$. We now prove that \[\begin{array}{rcl} t_0 \mapsto (t_1, \ldots, t_\mathfrak{K}) * \Asterisk_{i=1}^n \mathsf{emp} \swand q_i(\vec{u}_i) & \in & \coretrans{\psi\tau\mu * \Asterisk_{i=1}^n \mathsf{emp} \swand q_i(\mu(\tau(\vec{z}_{j_i})))} \\ \end{array}\] or, equivalently, that $t_0 \mapsto (t_1, \ldots, t_\mathfrak{K}) \in \coretrans{\psi\tau\mu}$.
We proceed by a case split on the form of the atom $\alpha$ in $\psi\tau$, using the fact that $(\mathfrak{s}',\mathfrak{h}) \models \psi\tau\sigma$: \begin{compactitem} \item $\alpha = u_1 \seq u_2$: we have $\mathfrak{s}'(\sigma(u_1)) = \mathfrak{s}'(\sigma(u_2))$, hence $\mu(u_1) = \mu(u_2)$, by definition of $\mu$, and $\coretrans{\alpha} = \set{\mathsf{emp}}$. % \item $\alpha = u_1 \not\seq u_2$: we have $\mathfrak{s}'(\sigma(u_1)) \neq \mathfrak{s}'(\sigma(u_2))$, hence $\mu(u_1) \neq \mu(u_2)$, by definition of $\mu$, and $\coretrans{\alpha} = \set{\mathsf{emp}}$. % \item $\alpha = u_0 \mapsto (u_1, \ldots, u_\mathfrak{K})$: since $\mathcal{S}$ is progressing, $\alpha$ is the only points-to atom in $\psi$, thus $\mathrm{dom}(\mathfrak{h}) = \{\dot{\overline{\astore}}(t_0)\} = \{\mathfrak{s}'(\sigma(u_0))\}$ and $\mathfrak{h}(\dot{\overline{\astore}}(t_0)) = (\dot{\overline{\astore}}(t_1), \ldots, \dot{\overline{\astore}}(t_\mathfrak{K})) = (\mathfrak{s}'(\sigma(u_1)), \ldots, \mathfrak{s}'(\sigma(u_\mathfrak{K})))$. Then we obtain $\dot{\overline{\astore}}(t_i) = \mathfrak{s}'(\sigma(u_i))$, hence $\mu(u_i) = t_i$, for all $i \in \interv{0}{\mathfrak{K}}$, and $\coretrans{\alpha} = \set{t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})}$. \end{compactitem} We obtain the core unfolding $\mathsf{emp} \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} t_0 \mapsto (t_1,\ldots,t_\mathfrak{K}) * \Asterisk_{i=1}^n ~\mathsf{emp} \swand q_i(\vec{u}_i)$ and we are left with proving that $t_0 \not\in \{\mathsf{root}(q_i(\vec{u}_i)) \mid i \in \interv{1}{n}\}$. By the definition of $\mu$, there exists a points-to atom $u_0 \mapsto (u_1, \ldots, u_\mathfrak{K})$ in $\psi\tau$, such that $t_0 = \mu(u_0)$. Because $\mathcal{S}$ is progressing, it must be the case that $u_0 = \mathsf{root}(p(\vec{x}))$, hence $t_0 = \mathsf{root}(p(\vec{t}))$, by the definition of $\mu$.
Since $\phi$ is a core formula, by Definition \ref{def:core-formulae}, we obtain $\mathsf{root}(p(\vec{t})) \not\in \set{\mathsf{root}(q_i(\vec{u}_i)) \mid i \in \interv{1}{n}}$ and we conclude that $\phi = \nheapall \overline{\vec{y}} ~.~ \Asterisk_{i=1}^n q_i(\vec{u}_i) \swand p(\vec{t}) \in F$, by (\ref{eq:pto-core}). $\Box$ \end{compactitem}}} \paragraph{Predicate Atoms} Since profiles involve only the core formul{\ae} obtained by the syntactic translation of a symbolic heap, the only predicate atoms that occur in the argument of a profile are of the form $\mathsf{emp} \swand p(\vec{t})$. We consider the constraint: \begin{eqnarray}\label{eq:pred-core} {(\mathsf{emp} \swand p(\vec{t}),~ F)} \in \mathcal{F}_\mathcal{P} \text{ if } {(\heapex \vec{y} ~.~ \psi,~ F)} \in \mathcal{F}_\mathcal{P}, \mathsf{emp} \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} \psi\in \core{\mathcal{P}} \text{ and } \vec{y} = \fv{\psi} \setminus \vec{t} \end{eqnarray} \paragraph{Separating Conjunctions} Computing the profile of a separating conjunction is the most technical point of the construction. 
To ease the presentation, we assume the existence of a binary operation called \emph{composition}: \begin{definition}\label{def:coresep} Given a set $D \subseteq \allvars{\mathcal{P}}^1 \cup \mathbb{C}$, a binary operator $\coresep{D} : 2^{\core{\mathcal{P}}} \times 2^{\core{\mathcal{P}}} \rightarrow 2^{\core{\mathcal{P}}}$ is a \emph{composition} if $\coreabs{\dot{\astore},\mathfrak{h}_1}{\mathcal{P}} \coresep{D} \coreabs{\dot{\astore},\mathfrak{h}_2}{\mathcal{P}} = \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$, for any injective structure $(\dot{\astore},\mathfrak{h})$, such that \begin{inparaenum}[(i)] % \item\label{it1:def:coresep} $\mathrm{dom}(\dot{\astore}) \subseteq \allvars{\mathcal{P}}^1$, % \item\label{it2:def:coresep} $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$, % \item\label{it3:def:coresep} $\mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \subseteq \dot{\astore}(\allvars{\mathcal{P}}^1 \cup \mathbb{C})$, % \item\label{it4:def:coresep} $\mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \cap \mathrm{dom}(\mathfrak{h}) \subseteq \dot{\astore}(D) \subseteq \mathrm{dom}(\mathfrak{h})$. \end{inparaenum} \end{definition} We recall that $\mathrm{Fr}(\mathfrak{h}_1,\mathfrak{h}_2) = \mathrm{loc}(\mathfrak{h}_1) \cap \mathrm{loc}(\mathfrak{h}_2)$. If $\mathcal{S}$ is a normalized set of rules, then for any core formula $\phi$ whose only occurrences of predicate atoms are of the form $\mathsf{emp} \swand p(\vec{t})$, we define $\alloc{\phi}{\coreset{\mathcal{S}}}$ as the homomorphic extension of $\alloc{\mathsf{emp} \swand p(\vec{t})}{\coreset{\mathcal{S}}} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \alloc{p(\vec{t})}{\mathcal{S}}$ to $\phi$ (see Definition \ref{def:alloc}). 
Assuming that $\mathcal{S}$ is a normalized set of rules and that a composition operation $\coresep{D}$ (the construction of which will be described \longVersionOnly{in \S\ref{sec:composition}}\shortVersionOnly{below}, see Lemma \ref{lemma:coresep}) exists, we define the profile of a separating conjunction: \begin{eqnarray}\label{eq:sep-core} && \hspace*{-8mm} {(\phi_1 * \phi_2, \add{X_1}{F_1} \coresep{D} \add{X_2}{F_2})} \in \mathcal{F}_\mathcal{P} \text{, if } {(\phi_i,F_i)} \in \mathcal{F}_\mathcal{P} \text{ and } X_i \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \fv{\phi_{3-i}} \setminus \fv{\phi_i},~i=1,2 \nonumber\\[-1mm] && \hspace*{-8mm} \alloc{\phi_1}{\coreset{\mathcal{S}}}\cap\alloc{\phi_2}{\coreset{\mathcal{S}}} = \emptyset, \hspace*{2mm} D \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \alloc{\phi_1 * \phi_2}{\coreset{\mathcal{S}}} \cap (\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C}) \\[-1mm] && \hspace*{-8mm} \add{x}{F} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \{\heapex \vec{y} \nheapall \vec{z} ~.~ \psi \mid \heapex \vec{y} \nheapall \vec{z} \nheapall \hat{x} ~. ~\psi[\hat{x}/x] \in F\}, ~\add{\set{x_1,\ldots,x_n}}{F} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \add{x_1}{ \ldots \add{x_n}{F} \ldots} \nonumber \end{eqnarray} The choice of the set $D$ above ensures (together with the restriction to normal models) that $\coresep{D}$ is indeed a composition operator. Intuitively, since the considered models are normal, every location in the frontier between the heaps corresponding to $\phi_1$ and $\phi_2$ will be associated with a variable, thus $D$ consists of the variables and constants denoting the allocated locations on the frontier. Note that, because $\mathcal{P}$ is normalized, $\alloc{\phi_1 * \phi_2}{\coreset{\mathcal{S}}}$ is well-defined.
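For instance, assuming that $p$ is a unary predicate symbol and that $\hat{x} \in \allvars{\mathcal{P}}^2$ (both names are chosen for illustration only), the $\add{x}{F}$ operation above strips one universal quantifier and renames the bound variable back to $x$:
\[\add{x}{\set{\nheapall \hat{x} ~.~ \mathsf{emp} \swand p(\hat{x})}} = \set{\mathsf{emp} \swand p(x)}\]
Formul{\ae} of $F$ whose quantifier prefix does not end with such a universally quantified variable contribute nothing to $\add{x}{F}$.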
Because the properties of the composition operation hold when the models of its operands share the same store (Definition \ref{def:coresep}), we use the $\add{x}{F}$ function that adds free variables (mapped to locations outside of the heap) to each core formula in $F$. \putInAppendix{ \longVersionOnly{ We prove below that the definition of the $D$ set in Equation (\ref{eq:sep-core}) satisfies the condition from Definition \ref{def:coresep}, for any normal $\mathcal{S}$-companion of $(\phi_1,\phi_2)$ (see Definition \ref{def:companions}): } \begin{lemma}\label{lemma:coresep-d} If $\mathcal{S}$ is normalized, $\phi_1,\phi_2 \in \shk{\mathfrak{K}}$ are symbolic heaps and $\tuple{(\dot{\astore},\mathfrak{h}_1), (\dot{\astore},\mathfrak{h}_2)}$ is an injective normal $\mathcal{S}$-companion for $(\phi_1,\phi_2)$, then: \[\mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \cap \mathrm{dom}(\mathfrak{h}_1 \uplus \mathfrak{h}_2) \subseteq \dot{\astore}\left(\alloc{\phi_1 * \phi_2}{\mathcal{S}} \cap \left(\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C}\right)\right) \subseteq \mathrm{dom}(\mathfrak{h}_1 \uplus \mathfrak{h}_2).\] \end{lemma} \proof{ Let $\ell \in \mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \cap \mathrm{dom}(\mathfrak{h}_1 \uplus \mathfrak{h}_2)$ be a location. By Lemma \ref{lemma:frontier}, since $\dot{\astore}$ is injective and $\mathbb{C} \subseteq \mathrm{dom}(\dot{\astore})$, we have $\ell \in \dot{\astore}\left(\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C}\right)$. For $i=1,2$, let $\phi_i \unfold{\mathcal{S}}^* \exists \vec{x}_i ~.~ \psi_i$ be the predicate-free unfoldings and $\overline{\astore}_i$ be the $\vec{x}_i$-associates of $\dot{\astore}$ that satisfy points (\ref{it1:def:companions}) and (\ref{it2:def:companions}) of Definition \ref{def:companions}. Assume that $\ell \in \mathrm{dom}(\mathfrak{h}_1)$ (the case $\ell \in \mathrm{dom}(\mathfrak{h}_2)$ is symmetric).
Because $(\overline{\astore}_1, \mathfrak{h}_1) \models \psi_1$, there exists a points-to atom $t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$ in $\psi_1$, such that $\overline{\astore}_1(t_0) = \ell$. Since $\mathcal{S}$ is normalized, by Definition \ref{def:alloc}, the set $\alloc{\phi_1}{\mathcal{S}}$ is well-defined, and we distinguish two cases. \begin{compactitem} \item If $t_0 \in \alloc{\phi_1}{\mathcal{S}}$, then $\ell \in \dot{\astore}(\alloc{\phi_1}{\mathcal{S}})$, because $\overline{\astore}_1$ and $\dot{\astore}$ agree over $\alloc{\phi_1}{\mathcal{S}}$. % \item Otherwise, we must have $t_0 \in \vec{x}_1$. Since $\ell \in \mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2)$, we have $\ell \in \mathrm{loc}(\mathfrak{h}_2)$, thus there exists a points-to atom $u_0 \mapsto (u_1, \ldots, u_\mathfrak{K})$ in $\psi_2$ such that $\ell = \overline{\astore}_2(u_i)$, for some $i \in \interv{1}{\mathfrak{K}}$. Note that $\ell = \overline{\astore}_2(u_0)$ is impossible, because $\ell \in \mathrm{dom}(\mathfrak{h}_1)$. Suppose, for a contradiction, that $\ell \not\in \overline{\astore}(\mathbb{C})$. Then $u_i \in \fv{\psi_2}$ must be the case, which contradicts the condition $\overline{\astore}_1(\vec{x}_1) \cap \overline{\astore}_2(\fv{\psi_2}) \subseteq \overline{\astore}(\mathbb{C})$, required at point (\ref{it2:def:companions}) of Definition \ref{def:companions}. Hence $\ell \in \overline{\astore}(\mathbb{C})$ must be the case. Since $\ell = \overline{\astore}_1(t_0)$, either $t_0 \in \mathbb{C}$ or $t_0$ is an existentially allocated variable. The second case cannot occur, because of Condition (\ref{it5:normalized}) of Definition \ref{def:normalized}. Then we have $t_0 \in \mathbb{C}$ and, moreover, we have $t_0 \in \alloc{\phi_1}{\mathcal{S}}$, by Definition \ref{def:alloc}, thus $t_0 \in \alloc{\phi_1}{\mathcal{S}} \cap \mathbb{C}$.
\end{compactitem} In each case, we obtain $\ell \in \overline{\astore}_1(\alloc{\phi_1}{\mathcal{S}}) \cup \overline{\astore}_2(\alloc{\phi_2}{\mathcal{S}}) \subseteq \dot{\astore}(\alloc{\phi_1 * \phi_2}{\mathcal{S}})$, because $\overline{\astore}_i$ agrees with $\dot{\astore}$ over $\alloc{\phi_i}{\mathcal{S}}$, for $i=1,2$. We obtain: \[\begin{array}{rcl} \ell & \in & \dot{\astore}(\alloc{\phi_1 * \phi_2}{\mathcal{S}}) \cap \dot{\astore}\left(\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C}\right) \\ & = & \dot{\astore}\left(\alloc{\phi_1 * \phi_2}{\mathcal{S}} \cap (\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C})\right) \text{, because $\dot{\astore}$ is injective.} \end{array}\] The second inclusion follows trivially from the fact that $\dot{\astore}(\alloc{\phi_i}{\mathcal{S}}) \subseteq \mathrm{dom}(\mathfrak{h}_i)$, for $i=1,2$, which is an easy consequence of Definition \ref{def:alloc}. $\Box$} \longVersionOnly{ The following lemma is used to prove the correctness of the profile construction for separating conjunctions, by stating the effect of this operation on structures: } \begin{lemma}\label{lemma:add} Given an injective structure $(\dot{\astore},\mathfrak{h})$, a variable $x \not\in \mathrm{dom}(\dot{\astore})$ and a location $\ell \not\in\mathrm{loc}(\mathfrak{h})\cup\mathrm{img}(\dot{\astore})$, we have $\coreabs{\dot{\astore}[x\leftarrow\ell],\mathfrak{h}}{\mathcal{P}} = \add{x}{\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}}$. \end{lemma} \proof{ ``$\subseteq$'' Let $\varphi = \heapex\vec{x}\nheapall\vec{y}\,.\,\phi \in \coreabs{\dot{\astore}[x\leftarrow\ell],\mathfrak{h}}{\mathcal{P}}$ be a core formula, where $\phi$ is quantifier-free. By Definition \ref{def:coreabs}, there exists a witness $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\astore}[x\leftarrow\ell]}{\mathfrak{h}}{\varphi}$, such that $\dot{\overline{\astore}}(\lroots{\varphi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$.
Let $\dot{\overline{\astore}}'$ be the store identical to $\dot{\overline{\astore}}$, except that $x \not \in \mathrm{dom}(\dot{\overline{\astore}}')$ and $\dot{\overline{\astore}}'(\hat{x}) = \dot{\overline{\astore}}(x)$, for some variable $\hat{x} \not\in \allvars{\mathcal{P}}^1$. Since $\ell \not\in \mathrm{loc}(\mathfrak{h})$, we have $\dot{\overline{\astore}}' \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\heapex\vec{x}\nheapall\vec{y}\nheapall \hat{x}~.~\phi[\hat{x}/x]}$ and, because $\hat{x}\notin \allvars{\mathcal{P}}^1$, we have $\heapex\vec{x}\nheapall\vec{y}\nheapall \hat{x}~.~\phi[\hat{x}/x] \in \core{\mathcal{P}}$, hence $\heapex\vec{x}\nheapall\vec{y}\nheapall \hat{x}~.~\phi[\hat{x}/x] \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$, from which we deduce that $\varphi \in \add{x}{\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}}$. \noindent``$\supseteq$'' Let $\varphi = \heapex\vec{x}\nheapall\vec{y} ~.~\phi \in \add{x}{\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}}$, where $\phi$ is quantifier-free, and let $\psi = \heapex\vec{x}\nheapall\vec{y}\nheapall \hat{x} \,.\,\phi[\hat{x}/x]$. By (\ref{eq:sep-core}), we have $\psi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$. By Definition \ref{def:coreabs}, there exists a witness $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\psi}$, such that $\dot{\overline{\astore}}(\lroots{\psi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$. W.l.o.g., by Lemma \ref{lemma:istore-eq}, we can assume that $\dot{\overline{\astore}}$ is such that $\ell \neq \dot{\overline{\astore}}(y)$, for all $y \in \mathrm{dom}(\dot{\overline{\astore}})$ such that $\dot{\overline{\astore}}(y) \not\in \mathrm{loc}(\mathfrak{h})$. With this assumption, $\dot{\overline{\astore}}[x \leftarrow \ell]$ is injective.
We prove that $\dot{\overline{\astore}}[x \leftarrow \ell] \in \witset{\mathcal{S}}{\dot{\astore}[x\leftarrow\ell]}{\mathfrak{h}}{\varphi}$: \begin{compactitem} \item Let $\dot{\overline{\astore}}'$ be the store identical to $\dot{\overline{\astore}}[x \leftarrow \ell]$ except that the images of $x$ and $\hat{x}$ are switched. Since $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi[\hat{x}/x]$, we have $(\dot{\overline{\astore}}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi$. Since $\dot{\overline{\astore}}'(x), \ell \not\in \mathrm{loc}(\mathfrak{h})$ (as $\dot{\overline{\astore}}'(x) = \dot{\overline{\astore}}(\hat{x})$, by definition, and $\dot{\overline{\astore}}(\hat{x}) \not\in \mathrm{loc}(\mathfrak{h})$, by Condition (\ref{it3:witset}) of Definition \ref{def:coreabs}), we have $\dot{\overline{\astore}}' \istoreq{\mathrm{loc}(\mathfrak{h})} \dot{\overline{\astore}}[x \leftarrow \ell]$, thus $(\dot{\overline{\astore}}[x\leftarrow\ell],\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi$, by Lemma \ref{lemma:istore-eq}. % \item Since $x\not\in\vec{x}$, we have $\dot{\overline{\astore}}[x\leftarrow\ell](\vec{x}) = \dot{\overline{\astore}}(\vec{x}) \subseteq \mathrm{loc}(\mathfrak{h})$. % \item Since $\ell \not\in \mathrm{loc}(\mathfrak{h})$ and $\dot{\overline{\astore}}(\vec{y}) \cap \mathrm{loc}(\mathfrak{h}) = \emptyset$, we have $\dot{\overline{\astore}}[x\leftarrow\ell](\vec{y}) \cap \mathrm{loc}(\mathfrak{h}) = \emptyset$. \end{compactitem} Since $\lroots{\heapex\vec{x}\nheapall\vec{y}\nheapall \hat{x} ~.~\phi[\hat{x}/x]} = \lroots{\varphi}$, we have $\dot{\overline{\astore}}(\lroots{\varphi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$, thus $\dot{\overline{\astore}}[x \leftarrow \ell] \in \witset{\mathcal{S}}{\dot{\astore}[x\leftarrow\ell]}{\mathfrak{h}}{\varphi}$, which implies $\varphi \in \coreabs{\dot{\astore}[x\leftarrow\ell],\mathfrak{h}}{\mathcal{P}}$.
$\Box$} } \paragraph{Existential Quantifiers} Since profiles involve only core formul{\ae} obtained by the syntactic translation of a symbolic heap (Lemma \ref{lemma:entailment}), it is sufficient to consider only existentially quantified core formul{\ae}, because the syntactic translation $\coretrans{.}$ does not produce universal quantifiers. The profile of an existentially quantified core formula is given by the constraint: \begin{eqnarray}\label{eq:ex-core} &&(\heapex x' ~.~ \phi[x'/x],~ \rem{x}{F}) \in \mathcal{F}_\mathcal{P} \text{, if $x \in \fv{\phi}$, $x' \in \allvars{\mathcal{P}}^2$, $x'$ not bound in $\phi$, ${(\phi,F)} \in \mathcal{F}_\mathcal{P}$}, \qquad \\[-1mm] && \rem{x}{F} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \left(\{\heapex \hat{x} ~.~ \psi[\hat{x}/x] \mid \psi \in F,~ x \in \fv{\psi}, \text{ $\hat{x}$ not in $\psi$}\} \cap \core{\mathcal{P}}\right) \cup \{\psi \mid \psi \in F,~ x \not\in \fv{\psi}\} \nonumber \\[-1mm] && \rem{\set{x_1,\ldots,x_n}}{F} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \rem{x_1}{ \ldots \rem{x_n}{F} \ldots} \nonumber \end{eqnarray} Note that $\hat{x}$ is a fresh variable, which is neither bound nor free in $\psi$. In particular, if $x \in \roots{\psi}$, then we must have $\hat{x}\in \allvars{\mathcal{P}}^2$, so that $\heapex \hat{x} ~.~ \psi[\hat{x}/x] \in \core{\mathcal{P}}$. Similarly, the variable $x$ is replaced by a fresh variable $x' \in \allvars{\mathcal{P}}^2$ in $\heapex x' ~.~ \phi[x'/x]$ to ensure that $\heapex x' ~.~ \phi[x'/x]$ is a core formula.
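For instance, assuming that $p$ is a unary predicate symbol with $\mathsf{root}(p(x)) = x$, that $\hat{x} \in \allvars{\mathcal{P}}^2$ is fresh and that $x \not\in \set{t_0, \ldots, t_\mathfrak{K}}$ (all names are chosen for illustration only), we have:
\[\rem{x}{\set{\mathsf{emp} \swand p(x),~ t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})}} = \set{\heapex \hat{x} ~.~ \mathsf{emp} \swand p(\hat{x}),~ t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})}\]
the first formula is renamed and existentially quantified, whereas the second one, in which $x$ does not occur free, is left unchanged.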
\putInAppendix{ \longVersionOnly{ The following lemma is used to prove the correctness of the profile construction for bounded existential quantifiers, by stating the effect of the above function on structures: } \begin{lemma}\label{lemma:rem} Given an injective structure $(\dot{\astore},\mathfrak{h})$ and a variable $x \in \mathrm{dom}(\dot{\astore}) \cap \allvars{\mathcal{P}}^1$ such that $\dot{\astore}(x) \in \mathrm{loc}(\mathfrak{h})$, we have $\coreabs{\dot{\astore}',\mathfrak{h}}{\mathcal{P}} = \rem{x}{\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}}$, where $\dot{\astore}'$ is the restriction of $\dot{\astore}$ to $\mathrm{dom}(\dot{\astore}) \setminus \set{x}$. \end{lemma} \proof{ First note that because $\dot{\astore}$ is injective, $\dot{\astore}'$ is necessarily injective, thus $\coreabs{\dot{\astore}',\mathfrak{h}}{\mathcal{P}}$ is well defined. We prove both inclusions. ``$\subseteq$'' Let $\varphi = \heapex \vec{x} \nheapall \vec{y} ~.~ \phi \in \coreabs{\dot{\astore}',\mathfrak{h}}{\mathcal{P}}$ be a core formula, where $\phi$ is quantifier-free. By Definition \ref{def:coreabs}, there exists a witness $\dot{\overline{\astore}'} \in \witset{\mathcal{S}}{\dot{\astore}'}{\mathfrak{h}}{\varphi}$, such that $\dot{\overline{\astore}'}(\lroots{\varphi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$. Since $x \not\in \mathrm{dom}(\dot{\astore}')$ and $(\dot{\astore}',\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$, we have $x \not\in \fv{\varphi}$. By $\alpha$-renaming if necessary, we can assume w.l.o.g. that $x \not\in \vec{x} \cup \vec{y}$ ($\dagger$). This is possible since $x \in \allvars{\mathcal{P}}^1$, hence if $x\in \vec{x} \cup \vec{y}$ then by definition of $\core{\mathcal{P}}$ it cannot occur in $\roots{\phi}$; it can therefore be renamed by a variable not occurring in $\allvars{\mathcal{P}}^1$. We distinguish the following cases. 
\begin{compactitem} % \item If $\dot{\astore}(x) \neq \dot{\overline{\astore}'}(x')$ for all $x' \in \vec{x}$, then $\dot{\overline{\astore}'}[x \leftarrow \dot{\astore}(x)]$ is an injective associate of $\dot{\overline{\astore}'}$: indeed, by hypothesis, $\dot{\astore}(x) \in \mathrm{loc}(\mathfrak{h})$ and $\dot{\overline{\astore}'}(\vec{y}) \cap \mathrm{loc}(\mathfrak{h}) = \emptyset$, thus $\dot{\astore}(x) \not\in \dot{\overline{\astore}'}(\vec{y})$. Since $\phi$ is quantifier-free and $\dot{\overline{\astore}'}$ agrees with $\dot{\overline{\astore}'}[x \leftarrow \dot{\astore}(x)]$ on $\fv{\phi}$, we obtain $(\dot{\overline{\astore}'}[x \leftarrow \dot{\astore}(x)], \mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi$. We now prove that $\dot{\overline{\astore}'}[x \leftarrow \dot{\astore}(x)] \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\varphi}$, which suffices to show $\varphi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$, by the definition of the latter set: \begin{compactitem} \item $\dot{\overline{\astore}'}[x \leftarrow \dot{\astore}(x)](\vec{x}) \subseteq \dot{\overline{\astore}'}(\vec{x}) \cup \set{\dot{\astore}(x)} \subseteq \mathrm{loc}(\mathfrak{h})$, because $\dot{\overline{\astore}'} \in \witset{\mathcal{S}}{\dot{\astore}'}{\mathfrak{h}}{\varphi}$ and $\dot{\astore}(x) \in \mathrm{loc}(\mathfrak{h})$ by hypothesis. % \item $\dot{\overline{\astore}'}[x \leftarrow \dot{\astore}(x)](\vec{y}) = \dot{\overline{\astore}'}(\vec{y})$ and $\dot{\overline{\astore}'}(\vec{y}) \cap \mathrm{loc}(\mathfrak{h}) = \emptyset$, because $\dot{\overline{\astore}'} \in \witset{\mathcal{S}}{\dot{\astore}'}{\mathfrak{h}}{\varphi}$. 
% \item $\dot{\overline{\astore}'}[x \leftarrow \dot{\astore}(x)](\lroots{\varphi}) = \dot{\overline{\astore}'}(\lroots{\varphi})$ because $x \not\in \fv{\phi}$, and $\dot{\overline{\astore}'}(\lroots{\varphi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$ because $\dot{\overline{\astore}'} \in \witset{\mathcal{S}}{\dot{\astore}'}{\mathfrak{h}}{\varphi}$. \end{compactitem} Consequently, we obtain $\varphi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$ and, since $x \not\in \fv{\varphi}$, we have $\varphi \in \rem{x}{\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}}$, hence the result. % \item Otherwise, $\dot{\astore}(x) = \dot{\overline{\astore}'}(x')$ for some $x' \in \vec{x}$, hence $\varphi$ is of the form $\heapex x' \heapex \vec{x}' \nheapall \vec{y} ~.~ \phi$, where $\vec{x}' \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \vec{x} \setminus \set{x'}$. Clearly, the variable $x'$ must be unique, otherwise $\dot{\overline{\astore}'}$ would not be injective. Let $\dot{\overline{\astore}}$ be the injective store obtained from $\dot{\overline{\astore}'}[x \leftarrow \dot{\astore}(x)]$ by removing the pair $(x',\dot{\astore}(x))$ from it. We prove that $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\heapex\vec{x}'\nheapall\vec{y} ~.~ \phi[x/x']}$: \begin{compactitem} % \item $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi[x/x']$, because $\dot{\overline{\astore}}$ agrees with $\dot{\overline{\astore}'}[x\leftarrow\dot{\astore}(x)]$ on $\fv{\phi[x/x']}$. % \item $\dot{\overline{\astore}}(\vec{x}')=\dot{\overline{\astore}'}(\vec{x}') \subseteq \mathrm{loc}(\mathfrak{h})$, because $\dot{\overline{\astore}'} \in \witset{\mathcal{S}}{\dot{\astore}'}{\mathfrak{h}}{\varphi}$.
% \item $\dot{\overline{\astore}}(\vec{y})=\dot{\overline{\astore}'}(\vec{y})$, because $x \not\in \vec{y}$ ~($\dagger$) and $\dot{\overline{\astore}'}(\vec{y}) \cap \mathrm{loc}(\mathfrak{h}) = \emptyset$, because $\dot{\overline{\astore}'} \in \witset{\mathcal{S}}{\dot{\astore}'}{\mathfrak{h}}{\varphi}$. \end{compactitem} Furthermore, we have $\dot{\overline{\astore}'}[x\leftarrow\dot{\astore}(x)](\lroots{\varphi})=\dot{\overline{\astore}'}(\lroots{\varphi})$ because $x \not\in \fv{\varphi}$, hence $x \not\in \lroots{\varphi}$. Thus $\dot{\overline{\astore}}(\lroots{\varphi}) \subseteq \dot{\overline{\astore}'}(\lroots{\varphi})$ and, since $\dot{\overline{\astore}'}(\lroots{\varphi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$ by Definition \ref{def:coreabs}, we deduce that $\dot{\overline{\astore}}(\lroots{\varphi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$. Still by Definition \ref{def:coreabs}, we obtain that $\heapex \vec{x}' \nheapall \vec{y} ~.~ \phi[x/x'] \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$ and thus $\heapex x' \heapex \vec{x}' \nheapall \vec{y} ~.~ \phi \in \rem{x}{\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}}$ (with $\hat{x}\stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} x'$). \end{compactitem} ``$\supseteq$'' Let $\varphi = \heapex \vec{x} \nheapall \vec{y} \,.\, \phi \in \rem{x}{\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}}$, for some quantifier-free formula $\phi$. We distinguish the following cases. \begin{compactitem} \item If $\varphi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$ and $x \not\in \fv{\varphi}$, then for any injective structure $(\dot{\astore},\mathfrak{h})$ meeting the conditions of Definition \ref{def:coreabs}, the structure $(\dot{\astore}',\mathfrak{h})$ is injective and trivially meets the conditions of Definition \ref{def:coreabs}, hence $\varphi \in \coreabs{\dot{\astore}',\mathfrak{h}}{\mathcal{P}}$. % \item Otherwise, $\varphi = \heapex \hat{x} \heapex \vec{x}' \nheapall \vec{y} ~.
\phi[\hat{x}/x]$, $x \in \fv{\heapex \vec{x}' \nheapall \vec{y} ~. \phi}$ and $\heapex \vec{x}' \nheapall \vec{y} ~. \phi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$, where $\vec{x}' \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \vec{x} \setminus \set{x}$. Let $\dot{\overline{\astore}}$ be an injective $(\vec{x}'\cup\vec{y})$-associate of $\dot{\astore}$ meeting the conditions from Definition \ref{def:coreabs}. It is easy to check that $(\dot{\overline{\astore}}\setminus \{ (x,\dot{\overline{\astore}}(x)) \}) \cup \{ (\hat{x},\dot{\overline{\astore}}(x)) \} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\varphi}$, thus $\varphi \in \coreabs{\dot{\astore}',\mathfrak{h}}{\mathcal{P}}$. $\Box$ \end{compactitem} }} \paragraph{The Profile Function} Let $\mathcal{F}_\mathcal{P}$ be the least relation that satisfies the constraints (\ref{eq:pto-core}), (\ref{eq:pred-core}), (\ref{eq:sep-core}) and (\ref{eq:ex-core}). We prove that $\mathcal{F}_\mathcal{P}$ is a valid profile for $\mathcal{P}$, in the sense of Definition \ref{def:profile}: \begin{lemma}\label{lemma:profile} Given a progressing and normalized entailment problem $\mathcal{P} = (\mathcal{S}, \Sigma)$, a symbolic heap $\varphi \in \shk{\mathfrak{K}}$ with $\fv{\varphi} \subseteq \allvars{\mathcal{P}}^1$, a core formula $\phi \in \coretrans{\varphi}$ and a set of core formul{\ae} $F \subseteq \core{\mathcal{P}}$, we have $(\phi, F) \in \mathcal{F}_\mathcal{P}$ iff $F=\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$, for some injective normal $\coreset{\mathcal{S}}$-model $(\dot{\astore},\mathfrak{h})$ of $\phi$, with $\mathrm{dom}(\dot{\astore}) = \fv{\varphi}\cup \mathbb{C}$.
\end{lemma} \optionalProof{Lemma \ref{lemma:profile}}{sec:coreabs}{By induction on the structure of $\mathcal{F}_\mathcal{P}$, defined as the least set satisfying the constraints (\ref{eq:pto-core}), (\ref{eq:pred-core}), (\ref{eq:sep-core}) and (\ref{eq:ex-core}), we prove that $(\dot{\astore},\mathfrak{h})$ is an injective normal $\coreset{\mathcal{S}}$-model of $\phi$ if and only if $(\phi,~ \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}) \in \mathcal{F}_\mathcal{P}$. Based on the structure of the core formula $\phi \in \coretrans{\varphi}$, for some symbolic heap $\varphi \in \shk{\mathfrak{K}}$, we distinguish the following cases: \begin{compactitem} % \item $\phi = t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$: because $\mathcal{S}$ is progressing, by Lemma \ref{lemma:pto-core}, we obtain that $(\phi, F) \in \mathcal{F}_\mathcal{P}$ if and only if $F = \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$, for some injective $\mathcal{S}$-model $(\dot{\astore},\mathfrak{h})$ of $t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$, such that $\mathrm{dom}(\dot{\astore}) = \set{t_0, \ldots, t_\mathfrak{K}} \cup \mathbb{C}$. Since any injective $\mathcal{S}$-model $(\dot{\astore},\mathfrak{h})$ of $t_0 \mapsto (t_1,\ldots,t_\mathfrak{K})$ is also normal, we conclude this case. % \item $\phi = \mathsf{emp} \swand p(\vec{t})$: ``$\Rightarrow$'' Since $\mathcal{F}_\mathcal{P}$ is the least relation satisfying (\ref{eq:pred-core}), ${(\mathsf{emp} \swand p(\vec{t}), F)} \in \mathcal{F}_\mathcal{P}$ if and only if ${(\heapex \vec{y} ~.~ \psi, F)} \in \mathcal{F}_\mathcal{P}$, for some core unfolding $\mathsf{emp} \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} \psi$, where $\vec{y} = \fv{\psi} \setminus \vec{t}$. 
By the induction hypothesis, there exists an injective normal $\coreset{\mathcal{S}}$-model $(\dot{\astore},\mathfrak{h})$ of $\heapex \vec{y} ~.~ \psi$ such that $F = \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$ and $\mathrm{dom}(\dot{\astore}) = \fv{\heapex \vec{y} ~.~ \psi} \cup \mathbb{C}$. Since $\mathcal{P}$ is normalized, by Condition \ref{it1bis:normalized} in Definition \ref{def:normalized} we have $\fv{\heapex \vec{y} ~.~ \psi} = \fv{\phi}$. By Lemma \ref{lemma:simple-unfolding-sat}, $(\dot{\astore},\mathfrak{h})$ is an injective $\coreset{\mathcal{S}}$-model of $\mathsf{emp} \swand p(\vec{t})$. Because $\phi$ is quantifier-free, $(\dot{\astore},\mathfrak{h})$ is also an injective normal $\coreset{\mathcal{S}}$-model of $\phi$. ``$\Leftarrow$'' Let $(\dot{\astore},\mathfrak{h})$ be an injective normal $\coreset{\mathcal{S}}$-model of $\mathsf{emp} \swand p(\vec{t})$. By Lemma \ref{lemma:simple-unfolding-sat}, there exists a core unfolding $\mathsf{emp} \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}} \psi$ and an injective extension $\dot{\overline{\astore}}$ of $\dot{\astore}$, such that $(\dot{\overline{\astore}},\mathfrak{h})$ is an injective $\coreset{\mathcal{S}}$-model of $\psi$. Let $\vec{y} \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \fv{\psi} \setminus \vec{t}$. Then every variable $x \in \vec{y}$ occurs in a points-to or a predicate atom, by Definition \ref{def:simple-formulae}. Since $\mathcal{S}$ is normalized, we obtain that $\dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h})$, by point (\ref{it3:normalized}) of Definition \ref{def:normalized}, and therefore $(\dot{\astore},\mathfrak{h})$ is an injective $\coreset{\mathcal{S}}$-model of $\heapex \vec{y} ~.~ \psi$. Since $\psi$ is satisfiable, it cannot contain two atoms with the same root. We have $\fv{\psi} = \fv{\phi} \subseteq \allvars{\mathcal{P}}^1$. 
Furthermore, since $\card{\allvars{\mathcal{P}}^2} = \probwidth{\mathcal{P}}$ and $\size{\psi} \leq \probwidth{\mathcal{P}}$, we can assume w.l.o.g. that $\vec{y} \subseteq \allvars{\mathcal{P}}^2$, hence $\heapex \vec{y} ~.~ \psi$ is a core formula. By the induction hypothesis, we obtain that $\tuple{\heapex \vec{y} ~.~ \psi,~ \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}} \in \mathcal{F}_\mathcal{P}$, thus $\tuple{\mathsf{emp} \swand p(\vec{t}),~ \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}} \in \mathcal{F}_\mathcal{P}$ follows, by (\ref{eq:pred-core}). % \item $\phi = \phi_1 * \phi_2$: ``$\Rightarrow$'' Since $\mathcal{F}_\mathcal{P}$ is the least set satisfying (\ref{eq:sep-core}), $(\phi_1 * \phi_2, F) \in \mathcal{F}_\mathcal{P}$ if and only if $(\phi_i, F_i) \in \mathcal{F}_\mathcal{P}$ and $F = \add{X_1}{F_1} \coresep{D} \add{X_2}{F_2}$, where $X_i = \fv{\phi_i} \setminus \fv{\phi_{3-i}}$, for $i=1,2$, $\alloc{\phi_1}{\coreset{\mathcal{S}}} \cap \alloc{\phi_2}{\coreset{\mathcal{S}}} = \emptyset$ and $D = \alloc{\phi_1*\phi_2}{\mathcal{S}} \cap (\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C})$. Since $\fv{\phi_i} \subseteq \fv{\phi} \subseteq \allvars{\mathcal{P}}^1$, by the inductive hypothesis, there exist injective normal $\coreset{\mathcal{S}}$-models $(\dot{\astore}_i, \mathfrak{h}_i)$ of $\phi_i$, such that $F_i = \coreabs{\dot{\astore}_i,\mathfrak{h}_i}{\mathcal{P}}$, for $i=1,2$. By renaming locations if necessary, we assume w.l.o.g. that $\dot{\astore}_1$ and $\dot{\astore}_2$ agree over $\term{\phi_1} \cap \term{\phi_2}$ and that $\dot{\astore}_i(\fv{\phi_i} \setminus \fv{\phi_{3-i}}) \cap (\dot{\astore}_{3-i}(\fv{\phi_{3-i}} \setminus \fv{\phi_i}) \cup \mathrm{loc}(\mathfrak{h}_{3-i})) = \emptyset$, for $i=1,2$ ($\dagger$). This is feasible since the truth value of formul{\ae} does not depend on the name of the locations. Let $\dot{\astore} =\dot{\astore}_1 \cup \dot{\astore}_2$. 
It is easy to check that $\tuple{(\dot{\astore},\mathfrak{h}_1), (\dot{\astore},\mathfrak{h}_2)}$ is an injective normal $\coreset{\mathcal{S}}$-companion for $(\phi_1, \phi_2)$, by Definition \ref{def:companions}. Moreover, by Lemma \ref{lemma:add}, we have $\coreabs{\dot{\astore},\mathfrak{h}_i}{\mathcal{P}} = \add{X_i}{F_i}$, for $i=1,2$. Next, we prove that $\mathfrak{h}_1$ and $\mathfrak{h}_2$ are disjoint heaps. Suppose, for a contradiction, that $\mathrm{dom}(\mathfrak{h}_1) \cap \mathrm{dom}(\mathfrak{h}_2) \neq \emptyset$. By assumption ($\dagger$), there exists a variable $x \in \fv{\phi_1} \cap \fv{\phi_2}$, such that $\dot{\astore}(x) \in \mathrm{dom}(\mathfrak{h}_1) \cap \mathrm{dom}(\mathfrak{h}_2)$. Since $\mathcal{P}$ is normalized, Conditions (\ref{it4:normalized}) and (\ref{it5:normalized}) in Definition \ref{def:normalized} ensure that the only variables that can be allocated by a model of a core formula $\phi_i$ are those in $\alloc{\phi_i}{\coreset{\mathcal{S}}}$; hence we must have $x \in \alloc{\phi_1}{\coreset{\mathcal{S}}} \cap \alloc{\phi_2}{\coreset{\mathcal{S}}}$, which contradicts the condition that $\alloc{\phi_1}{\coreset{\mathcal{S}}} \cap \alloc{\phi_2}{\coreset{\mathcal{S}}} = \emptyset$. We conclude that $\mathfrak{h}_1$ and $\mathfrak{h}_2$ are disjoint and let $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$. By Lemmas \ref{lemma:frontier} and \ref{lemma:coresep-d}, we respectively have $\mathrm{Fr}(\mathfrak{h}_1,\mathfrak{h}_2) \subseteq \dot{\astore}\left(\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C}\right) \subseteq \dot{\astore}(\allvars{\mathcal{P}}^1 \cup \mathbb{C})$ and $\mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \cap \mathrm{dom}(\mathfrak{h}) \subseteq \dot{\astore}(D) \subseteq \mathrm{dom}(\mathfrak{h})$.
Thus $(\dot{\astore},\mathfrak{h})$ is an injective normal $\coreset{\mathcal{S}}$-model of $\phi_1 * \phi_2$ and, by Definition \ref{def:coresep}, we have $\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}} = \coreabs{\dot{\astore},\mathfrak{h}_1}{\mathcal{P}} \coresep{D} \coreabs{\dot{\astore},\mathfrak{h}_2}{\mathcal{P}} = \add{X_1}{F_1} \coresep{D} \add{X_2}{F_2}$.
\noindent ``$\Leftarrow$'' Let $(\dot{\astore},\mathfrak{h})$ be an injective normal $\coreset{\mathcal{S}}$-model of $\phi_1 * \phi_2$. Note that, since $\phi_1 * \phi_2$ is satisfiable, we must have $\alloc{\phi_1}{\coreset{\mathcal{S}}} \cap \alloc{\phi_2}{\coreset{\mathcal{S}}} = \emptyset$. By Lemma \ref{lemma:companions}, there exists an injective normal $\coreset{\mathcal{S}}$-companion $\tuple{(\dot{\astore}_1, \mathfrak{h}_1), (\dot{\astore}_2, \mathfrak{h}_2)}$ for $(\phi_1, \phi_2)$, such that $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$. Since $(\dot{\astore}_i, \mathfrak{h}_i)$ is an injective normal $\coreset{\mathcal{S}}$-model of $\phi_i$, we have $\tuple{\phi_i, \coreabs{\dot{\astore}_i,\mathfrak{h}_i}{\mathcal{P}}} \in \mathcal{F}_\mathcal{P}$, by the inductive hypothesis, for $i=1,2$. We prove that $\dot{\astore}(X_i) \cap \mathrm{loc}(\mathfrak{h}_{i}) = \emptyset$, where $X_i \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \fv{\phi_i} \setminus \fv{\phi_{3-i}}$, for $i=1,2$. Let $i=1$, the case $i=2$ being symmetric, and suppose, for a contradiction, that $\dot{\astore}(x) \in \mathrm{loc}(\mathfrak{h}_1)$, for some $x \in X_1$. Because $\mathcal{S}$ is normalized, by point (\ref{it3:normalized}) of Definition \ref{def:normalized}, we have $\dot{\astore}(x) \in \mathrm{loc}(\mathfrak{h}_2)$, thus $\dot{\astore}(x) \in \mathrm{Fr}(\mathfrak{h}_1,\mathfrak{h}_2)$.
By Lemma \ref{lemma:frontier}, $\dot{\astore}(x) \in \dot{\astore}(\fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C})$ and, since $\dot{\astore}$ is injective, we deduce that $x \in \fv{\phi_1} \cap \fv{\phi_2} \cup \mathbb{C}$, which contradicts the hypothesis that $x \in X_1$. Hence $\dot{\astore}(X_i) \cap \mathrm{loc}(\mathfrak{h}_{i}) = \emptyset$ and, by Lemma \ref{lemma:add}, we obtain $\coreabs{\dot{\astore},\mathfrak{h}_i}{\mathcal{P}} = \add{X_i}{\coreabs{\dot{\astore}_i,\mathfrak{h}_i}{\mathcal{P}}}$, for $i=1,2$. Moreover, by Lemmas \ref{lemma:frontier} and \ref{lemma:coresep-d}, we respectively have $\mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \subseteq \dot{\astore}(\allvars{\mathcal{P}}^1 \cup \mathbb{C})$ and $\mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \cap \mathrm{dom}(\mathfrak{h}) \subseteq \dot{\astore}(D) \subseteq \mathrm{dom}(\mathfrak{h})$. By Definition \ref{def:coresep}, we have $\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}} = \add{X_1}{\coreabs{\dot{\astore}_1,\mathfrak{h}_1}{\mathcal{P}}} \coresep{D} \add{X_2}{\coreabs{\dot{\astore}_2,\mathfrak{h}_2}{\mathcal{P}}}$, thus $\tuple{\phi_1 * \phi_2, \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}} \in \mathcal{F}_\mathcal{P}$, by (\ref{eq:sep-core}).
%
\item $\phi = \heapex x' \,.\, \phi_1'$: By $\alpha$-renaming if necessary, we assume that $x' \in \allvars{\mathcal{P}}^2$. Note that this is possible because $\card{\allvars{\mathcal{P}}^2} \geq \size{\phi_1'}$. Furthermore, since we also have $\card{\allvars{\mathcal{P}}^1} \geq \size{\phi_1'}$, we may assume that there exists a variable $x \in \allvars{\mathcal{P}}^1 \setminus \fv{\phi_1'}$. It is clear that $\phi_1 = \phi_1'[x/x']$ is a core formula.
``$\Rightarrow$'' Since $\mathcal{F}_\mathcal{P}$ is the least relation satisfying (\ref{eq:ex-core}), we have $(\heapex x' ~.~ \phi_1',~ F) \in \mathcal{F}_\mathcal{P}$ only if there exists a set of core formul{\ae} $F_1 \subseteq \core{\mathcal{P}}$, such that $F = \rem{x}{F_1}$ and $(\phi_1,F_1) \in \mathcal{F}_\mathcal{P}$. By the inductive hypothesis, there exists an injective normal $\coreset{\mathcal{S}}$-model $(\dot{\astore}_1,\mathfrak{h})$ of $\phi_1$ such that $F_1 = \coreabs{\dot{\astore}_1,\mathfrak{h}}{\mathcal{P}}$. By Lemma \ref{lemma:rem}, we obtain $F = \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$, where $\dot{\astore}$ is the restriction of $\dot{\astore}_1$ to $\mathrm{dom}(\dot{\astore}_1) \setminus \set{x}$. Since $\mathcal{S}$ is normalized and the only occurrences of predicate atoms in $\phi_1$ are of the form $\mathsf{emp} \swand p(\vec{t})$, we have $\dot{\astore}_1(x) \in \mathrm{loc}(\mathfrak{h})$. Thus we conclude by noticing that $(\dot{\astore},\mathfrak{h})$ is an injective normal $\coreset{\mathcal{S}}$-model of $\heapex x' ~.~ \phi_1'$. ``$\Leftarrow$'' Let $(\dot{\astore},\mathfrak{h})$ be an injective normal $\coreset{\mathcal{S}}$-model of $\heapex x' ~.~ \phi_1'$, with $\mathrm{dom}(\dot{\astore}) = (\fv{\phi_1} \setminus \set{x}) \cup \mathbb{C}$. There exists $\ell \in \mathrm{loc}(\mathfrak{h}) \setminus \mathrm{img}(\dot{\astore})$ such that $(\dot{\astore}[x \leftarrow \ell],\mathfrak{h})$ is an injective normal $\coreset{\mathcal{S}}$-model of $\phi_1$. Since $\dot{\astore}[x \leftarrow\ell]$ is an injective extension of $\dot{\astore}$ and $\ell \in \mathrm{loc}(\mathfrak{h})$, by Lemma \ref{lemma:rem}, $\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}} = \rem{x}{\coreabs{\dot{\astore}[x \leftarrow \ell],\mathfrak{h}}{\mathcal{P}}}$ and $(\heapex x' ~.~ \phi_1',~ F) \in \mathcal{F}_\mathcal{P}$ follows, by the inductive hypothesis. 
$\Box$
\end{compactitem}}
\longVersionOnly{
\subsection{Construction of the Composition Operation}
\label{sec:composition}
}
\longVersionOnly{
As stated by Definition \ref{def:coresep}, a composition operation combines the core abstractions of two injective structures with disjoint heaps into a set of core formul{\ae} that is the actual core abstraction of the disjoint union of the two structures. Since there are infinitely many structures with the same core abstraction, this set cannot be computed by enumerating the models of its operands and computing the core abstraction of their compositions. For this reason, the construction works symbolically on core formul{\ae}, by saturating the separating conjunction of two core formul{\ae} via a \emph{modus ponens}-style consequence operator.
}
\shortVersionOnly{The composition operation $\coresep{D}$ works symbolically on core formul{\ae}, by saturating the separating conjunction of two core formul{\ae} via a \emph{modus ponens}-style consequence operator.
}
\begin{definition}\label{def:conseq}
Given formul{\ae} $\phi, \psi$, we write $\phi \Vdash \psi$ if $\phi = \varphi * [\alpha \swand p(\vec{t})] * [(\beta * p(\vec{t})) \swand q(\vec{u})]$ and $\psi = \varphi * [(\alpha * \beta) \swand q(\vec{u})]$ (up to the commutativity of $*$ and the neutrality of $\mathsf{emp}$) for some formula $\varphi$, predicate atoms $p(\vec{t})$ and $q(\vec{u})$ and conjunctions of predicate atoms $\alpha$ and $\beta$.
\end{definition}
\begin{example}
Consider the structure $(\mathfrak{s},\mathfrak{h})$ and the rules of Example \ref{ex:contexts}.
We have $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$, with $(\mathfrak{s}[y \leftarrow \ell_3],\mathfrak{h}_1) \models_{\coreset{\mathcal{S}}} q(y) \swand p(x)$ and $(\mathfrak{s}[y \leftarrow \ell_3],\mathfrak{h}_2) \models_{\mathcal{S}} q(y)$, i.e., $(\mathfrak{s}[y \leftarrow \ell_3],\mathfrak{h}_2) \models_{\coreset{\mathcal{S}}} \mathsf{emp} \swand q(y)$, thus $(\mathfrak{s}[y \leftarrow \ell_3],\mathfrak{h}) \models_{\coreset{\mathcal{S}}} q(y) \swand p(x) * \mathsf{emp} \swand q(y) \Vdash \mathsf{emp} \swand p(x)$. \hfill$\blacksquare$
\end{example}
\putInAppendix{
We prove below that $\Vdash$ is a logical consequence relation:
\begin{lemma}\label{lemma:conseq}
If $\phi \Vdash^* \psi$ then $\phi \models_{\coreset{\mathcal{S}}} \psi$.
\end{lemma}
\proof{
The proof is by induction on the length $n \geq 0$ of the derivation sequence from $\phi$ to $\psi$. If $n=0$ then $\phi=\psi$ and there is nothing to prove. Assume that $n = 1$; the case $n > 1$ then follows immediately by the inductive hypothesis. We assume that $\phi = [\alpha \swand p(\vec{t})] * [(\beta * p(\vec{t})) \swand q(\vec{u})]$ and $\psi = (\alpha * \beta) \swand q(\vec{u})$, for some predicate atoms $p(\vec{t})$ and $q(\vec{u})$ and some possibly empty conjunctions of predicate atoms $\alpha$ and $\beta$. Then there exist two disjoint heaps $\mathfrak{h}_1$ and $\mathfrak{h}_2$, such that $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$, $(\mathfrak{s},\mathfrak{h}_1) \models_{\coreset{\mathcal{S}}} \alpha \swand p(\vec{t})$ and $(\mathfrak{s},\mathfrak{h}_2) \models_{\coreset{\mathcal{S}}} (\beta * p(\vec{t})) \swand q(\vec{u})$. We prove that $(\mathfrak{s}, \mathfrak{h}) \models \psi$ by induction on $\card{\mathfrak{h}_2}$. If $\card{\mathfrak{h}_2}=0$ then $\beta = \mathsf{emp}$ and, by Lemma \ref{lemma:context-emp}, we obtain $p=q$ and $\mathfrak{s}(\vec{t}) = \mathfrak{s}(\vec{u})$.
Thus $\mathfrak{h} = \mathfrak{h}_1$ and $(\mathfrak{s},\mathfrak{h}) \models (\alpha * \beta) \swand q(\vec{u})$ follows trivially. If $\card{\mathfrak{h}_2} > 0$, then there exists a rule \begin{equation}\label{eq:coreset-rule} (\delta * p(\vec{x})) \swand q(\vec{y}) \Leftarrow_{\coreset{\mathcal{S}}} \rho \end{equation} and a substitution $\tau$ such that $[(\delta * p(\vec{x})) \swand q(\vec{y})]\tau = (\beta * p(\vec{t})) \swand q(\vec{u})$ and $(\mathfrak{s}', \mathfrak{h}_2) \models \rho\tau$, where $\mathfrak{s}'$ is an associate of $\mathfrak{s}$. Since $\card{\mathfrak{h}_2} > 0$, by definition of $\coreset{\mathcal{S}}$, rule (\ref{eq:coreset-rule}) must be an instance of (\ref{infrule:notemp}). Thus $\rho$ is of the form $\exists \vec{v} ~.~ \psi'\sigma * \Asterisk_{j=1}^m (\gamma_j \swand p_j(\sigma(\vec{w}_j)))$ for some substitution $\sigma$, where $\gamma_1, \ldots, \gamma_m$ are separating conjunctions of predicate atoms such that $\delta * p(\vec{x}) = \Asterisk_{j=1}^m \gamma_j$. Still because (\ref{eq:coreset-rule}) is an instance of (\ref{infrule:notemp}), there exists a rule \begin{equation}\label{eq:asys-rule} q(\vec{y}) \Leftarrow_\mathcal{S} \exists \vec{z} ~.~ \psi' * \Asterisk_{j=1}^m p_j(\vec{w}_j) \end{equation} and we have $\vec{v} = \vec{z} \setminus \mathrm{dom}(\sigma)$. Since $(\mathfrak{s},\mathfrak{h}_2) \models_{\coreset{\mathcal{S}}} \exists \vec{v} ~.~ \psi'\sigma\tau * \Asterisk_{j=1}^m \gamma_j\tau \swand p_j(\tau(\sigma(\vec{w}_j)))$, there exists a $\vec{v}$-associate $\overline{\astore}$ of $\mathfrak{s}$ such that $(\overline{\astore},\mathfrak{h}_2) \models_{\coreset{\mathcal{S}}} \psi'\sigma\tau * \Asterisk_{j=1}^m \gamma_j\tau \swand p_j(\tau(\sigma(\vec{w}_j)))$. 
Hence, there exist two disjoint heaps $\mathfrak{h}'_2$ and $\mathfrak{h}''_2$ such that $\mathfrak{h}_2 = \mathfrak{h}'_2 \uplus \mathfrak{h}''_2$, $(\overline{\astore},\mathfrak{h}'_2) \models \psi'\sigma\tau$ and $(\overline{\astore},\mathfrak{h}''_2) \models_{\coreset{\mathcal{S}}} \Asterisk_{j=1}^m \gamma_j\tau \swand p_j(\tau(\sigma(\vec{w}_j)))$. We deduce that \[(\overline{\astore},\mathfrak{h}_1\uplus\mathfrak{h}''_2) \models_{\coreset{\mathcal{S}}} [\alpha \swand p(\vec{t})] * [\Asterisk_{j=1}^m \gamma_j\tau \swand p_j(\tau(\sigma(\vec{w}_j)))].\] Since $\delta * p(\vec{x}) = \Asterisk_{j=1}^m \gamma_j$, we can assume w.l.o.g. that $\gamma_1$ is of the form $p(\vec{x}) * \delta'$, so that $\gamma_1\tau = p(\vec{t}) * \delta'\tau$ and \[(\overline{\astore},\mathfrak{h}_1\uplus\mathfrak{h}''_2) \models_{\coreset{\mathcal{S}}} [\alpha \swand p(\vec{t})] * [p(\vec{t}) * \delta'\tau \swand p_1(\tau(\sigma(\vec{w}_1)))] * [\Asterisk_{j=2}^m \gamma_j\tau \swand p_j(\tau(\sigma(\vec{w}_j)))].\] Therefore, there exist two disjoint heaps $\mathfrak{h}_3$ and $\mathfrak{h}_4$ such that $\mathfrak{h}_1\uplus\mathfrak{h}''_2 = \mathfrak{h}_3\uplus\mathfrak{h}_4$ and the following hold: \[\begin{array}{rcl} (\overline{\astore},\mathfrak{h}_3) & \models_{\coreset{\mathcal{S}}} & [\alpha \swand p(\vec{t})] * [p(\vec{t}) * \delta'\tau \swand p_1(\tau(\sigma(\vec{w}_1)))], \\ (\overline{\astore},\mathfrak{h}_4) & \models_{\coreset{\mathcal{S}}} & \Asterisk_{j=2}^m \gamma_j\tau \swand p_j(\tau(\sigma(\vec{w}_j))). \end{array}\] Because $\mathcal{S}$ is assumed to be progressing, $\psi'$ contains exactly one points-to atom, thus $\card{\mathfrak{h}'_2} = 1$ and $\card{\mathfrak{h}_3} \leq \card{\mathfrak{h}_1} + \card{\mathfrak{h}''_2} < \card{\mathfrak{h}_1} + \card{\mathfrak{h}_2} = \card{\mathfrak{h}}$. By the inductive hypothesis, we deduce that $(\overline{\astore},\mathfrak{h}_3) \models_{\coreset{\mathcal{S}}} \alpha*\delta'\tau \swand p_1(\tau(\sigma(\vec{w}_1)))$.
Putting it all together, we obtain \begin{eqnarray*} (\overline{\astore},\mathfrak{h})& \models_{\coreset{\mathcal{S}}}& \psi'\sigma\tau * [\alpha*\delta'\tau \swand p_1(\tau(\sigma(\vec{w}_1)))] * [\Asterisk_{j=2}^m \gamma_j\tau \swand p_j(\tau(\sigma(\vec{w}_j)))], \text{ hence}\\ (\mathfrak{s},\mathfrak{h}) &\models_{\coreset{\mathcal{S}}}& \exists \vec{v} ~.~ \psi'\sigma\tau * [\alpha*\delta'\tau \swand p_1(\tau(\sigma(\vec{w}_1)))] * [\Asterisk_{j=2}^m \gamma_j\tau \swand p_j(\tau(\sigma(\vec{w}_j)))]. \end{eqnarray*} Since $\delta = \delta'*\Asterisk_{j=2}^m \gamma_j$, rule (\ref{eq:asys-rule}) implies the existence of the following rule that is an instance of (\ref{infrule:notemp}): \[(\eta * \delta) \swand q(\vec{y}) \Leftarrow_{\coreset{\mathcal{S}}} \exists \vec{v} ~.~ \psi'\sigma * [\eta * \delta' \swand p_1(\sigma(\vec{w}_1))] * [\Asterisk_{j=2}^m \gamma_j \swand p_j(\sigma(\vec{w}_j))],\] where $\eta$ is a separating conjunction of predicate atoms, such that $\eta\tau = \alpha$. Thus we obtain $(\mathfrak{s},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} (\eta\tau * \delta\tau) \swand q(\tau(\vec{y}))$ and $(\mathfrak{s},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \alpha * \beta \swand q(\vec{u})$ follows. $\Box$ } } \longVersionOnly{ The composition operator is defined as follows.} \shortVersionOnly{\noindent} We define a relation on the set of core formul{\ae} $\core{\mathcal{P}}$, parameterized by a set $D \subseteq \allvars{\mathcal{P}}^1 \cup \mathbb{C}$: \begin{eqnarray} &&\heapex \vec{x}_1 \nheapall \vec{y}_1 ~.~ \psi_1, \heapex \vec{x}_2 \nheapall \vec{y}_2 ~.~ \psi_2 \deriv{D} \heapex \vec{x} \nheapall \vec{y} ~.~ \psi \label{eq:deriv} \\[-1mm] &&\text{if } \psi_1 * \psi_2 \Vdash^* \psi, \vec{x}_1 \cap \vec{x}_2 = \emptyset, \vec{x} = (\vec{x}_1 \cup \vec{x}_2) \cap \fv{\psi}, \vec{y} = ((\vec{y}_1 \cup \vec{y}_2) \cap \fv{\psi}) \setminus \vec{x}, \lroots{\psi} \cap D = \emptyset. 
\nonumber \end{eqnarray} \vspace*{-.5\baselineskip} The composition operator is defined by lifting the $\deriv{}$ relation to sets of core formul{\ae}: \begin{equation}\label{eq:coresep} F_1 \coresep{D} F_2 \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{\psi \mid \phi_1 \in F_1, \phi_2 \in F_2, \phi_1,\phi_2 \deriv{D} \psi} \end{equation} \noindent We show that $\coresep{D}$ is indeed a composition, in the sense of Definition \ref{def:coresep}: \begin{lemma}\label{lemma:coresep} Let $\mathcal{S}$ be a normalized, progressing, connected and e-restricted\xspace set of rules, $D \subseteq \allvars{\mathcal{P}}^1 \cup \mathbb{C}$ be a set of terms and $(\dot{\astore}, \mathfrak{h})$ be an injective structure, with $\mathrm{dom}(\dot{\astore}) \subseteq \allvars{\mathcal{P}}^1 \cup \mathbb{C}$. Let $\mathfrak{h}_1$ and $\mathfrak{h}_2$ be two disjoint heaps, such that:\begin{inparaenum}[(1)] \item\label{it1:coresep} $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$, % \item\label{it2:coresep} $\mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \subseteq \dot{\astore}(\allvars{\mathcal{P}}^1 \cup \mathbb{C})$ and % \item\label{it3:coresep} $\mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \cap \mathrm{dom}(\mathfrak{h}) \subseteq \dot{\astore}(D) \subseteq \mathrm{dom}(\mathfrak{h})$. \end{inparaenum} Then, we have $\coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}} = \coreabs{\dot{\astore},\mathfrak{h}_1}{\mathcal{P}} \coresep{D} \coreabs{\dot{\astore},\mathfrak{h}_2}{\mathcal{P}}$. \end{lemma} \optionalProof{Lemma \ref{lemma:coresep}}{sec:coreabs}{ ``$\subseteq$'' Let $\psi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$ be a core formula. By equation (\ref{eq:coresep}), it is sufficient to show the existence of core formul{\ae} $\psi_i \in \coreabs{\dot{\astore},\mathfrak{h}_i}{\mathcal{P}}$, for $i=1,2$, such that $\psi_1 * \psi_2 \deriv{D} \psi$. 
\noindent ({\bf A}) First, we proceed under the following assumptions: \begin{compactenum} \item $\psi$ is quantifier-free thus, by Definition \ref{def:core-formulae}, it is of the form: \[\psi = \Asterisk_{i=1}^n \underbrace{\left(\Asterisk_{j=1}^{k_i} q_j^i(\vec{u}^i_j) \swand p_i(\vec{t}_i)\right)}_{\stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=}\lambda_{i \in \interv{1}{n}}} * \Asterisk_{i=n+1}^m \underbrace{x_i \mapsto (t^i_1,\ldots,t^i_\mathfrak{K})}_{\stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=}\lambda_{i \in \interv{n+1}{m}}}, \text{ for some } 0 \leq n \leq m,\] % \item $\dot{\astore}$ is bijective, i.e.\ $\mathrm{img}(\dot{\astore}) = \mathbb{L}$; \item $(\dot{\astore},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \psi$ and $\dot{\astore}(\lroots{\psi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$ ($\dagger$) \end{compactenum} We show the existence of two quantifier-free core formul{\ae} $\psi_1,\psi_2$ with $\psi_1, \psi_2 \deriv{D} \psi$, $\dot{\astore} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}_i}{\psi_i}$ and $\roots{\psi_i} \subseteq \allvars{\mathcal{P}}^1 \cup \mathbb{C} \cup \roots{\psi}$, for $i = 1,2$. By definition, there exist $m$ disjoint heaps $\mathfrak{h}'_1, \ldots, \mathfrak{h}'_m$, such that $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2 = \biguplus_{i=1}^m \mathfrak{h}'_i$ and $(\dot{\astore},\mathfrak{h}'_i) \models_{\coreset{\mathcal{S}}} \lambda_i$, for all $i \in \interv{1}{m}$. First, we prove that: \[\lroots{\psi} \cap D = \emptyset. ~(\dagger\dagger)\] Suppose, for a contradiction, that there exists a variable $x \in \lroots{\psi} \cap D$. Then $\dot{\astore}(x) \in \dot{\astore}(\lroots{\psi})$, leading to $\dot{\astore}(x) \not\in \mathrm{dom}(\mathfrak{h})$, by ($\dagger$). 
But we also have $\dot{\astore}(x) \in \dot{\astore}(D)$, hence $\dot{\astore}(D) \not\subseteq \mathrm{dom}(\mathfrak{h})$, which contradicts the hypothesis $\dot{\astore}(D) \subseteq \mathrm{dom}(\mathfrak{h})$ from the statement of the Lemma. Second, we build $\psi_1$ and $\psi_2$, distinguishing the following cases:
\noindent ({\bf A.1}) If for all $i\in \interv{1}{m}$, either $\mathfrak{h}'_i \subseteq \mathfrak{h}_1$ or $\mathfrak{h}'_i \subseteq \mathfrak{h}_2$, then we let $\psi_i \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \Asterisk\{\lambda_j \mid j \in \interv{1}{m},~ \mathfrak{h}'_j \subseteq \mathfrak{h}_i\}$, for $i = 1,2$ (note that we may have $\psi_i = \mathsf{emp}$, if $\mathfrak{h}_i$ is empty). It is clear that the formula $\psi$ can be written in the form $\psi_1 * \psi_2$, up to the commutativity of $*$ and the neutrality of $\mathsf{emp}$ for $*$. Since $\lroots{\psi} \cap D = \emptyset$ by ($\dagger\dagger$), we deduce that $\psi_1, \psi_2 \deriv{D} \psi$ (\ref{eq:deriv}) trivially, since $\psi = \psi_1 * \psi_2$.
\noindent ({\bf A.2}) Otherwise, there exists $i \in \interv{1}{m}$ such that $\mathfrak{h}'_i \not\subseteq \mathfrak{h}_1$ and $\mathfrak{h}'_i \not\subseteq \mathfrak{h}_2$. Thus, necessarily, $\card{\mathfrak{h}'_i} > 1$. Furthermore, since $\card{\mathfrak{h}'_j} = 1$ for all $j \in \interv{n+1}{m}$, we must have $i \in \interv{1}{n}$. For the sake of readability we drop all references to $i$ and write $\lambda_i = \Asterisk_{j=1}^{k} q_j(\vec{u}_j) \swand p(\vec{t})$ instead of $\lambda_i = \Asterisk_{j=1}^{k_i} q^i_j(\vec{u}^i_j) \swand p_i(\vec{t}_i)$. Since $\dot{\astore}$ is bijective by assumption, by Lemma \ref{lemma:bijective-unfolding-sat}, there exists a core unfolding $\lambda_i \sunfold{\coreset{\mathcal{S}}} \varphi_i$, such that $(\dot{\astore},\mathfrak{h}'_i) \models_{\coreset{\mathcal{S}}} \varphi_i$.
Because $\card{\mathfrak{h}'_i}>1$ entails that $\varphi_i\neq \mathsf{emp}$, the rule used to obtain this core unfolding (see Definition \ref{def:simple-formulae}) must have been generated by inference rule (\ref{infrule:notemp}). Since $\mathcal{S}$ is progressing, we deduce that $\varphi_i$ is of the form $t_0 \mapsto (t_1, \ldots, t_\mathfrak{K}) * \Asterisk_{j=1}^{k'} (\gamma_j \swand p'_j(\vec{t}_j))$, for some separating conjunctions of predicate atoms $\gamma_1, \ldots, \gamma_{k'}$ such that $\Asterisk_{j=1}^{k'} \gamma_j = \Asterisk_{j=1}^k q_j(\vec{u}_j)$, and that $t_0 = \mathsf{root}(p(\vec{t}))$. Then $\dot{\astore}(t_0) \in \mathrm{dom}(\mathfrak{h}'_i) \subseteq \mathrm{dom}(\mathfrak{h})$; assume that $\dot{\astore}(t_0) \in \mathrm{dom}(\mathfrak{h}_1)$ (the case $\dot{\astore}(t_0) \in \mathrm{dom}(\mathfrak{h}_2)$ is symmetric). We construct a sequence of formul{\ae} by applying the same process to each occurrence of a subformula of the form $\alpha' \swand p'(\vec{t}')$ such that $\dot{\astore}(\mathsf{root}(p'(\vec{t}'))) \in \mathrm{dom}(\mathfrak{h}_1)$, leading to \(\Asterisk_{j=1}^k q_j(\vec{u}_j) \swand p(\vec{t}) \sunfold{\coreset{\mathcal{S}}}^* \alpha * \Asterisk_{j=1}^h \delta_j \swand r_j(\vec{v}_j)\), where:\begin{compactitem}
%
\item $\alpha$ is a separating conjunction of points-to atoms,
%
\item $\delta_1, \ldots, \delta_h$ are separating conjunctions of predicate atoms, such that $\Asterisk_{j=1}^h \delta_j = \Asterisk_{j=1}^k q_j(\vec{u}_j)$,
%
\item $(\dot{\astore},\mathfrak{h}'_i) \models_{\coreset{\mathcal{S}}} \alpha * \Asterisk_{j=1}^h \delta_j \swand r_j(\vec{v}_j)$,
%
\item $\dot{\astore}(\mathsf{root}(r_j(\vec{v}_j))) \in \mathrm{dom}(\mathfrak{h}_2)$, for all $j \in \interv{1}{h}$.
\end{compactitem}
Let $\lambda_{i,1}^1 \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \Asterisk_{j=1}^h r_j(\vec{v}_j) \swand p(\vec{t})$.
By definition $\mathfrak{h}'_i = \mathfrak{h}^1_{i,1} \uplus \mathfrak{h}'_{i,1}$, with $(\dot{\astore},\mathfrak{h}^1_{i,1}) \models_{\coreset{\mathcal{S}}} \alpha$ and $(\dot{\astore},\mathfrak{h}'_{i,1}) \models_{\coreset{\mathcal{S}}} \Asterisk_{j=1}^h \delta_j \swand r_j(\vec{v}_j)$. Note that by construction $\mathfrak{h}^1_{i,1} \subseteq \mathfrak{h}_1$ (but we do not necessarily have $\mathfrak{h}'_{i,1} \subseteq \mathfrak{h}_2$). Furthermore, it is easy to check that $\alpha \models_{\coreset{\mathcal{S}}} \lambda_{i,1}^1$ (indeed, by construction, $\alpha$ is obtained by starting from $p(\vec{t})$ and repeatedly unfolding all atoms not occurring in $\Asterisk_{j=1}^h r_j(\vec{v}_j)$), hence $(\dot{\astore},\mathfrak{h}^1_{i,1}) \models_{\coreset{\mathcal{S}}} \lambda_{i,1}^1$. By Definition \ref{def:conseq}, we have $\lambda_{i,1}^1 * \left(\Asterisk_{j=1}^h \delta_j \swand r_j(\vec{v}_j)\right) \Vdash^* \lambda_i$. We now prove that: \[\mathsf{root}(r_j(\vec{v}_j)) \in \allvars{\mathcal{P}}^1 \cup \mathbb{C}, \text{ for each } j \in \interv{1}{h}. ~~ (\star) \enspace\] Since $(\dot{\astore},\mathfrak{h}^1_{i,1}) \models_{\coreset{\mathcal{S}}} \lambda_{i,1}^1$, by Lemma \ref{lemma:lhs-root}, we have $\dot{\astore}(\mathsf{root}(r_j(\vec{v}_j))) \in \mathrm{loc}(\mathfrak{h}_1) \cup \dot{\astore}(\mathbb{C})$. If $\dot{\astore}(\mathsf{root}(r_j(\vec{v}_j))) \in \dot{\astore}(\mathbb{C})$, we obtain $\mathsf{root}(r_j(\vec{v}_j)) \in \mathbb{C}$ by injectivity of $\dot{\astore}$. Otherwise $\dot{\astore}(\mathsf{root}(r_j(\vec{v}_j))) \in \mathrm{loc}(\mathfrak{h}_1)$, and since $\dot{\astore}(\mathsf{root}(r_j(\vec{v}_j))) \in \mathrm{dom}(\mathfrak{h}_2) \subseteq \mathrm{loc}(\mathfrak{h}_2)$ by construction, we obtain $\dot{\astore}(\mathsf{root}(r_j(\vec{v}_j))) \in \mathrm{Fr}(\mathfrak{h}_1,\mathfrak{h}_2) \subseteq \dot{\astore}(\allvars{\mathcal{P}}^1 \cup \mathbb{C})$, by hypothesis (\ref{it2:coresep}) of the Lemma.
Since $\dot{\astore}$ is injective, we deduce that $\mathsf{root}(r_j(\vec{v}_j)) \in \allvars{\mathcal{P}}^1 \cup \mathbb{C}$. We repeat the entire process until we get a formula that satisfies Condition ({\bf A.1}). Note that the unfolding terminates because at each step we increase the number $h$ of separating conjunctions $\delta_1, \ldots, \delta_h$ and $\Asterisk_{j=1}^h \delta_j = \Asterisk_{j=1}^k q_j(\vec{u}_j)$, where $k \geq h$ is fixed. If we denote by $s$ the number of unfolding steps, and by $\psi(i)$ the formula obtained after step $i$, we eventually obtain a sequence of formul{\ae} $\psi(s) \Vdash^* \dots \Vdash^* \psi(0) = \psi$ where $\psi(s)$ satisfies Condition ({\bf A.1}), and $(\dot{\astore},\mathfrak{h}) \models \psi(i)$, for all $i = 0,\ldots,s$. By Point ({\bf A.1}), we therefore obtain formul{\ae} $\psi_j$ such that $(\dot{\astore},\mathfrak{h}_j) \models_{\coreset{\mathcal{S}}} \psi_j$, for $j=1,2$ and $\psi_1 * \psi_2 \Vdash^* \psi$, which, by ($\dagger\dagger$), leads to $\psi_1, \psi_2 \deriv{D} \psi$ (\ref{eq:deriv}). We prove that $\dot{\astore}(\lroots{\psi_i}) \cap \mathrm{dom}(\mathfrak{h}_i) = \emptyset$, for $i = 1,2$. Let $i=1$ and $x \in \lroots{\psi_1}$ (the proof is identical for the case $i=2$). If $x \in \lroots{\psi}$ then $\dot{\astore}(x) \not\in \mathrm{dom}(\mathfrak{h})$, by ($\dagger$). Otherwise, $x \not\in \lroots{\psi}$, so $x$ was introduced during the unfolding, hence $\dot{\astore}(x) \in \mathrm{dom}(\mathfrak{h}_2)$, by the construction of $\psi_1$. In both cases, we have $\dot{\astore}(x) \not\in \mathrm{dom}(\mathfrak{h}_1)$. Since $(\dot{\astore},\mathfrak{h}_1) \models_{\coreset{\mathcal{S}}} \psi_1$ and $\psi_1$ is quantifier-free, by construction, we have $\dot{\astore} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}_1}{\psi_1}$, thus $\psi_1 \in \coreabs{\dot{\astore},\mathfrak{h}_1}{\mathcal{P}}$, as required.
Next, we show that for $i= 1, 2$, each root in $\psi_i$ is contained in $\allvars{\mathcal{P}}^1 \cup \mathbb{C} \cup \roots{\psi}$, and that it occurs with multiplicity one. We give the proof when $i = 1$; the proof for $i=2$ is symmetric. First, each $x \in \roots{\psi_1}$ is either a root of $\psi$ or it is introduced by the unfoldings described above. In the second case we have $x \in \allvars{\mathcal{P}}^1 \cup \mathbb{C}$ by ($\star$). Second, we show that all variables from $\roots{\psi_1}$ occur with multiplicity one. Suppose, for a contradiction, that $x$ occurs twice as a root in $\psi_1$. If both occurrences of $x$ are in points-to atoms $x \mapsto (t_1, \ldots, t_\mathfrak{K})$ or in a predicate atom $\delta \swand p(\vec{t})$ with $x = \mathsf{root}(p(\vec{t}))$, then since all atoms are conjoined by separating conjunctions, $\psi_1$ is unsatisfiable, which contradicts the fact that $(\dot{\astore},\mathfrak{h}_1) \models_{\coreset{\mathcal{S}}} \psi_1$. If one occurrence of $x$ occurs in $\lroots{\psi_1}$ then we have shown that $\dot{\astore}(x) \not\in \mathrm{dom}(\mathfrak{h}_1)$, thus the other occurrence of $x$ cannot occur in $\rroots{\psi_1}$, which entails that it also occurs in $\lroots{\psi_1}$. Finally, assume that both occurrences of $x$ occur in $\lroots{\psi_1}$. Because $\psi \in \core{\mathcal{P}}$, it must be the case that at least one occurrence of $x$ was introduced during the unfolding. This entails that $\dot{\astore}(x)\in\mathrm{dom}(\mathfrak{h})$, thus $x$ cannot occur in $\lroots{\psi}$, because $\psi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$ (Definition \ref{def:coreabs}), hence both occurrences of $x$ have been introduced during the unfolding.
But each time a variable $x$ is introduced in $\lroots{\psi_1}$, there is another occurrence of the same variable $x$ that is introduced in $\rroots{\psi_2}$, hence $\psi_2$ is unsatisfiable, which contradicts the fact that $(\dot{\astore},\mathfrak{h}_2) \models_{\coreset{\mathcal{S}}} \psi_2$. \noindent({\bf B}) Let $\psi = \heapex \vec{x} \nheapall \vec{y} \,.\, \varphi$, where $\varphi$ is a quantifier-free core formula in $\core{\mathcal{P}}$ and let $\dot{\astore}$ be an injective store. Note that, since $\psi\in \core{\mathcal{P}}$ and $\mathrm{dom}(\dot{\astore}) \subseteq \allvars{\mathcal{P}}^1$, we have $(\vec{x} \cup \vec{y}) \cap \mathrm{dom}(\dot{\astore}) = \emptyset$. Because $\psi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$, by Definition \ref{def:coreabs}, there exists a witness $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\psi}$, satisfying the three points of Definition \ref{def:coreabs}, and such that: \[\dot{\overline{\astore}}(\lroots{\psi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset.~(\ddagger)\] Note that $\dot{\overline{\astore}}$ is injective by Definition \ref{def:coreabs}, and we can assume w.l.o.g. that it is bijective. To this aim, we consider any bijection $\ell \mapsto x_{\ell}$ between $\mathbb{L} \setminus \mathrm{img}(\dot{\overline{\astore}})$ and $\mathbb{V} \setminus \mathrm{dom}(\dot{\overline{\astore}})$. Such a bijection exists because both $\mathbb{L} \setminus \mathrm{img}(\dot{\overline{\astore}})$ and $\mathbb{V} \setminus \mathrm{dom}(\dot{\overline{\astore}})$ are countably infinite. Let $\dot{\overline{\astore}'}$ be the extension of $\dot{\overline{\astore}}$ with the set of pairs $\set{(x_\ell, \ell) \mid \ell \mapsto x_\ell}$. It is easy to check that $\dot{\overline{\astore}'}$ is bijective.
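The w.l.o.g. step above extends an injective store to a bijection by pairing the unused locations with the unused variables. A minimal sketch of this pairing, with finite sets standing in for the countably infinite $\mathbb{V}$ and $\mathbb{L}$ (all names and values are illustrative only, not part of the construction in the proof):

```python
# Sketch: extend an injective partial map (a "store") to a bijection by
# pairing the leftover variables with the leftover locations one-to-one,
# mirroring the construction of the extended store in the proof.

def extend_to_bijection(store, variables, locations):
    """store: dict mapping some of `variables` to distinct `locations`.
    Returns a bijection between `variables` and `locations` extending it."""
    assert len(set(store.values())) == len(store), "store must be injective"
    free_vars = [x for x in variables if x not in store]
    free_locs = [l for l in locations if l not in store.values()]
    extended = dict(store)
    # Pair the unused variables and unused locations in order.
    for x, l in zip(free_vars, free_locs):
        extended[x] = l
    return extended

s = {"x1": 0, "x2": 3}
b = extend_to_bijection(s, ["x1", "x2", "y1", "y2"], [0, 1, 2, 3])
assert b["x1"] == 0 and b["x2"] == 3            # original store preserved
assert sorted(b.values()) == [0, 1, 2, 3]       # bijective on the universe
```

In the proof the two leftover sets are countably infinite, so the pairing is an arbitrary bijection rather than a positional `zip`; the finite version only illustrates why the extension preserves injectivity.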
Since $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \varphi$ by point \ref{it1:witset} of Definition \ref{def:coreabs} and $\varphi$ is quantifier-free, we have $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\overline{\astore}}}{\mathfrak{h}}{\varphi}$, hence $\varphi \in \coreabs{\dot{\overline{\astore}},\mathfrak{h}}{\mathcal{P}}$, because $\lroots{\varphi} = \lroots{\psi}$ and $\dot{\overline{\astore}}(\lroots{\varphi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$ follows from ($\ddagger$). By case ({\bf A}), there exist quantifier-free core formul{\ae} $\varphi_1, \varphi_2$, such that $\varphi_1, \varphi_2 \deriv{D} \varphi$, $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}_i}{\varphi_i}$ and $\roots{\varphi_i} \subseteq \allvars{\mathcal{P}}^1 \cup \mathbb{C} \cup \roots{\varphi}$, for $i = 1,2$. Let $\dot{\astore}_i$ be the restriction of $\dot{\overline{\astore}}$ to $\fv{\varphi_i} \cup \mathbb{C}$ and define the following sets, for $i=1,2$: \[\vec{x}_i \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{x \in \mathrm{dom}(\dot{\overline{\astore}}) \setminus \mathrm{dom}(\dot{\astore})\mid \dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h}_i)} \hspace*{0.5cm} \vec{y}_i \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \set{x \in \mathrm{dom}(\dot{\astore}_i) \setminus \mathrm{dom}(\dot{\astore}) \mid \dot{\overline{\astore}}(x) \not\in \mathrm{loc}(\mathfrak{h}_i)}\] Note that we do not know at this point whether $\vec{x}_i \subseteq \mathrm{dom}(\dot{\astore}_i)$ (this will be established later), while $\vec{y}_i \subseteq \mathrm{dom}(\dot{\astore}_i)$ holds by definition. We prove that for all variables $x \in \vec{x}_i$, there exists a subformula $\delta$ occurring in $\varphi_i$ such that $x \in \fv{\delta}$, and either $\delta$ is a points-to atom or $\delta = \alpha \swand \beta$ with $x \in \fv{\beta} \setminus \fv{\alpha}$. 
To this aim, we begin by proving that if some formula $\varphi'$ is obtained from the initial formula $\varphi$ by a sequence of unfoldings as defined in Part ({\bf A}) and if $x \in \fv{\varphi'}$, then $\varphi'$ contains a formula of the form above. The proof is by induction on the length of the unfolding: \begin{compactitem} % \item{If $\varphi = \varphi'$, then by the hypothesis $x \not \in \mathrm{dom}(\dot{\astore})$ and $x \in \fv{\varphi}$, thus $x\in \vec{x} \cup \vec{y}$. Since $\dot{\astore}_i(x) \in \mathrm{loc}(\mathfrak{h}_i) \subseteq \mathrm{loc}(\mathfrak{h})$, we have $\dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h})$, hence by Condition (\ref{it3:witset}) of Definition \ref{def:coreabs}, necessarily $x \in \vec{x}$. Then the proof follows immediately from Condition (\ref{core-formulae:exists}) in Definition \ref{def:core-formulae}.} % \item{Otherwise, according to the construction above, $\varphi'$ is obtained from an unfolding $\varphi''$ of $\varphi$, by replacing some formula $\lambda_i = \Asterisk_{j=1}^{k_i} q_j^i(\vec{u}^i_j) \swand p_i(\vec{t}_i)$ in $\varphi''$ by $\lambda_{i,1}^1 * \left(\Asterisk_{j=1}^h (\delta_j \swand r_j(\vec{v}_j))\right)$, with $\lambda_{i,1}^1 = \Asterisk_{j=1}^h \left(r_j(\vec{v}_j) \swand p_i(\vec{t}_i)\right)$, and all atoms in $\delta_j$ occur in $\Asterisk_{j=1}^{k_i} q_j^i(\vec{u}^i_j)$. \begin{compactitem} \item{ If $x$ occurs in $\varphi''$, then by the induction hypothesis $\varphi''$ contains a formula $\delta$ satisfying the condition above. If $\delta$ is distinct from $\lambda_i$ then $\delta$ occurs in $\varphi'$ and the proof is completed. Otherwise, we have $\delta = \alpha \swand \beta$ with $\beta = p_i(\vec{t}_i)$, $\alpha = \Asterisk_{j=1}^{k_i} q_j^i(\vec{u}^i_j)$ and $x \in \fv{\beta} \setminus \fv{\alpha}$. 
We distinguish two cases: If $x \in \fv{r_j(\vec{v}_j)}$, for some $j \in \interv{1}{h}$, then $x \in \fv{r_j(\vec{v}_j)} \setminus\fv{\delta_j}$ (since $x\not \in \fv{\alpha}$ and $\fv{\delta_j} \subseteq \fv{\alpha}$), thus the formula $\Asterisk_{j=1}^h \delta_j \swand r_j(\vec{v}_j)$ fulfills the required property. Otherwise, $x \in \fv{p_i(\vec{t}_i)} \setminus \fv{\Asterisk_{j=1}^h r_j(\vec{v}_j)}$ and $\lambda_{i,1}^1$ fulfills the property.} % \item{ Now assume that $x$ does not occur in $\varphi''$. This necessarily entails that $x \in \fv{r_j(\vec{v}_j)}$, for some $j \in \interv{1}{h}$, and that $x\not \in \fv{\delta_j}$, thus $x \in \fv{r_j(\vec{v}_j)} \setminus \fv{\delta_j}$ and the formula $\delta_j \swand r_j(\vec{v}_j)$ fulfills the required property.} \end{compactitem}} \end{compactitem} We show that such a formula $\delta$ cannot occur in $\varphi_{3-i}$, hence necessarily occurs in $\varphi_i$, which entails that $x \in \mathrm{dom}(\dot{\astore}_i)$, and also that $\vec{x}_1 \cap \vec{x}_2 = \emptyset$. This is the case because if $\delta$ occurs in $\varphi_{3-i}$, then there exists a subheap $\mathfrak{h}_{3-i}'$ of $\mathfrak{h}_{3-i}$ such that $(\dot{\overline{\astore}},\mathfrak{h}_{3-i}') \models \delta$. By Lemma \ref{lemma:varrightonly}, since $x \in \fv{\beta} \setminus \fv{\alpha}$ when $\delta$ is of the form $\alpha \swand\beta$, we have $\dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h}_{3-i})$. Furthermore, by hypothesis $x \in \vec{x}_i$, hence $\dot{\overline{\astore}}(x)\in \mathrm{loc}(\mathfrak{h}_i)$. Therefore $\dot{\overline{\astore}}(x) \in \mathrm{Fr}(\mathfrak{h}_1, \mathfrak{h}_2) \subseteq \mathrm{img}(\dot{\astore})$ by the hypothesis (\ref{it2:coresep}) of the Lemma. Since $\dot{\overline{\astore}}$ is injective, this entails that $x \in \mathrm{dom}(\dot{\astore})$, which contradicts the definition of $\vec{x}_i$. 
Let $\psi_i \stackrel{\scalebox{0.5}{\text{$\mathsf{def}$}}}{=} \heapex\vec{x}_i \nheapall\vec{y}_i ~.~ \varphi_i$, for $i = 1,2$. Due to the previous property, $\psi_i$ satisfies Condition (\ref{core-formulae:exists}) of Definition \ref{def:core-formulae}. By definition of $\vec{y}_i$, we have $\vec{y}_i \subseteq \mathrm{dom}(\dot{\astore}_i)$ and by definition of $\dot{\astore}_i$, we have $\mathrm{dom}(\dot{\astore}_i) \subseteq \fv{\varphi_i} \cup \mathbb{C}$, thus $\psi_i$ also fulfills Condition (\ref{core-formulae:no_useless_var}) of the same definition. By part ({\bf A}), $\varphi_i$ is a core formula, hence Condition (\ref{core-formulae:roots_are_distinct}) is satisfied, which entails that $\psi_i$ is a core formula. Still by part ({\bf A}) of the proof, $(\dot{\overline{\astore}},\mathfrak{h}_i) \models_{\coreset{\mathcal{S}}} \varphi_i$, thus we also have $(\dot{\astore}_i,\mathfrak{h}_i) \models_{\coreset{\mathcal{S}}} \varphi_i$, by the definition of $\dot{\astore}_i$, for $i=1,2$. By the definition of $\vec{x}_i$ and $\vec{y}_i$, we have $\dot{\astore}_i \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}_i}{\psi_i}$ and since $\dot{\astore}_i(\lroots{\varphi_i}) = \dot{\overline{\astore}}(\lroots{\varphi_i})$ and $\dot{\overline{\astore}}(\lroots{\varphi_i}) \cap \mathrm{dom}(\mathfrak{h}_{i}) = \emptyset$, we obtain $\psi_i \in \coreabs{\dot{\astore}, \mathfrak{h}_i}{\mathcal{P}}$, for $i=1,2$. Since $\varphi_1, \varphi_2 \deriv{D} \varphi$ and $\varphi_1,\varphi_2$ are quantifier-free, we have, by definition of $\deriv{D}$: \[\psi_1, \psi_2 \deriv{D} \heapex\vec{x}'\nheapall\vec{y}'\varphi, \text{ where } \vec{x}' = (\vec{x}_1 \cup \vec{x}_2) \cap \fv{\varphi} \text{ and } \vec{y}' = ((\vec{y}_1 \cup \vec{y}_2) \cap \fv{\varphi}) \setminus \vec{x}'.\] To complete the proof, it is sufficient to show that $\vec{x}' = \vec{x}$ and that $\vec{y} = \vec{y}'$, so that $\heapex\vec{x}'\nheapall\vec{y}'\varphi = \psi$.
\begin{compactitem} % \item[$\vec{x}' = \vec{x}$]{``$\subseteq$'' Let $x \in \vec{x}'$. We have $x \in \vec{x}_i$, for some $i = 1,2$, and $x\in \fv{\varphi}$. By definition of $\vec{x}_i$, this entails that $x \in \mathrm{dom}(\dot{\overline{\astore}}) \setminus \mathrm{dom}(\dot{\astore})$ and that $\dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h}_i) \subseteq \mathrm{loc}(\mathfrak{h})$. Since $x \in \fv{\varphi}$ and $x \in \mathrm{dom}(\dot{\overline{\astore}}) \setminus \mathrm{dom}(\dot{\astore})$, necessarily $x \in \vec{x} \cup \vec{y}$, and because of Condition (\ref{it3:witset}) in Definition \ref{def:coreabs}, we have $x \not \in \vec{y}$. Hence $x \in \vec{x}$. ``$\supseteq$'' Let $x \in \vec{x}$. We have $x \in \fv{\varphi}$ by Definition \ref{def:core-formulae} (\ref{core-formulae:exists}), and $\dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h})$ by Definition \ref{def:coreabs} (\ref{it1:witset}), thus $\dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h}_i)$, for some $i = 1,2$, so that $x \in \vec{x}_i$. Consequently $x \in \vec{x}'$.} % \item[$\vec{y} = \vec{y}'$]{``$\subseteq$'' Let $y \in \vec{y}'$. By definition, we have $y \in \vec{y_i}$ for some $i = 1,2$, $y \in \fv{\varphi}$, and $y \not \in \vec{x} = \vec{x'}$. Since $y \in \vec{y_i}$, we have $y \not \in \mathrm{dom}(\dot{\astore})$, thus $y \not \in \fv{\psi}$, hence $y \in \vec{x} \cup \vec{y}$. Since $y \not \in \vec{x}$, we deduce that $y \in \vec{y}$. ``$\supseteq$'' Let $y\in \vec{y}$. By definition, $y \not \in \mathrm{dom}(\dot{\astore})$ and $y \not \in \vec{x}$, moreover $y \in \fv{\varphi}$, by Definition \ref{def:core-formulae} (\ref{core-formulae:no_useless_var}). By Definition \ref{def:coreabs} (\ref{it3:witset}), we have $\dot{\overline{\astore}}(y) \not \in \mathrm{loc}(\mathfrak{h})$. By definition of $\deriv{D}$, since $y \in \fv{\varphi}$, necessarily $y \in \fv{\varphi_i}$, for some $i=1,2$. 
Since $y \not \in \mathrm{dom}(\dot{\astore})$, we deduce that $y \in \vec{x}_i \cup \vec{y}_i$. Since $\dot{\overline{\astore}}(y) \not \in \mathrm{loc}(\mathfrak{h})$, we have $\dot{\overline{\astore}}(y) \not \in \mathrm{loc}(\mathfrak{h}_i)$, hence $y \in \vec{y}_i$. Consequently, $y \in \vec{y}'$. } \end{compactitem} \noindent``$\supseteq$'' Let $\psi \in \coreabs{\dot{\astore},\mathfrak{h}_1}{\mathcal{P}} \coresep{D} \coreabs{\dot{\astore},\mathfrak{h}_2}{\mathcal{P}}$ be a core formula. By the definition of $\coresep{D}$ (\ref{eq:coresep}), there exists $\psi_i \in \coreabs{\dot{\astore},\mathfrak{h}_i}{\mathcal{P}}$, for $i = 1,2$, such that $\psi_1, \psi_2 \deriv{D} \psi$. By the definition of $\deriv{D}$ (\ref{eq:deriv}), we have $\psi_i = \heapex \vec{x}_i \nheapall \vec{y}_i ~.~ \phi_i$, for $i = 1,2$, with $\vec{x} = (\vec{x}_1 \cup \vec{x}_2) \cap \fv{\phi}$, $\vec{y} = ((\vec{y}_1 \cup \vec{y}_2) \cap \fv{\phi}) \setminus \vec{x}$, $\vec{x}_1 \cap \vec{x}_2 = \emptyset$ and $\psi = \heapex \vec{x} \nheapall \vec{y} ~.~ \phi$, where $\phi_1$, $\phi_2$ and $\phi$ are quantifier-free core formul{\ae} and $\lroots{\phi} \cap D = \emptyset$. Since $\psi_i \in \coreabs{\dot{\astore},\mathfrak{h}_i}{\mathcal{P}}$, by Definition \ref{def:coreabs}, there exist witnesses $\dot{\overline{\astore}}_i \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}_i}{\nheapall \vec{y}_i ~.~ \phi_i}$, such that $\dot{\overline{\astore}}_i(\vec{x}_i) \subseteq \mathrm{loc}(\mathfrak{h}_i)$ and $\dot{\overline{\astore}}_i(\lroots{\phi_i}) \cap \mathrm{dom}(\mathfrak{h}_i) = \emptyset$, for $i=1,2$. W.l.o.g. we can choose these witnesses such that $\mathrm{dom}(\dot{\overline{\astore}}_i) = \vec{x}_i \cup \mathrm{dom}(\dot{\astore})$. 
Let $\dot{\overline{\astore}}$ be any extension of $\dot{\overline{\astore}}_1 \cup \dot{\overline{\astore}}_2$ such that $\vec{y}_1 \cup \vec{y}_2 \subseteq \mathrm{dom}(\dot{\overline{\astore}})$ and $\dot{\overline{\astore}}(y_1) \neq \dot{\overline{\astore}}(y_2) \not\in \mathrm{loc}(\mathfrak{h}) \cup \mathrm{img}(\dot{\overline{\astore}}_1) \cup \mathrm{img}(\dot{\overline{\astore}}_2)$, for all variables $y_1 \neq y_2 \in \vec{y}_1 \cup \vec{y}_2$. Note that such an extension exists, because $\mathbb{L}$ is infinite and $\vec{y}_1$, $\vec{y}_2$ are finite. Moreover, $\dot{\overline{\astore}}$ is a well-defined store, because $\dot{\overline{\astore}}_1$ and $\dot{\overline{\astore}}_2$ both agree over $\mathrm{dom}(\dot{\astore})$ and $\vec{x}_1 \cap \vec{x}_2 = \emptyset$. We prove that $\dot{\overline{\astore}}$ is injective. Suppose, for a contradiction, that $\dot{\overline{\astore}}(x_1) = \dot{\overline{\astore}}(x_2)$, for some variables $x_1 \neq x_2 \in \mathrm{dom}(\dot{\overline{\astore}})$. By the definition of $\dot{\overline{\astore}}$, since $\dot{\overline{\astore}}_1$ and $\dot{\overline{\astore}}_2$ are injective, the only possibility is $x_i \in \mathrm{dom}(\dot{\overline{\astore}}_i) \setminus \mathrm{dom}(\dot{\overline{\astore}}_{3-i})$, for $i=1,2$ (hence $x_i \not \in \mathrm{dom}(\dot{\astore})$). Then $x_i \in \vec{x}_i$ must be the case, thus $\dot{\overline{\astore}}_i(x_i) \in \mathrm{loc}(\mathfrak{h}_i)$, leading to $\dot{\overline{\astore}}_1(x_1) \in \mathrm{Fr}(\mathfrak{h}_1,\mathfrak{h}_2) \subseteq \mathrm{img}(\dot{\astore})$, by the hypothesis of the Lemma, hence $x_i \in \mathrm{dom}(\dot{\astore})$, by injectivity of $\dot{\astore}$, which yields a contradiction. We prove next that $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\phi}$. 
Since $\dot{\overline{\astore}}_i \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}_i}{\nheapall \vec{y}_i ~.~ \phi_i}$, we have $(\dot{\overline{\astore}}_i, \mathfrak{h}_i) \models_{\coreset{\mathcal{S}}} \nheapall \vec{y}_i ~.~ \phi_i$, for $i = 1,2$. We show that $\dot{\overline{\astore}}(\vec{y}_i) \cap \mathrm{loc}(\mathfrak{h}_i) = \emptyset$, for $i = 1,2$. Suppose, for a contradiction, that $i=1$ and $\dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h}_1)$, for some $x \in \vec{y}_1$ (the proof when $i=2$ is symmetric). By definition of $\dot{\overline{\astore}}$, this is possible only if $x \in \vec{x}_2$, and this entails that $\dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h}_2)$, thus $\dot{\overline{\astore}}(x) \in \mathrm{Fr}(\mathfrak{h}_1,\mathfrak{h}_2) \subseteq \dot{\astore}(\allvars{\mathcal{P}}^1 \cup \mathbb{C})$, by the hypothesis of the Lemma. By the injectivity of store $\dot{\astore}$, this entails that $x\in \mathrm{dom}(\dot{\astore})$, which contradicts the fact that $x \in \vec{x}_2$ (since, by definition of $\vec{x}_2$, we have $\vec{x}_2 \cap \mathrm{dom}(\dot{\astore}) = \emptyset$). Then $\dot{\overline{\astore}}(\vec{y}_i) \cap \mathrm{loc}(\mathfrak{h}_i) = \emptyset$, hence $(\dot{\overline{\astore}},\mathfrak{h}_i) \models_{\coreset{\mathcal{S}}} \phi_i$, for $i = 1,2$. Consequently, $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi_1 * \phi_2$, leading to $(\dot{\overline{\astore}},\mathfrak{h}) \models_{\coreset{\mathcal{S}}} \phi$, by Lemma \ref{lemma:conseq}, since $\phi_1 * \phi_2 \Vdash^* \phi$. Moreover, $\dot{\overline{\astore}}(\vec{x}_1 \cup \vec{x}_2) = \dot{\overline{\astore}}_1(\vec{x}_1) \cup \dot{\overline{\astore}}_2(\vec{x}_2) \subseteq \mathrm{loc}(\mathfrak{h}_1) \cup \mathrm{loc}(\mathfrak{h}_2) = \mathrm{loc}(\mathfrak{h})$ and $\dot{\overline{\astore}}(\vec{y}_i) \cap \mathrm{loc}(\mathfrak{h}) = \emptyset$, for $i=1,2$, by the definition of $\dot{\overline{\astore}}$.
Then $\dot{\overline{\astore}} \in \witset{\mathcal{S}}{\dot{\astore}}{\mathfrak{h}}{\phi}$, by Definition \ref{def:coreabs}. Finally, we prove that $\dot{\overline{\astore}}(\lroots{\phi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$. Suppose, for a contradiction, that there exists $x \in \lroots{\phi}$ such that $\dot{\overline{\astore}}(x) \in \mathrm{dom}(\mathfrak{h})$. By Definition \ref{def:conseq}, we have $\lroots{\phi} \subseteq \lroots{\phi_1} \cup \lroots{\phi_2}$ and we assume that $x \in \lroots{\phi_1}$ (the case $x \in \lroots{\phi_2}$ is symmetrical). Since $(\dot{\overline{\astore}},\mathfrak{h}_1) \models_{\coreset{\mathcal{S}}} \phi_1$, we obtain $\dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h}_1) \cup \dot{\overline{\astore}}(\mathbb{C})$, by Lemma \ref{lemma:lhs-root}, and since $x \not\in \mathbb{C}$ and $\dot{\overline{\astore}}$ is injective, we obtain $\dot{\overline{\astore}}(x) \in \mathrm{loc}(\mathfrak{h}_1)$. Moreover, we have $\dot{\overline{\astore}}_1(x) \not\in \mathrm{dom}(\mathfrak{h}_1)$, hence $\dot{\overline{\astore}}_1(x) \in \mathrm{dom}(\mathfrak{h}_2) \subseteq \mathrm{loc}(\mathfrak{h}_2)$. Thus $\dot{\overline{\astore}}(x) \in \mathrm{Fr}(\mathfrak{h}_1,\mathfrak{h}_2) \cap \mathrm{dom}(\mathfrak{h}) \subseteq \dot{\overline{\astore}}(D)$, leading to $x \in D$, by the injectivity of $\dot{\overline{\astore}}$. This contradicts the hypothesis $\lroots{\phi} \cap D = \emptyset$ (\ref{eq:deriv}). We obtain that $\dot{\overline{\astore}}(\lroots{\phi}) \cap \mathrm{dom}(\mathfrak{h}) = \emptyset$, thus $\phi \in \coreabs{\dot{\astore},\mathfrak{h}}{\mathcal{P}}$. $\Box$} \section{Main Result} \label{sec:complexity} In this section, we state the main complexity result of the paper. 
As a prerequisite, we prove that the size of the core formul{\ae} needed to solve an entailment problem $\mathcal{P}$ is polynomial in $\probwidth{\mathcal{P}}$, and that the number of such formul{\ae} is simply exponential in $\probwidth{\mathcal{P}} + \log(\size{\mathcal{P}})$. \begin{lemma}\label{lem:card} Given an entailment problem $\mathcal{P}$, for every formula $\phi \in \core{\mathcal{P}}$, we have $\size{\phi} = \bigO(\probwidth{\mathcal{P}}^2)$ and $\card{\core{\mathcal{P}}} = 2^{\bigO(\probwidth{\mathcal{P}}^3 \times \log(\size{\mathcal{P}}))}$. \end{lemma} \optionalProof{Lemma \ref{lem:card}}{sec:complexity}{ Let $\phi \in \core{\mathcal{P}}$ be a core formula. Then $\phi$ can be viewed as a formula built over atoms of the form $p(\vec{t})$ and $t_0 \mapsto (t_1, \ldots, t_\mathfrak{K})$ using the connectives $*$, $\swand$ and the quantifiers $\heapex$ and $\nheapall$. By Definition \ref{def:core-formulae} (\ref{core-formulae:roots_are_distinct}), $\phi$ contains at most $\card{\allvars{\mathcal{P}}}$ occurrences of such atoms. Since, by points (\ref{core-formulae:no_useless_var}) and (\ref{core-formulae:exists}) of Definition \ref{def:core-formulae}, all the variables in $\phi$ necessarily occur in an atom, $\phi$ contains at most $\card{\allvars{\mathcal{P}}}\times \alpha$ (bound or free) variables, where $\alpha = \max(\{ \#p \mid p \in \mathbb{P} \} \cup \{ \mathfrak{K}+1 \})$ denotes the maximal arity of the relation symbols (including $\mapsto$) in $\phi$. Since each atom is of size at most $\alpha+1$ and since there is at most one connective $*$ or $\swand$ for each atom, we deduce that $\size{\phi} \leq 2\times\card{\allvars{\mathcal{P}}}\times \alpha + \card{\allvars{\mathcal{P}}}\times (\alpha + 2)$.
By definition, we have $\alpha \leq \probwidth{\mathcal{P}}$, and $\allvars{\mathcal{P}}$ is chosen in such a way that $\card{\allvars{\mathcal{P}}}= 2\times\probwidth{\mathcal{P}}$, thus $\size{\phi} = \bigO(\probwidth{\mathcal{P}}^2)$. The symbols that may occur in the formula include the set of free and bound variables, the predicate symbols, and the symbols $\mapsto$, $*$, $\swand$, $\nheapall$, $\heapex$, yielding at most $(\card{\allvars{\mathcal{P}}} \times \alpha) + \size{\mathcal{P}} + 5 \leq \probwidth{\mathcal{P}}^2 + \size{\mathcal{P}} + 5$ symbols. Thus there are at most $(\probwidth{\mathcal{P}}^2 + \size{\mathcal{P}} + 5)^{\bigO(\probwidth{\mathcal{P}}^2)} = 2^{\bigO(\probwidth{\mathcal{P}}^3 \times \log(\size{\mathcal{P}}))}$ core formul{\ae} in $\core{\mathcal{P}}$. $\Box$} \begin{theorem}\label{thm:main} Checking the validity of progressing, connected and e-restricted\xspace entailment problems is $2$-$\mathsf{EXPTIME}$-complete. \end{theorem} \proof{ $2$-$\mathsf{EXPTIME}$-hardness follows from \cite{DBLP:conf/lpar/EchenimIP20}; since the reduction in \cite{DBLP:conf/lpar/EchenimIP20} involves no (dis-)equality, the considered systems are trivially e-restricted\xspace. We now prove $2$-$\mathsf{EXPTIME}$-membership. Let $\mathcal{P}$ be an e-restricted\xspace problem. By Lemma \ref{lemma:normalized}, we compute, in time $\size{\mathcal{P}} \cdot 2^{\bigO(\probwidth{\mathcal{P}}^2)}$, an equivalent normalized e-restricted\xspace problem $\mathcal{P}_n$ with $\size{\mathcal{P}_n} = \size{\mathcal{P}} \times 2^{\bigO(\probwidth{\mathcal{P}}^2)}$ and $\probwidth{\mathcal{P}_n} = \bigO(\probwidth{\mathcal{P}}^2)$.
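The counting behind the size bound of Lemma \ref{lem:card} can be replayed numerically; the sketch below is only an illustration (the chosen width and arity are placeholders, not values from any actual entailment problem):

```python
# Sketch of the counting in Lemma "lem:card": a core formula has at most
# n = card(V(P)) atom occurrences of maximal arity alpha, hence at most
# n * alpha variable occurrences, one connective per atom, and atoms of
# size at most alpha + 1, giving size(phi) <= 2*n*alpha + n*(alpha + 2).

def size_bound(n_atoms, alpha):
    return 2 * n_atoms * alpha + n_atoms * (alpha + 2)

width = 5                  # hypothetical probwidth(P)
n_atoms = 2 * width        # card(V(P)) = 2 * probwidth(P)
alpha = width              # alpha <= probwidth(P)
assert size_bound(n_atoms, alpha) == 170   # quadratic in probwidth(P)
```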
We fix an arbitrary set of variables $\allvars{\mathcal{P}_n} = \allvars{\mathcal{P}_n}^1 \uplus \allvars{\mathcal{P}_n}^2$ with $\card{\allvars{\mathcal{P}_n}^i} = \probwidth{\mathcal{P}_n}$, for $i = 1,2$, and we compute the relation $\mathcal{F}_{\mathcal{P}_n}$, using a Kleene iteration, as explained in Section \ref{sec:coreabs} (Lemma \ref{lemma:profile}). By Lemma \ref{lem:card}, if $\psi \in \core{\mathcal{P}_n}$ then $\size{\psi}= \bigO(\probwidth{\mathcal{P}}^2)$ and if $(\psi,F) \in \mathcal{F}_{\mathcal{P}_n}$ then $\card{F} = 2^{\bigO(\probwidth{\mathcal{P}_n}^3 \times \log(\size{\mathcal{P}_n}))} = 2^{\bigO(\probwidth{\mathcal{P}}^8 \times \log(\size{\mathcal{P}}))}$, hence $\mathcal{F}_{\mathcal{P}_n}$ can be computed in $2^{2^{\bigO(\probwidth{\mathcal{P}}^8 \times \log(\size{\mathcal{P}}))}}$ steps. It thus suffices to check that each of these steps can be performed in polynomial time w.r.t.\ $\card{\core{\mathcal{P}_n}}$ and $\size{\mathcal{P}_n}$. This is straightforward for points-to atoms, predicate atoms and existential formul{\ae}, by iterating on the rules in $\mathcal{P}_n$ and applying the construction rules (\ref{eq:pto-core}), (\ref{eq:pred-core}) and (\ref{eq:ex-core}) respectively. For the disjoint composition, one has to compute the relation $\Vdash^*$, needed to build the operator $\coresep{D}$, according to (\ref{eq:deriv}) and (\ref{eq:coresep}). We again use a Kleene iteration. It is easy to check that $\phi \Vdash \psi \Rightarrow \size{\psi} \leq \size{\phi}$; furthermore, one only needs to check relations of the form $\phi_1 * \phi_2 \Vdash \psi$ with $\phi_1, \phi_2, \psi \in \core{\mathcal{P}_n}$. This entails that the number of iteration steps is $2^{\bigO(\probwidth{\mathcal{P}}^8 \times \log(\size{\mathcal{P}}))}$ and, moreover, each step can be performed in time polynomial w.r.t.\ $\card{\core{\mathcal{P}_n}}$. Finally, we apply Lemma \ref{lemma:entailment} to check that all the entailments in $\mathcal{P}_n$ are valid.
This test can be performed in time polynomial w.r.t.\ $\card{\mathcal{F}_{\mathcal{P}_n}}$ and $\size{\mathcal{P}_n}$. $\Box$} \section{Conclusion and Future Work} We presented a class of $\mathsf{SL}$ formul{\ae} built from a set of inductively defined predicates, used to describe pointer-linked recursive data structures, whose entailment problem is $2$-$\mathsf{EXPTIME}$-complete. This fragment, consisting of so-called e-restricted\xspace formul{\ae}, is a strict generalization of previous work defining three sufficient conditions for the decidability of entailments between $\mathsf{SL}$ formul{\ae}, namely progress, connectivity and establishment \cite{IosifRogalewiczSimacek13,KatelaanMathejaZuleger19,PMZ20}. On one hand, every progressing, connected and established entailment problem can be translated into an e-restricted\xspace problem. On the other hand, the models of e-restricted\xspace formul{\ae} form a strict superset of the models of established formul{\ae}. The proof of the $2$-$\mathsf{EXPTIME}$\ upper bound for e-restricted\xspace entailments builds on a novel technique used to prove the upper bound for established entailments \cite{KatelaanMathejaZuleger19,PMZ20}. A natural question is whether the e-restrictedness\xspace condition can be dropped. We conjecture that this is not the case, and that entailment is undecidable for progressing, connected and non-e-restricted\xspace systems. Another issue is whether the generalization of symbolic heaps to use guarded negation, magic wand and septraction from \cite{PZ20} is possible for e-restricted\xspace entailment problems. The proof of these conjectures is on-going work. \longVersionOnly{ Future work focuses on finding efficient ways to implement the algorithm in this paper, such as exploiting relationships with SMT solving, and on applying these techniques to combinations of symbolic heaps with SMT-supported theories of data (integers, real numbers, strings, sets, etc.).
As evidenced by Example \ref{ex:acyclic-lists}, non-e-restricted\xspace rules can sometimes be transformed into e-restricted\xspace ones by replacing variables with constants (and propagating these replacements into the rules). It would be interesting from a practical point of view to automate this transformation and identify syntactic conditions ensuring that it is applicable. }
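The saturation computations invoked in the proof of Theorem \ref{thm:main} (for $\mathcal{F}_{\mathcal{P}_n}$ and for the relation $\Vdash^*$) are Kleene iterations over a finite universe of facts. The generic pattern can be sketched as follows (a toy illustration: the facts and the derivation rule are placeholders, not the actual data structures of the decision procedure):

```python
# Generic Kleene iteration: saturate a set of facts under a monotone
# one-round derivation function until a fixpoint is reached. Termination
# follows because the universe of facts is finite, just as the bound on
# |core(P)| bounds the number of iterations in the proof.

def kleene_fixpoint(initial_facts, step):
    """step: monotone function mapping a frozenset of facts to the set of
    facts derivable from it in one round."""
    facts = frozenset(initial_facts)
    while True:
        new = frozenset(step(facts)) | facts
        if new == facts:
            return facts
        facts = new

# Toy instance: transitive closure of a derivability relation.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def one_step(facts):
    return {(x, w) for (x, y) in facts for (z, w) in facts if y == z}

closure = kleene_fixpoint(edges, one_step)
assert ("a", "d") in closure    # derived after two rounds
```

The same loop shape applies whether the facts are pairs of core formul{\ae} related by $\Vdash$ or entries $(\psi,F)$ of the profile relation; only the `step` function changes.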
\section{Introduction} \label{intro} The current cosmological paradigm of a flat universe with low matter density (\omegam) consisting of Cold Dark Matter (CDM) and dark energy has become established recently through a variety of astronomical observations, especially the Cosmic Microwave Background (CMB), supernova Hubble diagram, and statistics of large galaxy surveys \citep[e.g.,][]{sper03a,asti06a,sper06a}. These key techniques probe the large-scale, high-redshift universe where the development of cosmic structure may be studied in the linear clustering regime ($\delta\ll 1$). Although the standard model appears to have withstood every test thus far \citep[e.g.,][]{prim06a}, testing the \lcdm\ model on small scales, fully into the non-linear regime ($\delta\gg 1$), will further our understanding of galaxy formation and evolution and may also refine the fundamental parameters of the world model itself. The dark matter halos of galaxies and clusters are the largest virialized manifestations of small-scale, non-linear clustering. An important probe of the structure of dark matter halos is the relationship between concentration and virial mass. The concentration parameter is defined as, $c\equiv r_{\Delta}/r_s$, where $r_{\Delta}$ -- the ``virial radius'' -- is usually taken to be the radius within which the average density equals $\Delta\rho_c$, where $\rho_c$ is the critical density of the universe, and $\Delta$ is a number typically between $100-500$. The quantity $r_s$ is the scale radius of the NFW profile \citep{nfw}, but it is replaced by $r_{-2}$, the radius where the logarithmic density slope equals -2, for more general profiles \citep[][]{nava04a,grah06a}. The virial mass ($M$) is the mass enclosed within $r_{\Delta}$. 
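The definitions above tie $c$, $r_\Delta$, and $M$ together, and the relation is easy to evaluate numerically. The sketch below computes the virial radius from $M$, sets $r_s = r_\Delta/c$, and fixes the NFW normalization so that the enclosed mass at $r_\Delta$ equals $M$ (all numerical values, including $\Delta=200$, the toy halo parameters, and the approximate critical density for $h=0.7$, are illustrative only):

```python
import math

# Sketch: virial radius r_Delta and NFW scale radius r_s for a halo of
# virial mass M and concentration c. Units: Msun and Mpc.
rho_c = 2.775e11 * 0.7**2      # ~ critical density for h = 0.7, Msun/Mpc^3
Delta = 200.0                   # overdensity defining the "virial" radius

def virial_radius(M, Delta=Delta):
    """r_Delta such that the mean density within it is Delta * rho_c."""
    return (3.0 * M / (4.0 * math.pi * Delta * rho_c)) ** (1.0 / 3.0)

def nfw_mass(r, rs, rho_s):
    """Mass enclosed within radius r for an NFW profile with scale radius rs."""
    x = r / rs
    return 4.0 * math.pi * rho_s * rs**3 * (math.log(1.0 + x) - x / (1.0 + x))

M, c = 1.0e14, 6.0              # toy cluster: 10^14 Msun with c = 6
r_delta = virial_radius(M)
r_s = r_delta / c               # c = r_Delta / r_s by definition
# Fix the NFW normalization rho_s so that M(< r_Delta) = M:
rho_s = M / (4.0 * math.pi * r_s**3 * (math.log(1.0 + c) - c / (1.0 + c)))
assert abs(nfw_mass(r_delta, r_s, rho_s) - M) / M < 1e-12
```

Given an observed mass profile, inverting these relations yields the $(c, M)$ pair used in the fits discussed below.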
N-body simulations of CDM models find that $c$ declines slowly with increasing $M$ but with substantial intrinsic scatter, independent of $M$ (\citealt{bull01a}, hereafter B01; \citealt{dola04a}, hereafter D04; \citealt{kuhl05a,shaw06a,macc06a}). For a given $M$ the value of $c$ in CDM models varies significantly with changes in cosmological parameters, particularly the normalization \sige, \omegam, and $w$, the parameter representing the dark energy equation of state (\citealt{eke01a,alam02a}; D04; \citealt{kuhl05a}). The amount of intrinsic scatter in the $c-M$ relation is a robust prediction of CDM models, though preferentially selecting relaxed, early forming systems yields smaller scatter than for the entire halo population \citep{jing00a,wech02a,macc06a}. The expected decrease of $c$ with increasing $M$ in CDM models has yet to be confirmed by observations. Optical studies of groups and clusters ($M>10^{13}\msun$) using several different techniques -- galaxy dynamics \citep{bivi06a,loka06a}, redshift-space caustics \citep{rine06a}, and weak gravitational lensing \citep{mand06a} -- all are consistent with no variation in $c$. However, the $c-M$ relations obtained by these studies are consistent with the concordance \lcdm\ model. X-ray studies of the $c-M$ relation using data from the \asca\ satellite \citep{wu00a,sato00a} found, $c\sim M_{200}^{-0.5}$, much steeper than produced in CDM models (B01; D04; \citealt{macc06a}). However, it has only become possible with data from the \chandra\ and \xmm\ satellites to obtain spatially resolved temperature measurements of sufficient quality for reliable X-ray constraints on the mass profiles of elliptical galaxies, galaxy groups, and clusters \citep[e.g., see reviews by][and references therein]{buot03d,arna05a}. Indeed, our recent study of the mass profiles of seven early-type galaxies with \chandra\ indicates $c-M$ values consistent with \lcdm\ \citep{hump06b}. 
The $c-M$ relations for clusters ($M>10^{14}\msun$) obtained using \xmm\ \citep{poin05a} and \chandra\ \citep{vikh06a} each are consistent with no variation in $c$ and with the gentle decline with increasing $M$ expected for CDM models. In each case the scatter in $\logten\ c$ about the mean relation is not well constrained and is consistent with the CDM prediction. Our present investigation aims to improve significantly the constraints on the $c-M$ relation by analyzing a wider mass range with many more systems than the previous \chandra\ and \xmm\ studies. For this purpose we undertook a program to obtain accurate mass constraints on relaxed systems with $10^{12}\la M \la 10^{14}\msun$, representing 24 individual early-type galaxies and galaxy groups/clusters, each of which possesses high-quality \chandra\ and/or \xmm\ data \citep{hump06b,gast06a,zapp06a}. To these systems we added results for 15 relaxed, massive clusters from \citet{poin05a} and \citet{vikh06a}. Using this combined sample, optimized for X-ray mass measurements, we obtain empirical constraints on the local $c-M$ relation with a simple power-law model and compare this model to the predictions of a suite of CDM models. We note that the primary purpose of this paper is to assemble accurate $c-M$ measurements from \chandra\ and \xmm\ observations from galaxy to cluster scales and to test whether $c$ decreases with increasing $M$ similar to that predicted by CDM models. However, we also make an initial attempt to constrain the parameters of CDM models to examine the efficacy of the $c-M$ relation in this regard and to highlight sources of systematic error both in the models and data that need to be addressed with future studies. The paper is organized as follows. We describe the X-ray $c$ and $M$ measurements in \S \ref{obs}. Our procedure for fitting a power-law model to the $c-M$ data is discussed in \S \ref{method}. 
We define the cosmological models that are compared to the X-ray data in \S \ref{models}. We present our results in \S \ref{results} and discuss sources of systematic error in \S \ref{sys}. In \S \ref{conc} we give our conclusions. All distance-related quantities, unless stated otherwise, are computed assuming a flat universe with $\omegam=0.3$, $\omegalambda=0.7$, and $H_0=100h$~\kmsmpc\ with $h=0.7$. \section{Observations} \label{obs} \begin{table*}[t] \scriptsize \begin{center} \caption{Halo Concentration and Virial Mass} \label{tab.data} \begin{tabular}{lcc|ccc|ccc|ccc} \tableline\tableline\\[-7pt] & & & & & & $M$ & $\sigma_M$ & & & \\ Name & $z$ & $\Delta$ & $c_0$ & $\sigma_{c_0}$ & $\sigma_{\log c_0}$ & ($\msun$) & ($\msun$) & $\sigma_{\log M}$ & Corr & Corr$_{\rm log}$ & Ref.\\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12)\\ \tableline \\[-7pt] NGC4125 & 0.0045 & 101.5 & 10.19 & 1.78 & 7.52e-02 & 6.24e+12 & 8.04e+11 & 6.24e-02 & -7.87e-01 & -7.61e-01 & a\\ NGC720 & 0.0058 & 101.6 & 18.54 & 7.25 & 1.65e-01 & 6.65e+12 & 1.42e+12 & 9.63e-02 & -7.88e-01 & -7.83e-01 & a\\ NGC6482 & 0.0131 & 102.3 & 18.40 & 5.39 & 1.35e-01 & 7.15e+12 & 1.72e+12 & 8.63e-02 & -8.46e-01 & -8.87e-01 & a\\ NGC5129 & 0.0230 & 103.1 & 14.95 & 2.46 & 7.71e-02 & 1.54e+13 & 2.18e+12 & 5.96e-02 & -8.99e-01 & -8.94e-01 & b\\ NGC1407 & 0.0059 & 101.7 & 17.88 & 4.93 & 1.09e-01 & 1.59e+13 & 3.12e+12 & 9.35e-02 & -8.41e-01 & -8.35e-01 & a\\ NGC533 & 0.0185 & 102.7 & 17.32 & 1.21 & 3.14e-02 & 2.29e+13 & 1.11e+12 & 2.05e-02 & -9.03e-01 & -9.12e-01 & b\\ NGC2563 & 0.0149 & 102.4 & 10.16 & 3.43 & 1.16e-01 & 2.61e+13 & 5.46e+12 & 9.78e-02 & -9.01e-01 & -9.74e-01 & b\\ RGH80 & 0.0379 & 104.4 & 10.28 & 1.01 & 4.15e-02 & 2.81e+13 & 1.60e+12 & 2.55e-02 & -9.23e-01 & -9.28e-01 & b\\ NGC1550 & 0.0124 & 102.2 & 17.16 & 1.12 & 2.80e-02 & 3.12e+13 & 1.49e+12 & 2.08e-02 & -9.53e-01 & -9.58e-01 & b\\ NGC4472 & 0.0033 & 101.4 & 13.07 & 1.64 & 5.09e-02 & 3.27e+13 & 3.99e+12 & 5.97e-02 & 
-8.90e-01 & -8.89e-01 & a\\ NGC4325 & 0.0257 & 103.3 & 11.46 & 1.28 & 5.61e-02 & 3.50e+13 & 8.10e+12 & 8.34e-02 & -9.71e-01 & -9.83e-01 & b\\ NGC4649 & 0.0037 & 101.5 & 20.80 & 2.45 & 5.17e-02 & 3.50e+13 & 4.62e+12 & 6.29e-02 & -8.88e-01 & -8.63e-01 & a\\ NGC5044 & 0.0090 & 101.9 & 11.32 & 0.29 & 1.09e-02 & 3.73e+13 & 1.17e+12 & 1.40e-02 & -9.50e-01 & -9.53e-01 & b\\ IC1860 & 0.0223 & 103.1 & 9.59 & 0.88 & 3.91e-02 & 5.10e+13 & 4.62e+12 & 3.98e-02 & -9.66e-01 & -9.72e-01 & b\\ MKW4 & 0.0200 & 102.9 & 12.72 & 0.86 & 2.96e-02 & 6.22e+13 & 3.38e+12 & 2.39e-02 & -9.43e-01 & -9.46e-01 & b\\ NGC4261 & 0.0075 & 101.8 & 3.72 & 0.86 & 1.09e-01 & 6.69e+13 & 1.03e+13 & 6.35e-02 & -8.29e-01 & -7.95e-01 & a\\ RXJ1159.8+5531 & 0.0810 & 108.0 & 11.48 & 2.86 & 1.28e-01 & 9.10e+13 & 6.89e+13 & 1.73e-01 & -8.12e-01 & -9.79e-01 & b\\ A262 & 0.0163 & 102.5 & 8.87 & 0.67 & 3.46e-02 & 1.11e+14 & 1.06e+13 & 3.99e-02 & -9.77e-01 & -9.83e-01 & b\\ MS0116.3-0115 & 0.0452 & 105.0 & 6.57 & 2.21 & 1.41e-01 & 1.28e+14 & 8.04e+13 & 2.14e-01 & -8.35e-01 & -9.72e-01 & b\\ ESO5520200 & 0.0314 & 103.8 & 7.79 & 0.75 & 4.46e-02 & 1.31e+14 & 1.96e+13 & 6.06e-02 & -9.53e-01 & -9.57e-01 & b\\ MKW9 & 0.0498 & 105.4 & 7.48 & 0.93 & 5.41e-02 & 1.44e+14 & 3.61e+13 & 1.11e-01 & -6.49e-01 & -9.36e-01 & d\\ AWM4 & 0.0317 & 103.9 & 9.27 & 0.84 & 3.72e-02 & 1.62e+14 & 1.82e+13 & 5.10e-02 & -9.18e-01 & -9.33e-01 & b\\ ESO3060170 & 0.0358 & 104.2 & 9.17 & 1.04 & 6.12e-02 & 1.81e+14 & 7.56e+13 & 1.09e-01 & -8.84e-01 & -9.76e-01 & b\\ A2717 & 0.0490 & 105.3 & 6.45 & 0.36 & 2.46e-02 & 1.83e+14 & 1.20e+13 & 2.84e-02 & -9.79e-01 & -9.83e-01 & b\\ A1991 & 0.0592 & 106.2 & 8.91 & 0.66 & 3.22e-02 & 1.93e+14 & 2.67e+13 & 6.04e-02 & -1.00e+00 & -1.00e+00 & e\\ A1983 & 0.0442 & 104.9 & 5.34 & 0.99 & 8.15e-02 & 1.99e+14 & 7.62e+13 & 1.76e-01 & -2.87e-01 & -3.92e-01 & d\\ A2589 & 0.0414 & 104.7 & 6.36 & 0.32 & 2.33e-02 & 3.24e+14 & 2.37e+13 & 3.00e-02 & -9.56e-01 & -9.66e-01 & c\\ A2597 & 0.0852 & 108.4 & 8.26 & 0.70 & 3.71e-02 
& 3.56e+14 & 3.91e+13 & 4.80e-02 & -7.87e-01 & -1.00e+00 & d\\ A383 & 0.1883 & 116.8 & 9.61 & 0.89 & 4.03e-02 & 4.70e+14 & 4.76e+13 & 4.41e-02 & -5.12e-01 & -1.00e+00 & e\\ A133 & 0.0569 & 106.0 & 6.72 & 0.61 & 3.97e-02 & 5.37e+14 & 6.44e+13 & 5.23e-02 & -5.50e-01 & -1.00e+00 & e\\ A1068 & 0.1375 & 112.7 & 5.44 & 0.38 & 3.07e-02 & 6.96e+14 & 6.01e+13 & 3.76e-02 & -9.42e-01 & -1.00e+00 & d\\ A907 & 0.1603 & 114.5 & 7.75 & 0.94 & 5.27e-02 & 7.39e+14 & 5.99e+13 & 3.53e-02 & -3.87e-01 & -1.00e+00 & e\\ A1795 & 0.0622 & 106.5 & 6.80 & 0.38 & 2.44e-02 & 1.02e+15 & 8.78e+13 & 3.75e-02 & -6.48e-01 & -1.00e+00 & e\\ PKS0745 & 0.1028 & 109.9 & 7.32 & 0.57 & 3.40e-02 & 1.20e+15 & 1.44e+14 & 5.24e-02 & -2.64e-01 & -1.00e+00 & d\\ A478 & 0.0881 & 108.6 & 7.61 & 0.58 & 3.29e-02 & 1.25e+15 & 1.65e+14 & 5.74e-02 & -2.28e-01 & -1.00e+00 & e\\ A2029 & 0.0779 & 107.8 & 8.47 & 0.44 & 2.26e-02 & 1.27e+15 & 1.18e+14 & 4.02e-02 & -4.19e-01 & -1.00e+00 & e\\ A1413 & 0.1429 & 113.1 & 6.56 & 0.38 & 2.52e-02 & 1.29e+15 & 1.29e+14 & 4.37e-02 & -4.41e-01 & -1.00e+00 & e\\ A2204 & 0.1523 & 113.9 & 6.77 & 0.55 & 3.51e-02 & 1.41e+15 & 1.55e+14 & 4.80e-02 & -2.55e-01 & -1.00e+00 & d\\ A2390 & 0.2302 & 120.0 & 4.13 & 0.32 & 3.41e-02 & 2.13e+15 & 2.14e+14 & 4.38e-02 & -3.13e-01 & -1.00e+00 & e\\ \tableline \\ \end{tabular} \tablecomments{Col.(1): Cluster name. Col.(2) Redshift. Col.(3) Reference overdensity for virial radius and mass definition. Col.(4) Scaled concentration $c_0 = (1+z)c$. Col.(5) Standard deviation on $c_0$. Col.(6) Standard deviation on $\log_{10} c_0$. Col.(7) Virial mass. Col.(8) Standard deviation on $M$. Col.(9) Standard deviation on $\log_{10} M$. Col.(10) Correlation coefficient for $c_0$ and $M$. Col.(11) Correlation coefficient for $\log_{10} c_0$ and $\log_{10} M$. Col.(12) Reference for $c$ and $M$ values: a -- \citet{hump06b}, b -- \citet{gast06a}, c -- \citet{zapp06a}, d -- \citet{poin05a}, e -- \citet{vikh06a}. 
Note that literature results obtained at different $\Delta$ values have been converted to those in column (3) as explained in \S \ref{obs}.} \end{center} \end{table*} The measurements of concentrations and virial masses of 23 early-type galaxies and galaxy groups \citep{hump06b,gast06a} form the core of our sample. In these papers we considered only those systems with the highest quality \chandra\ and/or \xmm\ data. Moreover, to ensure that hydrostatic equilibrium is a good approximation, we visually inspected all the early-type galaxies and groups in the public data archives and selected those systems that possessed the most regularly shaped X-ray images devoid of strong asymmetries. To these we add several clusters to populate the high-mass portion of the $c-M$ diagram. Firstly, we include our analysis of the radio-quiet (and very symmetrical) cluster A2589 \citep{zapp06a}. Secondly, we add clusters from the \xmm\ study by \citet{poin05a} and the \chandra\ study by \citet{vikh06a}, each of whom also focused on the highest quality observations of the most relaxed systems. For clusters that are common to both studies, we use results from \citet{vikh06a} because of more accurate background subtraction and temperature constraints, the latter resulting from the much smaller point spread function of \chandra; i.e., less biased temperature measurements in the presence of strong temperature gradients. For those few (lowest mass) systems in the \citet{poin05a} and \citet{vikh06a} studies that overlap with our work cited above, we use our values. The final sample (see Table \ref{tab.data}) consists of 39 systems spanning the mass range $(0.06-20)\times 10^{14}\msun$, with $c$ and $M$ values inferred using the NFW dark matter profile.
We follow the convention that defines the virial radius (\rvir) so that the mean density within \rvir\ equals $\Delta\rho_c$, where $\rho_c(z)$ is the critical density of the universe at redshift $z$, and $\Delta$ is obtained from the solution to the top-hat spherical collapse model. To evaluate $\Delta$ we use the approximation obtained by \citet{brya98a}. For simplicity, when considering dark energy (DE) models with $w \ne -1$, we adopt the value of $\Delta$ appropriate for $w = -1$. Although technically the virial overdensity changes with $w \ne -1$ \citep{kuhl05a}, this choice has no effect on our conclusions. The values of $\Delta$ for the concordance cosmology at the appropriate redshift for each system are listed in Table \ref{tab.data}. For results obtained using $\Delta$ values different from those in Table \ref{tab.data}, we converted $c$ and $M$ to our adopted $\Delta$ using the formula of \citet{hukr03a}, which assumes an NFW mass profile. We converted the standard deviations for $c$ and $M$ in one of two ways. For those systems taken from our previous work, we have at least 20 Monte Carlo error simulations. Hence, we convert each simulated value of $c$ and $M$ to the new $\Delta$ and then compute the standard deviation of the converted values. This procedure provides a self-consistent conversion of the standard deviations. For systems taken from the literature, we simply convert the lower limit (e.g., $c - \sigma_c$) and upper limit (e.g., $c + \sigma_c$) each to the new $\Delta$. The new standard deviation is then set to one-half the difference of the converted upper and lower limits. (By performing this procedure on our own data and comparing to the rigorous method using the error simulations, we find that this procedure is adequate for our present study.) Since we perform our analysis of the $c-M$ relation in log space (see below), we also require the standard deviations of $\log_{10} c$ and $\log_{10} M$. 
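The overdensity conversion described above can be sketched numerically. The following is a minimal illustration, not the closed-form approximation of Hu \& Kravtsov (2003) used in the paper: for an NFW halo the mean density inside the virial radius implies $\mu(c_\Delta)/c_\Delta^3 \propto \Delta$ at fixed characteristic density, which can be inverted with a root finder. The half-difference conversion of a $1\sigma$ interval is also shown; all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def mu(x):
    """Dimensionless NFW enclosed mass: M(r) is proportional to mu(r/r_s)."""
    return np.log(1.0 + x) - x / (1.0 + x)

def convert_c_m(c1, m1, delta1, delta2):
    """Convert an NFW (c, M) pair between virial overdensity definitions.

    The mean density inside r_vir equals Delta*rho_c, so mu(c)/c**3 is
    proportional to Delta for a fixed halo; here we solve that condition
    numerically rather than using the Hu & Kravtsov (2003) approximation.
    """
    target = (delta2 / delta1) * mu(c1) / c1**3
    # mu(c)/c**3 decreases monotonically, so a simple bracket suffices:
    c2 = brentq(lambda c: mu(c) / c**3 - target, 1e-3, 1e3)
    m2 = m1 * mu(c2) / mu(c1)
    return c2, m2

# Example: convert from Delta = 101.1 to Delta = 200 (illustrative values):
c2, m2 = convert_c_m(10.0, 1.0e14, 101.1, 200.0)

# Half-difference conversion of a 1-sigma interval, as described in the text:
sig_c = 1.0
c_lo, _ = convert_c_m(10.0 - sig_c, 1.0e14, 101.1, 200.0)
c_hi, _ = convert_c_m(10.0 + sig_c, 1.0e14, 101.1, 200.0)
sig_c_new = 0.5 * (c_hi - c_lo)
```

Since a larger reference overdensity corresponds to a smaller radius, both $c$ and $M$ decrease when converting from $\Delta=101.1$ to $\Delta=200$, and the conversion is exactly invertible.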
As above, for systems analyzed in our previous studies, we compute self-consistently the log errors using Monte Carlo error simulations. The other systems use the simple prescription noted above, but using the log values. The log errors on $c$ and $M$ are listed in Table \ref{tab.data} as, respectively, $\sigma_{\log c}$ and $\sigma_{\log M}$. For the systems with Monte Carlo error simulations we also computed the covariance between $c$ and $M$ (and $\log_{10} c$ and $\log_{10} M$) for the different systems; e.g., $\sum (c_i -\langle c\rangle)(M_i - \langle M\rangle)$, where the brackets represent the mean quantity, and the sum is over the simulations. In Table \ref{tab.data} we report these as the correlation coefficient, which is the covariance divided by $\sigma_c\sigma_M$ (or $\sigma_{\log c}\sigma_{\log M}$). For the objects from the literature we used the average covariance obtained from the others to evaluate the correlation coefficient, setting the coefficient to -1 if a value $<-1$ was obtained. Most of the correlation coefficients have values near -1 indicating that $c$ and $M$ are anti-correlated. Finally, the results listed in Table \ref{tab.data} for the concentration parameter actually refer to the quantity, $c_0=(1+z)c$, since this quantity is what is fitted in our analysis below. Note that $c_0$ is also used when computing the covariance and correlation coefficient. \section{Analysis Method} \label{method} We focus our analysis on a simple power-law representation of the $c-M$ data, \begin{equation} c = \frac{c_{14}}{1+z}\left(\frac{M}{M_{14}}\right)^{\alpha}, \label{eqn.cm} \end{equation} where $z$ is the redshift, and both $c_{14}$ and $\alpha$ are constants independent of $M$. We set the reference mass to, $M_{14}=10^{14}h^{-1}\msun$, which lies close to the midpoint (in log space) of the mass range of our sample. CDM models generally predict that $c$ decreases with increasing $M$ (e.g., B01; D04; \citealt{kuhl05a,macc06a}). 
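The correlation coefficients in columns (10)-(11) of Table \ref{tab.data} are computed from the Monte Carlo realizations as described above; a minimal sketch with synthetic realizations (all numerical values are illustrative, not taken from the table):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Monte Carlo realizations of the fit for a single halo:
z = 0.02
M = 10**rng.normal(13.5, 0.05, size=200)        # virial mass realizations
c0 = (1 + z) * 8.0 * (M / 10**13.5)**(-1.0)     # anti-correlated with M
c0 *= 10**rng.normal(0.0, 0.01, size=200)       # small independent scatter

# Correlation coefficient = covariance / (sigma_c0 * sigma_M):
cov = np.mean((c0 - c0.mean()) * (M - M.mean()))
corr = cov / (c0.std() * M.std())
corr_log = np.corrcoef(np.log10(c0), np.log10(M))[0, 1]
```

Because over- (under-) estimating the mass forces the fitted concentration down (up), realizations like these yield coefficients near $-1$, matching the behavior seen in Table \ref{tab.data}.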
A key goal of our study, therefore, is to determine whether $\alpha<0$. For the most massive systems ($M\ga 3\times 10^{14}\msun$) in CDM models D04 find that $c_{14}$ is much more sensitive to cosmological parameter variations than $\alpha$ (see \S \ref{models}). To constrain the parameters of eqn.\ [\ref{eqn.cm}] it is necessary to account for the error estimates on both $c$ and $M$. Consequently, we employ the BCES method (i.e., bivariate correlated errors with intrinsic scatter) described by \citet{akri96} to estimate $\alpha$, $c_{14}$, and the intrinsic scatter about the best-fitting relation. We performed the BCES fitting using software kindly provided by M. Bershady\footnote{http://www.astro.wisc.edu/$\sim$mab/archive/stats/stats.html}. Since BCES is a linear regression method we transform eqn.\ [\ref{eqn.cm}] to the form $y = \alpha x + b$, where $x \equiv \log_{10} M$, $y\equiv \log_{10} c_0 = \log_{10} (1+z)c$, and $b\equiv \log_{10}(c_{14}/M_{14}^{\alpha})$. The parameter $c_{14}$ is derived from $\alpha$ and $b$: $c_{14}=10^{b}M_{14}^{\alpha}$. Since the fractional error $\sigma_{\log M}/\log_{10}M$ is typically more than an order of magnitude smaller than the fractional error $\sigma_{\log c_{0}}/\log_{10} c_{0}$ (see Table \ref{tab.data}), we always use $x=\log_{10}M$ as the independent variable. By default we also use the Corr$_{\rm log}$ values in Table \ref{tab.data} for the BCES method. Despite the strong anti-correlation between $c$ and $M$, we find that including the Corr$_{\rm log}$ values in the analysis has an insignificant impact on the estimated $\alpha$ and $c_{14}$ values. We determine the best estimate of $\alpha$ and $c_{14}$ by performing the BCES method on $10^6$ bootstrap resamplings of the data. We take the mean of the bootstrap simulations to be the best estimates and construct error ellipses about these best values.
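The log-space transformation and bootstrap procedure can be illustrated as follows. This sketch substitutes ordinary least squares for the actual BCES estimator (which additionally accounts for the measurement errors in both coordinates and their correlation); the synthetic sample and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M14 = 1e14 / 0.7                       # reference mass 10^14 h^-1 Msun, h = 0.7

# Synthetic sample drawn from a power law with alpha = -0.17, c14 = 9:
logM = rng.uniform(12.0, 15.3, size=39)
logc0 = -0.17 * (logM - np.log10(M14)) + np.log10(9.0) \
        + rng.normal(0.0, 0.10, size=39)

# Bootstrap resampling + linear fit in the y = alpha*x + b form:
alphas, bs = [], []
for _ in range(2000):
    i = rng.integers(0, len(logM), size=len(logM))
    a, b = np.polyfit(logM[i], logc0[i], 1)
    alphas.append(a)
    bs.append(b)
alpha, b = np.mean(alphas), np.mean(bs)

# Recover the normalization from the fitted intercept:
c14 = 10**b * M14**alpha
```

The spread of the bootstrap $(\alpha, b)$ pairs is what yields the error ellipses; here only the recovered best estimates are checked against the input values.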
The confidence contour spacings computed from the bootstrap simulations correspond quite closely to the $\Delta\chi^2$ values for two parameters assuming normally distributed errors. We estimate the intrinsic scatter on the concentration as, \begin{eqnarray} \left(\sigma_y^2\right)_{\rm int} & = & \left(\sigma_y^2\right)_{\rm total} - \left(\sigma_y^2\right)_{\rm stat} \nonumber\\ & = & \frac{1}{N}\sum_{i=1}^N\left(y_i - y_i^{model}\right)^2 - \frac{1}{N}\sum_{i=1}^N \sigma_{y_i}^2, \label{eqn.scat} \end{eqnarray} where $N$ is the number of data points $(x_i,y_i)=(\log_{10}M_i,\log_{10}{c_{0,i}})$ corresponding to the entries in Table \ref{tab.data}; $y_i^{model} = \alpha x_i + b$, where $\alpha$ and $b$ are the best (mean) estimates from the bootstrap simulations; and $\sigma_{y_i} = \sigma_{\log {c_0}}$ in Table \ref{tab.data}. \section{Cosmological Models} \label{models} \begin{table}[t] \footnotesize \begin{center} \caption{Cosmological Model Parameters} \label{tab.models} \begin{tabular}{lccccccc} \tableline\tableline\\[-7pt] Name & \omegam\ & \omegalambda\ & $\Omega_Bh^2$ & $h$ & $\sigma_8$ & $n_s$ & -w\\ \tableline \\[-7pt] \lone\ & 0.30 & 0.70 & 0.022 & 0.7 & 0.90 & 1.00 & 1.0 \\ \lthree\ & 0.24 & 0.76 & 0.022 & 0.73 & 0.76 & 0.96 & 1.0 \\ DECDM & 0.30 & 0.70 & 0.022 & 0.7 & 0.90 & 1.00 & 0.6 \\ QCDM & 0.30 & 0.70 & 0.022 & 0.7 & 0.82 & 1.00 & $\approx 0.8$ \\ OCDM & 0.30 & 0.00 & 0.022 & 0.7 & 0.90 & 1.00 & $\cdots$ \\ \tableline \\ \end{tabular} \tablecomments{\omegam\ is the energy density parameter for matter in the universe; \omegalambda\ is the energy density parameter associated with a cosmological constant or, more generally, dark energy; $\Omega_B$ is the energy density parameter of baryons; $h$ is $H_0$/100~\kmsmpc; $\sigma_8$ is the rms mass fluctuation within spheres of comoving radius $8h^{-1}$~Mpc. 
See \S \ref{models}.} \end{center} \end{table} In Table \ref{tab.models} we define the CDM-based cosmological models to be compared with the observations. \lone\ is the standard concordance model, which essentially reflects the combined constraints from the first year of WMAP CMB observations, the supernovae Hubble diagram, and the large-scale clustering of galaxies \citep[e.g.,][]{sper03a}. \lthree\ effectively updates \lone\ using the third year of WMAP \citep{sper06a}. Each of these models assumes a constant dark energy equation of state with $w=-1.0$ (i.e., a cosmological constant). In addition, we consider a dark energy model with $w=-0.6$ (DECDM) and a quintessence model (QCDM) with a Ratra-Peebles potential \citep[and references therein]{ratr88a,peeb03a} as implemented in D04. The QCDM model has a nearly constant $w(z)\approx -0.8$ (see Fig.\ 1 of D04) and a lower \sige. Finally, we include an open model (OCDM) with the same parameters as \lone\ except with $\omegalambda=0$. The median relation between concentration and virial mass as a function of redshift for CDM halos is described well by the semi-analytic model proposed by B01, \begin{equation} c(M,z) = K\frac{1+z_c}{1+z}, \label{eqn.cm.b01} \end{equation} where $K$ is the normalization constant, $z$ is the redshift of the halo, and $z_c$ is the redshift when the halo ``collapsed''. This collapse redshift is defined implicitly by the equation, \begin{equation} \sigma(FM) = \delta_c(z_c), \end{equation} where $\delta_c$ is the equivalent linear overdensity for spherical collapse at $z_c$, $\sigma$ is the $z=0$ linear rms density fluctuation corresponding to a mass $FM$, and $F$ is a constant. (Note that eqn.\ [\ref{eqn.cm.b01}] is not a power-law, though over small mass ranges -- within a factor of 5-10 -- the B01 model can be well approximated by one.) The constants $F$ and $K$ must be specified by comparison with numerical simulations. 
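A toy version of eqns.\ [3]-[4] illustrates how the collapse redshift sets the concentration. The power-law $\sigma(M)$ and Einstein-de Sitter growth ($\delta_c(z) \propto 1+z$) assumed below are crude stand-ins for the full linear power spectrum and growth function used by B01, and all numerical values are illustrative only.

```python
import numpy as np

def c_b01_toy(M, z, K=3.1, F=0.001, sigma8=0.9, gamma=0.25):
    """Toy B01-style concentration: c = K*(1+z_c)/(1+z).

    Assumptions (illustrative, not the published model): sigma(M) is the
    power law sigma8*(M/M8)**(-gamma) with M8 ~ 6e14 Msun roughly the mass
    in an 8 Mpc/h sphere, and delta_c(z) = 1.686*(1+z) as in an
    Einstein-de Sitter universe.
    """
    M8 = 6e14
    sigma_FM = sigma8 * (F * M / M8)**(-gamma)
    zc = sigma_FM / 1.686 - 1.0        # solves sigma(F*M) = delta_c(z_c)
    return K * (1.0 + np.maximum(zc, 0.0)) / (1.0 + z)

# Lower-mass halos collapse earlier, hence are more concentrated:
c_group = c_b01_toy(1e13, 0.0)
c_cluster = c_b01_toy(1e15, 0.0)
```

Because $\sigma(M)$ decreases with $M$, lower-mass halos have higher $z_c$ and therefore higher $c$, reproducing qualitatively the declining $c-M$ trend; at fixed mass, $c$ also decreases with observation redshift through the $1/(1+z)$ factor.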
For halos with $M\la 10^{13}h^{-1}\msun$ it is found that $F=0.01$ and $K\approx 3.5$ provide the best description of the results from N-body simulations, although $F=0.001$ and $K \approx 3$ is acceptable (B01; \citealt{kuhl05a,macc06a}). Larger box simulations that focus on higher mass ranges clearly prefer $F=0.001$. One of us (J. Bullock) finds that $F=0.001$ and $K=3.1$ matches the $c-M$ results for CDM halos ($0.6-2.5 \, \times 10^{14}h^{-1}\msun$) simulated by \citet{tasi04a}. For simulations of even more massive clusters ($3.1-17 \, \times 10^{14}h^{-1}\msun$), D04 find higher concentrations and that $K=3.5$ is required for the same $F$. Consequently, when comparing CDM models to the full mass range of the X-ray data in Table \ref{tab.data} we shall consider the $c-M$ relations obtained with the B01 model using $F=0.001$ with both $K=3.1$ and $K=3.5$. (In \S \ref{fk} we discuss further the different $F$ and $K$ values obtained by different investigators.) \begin{table}[t] \footnotesize \begin{center} \caption{Power-Law Approximation for CDM Clusters} \label{tab.models.cm} \begin{tabular}{lcc} \tableline\tableline\\[-7pt] Name & $\alpha$ & $c_{14}$\\ \tableline \\[-7pt] \lone\ & -0.104 & 7.63\\ DECDM & -0.094 & 9.05\\ QCDM & -0.111 & 7.39\\ OCDM & -0.091 & 11.47\\ \lone$\rm _B$ & -0.142 & 8.43\\ \lthree$\rm _B$ & -0.155 & 6.23\\ \tableline \\ \end{tabular} \tablecomments{Power-law parameters (eqn.\ \ref{eqn.cm}) for the theoretical relation between concentration and virial mass of high-mass clusters obtained by D04 for the \lone, DECDM, QCDM, and OCDM models. We converted the results presented in Table 2 of D04 to our definition of the virial radius at $z=0$ (i.e., $\Delta=101.1$). The \lone$\rm _B$ and \lthree$\rm _B$ models are the same as the \lone\ and \lthree\ models, but we computed the power-law parameters using the B01 model with $F=0.001$ and $K=3.5$. 
See \S \ref{models}.} \end{center} \end{table} In the high-mass ``cluster'' regime D04 find that the $c-M$ relation for CDM models is adequately parameterized by the simple power-law model given by eqn.\ [\ref{eqn.cm}]. We list the power-law parameters obtained by D04 in Table \ref{tab.models.cm} but converted to our definition of virial radius. (D04 use $\Delta=200\rho_b=60\rho_c$ for $\omegam=0.3$.) That is, initially we compute $c(M)$ using $c_{14}$ and $\alpha$ from D04 (at $z=0$). Then we convert the resulting $c$ and $M$ values to $\Delta=101.1$ using the approximation of \citet{hukr03a}. These values are used to compute the slope and $c_{14}$, which are listed in Table \ref{tab.models.cm}. The parameter $c_{14}$ decreases by $\approx 20\%$ while $\alpha$ remains nearly constant under this transformation. For comparison, in Table \ref{tab.models.cm} we also list $\alpha$ and $c_{14}$ computed from the tangent line to the $c-M$ profiles located at $M=M_{14}$ of the \lone\ and \lthree\ models obtained using the B01 approach with $F=0.001$ and $K=3.5$. These results are listed as \lone$\rm _B$ and \lthree$\rm _B$ respectively. (We note that the virial quantities for the X-ray data in Table \ref{tab.data} all refer to the \lone\ cosmology. For consistent comparison, we also use the \lone\ cosmology when converting all the D04 models mentioned above.) The concentration is sensitive to the formation time and dynamical state of the halo (e.g., \citealt{nfw,eke01a,jing00a}; B01; \citealt{wech02a,macc06a}). For example, \citet{jing00a} and \citet{macc06a} find that their ``relaxed'' dark halo samples have concentrations that are systematically larger by $\sim 10\%$ compared to the whole population. \citet{wech02a} found that the halo concentration at fixed mass is set almost exclusively by the ``formation epoch'' of the halo (the time when the mass accretion rate of the halos is slowed below a critical value). 
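The tangent-line construction used above for the \lone$\rm _B$ and \lthree$\rm _B$ entries of Table \ref{tab.models.cm} amounts to a local power-law approximation of a curved $c-M$ relation. A numerical sketch, with a hypothetical curved relation standing in for the B01 prediction:

```python
import numpy as np

def powerlaw_tangent(logc_of_logM, logM0, h=1e-4):
    """Local power-law (alpha, c14) from the tangent to log c(log M) at logM0.

    alpha is the central-difference slope; c14 is the concentration at the
    reference mass itself.
    """
    alpha = (logc_of_logM(logM0 + h) - logc_of_logM(logM0 - h)) / (2.0 * h)
    c14 = 10.0**logc_of_logM(logM0)
    return alpha, c14

# Hypothetical curved relation (illustrative coefficients only),
# evaluated at log10 of 10^14 h^-1 Msun (~14.15 for h = 0.7):
f = lambda lm: np.log10(9.0) - 0.1 * (lm - 14.15) - 0.02 * (lm - 14.15)**2
alpha, c14 = powerlaw_tangent(f, 14.15)
```

Over a factor of 5-10 in mass the quadratic term contributes little, which is why the tangent-line parameters summarize the B01 curve adequately near the reference mass.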
Indeed, when \citet{wech02a} focused on halos with no major mergers since $z = 2$ they found $\sim 10 \%$ higher concentrations for that population. Therefore, a proper comparison between the theoretical $c-M$ relation with observations requires the observed and simulated halos be selected in a consistent manner. The X-ray data presented in Table \ref{tab.data} represent objects that were selected to be the most relaxed, X-ray--bright, systems. But the theoretical $c-M$ models discussed above (Table \ref{tab.models.cm}) were obtained using all available halos in the N-body simulations. Consequently, we consider this source of systematic error in our comparisons of the CDM models with the X-ray data. Finally, the magnitude of the intrinsic scatter about the median theoretical $c-M$ relation does not vary over a large class of CDM models and is independent of $M$. The value of $\sigma_{\log c}\approx 0.14$ ($\log\equiv\logten$) obtained by B01 has been found by several independent investigations (\citealt{jing00a,tasi04a}; D04; \citealt{macc06a}); note this consistency is observed after accounting for the fact that $\ln$, rather than \logten, is used in these other studies. Early forming, relaxed, halos tend to exhibit smaller scatter $\sigma_{\log c}\approx 0.10$ \citep{jing00a,wech02a,macc06a}. \vskip 1cm \section{Results} \label{results} \begin{figure*}[t] \parbox{0.49\textwidth}{ \centerline{\includegraphics[height=0.35\textheight]{f1a.eps}}} \parbox{0.49\textwidth}{ \centerline{\includegraphics[height=0.35\textheight]{f1b.eps}}} \caption{\label{fig.cm} Results of fitting eqn.\ \ref{eqn.cm} in log space to the entire sample. Displayed are the best-fitting model (solid blue line), the $1\sigma$ intrinsic scatter (dotted blue lines), and the predicted relation obtained from cosmological simulations for the ({\sl Left panel}) \lone\ and ({\sl Right panel}) \lthree\ models (dashed lines). 
The two dashed lines attempt to represent fits to different mass ranges in the cosmological simulations as explained in the text (\S \ref{models}). The lower (red) dashed line refers to fits of halos up to $M\sim 0.3\times 10^{15}\msun$ ($K=3.1$) while the upper (green) dashed line refers to fits of higher mass halos $M\approx (0.3-1)\times 10^{15}\msun$ ($K=3.5$) obtained by D04.} \end{figure*} \begin{figure*}[t] \parbox{0.49\textwidth}{ \centerline{\includegraphics[height=0.35\textheight]{f2a.eps}}} \parbox{0.49\textwidth}{ \centerline{\includegraphics[height=0.35\textheight]{f2b.eps}}} \caption{\label{fig.errors} Best-fitting value (X) and confidence contours (68\%, 95\%, 99\%, 99.9\%, 99.99\%) for $\alpha$ and $c_{14}$ obtained from the entire sample. ({\sl Left panel}) As in Fig.\ \ref{fig.cm} results for the \lone\ (green 1) and \lthree\ (red 3) models are displayed. Models with subscript ``c'' refer to the higher normalization $K=3.5$ for the cluster regime preferred by D04. ({\sl Right panel}) Models have $c_{14}$ increased by 10\% to represent relaxed, early-forming halos (see \S \ref{models} and \citealt{macc06a}).} \end{figure*} In \S \ref{sys} we discuss possible sources of systematic error both in the measurements from the X-ray data and the theoretical predictions. Possible systematic errors affecting the X-ray mass measurements of our sample of relaxed galaxies and clusters are expected to be small and do not change the conclusions of the analysis presented here (see beginning of \S \ref{sys} and \S \ref{he}). \subsection{All Halos} \label{all} When fitting the power-law relation (eqn.\ \ref{eqn.cm}) to the entire sample we obtain $\alpha=-0.172\pm 0.026$ and $c_{14}=9.0\pm 0.4$ (quoted errors $1\sigma$). The estimated intrinsic scatter in $\logten\ (1+z)c$ is $0.102\pm 0.004$ (eqn.\ \ref{eqn.scat}), with a total scatter of 0.12. In Figure \ref{fig.cm} we plot the best-fitting model with intrinsic scatter. 
Inspection of Figure \ref{fig.cm} reveals that the power-law with intrinsic scatter is a good approximate representation of the X-ray results, especially for $\logten\ M > 13.5$. For lower masses the X-ray measurements tend to lie above the best-fitting model, though most are consistent with lying within the $1\sigma$ range covered by the intrinsic scatter. Using equation [3] of \citet{trem02a} we compute a reduced $\chi^2$ value of 1.07 (37 dof) for the fit, confirming the visual impression that the power law is a good, but not perfect, representation of the X-ray data. The slope $\alpha$ is constrained to be negative at the $6.6\sigma$ level, demonstrating at high significance that $c$ decreases with increasing $M$, as expected in CDM models. In Figure \ref{fig.cm} we also plot the theoretical $c-M$ relation for the \lone\ model for both $K=3.1$ and $K=3.5$ as discussed in \S \ref{models}. For $\logten\ M > 14$ the $K=3.5$ model recommended by D04 for clusters lies very close to, but just below, the empirical power-law fit. Allowing for a systematic $\approx 10\%$ increase in concentration for relaxed, early forming halos \citep{jing00a,wech02a,macc06a} lifts the $K=3.5$ model slightly over the power-law, with a marginal improvement in the agreement with the data. A recent theoretical study \citep{wang06a} finds that the higher-mass halos experience less of a bias associated with relaxation and formation time. In fact, systematically increasing the concentrations of the $K=3.5$ model by $\approx 7\%$ would provide a closer match to the empirical power-law fit to the X-ray data. For $\logten\ M <14$ the $K=3.5$ model must quickly transform into the $K=3.1$ model as $M$ decreases from the cluster to the galaxy/group regime as discussed in \S \ref{models}. But in this mass range the $K=3.1$ model lies consistently below the power-law model and is similar to the $1\sigma$ lower limit on the power-law given by the intrinsic scatter.
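The reduced $\chi^2$ computation referred to above (equation [3] of \citealt{trem02a}) folds the intrinsic scatter into the error budget; a sketch in that spirit, adding the measurement errors in both coordinates and the intrinsic scatter in quadrature (the synthetic data are illustrative, and this is not the published implementation):

```python
import numpy as np

def reduced_chi2(x, y, sig_x, sig_y, alpha, b, sig_int):
    """Reduced chi^2 for y = alpha*x + b with errors in both coordinates
    and an intrinsic scatter term added in quadrature."""
    resid = y - (alpha * x + b)
    var = sig_y**2 + (alpha * sig_x)**2 + sig_int**2
    return np.sum(resid**2 / var) / (len(x) - 2)

# Synthetic check: data with total scatter 0.12 dex, statistical 0.05 dex:
rng = np.random.default_rng(2)
x = rng.uniform(12.0, 15.0, size=500)
y = -0.17 * x + 3.0 + rng.normal(0.0, 0.12, size=500)
sig_int = np.sqrt(0.12**2 - 0.05**2)
chi2 = reduced_chi2(x, y, np.zeros(500), np.full(500, 0.05), -0.17, 3.0, sig_int)
```

When the intrinsic scatter estimate is accurate, the reduced $\chi^2$ is close to unity, as found for the fit in the text.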
Increasing the concentrations of the $K=3.1$ model by $\approx 10\%$ for relaxed halos improves the agreement with the power-law, and has the effect of essentially pushing the $K=3.1$ model into the $K=3.5$ model. With this accounting for the systematic bias arising from relaxed, early forming halos the \lone\ model is an equally good representation of the X-ray data for masses $\logten\ M > 13.5$. For the lowest masses, the \lone\ model still lies below the power-law (within the $1\sigma$ intrinsic scatter) and may represent a real discrepancy. We summarize this qualitative discussion with a quantitative comparison of the \lone\ model with the data as represented by the results of the empirical power-law fit. (We do not fit the B01 model to the data for the following reasons. First, it is non-trivial to account for the error bars in both coordinates when fitting a general model; the Akritas \& Bershady method only applies to linear regression. Second, the prediction of the B01 model possesses significant systematic uncertainty in the parameter $K$ (at least for masses $<10^{14}\msun$) as discussed above. By showing the B01 model predictions separately for the range of interesting K values, without fitting, we provide a clear demonstration of the importance of reducing this systematic uncertainty with future theoretical studies. When that happens, formal fitting of the B01 model will be investigated.) In Figure \ref{fig.errors} we show the error contours estimated for $\alpha$ and $c_{14}$ from the bootstrap simulations (\S \ref{method}). We calculated the slope and concentration of the $c-M$ relation of the \lone\ model at $10^{14}h^{-1}\msun$ for both the $K=3.1$ and $K=3.5$ cases. (The values for $K=3.5$ are listed in Table \ref{tab.models.cm}.) These results are plotted in Figure \ref{fig.errors} as ``1'' for $K=3.1$ and $1_c$ for $K=3.5$, the latter representing the ``cluster'' regime. 
The right panel of Figure \ref{fig.errors} increases the concentrations by 10\% to represent relaxed, early forming systems, and shows the \lone\ model lies near the $68\%$ confidence contour. This level of agreement applies for masses $\logten\ M > 13.5$. As discussed above, the agreement is worse at the lowest masses. However, performing a similar comparison at $10^{13}h^{-1}\msun$ we find the local \lone\ slope and normalization lie just within the 95\% confidence contour of the power-law model (both with and without the 10\% correction for relaxed systems); i.e., the disagreement is not very significant even at the low-mass end. If real, the discrepancy may signify very early forming fossil groups in the sample \citep{dong05a}. The intrinsic scatter of 0.10 in $\logten\ (1+z)c$ obtained for the power-law fit is smaller than the value of 0.14 for all dark matter halos in CDM simulations but agrees extremely well with the value expected for the most relaxed, early forming systems \citep{jing00a,wech02a,macc06a}. Considering this result for the scatter together with the average $c-M$ relation above, we conclude that the \lone\ model is consistent with the X-ray data, provided the X-ray sample reflects the most relaxed, early forming systems in the population. (This corroborates our selection criteria discussed in \S \ref{obs}.) Now we perform the analogous comparison of the \lthree\ model with the X-ray data and associated power-law fit. The $c-M$ relation is displayed in the right panel of Figure \ref{fig.cm}. The \lthree\ model lies well below the power-law at all masses, even when allowing for the expected 10\% increase in concentration for relaxed halos. In Figure \ref{fig.errors} we compare the \lthree\ model at $10^{14}h^{-1}\msun$ with the empirical power-law. Even when considering the 10\% increase in concentrations for relaxed halos, the \lthree\ model lies on the 99.99\% contour.
Systematic errors associated with the X-ray measurements cannot explain this level of disagreement (see \S \ref{sys}). The key parameter responsible for the poor performance of the \lthree\ model with respect to \lone\ is the low value of \sige\ (0.76). In order to bring the \lthree\ model within the 99\% contour in the right panel of Figure \ref{fig.errors} requires $\sige>0.84$ where we have kept the other cosmological parameters fixed to their values in Table \ref{tab.models}. This limit is conservative since (1) we use $K=3.5$, (2) we assume a full 10\% upward shift for the bias from relaxed, early forming systems, which may be less for massive clusters, and (3) we have approximated the B01 models as power-laws using their predictions only near $10^{14}h^{-1}\msun$. The sensitivity of the concentrations to \sige\ results from the impact that \sige\ has on the average halo formation times \citep[e.g.,][]{eke01a,alam02a,bosc03a}. Other cosmological parameters, however, contribute to the large discrepancy of the \lthree\ model. If we remove the tilt of the power spectrum (i.e., set $n_s=1$) then we obtain $\sige>0.80$ at the 99\% confidence level. Finally, if we further set $\omegam=0.3$ and $h=0.73$ (so the model is the same as \lone\ except with variable \sige), then the 99\% constraint falls to $\sige>0.76$, the value associated with the \lthree\ model. (We reiterate that this limit is conservative as noted above. Similarly, we obtain $\sige<1.07$ at 99\% confidence, where the $K=3.1$ model is used here to be conservative.) Hence, although the lower value of \sige\ is the primary cause of the poor performance of the \lthree\ model, the combined action of the power-spectrum tilt with the lower value of \omegam\ exacerbates the discrepancy with the X-ray $c-M$ relation. If the \lthree\ parameters are correct, particularly the low value of \sige, then a fundamental modification of the model is required to increase the concentration values to match the X-ray results.
Both D04 and \citet{kuhl05a} have shown that changing the dark energy equation of state parameter ($w$) has the effect of systematically raising (larger $w$) or lowering (smaller $w$) halo concentrations. As we show below, a model with $w\approx -0.8$ and $\sige\approx 0.8$ can describe the X-ray data in the cluster regime. \subsection{Only Halos with $M>10^{14}\msun$} \label{clusters} \begin{figure}[t] \centerline{\includegraphics[height=0.35\textheight]{f3.eps}} \caption{\label{fig.cm.clusters} Results of fitting eqn.\ \ref{eqn.cm} in log space only for halos with $M>10^{14}\msun$. Displayed are the best-fitting model (solid blue line) and the $1\sigma$ intrinsic scatter (dotted blue lines). Also shown is the predicted relation for the \lone\ model from cosmological simulations. The (solid red) line is the power-law fit obtained by D04 while the (dashed red) line is the B01 model with $F=0.001$ and $K=3.5$ found by D04 to best match the simulated clusters. Note the D04 model is converted to our definition of the virial radius (\S \ref{models}).} \end{figure} \begin{figure*}[t] \parbox{0.49\textwidth}{ \centerline{\includegraphics[height=0.35\textheight]{f4a.eps}}} \parbox{0.49\textwidth}{ \centerline{\includegraphics[height=0.35\textheight]{f4b.eps}}} \caption{\label{fig.errors.clusters} ({\sl Left panel}) $c$-$M$ relations for various CDM models obtained by D04 for galaxy clusters (see Table \ref{tab.models.cm}). The solid (red) line is \lone, the dashed (blue) line is OCDM, the dotted (magenta) line is DECDM, and the dot-dashed (cyan) line is QCDM. Finally, the short dashed (green) line is the \lthree\ model computed using the B01 prescription with $K=3.5$. ({\sl Right panel}) Best-fitting value (X) and confidence contours (68\%, 95\%, 99\%, 99.9\%, 99.99\%) for $\alpha$ and $c_{14}$ obtained by fitting only systems with $M>10^{14}\msun$. 
Also shown are the results for the models plotted in the left panel: \lone\ (1), OCDM (O), DECDM (D), QCDM (Q), and \lthree\ ($3_{\rm B}$).} \end{figure*} As discussed in \S \ref{models}, D04 provide results of fitting eqn.\ [\ref{eqn.cm}] to a suite of CDM models for massive clusters, including an open model and multiple dark energy models. We may compare these theoretical predictions directly to the power-law fit to the X-ray data with the following considerations. Firstly, we convert D04's results to our definition of the virial radius (see \S \ref{models}). Secondly, since D04 analyze only massive clusters ($M>4\times 10^{14}\msun$, $h=0.7$ and converted to $\Delta=101.1$), we restrict our analysis to high-mass systems as well. In order to allow more precise constraints on the power-law fit we consider clusters down to a somewhat smaller mass limit, $M>1\times 10^{14}\msun$. (As shown below, this choice is justified since there is no obvious trend in the $c-M$ relation over this mass range.) Finally, we need to consider the biases ($\approx 10\%$ higher $c$, smaller scatter in $\logten\ c$) for relaxed, early forming systems as done for the whole sample in the previous section. However, the systematic increase in $c$ may be smaller since we are considering the most massive systems \citep{wang06a}. In Figure \ref{fig.cm.clusters} we show the result of fitting eqn.\ [\ref{eqn.cm}] to the high-mass sample, for which we obtain the following parameter constraints, $\alpha = -0.04\pm 0.05$ and $c_{14} = 7.6\pm 0.5$. These parameters are consistent with a constant $c-M$ relation, as well as with relations that either mildly decrease or increase with increasing $M$, and agree well with similar fits obtained in this mass range from previous X-ray studies with either \xmm\ \citep{poin05a} or \chandra\ \citep{vikh06a}.
Since the power-law fit is quite consistent with $\alpha=0$, it follows that it is the lower mass systems ($<10^{14}\msun$) which require the $\alpha < 0$ found when fitting the whole sample in the previous section \citep[see][]{gast06a}. (Note that the power-law parameters inferred in the previous section for the full sample lie on the 99\% confidence contour obtained for the fit to the high-mass subsample -- see Figure \ref{fig.errors.clusters}.) We measure an intrinsic scatter in $\logten\ (1+z)c$ of $0.07\pm 0.01$, which is somewhat less than the value of 0.10 obtained for the entire sample. D04 also obtain 0.10 for the intrinsic scatter (after accounting for the factor of $\ln 10\approx 2.3$ converting $\ln$ to \logten) for all of their CDM clusters. The smaller scatter we measure can presumably be explained by the X-ray sample preferentially containing relaxed, early forming systems. For comparison we plot two versions of the \lone\ model in Figure \ref{fig.cm.clusters}. The first version we show is the power-law fit obtained by D04 converted to our definition of the virial radius (see \S \ref{models}). The second version arises from using the B01 model with $K=3.5$, which D04 found to best represent their high-mass cluster simulations. Both models are very similar and provide a reasonable description of the X-ray data, though the B01 representation is slightly steeper. In the left panel of Figure \ref{fig.errors.clusters} we plot the power-laws obtained by D04 for the CDM models as well as the B01 representation of the \lthree\ model with $K=3.5$. The values of $\alpha$ and $c_{14}$ for all of these models (Table \ref{tab.models.cm}) are plotted in the right panel of Figure \ref{fig.errors.clusters} along with the error contours derived from the power-law fit to the X-ray data. As done in the previous section, the $\alpha$ and $c_{14}$ values for the B01 \lthree\ model were obtained using the $c-M$ slope near $M=10^{14}h^{-1}\msun$.
We emphasize that the D04 parameters may be compared directly to the results of our power-law fit to the X-ray data, since the same model is fitted over nearly the same mass range. Since the B01 representation of the \lthree\ model is very nearly a power-law over the range being investigated, a similarly direct comparison is appropriate. The results for the \lone\ and \lthree\ models obtained using only the high-mass subsample corroborate those obtained from the full sample. As noted above, the \lone\ model is an acceptable match to the X-ray data; the parameters lie near the 68\% confidence contour in Figure \ref{fig.errors.clusters}, even if $c$ is increased to correct for early forming halos. In contrast, the \lthree\ model is very inconsistent with the high mass data, with parameters lying outside the 99.99\% contour. Even if we apply the full 10\% correction to $c_{14}$ appropriate for the most relaxed, early forming systems, the model merely moves on top of the 99.99\% contour. However, as noted above, it is expected that this correction is smaller for the highest mass systems. Hence, the \lthree\ model is also rejected using only the most massive clusters. The $c-M$ data for high-mass clusters clearly exclude the OCDM model. The concentrations lie systematically above the data and the \lone\ model. The latter is expected because structures form earlier in the OCDM model. In principle, the OCDM model may be brought into acceptable agreement with the X-ray data by lowering \sige. We estimate the effect of lowering \sige\ using the \lone\ B01 model. For example, lowering $\sige$ from 0.90 to 0.70 lowers $c_{14}$ by 25\%. This would move the OCDM model to the right in Figure \ref{fig.errors.clusters} onto the 99\% confidence contour. Applying any portion of the $\approx 10\%$ systematic bias expected for relaxed, early forming systems would only increase the discrepancy.
(Note that for $\sige=0.76$, appropriate for the \lthree\ model, the OCDM model would be rejected at the $\approx 99.9\%$ confidence level without additional consideration of early formation bias.) Studies of the abundances of galaxy clusters using weak gravitational lensing typically find \sige\ consistent with 0.9 \citep[e.g.,][]{hoek02a,vanw05a,jarv03a}. The optical study of \citet{rine06b} uses the caustic technique to measure cluster abundances and also finds $\sige\approx 0.9$ with a lower limit of 0.72 (95\% conf.). However, X-ray studies of cluster abundances show a large variation in \sige, with values ranging from 0.7-0.9 \citep[see][and references therein]{arna05a}. The X-ray studies use a simple conversion between mass and global temperature or mass and global X-ray luminosity ($\lx$). A recent study by \citet{stan06a} argues that considering reasonable errors in the $M-\lx$ conversion allows agreement of cluster abundances with the recent WMAP parameters with $\sige=0.85$ and $\omegam=0.24$ -- though see \citet{reip06a}. We take $\sige=0.7$ to be a conservative lower limit established by the abundances of galaxy clusters. Therefore, an open CDM model with $\omegam\approx 0.3$ is ruled out ($>99\%$ confidence) from joint consideration of only the X-ray $c-M$ relation and cluster abundances. Since $\sige$ and $\omegam$ are tightly coupled from cluster abundance studies, and all the studies cited above give $\sige\approx 0.45$ for $\omegam\approx 1$, using the X-ray $c-M$ data we find that we can also exclude CDM models with $\omegam\approx 1$ at high significance ($>99.9\%$ confidence). For this comparison we have employed the entire sample of 39 systems and have set $h\approx 0.5$ for the $\omegam\approx 1$ models to better satisfy cosmic age constraints from stellar populations in globular clusters \citep{chab98a}. 
Consequently, the combination of the X-ray $c-M$ relation and cluster abundances, assisted by constraints on the cosmic age, provides novel evidence for a flat, low-$\omegam$ universe with dark energy using observations only in the local ($z\ll 1$) universe. Finally, we consider the alternative dark energy models, DECDM and QCDM. The DECDM model lies systematically above the \lone\ model and deviates from the power-law fit to the X-ray data at the 99.9\% confidence level. This behavior is expected since $w=-0.6$ implies halos form earlier compared to the $w=-1$ \lone\ model \citep[see also][]{kuhl05a}. By lowering \sige\ it should be possible to bring the DECDM model into good agreement with the data. To illustrate this trade-off between $w$ and \sige, we consider the QCDM model which has $w\approx -0.8$ and $\sige=0.82$. As shown in Figure \ref{fig.errors.clusters} the QCDM model matches the X-ray results nearly as well as the \lone\ model. In fact, a mild increase in $c_{14}$ owing to the effect of selecting relaxed, early forming halos would produce even better agreement. Our analysis indicates that a model similar to QCDM is able to satisfy the most recent constraints from other cosmological observations since $w\approx -0.8$ is marginally consistent with the constraints imposed by supernovas \citep[$w=-1.023 \pm 0.090\, \rm (stat) \pm 0.054\, (sys)$, ][]{asti06a}, and $\sige=0.82$ is also marginally consistent with the 3-yr WMAP results. Put another way, if the low \sige\ value of \lthree\ is correct, then the X-ray $c-M$ data imply $-1 < w\approx -0.8$. \section{Systematic Errors} \label{sys} When comparing to theoretical predictions of the $c-M$ relation we have considered the small biases in the mean concentration level and scatter expected for the highly relaxed, early forming systems selected for our study.
We found that the \lone\ model is a good representation of the X-ray data provided this bias for relaxed systems applies, especially for the low-mass range of our sample ($M<10^{14}\msun$). However, the \lthree\ model does not agree with the $c-M$ data, even considering this bias. Could other sources of systematic error seriously affect these conclusions? For 24 of 39 systems in Table \ref{tab.data} we have performed a detailed investigation of systematic errors on $c$ and $M$ \citep{hump06b,zapp06a,gast06a}. Generally, we find that the estimated systematic error is less than the statistical error, particularly for the systems of lowest mass. For the higher mass systems, the errors are usually comparable. But from consideration of all of these systems we recognize no obvious trend that would systematically shift the observed $c-M$ relation in one direction. Although we do not believe systematic errors associated with the X-ray data analysis compromise the conclusions of our present investigation, here we list the most important issues to be resolved in future studies that seek to use the X-ray $c-M$ relation for precision constraints on cosmological models. Below in \S \ref{he} we also consider the impact on our present analysis of systematic errors resulting from the hydrostatic equilibrium approximation. (Note we verified that our analysis is insensitive to the definition of the virial radius. An interesting example is to compare to results obtained for $\Delta=2500$, because within this radius all of the systems in our study possess good X-ray constraints. When defining the virial radius to correspond to $\Delta=2500$ for every system, we arrive at the same conclusions obtained in \S \ref{results} but with correspondingly different $\alpha$ and $c_{14}$ values. The predictions of the cosmological models were also converted to $\Delta=2500$ for this comparison.) 
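The overdensity conversion mentioned in the parenthetical above is standard NFW algebra rather than anything specific to this paper. As a hedged illustration (the function names, bracketing interval, and bisection scheme are our own choices, not the authors' code), one can translate a concentration defined at one overdensity to another by matching the NFW mean interior density, which scales as $\mu(x)/x^3$ with $\mu(x)=\ln(1+x)-x/(1+x)$ and $x=r/r_s$:

```python
import math

def mu(x):
    """NFW mass profile shape: ln(1+x) - x/(1+x), with x = r / r_s."""
    return math.log(1.0 + x) - x / (1.0 + x)

def convert_concentration(c1, delta1, delta2):
    """Find c2 such that mu(c2)/c2^3 = (delta2/delta1) * mu(c1)/c1^3.

    The mean interior density mu(x)/x^3 decreases monotonically in x,
    so a simple bisection suffices.
    """
    target = (delta2 / delta1) * mu(c1) / c1 ** 3
    lo, hi = 1e-4, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mu(mid) / mid ** 3 > target:
            lo = mid      # mean density too high: move outward
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: a halo with c_vir = 9 at Delta = 101.1 (the paper's virial
# definition) has a much smaller concentration at Delta = 2500.
c2500 = convert_concentration(9.0, 101.1, 2500.0)
```

A higher overdensity corresponds to a smaller radius, so $c_{2500}$ is always well below $c_{\rm vir}$; the same routine run in reverse recovers the input concentration.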
\subsection{Early Formation Bias} Accounting for the biases associated with preferentially observing the most relaxed, early forming systems is required for precision constraints on cosmology. While \citet{wech02a} found a nearly one-to-one correlation between halo ``formation epoch'' and concentration, it is not obvious how this theoretically motivated parameter should connect with the dynamical state of a real cluster. More direct quantifiers (e.g., rejecting cases with recent major mergers and disturbed profiles) suggest that ``relaxed'' halos should have $\approx 10\%$ larger average $c$ and smaller scatter. For massive systems ($M\ga {\rm few}\times 10^{14}\,\msun$), the outlook for analyzing a sample with a well-determined, observational selection function is excellent, because there exist several well-defined catalogs of the brightest, most massive clusters in X-rays \citep[e.g.,][]{reip02a}, many of which have been observed either with \chandra\ or \xmm. For lower masses the number of X-ray catalogs of complete samples is small, and the number with good coverage by \chandra\ or \xmm\ observations is even smaller. For this purpose we have scheduled observations of a complete, X-ray flux-limited sample of 15 systems in the approximate mass range $10^{13}-10^{14}\,\msun$ with \chandra. \subsection{Hydrostatic Equilibrium} \label{he} The determination of the mass distribution from X-ray observations requires that hydrostatic equilibrium is a suitable approximation. This approximation has been tested for massive clusters by comparing to results obtained from gravitational lensing \citep[e.g.,][and references therein]{buot03c}. There is very good agreement between the methods, especially outside the inner cores. This agreement is especially encouraging since some of the clusters are manifestly not completely relaxed \citep[e.g., A2390, ][]{alle01d}.
At the low-mass end, good agreement between X-ray mass measurements and stellar dynamics in elliptical galaxies provides further indication that the hydrostatic equilibrium approximation is quite accurate for obviously relaxed systems \citep{hump06b,brid06a}. For over ten years cosmological hydrodynamical simulations have found that the hydrostatic equilibrium approximation is quite accurate in massive galaxy clusters \citep[e.g.,][]{tsai94a,buot95a,evra96a,math99a}. The most recent studies conclude that X-ray mass estimates of the most massive, relaxed clusters should typically underestimate the mass by a small amount ($\approx 10\%$) because of turbulent pressure in the hot gas, with less of an effect in lower mass systems \citep[e.g.,][]{dola05a,rasi06a,naga06a}. The systems analyzed in our present investigation were selected to be highly relaxed -- as indicated by regularly shaped X-ray image morphology. This is reinforced by our analysis of the X-ray $c-M$ relation (\S \ref{results}), especially by the small intrinsic scatter in the concentrations, which is a robust prediction of CDM models \citep{jing00a,dola04a,wech02a,macc06a}. Underestimating the virial masses of our relaxed clusters by $10\%$, as suggested by CDM simulations, would indicate that our measurement of $c_{14}$ should be raised by a factor $1.1^{-\alpha}\approx 1.02$, using $\alpha=-0.17$. In addition, since $c\propto (\mvir)^{1/3}/r_s$, where $r_s$ is the NFW scale radius, the total increase in $c_{14}$ should be $\approx 5\%$, provided the estimate of $r_s$ is unaffected by the presence of turbulent pressure. This systematic error, if real, would have the effect of increasing the discrepancies between the X-ray $c-M$ relation and (most) CDM models. However, essentially all of the conclusions in \S \ref{results} remain unchanged if we consider the bias for relaxed, early forming systems to be 15\% rather than 10\%, within the range obtained by current simulations \citep{jing00a,wech02a,macc06a}.
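The two factors quoted above combine multiplicatively. A quick numerical check (ours, using only the numbers stated in the text) confirms the arithmetic:

```python
# Effect of a 10% underestimate of the virial mass on the fitted c14.
alpha = -0.17        # measured slope of the c-M power law (full sample)
mass_boost = 1.10    # (true mass) / (X-ray mass) suggested by simulations

# Shifting every mass up by 10% shifts the power-law normalization by
# mass_boost^(-alpha), i.e. 1.1^0.17 ~ 1.02.
norm_shift = mass_boost ** (-alpha)

# In addition c scales as M^(1/3)/r_s, so at fixed scale radius each
# concentration itself rises by mass_boost^(1/3) ~ 1.03.
conc_shift = mass_boost ** (1.0 / 3.0)

total_shift = norm_shift * conc_shift   # combined ~1.05, i.e. ~5%
```

The combined factor is indeed about 1.05, matching the $\approx 5\%$ total increase in $c_{14}$ quoted in the text.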
The exceptions are the OCDM and DECDM models (\S \ref{clusters}, see Fig.\ \ref{fig.errors.clusters}), for which the $\approx 5\%$ boost in $c_{14}$ would bring the models into slightly better agreement with the data. However, the OCDM model is still rejected at $>99.99\%$ confidence, and the $\omegam\approx 0.3$ open CDM models are still excluded ($>99\%$ conf.), the latter provided that at least a 5\% bias (of the expected $\approx 10\%$) for relaxed, early forming systems applies. We conclude, therefore, that the level of systematic error suggested by CDM simulations to affect the X-ray mass measurements does not change the conclusions of our present study. \subsection{Semi-Analytic Model Predictions of $c-M$ relation} \label{fk} In order to use the X-ray $c-M$ relation for precision constraints on cosmological parameters, it is necessary to be able to predict halo concentrations produced in different cosmological models with high precision. This must be achieved with a semi-analytic procedure, because it is not feasible to resort to N-body simulations to fully investigate parameter space for obtaining confidence regions. The procedure proposed by B01 is currently one of the most promising models of this kind. With just two parameters $F$ and $K$ (see \S \ref{models}) it can reproduce the results of CDM N-body simulations with a range of power-spectrum shapes, $\sige$ normalizations, matter content, and dark energy parameters $w$ (B01, \citealt{kuhl05a}). Recently, \citet{macc06a} showed that the normalizations of the B01 models (i.e., $K$ values) obtained by different investigators analyzing halos covering approximately the same mass range ($\la 10^{13}h^{-1}\msun$) can differ by 10-20\%. For halos with $M \approx 10^{13}- 10^{14} h^{-1}\msun$, the halos of \citet{macc06a} prefer $F=0.001$ and $K=2.6$ while those of \citet{tasi04a} prefer $K=3.1$. The ``cluster'' halos ($\ga 3\times 10^{14}\msun$) studied by D04 suggest a higher normalization $K=3.5$.
Because so few studies have investigated the high-mass halo regime, it is unclear whether these differences reflect numerical details in the simulations or demand a revision of the B01 model. More studies of the concentrations of the highest mass halos are very much needed. In this context it is important to note that \citet{shaw06a} have recently studied the $c-M$ relation of massive halos, where most of their halos have $M\approx 1\times 10^{14}\msun$. They obtain power-law fit parameters $\alpha=-0.12\pm 0.03$ and $c_{14}=8.30 \pm 0.04$ for the model \lone\ but with $\sige=0.95$. Converting their results to $\sige=0.90$ yields a 6\% reduction in the concentration, $c_{14}=7.81 \pm 0.04$, which is just 2\% larger than the D04 value quoted in Table \ref{tab.models.cm}. We consider this quite good agreement, considering the differences between theoretical studies noted above for lower mass halos. Finally, we mention that if we adopt the lowest normalization quoted in the literature ($F=0.001$, $K=2.6$, \citealt{macc06a}) for our entire mass range, then in order for the \lone\ model to match the X-ray data, it is necessary to boost the concentrations produced in the \lone\ model by another 16\% over the $10\%$ attributed to the preferential selection of relaxed, early forming systems. This 16\% increase can be achieved by increasing either \sige ($\approx 1.0$), $w$ ($\approx -0.8$), or both. We note that the 99\% upper limit derived in \S \ref{all} would increase \sige\ from 1.07 to 1.15 if $K=2.6$. \subsection{Gas Physics \& Adiabatic Contraction} The published CDM predictions for the $c-M$ relation we have considered in this paper are all derived from dissipationless N-body simulations containing only dark matter. 
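The rescaling in the Shaw et al.\ comparison above is simple arithmetic; a trivial check (ours, using only the quoted numbers) makes the chain of factors explicit:

```python
# Shaw et al. normalization at sigma8 = 0.95, and the quoted 6%
# concentration reduction when converting to sigma8 = 0.90.
c14_shaw = 8.30
reduction = 0.06
c14_converted = c14_shaw * (1.0 - reduction)   # ~7.80, vs. quoted 7.81

# The text states this is 2% larger than the D04 normalization,
# implying a D04-like value of roughly c14_converted / 1.02 ~ 7.65.
d04_implied = c14_converted / 1.02
```

This level of agreement between independent simulation suites is, as the text notes, quite good given the 10--20\% normalization differences seen at lower masses.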
While the details of gasdynamics should be unimportant for massive systems ($M\ga 10^{14}\msun$), the effects of dissipation and feedback from star formation and AGN likely influence the dark matter profile inferred from observations of lower mass halos. It is noteworthy that the systems with $M\sim 10^{13}\msun$ show the largest deviations from the \lone\ model (\S \ref{all}). For most of these the \chandra\ data require a significant contribution of stellar mass from the central galaxy \citep{hump06b,gast06a}. However, allowing for adiabatic contraction \citep[e.g.,][]{blum86a,gned04a} of the dark matter profile in most cases degrades the fits, suggesting a more complex interplay between the baryons and the dark matter \citep[e.g.,][]{loeb03a,elza04a,dutt06a}. Some cosmological simulations with gas predict a small, but significant, systematic increase ($\approx 3\%$) in the concentration of the total mass \citep{lin06a}, but the inability of CDM simulations to reproduce observed X-ray temperature and density profiles within 50-100~kpc of cluster centers \citep[e.g.,][]{lewi00a,muan02a,borg04a} makes it difficult to interpret the reliability of such results. Future precision cosmology studies which aim to use the X-ray $c-M$ relation for low-mass systems ($M\la 10^{13}\msun$) will require better understanding of the influence of the central galaxy on the inferred dark matter distribution. \section{Conclusions} \label{conc} We present the concentration ($c$)-virial mass ($M$) relation of 39 galaxy systems ranging in mass from individual early-type galaxies up to the most massive galaxy clusters, $(0.06-20) \times 10^{14}\msun$. We selected for analysis the most relaxed systems possessing the highest quality data currently available in the \chandra\ and \xmm\ public data archives. Measurements for 24 systems were taken from our recent work \citep{hump06b,gast06a,zapp06a} which populate the lower mass portion of the sample ($M\la 10^{14}\msun$). 
We obtained results for 15 massive galaxy clusters in our sample from the studies by \citet{poin05a} and \citet{vikh06a}. Our principal objective is to measure the $c-M$ relation accurately from galaxy to cluster scales and determine whether $c$ decreases with increasing $M$ as generally predicted by CDM models. However, we also use the $c-M$ relation to provide an initial demonstration of the ability of the $c-M$ relation to constrain cosmological parameters, which also serves to highlight key sources of systematic error -- both in the theoretical models and the observations. We parameterize the X-ray $c-M$ relation using a simple power-law model. The best estimates of the parameters -- the slope ($\alpha$) and normalization ($c_{14}$) evaluated at $M=M_{14}=10^{14}h^{-1}\msun$ -- were obtained via linear regression in log space taking into account the uncertainties on both $c$ and $M$. We employed the BCES method of \citet{akri96} for this analysis. Fitting the power-law model to the entire sample yields $\alpha=-0.172\pm 0.026$ and $c_{14}=9.0\pm 0.4$ (quoted errors $1\sigma$). The slope $\alpha$ is negative and inconsistent with 0 at $6.6\sigma$. The previous studies of galaxy clusters ($M\ga 10^{14}\msun$) with \chandra\ and \xmm\ by \citet{poin05a} and \citet{vikh06a} did not place strong constraints on $\alpha$, and were quite consistent with $\alpha=0$; i.e., it is the lower mass galaxy groups that require $\alpha < 0$ \citep[see][]{gast06a}. Recent optical studies of the $c-M$ relation in the group-cluster regime (see \S \ref{intro}) also do not place strong constraints on the slope and are very consistent with $\alpha = 0$. Our analysis, therefore, provides crucial evidence that $c$ decreases with increasing $M$, as expected in CDM models (\citealt{nfw,jing00a}; B01; D04; \citealt{macc06a}). 
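The log-space power-law fit described above can be sketched as follows. This is a simplified illustration with fabricated data, and ordinary least squares stands in for the BCES estimator of \citet{akri96}, which additionally propagates the measurement errors on both $c$ and $M$; variable and function names are ours:

```python
import numpy as np

M14 = 1.0  # masses expressed in units of 10^14 h^-1 Msun

def fit_cm_powerlaw(mass, conc):
    """Fit c = c14 * (M / M14)^alpha as a straight line in log10 space.

    Returns (alpha, c14). np.polyfit gives [slope, intercept]."""
    x = np.log10(mass / M14)
    y = np.log10(conc)
    alpha, logc14 = np.polyfit(x, y, 1)
    return alpha, 10.0 ** logc14

# Fabricated demonstration data drawn from a known relation with a small
# lognormal scatter (0.01 dex), just to show the fit recovers the inputs.
rng = np.random.default_rng(0)
mass = 10.0 ** rng.uniform(-1, 1, 50)     # 10^13 .. 10^15 h^-1 Msun
conc = 9.0 * mass ** (-0.17) * 10.0 ** rng.normal(0.0, 0.01, 50)

alpha, c14 = fit_cm_powerlaw(mass, conc)  # recovers ~(-0.17, 9.0)
```

For the real sample the scatter is larger (0.10 dex) and the errors on both coordinates matter, which is why the BCES method is used rather than plain least squares.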
We compare the X-ray data to the \lone\ model (with $\omegam=0.3$, $\sige=0.9$, $n_s=1$), the ``concordance model'' effectively representing the combined constraints from the first year of WMAP CMB data, supernovae, and galaxy surveys \citep[e.g.,][]{sper03a}. We judge the median $c-M$ relation of the \lone\ model to be consistent with the empirical power-law fit provided the X-ray sample consists of the most relaxed, early forming systems, for which a systematic increase in the concentrations of $\approx 10\%$ is expected \citep{jing00a,wech02a,macc06a}. We measure an intrinsic scatter in $\logten\ (1+z)c$ of $0.102\pm 0.004$ in excellent agreement with the prediction of CDM simulations for the most relaxed, early forming systems. The amount of scatter is a robust prediction of CDM variants and provides additional evidence that our sample comprises the most relaxed systems that are best suited for X-ray mass measurements requiring approximate hydrostatic equilibrium. The X-ray $c-M$ relation places interesting constraints on \sige. Within the context of the concordance model noted above, the $c-M$ relation requires $0.76<\sige<1.07$ (99\% conf.), assuming a 10\% upward bias in the concentrations for early forming systems. This confidence range is conservative as explained in \S \ref{all}. Next we compare the X-ray data to the \lthree\ model (with $\omegam=0.24$, $h=0.73$, $\sige=0.76$, $n_s=0.96$) which effectively updates the \lone\ model using the analysis of the third year of WMAP data \citep{sper06a}. The X-ray $c-M$ relation rejects the \lthree\ model at the 99.99\% confidence level. The primary reason for the poor performance of the \lthree\ model is the low value of \sige, but the lower value of \omegam\ and the tilt of the power spectrum also contribute to the poor fit. For this comparison we have assumed a uniform bias for relaxed, early forming halos of $\approx 10\%$, as suggested by numerical simulations. 
This bias would have to be $\approx 50\%$ to bring the \lthree\ model into agreement with the X-ray data. While the early-type galaxy and group-cluster mass halos $(\ga 10^{13}\msun)$ studied here apparently prefer slightly higher concentrations than predicted for typical halos in the concordance $\Lambda$CDM model, the opposite is true for late-type galaxies \citep[e.g.,][]{alam02a,bosc03a,dutton05,kuz06,gnedin06}. This may indicate that a selection/formation-time bias operates across the galaxy type spectrum, with late-type galaxies inhabiting the low-concentration tail of the distribution \citep[though see][]{napo04,napo05}. Since D04 provide results of power-law fits to the $c-M$ relations of massive clusters formed in a variety of CDM models, including an open model and several dark energy models, we analyze separately the X-ray $c-M$ relation for the 22 systems in our sample with $M>10^{14}\msun$. In this mass range we obtain $c_{14}$ and $\alpha\approx 0$ values consistent with those inferred from previous \chandra\ and \xmm\ studies in the cluster mass regime noted above \citep{poin05a,vikh06a}. As also found for the entire sample, the $c-M$ relation of the high-mass subsample is consistent with the \lone\ model and very inconsistent with the \lthree\ model. However, an open model with $\omegam\approx 0.3$ is ruled out ($>99\%$ confidence) from joint consideration of only the X-ray $c-M$ relation and published constraints on \sige\ from the analysis of the abundances of galaxy clusters ($\sige > 0.70$, see \S \ref{clusters}). Since cluster abundance studies also find $\sige\approx 0.45$ if $\omegam\approx 1$, using the X-ray $c-M$ data we find that we can also reject CDM models with $\omegam\approx 1$ at a high significance level ($>99.9\%$ confidence). 
Consequently, the combination of the X-ray $c-M$ relation and cluster abundances (and local constraints on the age of the universe) provides novel evidence for a flat, low-$\omegam$ universe with dark energy using observations only in the local ($z\ll 1$) universe. If the values of \sige, \omegam, and $n_s$ of the \lthree\ model are correct, agreement with the X-ray $c-M$ relation may be achieved by increasing the dark energy equation of state parameter $w$. We find that a quintessence model with $w\approx -0.8$ and $\sige=0.82$ performs nearly as well as the \lone\ model, and the larger value of $w$ remains marginally consistent with supernova constraints \citep{asti06a}. Finally, we discuss key sources of systematic error associated with both the X-ray measurements and theoretical models that need to be addressed before the X-ray $c-M$ relation is suitable for precision cosmology. In particular, if the virial masses are systematically underestimated by $\sim 10\%$, as suggested by CDM simulations, then we estimate that $c_{14}$ is increased by $\sim 5\%$, less than the $\approx 10\%$ increase expected for selecting relaxed, early forming systems. This level of systematic error does not change the conclusions of our present study (see \S \ref{he}), but it will be important for future studies of precision cosmology. \acknowledgements We thank A.\ Cooray for discussions and comments on the manuscript. D.A.B., P.J.H., and F.G.\ gratefully acknowledge partial support from NASA grants NNG04GE76G, issued through the Office of Space Sciences Long-Term Space Astrophysics program, and NAG5-13059, issued through the Office of Space Science Astrophysics Data Program. Partial support for this work was also provided by NASA through Chandra Award Numbers GO4-5139X and GO6-7118X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. 
We also are grateful for partial support from NASA-XMM grants, NAG5-13643, NAG5-13693, NNG04GL06G, and NNG05GL02G. \input{ms.bbl} \end{document}
\section{Introduction} Community detection (CD) has found applications in social, biological, business, and other kinds of networks. However, CD algorithms suffer from various flaws -- (i) Most existing CD algorithms are heavily dependent on {\em vertex ordering} \cite{good2010}, yielding completely different community structures when the same network is processed in a different order. For example, Figure \ref{example}(a) shows dissimilar community structures after running {\tt InfoMap}~ \cite{Rosvall29012008} on $100$ different vertex orderings of the Football network \cite{Clauset2004}. (ii) Most optimization algorithms may produce multiple community structures with the same ``optimal'' value of the objective function. For instance, in Figure \ref{example}(b), assigning vertex $x$ to either $A$ or $B$ results in the same modularity score \cite{Newman:2006:}. (iii) Different CD algorithms detect communities in different ways, e.g., as dense groups internally \cite{newman03fast}, or groups with sparse connections externally \cite{Fortunato201075}. It is therefore natural to think of an ensemble approach in which the strengths of different CD algorithms may help overcome the weaknesses of any specific CD algorithm. Some preliminary attempts have been made by \cite{DahlinS13,lanc12consensus}\footnote{Note that ensemble approaches have proved successful in clustering and classification \cite{Xu:2005}.}. \begin{figure}[!h] \centering \scalebox{0.17}{ \includegraphics{heatmap.eps}} \caption{(a) Similarity between pair-wise community structures (based on NMI \cite{danon2005ccs}) after running Infomap algorithm on $100$ different vertex orderings of the Football network. (b) A schematic network consisting of two cliques $A$ and $B$ of size $m$ (representing two communities) connected by a bridging vertex $x$. 
Assigning vertex $x$ to either $A$ or $B$ yields the same value of the optimization metrics (such as modularity \cite{Newman:2006:}).}\label{example} \vspace{-3mm} \end{figure} \noindent{\bf Contributions.} In this paper, we design two ensemble CD algorithms. The {\tt EnDisCo}~algorithm runs multiple ``base'' CD algorithms using a variety of vertex orderings to derive a first set of communities. We then consider the memberships of vertices obtained from base CD algorithms as features and derive a latent network using pair-wise similarity of vertices. The final disjoint community structure is obtained by running any CD algorithm again on the latent network. The {\tt MeDOC}~ algorithm leverages the fact that many communities returned by base algorithms are redundant and can therefore be grouped into ``meta-communities'' to avoid unnecessary computation. We use meta-communities to build an association matrix, where each entry indicates the probability of a vertex belonging to a meta-community. Finally, we obtain both disjoint and overlapping community structures via post-processing on the association matrix. \emph{To the best of our knowledge, we are the first to propose (i) an ensemble framework for overlapping community detection, and (ii) an overlapping CD algorithm that leverages disjoint community information.} We run experiments to identify the best parameter settings for {\tt EnDisCo}~and {\tt MeDOC}. Experiments on both synthetic and real-world networks show that our algorithms outperform both the state-of-the-art non-ensemble based methods \cite{blondel2008,Rosvall29012008,Leskovec} and a recently proposed ensemble approach \cite{lanc12consensus} by a significant margin\footnote{We report $p$-values for all our experiments to show statistical significance.}. We also show that our ensemble approaches reduce the effect of vertex ordering. 
\IEEEpubidadjcol \noindent\textbf{Note:} We use the term ``community structure'' to indicate the result (set of ``communities'') returned by an algorithm. Each community is a set of vertices. \section{Related Work} There has been a great deal of work on clustering data using ensemble approaches (see \cite{Xu:2005} for a review). However, when it comes to clustering vertices in \emph{networks}, ensemble approaches have been relatively scarce\footnote{See the survey \cite{Fortunato201075} for various community detection algorithms.}. Dahlin and Svenson \cite{DahlinS13} were the first to propose an instance-based ensemble CD algorithm for networks which fuses different community structures into a final representation. A few methods addressed the utility of merging several community structures \cite{raghavan-2007}. A new ensemble scheme called CGGC was proposed to maximize modularity \cite{OvelgonneG12}. Kanawati proposed YASCA, an ensemble approach to different network partitions derived from ego-centered communities computed for each selected seed \cite{Kanawati2014}. He further emphasized the quality and diversity of outputs obtained from the base algorithms for ensemble selection \cite{Kanawati2015}. A ``consensus clustering'' \cite{lanc12consensus} approach was recently proposed which leverages a {\em consensus matrix} to produce a disjoint community structure which outperformed previous approaches. 
Our work differs from this approach in at least three significant ways: (i) they measure the number of times two vertices are assigned to the same community, thus ignoring the global similarity of vertices, whereas we capture the global similarity by representing the network within a feature space and grouping redundant base communities into meta-communities; (ii) they either take multiple algorithms or run a particular algorithm multiple times to generate an ensemble matrix, whereas we consider both options; (iii) we are the first to show how aggregating multiple disjoint base communities can be used to discover both disjoint and overlapping community structures simultaneously. We show experimentally that {\tt EnDisCo}~beats consensus clustering. \begin{algorithm}\label{insimul}\small \caption{{\tt EnDisCo}: {\bf En}semble-based {\bf Dis}joint {\bf Co}mmunity Detection } \KwData{Graph $G(V,E)$; \\Base algorithms ${\cal AL}=\{Al_m\}_{m=1}^M$; \\$K$: Number of iterations; \\$\mathbb{INV}(.,.)$: Involvement function;\\ $\mathbb{SIM}(.,.)$: Similarity function between two vectors; \\$RAlgo$: Algorithm for re-clustering} \KwResult{Disjoint community structure $\mathbb{DC}$} $\Gamma=\phi$ \hfill \tcp{Set of all base community structures} \tcp{{\color{blue} Generating base partitions}} \For{each algorithm $Al_m\in {\cal AL}$}{ Run $Al_m$ on $G$ for $K$ different vertex orderings and obtain $K$ community structures, denoted by the set $\Gamma_m$; each community structure $\mathbb{C}_m^k \in \Gamma_m$ may be of different size and is denoted by $\mathbb{C}_m^k=\{C_m^{1k},...,C_m^{ak}\}$;\label{algo1:ensemble}\\ $\Gamma=\Gamma \cup \Gamma_m$; } \For{each $v$ in $V$}{\label{algo1:v} $F(v)=\phi$; \hfill \tcp{Feature vector of $v$} $D_v=0$; \hfill \tcp{Max distance of $v$ to any community} $Clu=0$; \hfill \tcp{Total no. of communities} \tcp{{\color{blue} Constructing ensemble matrix}} \For{each $\Gamma_m \in \Gamma$}{\label{gammas} \For{each $\mathbb{C}_m^k\in \Gamma_m$}{ \For{each
$C\in \mathbb{C}_m^k$}{\label{algo1:c} Compute $d_v^C=1-\mathbb{INV}(v,C)$;\label{algo1:dist} \\ $F(v)=F(v) \cup d_v^C$;\label{algo1:fea}\\ \If{$ d_v^C \geq D_v $}{ $D_v=d_v^C$;\label{algo1:D} } $Clu=Clu+1$;\label{algo1:cl} } } }\label{gammae} $P(v)=\phi$;\\ \For{each $F_i(v) \in F(v)$}{\label{algo1:pre} \tcp{Posterior probability of $v$ in $C_i^k$} Compute $P(C_i|v) =\frac{D_v-F_i(v) +1}{Clu.D_v + Clu -\sum_{k=1}^{Clu} F_k(v)}$;\label{algo1:prob}\\ $P(v)=P(v)\cup P(C_i|v)$;\label{algo1:post} } } Build an ensemble matrix $\mathbb{M}_{|V|\times|V|}$, where $\forall u,v\in V;\ \mathbb{M}(u,v)$=$\mathbb{SIM}(P(u),P(v))$;\label{algo1:sim}\\ \tcp{{\color{blue} Re-clustering the vertices from $M$}} Run $RAlgo$ for {\em re-clustering vertices} from $M$ and discover a disjoint community structure $\mathbb{DC}$;\label{algo1:assign} \\ \Return $\mathbb{DC}$ \end{algorithm} \section{{\tt EnDisCo}: Ensemble-based Disjoint Community Detection} {\tt EnDisCo}~({\bf En}semble-based {\bf Dis}joint {\bf Co}mmunity Detection) starts by first using different CD algorithms to identify different community structures. Second, an ``involvement'' function is used to measure the extent to which a vertex is involved with a given community, which in turn sets the posterior probabilities of each vertex belonging to different communities. Third, {\tt EnDisCo}~transforms the network into a feature space. Fourth, an ensemble matrix is constructed by measuring the pair-wise similarity of vertices. Finally, we apply any standard CD algorithm on the ensemble matrix and discover the final disjoint community structure. \subsection{Algorithmic Description} {\tt EnDisCo}~ follows three fundamental steps (a pseudo-code is shown in Algorithm \ref{insimul}): \noindent{\bf (i) Generating base partitions.} Given a network $G=(V,E)$ and a set ${\cal AL}=\{Al_m\}_{m=1}^M$ of $M$ different base CD algorithms, {\tt EnDisCo}~runs each algorithm $Al_m$ on $K$ different vertex orderings (randomly selected) of $G$. 
This generates a set of $K$ different community structures denoted $\Gamma_m=\{\mathbb{C}_m^k\}_{k=1}^K$, where each community structure $\mathbb{C}_m^k=\{C_m^{1k},\cdots,C_m^{ak}\}$ constitutes a specific partitioning of the vertices in $G$, and each $\mathbb{C}_m^k$ might be of different size (Step \ref{algo1:ensemble}). \noindent{\bf (ii) Constructing ensemble matrix.} Given a $\Gamma_m$, we then compute the extent of $v$'s involvement in each community $C$ in $\mathbb{C}_m^k$ via an ``involvement'' function $\mathbb{INV}(v,C)$ (Step \ref{algo1:dist}). Possible definitions of $\mathbb{INV}$ are given in Section \ref{algo1:Paramater}. For each vertex $v$, we construct a feature vector $F(v)$ whose elements indicate the distance of $v$ (measured by $1-\mathbb{INV}$) from each community obtained from different runs of the base algorithms (Step \ref{algo1:fea}). The size of $F(v)$ is the same as the number of communities $Clu$ in $\Gamma$ (approx. $\bar{a}MK$, where $\bar{a}$ is the average size of a base community structure). Let $D_v$ be the largest distance of $v$ from any community in the sets in $\Gamma$ (i.e., $D_v=\max_i F_i(v)$ in Step \ref{algo1:D}). We define the conditional probability of $v$ belonging to community $C_i$ (Step \ref{algo1:prob}) as: \begin{equation}\small P(C_i|v) =\frac{D_v-F_i(v) +1}{Clu.D_v + Clu -\sum_{k=1}^{Clu} F_k(v)} \end{equation} The numerator ensures that the greater the distance $F_i(v)$ of $v$ from community $C_i$, the less likely $v$ is to be in community $C_i$. The normalization factor in the denominator ensures that $\sum_{i=1}^{Clu} P(C_i|v)=1$. Add-one smoothing in the numerator assigns a non-zero probability to every $C_i$, in particular to $C_{\hat{k}}$ with $\hat{k}=\argmax\limits_{k} F_k(v)$. The set of posterior probabilities of $v$ is $P(v)=\{ P(C_k|v) \}_{k=1}^ {Clu}$ (Step \ref{algo1:post}), which in turn transforms a vertex into a point in a multi-dimensional feature space.
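A minimal sketch of this posterior computation (the formula is the equation above; the function and variable names are ours):

```python
def posteriors(distances):
    """Membership probabilities P(C_i | v) from the distance vector F(v)
    of a single vertex v to every base community.

    Add-one smoothing in the numerator keeps every probability non-zero;
    the denominator normalizes the distribution so that it sums to 1.
    """
    clu = len(distances)                # Clu: total number of base communities
    d_max = max(distances)              # D_v: largest distance of v
    denom = clu * d_max + clu - sum(distances)
    return [(d_max - d + 1) / denom for d in distances]

# v is close to the first two communities and far from the third.
p = posteriors([0.1, 0.2, 0.9])        # -> approx [0.4, 0.378, 0.222]
```

Since the denominator equals $\sum_i (D_v - F_i(v) + 1)$, the returned probabilities always sum to one.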
Finally, we construct an ensemble matrix $M$ whose entry $M(u,v)$ is the similarity (obtained from a function $\mathbb{SIM}$ whose possible definitions are given in Section \ref{algo1:Paramater}) between the feature vectors of $u$ and $v$ (Step \ref{algo1:sim}). The ensemble matrix ensures that the more communities a pair of vertices share, the more likely they are to be connected in the network \cite{Leskovec}. \noindent{\bf (iii) Discovering final community structure.} In Step \ref{algo1:assign} we use a CD algorithm $RAlgo$ to re-cluster the vertices from $M$ and discover the final disjoint community structure. \subsection{Parameter Selection}\label{algo1:Paramater} \noindent $\bullet$ {\bf Involvement Function ($\mathbb{INV}$):} We use two functions to measure the involvement of a vertex $v$ in a community $C$: (i) {\em Restricted Closeness Centrality} ($RCC$): This is the inverse of the average shortest-path distance from the vertex $v$ to the vertices in community $C$, i.e., $RCC(v,C)=\frac{|C|}{\sum_{u\in C} dist(v,u)}$; (ii) {\em Inverse Distance from Centroid} ({\em IDC}): we first identify the vertex with the highest closeness centrality (w.r.t. the induced subgraph of $C$) in community $C$, mark it as the centroid of $C$ (denoted by $u_c$), and then measure the involvement of $v$ as the inverse of the shortest-path distance between $v$ and $u_c$, i.e., $IDC(v,C)=\frac{1}{dist(v,u_c)}$. \noindent $\bullet$ {\bf Similarity Function ($\mathbb{SIM}$):} We consider cosine similarity ($COS$) and Chebyshev distance ($CHE$) (essentially, $1-CHE$) to measure the similarity between two vectors. \noindent $\bullet$ {\bf Algorithm for Re-clustering ($RAlgo$):} We consider each base CD algorithm as a candidate for re-clustering the vertices from the ensemble matrix. The idea is to show that a non-ensemble CD algorithm can perform even better when it operates on the ensemble matrix of network $G$ rather than on the adjacency matrix of $G$.
However, one can use any CD algorithm in this step to detect the community structure. We will show the effect of different algorithms used in this step in Section \ref{sec:impact}. \noindent $\bullet$ {\bf Number of Iterations ($K$):} Instead of fixing a hard value, we set $K$ to be dependent on the number of vertices $|V|$ in the network. We vary $K$ from $0.01$ to $0.50$ (with step $0.05$) of $|V|$ and confirm that for most of the networks, the accuracy of the algorithm converges at $K=0.2|V|$ (Figures \ref{parameter_dis}(c) and \ref{parameter_dis}(f)); therefore we set $K=0.2|V|$ in our experiments. \subsection{Complexity Analysis}\label{algo1:complexity} Suppose $N=|V|$ is the number of vertices in the network, $M$ is the number of base algorithms and $K$ is the number of vertex orderings. Further suppose $\bar{a}$ is the average size of a base community structure. Then the loop in Step \ref{algo1:v} of Algorithm 1 iterates $\bar{a}NMK$ times (where $M,K\ll N$). The construction of the ensemble matrix in Step \ref{algo1:sim} takes $\mathcal{O}(N^2)$ time. Graph partitioning is NP-hard, and it is hard even to find a solution with guaranteed approximation bounds; however, heuristics such as the well-known Kernighan-Lin algorithm run in $O(N^2\log N)$ time.
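The two involvement functions defined above reduce to shortest-path computations; a minimal pure-Python sketch (graph given as an adjacency dict, assumed connected; helper names are ours) is:

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def rcc(adj, v, community):
    """Restricted Closeness Centrality: |C| / sum of distances from v to C."""
    d = bfs_dist(adj, v)
    return len(community) / sum(d[u] for u in community)

def idc(adj, v, community):
    """Inverse Distance from Centroid: the centroid is the member with the
    highest closeness centrality inside the induced subgraph of C.
    (Sketch: assumes v is not itself the centroid.)"""
    sub = {u: [w for w in adj[u] if w in community] for u in community}
    def closeness(u):
        return (len(community) - 1) / max(1, sum(bfs_dist(sub, u).values()))
    centroid = max(community, key=closeness)
    return 1.0 / bfs_dist(adj, v)[centroid]

# Path graph 0-1-2-3, community C = {2, 3}, query vertex v = 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

For this path graph, $RCC(0,\{2,3\})=\frac{2}{2+3}=0.4$ and, with vertex $2$ as the centroid of $\{2,3\}$, $IDC(0,\{2,3\})=\frac{1}{2}$.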
\vspace{-3mm} \begin{algorithm}\label{meclud}\small \caption{{{\tt MeDOC}}: A {\bf Me}ta Clustering based {\bf D}isjoint and {\bf O}verlapping {\bf C}ommunity Detection} \KwData{Graph $G(V,E)$;\\ Base algorithms ${\cal AL}=\{Al_m\}_{m=1}^M$;\\ $K$: Number of iterations; \\$W(.,.)$: Matching between pair-wise communities;\\ $RAlgo$: Algorithm for re-clustering; \\${\cal F}(.,.)$: vertex-to-community association; \\ $\tau$: threshold for overlapping community detection} \KwResult{Disjoint ($\mathbb{DC}$) and overlapping ($\mathbb{OC}$) community structures} \tcp{{\color{blue} Constructing multipartite network}} \For{$Al_m$ in ${\cal AL}$}{ Run $Al_m$ on $G$ for $K$ different vertex orderings and obtain $K$ community structures, denoted by the set $\Gamma_m=\{\mathbb{C}_m^k\}_{k=1}^K$; each community structure $\mathbb{C}_m^k \in \Gamma_m$ may be of different size and is denoted by $\mathbb{C}_m^k=\{C_m^{1k},...,C_m^{ak}\}$; \label{algo2:perm}} Construct a $P$-partite graph $GP$ (where $P=M.K$) consisting of $M.K$ partitions, one for each community structure obtained in Step 2: the vertices in partition $m^k$ are the communities in $\mathbb{C}_m^k$, and an edge is drawn between each pair of vertices (communities) $C_m^{ik}$ and $C_n^{jk'}$ with the edge weight $W(C_m^{ik},C_n^{jk'})$;\label{algo2:cons}\\ \tcp{{\color{blue} Re-clustering the multipartite network}} Run $RAlgo$ to re-cluster vertices in $GP$ and discover a meta-community structure, $\mathbb{C}_{GP}=\{C_{GP}^{l}\}_{l=1}^L$;\label{algo2:run}\\ \tcp{{\color{blue} Constructing an association matrix}} Construct an association matrix $\mathbb{A}_{|V|\times L}$, where ${\mathbb A}(v,l)={\cal F}(v,C_{GP}^l)$, indicating the association of vertex $v$ to a meta-community $C_{GP}^l$;\label{algo2:asso}\\ \tcp{{\color{blue} Discovering final community structure}} Each row in $\mathbb{A}$ indicates the membership probabilities of the corresponding vertex in $L$ meta-communities;\\ To get $\mathbb{DC}$, we assign
a vertex $v$ to community $C^* = \argmax\limits_{C} \ \mathbb{A} (v,C) $;\label{algo2:dc}\\ To get $\mathbb{OC}$, we assign a vertex $v$ to a set of communities $C_v^*$ so that $\forall C\in C_v^*: \mathbb{A} (v,C) \geq \tau$;\label{algo2:oc}\\ \Return $\mathbb{DC}$, $\mathbb{OC}$ \end{algorithm} \vspace{-5mm} \section{{\tt MeDOC}: Meta-clustering Approach} {\tt MeDOC}~ ({\bf Me}ta Clustering based {\bf D}isjoint and {\bf O}verlapping {\bf C}ommunity Detection) starts by executing all base CD algorithms, each with different vertex orderings, to generate a set of community structures. It then creates a multipartite network. After this, a CD algorithm is used to partition the multipartite network. Finally, a vertex-to-community association function is used to determine the membership of a vertex in a community. Unlike {\tt EnDisCo}, {\tt MeDOC}~ yields both disjoint and overlapping community structures from the network. \subsection{Algorithmic Description} {\tt MeDOC}~ has the following four basic steps (pseudo-code is in Algorithm \ref{meclud}): \noindent{\bf (i) Constructing multipartite network.} {\tt MeDOC}~takes $M$ CD algorithms ${\cal AL}=\{Al_m\}_{m=1}^M$ and runs each $Al_m$ on $K$ different vertex orderings of $G$. For each ordering $k$, $Al_m$ produces a community structure $\mathbb{C}_m^k=\{C_m^{1k},...,C_m^{ak}\}$ of varying size (Step \ref{algo2:perm}). After running on $K$ vertex orderings, each algorithm $Al_m$ produces $K$ different community structures $\Gamma_m=\{\mathbb{C}_m^k\}_{k=1}^K$. Therefore, at the end of Step \ref{algo2:perm}, we obtain $K$ community structures from each of the $M$ algorithms (essentially, $P=M.K$ community structures in total). We now construct a $P$-partite network (aka meta-network) $GP$ as follows: the vertices are the members of $\bigcup_{m,k} \mathbb{C}^k_m$, i.e., every community present in a community structure obtained from any of the base algorithms in ${\cal AL}$ and any vertex ordering is a vertex of $GP$.
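The meta-network construction of step (i) can be sketched as follows (a minimal illustration with our own names; as the matching weight $W$ we use Jaccard overlap, one of the choices discussed in Section~\ref{algo2:Parameter}):

```python
from itertools import combinations

def jaccard(c1, c2):
    """Jaccard overlap between two communities (one possible choice for W)."""
    return len(c1 & c2) / len(c1 | c2)

def build_meta_graph(structures):
    """Weighted P-partite meta-graph: one partition per community structure,
    one vertex per base community; edges only run across partitions."""
    nodes = [(p, i) for p, cs in enumerate(structures) for i in range(len(cs))]
    edges = {}
    for (p, i), (q, j) in combinations(nodes, 2):
        if p == q:                      # same partition: never connected
            continue
        w = jaccard(structures[p][i], structures[q][j])
        if w > 0:                       # keep only overlapping community pairs
            edges[((p, i), (q, j))] = w
    return nodes, edges

# Two base community structures (P = 2) over the same vertex set.
structures = [[{1, 2, 3}, {4, 5}], [{1, 2}, {3, 4, 5}]]
nodes, edges = build_meta_graph(structures)
```

Here the meta-graph has four vertices and three weighted cross-partition edges; e.g. the communities $\{1,2,3\}$ and $\{1,2\}$ are joined with weight $2/3$.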
We draw an edge from a community $C_m^{ik}$ to a community $C_n^{jk'}$ and associate with it a weight $W(C_m^{ik},C_n^{jk'})$ (Step \ref{algo2:cons}). Possible definitions of $W$ are given later in Section~\ref{algo2:Parameter}. Since each $\mathbb{C}_m^k$ is disjoint, vertices within the same partition are never connected. \noindent{\bf (ii) Re-clustering the multipartite network.} Here we run any standard CD algorithm $RAlgo$ on the multipartite network $GP$ and obtain a community structure containing (say) $L$ communities $\mathbb{C}_{GP}=\{C_{GP}^l\}_{l=1}^L$. Note that in this step we cluster the communities obtained earlier in Step 2; therefore each community $C_{GP}^l$ obtained here is called a ``meta-community'' (or community of communities) (Step \ref{algo2:run}). \noindent{\bf (iii) Constructing an association matrix.} We determine the association between a vertex $v$ and a meta-community $C_{GP}^l$ using a function $\mathcal{F}$ and construct an association matrix $\mathbb{A}$ of size $|V|\times L$, where each entry ${\mathbb A}(v,l)={\cal F}(v,C_{GP}^l)$ (Step \ref{algo2:asso}). Possible definitions of $\mathcal{F}$ are given later in Section~\ref{algo2:Parameter}. \noindent{\bf (iv) Discovering final community structure.} The final vertex-to-community assignment is performed by processing $\mathbb{A}$. The entries in each row of $\mathbb{A}$ denote the membership probabilities of the corresponding vertex in the $L$ meta-communities. For disjoint community assignment, we label each vertex $v$ with the community in which $v$ has the most probable membership in $\mathbb{A}$, i.e., $l^* = \argmax\limits_{l} \ \mathbb{A} (v,l)$. Ties are broken by assigning the vertex to the community to which most of its direct neighbors belong. Note that a meta-community is not guaranteed to contain at least one vertex after this assignment; consequently, the final community structure may have fewer than $L$ communities.
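Steps (iii) and (iv) admit a compact sketch (a minimal illustration with our own names; here $\mathcal{F}$ is taken to be the fraction of constituent base communities that contain $v$, one of the association functions discussed in Section~\ref{algo2:Parameter}):

```python
def association_matrix(vertices, meta_communities):
    """A[v][l]: fraction of the base communities grouped into meta-community l
    that contain vertex v (a simple vertex-to-community association F)."""
    return {v: [sum(1 for c in meta if v in c) / len(meta)
                for meta in meta_communities]
            for v in vertices}

def disjoint_assignment(A):
    """Label each vertex with its most probable meta-community (row argmax)."""
    return {v: max(range(len(row)), key=row.__getitem__)
            for v, row in A.items()}

# Meta-community 0 groups three overlapping base communities, meta-community 1 two.
meta = [[{1, 2, 3, 5}, {1, 2, 7}, {2, 7, 8}], [{4, 6}, {4, 6, 9}]]
A = association_matrix([1, 2, 4], meta)       # A[1] = [2/3, 0], A[4] = [0, 1]
labels = disjoint_assignment(A)               # {1: 0, 2: 0, 4: 1}
```

An overlapping assignment is obtained from the same matrix by keeping every meta-community whose entry exceeds the threshold $\tau$ instead of taking the argmax.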
For discovering the overlapping community structure, we assign a vertex $v$ to every community for which its membership probability exceeds a threshold $\tau$. Possible ways to choose this threshold are given in Section~\ref{algo2:Parameter}. \subsection{Parameter Selection}\label{algo2:Parameter} \noindent $\bullet$ {\bf Matching Function ($W$)}: Given two communities $C_i$ and $C_j$, we measure their matching/similarity via the Jaccard coefficient ({\em JC})=$\frac{|C_i \cap C_j|}{|C_i \cup C_j|}$ and average precision ({\em AP}) =$\frac{1}{2}(\frac{|C_i \cap C_j|}{|C_i|} + \frac{|C_i \cap C_j|}{|C_j|})$. \noindent $\bullet$ {\bf Association Function ($\mathcal{F}$)}: Given a meta-community $C$ consisting of (say) $\gamma$ communities, the association of $v$ with $C$ can be calculated as $\mathcal{F}(v,C)=\frac{\sum_{l=1}^{\gamma} \delta(v,C^l)}{\gamma}$, where $\delta(v,C^l)$ returns $1$ if $v$ is a part of $C^l$, and $0$ otherwise. For example, if $C=\{\{1,2,3,5\},\{1,2,7\},\{2,7,8\}\}$, then $\mathcal{F}(1,C)=\frac{2}{3}$. Alternatively, a weighted association measure may assign a score to $v$ w.r.t. $C$ based on the co-occurrence of the other community members with $v$, i.e., {\scriptsize $\mathcal{F}_w(v,C)= \frac{\big|\bigcap \limits_{C^l\in C:\, v\in C^l} C^l \big|} {\big|\bigcup\limits_{C^l\in C:\, v\in C^l} C^l\big|}$}. In our earlier example, $\mathcal{F}_w(1,C)=\frac{|\{1,2\}|}{|\{ 1,2,3,5,7\}|}=\frac{2}{5}$. \noindent $\bullet$ {\bf Threshold ($\tau$):} We choose the threshold $\tau$ automatically as follows. We first assign each vertex to its most probable community -- this produces a disjoint community structure. Each vertex $v_i$ is represented by a feature vector $F(v_i)$ which is the entire $i$'th row of the association matrix $\mathbb{A}$.
We then measure the average similarity of the vertices in $C$ as $AS(C)=\frac{\sum_{(u,v)\in E_C} COS(F(u),F(v))}{|E_C|}$, where $E_C$ is the set of edges completely internal to $C$ and $COS$ is cosine similarity. The probability that two vertices are connected in $C$ is then defined as: \begin{equation}\small P(C)=\frac{e^{{[AS(C)]}^2}}{1+e^{{[AS(C)]}^2}} \end{equation} For a vertex $v$, if $P(C\cup \{v\}) \geq P(C)$, we further assign $v$ to $C$, in addition to its current community. We compare our threshold selection method with the following baseline: each vertex is assigned to its top $n\%$ most probable communities (we set $n$ to $5\%$ or $10\%$). Our experiments show that {\tt MeDOC}~delivers excellent performance with our threshold selection method (see Figures \ref{parameter_over}(g)-(i)). The other input parameters, $RAlgo$ and $K$, remain the same as discussed in Section \ref{algo1:Paramater}. \vspace{-2mm} \subsection{Complexity Analysis} The most expensive step of {\tt MeDOC}~ is the construction of the multipartite network in Step 3. If $M$ is the number of base algorithms, $K$ is the number of vertex orderings and $\bar{a}$ is the average size of a base community structure, the worst case occurs when each vertex in one partition is connected to every vertex in the other partitions --- in that case, the total number of edges is $\mathcal{O}(\bar{a}^2M^2K^2)$. However, in practice the network is extremely sparse, leading to $\mathcal{O}(\bar{a}MK)$ edges (since in sparse graphs $|E|=\mathcal{O}(|V|)$). Further, constructing the association matrix takes $\mathcal{O}(NL)$ iterations (where $L\ll N$). \begin{table*} \caption{Properties of the real-world networks.
$N$: number of vertices, $E$: number of edges, $C$: number of communities, $\rho$: average edge-density per community, $S$: average size of a community, $O_m$: average number of community memberships per vertex.}\label{dataset} \centering \scalebox{0.75}{ \begin{tabular}{l||l|l|l|r|r|r|r|r|r|c} \multicolumn{11}{c}{(a) Networks with disjoint communities}\\\hline Networks & Vertex type & Edge type & Community type & N & E & C & $\rho$ & S & $O_m$ & Reference\\\hline University & Faculty & Friendship & School & 81 & 817 & 3 & 0.54 & 27 & 1 & \cite{PhysRevE.77.016107} \\ Football & Team & Games & Group-division & 115 & 613 & 12 & 0.64 & 9.66 & 1 & \cite{Clauset2004} \\ Railway & Stations & Connections & Province & 301 & 1,224 & 21 & 0.24 & 13.26 & 1 & \cite{Chakraborty:2014} \\ Coauthorship & Researcher & Collaborations & Research area & 103,677 & 352,183 & 24 & 0.14 & 3762.58 & 1 & \cite{0002SGM14,ChakrabortySTGM13} \\\hline \multicolumn{11}{c}{}\\ \multicolumn{11}{c}{(b) Networks with overlapping communities}\\\hline Networks & Vertex type & Edge type & Community type & N & E & C & $\rho$ & S & $O_m$ & Reference\\\hline Senate & Senate & Similar voting pattern & Congress & 1,884 & 16,662 & 110 & 0.45 & 81.59 & 4.76 & \cite{PhysRevE.91.012821}\\ Flickr & User & Friendship & Joined group & 80,513 & 5,899,882 & 171 & 0.046 & 470.83 & 18.96 & \cite{Wang-etal12} \\ Coauthorship & Researcher & Collaborations & Publication venues & 391,526 & 873,775 & 8,493 & 0.231 & 393.18& 10.45 & \cite{Palla}\\ LiveJournal & User & Friendship & User-defined group & 3,997,962 & 34,681,189 & 310,092 & 0.536 & 40.02 & 3.09 & \cite{Leskovec} \\ Orkut & User & Friendship & User-defined group & 3,072,441 & 117,185,083 & 6,288,363 & 0.245 & 34.86 & 95.93 & \cite{Leskovec}\\\hline \end{tabular}} \vspace{-3mm} \end{table*} \section{Results of Disjoint Community Detection} \subsection{Datasets}\label{dis_dataset} We use the LFR benchmark model \cite{PhysRevE} to generate synthetic networks with 
ground-truth community structure by varying the number of vertices $n$, mixing parameter $\mu$ (the ratio of inter- and intra-community edges), average degree $\bar k$, maximum degree $k_{max}$, minimum (maximum) community size $c_{min}$ ($c_{max}$), average percentage $O_n$ of overlapping vertices and the average number $O_m$ of communities to which a vertex belongs.\footnote{ Unless otherwise stated, we generate networks with the same parameter configuration used in \cite{Chakraborty:2014,Kanawati2014,1742}: $n=10000$, $\bar k=50$, $k_{max}=150$, $\mu=0.3$, $O_n=0$, $O_m=1$, $c_{max}=100$, $c_{min}=20$.} Note that for each parameter configuration, we generate 50 LFR networks, and the values in all the experiments are reported by averaging the results. We also use $4$ real-world networks mentioned in Table \ref{dataset}(a) for experiments (see detailed description in Appendix \cite{si}). \subsection{Baseline Algorithms} We compare {\tt EnDisCo}~and {\tt MeDOC}~with the following algorithms: \ (i) {\em Modularity-based}: FastGreedy ({\tt FstGrdy}) \cite{newman03fast}, Louvain ({\tt Louvain}) \cite{blondel2008} and {\tt CNM}~ \cite{Clauset2004}; (ii) {\em Vertex similarity-based}: WalkTrap ({\tt WalkTrap}) \cite{JGAA-124}; (iii) {\em Compression-based}: InfoMap ({\tt InfoMap}) \cite{Rosvall29012008}; (iv) {\em Diffusion-based}: Label Propagation ({\tt LabelPr}) \cite{raghavan-2007}. These algorithms are also used as base algorithms in ${\cal AL}$ in our ensemble approaches. We further compare our methods with Consensus Clustering ({\tt ConsCl}) \cite{lanc12consensus}, a recently-proposed ensemble-based framework for disjoint community detection. \subsection{Evaluation Metrics} \label{dis_eval} As we know the ground-truth community structure, we measure performance of competing CD algorithms using the standard {\em Normalized Mutual Information} ({\em NMI}) \cite{danon2005ccs} and {\em Adjusted Rand Index} ({\em ARI}) \cite{hubert1985}. 
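As a reference for the evaluation, NMI between two disjoint labelings can be computed in a few lines of pure Python (a sketch using the $2I/(H_1+H_2)$ normalization; function and variable names are ours):

```python
from collections import Counter
from math import log

def nmi(labels_a, labels_b):
    """Normalized Mutual Information between two labelings of the same
    vertex set, normalized as 2*I(A;B) / (H(A) + H(B))."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    h_a = -sum(c / n * log(c / n) for c in ca.values())
    h_b = -sum(c / n * log(c / n) for c in cb.values())
    mi = sum(c / n * log((c / n) / (ca[a] / n * cb[b] / n))
             for (a, b), c in joint.items())
    if h_a + h_b == 0:      # both partitions put every vertex in one community
        return 1.0
    return 2 * mi / (h_a + h_b)
```

NMI is invariant to label permutations (e.g., `nmi([0, 0, 1, 1], [1, 1, 0, 0])` evaluates to $1$) and drops to $0$ for statistically independent partitions.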
\vspace{-3mm} \begin{figure} \centering \scalebox{0.23}{ \centering \includegraphics{parameter.eps}} \vspace{-3mm} \caption{Dependence of the performance of {\tt EnDisCo}~ (left panel) and {\tt MeDOC}~ (right panel) on different parameters. The quality of the ground-truth community structure is varied by changing $\mu$ from $0.1$ to $0.8$ (keeping the other LFR parameters at their defaults) and the performance is measured using NMI. In (c) and (f), we vary $K$ and report the accuracy for three different LFR networks and the Football network. The value corresponding to one parameter is reported by averaging the values for all possible combinations of the other parameters. The results are statistically significant (for multiple curves in (c) and (f), we report the range of $p$-values).}\label{parameter_dis} \vspace{-7mm} \end{figure} \begin{table*}[!ht] \centering \caption{(A) Relative percentage improvement (averaged over NMI and ARI) of {\tt EnDisCo}~(E) and {\tt MeDOC}~(M) over the baseline algorithms for disjoint community detection. Each row corresponds to an algorithm $Al$, and the value indicates the performance improvement of the ensemble approach with $Al$ as the re-clustering algorithm over the isolated performance of $Al$ without the ensemble. (B) We further compare our ensemble methods with {\tt ConsCl}~(C) for each base algorithm separately; the results are reported by averaging over all the networks. The rows in Table-B are the same as in Table-A.
} \label{performance:non} \scalebox{0.8}{ \begin{tabular}{|l|cccccccc|c|ccccccccccc|c|c>{\columncolor[gray]{0.8}}cc|} \cline{1-21} \cline{23-25} \multirow{2}{*}{Algorithm} & \multicolumn{8}{c|}{{\bf Synthetic Networks}} & &\multicolumn{11}{c|}{{\bf Real-world Networks}} & &\multicolumn{3}{c|}{{{\bf Average over} }}\\ \multirow{2}{*}{ } & \multicolumn{2}{c}{LFR ($\mu=0.1$)} & &\multicolumn{2}{c}{LFR ($\mu=0.3$)} & &\multicolumn{2}{c|}{LFR ($\mu=0.6$)} & &\multicolumn{2}{c}{Football} & &\multicolumn{2}{c}{Railway} & &\multicolumn{2}{c}{University} & &\multicolumn{2}{c|}{Coauthorship} & & \multicolumn{3}{c|}{{\bf All Networks}}\\ \cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12} \cline{14-15} \cline{17-18} \cline{20-21} \cline{23-25} & E & M & & E & M & & E & M& & E & M& & E & M& & E & M& & E & M&& \multicolumn{1}{c}{E} &M & C \\ \cline{1-9} \cline{11-21} \cline{23-25} {\tt FstGrdy} & 2.39 & 2.93 & & 2.71 & 3.02 & & 3.81 & 3.92 & & 0 & 0 & & 1.22 & 1.43 & & 2.20 & 2.86 & & 3.98 & 4.60 & & 2.33 & 2.36 & 1.98 \\ {\tt Louvain} & 1.97 & 2.04 & & 2.22 & 2.40 & & 3.41 & 3.86 & & 0 & 0 & & 1.17 & 1.43 & & 2.12 & 2.30 & & 2.21 & 2.39 & & 1.99 & 2.01 & 1.98 \\ {\tt CNM} &2.07 & 2.46 & & 2.14 & 2.83 & &3.22 & 3.50 & & 1.23 & 1.46 & & 1.49 & 1.92 & & 2.39 & 2.40 & & 2.92 & 3.41 & & 2.20 & 2.42 & 2.01 \\ {\tt InfoMap} & 0 & 0 & & 1.44 & 1.62 & & 2.01 & 2.46 & & 0 & 0 & & 1.22 & 1.56 & & 2.01 & 2.20 & & 2.31 & 2.98 & & 1.28 & 1.31 & 1.28 \\ {\tt WalkTrap} &4.43 & 4.97 & & 4.86 & 5.08 & & 6.98 & 7.42 & & 2.21 & 2.46 & & 3.21 & 3.49 & & 4.22 & 4.49 & & 5.06 & 5.51 & & 4.24 & 4.65 & 4.01 \\ {\tt LabelPr} & \cellcolor{gray!25} 5.06 & \cellcolor{gray!25} 5.72 & & \cellcolor{gray!25} 5.12 & \cellcolor{gray!25} 5.39 & & \cellcolor{gray!25} 7.50 & \cellcolor{gray!25} 7.82 & & \cellcolor{gray!25} 3.01 & \cellcolor{gray!25} 3.29 & & \cellcolor{gray!25} 3.46 & \cellcolor{gray!25} 3.79 & & \cellcolor{gray!25} 6.21 & \cellcolor{gray!25} 6.80 & & \cellcolor{gray!25} 6.21 & \cellcolor{gray!25} 
6.98 & & 5.21 & 5.46 & 3.76 \\\cline{1-21} \cline{23-25} \multicolumn{21}{c}{(A)} &\multicolumn{1}{c}{}& \multicolumn{3}{c}{(B)} \end{tabular}} \vspace{-8mm} \end{table*} \subsection{Experimental Results} We first run experiments to identify the best parameters for {\tt EnDisCo}~and {\tt MeDOC}~and then present the comparative analysis among the competing algorithms. \subsubsection{Dependency on the Parameters} We consider the LFR networks and vary $\mu$. Figure \ref{parameter_dis}(a) shows that the accuracy of {\tt EnDisCo}~is similar for both the involvement functions, while Figure \ref{parameter_dis}(b) shows cosine similarity fully dominating Chebyshev distance. Figure \ref{parameter_dis}(d) shows that Jaccard coefficient performs significantly better than average precision when {\tt MeDOC}~is considered, while \ Figure \ref{parameter_dis}(e) shows that the weighted association function seems to dominate the other for $\mu < 0.6$ and exhibits similar performance for $\mu\geq 0.6$. We further vary the number of iterations $K$ to obtain communities with different vertex orderings -- Figures \ref{parameter_dis}(c) and \ref{parameter_dis}(f) show that for the networks with strong community structure (such as LFR ($\mu=0.1$), Football), the accuracy levels off at $K=0.2|V|$; however with increasing $\mu$ leveling off occurs at larger values of $K$. Note that the patterns observed here for LFR network are similar for other networks. Therefore unless otherwise stated, in the rest of the experiment we show the results of our algorithms with the following parameter settings for disjoint community detection: {\tt EnDisCo}: $K=0.2|V|$, $RCC$, $COS$; {\tt MeDOC}: $K=0.2|V|$, $JC$, $F_w$. 
\subsubsection{Impact of Base CD Algorithms on {\tt EnDisCo}~and {\tt MeDOC}}\label{sec:impact} In order to assess the impact of each base algorithm in our ensemble, we measure the accuracy of {\tt EnDisCo}~and {\tt MeDOC}~when that base algorithm is removed from the ensemble --- Table \ref{impact} shows that for LFR networks {\tt InfoMap}~has the biggest impact on accuracy according to both evaluation measures (NMI and ARI) for both {\tt EnDisCo}~and {\tt MeDOC}~ (the results are the same for real networks \cite{si}). Since the final step in both {\tt EnDisCo}~and {\tt MeDOC}~is to run a CD algorithm for re-clustering, we also conduct experiments (Table~\ref{impact_den} for LFR networks and Appendix \cite{si} for real networks) to identify the best one. Again, {\tt InfoMap}~proves to be the best. \begin{table}[!h] \centering \caption{Impact of each base algorithm on the accuracy of {\tt EnDisCo}~ and {\tt MeDOC}. The results are reported on the default LFR network with the default parameter settings of the proposed algorithms (we use {\tt InfoMap}~ as the final re-clustering algorithm).
Each base algorithm is removed in isolation during the construction of ensemble matrix.}\label{impact} \scalebox{0.8}{ \begin{tabular}{|l|l|c|c|c|c|c|c|} \hline \multirow{3}{*}{No.} & Base & \multicolumn{4}{c|}{Disjoint} & \multicolumn{2}{c|}{Overlapping} \\\cline{3-8} &Algorithm & \multicolumn{2}{c|}{{\tt EnDisCo}} & \multicolumn{2}{c|}{{\tt MeDOC}} & \multicolumn{2}{c|}{{\tt MeDOC}} \\\cline{3-8} & & NMI & ARI & NMI & ARI & ONMI & $\Omega$ \\\hline (1) & All & 0.85 & 0.89 & 0.87 & 0.90 & 0.84 & 0.87 \\ (2) & (1) $-$ {\tt FstGrdy} & 0.83 & 0.88 & 0.84 & 0.88 & 0.83 & 0.85 \\ (3) & (1) $-$ {\tt Louvain} & 0.82 & 0.86 & 0.85 & 0.86 & 0.81 & 0.84 \\ (4) & (1) $-$ {\tt CNM} & 0.82 & 0.85 & 0.83 & 0.87 & 0.82 & 0.85 \\ \rowcolor[HTML]{D3D3D3} (5) & (1) $-$ {\tt InfoMap} & 0.80 & 0.81 & 0.81 & 0.82 & 0.80 & 0.81\\ (6) & (1) $-$ {\tt WalkTrap} & 0.84 & 0.88 & 0.85 & 0.81 & 0.83 & 0.86\\ (7) & (1) $-$ {\tt LabelPr} & 0.84 & 0.87 & 0.86 & 0.87 & 0.83 & 0.85\\\hline \end{tabular}} \vspace{-5mm} \end{table} \begin{table}[!h] \centering \caption{Impact of each algorithm at the final stage of {\tt EnDisCo}~ and {\tt MeDOC}~ to re-cluster vertices. 
The results are reported on the default LFR network with the other default parameter values of the proposed algorithms.}\label{impact_den} \scalebox{0.80}{ \begin{tabular}{|l|c|c|c|c|c|c|} \hline Re-clustering & \multicolumn{4}{c|}{Disjoint} & \multicolumn{2}{c|}{Overlapping} \\\cline{2-7} Algorithm & \multicolumn{2}{c|}{{\tt EnDisCo}} & \multicolumn{2}{c|}{{\tt MeDOC}} & \multicolumn{2}{c|}{{\tt MeDOC}} \\\cline{2-7} & NMI & ARI & NMI & ARI & ONMI & $\Omega$ \\\hline {\tt FstGrdy} & 0.79 & 0.80 & 0.80 & 0.83 & 0.81 & 0.84\\ {\tt Louvain} & 0.82 & 0.84 & 0.83 & 0.86 & 0.82 & 0.83\\ {\tt CNM} & 0.83 & 0.81 & 0.83 & 0.86 & 0.81 & 0.80\\ \rowcolor[HTML]{D3D3D3} {\tt InfoMap} & 0.85 & 0.89 & 0.87 & 0.90 & 0.84 & 0.87 \\ {\tt WalkTrap} & 0.75 & 0.78 & 0.77 & 0.82 & 0.76 & 0.79\\ {\tt LabelPr} & 0.77 & 0.79 & 0.78 & 0.80 & 0.75 & 0.77\\\hline \end{tabular}} \vspace{-6mm} \end{table} \subsubsection{Comparative Evaluation}\label{comp_dis} Table \ref{performance:non}(A) reports the performance of our approaches on all networks using different algorithms in the final step of {\tt EnDisCo}~and {\tt MeDOC}. The numbers denote the relative performance improvement of our algorithms (E:{\tt EnDisCo}\ M:{\tt MeDOC}) w.r.t. a given algorithm when that algorithm is used in the final step. For instance, the first entry in the last row (5.06) means that for the LFR ($\mu=0.1$) network, the accuracy of {\tt EnDisCo}~(when {\tt LabelPr}\ is used for re-clustering in its final step) averaged over NMI and ARI (0.83) is 5.06\% higher than that of {\tt LabelPr}\ (0.79). The actual values are reported in Appendix \cite{si}. The point to take away from this table is that irrespective of which classical CD algorithm we compare against, {\tt EnDisCo}~and {\tt MeDOC}~always improve the quality of communities found. Moreover, we observe on LFR networks that with the deterioration of the community structure (increase of $\mu$), the improvement increases for all the re-clustering algorithms.
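The table entries are relative improvements, so the worked example above reduces to a one-line formula; this sketch only reproduces the quoted 5.06\% figure from the stated averages.

```python
def relative_improvement(ours, baseline):
    # Percentage improvement of our averaged accuracy over a baseline's.
    return 100.0 * (ours - baseline) / baseline

# Worked entry from the text: EnDisCo with LabelPr re-clustering (0.83)
# versus LabelPr alone (0.79) on LFR (mu = 0.1).
print(round(relative_improvement(0.83, 0.79), 2))  # -> 5.06
```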
Further, Table \ref{performance:non}(B) shows the average improvement of {\tt EnDisCo}~and {\tt MeDOC}~when compared against Consensus Clustering ({\tt ConsCl}). We see that for disjoint networks, both {\tt EnDisCo}~and {\tt MeDOC}~beat {\tt ConsCl}, with {\tt MeDOC}~coming out on top. \section{Results of Overlapping Community Detection} \subsection{Datasets} We again use the LFR benchmark to generate synthetic networks with overlapping community structure with the following default parameter settings as mentioned in \cite{oslom,Gregory1}: $n=10000$, $\bar k=50$, $k_{max}=150$, $\mu=0.3$, $O_n=20\%$, $O_m=20$, $c_{max}=100$, $c_{min}=20$. We generate $50$ LFR networks for each parameter configuration --- the reported results are averaged over these $50$ networks. We further vary $\mu$ ($0.1$-$0.8$ in increments of $0.05$), $O_m$ and $O_n$ (both from $15\%$ to $30\%$ in increments of $1\%$) depending upon the experimental need. We also run experiments with six real-world datasets mentioned in Table \ref{dataset}(b) (see details in Appendix \cite{si}). \subsection{Baseline Algorithms} We compare {\tt MeDOC}~ with the following state-of-the-art overlapping community detection algorithms: (i) {\em Local expansion:} OSLOM \cite{oslom}, EAGLE \cite{Shen}; (ii) {\em Agent-based dynamical algorithms:} COPRA \cite{Gregory1}, SLPA \cite{Xie}; (iii) {\em Detection using mixture model:} MOSES \cite{moses}, BIGCLAM \cite{Leskovec}. \subsection{Evaluation Metrics} We use the following evaluation metrics to compare the results with the ground-truth community structure: (a) Overlapping Normalized Mutual Information ($ONMI$) \cite{journals}, (b) Omega ($\Omega$) Index \cite{Leskovec} (details in Appendix \cite{si}). \begin{table*}[!t] \begin{center} \caption{Accuracy of all the competing algorithms in detecting the overlapping community structure from both synthetic and real-world networks.
All the disjoint algorithms are used to create the multipartite network and {\tt MeDOC}~ is run with its default parameter setting. } \label{performance:over} \scalebox{0.7}{ \begin{tabular}{|l|cccccccc|c|cccccccccccccc|c|} \hline \multirow{2}{*}{Algorithm} & \multicolumn{8}{c|}{{\bf Synthetic Networks}} & &\multicolumn{14}{c|}{{\bf Real-world Networks}} \\ \multirow{2}{*}{ } & \multicolumn{2}{c}{LFR ($\mu=0.1$)} & &\multicolumn{2}{c}{LFR ($\mu=0.3$)} & &\multicolumn{2}{c|}{LFR ($\mu=0.6$)} & &\multicolumn{2}{c}{Senate} & &\multicolumn{2}{c}{Flickr} & &\multicolumn{2}{c}{Coauthorship} & &\multicolumn{2}{c}{LiveJournal} & &\multicolumn{2}{c|}{Orkut} \\ \cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12} \cline{14-15} \cline{17-18} \cline{20-21} \cline{23-24} & ONMI & $\Omega$ & & ONMI & $\Omega$ & & ONMI & $\Omega$& & ONMI & $\Omega$& & ONMI & $\Omega$& & ONMI & $\Omega$& & ONMI & $\Omega$& & ONMI & $\Omega$ \\ \cline{1-9} \cline{11-24} {\tt OSLOM} & 0.80 &0.78&& 0.74& 0.78&& 0.72& 0.73&& 0.71& 0.73 &&0.68& 0.74&& 0.70& 0.71&& 0.73& 0.75&& 0.71 &0.76 \\ {\tt EAGLE} & 0.81 &0.83 &&0.75& 0.76&& 0.70 &0.74&& 0.73 &0.74&& 0.69 &0.76&& 0.71& 0.74&& 0.74& 0.76&& 0.70 &0.77 \\ \COPRA & 0.80 &0.81&& 0.76& 0.74&& 0.72& 0.74&& 0.74 &0.77&& 0.73& 0.78&& 0.75& 0.79&& 0.76& 0.82&& 0.74& 0.76 \\ {\tt SLPA} & 0.84& 0.86&& 0.78& 0.77&& 0.76& 0.77&& 0.74& 0.76&& 0.72& 0.74&& 0.76& 0.77&& 0.78& 0.85&& 0.75& 0.79\\ {\tt MOSES} & 0.85& 0.86 &&0.80& 0.81&& 0.75& 0.78&& 0.75& 0.78&& 0.74& 0.76&& 0.79& 0.78&& 0.81& 0.82&& 0.78& 0.82 \\ {\tt BIGCLAM} & 0.86 &0.85 &&0.81& 0.83&& 0.77& 0.79&& 0.76& 0.79&& 0.75& 0.76&& 0.80& 0.84&& 0.84& 0.87&& 0.81& 0.84 \\ {\tt MeDOC} & \cellcolor{gray!25} 0.88& \cellcolor{gray!25}0.91 &&\cellcolor{gray!25}0.84& \cellcolor{gray!25}0.87&& \cellcolor{gray!25}0.82& \cellcolor{gray!25}0.84&& \cellcolor{gray!25}0.81& \cellcolor{gray!25}0.85&& \cellcolor{gray!25}0.79& \cellcolor{gray!25}0.84&& \cellcolor{gray!25}0.82& \cellcolor{gray!25}0.86&& 
\cellcolor{gray!25}0.86& \cellcolor{gray!25}0.88&& \cellcolor{gray!25}0.83& \cellcolor{gray!25}0.86 \\ \hline \hline \end{tabular}} \end{center} \vspace{-8mm} \end{table*} \subsection{Experimental Results} \subsubsection{Parameter Settings} We first try to identify the best parameter settings for {\tt MeDOC}. These include: matching function $W$, association function $\mathcal{F}$, number of iterations $K$ and threshold $\tau$. Figure \ref{parameter_over} shows the results (on LFR networks) by varying $\mu$, $O_m$ and $O_n$. We observe that the Jaccard coefficient as the matching function and the weighted association measure are better than their alternatives. The choice of $K$ is the same as shown in Figure \ref{parameter_dis}(f) -- accuracy levels off at $K=0.2|V|$, and therefore we skip this result in the interest of space. We experiment with two choices of thresholding: the top 5\% and 10\% most probable communities per vertex, and compare with the threshold selection mechanism described in Section \ref{algo2:Parameter}. Figures \ref{parameter_over}(g)-\ref{parameter_over}(i) show that irrespective of the network parameter selection, our threshold selection method always outperforms the others. As shown in Table \ref{impact_den}, {\tt InfoMap}~ seems to be the best choice for the re-clustering algorithm. Therefore, in the rest of the experiments, we run {\tt MeDOC}~with $K=0.2|V|$, $JC$, $F_w$, {\tt InfoMap}~ and $\tau$ (selected by our method). \begin{figure} \centering \scalebox{0.2}{ \includegraphics{parameter_over.eps}} \caption{Dependencies of {\tt MeDOC}~ on different algorithmic parameters. The results are reported on default overlapping LFR networks by varying three parameters $\mu$, $O_m$ and $O_n$. For thresholding, we choose the top 5\% and 10\% most probable communities for each vertex and compare with our threshold selection method. The value corresponding to one parameter is reported by averaging the values for all possible combinations of the other parameters.
The results are statistically significant.}\label{parameter_over} \vspace{-8mm} \end{figure} \subsubsection{Impact of Base Algorithms for Overlapping CD} The impact of the base algorithms on {\tt MeDOC}'s performance is similar to what we saw in the disjoint CD case. The results in Table \ref{impact} show that accuracy decreases most when we drop {\tt InfoMap}~from the set of base algorithms, followed by {\tt Louvain}~ and {\tt CNM}~ (see more results in Appendix \cite{si}). \subsubsection{Comparative Evaluation} We ran {\tt MeDOC}~with the default setting on three LFR networks and five real-world networks. The performance of {\tt MeDOC}~is compared with the six baseline overlapping community detection algorithms. Table \ref{performance:over} shows the performance of the competing algorithms in terms of ONMI and the $\Omega$ index. In all cases, {\tt MeDOC}~is a clear winner, winning by significant margins. The absolute average of ONMI ($\Omega$) for {\tt MeDOC}~ over all networks is 0.83 (0.86), which is 3.58\% (4.39\%) higher than {\tt BIGCLAM}, 5.90\% (7.49\%) higher than {\tt MOSES}, 8.31\% (9.19\%) higher than {\tt SLPA}, 10.67\% (10.95\%) higher than \COPRA, 13.89\% (12.95\%) higher than {\tt EAGLE}, and 14.68\% (15.21\%) higher than {\tt OSLOM}. Another interesting observation is that the performance improvement becomes more prominent as the community quality deteriorates. For instance, the improvement of {\tt MeDOC}~w.r.t. the best baseline algorithm ({\tt BIGCLAM}) is 2.32\% (7.06\%), 3.70\% (4.82\%) and 6.49\% (6.33\%) in terms of ONMI ($\Omega$) for $\mu=0.1$, $0.3$ and $0.6$ respectively. This once again corroborates our earlier observations in Section \ref{comp_dis} for disjoint communities. \begin{figure}[!h] \centering \scalebox{0.2}{ \includegraphics{runtime.eps}} \caption{The value of $\Theta$ w.r.t. the number of vertices in LFR networks. {\tt EnDisCo}~and {\tt MeDOC}~ are compared with {\tt ConsCl}.
The results are statistically significant (since there are multiple curves, we report the range of $p$-values).}\label{runtime} \vspace{-3mm} \end{figure} \vspace{-3mm} \section{Runtime Analysis} Since ensemble approaches require running all base algorithms (which may be parallelized), one cannot expect ensemble methods to be faster than baseline approaches. However, our proposed ensemble frameworks are much faster than existing ensemble approaches such as consensus clustering. To show this, for each ensemble algorithm, we report $\Theta$, the ratio between the runtime of each ensemble approach and the sum of runtimes of all base algorithms, with increasing number of vertices in LFR. We vary the number of edges of LFR by changing $\mu$ from $0.1$ to $0.3$. Figure \ref{runtime} shows that our algorithms are much faster than consensus clustering. We further report the results of {\tt MeDOC}~for overlapping community detection, which are almost the same as in the disjoint case since it does not require additional steps apart from computing the threshold. \vspace{-3mm} \section{Degeneracy of Solutions} CD algorithms suffer from the problem of ``degeneracy of solutions'' \cite{good2010}, which states that an optimization algorithm can produce exponentially many solutions with (nearly-)similar optimal values of the objective function (such as modularity); however, the solutions may be structurally distinct from each other. Figure \ref{example} showed how {\tt InfoMap}~ produces many outputs for different vertex orderings of the Football network. We test this by considering the default LFR network and one real-world network (Appendix \cite{si} shows results on more real-world networks) and run the algorithms on $100$ different vertex orderings of each network. We then measure the pair-wise similarity of the solutions obtained from each algorithm.
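The pair-wise similarity measurement can be sketched as follows; here a simple Rand index stands in for the NMI/ONMI measures actually used, and the label vectors are illustrative rather than taken from our experiments.

```python
from itertools import combinations

def rand_index(a, b):
    # Fraction of vertex pairs on which two partitions agree:
    # both place the pair in one community, or both separate it.
    n = len(a)
    agree = sum(1 for i, j in combinations(range(n), 2)
                if (a[i] == a[j]) == (b[i] == b[j]))
    return agree / (n * (n - 1) / 2)

# Hypothetical community labels from two runs with different vertex orderings.
run1 = [0, 0, 0, 1, 1, 2]
run2 = [1, 1, 0, 0, 0, 2]
print(rand_index(run1, run2))
```

Because only pair co-membership matters, the index is invariant under relabelling of communities, so it directly measures structural (dis)agreement between solutions.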
The box plots in Figure \ref{boxplot} show the variation of the solutions for {\tt EnDisCo}, {\tt MeDOC}~and the best baseline algorithm in both disjoint and overlapping community detection. We observe that the median similarity is high with {\tt EnDisCo}~and {\tt MeDOC}~ and the variation is comparatively small. These results suggest that our algorithms provide more robust results than past work and alleviate the problem of degeneracy of solutions. \vspace{-3mm} \begin{figure}[!h] \centering \scalebox{0.205}{ \centering \includegraphics{boxplot1.eps}} \caption{Box plots indicating the variation of the solutions obtained from {\tt EnDisCo}, {\tt MeDOC}~ and the best baseline algorithm for (a) disjoint and (b) overlapping CD on one synthetic network and one real-world network. }\label{boxplot} \vspace{-6mm} \end{figure} \section{Conclusion} In this paper, we proposed two general frameworks for ensemble community detection. {\tt EnDisCo}~identifies disjoint community structures, while {\tt MeDOC}~detects both disjoint and overlapping community structures. We tested both algorithms on synthetic data generated by the LFR benchmark and on several real-world datasets that have associated ground-truth community structure. We show that both {\tt EnDisCo}~and {\tt MeDOC}~are more accurate than existing CD algorithms, though of course, {\tt EnDisCo}~and {\tt MeDOC}~leverage them. We further show that for disjoint CD problems, {\tt EnDisCo}~and {\tt MeDOC}~both beat a well-known existing disjoint ensemble method called consensus clustering \cite{lanc12consensus} -- both in terms of accuracy (measured via both Normalized Mutual Information and Adjusted Rand Index) and run-time. To our knowledge, {\tt MeDOC}~is the first ensemble algorithm for overlapping community detection in the literature. In the future, we would like to develop a theoretical explanation for the superiority of ensemble approaches over individual (non-ensemble) models.
Another future direction is to parallelize the ensemble frameworks. We will apply the proposed methods to identify communities in specific datasets, such as malware traces, protein interaction networks, etc. \nop{ In this paper, we proposed two frameworks to aggregate multiple community structures obtained by running different disjoint CD algorithms. Both frameworks turned out to be superior to past non-ensemble based approaches as well as a recently proposed ensemble approach. We showed how one can leverage disjoint community information to discover the overlap in the community structure. We presented the suitable functions needed in the ensemble process and showed an automated way of selecting the threshold. In that sense, our algorithms do not require manual intervention for parameter tuning. As it turned out, the choice of base algorithm has no major impact on the clustering quality, but it does in selecting an algorithm for re-clustering. In the future, we would like to develop a theoretical explanation for the superiority of ensemble approaches over individual models. In the traditional ensemble-based classification problem, it has already been shown that an aggregation of several weak base algorithms performs well even though each weak algorithm is merely better than random guessing, i.e., the probability of correct classification is $\frac{1}{|C|} + \epsilon$, where $\epsilon$ is a small non-zero value and $|C|$ is the number of possible classes \cite{Freund:1995:BWL:220262.220446,Schapire:1990:SWL:83637.83645}. In the case of community detection, one can think of weak base algorithms that find the correct community for a vertex with probability $\frac{1}{|C|} + \epsilon$ (boosting condition).
After a series of boosting procedures, e.g., boosting by majority voting, the small $\epsilon$ value would drive the probability that the community structure is correctly identified towards a much larger value than $\frac{1}{|C|} + \epsilon$. Of course, there exists a certain probability that the majority vote may not work correctly. If the boosting condition is guaranteed for every vertex and the number of weak algorithms is large enough, taking a majority vote may produce stable results. In many practical cases, however, we found that the community detection for a subset of vertices is significantly inaccurate for a majority of weak algorithms, and thus the boosting condition might not be guaranteed for all vertices. We leave this line of research as a future agenda. } \vspace{-3mm} \section*{Acknowledgment} Parts of this work were funded by ARO Grants W911NF-16-1-0342, W911NF1110344, W911NF1410358, by ONR Grant N00014-13-1-0703, and Maryland Procurement Office under Contract No. H98230-14-C-0137. \vspace{-3mm}
\section{Introduction} We consider the asymptotic limit $n \rightarrow \infty$ of the first discrete Painlev\'e equation (dP$_\mathrm{I}$) \begin{equation}\label{0:dPI} w_{n+1}+w_n + w_{n-1} = \frac{\alpha n + \beta}{w_n} + \gamma, \end{equation} and apply methods of exponential asymptotics to describe solutions which demonstrate asymptotically pole-free behaviour within certain sectors. This nonlinear equation also appears in the study of orthogonal polynomials, where it is known as the Freud equation \cite{Duits1,Magnus2,Magnus1}. Previous studies \cite{Bernardo1,Joshi2,Vereshchagin1} considered the far-field behaviour of solutions to dP$_\mathrm{I}$, finding the asymptotic solution in terms of elliptic functions. Joshi \cite{Joshi2} and Vereshchagin \cite{Vereshchagin1} also determined discrete analogues of the asymptotically pole-free solution behaviours found by Boutroux for the continuous first Painlev\'e equation (P$_\mathrm{I}$). In this paper, we find the full formal series solutions including exponentially-small terms, study the influence of Stokes phenomena on these solutions, and deduce their regions of validity. The resultant asymptotic solutions share several features with the asymptotically pole-free solutions to P$_{\mathrm{I}}$ known as tronqu\'ee and tritronqu\'ee solutions. Equation \eqref{0:dPI} tends to P$_{\mathrm{I}}$ in the scaling limit $w_n = 1-h(2y(x)-c/2)$, $x=\sqrt{h} n - b/2 - c^2/24$, with $\alpha = h^{3/2}a$, $\beta = -\gamma + 3+ h^2 b$, $\gamma = 6 + h c$, as $h \rightarrow 0$.
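For orientation, \eqref{0:dPI} can be iterated forward numerically once two initial values are supplied. The sketch below (with arbitrary illustrative parameters and initial data, not tied to any particular solution studied later) rearranges the equation for $w_{n+1}$ and checks that each computed triple satisfies it.

```python
# Forward iteration of the first discrete Painleve equation,
#   w_{n+1} + w_n + w_{n-1} = (alpha*n + beta)/w_n + gamma,
# rearranged for w_{n+1}.  Parameters and initial data are illustrative only.
alpha, beta, gamma = 1.0, 0.0, 0.0

def iterate_dP1(w0, w1, num_steps):
    w = [w0, w1]
    for n in range(1, num_steps - 1):
        w.append((alpha * n + beta) / w[n] + gamma - w[n] - w[n - 1])
    return w

def residual(w, n):
    # How well the triple (w_{n-1}, w_n, w_{n+1}) satisfies the equation,
    # normalised so rounding error stays at machine-epsilon level.
    lhs = w[n + 1] + w[n] + w[n - 1]
    rhs = (alpha * n + beta) / w[n] + gamma
    scale = 1.0 + abs(w[n + 1]) + abs(w[n]) + abs(w[n - 1]) + abs(rhs)
    return abs(lhs - rhs) / scale

w = iterate_dP1(0.5, 0.7, 40)
print(max(residual(w, n) for n in range(1, len(w) - 1)))
```

Generic orbits generated this way oscillate rather than settle down, consistent with the elliptic-function far-field behaviour found in the earlier studies cited above.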
P$_{\mathrm{I}}$ is one of the six Painlev\'e equations which play a key role in a range of physical problems, in particular arising as universal models in reductions of the nonlinear Schr{\"o}dinger equation \cite{Dubrovin1}, the Korteweg-de Vries equation \cite{Hastings1, Rosales1} and the Boussinesq equation \cite{Clarkson1}, and appearing as models in fields as diverse as statistical mechanics \cite{McCoy1}, nonlinear optics \cite{Giannini1}, electrochemistry \cite{Bass1} and plasma physics \cite{Hastings1}. The first discrete Painlev\'e equation also arises in physical applications, such as the study of quantum gravity \cite{Fokas1, Fokas2}. Motivated by these applications, the study of the Painlev\'e equations in asymptotic limits has been widely pursued. However, the corresponding study of discrete Painlev\'e equations remains sparse. In this paper, we apply techniques of exponential asymptotics for differential-difference and difference equations, developed by King and Chapman \cite{King4} in their analysis of the Frenkel-Kontorova model for atomic lattices, to dP$_{\mathrm{I}}$, in order to obtain asymptotic approximations which are accurate up to exponentially-small correction terms. We note that there have been some previous investigations into exponential asymptotics for difference equations, such as those of \cite{Daalhuis2,Olver1}. The study by Olde Daalhuis \cite{Daalhuis2} performed a Borel-plane analysis to consider the behaviour of second-order linear difference equations and to determine exponentially-improved solutions in terms of an inverse factorial series. In fact, these Borel-plane methods could be applied to the current investigation, and controlled error estimates for the asymptotics could be obtained in this fashion.
The primary reason for using the Stokes-smoothing technique described in \cite{King4} is that the analysis may be performed directly on the nonlinear difference equation, rather than requiring the solution to be formulated in terms of an integral expression; this feature permits the analysis to be easily extended to a wide range of more complicated problems. \subsection{Background} The asymptotic behaviour of solutions to the Painlev\'e equations was studied by Boutroux \cite{Boutroux1}, who considered the behaviour of the first Painlev\'e equation (P$_\mathrm{I}$), \begin{equation*}\label{0:PI} \diff{^2y}{x^2} = 6y^2 + x, \end{equation*} in the limit $|x| \rightarrow \infty$. Boutroux identified special solutions which are asymptotically free of poles in particular sectors of the complex plane of angular width $4\pi/5$, and furthermore discovered five unique solutions that are pole-free in sectors of angular width $8\pi/5$. These solutions are known as \textit{tronqu\'ee} and \textit{tritronqu\'ee} solutions respectively, and tend to $w\sim \pm\sqrt{-x/6}$ as $|x| \rightarrow \infty$ within the asymptotically pole-free sectors. In \cite{Joshi3}, the real {tritronqu\'ee} solution was shown to have no poles on the whole semi-axis $x < 0$. Several asymptotic studies of P$_\mathrm{I}$ have been carried out in \cite{Joshi1, Kapaev1, Kitaev1}. These asymptotic approximations have played essential roles in many of the physical problems described earlier \cite{Dubrovin1, Bass1}. \subsection{Discrete Painlev\'e equations} In this study, we are interested in the asymptotic behaviour of solutions of the first discrete Painlev\'e equation. The discrete Painlev\'e equations are a set of integrable nonlinear difference equations, which tend to the ordinary Painlev\'e equations when the continuum limit is taken. The discrete Painlev\'e equations first appeared in the study of solutions to the continuous Painlev\'e equations. 
It was discovered that these equations possess B\"acklund transformations, which map one solution to another solution with different parameter values, or even to a solution of a different Painlev\'e equation (see for example \cite{Clarkson1, Gromak1}). These transformations have been used to construct hierarchies of solutions to the Painlev\'e equations, often in terms of rational or special functions. It was noted by Fokas, Its and Kitaev \cite{Fokas1,Fokas2} that dP$_{\mathrm{I}}$ arises in the study of B\"acklund transformations associated with the fourth Painlev\'e equation (P$_{\mathrm{IV}}$). This equation has two parameters, $a$ and $b$, and is given by \begin{equation*} \diff{^2w}{z^2} = \frac{(w')^2}{2w} + \frac{3w^2}{2} + 4zw^2 + 2(z^2-a)w + \frac{b}{w}. \end{equation*} If we associate a choice of parameters $(a_n, b_n)$ of P$_{\mathrm{IV}}$ with a solution denoted by $w_n(z)$, a hierarchy of solutions may be found corresponding to parameter choices \begin{equation}\label{0:P4par} a_n = - \tfrac{3}{2}\mu(-1)^n + \tfrac{1}{2}n,\qquad \sqrt{-2b_n} = \mu(-1)^n + n, \end{equation} where $\mu$ is an arbitrary complex-valued constant. The recurrence relation for $w_n$ is subsequently given by \begin{equation}\label{0:P4dP1} (w_{n+1} + w_n + w_{n-1})w_n = -2zw_n - n + \mu, \end{equation} which is simply a version of dP$_{\mathrm{I}}$ with appropriately chosen parameters. A variant of this relationship was used by \cite{Bassom1} to describe hierarchies of solutions to P$_{\mathrm{IV}}$ in terms of parabolic cylinder functions, and in \cite{Clarkson1} to construct solutions in terms of rational functions, as well as Okamoto and Hermite polynomials. By studying the asymptotic behaviour of the discrete Painlev\'e equations, we are able to better understand the behaviour of these solution families.
The first discrete Painlev\'e equation has also featured in the study of problems arising outside of the field of integrable systems, particularly relating to the study of orthogonal and orthonormal polynomials \cite{Magnus2, Magnus1}. A set of orthonormal polynomials $p_n(x)$, where $n \geq 0$, is defined with respect to some weight function $w(x)$ as satisfying \begin{equation}\label{0:pnw} \int_{-\infty}^{\infty} p_n(x) p_m(x) w(x) \mathrm{d} x = \delta_{m,n}, \end{equation} where $\delta_{m,n} = 1$ if $m = n$ and is equal to zero otherwise. In general, these polynomials satisfy a recurrence relation \begin{equation*} x p_n(x) = a_{n+1}p_{n+1}(x) + b_n p_n(x) + a_n p_{n-1}(x), \end{equation*} where $a_n$ and $b_n$ themselves satisfy particular recurrence relations, described for example in \cite{Magnus2}. In the case of orthonormal polynomials with respect to the Freud weight, $w(x) = \exp(- \kappa x^4 - \mu x^2)$, we find that $b_n = 0$ (this holds for all even weight functions), and that $u_n = a_n^2$ satisfies \begin{equation}\label{0:pndPI} 4\kappa u_n(u_{n-1} + u_n + u_{n+1}) + 2\mu u_n - n = 0, \end{equation} which, again, is dP$_{\mathrm{I}}$, with appropriately chosen parameters. By analyzing the solutions to dP$_{\mathrm{I}}$, we can determine the behaviour of families of orthogonal polynomials, even in the limit that their degree becomes large. It is this relationship to orthonormal polynomials that underpins the appearance of dP$_{\mathrm{I}}$ in the study of quantum gravity models \cite{Fokas1, Fokas2}. We also note in passing that the variation of $w_n$ with respect to the parameters $\kappa$ and $\mu$ can be computed directly from the fourth continuous Painlev\'e equation with appropriate choice of parameters \cite{Magnus2}, thus confirming the existence of another connection between dP$_{\mathrm{I}}$ and P$_{\mathrm{IV}}$. 
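This connection can be checked numerically. The sketch below computes the recurrence coefficients $u_n = a_n^2$ for the Freud weight with $\kappa = 1$, $\mu = 0$ from Hankel determinants of the moments, using the standard formula $u_n = D_{n+1}D_{n-1}/D_n^2$ for monic orthogonal polynomials, and verifies that they satisfy \eqref{0:pndPI}.

```python
import math

def moment(k):
    # Moments of the Freud weight w(x) = exp(-x^4) (kappa = 1, mu = 0):
    # odd moments vanish by symmetry; even ones reduce to Gamma values.
    if k % 2 == 1:
        return 0.0
    return 0.5 * math.gamma((k + 1) / 4.0)

def det(M):
    # Determinant by Gaussian elimination with partial pivoting.
    A = [row[:] for row in M]
    n = len(A)
    d = 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        if A[p][c] == 0.0:
            return 0.0
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
    return d

def hankel_det(n):
    # D_n = det(mu_{i+j}) for 0 <= i, j < n, with D_0 = 1.
    if n == 0:
        return 1.0
    return det([[moment(i + j) for j in range(n)] for i in range(n)])

def u(n):
    # u_n = a_n^2 = D_{n+1} D_{n-1} / D_n^2; u_0 = 0 by convention.
    if n == 0:
        return 0.0
    return hankel_det(n + 1) * hankel_det(n - 1) / hankel_det(n) ** 2

# String equation for this weight: 4 u_n (u_{n-1} + u_n + u_{n+1}) = n.
for n in range(1, 5):
    print(n, 4.0 * u(n) * (u(n - 1) + u(n) + u(n + 1)))
```

In exact arithmetic the printed quantity equals $n$; in floating point the residual is at the level of the determinant round-off.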
In recent decades, however, discrete integrable systems such as dP$_{\mathrm{I}}$ have become objects of study in themselves. In part, this is due to these equations being convenient examples of nonlinear difference equations which, due to their integrability, are more tractable than many equations within this class. Hence, they provide a useful testing ground for mathematical techniques intended for wider applicability. The primary motivation for studying these equations, however, is to better understand their solutions. These equations are important discretizations of the continuous Painlev\'e equations; their particular significance is due to the fact that they retain the integrability property upon discretization. They arise as independent entities in applications and possess interesting solutions, whose properties permit us to understand the behaviour of classes of functions, such as the orthonormal polynomials described above. While addressing such questions about all solutions lies beyond the scope of the present study, it motivates our interest in understanding the asymptotic behaviour of the discrete Painlev\'e equations. \subsection{Exponential asymptotics and Stokes lines}\label{Method} The formal series we study in this paper are shown to be divergent (see Section \ref{dPI_geq0}) and contain exponentially-small terms that are invisible to conventional asymptotic approaches. The central idea of exponential asymptotic methods is that a divergent asymptotic series, when truncated optimally, produces an error term that is exponentially small in the asymptotic limit \cite{Boyd1}. The problem may then be rescaled to directly determine the behaviour of this remainder term, allowing the exponentially-small component of the solution to be determined separately from the asymptotic series itself.
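The effect of optimal truncation can be seen in a model example. The sketch below (the classical Stieltjes series, not the dP$_\mathrm{I}$ series itself) compares the error of the divergent expansion $\int_0^\infty e^{-t}/(1+xt)\,\mathrm{d}t \sim \sum_m (-1)^m m!\, x^m$ truncated near the optimal point $m \approx 1/x$ against an over-extended truncation.

```python
import math

X = 0.1  # small parameter; optimal truncation is near m = 1/X terms

def f_true(x, t_max=50.0, steps=200_000):
    # Composite trapezoid rule for f(x) = int_0^inf exp(-t)/(1 + x t) dt;
    # the neglected tail beyond t_max is exponentially small.
    dt = t_max / steps
    total = 0.5 * (1.0 + math.exp(-t_max) / (1.0 + x * t_max))
    for i in range(1, steps):
        t = i * dt
        total += math.exp(-t) / (1.0 + x * t)
    return total * dt

def partial_sum(x, terms):
    # Divergent asymptotic series: sum_{m=0}^{terms-1} (-1)^m m! x^m.
    return sum((-1) ** m * math.factorial(m) * x ** m for m in range(terms))

exact = f_true(X)
err_optimal = abs(partial_sum(X, 10) - exact)   # truncate near m = 1/X
err_too_many = abs(partial_sum(X, 25) - exact)  # keep divergent tail terms
print(err_optimal, err_too_many)
```

Truncating near $m = 1/X$ leaves an error of the order of $\mathrm{e}^{-1/X}$ (up to an algebraic prefactor), whereas retaining further terms lets the factorial growth destroy the approximation.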
This idea was introduced by Berry \cite{Berry3,Berry1,Berry4}, and Berry and Howls \cite{Berry5}, who used these methods to determine the behaviour of special functions such as the Airy function. As we uncover this exponentially-small behaviour, we will discover two varieties of important curves, identified originally by Stokes \cite{Stokes1} and known as \textit{Stokes lines} and \textit{anti-Stokes lines}. These curves relate to the behaviour of exponentially-small components of the solutions. As Stokes lines are crossed, the exponentially-small behaviour experiences a smooth, rapid change in value in the neighbourhood of the curve itself; in many cases, the behaviour may switch off entirely on one side of the Stokes line. Hence, Stokes lines allow us to determine which exponentially-small contributions may be present in certain regions of the plane. Anti-Stokes lines determine directions where exponentially-small behaviour changes to become exponentially large, and therefore mark the boundaries of the region in which the asymptotic series expression for the solution is valid. In this investigation, we will apply an exponential asymptotic method developed by Olde Daalhuis \textit{et al.} \cite{Daalhuis1} for linear differential equations, and extended by Chapman \textit{et al.} \cite{Chapman1} for application to nonlinear differential equations. We provide a brief outline of the process here; a more detailed explanation of the methodology may be found in these studies. To optimally truncate an asymptotic series, we require a general form for the asymptotic series coefficients, which is often algebraically intractable to obtain. In singular perturbation problems, Dingle \cite{Dingle1} noted that successive terms in the asymptotic series expansion are obtained by repeated differentiation of an earlier term in the series.
Hence, the late-order terms $a_m$ of the resultant asymptotic series typically diverge as the ratio between a factorial and the increasing power of a function $\chi$, such as \begin{equation}\label{ch1:ansatz} a_m \sim \frac{A \Gamma(m+\gamma)}{\chi^{m+\gamma}} \qquad \mathrm{as} \quad m \rightarrow \infty, \end{equation} where $\Gamma$ is the gamma function defined in \cite{Abramowitz1}, and $A$, $\gamma$ and $\chi$ are functions that do not depend on $m$. In particular, the \textit{singulant} $\chi$ is equal to zero when the leading-order behaviour is singular, ensuring that this singularity is present in all higher-order terms. Chapman \textit{et al.} \cite{Chapman1} therefore propose a factorial-over-power ansatz (\ref{ch1:ansatz}) for the late-order terms, which may then be used to optimally truncate the asymptotic expansion. The next step, described by Olde Daalhuis \textit{et al.} \cite{Daalhuis1}, is to substitute the truncated series expression back into the original problem, obtaining an equation for the exponentially-small remainder term after truncation. The positions of Stokes lines are found by determining the curves along which the remainder term varies rapidly, and the behaviour of this remainder is found by solving the remainder equation in the neighbourhood of these Stokes lines. Away from the Stokes line, the governing behaviour is obtained by solving the homogeneous version of the remainder equations, generally giving an expression of the form $A\mathrm{e}^{-\chi/\epsilon}$ (this is a consequence of the analysis in Chapman \textit{et al.} \cite{Chapman1}, although it is also shown as a part of the Stokes smoothing analysis contained herein). Stokes switching will occur when the switching exponential is maximally subdominant; this occurs when the singulant is purely real and positive. Anti-Stokes lines may be found when the exponential term switches from being small to large; this occurs when the singulant is purely imaginary.
These are therefore convenient conditions to determine the possible location of Stokes and anti-Stokes lines, as observed by Dingle \cite{Dingle1}, and we apply them to the series solutions of dP$_{\mathrm{I}}$. \subsection{Paper outline} In Section 2, we find the full formal series expansions of the (pole-free) solutions of dP$_\mathrm{I}$ and provide recurrence relations for the coefficients. It is important to note that we find two types of series solutions, called Type A and B solutions. We also describe the late-order terms in these series. We determine the Stokes structures of the Type A solutions in Section 3, which involves descriptions of the exponentially small terms in the solutions. These are used to find the regions in which the solutions are pole-free. We carry out the analysis of Type B solutions in Section 4. Finally, we discuss the results and conclusions of the paper in Section 5. \section{Asymptotic series expansions}\label{dPI_geq0} In this section, we expand the solution as a power series in the limit $n \rightarrow \infty$, and determine a recurrence relation for the coefficients of the series. Finally, we obtain a general expression for the late-order terms of the series. Consider the first discrete Painlev\'e equation: \begin{equation*} w_{n+1}+w_n+w_{n-1} = \frac{\alpha n + \beta}{w_n} + \gamma. \end{equation*} Simply scaling into the far field would require the behaviour to be smooth in this limit; however, we also wish to consider asymptotically pole-free solutions that allow for the odd and even terms to vary rapidly. Hence, we consider separately the odd and even terms in the series, such that \begin{equation}\label{2:yz} w_n = \Bigg\{ \begin{array}{ll} y_n & \mbox{if } n \textrm{ even} \\ z_n & \mbox{if } n \textrm{ odd} \end{array} , \end{equation} where $y_n$ and $z_n$ do not necessarily have the same asymptotic behaviour as $n \rightarrow \infty$.
The dP$_\mathrm{I}$ equation may now be represented as a system, giving \begin{subequations} \begin{align}\label{1:PL1a} (z_{n+1}+y_n+z_{n-1})y_n &= {\alpha n + \beta + \gamma y_n}, \qquad n \textrm{ even}\\ (y_{n+1}+z_n+y_{n-1})z_n &= {\alpha n + \beta + \gamma z_n}, \qquad n \textrm{ odd}. \label{1:PL1b} \end{align} \end{subequations} In subsequent analysis, we regard $n$ as a complex variable, with the iteration $n\mapsto n+1$ acting on domains, and take the initial value to be prescribed by a function $f(n)$ on a domain that includes the initial point. This is a standard interpretation \cite{Spiegel1,Watson1}, and underlies the theory of difference calculus. This allows us to analytically continue the solution in the complex $n$-plane. We now apply the method of multiple scales in terms of the fast variable $n$, and a slow variable $s = \epsilon n$. We scale $y_n$ and $z_n$ such that \begin{equation}\label{1:scaling} y_n = \frac{u(s,\epsilon)}{\epsilon^{1/2}}, \qquad z_n = \frac{v(s,\epsilon)}{\epsilon^{1/2}}, \end{equation} which gives the rescaled equations \begin{subequations} \begin{align}\label{1:PL1scaleda} (v(s+\epsilon,\epsilon) + u(s,\epsilon) + v(s-\epsilon,\epsilon))u(s,\epsilon) &= \alpha s + \epsilon \beta + \epsilon^{1/2}\gamma u(s,\epsilon),\\ (u(s+\epsilon,\epsilon) + v(s,\epsilon) + u(s-\epsilon,\epsilon))v(s,\epsilon) &= \alpha s + \epsilon \beta + \epsilon^{1/2}\gamma v(s,\epsilon).
\label{1:PL1scaledb} \end{align} \end{subequations} We assume that $u(s,\epsilon)$ and $v(s,\epsilon)$ are analytic in the variable $s$, and expand the shifted terms in (\ref{1:PL1scaleda})-(\ref{1:PL1scaledb}) as Taylor series about $s$ to give the system \begin{subequations} \begin{align}\label{1:PL1expandeda} \left[u(s,\epsilon) + \sum_{j=0}^{\infty} \frac{2\epsilon^{2j} v^{(2j)}(s,\epsilon)}{(2j)!} \right]u(s,\epsilon) &= \alpha s + \epsilon \beta + \epsilon^{1/2} \gamma u(s,\epsilon),\\ \left[v(s,\epsilon) + \sum_{j=0}^{\infty} \frac{2\epsilon^{2j} u^{(2j)}(s,\epsilon)}{(2j)!} \right]v(s,\epsilon) &= \alpha s + \epsilon \beta + \epsilon^{1/2}\gamma v(s,\epsilon). \label{1:PL1expandedb} \end{align} \end{subequations} We now expand the solutions $u(s,\epsilon)$ and $v(s,\epsilon)$ as a formal power series in $\epsilon^{1/2}$, such that \begin{equation}\label{1:seriesinf} u(s,\epsilon) \sim \sum_{m=0}^{\infty} \epsilon^{m/2} u_m(s),\qquad v(s,\epsilon) \sim \sum_{m=0}^{\infty} \epsilon^{m/2} v_m(s). \end{equation} Equations (\ref{1:PL1expandeda})-(\ref{1:PL1expandedb}) therefore become \begin{subequations} \begin{align*} \sum_{m=0}^{\infty}\sum_{r=0}^{\infty} \epsilon^{(m+r)/2}u_m u_r + \sum_{j=0}^{\infty} \frac{2\epsilon^{2j}}{(2j)!} \sum_{m=0}^{\infty}\sum_{r=0}^{\infty} \epsilon^{(m+r)/2}u_m v_r^{(2j)} &= \alpha s + \epsilon \beta + \gamma \sum_{m=0}^{\infty} \epsilon^{(m+1)/2} u_m(s),\\ \sum_{m=0}^{\infty}\sum_{r=0}^{\infty} \epsilon^{(m+r)/2}v_m v_r + \sum_{j=0}^{\infty} \frac{2\epsilon^{2j}}{(2j)!} \sum_{m=0}^{\infty}\sum_{r=0}^{\infty} \epsilon^{(m+r)/2}v_m u_r^{(2j)} &= \alpha s + \epsilon \beta + \gamma \sum_{m=0}^{\infty} \epsilon^{(m+1)/2} v_m(s).
\end{align*} \end{subequations} Matching orders of $\epsilon$ as $\epsilon \rightarrow 0$ gives \begin{subequations} \begin{alignat*}{2} \mathcal{O}(\epsilon^0):& \qquad & &u_0(u_0 + 2 v_0) = \alpha s,\\ & & &v_0(v_0 + 2u_0) = \alpha s,\\ \mathcal{O}(\epsilon^{1/2}):& \qquad & &2(u_0u_1 + u_0v_1 + u_1v_0) = \gamma u_0,\\ & & &2(v_0v_1 + v_0u_1 + v_1u_0) = \gamma v_0,\\ \mathcal{O}(\epsilon^1):& \qquad & &u_1^2 +2 u_0 u_2 +2 u_0 v_2 + 2 v_0 u_2 + 2u_1 v_1 = \beta + \gamma u_1,\\ & & & v_1^2 +2 v_0 v_2 +2 v_0 u_2 + 2 u_0 v_2 + 2v_1 u_1 = \beta + \gamma v_1. \end{alignat*} \end{subequations} This gives two sets of solutions. The first of these is given by \begin{equation}\label{1:earlyordersa} u_0 = -v_0 = \pm\sqrt{-\alpha s},\qquad u_1 = v_1 = \frac{\gamma}{2},\qquad u_2 = -v_2 = \mp \frac{4\beta-\gamma^2}{8\sqrt{-\alpha s}}, \end{equation} while the second is \begin{equation}\label{1:earlyordersb} u_0 = v_0 = \pm\sqrt{\frac{\alpha s}{3}},\qquad u_1 = v_1 = \frac{\gamma}{6},\qquad u_2 = v_2 = \pm\frac{(12\beta+\gamma^2)\sqrt{3}}{72\sqrt{\alpha s}}, \end{equation} where the upper and lower choices of sign correspond. In subsequent analysis, we will refer to solutions with the leading-order behaviour given by (\ref{1:earlyordersa}) as Type A, and to those with leading-order behaviour given by (\ref{1:earlyordersb}) as Type B. We see that in both solution types, $u_0$ and $v_0$ contain a singularity at $s=0$, and that this singularity increases in strength by one half in each subsequent term of the asymptotic series. Consequently, we expect that the series will diverge, and exponentially-small Stokes switching behaviour will be present in the solution.
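As a quick consistency check (a sketch, not part of the formal analysis), the $\mathcal{O}(1)$ and $\mathcal{O}(\epsilon^{1/2})$ balances can be verified numerically for both solution types; the values of $\alpha$, $s$ and $\gamma$ below are arbitrary illustrative choices.

```python
import cmath

# Check the O(1) and O(eps^{1/2}) balances for both solution types at an
# arbitrarily chosen point (alpha, s, gamma are illustrative values only).
alpha, s, gamma = 1.3, 0.7 + 0.4j, 0.9

# (u0, v0, u1) for Type A and Type B; in both cases v1 = u1.
cases = [(cmath.sqrt(-alpha * s), -cmath.sqrt(-alpha * s), gamma / 2),
         (cmath.sqrt(alpha * s / 3), cmath.sqrt(alpha * s / 3), gamma / 6)]

for u0, v0, u1 in cases:
    v1 = u1
    assert abs(u0 * (u0 + 2 * v0) - alpha * s) < 1e-12   # O(1) balance for u
    assert abs(v0 * (v0 + 2 * u0) - alpha * s) < 1e-12   # O(1) balance for v
    assert abs(2 * (u0 * u1 + u0 * v1 + u1 * v0) - gamma * u0) < 1e-12
```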
In general, we find \begin{subequations} \begin{alignat}{2}\label{1:orderna} \mathcal{O}(\epsilon^{m}):& \qquad &\sum_{j=0}^{2m} u_j u_{2m-j} + \sum_{j=0}^{{\lfloor}m/2{\rfloor}} \frac{2}{(2j)!}\sum_{r=0}^{2m-4j} u_r v^{(2j)}_{2m-r-4j}& = \gamma u_{2m-1},\\ & & \sum_{j=0}^{2m} v_j v_{2m-j} + \sum_{j=0}^{{\lfloor}m/2{\rfloor}} \frac{2}{(2j)!}\sum_{r=0}^{2m-4j} v_r u^{(2j)}_{2m-r-4j}& = \gamma v_{2m-1}. \label{1:ordernb} \end{alignat} \end{subequations} In order to determine the optimal truncation point, we must determine the form of the terms in the asymptotic series as $m \rightarrow \infty$. We therefore follow \cite{Chapman1}, and define a late-order ansatz in the limit $m \rightarrow \infty$: \begin{equation}\label{1:ansatz} u_{m} \sim \frac{U(s)\Gamma(m/2+k)}{\chi(s)^{m/2+k}},\qquad v_{m} \sim \frac{V(s)\Gamma(m/2+k)}{\chi(s)^{m/2+k}}, \end{equation} where $\chi$ is the singulant, satisfying $\chi(0) = 0$, $U$ and $V$ are the prefactors, and $k$ is a constant. Applying this ansatz to (\ref{1:orderna})-(\ref{1:ordernb}) and matching leading orders of $m$ as $m \rightarrow \infty$ gives \begin{subequations} \begin{align}\label{1:lot1a} 2(u_0U + u_0 V + v_0 U) + 2 u_0 V \sum_{j=1}^{\lfloor m/2 \rfloor}\frac{(-\chi')^{2j}}{(2j)!} &= 0,\\ 2(v_0V + v_0 U + u_0 V) + 2 v_0 U \sum_{j=1}^{\lfloor m/2 \rfloor}\frac{(-\chi')^{2j}}{(2j)!} &= 0. \label{1:lot1b} \end{align} \end{subequations} Continuing to the subsequent order as $m \rightarrow \infty$, we recover $u_1 = v_1 = \gamma/2$ and $u_1 = v_1 = \gamma/6$ for Type A and B solutions respectively, so the equations at this order are automatically satisfied.
At the next order, we find \begin{subequations} \begin{align}\nonumber 2(u_2U + &u_2 V + v_2 U) + 2u_0 V'\sum_{j=1}^{\lfloor m/2 \rfloor}\frac{(-\chi')^{2j-1}}{(2j-1)!}\\&- 2u_0 V \chi''\sum_{j=1}^{\lfloor m/2 \rfloor}\frac{(-\chi')^{2j-2}}{(2j-1)!}\ \label{1:lota2} + 2u_2 V \sum_{j=1}^{\lfloor m/2 \rfloor-1} \frac{(-\chi')^{2j} }{(2j)!} = 0, \\ \nonumber 2(v_2V + &v_2 U + u_2 V) + 2v_0 U'\sum_{j=1}^{\lfloor m/2 \rfloor}\frac{(-\chi')^{2j-1}}{(2j-1)!}\\&- 2v_0 U \chi''\sum_{j=1}^{\lfloor m/2 \rfloor}\frac{(-\chi')^{2j-2}}{(2j-1)!} \label{1:lotb2} + 2v_2 U \sum_{j=1}^{\lfloor m/2 \rfloor-1} \frac{(-\chi')^{2j} }{(2j)!} = 0. \end{align} \end{subequations} To determine the behaviour of the prefactor and singulant terms, we must consider Type A and B solutions separately, and solve (\ref{1:lot1a})-(\ref{1:lot1b}) to find the singulant $\chi(s)$, and then (\ref{1:lota2})-(\ref{1:lotb2}) to find the prefactors $U(s)$ and $V(s)$. \section{Stokes structure of Type A solutions}\label{CASEA} In this section, we solve the late-order term equations for Type A solutions. We use the resultant expressions to determine the Stokes structure, as well as the form of the exponentially-small components of the solution. From these results, we determine the region of the complex plane which is asymptotically pole-free. \subsection{Late-order terms}\label{Section:LOT} The singulant equations given in (\ref{1:lot1a})-(\ref{1:lot1b}) reduce to \begin{equation}\label{LOT:sing} 2 + 2 \sum_{j=1}^{\lfloor m/2 \rfloor}\frac{(-\chi')^{2j}}{(2j)!} = 0. \end{equation} Recalling that we are considering the late-order terms in the limit $m \rightarrow \infty$, the leading-order behaviour of $\chi$ may be found by taking the series to be infinite, introducing error into the singulant behaviour that is only exponentially-small in this limit. This gives \begin{equation}\label{2:eikonal} \cosh(-\chi') = 0. 
\end{equation} The solutions to this expression take the form \begin{equation}\label{2:singulant0} \chi' = \pm\frac{\mathrm{i}\pi}{2} + M\pi\mathrm{i}, \end{equation} where $M$ is any integer. In general, the late-order behaviour of $u_m$ and $v_m$ will be the sum of terms associated with the singulants given in (\ref{2:singulant0}), with each term associated with a particular choice of sign and $M$. However, in a similar fashion to the analysis carried out in \cite{King4}, we note that this sum will be dominated in the $m \rightarrow \infty$ limit by the two terms associated with $M = 0$. Hence, we may restrict our attention to these components in our analysis. Recalling that $u_0$ and $v_0$ are singular at $s=0$, and hence that $\chi(0) = 0$, we find that the relevant singulants are given by \begin{equation}\label{2:singulant} \chi_1 = \frac{\mathrm{i}\pi s}{2},\qquad \chi_2 = -\frac{\mathrm{i}\pi s}{2}. \end{equation} To find the prefactor associated with these singulants, we solve (\ref{1:lota2})-(\ref{1:lotb2}). We take the limit of the summation terms as $m \rightarrow \infty$. Equation (\ref{1:lotb2}) becomes \begin{align*} 2 (v_2 V + u_2 V + v_2 U) + 2v_0 U'\sinh(-\chi') + \frac{2v_0 U \chi''\sinh(-\chi')}{\chi'} + 2v_2 U \left(\cosh(-\chi')-1\right) = 0. \end{align*} Applying the form of the singulant given in (\ref{2:singulant0}), we find that this reduces to $U' = 0$. Hence we set the prefactor $U(s) = \Lambda$, where $\Lambda$ is constant. Recalling that there are two distinct singulant contributions, we conclude that there are two constants associated with $U$, denoted as $\Lambda_1$ and $\Lambda_2$. A similar analysis performed on (\ref{1:lota2}) produces $V' = 0$. Hence, we find that there are also two constants associated with $V$, which we will denote as $\Lambda_3$ and $\Lambda_4$.
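The singulant condition and the location of the Type A Stokes lines can be verified directly; the following minimal Python sketch (illustrative only) checks that $\chi' = \pm\mathrm{i}\pi/2 + M\pi\mathrm{i}$ satisfies $\cosh(-\chi') = 0$, and that $\chi_1$ is real and positive on the negative imaginary axis.

```python
import cmath

# The candidate singulant slopes chi' = ±i*pi/2 + M*pi*i should all satisfy
# cosh(-chi') = 0 (the limiting form of the singulant equation).
for sign in (1, -1):
    for M in range(-3, 4):
        chip = 1j * (sign * cmath.pi / 2 + M * cmath.pi)
        assert abs(cmath.cosh(-chip)) < 1e-12

# On the negative imaginary axis, s = -i*t with t > 0, the singulant
# chi_1 = i*pi*s/2 is real and positive, so Im(chi_1) = 0 and Re(chi_1) > 0:
# precisely the Stokes-line conditions for chi_1.
t = 2.0
s = -1j * t
chi1 = 1j * cmath.pi * s / 2
assert abs(chi1.imag) < 1e-12 and chi1.real > 0
```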
Finally, we note that for the ansatz (\ref{1:ansatz}) to be consistent with the leading-order behaviour given in (\ref{1:earlyordersa}), we must have $k = -1/2$. This gives a singularity of the appropriate strength in the limit $s \rightarrow 0$. Hence, the late-order terms are given by \begin{equation}\label{2:LOT} u_m \sim \frac{\Lambda_1\Gamma(m/2-1/2)}{(\mathrm{i}\pi s/2)^{m/2-1/2}} + \frac{\Lambda_2\Gamma(m/2-1/2)}{(-\mathrm{i}\pi s/2)^{m/2-1/2}},\qquad v_m \sim \frac{\Lambda_3\Gamma(m/2-1/2)}{(\mathrm{i}\pi s/2)^{m/2-1/2}} + \frac{\Lambda_4\Gamma(m/2-1/2)}{(-\mathrm{i}\pi s/2)^{m/2-1/2}}. \end{equation} The only component of the late-order terms that remains to be found is the value of the constants $\Lambda_{i}$, which may be determined numerically. We illustrate this on an example problem in Appendix \ref{A_typeA}. \subsection{Stokes structure}\label{0:StokesStruct} Having calculated $\chi_1$ and $\chi_2$, we recall from Section \ref{Method} that the behaviour of the exponential term will be proportional to $\exp{(-\chi/\epsilon)}$; this is also demonstrated in the full Stokes switching analysis contained within Appendix \ref{A2}. This term will be exponentially small when $\mathrm{Re}(\chi) > 0$, and exponentially large when $\mathrm{Re}(\chi) < 0$. We also recall that Stokes lines exist where $\mathrm{Im}(\chi) = 0$ and $\mathrm{Re}(\chi) > 0$, while anti-Stokes lines follow $\mathrm{Re}(\chi) = 0$. Figure \ref{Stokes_dPIa1} illustrates the behaviour of the singulants in each of the four quadrants within the complex $s$-plane.
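One way the constants $\Lambda_i$ can be extracted numerically is by evaluating the form (\ref{2:LOT}) at two orders of the same parity and solving the resulting linear system. The sketch below demonstrates the idea on synthetic data; the chosen value of $s$, the orders, and the ``true'' constants are hypothetical, and the procedure actually used in this paper is described in Appendix \ref{A_typeA}.

```python
import cmath, math

# Synthetic test of recovering the late-order constants from two coefficients
# u_m ~ L1*Gamma(m/2-1/2)/chi1**(m/2-1/2) + L2*Gamma(m/2-1/2)/chi2**(m/2-1/2).
# All numerical values below are hypothetical.
s = 1.0 + 0.5j
chi1, chi2 = 1j * cmath.pi * s / 2, -1j * cmath.pi * s / 2
L1, L2 = 0.3 - 0.1j, 0.2 + 0.4j

def term(m, chi):
    p = m / 2 - 0.5
    return math.gamma(p) / chi ** p

m1, m2 = 20, 22                                   # two even orders
u_m1 = L1 * term(m1, chi1) + L2 * term(m1, chi2)
u_m2 = L1 * term(m2, chi1) + L2 * term(m2, chi2)

# Solve the 2x2 linear system for (L1, L2) by Cramer's rule.
a, b = term(m1, chi1), term(m1, chi2)
c, d = term(m2, chi1), term(m2, chi2)
det = a * d - b * c
L1_est = (u_m1 * d - b * u_m2) / det
L2_est = (a * u_m2 - u_m1 * c) / det
```

In practice the coefficients would come from the recurrence relations (\ref{1:orderna})-(\ref{1:ordernb}) rather than being synthesized, and several orders would be used to confirm convergence of the estimates.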
\begin{figure} \centering \begin{tikzpicture} [xscale=0.5,>=stealth,yscale=0.5] \draw[white] (8,0) -- (-8,0); \draw[<->] (-6,0) -- (6,0) node[right] {\scriptsize{$\mathrm{Re}(s)$}}; \draw[<->] (0,-6) -- (0,6) node[above] {\scriptsize{$\mathrm{Im}(s)$}}; \draw[line width=0.65mm,dashed] (0,0) -- (5,0); \draw[line width=0.65mm] (0,0) -- (0,5.5); \draw[line width=0.65mm] (0,0) -- (0,-5.5); \draw[decoration = {zigzag,segment length = 2mm, amplitude = 0.5mm},decorate, line width=0.75mm,black] (0,0)--(-5.5,0); \filldraw (0,0) circle (1mm); \node at (2,4) [right] {\scriptsize{$\mathrm{Re}(\chi_1) < 0,\quad \mathrm{Im}(\chi_1)>0$}}; \node at (2,3) [right] {\scriptsize{$\mathrm{Re}(\chi_2) > 0,\quad \mathrm{Im}(\chi_2)<0$}}; \node at (2,-3) [right] {\scriptsize{$\mathrm{Re}(\chi_1) > 0,\quad \mathrm{Im}(\chi_1)>0$}}; \node at (2,-4) [right] {\scriptsize{$\mathrm{Re}(\chi_2) < 0,\quad \mathrm{Im}(\chi_2)<0$}}; \node at (-2,4) [left] {\scriptsize{$\mathrm{Re}(\chi_1) < 0,\quad \mathrm{Im}(\chi_1)<0$}}; \node at (-2,3) [left] {\scriptsize{$\mathrm{Re}(\chi_2) > 0,\quad \mathrm{Im}(\chi_2)>0$}}; \node at (-2,-3) [left] {\scriptsize{$\mathrm{Re}(\chi_1) > 0,\quad \mathrm{Im}(\chi_1)<0$}}; \node at (-2,-4) [left] {\scriptsize{$\mathrm{Re}(\chi_2) < 0,\quad \mathrm{Im}(\chi_2)>0$}}; \draw[<-] (-0.2,1) -- (-1.5,1) node[left] {\scriptsize{Stokes line}}; \draw[<-] (-0.2,-1) -- (-1.5,-1) node[left] {\scriptsize{Stokes line}}; \draw[<-] (1,0.2) .. controls (1,0.8) and (1.2,1) .. (2,1) node[right] {\scriptsize{Anti-Stokes line}}; \end{tikzpicture} \caption{Behaviour of the singulants for Type A solutions in each of the four quadrants. The dashed curve indicates where the real part of both singulants is zero (and hence corresponds to an anti-Stokes line), and the solid curve indicates where the imaginary parts are zero, corresponding to potential Stokes lines. The black jagged line represents a branch cut.
Recalling that Stokes switching may only occur when $\mathrm{Im}(\chi) = 0$ and $\mathrm{Re}(\chi) > 0$, we see that $\chi_1$ switches across the curve $\mathrm{arg}(s) = -\pi/2$, while $\chi_2$ switches across the curve $\mathrm{arg}(s) = \pi/2$.}\label{Stokes_dPIa1} \end{figure} We see that there is an anti-Stokes line following the positive real axis, while Stokes lines follow the negative and positive imaginary axis for $\chi_1$ and $\chi_2$ respectively. To determine the behaviour that occurs as the Stokes lines are crossed, we note that the remainder term is of algebraic order along anti-Stokes lines. However, this would make it comparable in size to the leading-order behaviour on the positive real axis, which is not possible. Hence, we conclude that the remainder terms associated with $\chi_1$ and $\chi_2$ are not present along the real axis at all, and hence cannot be active in the first or fourth quadrant. Consequently, we conclude that the exponential contributions associated with $\chi_1$ and $\chi_2$ are present on the left-hand side of their Stokes lines, but are switched off as they cross from the left to the right-hand side (into the fourth and first quadrant respectively). Figure \ref{Stokes_dPIa2} illustrates the regions in which the exponential contributions are active. 
\begin{figure} \centering \subfloat[Regions containing exponential behaviour]{ \begin{tikzpicture} [xscale=0.4,>=stealth,yscale=0.4] \draw[white] (8,0) -- (-8,0); \draw[<->] (-6,0) -- (6,0) node[right] {\scriptsize{$\mathrm{Re}(s)$}}; \draw[<->] (0,-6) -- (0,6) node[above] {\scriptsize{$\mathrm{Im}(s)$}}; \draw[line width=0.65mm,dashed] (0,0) -- (5,0); \draw[line width=0.65mm] (0,0) -- (0,5.5) node[left] {\scriptsize{\ding{193}}}; \draw[line width=0.65mm] (0,0) -- (0,-5.5) node[left] {\scriptsize{\ding{192}}}; \draw[decoration = {zigzag,segment length = 2mm, amplitude = 0.5mm},decorate, line width=0.75mm,black] (0,0)--(-5.5,0); \filldraw (0,0) circle (1mm); \draw[<-] (-0.2,1) -- (-1.5,1) node[left] {\scriptsize{Stokes line}}; \draw[<-] (-0.2,-1) -- (-1.5,-1) node[left] {\scriptsize{Stokes line}}; \draw[<-] (1,0.2) .. controls (1,0.8) and (1.2,1) .. (2,1) node[right] {\scriptsize{Anti-Stokes line}}; \node at (2,4) [right] {\scriptsize{$\chi_1$: Inactive}}; \node at (2,3) [right] {\scriptsize{$\chi_2$: Inactive}}; \node at (2,-3) [right] {\scriptsize{$\chi_1$: Inactive}}; \node at (2,-4) [right] {\scriptsize{$\chi_2$: Inactive}}; \node at (-8.5,4) [right] {\scriptsize{$\chi_1$: Inactive}}; \node at (-8.5,3) [right] {\scriptsize{$\chi_2$: Active (exp. small)}}; \node at (-8.5,-3) [right] {\scriptsize{$\chi_1$: Active} (exp. 
small)}; \node at (-8.5,-4) [right] {\scriptsize{$\chi_2$: Inactive}}; \end{tikzpicture} } \subfloat[Exponential contribution]{ \begin{tikzpicture} [xscale=0.4,>=stealth,yscale=0.4] \draw[white] (8,0) -- (-8,0); \draw[<->] (-6,0) -- (6,0) node[right] {\scriptsize{$\mathrm{Re}(s)$}}; \draw[<->] (0,-6) -- (0,6) node[above] {\scriptsize{$\mathrm{Im}(s)$}}; \draw[line width=0.65mm,dashed] (0,0) -- (5,0); \draw[line width=0.65mm] (0,0) -- (0,5.5) node[left] {\scriptsize{\ding{193}}}; \draw[line width=0.65mm] (0,0) -- (0,-5.5) node[left] {\scriptsize{\ding{192}}}; \draw[decoration = {zigzag,segment length = 2mm, amplitude = 0.5mm},decorate, line width=0.75mm,black] (0,0)--(-5.5,0); \filldraw (0,0) circle (1mm); \node at (2,4) [right] {\scriptsize{$R_N = 0$}}; \node at (2,3) [right] {\scriptsize{$S_N = 0$}}; \node at (2,-3) [right] {\scriptsize{$R_N = 0$}}; \node at (2,-4) [right] {\scriptsize{$S_N = 0$}}; \node at (-7.5,4) [right] {\scriptsize{$R_N \sim 4\Lambda_2(\mathrm{i})^{s/\epsilon}$}}; \node at (-7.5,3) [right] {\scriptsize{$S_N \sim 4\Lambda_4(\mathrm{i})^{s/\epsilon}$}}; \node at (-7.5,-3) [right] {\scriptsize{$R_N \sim 4\Lambda_1(-\mathrm{i})^{s/\epsilon}$}}; \node at (-7.5,-4) [right] {\scriptsize{$S_N \sim 4\Lambda_3(-\mathrm{i})^{s/\epsilon}$}}; \end{tikzpicture} } \caption{Figure (a) illustrates the regions of the $s$-plane in which exponential contributions associated with $\chi_1$ and $\chi_2$ are active. Switching behaviour of exponential contributions occurs across the Stokes lines, denoted as solid black lines, where the associated singulant is indicated by the nearby circled numbers. The anti-Stokes line is represented as a dashed curve. The exponential contributions cannot be present along the anti-Stokes line, as they would not be exponentially small here, and hence would be visible to the algebraic power series. 
Figure (b) shows the exponentially-small remainder terms present in each region.}\label{Stokes_dPIa2} \end{figure} Importantly, we see that as the exponential contributions associated with $\chi_1$ and $\chi_2$ are present only in the third and second quadrants respectively, in which they are exponentially small (as $\mathrm{Re}(\chi) > 0$), they do not cause any difficulty for the validity of the asymptotic expansion. Hence, the solutions satisfying (\ref{1:earlyordersa}) are valid on the entire complex plane, with the exception of the branch cut. We therefore conclude that the solutions are asymptotically pole-free in the entire far-field, aside from the negative real axis. Finally, now that we know the nature of the switching behaviour as the Stokes lines are crossed, we are in a position to calculate the precise form of the remainder term associated with the late-order terms, and hence determine the exponentially-small contribution present in the problem. \subsection{Stokes smoothing}\label{2.4:Smoothing} To determine the behaviour in the neighbourhood of the Stokes line, we return to (\ref{1:PL1expandeda})-(\ref{1:PL1expandedb}). Now, we express $u$ and $v$ as asymptotic expansions truncated after $N$ terms, with remainder $R_N$ and $S_N$ respectively, such that \begin{equation}\label{4:seriesN} u(s,\epsilon) = \sum_{m=0}^N \epsilon^{m/2} u_m(s) + R_N(s),\qquad v(s,\epsilon) = \sum_{m=0}^N \epsilon^{m/2} v_m(s) + S_N(s). \end{equation} If $N$ is chosen so that the series are truncated after their smallest terms, the remainder terms will be exponentially small. By considering the behaviour of these remainder terms, we can obtain the exponentially-small corrections to the asymptotic series, and use these to determine the region in which the asymptotic series provides a valid approximation of the exact solution. A full analysis of the remainder terms $R_N$ and $S_N$ is given in Appendix \ref{A2}. 
This analysis is quite technical; we therefore summarize the results in the remainder of this section. From Appendix \ref{A2}, we find that the optimal truncation point is given by $N_{\mathrm{opt}} = 2|\chi|/\epsilon + \omega$, where $0 \leq \omega < 1$ is chosen so that $N_{\mathrm{opt}}$ is integer-valued. The remainder terms given in (\ref{4:RNout})-(\ref{4:RNout2}) are shown to take the form \begin{align}\label{4:remainderR} R_N &\sim \mathcal{S} _1\Lambda_1\,\mathrm{e}^{-\mathrm{i}\pi s/2\epsilon} + \mathcal{S} _2\Lambda_2\,\mathrm{e}^{\mathrm{i}\pi s/2\epsilon} = \mathcal{S} _1\Lambda_1\,(-\mathrm{i})^{s/\epsilon} + \mathcal{S} _2\Lambda_2\,(\mathrm{i})^{s/\epsilon},\\ S_N &\sim \mathcal{S} _1\Lambda_3\,\mathrm{e}^{-\mathrm{i}\pi s/2\epsilon} + \mathcal{S} _2\Lambda_4\,\mathrm{e}^{\mathrm{i}\pi s/2\epsilon} = \mathcal{S} _1\Lambda_3\,(-\mathrm{i})^{s/\epsilon} + \mathcal{S} _2\Lambda_4\,(\mathrm{i})^{s/\epsilon},\label{4:remainderS} \end{align} where $\mathcal{S} _i$ is the Stokes-switching parameter that varies rapidly in the neighbourhood of Stokes lines. From the analysis in Appendix \ref{A2}, we find that the value of $\mathcal{S}_1$ changes smoothly from $\mathcal{S}_1 = 4$ to $\mathcal{S}_1 = 0$ when crossing from the third to the fourth quadrant, and the value of $\mathcal{S}_2$ changes smoothly from $\mathcal{S}_2 = 0$ to $\mathcal{S}_2 = 4$ when crossing from the second to the first quadrant. Hence, the exponential behaviour of the asymptotic approximation has been completely determined, and is shown in Figure \ref{Stokes_dPIa2}(b).
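The effect of optimal truncation can be illustrated on a toy model (not taken from this problem): the Stieltjes function has a factorial-over-power asymptotic series with singulant $\chi = 1$, and truncating near $N \approx \chi/\epsilon$ minimizes the error, leaving a remainder of order $\mathrm{e}^{-1/\epsilon}$. (The factor of $2$ in $N_{\mathrm{opt}}$ above arises because the dP$_\mathrm{I}$ series proceeds in powers of $\epsilon^{1/2}$.)

```python
import math

# Toy optimal-truncation demo: f(eps) = ∫_0^∞ e^{-t}/(1 + eps*t) dt has the
# divergent asymptotic series sum_m (-1)^m m! eps^m with singulant chi = 1.
def f(eps, T=60.0, n=200000):
    # composite trapezoidal rule; the integrand decays like e^{-t}
    h = T / n
    total = 0.5 * (1.0 + math.exp(-T) / (1 + eps * T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-t) / (1 + eps * t)
    return h * total

eps = 0.1
exact = f(eps)

def partial(N):
    return sum((-1) ** m * math.factorial(m) * eps ** m for m in range(N + 1))

# The error is smallest near N ≈ 1/eps = 10 and is then exponentially small,
# of order e^{-1/eps}; truncating too early or too late does worse.
errors = {N: abs(partial(N) - exact) for N in (3, 10, 20)}
assert errors[10] < errors[3] and errors[10] < errors[20]
assert errors[10] < 10 * math.exp(-1 / eps)
```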
Consequently, the optimally-truncated asymptotic series for $u(s,\epsilon)$ and $v(s,\epsilon)$ takes the form \begin{align}\label{4:asymp1app} u(s,\epsilon) &\sim \sum_{m=0}^{N_{\mathrm{opt}}} \epsilon^{m/2} u_m(s) + \mathcal{S} _1\Lambda_1\,(-\mathrm{i})^{s/\epsilon} + \mathcal{S} _2\Lambda_2\,(\mathrm{i})^{s/\epsilon},\\ \label{4:asymp1app2} v(s,\epsilon) &\sim \sum_{m=0}^{N_{\mathrm{opt}}}\epsilon^{m/2} v_m(s) + \mathcal{S} _1\Lambda_3\,(-\mathrm{i})^{s/\epsilon} + \mathcal{S} _2\Lambda_4\,(\mathrm{i})^{s/\epsilon}, \end{align} where the leading orders of $u_m$ and $v_m$ are given in (\ref{1:earlyordersa}), and the late-order behaviour is given in (\ref{2:LOT}). This expression is therefore an accurate asymptotic approximation up to exponentially-small terms, valid in the entire complex $s$-plane, with the exception of the branch cut along the negative real axis. \section{Stokes structure of Type B solutions}\label{CASEB} In this section, we determine the late-order asymptotic terms associated with Type B solutions. These are then used to determine the exponentially-small contributions to the asymptotic solution, and subsequently the Stokes structure. From these results, we determine the region of validity for the general solution, and show that for particular choices of the two Stokes-switching parameters, the region of validity may be extended. \subsection{Late-order terms} When considering Type B solutions, we find that the asymptotic series expansions for $u$ and $v$ are identical, and hence that $u_m = v_m$ for all values of $m$. We therefore set $V(s) = U(s)$. Now, using (\ref{1:earlyordersb}), both singulant equations (\ref{1:lot1a})-(\ref{1:lot1b}) reduce to \begin{align}\label{1:eikonal} 2 + \cosh(-\chi') = 0, \end{align} where we have again taken the upper bound of the summation terms to be infinite. From the leading-order behaviour (\ref{1:earlyordersb}), we know that the solution is singular at $s=0$, and hence that $\chi = 0$ at $s = 0$.
Solving (\ref{1:eikonal}) with this boundary condition gives \begin{equation*} \chi = \pm\left[\log(-2-\sqrt{3}) + 2 M \pi \mathrm{i}\right]s, \end{equation*} where $M$ is some integer. As in Type A solutions, the late-order behaviour for $u_m$ will take the form of an infinite sum of factorial-over-power ansatz terms, each associated with a unique choice of $\chi$. However, at late orders, this sum will be dominated by the four terms associated with $M=0$ and $M=-1$, which have the smallest absolute value. Hence, the four terms that contribute to the exponential behaviour of the solution are \begin{alignat*}{2} \chi_1 &= \log(-2-\sqrt{3})s,&\qquad \chi_2 &= -\log(-2-\sqrt{3})s,\\ \chi_3 &= \log(-2+\sqrt{3})s,&\qquad \chi_4 &= -\log(-2+\sqrt{3})s. \end{alignat*} We observe that, unlike Type A, there are four sets of late-order behaviour to be considered in order to understand the Stokes structure of the associated solution. By solving (\ref{1:lota2})-(\ref{1:lotb2}), we again find that $U' = 0$. However, in this case, there are four distinct singulant expressions, and hence four constants associated with the corresponding late-order terms (denoted $\Lambda_1$--$\Lambda_4$). We also find that, as before, $k = -1/2$. Hence, we find that the late-order terms may be expressed as \begin{align} \nonumber u_m = v_m \sim \frac{\Lambda_1\Gamma(m/2-1/2)}{(\log(-2-\sqrt{3})s)^{m/2-1/2}} &+ \frac{\Lambda_2 \Gamma(m/2-1/2)}{(-\log(-2-\sqrt{3})s)^{m/2-1/2}}\\+&\frac{\Lambda_3 \Gamma(m/2-1/2)}{(\log(-2+\sqrt{3})s)^{m/2-1/2}} + \frac{\Lambda_4 \Gamma(m/2-1/2)}{(-\log(-2+\sqrt{3})s)^{m/2-1/2}},\label{4:LOTcalc} \end{align} where $\Lambda_i$ is associated with $\chi_i$. The numerical process to determine $\Lambda_i$ is more complicated than that demonstrated in Appendix \ref{1:2Inner}, as we must now compute four values of $\Lambda_i$, rather than two. An example numerical calculation is illustrated in Appendix \ref{A_typeB}.
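These singulant slopes can be checked directly against the singulant equation, and the inclination of the Type B anti-Stokes directions follows from writing $\log(-2-\sqrt{3}) = \log(2+\sqrt{3}) + \mathrm{i}\pi$ on the principal branch. The short Python sketch below (illustrative only) verifies both.

```python
import cmath, math

# All four dominant Type B singulant slopes should satisfy 2 + cosh(-chi') = 0;
# cmath.log uses the principal branch, giving log(-x) = log(x) + i*pi for x > 0.
slopes = [cmath.log(-2 - math.sqrt(3)), -cmath.log(-2 - math.sqrt(3)),
          cmath.log(-2 + math.sqrt(3)), -cmath.log(-2 + math.sqrt(3))]
for chip in slopes:
    assert abs(2 + cmath.cosh(-chip)) < 1e-12

# With chi' = log(2+sqrt(3)) + i*pi, the anti-Stokes direction Re(chi) = 0
# makes an angle arctan(log(2+sqrt(3))/pi) with the positive real axis.
theta = math.degrees(math.atan(math.log(2 + math.sqrt(3)) / math.pi))
assert abs(theta - 22.74) < 0.01
```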
\subsection{Stokes structure}\label{1.3:Structure} \begin{figure} \centering \subfloat[Stokes structure for $\chi_{1,2}$]{ \begin{tikzpicture} [xscale=0.65,>=stealth,yscale=0.65] \draw[<->] (-6,0) -- (6,0) node[right] {\scriptsize{$\mathrm{Re}(s)$}}; \draw[<->] (0,-6) -- (0,6) node[above] {\scriptsize{$\mathrm{Im}(s)$}}; \draw[line width=0.65mm,dashed] (0,0) -- (25:6); \draw[line width=0.65mm] (0,0) -- (-65:6); \draw[line width=0.65mm] (0,0) -- (25+90:6); \draw[line width=0.65mm,dashed] (0,0) -- (-65-90:6); \draw node at (2,-1) [right] {\scriptsize{$\mathrm{Re}(\chi_1) > 0$, $\mathrm{Im}(\chi_1) > 0$}}; \draw node at (2,-2) [right] {\scriptsize{$\mathrm{Re}(\chi_2) < 0$, $\mathrm{Im}(\chi_2) < 0$}}; \draw node at (0.5,4) [right] {\scriptsize{$\mathrm{Re}(\chi_1) < 0$, $\mathrm{Im}(\chi_1) > 0$}}; \draw node at (0.5,3) [right] {\scriptsize{$\mathrm{Re}(\chi_2) > 0$, $\mathrm{Im}(\chi_2) < 0$}}; \draw node at (-2,2) [left] {\scriptsize{$\mathrm{Re}(\chi_1) < 0$, $\mathrm{Im}(\chi_1) < 0$}}; \draw node at (-2,1) [left] {\scriptsize{$\mathrm{Re}(\chi_2) > 0$, $\mathrm{Im}(\chi_2) > 0$}}; \draw [->] (-4,0.5) -- (-4,-1); \draw (-7.15,2.5) -- (-7.15,0.5) -- (-1.85,0.5) -- (-1.85,2.5) -- cycle; \draw node at (-0.5,-3) [left] {\scriptsize{$\mathrm{Re}(\chi_1) > 0$, $\mathrm{Im}(\chi_1) < 0$}}; \draw node at (-0.5,-4) [left] {\scriptsize{$\mathrm{Re}(\chi_2) < 0$, $\mathrm{Im}(\chi_2) > 0$}}; \draw node at (-3,4.5) [left] {\scriptsize{$\mathrm{Re}(\chi_1) < 0$, $\mathrm{Im}(\chi_1) < 0$}}; \draw node at (-3,3.5) [left] {\scriptsize{$\mathrm{Re}(\chi_2) > 0$, $\mathrm{Im}(\chi_2) > 0$}}; \draw[decoration = {zigzag,segment length = 2mm, amplitude = 0.5mm},decorate, line width=0.75mm,black] (0,0)--(-5.5,0); \filldraw (0,0) circle (1mm); \end{tikzpicture} } \subfloat[Stokes structure for $\chi_{3,4}$]{ \begin{tikzpicture} [xscale=0.65,>=stealth,yscale=0.65] \draw[<->] (-6,0) -- (6,0) node[right] {\scriptsize{$\mathrm{Re}(s)$}}; \draw[<->] (0,-6) -- (0,6) node[above] 
{\scriptsize{$\mathrm{Im}(s)$}}; \draw[line width=0.65mm,dashed] (0,0) -- (-25:6); \draw[line width=0.65mm] (0,0) -- (65:6); \draw[line width=0.65mm] (0,0) -- (-25-90:6); \draw[line width=0.65mm,dashed] (0,0) -- (65+90:6); \draw node at (2,1) [right] {\scriptsize{$\mathrm{Re}(\chi_4) > 0$, $\mathrm{Im}(\chi_4) < 0$}}; \draw node at (2,2) [right] {\scriptsize{$\mathrm{Re}(\chi_3) < 0$, $\mathrm{Im}(\chi_3) > 0$}}; \draw node at (0.5,-4) [right] {\scriptsize{$\mathrm{Re}(\chi_4) < 0$, $\mathrm{Im}(\chi_4) < 0$}}; \draw node at (0.5,-3) [right] {\scriptsize{$\mathrm{Re}(\chi_3) > 0$, $\mathrm{Im}(\chi_3) > 0$}}; \draw node at (-2,-2) [left] {\scriptsize{$\mathrm{Re}(\chi_4) < 0$, $\mathrm{Im}(\chi_4) > 0$}}; \draw node at (-2,-1) [left] {\scriptsize{$\mathrm{Re}(\chi_3) > 0$, $\mathrm{Im}(\chi_3) < 0$}}; \draw [->] (-4,-0.5) -- (-4,1); \draw (-7.15,-2.5) -- (-7.15,-0.5) -- (-1.85,-0.5) -- (-1.85,-2.5) -- cycle; \draw node at (-0.5,3) [left] {\scriptsize{$\mathrm{Re}(\chi_4) > 0$, $\mathrm{Im}(\chi_4) > 0$}}; \draw node at (-0.5,4) [left] {\scriptsize{$\mathrm{Re}(\chi_3) < 0$, $\mathrm{Im}(\chi_3) < 0$}}; \draw node at (-3,-4.5) [left] {\scriptsize{$\mathrm{Re}(\chi_4) < 0$, $\mathrm{Im}(\chi_4) > 0$}}; \draw node at (-3,-3.5) [left] {\scriptsize{$\mathrm{Re}(\chi_3) > 0$, $\mathrm{Im}(\chi_3) < 0$}}; \draw[decoration = {zigzag,segment length = 2mm, amplitude = 0.5mm},decorate, line width=0.75mm,black] (0,0)--(-5.5,0); \filldraw (0,0) circle (1mm); \end{tikzpicture} } \caption{Behaviour of the singulants for Type B solutions in the complex $s$-plane. The dashed lines indicate where the real part of the singulants is zero (and hence correspond to anti-Stokes lines), and the solid lines indicate where the imaginary parts are zero. The jagged line represents a branch cut.
Recalling that Stokes switching may only occur when $\mathrm{Im}(\chi) = 0$ and $\mathrm{Re}(\chi) > 0$, we see that $\chi_2$ and $\chi_4$ switch across the Stokes line in the upper-half plane, while $\chi_1$ and $\chi_3$ switch across the Stokes line in the lower-half plane.}\label{Stokes_dPIab} \end{figure} Having calculated the singulants, we may determine the Stokes structure in a similar fashion to Section \ref{0:StokesStruct}, noting that Stokes lines follow curves on which $\mathrm{Im}(\chi) = 0$ and $\mathrm{Re}(\chi) > 0$, and anti-Stokes lines follow curves satisfying $\mathrm{Re}(\chi) = 0$. The behaviour of the four different singulants is illustrated in Figure \ref{Stokes_dPIab}. From this figure, we see that the Stokes structure of the solution is substantially more complicated than in Type A solutions. Not only are there four different exponential contributions to consider, but there are two different sets of Stokes lines, each associated with two of the singulants. There is, however, also a more interesting difference. From Figure \ref{Stokes_dPIa1}, we were able to conclude that neither exponential contribution could be present on the positive real axis in Type A solutions, as they would not be exponentially small. Hence, as the jump across Stokes lines is prescribed in (\ref{4:remainderR})-(\ref{4:remainderS}), the asymptotic behaviour is specified uniquely, with no free parameters. However, from Figure \ref{Stokes_dPIab}, we see that this is no longer true for Type B solutions. While we can see that $\chi_2$ and $\chi_3$ cannot be present on the positive real axis, as they would be exponentially large in the asymptotic limit, the same cannot be said about $\chi_1$ and $\chi_4$. As $\mathrm{Re}(\chi_{1,4}) > 0$ on the positive real axis, the associated exponential contribution is exponentially small, and the asymptotic series is still valid when such contributions are present.
Hence, the value of $\mathcal{S}_{1,4}$ on the real axis may be freely specified, meaning that the solution contains two free parameters. Consequently, we determine that the exponential contributions active in each region of the plane are those shown in Figure \ref{Stokes_dPIab2}(a). We see that, assuming that $\mathcal{S}_{1,4}$ is nonzero on the real axis, the region in which the asymptotic expansion associated with (\ref{1:earlyordersb}) is valid is a sector between the anti-Stokes lines associated with $\chi_{1,2}$ and $\chi_{3,4}$, in which the exponentials associated with $\chi_2$ and $\chi_3$ are inactive, and those associated with $\chi_1$ and $\chi_4$ are exponentially small. The angle between the edge of the sector and the real axis (denoted by $\theta$ in Figure \ref{Stokes_dPIab2}) is given by \begin{equation*} \theta = \tan^{-1}\left(\frac{\log(2+\sqrt{3})}{\pi}\right) \approx 22.74^{\circ}. \end{equation*} However, we note that it is possible to choose $\mathcal{S}_{1,4} = 0$ on the real axis. Figure \ref{Stokes_dPIab2}(b) illustrates this behaviour. In this case, the region in which the asymptotic expansion is valid is substantially larger than the general case shown in Figure \ref{Stokes_dPIab2}(a); there is no exponential contribution associated with $\chi_1$ or $\chi_4$ to become large as the anti-Stokes lines are crossed. Instead, the angle $\theta$ is now given by \begin{equation*} \theta = \pi - \tan^{-1}\left(\frac{\log(2+\sqrt{3})}{\pi}\right) \approx 157.26^{\circ}. 
\end{equation*} \begin{figure} \centering \subfloat[General Stokes Structure]{ \begin{tikzpicture} [xscale=1,>=stealth,yscale=1] \draw (0,0) -- (0,3.5) -- (4,3.5) -- (4,0) -- cycle; \draw (4.5,0) -- (4.5,3.5) -- (8.5,3.5) -- (8.5,0) -- cycle; \draw (0,4) -- (0,7.5) -- (4,7.5) -- (4,4) -- cycle; \draw (4.5,4) -- (4.5,7.5) -- (8.5,7.5) -- (8.5,4) -- cycle; \draw (9,0) -- (9,3.5) -- (13,3.5) -- (13,0) -- cycle; \draw[line width=0.50mm] (9.15,7.3) -- (9.65,7.3) node[right] {\scriptsize{Stokes Line}}; \draw[line width=0.50mm,dashed] (9.15,6.9) -- (9.65,6.9) node[right] {\scriptsize{Anti-Stokes Line}}; \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (9.15,6.5)--(9.65,6.5) node[right] {\scriptsize{Branch Cut}}; \draw (9.6,5.8) node [right] {\scriptsize{No Contribution}} -- (9.6,6) -- (9.2,6) -- (9.2,5.6) -- (9.6,5.6) -- (9.6,5.8) -- cycle; \fill[opacity=0.1] (9.2,5.2) -- (9.2,5.4) -- (9.6,5.4) -- (9.6,5) -- (9.2,5) -- (9.2,5.2) -- cycle; \draw (9.6,5.2) node [right] {\scriptsize{Exp. Small Contribution}}-- (9.6,5.4) -- (9.2,5.4) -- (9.2,5) -- (9.6,5) -- (9.6,5.2) -- cycle; \fill[opacity=0.4] (9.2,4.6) -- (9.2,4.8) -- (9.6,4.8) -- (9.6,4.4) -- (9.2,4.4) -- (9.2,4.6) -- cycle; \draw (9.6,4.6) node [right] {\scriptsize{Exp. 
Large Contribution}}-- (9.6,4.8) -- (9.2,4.8) -- (9.2,4.4) -- (9.6,4.4) -- (9.6,4.6) -- cycle; \draw[line width=0.50mm,dashed] (2,1.75) -- ++(-25:1.40); \draw[line width=0.50mm,dashed] (2,1.75) -- ++(-180-25:1.40); \draw[line width=0.50mm] (2,1.75) -- ++(-25-90:1.40); \draw[->] (2,0.25) -- (2,3.25); \draw[->] (0.5,1.75) -- (3.5,1.75); \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (2,1.75)--(0.65,1.75); \fill[opacity=0.1] (1,1.75) arc (-180:-90-25:1) -- (2,1.75) -- cycle; \node at (2,1.75) (a) {}; \path (a) ++(-90-25:1.6) node (c) {\scriptsize{\ding{194}}}; \node at (0.75,0.75) {\scriptsize{$\mathcal{S}_3 = \mathcal{S}_3^+$}}; \node at (2.08,0.75) {\scriptsize{$\mathcal{S}_3 = 0$}}; \draw[line width=0.50mm,dashed] (2+4.5,1.75) -- ++(-25:1.40); \draw[line width=0.50mm,dashed] (2+4.5,1.75) -- ++(-180-25:1.40); \draw[line width=0.50mm] (2+4.5,1.75) -- ++(-25+90:1.40); \draw[->] (2+4.5,0.25) -- (2+4.5,3.25); \draw[->] (0.5+4.5,1.75) -- (3.5+4.5,1.75); \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (2+4.5,1.75)--(0.65+4.5,1.75); \fill[opacity=0.4] (1+4.5,1.75) arc (-180:-25:1) -- (2+4.5,1.75) -- cycle; \fill[opacity=0.4] (1+4.5,1.75) arc (180:180-25:1) -- (2+4.5,1.75) -- cycle; \fill[opacity=0.1] (3+4.5,1.75) arc (0:-25:1) -- (2+4.5,1.75) -- cycle; \fill[opacity=0.1] (3+4.5,1.75) arc (0:180-25:1) -- (2+4.5,1.75) -- cycle; \node at (2+4.5,1.75) (a) {}; \path (a) ++(90-25:1.6) node (c) {\scriptsize{\ding{195}}}; \node at (3.25,0.75+4) {\scriptsize{$\mathcal{S}_1 = \mathcal{S}_1^-$}}; \node at (1.425,0.55+4) {\scriptsize{$\mathcal{S}_1 = \mathcal{S}_1^+$}}; \draw[line width=0.50mm,dashed] (2+4.5,1.75+4) -- ++(25:1.40); \draw[line width=0.50mm,dashed] (2+4.5,1.75+4) -- ++(-180+25:1.40); \draw[line width=0.50mm] (2+4.5,1.75+4) -- ++(25+90:1.40); \draw[->] (2+4.5,0.25+4) -- (2+4.5,3.25+4); \draw[->] (0.5+4.5,1.75+4) -- (3.5+4.5,1.75+4); \draw[decoration = 
{zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (2+4.5,1.75+4)--(0.65+4.5,1.75+4); \fill[opacity=0.1] (1+4.5,1.75+4) arc (180:90+25:1) -- (2+4.5,1.75+4) -- cycle; \node at (2+4.5,1.75+4) (a) {}; \path (a) ++(90+25:1.6) node (c) {\scriptsize{\ding{193}}}; \node at (0.75+4.5,3.5-0.75+4) {\scriptsize{$\mathcal{S}_2 = \mathcal{S}_2^+$}}; \node at (2.08+4.5,3.5-0.75+4) {\scriptsize{$\mathcal{S}_2 = 0$}}; \draw[line width=0.50mm,dashed] (2,1.75+4) -- ++(25:1.40); \draw[line width=0.50mm,dashed] (2,1.75+4) -- ++(-180+25:1.40); \draw[line width=0.50mm] (2,1.75+4) -- ++(25-90:1.40); \draw[->] (2,0.25+4) -- (2,3.25+4) node [left] {\scriptsize{$\mathrm{Im}(s)$}}; \draw[->] (0.5,1.75+4) -- (3.5,1.75+4) node [below] {\scriptsize{$\mathrm{Re}(s)$}};; \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (2,1.75+4)--(0.65,1.75+4); \fill[opacity=0.4] (1,1.75+4) arc (180:25:1) -- (2,1.75+4) -- cycle; \fill[opacity=0.4] (1,1.75+4) arc (180:180+25:1) -- (2,1.75+4) -- cycle; \fill[opacity=0.1] (3,1.75+4) arc (0:25:1) -- (2,1.75+4) -- cycle; \fill[opacity=0.1] (3,1.75+4) arc (0:-180+25:1) -- (2,1.75+4) -- cycle; \node at (2,1.75+4) (a) {}; \path (a) ++(-90+25:1.6) node (c) {\scriptsize{\ding{192}}}; \node at (8.5-0.75,3.5-0.75) {\scriptsize{$\mathcal{S}_4 = \mathcal{S}_4^-$}}; \node at (8.5-2.75,3.5-0.55) {\scriptsize{$\mathcal{S}_4 = \mathcal{S}_4^+$}}; \draw[line width=0.50mm,dashed] (2+9,1.75) -- ++(-25:1.40); \draw[line width=0.50mm,dashed] (2+9,1.75) -- ++(25:1.40); \draw[->] (2+9,0.25) -- (2+9,3.25); \draw[->] (0.5+9,1.75) -- (3.5+9,1.75); \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (2+9,1.75)--(0.65+9,1.75); \fill[opacity=0.4] (1+9,1.75) arc (-180:-25:1) -- (2+9,1.75) -- cycle; \fill[opacity=0.4] (1+9,1.75) arc (180:25:1) -- (2+9,1.75) -- cycle; \fill[opacity=0.1] (3+9,1.75) arc (0:-25:1) -- (2+9,1.75) -- cycle; \fill[opacity=0.1] 
(3+9,1.75) arc (0:25:1) -- (2+9,1.75) -- cycle; \node at (2.2+9,1.2+1.75) [right] {\scriptsize{$\chi_1$: Large}}; \node at (2.2+9,-1.2+1.75) [right] {\scriptsize{$\chi_4$: Large}}; \draw[->] (2+9+0.75,1.75) arc (0:25:0.75); \draw[->] (2+9+0.75,1.75) arc (0:-25:0.75); \node at (2+9+0.675,1.75) [above right] {\scriptsize{Valid}}; \node at (2,0) [below] {\scriptsize{$\chi_3$}}; \node at (6.5,0) [below] {\scriptsize{$\chi_4$}}; \node at (2,7.5) [above] {\scriptsize{$\chi_1$}}; \node at (6.5,7.5) [above] {\scriptsize{$\chi_2$}}; \node at (11,0) [below] {\scriptsize{Composite Behaviour}}; \end{tikzpicture} } \subfloat[Special Asymptotic Solution]{ \begin{tikzpicture} [xscale=1,>=stealth,yscale=1] \draw (0,0) -- (0,3.5) -- (4,3.5) -- (4,0) -- cycle; \draw (4.5,0) -- (4.5,3.5) -- (8.5,3.5) -- (8.5,0) -- cycle; \draw (0,4) -- (0,7.5) -- (4,7.5) -- (4,4) -- cycle; \draw (4.5,4) -- (4.5,7.5) -- (8.5,7.5) -- (8.5,4) -- cycle; \draw (9,0) -- (9,3.5) -- (13,3.5) -- (13,0) -- cycle; \draw[line width=0.50mm] (9.15,7.3) -- (9.65,7.3) node[right] {\scriptsize{Stokes Line}}; \draw[line width=0.50mm,dashed] (9.15,6.9) -- (9.65,6.9) node[right] {\scriptsize{Anti-Stokes Line}}; \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (9.15,6.5)--(9.65,6.5) node[right] {\scriptsize{Branch Cut}}; \draw (9.6,5.8) node [right] {\scriptsize{No Contribution}} -- (9.6,6) -- (9.2,6) -- (9.2,5.6) -- (9.6,5.6) -- (9.6,5.8) -- cycle; \fill[opacity=0.1] (9.2,5.2) -- (9.2,5.4) -- (9.6,5.4) -- (9.6,5) -- (9.2,5) -- (9.2,5.2) -- cycle; \draw (9.6,5.2) node [right] {\scriptsize{Exp. Small Contribution}}-- (9.6,5.4) -- (9.2,5.4) -- (9.2,5) -- (9.6,5) -- (9.6,5.2) -- cycle; \fill[opacity=0.4] (9.2,4.6) -- (9.2,4.8) -- (9.6,4.8) -- (9.6,4.4) -- (9.2,4.4) -- (9.2,4.6) -- cycle; \draw (9.6,4.6) node [right] {\scriptsize{Exp. 
Large Contribution}}-- (9.6,4.8) -- (9.2,4.8) -- (9.2,4.4) -- (9.6,4.4) -- (9.6,4.6) -- cycle; \draw[line width=0.50mm,dashed] (2,1.75) -- ++(-25:1.40); \draw[line width=0.50mm,dashed] (2,1.75) -- ++(-180-25:1.40); \draw[line width=0.50mm] (2,1.75) -- ++(-25-90:1.40); \draw[->] (2,0.25) -- (2,3.25); \draw[->] (0.5,1.75) -- (3.5,1.75); \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (2,1.75)--(0.65,1.75); \fill[opacity=0.1] (1,1.75) arc (-180:-90-25:1) -- (2,1.75) -- cycle; \node at (2,1.75) (a) {}; \path (a) ++(-90-25:1.6) node (c) {\scriptsize{\ding{194}}}; \node at (0.75,0.75) {\scriptsize{$\mathcal{S}_3 = \mathcal{S}_3^+$}}; \node at (2.08,0.75) {\scriptsize{$\mathcal{S}_3 = 0$}}; \fill[opacity=0.4] (1+4.5,1.75) arc (180:180-25:1) -- (2+4.5,1.75) -- cycle; \fill[opacity=0.1] (3+4.5,1.75) arc (0:180-25:1) -- (2+4.5,1.75) -- cycle; \fill[white] (3+4.5,1.75) arc (0:90-25:1) -- (2+4.5,1.75) -- cycle; \draw[line width=0.50mm,dashed] (2+4.5,1.75) -- ++(-25:1.40); \draw[line width=0.50mm,dashed] (2+4.5,1.75) -- ++(-180-25:1.40); \draw[line width=0.50mm] (2+4.5,1.75) -- ++(-25+90:1.40); \draw[->] (2+4.5,0.25) -- (2+4.5,3.25); \draw[->] (0.5+4.5,1.75) -- (3.5+4.5,1.75); \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (2+4.5,1.75)--(0.65+4.5,1.75); \node at (2+4.5,1.75) (a) {}; \path (a) ++(90-25:1.6) node (c) {\scriptsize{\ding{195}}}; \node at (3.25,0.75+4) {\scriptsize{$\mathcal{S}_1 = 0$}}; \node at (1.425,0.55+4) {\scriptsize{$\mathcal{S}_1 = \mathcal{S}_1^+$}}; \draw[line width=0.50mm,dashed] (2+4.5,1.75+4) -- ++(25:1.40); \draw[line width=0.50mm,dashed] (2+4.5,1.75+4) -- ++(-180+25:1.40); \draw[line width=0.50mm] (2+4.5,1.75+4) -- ++(25+90:1.40); \draw[->] (2+4.5,0.25+4) -- (2+4.5,3.25+4); \draw[->] (0.5+4.5,1.75+4) -- (3.5+4.5,1.75+4); \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] 
(2+4.5,1.75+4)--(0.65+4.5,1.75+4); \fill[opacity=0.1] (1+4.5,1.75+4) arc (180:90+25:1) -- (2+4.5,1.75+4) -- cycle; \node at (2+4.5,1.75+4) (a) {}; \path (a) ++(90+25:1.6) node (c) {\scriptsize{\ding{193}}}; \node at (0.75+4.5,3.5-0.75+4) {\scriptsize{$\mathcal{S}_2 = \mathcal{S}_2^+$}}; \node at (2.08+4.5,3.5-0.75+4) {\scriptsize{$\mathcal{S}_2 = 0$}}; \fill[opacity=0.4] (1,1.75+4) arc (180:180+25:1) -- (2,1.75+4) -- cycle; \fill[opacity=0.1] (3,1.75+4) arc (0:-180+25:1) -- (2,1.75+4) -- cycle; \fill[white] (3,1.75+4) arc (0:-90+25:1) -- (2,1.75+4) -- cycle; \draw[line width=0.50mm,dashed] (2,1.75+4) -- ++(25:1.40); \draw[line width=0.50mm,dashed] (2,1.75+4) -- ++(-180+25:1.40); \draw[line width=0.50mm] (2,1.75+4) -- ++(25-90:1.40); \draw[->] (2,0.25+4) -- (2,3.25+4) node [left] {\scriptsize{$\mathrm{Im}(s)$}}; \draw[->] (0.5,1.75+4) -- (3.5,1.75+4) node [below] {\scriptsize{$\mathrm{Re}(s)$}};; \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (2,1.75+4)--(0.65,1.75+4); \node at (2,1.75+4) (a) {}; \path (a) ++(-90+25:1.6) node (c) {\scriptsize{\ding{192}}}; \node at (8.5-0.75,3.5-0.75) {\scriptsize{$\mathcal{S}_4 = 0$}}; \node at (8.5-2.75,3.5-0.55) {\scriptsize{$\mathcal{S}_4 = \mathcal{S}_4^+$}}; \fill[opacity=0.4] (1+9,1.75) arc (-180:-180+25:1) -- (2+9,1.75) -- cycle; \fill[opacity=0.4] (1+9,1.75) arc (180:180-25:1) -- (2+9,1.75) -- cycle; \fill[opacity=0.1] (3+9,1.75) arc (0:180-25:1) -- (2+9,1.75) -- cycle; \fill[opacity=0.1] (3+9,1.75) arc (0:-180+25:1) -- (2+9,1.75) -- cycle; \fill[white] (3+9,1.75) arc (0:90-25:1) -- (2+9,1.75) -- cycle; \fill[white] (3+9,1.75) arc (0:-90+25:1) -- (2+9,1.75) -- cycle; \draw[line width=0.50mm,dashed] (2+9,1.75) -- ++(180-25:1.40); \draw[line width=0.50mm,dashed] (2+9,1.75) -- ++(180+25:1.40); \draw[line width=0.50mm,dashed] (2+9,1.75) -- ++(180-25:1.40); \draw[line width=0.50mm] (2+9,1.75) -- ++(90+25:1.40); \draw[line width=0.50mm] (2+9,1.75) -- ++(90-25:1.40); 
\draw[line width=0.50mm] (2+9,1.75) -- ++(-90+25:1.40); \draw[line width=0.50mm] (2+9,1.75) -- ++(-90-25:1.40); \draw[->] (2+9,0.25) -- (2+9,3.25); \draw[->] (0.5+9,1.75) -- (3.5+9,1.75); \draw[decoration = {zigzag,segment length = 1mm, amplitude = 0.25mm},decorate, line width=0.5mm,black] (2+9,1.75)--(0.65+9,1.75); \node at (2+9,1.75) (a) {}; \path (a) ++(-90+25:1.6) node (c) {\scriptsize{\ding{192}}}; \path (a) ++(90+25:1.6) node (c) {\scriptsize{\ding{193}}}; \path (a) ++(90-25:1.6) node (c) {\scriptsize{\ding{195}}}; \path (a) ++(-90-25:1.6) node (c) {\scriptsize{\ding{194}}}; \draw[->] (2+9+0.75,1.75) arc (0:180-25:0.75); \draw[->] (2+9+0.75,1.75) arc (0:-180+25:0.75); \node at (2+9+0.675,1.75) [above right] {\scriptsize{Valid}}; \node at (2,0) [below] {\scriptsize{$\chi_3$}}; \node at (6.5,0) [below] {\scriptsize{$\chi_4$}}; \node at (2,7.5) [above] {\scriptsize{$\chi_1$}}; \node at (6.5,7.5) [above] {\scriptsize{$\chi_2$}}; \node at (11,0) [below] {\scriptsize{Composite Behaviour}}; \end{tikzpicture} } \caption{Behaviour of the singulants for Type B solutions in the complex $s$-plane. We illustrate the behaviour of the four exponential contributions (associated with the four singulants) individually, and also the composite behaviour when each is taken into account. The region in which the series expansion (\ref{1:seriesinf}) provides a valid asymptotic approximation for $u(s,\epsilon)$ and $v(s,\epsilon)$ is illustrated in the composite expression. In each figure, the change in Stokes multiplier $\mathcal{S}$ across Stokes lines is illustrated. We recall that the jump across Stokes lines is specified in (\ref{4:Sjump}). Hence, as $\mathcal{S}_{2,3}$ must be zero on the real axis, their behaviour is completely specified. However, this is not true of $\mathcal{S}_{1,4}$, which leads to two free parameters being present in the problem (specifically, the value of these Stokes multipliers on the real axis). 
Figure (a) illustrates the behaviour for general, non-zero choices of $\mathcal{S}_{1,4}$ on the real axis. Figure (b) illustrates a special case when $\mathcal{S}_{1,4}$ are chosen to be zero on the real axis, giving a larger region of validity for the asymptotic solution.}\label{Stokes_dPIab2} \end{figure} \subsection{Stokes smoothing}\label{1.4:Smoothing} Using a nearly identical analysis to Section \ref{2.4:Smoothing}, we are able to show that for Type B solutions, the quantity switched across each Stokes line takes the form \begin{equation}\label{1:rng0} R_N \sim \mathcal{S} U \mathrm{e}^{-\chi/\epsilon}, \end{equation} where $R_N$ is again the exponential contribution to the solution, and $\mathcal{S}$ is the Stokes switching parameter. Using matched asymptotic expansions, we find that \begin{equation*} [\mathcal{S}]^+_- \sim \frac{6\pi}{\chi'\sinh(-\chi')}. \end{equation*} We note that \begin{align*} \chi_{1,2}'\sinh(-\chi_{1,2}') = \sqrt{3}\log(-2-\sqrt{3}),\qquad \chi_{3,4}'\sinh(-\chi_{3,4}') = -\sqrt{3}\log(-2+\sqrt{3}). \end{align*} Hence, from (\ref{1:rng0}), the full exponential behaviour takes the form \begin{align*} R_N \sim \mathcal{S}_1 &\Lambda_1 (-2-\sqrt{3})^{-s/\epsilon} + \mathcal{S}_2 \Lambda_2 (-2-\sqrt{3})^{s/\epsilon} \\&+ \mathcal{S}_3\Lambda_3 (-2+\sqrt{3})^{-s/\epsilon} + \mathcal{S}_4\Lambda_4 (-2+\sqrt{3})^{s/\epsilon}, \end{align*} where $\mathcal{S}_i$ switches rapidly across the relevant Stokes line such that the difference is given by \begin{align}\label{4:Sjump} [\mathcal{S}_{1,2}]^+_- \sim \frac{2\pi\sqrt{3}}{\log(-2-\sqrt{3})},\qquad [\mathcal{S}_{3,4}]^+_- \sim \frac{-2\pi\sqrt{3}}{\log(-2+\sqrt{3})}. \end{align} We also note for completeness that $\mathcal{S}_2$ and $\mathcal{S}_3$ must equal zero on one side of the Stokes line, and hence are uniquely determined, while this condition is not required for $\mathcal{S}_1$ and $\mathcal{S}_4$. 
However, if both $\mathcal{S}_1^+$ and $\mathcal{S}_4^+$ are zero, we obtain the special Stokes structure illustrated in Figure \ref{Stokes_dPIab2}(b). Consequently, the optimally-truncated asymptotic series for $u(s,\epsilon)$, and hence $v(s,\epsilon)$, is shown to take the form \begin{align}\nonumber u(s,\epsilon) \sim \sum_{m=0}^{N_{\mathrm{opt}}} \epsilon^{m/2} u_m(s) + \mathcal{S}_1 &\Lambda_1 (-2-\sqrt{3})^{-s/\epsilon} + \mathcal{S}_2 \Lambda_2 (-2-\sqrt{3})^{s/\epsilon} \\&+ \mathcal{S}_3\Lambda_3 (-2+\sqrt{3})^{-s/\epsilon} + \mathcal{S}_4\Lambda_4 (-2+\sqrt{3})^{s/\epsilon},\label{4:asymp2app} \end{align} where the leading orders of $u_n$ are given in (\ref{1:earlyordersb}), and the late-order behaviour is given in (\ref{4:LOTcalc}). This expression is an accurate asymptotic approximation up to exponentially-small terms. Unlike the Type A solution, however, this approximation of the Type B solution is only valid within a wedge of the complex plane illustrated in Figure \ref{Stokes_dPIab2}. We also recall from Section \ref{1.3:Structure} that $\mathcal{S}_2$ and $\mathcal{S}_3$ are uniquely determined, while $\mathcal{S}_1$ and $\mathcal{S}_4$ contain a Stokes-switching parameter that may be freely specified; this behaviour is not present in the asymptotic approximation of Type A solutions. \section{Conclusions} In this paper, we used methods of exponential asymptotics to compute and investigate two types of asymptotic solutions to the first discrete Painlev\'e equation, associated with the leading order behaviours described in (\ref{1:earlyordersa}) and (\ref{1:earlyordersb}). We identified the Stokes structure of the solution for each type, and used this information to deduce the sectors displaying asymptotically pole-free behaviour in the far-field limit. 
These asymptotic solutions were given as the sum of a truncated asymptotic power series and an exponentially-subdominant remainder term, shown in (\ref{4:asymp1app})-(\ref{4:asymp1app2}) for Type A solutions, and (\ref{4:asymp2app}) for Type B solutions. In Section \ref{CASEA}, we showed that the asymptotic solution associated with (\ref{1:earlyordersa}) is a valid approximation in the far field of the entire complex plane, aside from a branch cut along the negative real axis, and hence the solution is asymptotically pole-free. This solution has no exponentially-small behaviour on the positive real axis, although there is exponentially-small behaviour present in the left-half complex plane. This implies that this asymptotic behaviour is uniquely defined, with no free parameter present in the solution. The asymptotic solution associated with (\ref{1:earlyordersb}), discussed in Section \ref{CASEB}, is generally asymptotically pole-free within a narrow wedge in the complex plane containing the positive real axis, seen in Figure \ref{Stokes_dPIab2}(a). Unlike the first asymptotic solution, however, there are two free parameters in the asymptotic approximation, associated with Stokes switching. If these parameters are chosen appropriately, we discover asymptotic solutions with larger ranges of validity, such as that shown in Figure \ref{Stokes_dPIab2}(b). Outside of the asymptotically pole-free regions, the asymptotic series solution is no longer valid, and we instead see the elliptic function behaviour described by \cite{Joshi2}. In fact, the oscillatory behaviour that is present along anti-Stokes lines is a degenerate form of this behaviour. 
We note with interest that the two classes of asymptotically pole-free solutions to dP$_{\mathrm{I}}$, Types A and B, demonstrate power series-type behaviour in the far field; however, they differ in that Type A solutions are uniquely specified with no free parameters, while Type B solutions contain free parameters associated with the Stokes multipliers in the solution. Similar asymptotic behaviour is seen in the tritronqu\'ee and tronqu\'ee solutions to P$_{\mathrm{I}}$, the former of which is uniquely specified, while the latter contains free parameters associated with Stokes switching. Consequently, the two classes of asymptotic behaviour described in this study may be regarded as corresponding to tritronqu\'ee and tronqu\'ee solutions of P$_{\mathrm{I}}$, respectively. The asymptotic solutions to dP$_{\mathrm{I}}$ also provide insight into the behaviour of solution hierarchies to the equations described in (\ref{0:P4dP1}) and (\ref{0:pndPI}). The first of these describes solution hierarchies to the fourth Painlev\'e equation. For example, Type B solutions (\ref{1:earlyordersb}) correspond to solution families which grow as $w_n(z) \sim \pm\sqrt{-n/3} - z/3 + \mathcal{O}(n^{-1/2})$ in the large-$n$ limit, with the parameters described in (\ref{0:P4par}) also growing appropriately. These solution hierarchies are of particular mathematical interest due to their power series dependence on $n$ in the asymptotic limit, which is particular to the asymptotic classes of dP$_{\mathrm{I}}$ solutions described in the present study. Finally, we emphasize that this methodology may be applied to a wide range of difference equations, as it does not require the problem to be integrable, nor does it require that the solution be expressed in terms of an integral equation. This is demonstrated in Appendix \ref{A3}, where we outline how this procedure could be applied to determine the Stokes switching behaviour of a non-integrable difference equation. 
Consequently, although we applied the method to determine information about the exponentially-small components of the solution to dP$_{\mathrm{I}}$, it could easily be adapted to analyze asymptotic solutions of broad classes of difference equations. One particular case of interest is the behaviour of orthonormal polynomials satisfying (\ref{0:pnw}) for a variety of different weight functions, such as the semi-classical weights described in \cite{Magnus2}. In general, these polynomial families may be computed using non-integrable difference equations. However, as there is no aspect of the methodology which depends on the integrability of the equation, these difference equations are still amenable to the asymptotic methods described in the present study. \section{Data Accessibility} We have no supporting data aside from that contained within the text. \section{Competing Interests} We have no competing interests. \section{Authors' Contributions} NJ and CJL collaborated on the mathematical analysis. CJL drafted the manuscript. All authors gave final approval for publication. \section{Acknowledgments} The authors would like to thank Prof. C. J. Howls and Prof. Y. Takei for discussion and suggestions regarding this study. The authors would also like to thank the referees for their helpful comments and suggestions. \section{Funding Statement} NJ and CJL were supported by Australian Laureate Fellowship Grant \#FL120100094 from the Australian Research Council.
\section{Introduction} Supermassive black holes of about a billion solar masses have been observed at $z > 6$ \citep{2003AJ....125.1649F,2006AJ....131.1203F,2011Natur.474..616M}. How such massive objects are assembled within a billion years after the Big Bang remains one of the unresolved mysteries in the Universe. Various pathways have been suggested to explain the formation of supermassive black holes in the early universe \citep{1984ARA&A..22..471R,2008arXiv0803.2862D,2009ApJ...702L...5B,2009MNRAS.396..343R,2010A&ARv..18..279V,2012arXiv1203.6075H,2012ApJ...750...66J,2013ApJ...771..116J,2013arXiv1309.1067S} such as merging and accretion of PopIII remnants \citep{2001ApJ...552..459H,2004ApJ...613...36H,2009ApJ...696.1798T,2012ApJ...756L..19W,2013ApJ...772L...3L}, collapse of a dense stellar cluster \citep{2004Natur.428..724P,2008ApJ...686..801O, 2009ApJ...694..302D} and the monolithic collapse of a massive primordial gas cloud \citep{2002ApJ...569..558O,2003ApJ...596...34B,2006ApJ...652..902S,2006MNRAS.370..289B,2006MNRAS.371.1813L,2008MNRAS.391.1961D,2008arXiv0803.2862D,2010MNRAS.402.1249S,2010MNRAS.tmp.1427J,2010ApJ...712L..69S,2011MNRAS.411.1659L,2013arXiv1304.1369C,2013MNRAS.433.1607L,2013ApJ...774...64W,2013arXiv1309.1097L}. The necessary conditions for the direct collapse model are that the gas must be of a primordial composition and the formation of molecular hydrogen remains inhibited. The latter can be achieved in the presence of a strong Lyman Werner flux produced by the stellar populations in the first galaxies \citep{2001ApJ...546..635O,2007MNRAS.374.1557J,2008MNRAS.391.1961D,2010MNRAS.402.1249S,2011MNRAS.410..919J,2010ApJ...712L..69S,2011MNRAS.418..838W,2011A&A...532A..66L,2012MNRAS.425.2854A,2013MNRAS.430..588L}, also see \cite{2012MNRAS.422.2539I,2013A&A...553L...9V}. The potential sites for the direct collapse are the massive primordial halos of $\rm 10^{7}-10^{8}~M_{\odot}$ where the above mentioned conditions can be fulfilled. 
Numerical simulations performed to study the collapse of a protogalactic halo in the presence of a strong Lyman Werner flux show that massive objects can be formed \citep{2003ApJ...596...34B,2008ApJ...682..745W,2009MNRAS.393..858R,2011MNRAS.411.1659L,2013MNRAS.433.1607L}. Furthermore, theoretical models propose that supermassive stars formed as a result of direct collapse are the potential embryos of supermassive black holes \citep{2008MNRAS.387.1649B,2010MNRAS.402..673B,2011MNRAS.414.2751B,2012ApJ...756...93H,2012MNRAS.421.2713B,2013A&A...558A..59S,2013ApJ...768..195W,2013arXiv1308.4457H}. Previous numerical simulations mainly focused on the hydrodynamics of the problem, while the role of magnetic fields during the formation of seed black holes via direct collapse remained largely unexplored. Magnetic fields are expected to influence the formation of black holes by exerting extra magnetic pressure and providing additional means for the transport of angular momentum by magnetic torques. Magnetic pressure may enhance the Jeans mass ($ M_{J,B} \propto B^{3}/ \rho^{2}$) and consequently help in suppressing fragmentation, which is a key requirement for the direct collapse model. The role of magnetic torques is expected to become significant in the central accretion disk, implying the presence of strong rotation measures and enhanced accretion rates. In fact, the detection of strong rotation measures in quasars at $z = 5.3$ indicates the relevance of magnetic fields in the early universe \citep{2012arXiv1209.1438H}. It is further known from observations of nearby active galactic nuclei that magnetic fields play a vital role in the transport of angular momentum \citep{1999Natur.397..324B,Beck05}. The standard model of cosmology does not provide any constraints on the initial magnetic field strength. 
They could be generated via electro-weak or quantum chromodynamics phase transitions \citep{1996PhRvD..53..662B,1989ApJ...344L..49Q} or alternatively, during structure formation via mechanisms such as the Biermann battery effect, the Weibel instability \citep{1950ZNatA...5...65B,1959PhRvL...2...83W,2003ApJ...599L..57S} or thermal fluctuations in the plasma \citep{2012PhRvL.109z1101S}. In addition to the gravitational compression under the constraint of flux freezing, astrophysical dynamos can efficiently amplify the magnetic field, particularly the small scale dynamo which operates by converting the turbulent energy into the magnetic energy \citep{1968JETP...26.1031K,2005PhR...417....1B,Schobera,2013NJPh...15b3017S}. Numerous studies confirm that the small scale dynamo gets excited during structure formation provided that turbulent energy is well resolved \citep{2010A&A...522A.115S,2011ApJ...731...62F,2010ApJ...721L.134S,Schobera,2012ApJ...745..154T,Schoberb,2013NJPh...15a3055B,2013MNRAS.432..668L}. The amplification of magnetic fields by the small scale dynamo was further confirmed by \cite{Federrath11} for higher Mach numbers and by \cite{2012ApJ...760L..28P} for different thermodynamical conditions. A recent study by \cite{2013MNRAS.tmp.2194M} shows that the magnetic field may have a significant impact on the formation of Pop III stars as it strongly influences the fragmentation properties of a gas cloud. In the context of black hole formation via direct collapse, we have shown in our previous study \citep{2013MNRAS.432..668L} that for a Jeans resolution of 64 cells, the small scale dynamo gets excited and exponentially amplifies the magnetic field. It is thus expected that magnetic fields can influence the formation of seed black holes. A recent study by \cite{2014ApJ...782..108S} further shows that the radiation source can aid the generation of magnetic fields. 
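The exponential amplification by the small-scale dynamo discussed above can be illustrated with a minimal sketch. The function name, the growth rate, and the saturation level below are illustrative assumptions of ours, not quantities taken from the simulations:

```python
import math

def dynamo_field(t, B0, growth_rate, B_sat):
    """Kinematic small-scale dynamo toy model: the seed field B0 is
    amplified exponentially at rate `growth_rate` until the field
    saturates at B_sat (a fraction of turbulent equipartition)."""
    exponent = growth_rate * t
    # Cap the exponent before exponentiating so the result never
    # exceeds the saturation level (and math.exp cannot overflow).
    if exponent >= math.log(B_sat / B0):
        return B_sat
    return B0 * math.exp(exponent)
```

For instance, growing a weak seed of $3\times10^{-20}$~G to a hypothetical saturation value of $3\times10^{-11}$~G requires roughly $\ln(10^9)\approx 21$ e-folding times in this toy model, which is why resolving the turbulent growth rate is the critical numerical requirement.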
In this study, we explore for the first time the impact of magnetic fields on the fragmentation properties of atomic cooling halos, the potential birthplaces of supermassive black holes. To accomplish this goal, we perform high-resolution cosmological magnetohydrodynamical simulations for four distinct halos and employ a fixed resolution of 64 cells per Jeans length during the entire course of the simulations. To investigate the impact of saturated magnetic fields on fragmentation, the initial seeds of higher magnetic field strength are selected based on the results of our previous study \citep{2013MNRAS.432..668L}. We employ a constant background Lyman Werner flux of strength $\rm 10^3$ in units of $\rm J_{21}$ and follow the collapse for a few free-fall times by evolving the simulations beyond the formation of the first peak. This study enables us to assess the role of magnetic fields in the assembly of supermassive black holes via direct collapse. This article is organized as follows. In the second section, we describe the numerical methods and simulation setup employed in this work. The main results from this study are presented in the third section. We present our conclusions and a summary of the main findings in the fourth section. \section{Computational methods} The simulations presented here are performed with the publicly available cosmological magnetohydrodynamics code ENZO \citep{2004astro.ph..3044O,2013arXiv1307.2265T}. It is a massively parallel code and very well suited for simulations following the collapse from cosmological scales down to scales of AU. The equations of magnetohydrodynamics (MHD) are solved employing the Harten-Lax-van Leer (HLL) Riemann solver with a piecewise linear reconstruction. The Dedner scheme \citep{2008ApJS..176..467W,2010NewA...15..581W} is employed for divergence cleaning. We start our simulations at $z = 100$ with cosmological initial conditions which are generated using the inits package. 
Our computational volume has a comoving size of 1~$\rm Mpc/h$, and periodic boundary conditions are employed both for magneto-hydrodynamics and gravity. We initially run uniform-grid simulations with $\rm 128^3$ cells to select the most massive halos forming in our computational domain for various random seeds. Simulations are restarted with two additional nested refinement levels, each with a resolution of $\rm 128^3$ cells, centered on the most massive halo. To simulate the evolution of dark matter dynamics, 5767168 particles are initialized, which provides a particle mass resolution of $\rm \sim 600~M_{\odot}$. During the course of the simulations, an additional 27 dynamic refinement levels are employed, which yield an effective resolution of 0.25 AU. Apart from the fixed Jeans resolution of 64 cells, our resolution criteria are based on the gas over-density and the particle mass resolution. Cells exceeding four times the mean baryonic density are marked for refinement. Similarly, grid cells are flagged for refinement if the dark matter density exceeds 0.0625 times $\rho_{DM}r^{l (1+ \alpha)}$, where $r = 2$ is the refinement factor, $l$ is the refinement level, and $\alpha = -0.3$ makes the refinement super-Lagrangian. Although the gravity of the baryons dominates in the core of the simulated halos, the smoothing of dark matter particles becomes essential to avoid spurious heating of the baryons. We smooth particles at scales of 0.68~pc, which corresponds to a refinement level of 14. Our approach is similar to the simulations performed to explore gravitational collapse in previous studies \citep{2008ApJ...682..745W,2012ApJ...745..154T,2013MNRAS.433.1607L,2013ApJ...772L...3L}. The simulations are evolved adiabatically above densities of $\rm 10^{-11}~g/cm^{3}$ after reaching the maximum refinement level to follow the collapse for several dynamical times. Such an approach makes the structures stable on the smallest scales while the collapse proceeds on larger scales. 
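The super-Lagrangian dark matter refinement criterion described above can be written compactly as follows; the function name and the use of the mean dark matter density as input are our own illustrative choices:

```python
def dm_refine_threshold(rho_dm_mean, level, r=2, alpha=-0.3):
    """Dark matter density above which a cell at refinement level
    `level` is flagged for refinement:
        0.0625 * rho_DM * r**(l * (1 + alpha)).
    A negative alpha slows the growth of the threshold with level,
    making the refinement super-Lagrangian."""
    return 0.0625 * rho_dm_mean * r ** (level * (1 + alpha))
```

With $r = 2$ and $\alpha = -0.3$, the threshold at level $l$ grows as $2^{0.7\,l}$ rather than $2^{l}$, so deep levels are triggered at comparatively lower dark matter over-densities.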
We consider these adiabatic cores as proxies for supermassive protostars, which are expected to form at higher densities where cooling is suppressed by the continuum opacity \citep{2001ApJ...546..635O,2008ApJ...686..801O}. In total, we perform eight simulations for four distinct halos, each with a weak and a strong initial seed field. The strengths of the initial seed fields and the properties of the halos are listed in table \ref{table1}. The simulations are compared at a peak density of $\rm 7 \times 10^{-10}~g/cm^3$. Similar to our previous studies \citep{2013MNRAS.432..668L,2013MNRAS.430..588L}, we employ a strong Lyman Werner flux of strength $\rm 10^3$ in units of $\rm J_{21}=~erg~cm^{-2}~s^{-1}~Hz^{-1}~sr^{-1}$ for stellar spectra of $\rm 10^5$~K and ignore the effect of self-shielding. To model the thermal evolution of the gas, the rate equations of $\rm H$,~$\rm H^{+}$,~$\rm He$,~$\rm He^{+}$,~$\rm He^{++}$,~$\rm e^{-}$,~$\rm H^{-}$,~$\rm H_{2}$,~$\rm H_{2}^{+}$ are solved self-consistently within the cosmological simulations.
\begin{table*} \begin{center} \caption{Properties of the simulated halos.} \begin{tabular}{cccccc} \hline \hline Model & Initial mass & Spin parameter & Collapse redshift & Initial magnetic field strength & Fragmentation \\ & $\rm M_{\odot} $ & $\lambda$ & z & [$\rm Gauss $] & (for unsaturated cases) \\ \hline \\ A & $\rm 4.3 \times 10^{6}$ & 0.0309765 & 11.3 & $\rm 3 \times 10^{-20}$, $\rm 3 \times 10^{-11}$ & No\\ B & $\rm 1.0 \times 10^{7}$ & 0.0338661 & 12.8 & $\rm 3 \times 10^{-20}$, $\rm 3 \times 10^{-11}$ & Yes\\ C & $\rm 2.3 \times 10^{7}$ & 0.021782 & 15.9 & $\rm 3 \times 10^{-20}$, $\rm 3 \times 10^{-11}$ & Yes\\ D & $\rm 1.9 \times 10^{7}$ & 0.0084786 & 13.7 & $\rm 3 \times 10^{-20}$, $\rm 3 \times 10^{-11}$ & No\\ \hline \end{tabular} \label{table1} \end{center} \end{table*} \begin{figure*} \hspace{-10.0cm} \centering \begin{tabular}{c} \begin{minipage}{6cm} \includegraphics[scale=0.2]{MHDBmag-comp2.ps} \end{minipage} \end{tabular} \caption{This figure shows the density-weighted magnetic field strength for the four halos at the end of our simulations. The top panels show the non-saturated cases while the bottom panels depict the saturated cases. In both rows, the panels from left to right show the halos A to D as listed in table \ref{table1}.} \label{figh1} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{TimeBmag.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{TimeVRad.ps} \end{minipage} \end{tabular} \caption{The time evolution of the magnetic field strength and radial velocity is shown for halo A, the unsaturated case. Each line color represents a different time, in units of years, as indicated in the legend.
} \label{fig2} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{TimeBmagG.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{TimeVRadG.ps} \end{minipage} \end{tabular} \caption{Same as figure \ref{fig2}. Here we show the time evolution of these quantities for halo C, the unsaturated case.} \label{fig3} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{GRcompresRadius1.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{GRShearBRadius1.ps} \end{minipage} \end{tabular} \caption{The absolute values of the growth rates of magnetic field amplification of all halos are plotted against the radius in this figure. The left panel shows the magnetic growth rate due to compression while the right panel shows the growth rate due to shear. The dotted lines represent unsaturated cases while the solid lines stand for the saturated cases. For the definition of growth rate see the text and references therein. Each color represents a halo. } \label{fig5} \end{figure*} \begin{figure*} \centering \begin{tabular}{c} \begin{minipage}{6cm} \hspace{-1cm} \includegraphics[scale=0.4]{AVGUnGrowthRate.ps} \end{minipage} \end{tabular} \caption{The spherically averaged positive growth rate of the magnetic field for a representative case (unsaturated case, halo A) is plotted against the radius for the earlier and later times. The green line shows the magnetic growth rate at the beginning of the accretion shock while the red line shows the growth rate close to the saturation stage.
For the definition of growth rate see the text and references therein.} \label{fign} \end{figure*} \begin{figure*} \centering \begin{tabular}{c} \begin{minipage}{6cm} \hspace{-1cm} \includegraphics[scale=0.4]{CompShearAmp.ps} \end{minipage} \end{tabular} \caption{The absolute values of the growth rates of magnetic field amplification for a representative halo are plotted against the radius in this figure. The green color shows the magnetic growth rate due to compression while the blue color shows the growth rate due to shear. The dotted lines represent unsaturated cases while the solid lines stand for the saturated cases. For the definition of growth rate see the text and references therein.} \label{fign1} \end{figure*} \begin{figure*} \hspace{-15.0cm} \centering \begin{tabular}{c} \begin{minipage}{3cm} \includegraphics[scale=1.5]{GRate.ps} \end{minipage} \end{tabular} \caption{This figure shows growth rates (absolute values) of magnetic field amplification by the shear (left) and compression (right) for unsaturated cases. The slices of growth rate are shown here centered at the peak density.} \label{figh6} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{SatEBEt1.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{SatEBEKinet1.ps} \end{minipage} \end{tabular} \caption{ The ratio of magnetic to turbulent energy (left panel) and magnetic to kinetic energy (right panel) is shown in the figure. B1, B3, B5, B7 represent the non-saturated cases while B2, B4, B6, B8 stand for the saturated field cases as listed in table \ref{table1}. } \label{fig7} \end{figure*} \begin{figure*} \centering \begin{tabular}{c} \begin{minipage}{6cm} \hspace{-1cm} \includegraphics[scale=0.4]{NewSatEBEt1.ps} \end{minipage} \end{tabular} \caption{The ratio of magnetic to turbulent energy is shown in the figure for the central region.
Dashed and solid lines represent saturated and non-saturated cases as indicated in the legend.} \label{fign2} \end{figure*} \begin{figure*} \vspace{-1.0cm} \hspace{-9.0cm} \centering \begin{tabular}{c} \begin{minipage}{8cm} \includegraphics[scale=0.8]{Halo_profile1.ps} \end{minipage} \end{tabular} \caption{ Radially binned spherically averaged radial profiles are shown for the halos A, B, C and D. The solid lines represent saturated cases while the dashed lines stand for non-saturated cases. Top left and right panels show the enclosed density and mass radial profiles. The accretion rates and magnetic field strength radial profiles are depicted in the bottom left and right panels.} \label{figh2} \end{figure*} \begin{figure*} \hspace{-13.0cm} \centering \begin{tabular}{c} \begin{minipage}{6cm} \includegraphics[scale=0.22]{MHDDensity-comp2.ps} \end{minipage} \end{tabular} \caption{The state of the simulations is represented by the density-weighted mean along the axis of projection at the central peak density of $\rm 7 \times 10^{-10}~g/cm^{3}$. Non-saturated cases (top panel) and saturated cases (bottom panel) are shown for halos A to D (starting from the left).} \label{figh3} \end{figure*} \begin{figure*} \hspace{-13.0cm} \centering \begin{tabular}{c} \begin{minipage}{6cm} \includegraphics[scale=0.22]{MHDDensityG-comp.ps} \end{minipage} \end{tabular} \caption{The time evolution of the density-weighted mean along the axis of projection for the halo C.
The time in years after the formation of the first peak is shown in each case for the central 770 AU.} \label{figh31} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{MagSupport.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{SatMagSupport.ps} \end{minipage} \\ \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{MagSupportD.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{SatMagSupportD.ps} \end{minipage} \end{tabular} \caption{The figure shows the contribution of the support terms for both saturated (right) and non-saturated (left) cases. The upper and lower panels represent two different halos. The solid lines represent the positive support of the quantities while the dashed lines represent the negative support. For the definitions of the support terms see the text. The local support by thermal, turbulent and magnetic fields is shown in this figure.} \label{fig9} \end{figure*} \section{Main Results} In all, we have performed 8 cosmological magnetohydrodynamics simulations for four distinct halos, each with an initial magnetic field strength of $\rm 3 \times 10^{-20}$ G (hereafter called non-saturated cases) and $\rm 3 \times 10^{-11}$ G (hereafter called saturated cases). As shown by \cite{2013MNRAS.432..668L}, the latter implies an approximate equipartition between magnetic and turbulent energy at densities of $\rm 10^{-12}~g/cm^{3}$. The lower value is characteristic for magnetic field generation via the Biermann battery \citep{1950ZNatA...5...65B} or through thermal fluctuations \citep{2012PhRvL.109z1101S}, while the higher value may occur if magnetic fields are generated during the QCD or electroweak phase transition \citep{2004PhRvD..70l3003B}. The initial masses and collapse redshifts of the halos are listed in table \ref{table1}. The density perturbations decouple from the Hubble flow and start to collapse via gravitational instability.
The gas falls into the dark matter potential and gets shock heated. This process continues until the gas temperature exceeds $10^4$~K, where cooling due to Lyman alpha radiation becomes effective and brings the temperature down to 8000 K. Further cooling to lower temperatures remains suppressed due to the photo-dissociation of $\rm H_{2}$ molecules by the strong Lyman-Werner flux. Consequently, an isothermal collapse occurs. The density profile follows an $R \rm ^{-2}$ behavior as expected for an isothermal collapse. The radial infall velocity is about 10 $\rm km/s$. Overall, the collapse dynamics is similar to our previous studies during the initial phases \citep{2013MNRAS.432..668L,2013MNRAS.433.1607L}. In the following, we explore the amplification of magnetic fields during the collapse and their impact on fragmentation. \subsection{Amplification of magnetic fields} The simulations were started with the initial seed magnetic fields listed in table \ref{table1}. In all runs, the magnetic field is mainly amplified by gravitational compression below densities of $\rm 10^{-12}~g/cm^{3}$. In this regime, the magnetic field strength scales as $B \propto \rho^{2/3}$, and the strength of the magnetic field at the end of our simulations remains much weaker in the non-saturated runs compared to the saturated cases, as shown in figure \ref{figh1}. It is found that the strength of the magnetic fields becomes almost equal at densities of $\rm 7 \times 10^{-10}~g/cm^3$ for both the saturated and non-saturated cases during the transition to the adiabatic evolution after reaching the maximum refinement level. This is evident from figure \ref{figh1}. As we will show in the following, this rapid amplification in the non-saturated runs is due to the occurrence of strong accretion shocks. At densities above $\rm 10^{-11}~g/cm^3$, the evolution becomes adiabatic, and a stable core is formed which reaches the state of hydrostatic equilibrium.
This core is considered as a proxy for a supermassive protostar. The infall of the gas onto the central core results in the formation of accretion shocks. To investigate the rapid amplification of non-saturated magnetic fields in accretion shocks, we show the time evolution of the magnetic field strength and the radial velocity profiles for two representative cases in figures \ref{fig2} and \ref{fig3}. It can be noted from the figures that the amplification of magnetic fields is closely related to the radial infall velocity. The radial velocity increases from 10 $\rm km/s$ to 30 $\rm km/s$ (or even higher) within a time scale of about 1 year. Similarly, the field strength is amplified by a few orders of magnitude during the same time. The profile of the radial infall velocity is smooth in the beginning and becomes sharper as accretion shocks are formed. The sharp jump in the radial velocity profile around 100 AU is a typical signature of accretion shocks. It may further be noted that the increase in density corresponding to the different times in figures \ref{fig2} and \ref{fig3} is only about an order of magnitude. Gravitational compression alone would increase $B$ with $\rho^{2/3}$, i.e., by only a factor of $10^{2/3} \approx 4.6$, which cannot explain such a large increase in the magnetic field strength. Amplification by gravitational compression would, furthermore, be more homogeneous within the Jeans volume. Thus, the amplification is due to the accretion shocks. As demonstrated by figure \ref{fig2}, the initial amplification occurs at the shock front enclosing the core and subsequently grows inside the core until the end of the simulation is reached. Apart from turbulent diffusion, amplification by shear can contribute to the growth of the magnetic field in the core. Additional contributions may also come from the advection of the magnetic field into the core and diamagnetic pumping from a gradient in the turbulent intensity.
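The compression argument above can be made quantitative with a one-line sketch of the $B \propto \rho^{2/3}$ scaling; the density values in the usage line are illustrative:

```python
def compressed_field(B0, rho0, rho):
    """Magnetic field amplified purely by gravitational compression during
    isotropic collapse with flux freezing: B ∝ rho^(2/3)."""
    return B0 * (rho / rho0) ** (2.0 / 3.0)

# An order-of-magnitude jump in density boosts B by only 10^(2/3) ~ 4.6,
# far short of the few orders of magnitude seen across the accretion shock.
gain = compressed_field(1.0, 1.0, 10.0)
```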
In order to understand the different contributions to the magnetic field amplification, we have computed the growth rates of amplification both by turbulent shear and compression for all halos. The rate of change of the magnetic pressure in a fluid element moving with the flow can be computed from the source terms of the induction equation \citep{2013MNRAS.431.3196S}: \begin{equation} {D \over Dt} \left( {B^{2} \over 8 \pi} \right) = {1 \over 4 \pi}\left(B_{i}B_{j}S_{ij}^{*} - {2 \over 3}B^{2}d \right), \label{Bpres} \end{equation} where $\rm{D \over Dt}$ is $\rm {\partial \over \partial t} + v \cdot \nabla$, $d=\nabla \cdot v$ is the velocity divergence and $S_{ij}^{*}=S_{ij}-1/3 d\delta_{ij}$ is the trace-free rate of strain tensor. Dividing both sides of equation (\ref{Bpres}) by $B^2/8\pi $, the first term on the right-hand side represents the growth rate of magnetic energy by shear while the second term is the growth rate by compression (both due to gravity and shocks). The absolute values of the growth rates by shear and compression are shown in figure \ref{fig5}. They increase towards smaller radii, peak around 100 AU and decline toward the center. Such a trend is observed for all halos, both for the saturated and the non-saturated cases. To further elucidate the differences in the amplification rates for the saturated and unsaturated cases, we have overplotted the amplification rates by the shear and compression for a representative case in figure \ref{fign1}. The plot shows that the growth rate is higher for the unsaturated case. It is also noted that the growth rate is higher for shear compared to the compression. In figure \ref{fign}, we show the spherically averaged positive growth rate for a representative case (i.e., halo A, unsaturated run) at the start of the accretion shock and close to the saturation state. The strongest amplification occurs at the accretion shock on scales of a few 100 AU. 
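Dividing equation (\ref{Bpres}) by $B^2/8\pi$ as described above yields local rates that can be evaluated directly from the field vector and the velocity gradient tensor; a minimal sketch, in which the uniformly converging test flow is an illustrative example, not data from the simulations:

```python
import numpy as np

def growth_rates(B, grad_v):
    """Shear and compression growth rates of the magnetic energy obtained
    by dividing the source terms of
        (D/Dt)(B^2/8pi) = (1/4pi)(B_i B_j S*_ij - (2/3) B^2 d)
    by B^2/8pi.  B: field vector (3,); grad_v[i, j] = dv_i/dx_j."""
    d = np.trace(grad_v)                    # velocity divergence
    S = 0.5 * (grad_v + grad_v.T)           # rate-of-strain tensor
    S_star = S - (d / 3.0) * np.eye(3)      # trace-free part
    shear = 2.0 * B @ S_star @ B / (B @ B)  # growth rate by shear
    compression = -(4.0 / 3.0) * d          # growth rate by compression
    return shear, compression

# A uniformly converging flow (grad_v = -I) has no trace-free strain,
# so it amplifies the field by compression only:
shear, comp = growth_rates(np.array([1.0, 0.0, 0.0]), -1.0 * np.eye(3))
```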
When saturation occurs in the central region, the growth rate declines in the core and the peak in the growth rate shifts towards larger radii. This is a clear indication of magnetic field saturation on small scales while amplification still proceeds on scales larger than 100 AU. The inverse of the growth rate gives the amplification time scale, which locally decreases to less than 0.1 year. The amplification time scale for compression is about half of the shear amplification time scale. We note here that even for compressively driven turbulence part of the energy (about $\rm 1/3-1/2$) lies in the solenoidal modes \citep{2010A&A...512A..81F}, thus naturally providing a two-to-one ratio of compressive and shear modes. The local variations in the growth rates are shown in figure \ref{figh6} for a representative case. The very short amplification time scale shows that the magnetic field can be amplified very rapidly in the presence of strong accretion shocks. In the center of the core, however, the amplification by compression and shear is weak. This suggests that the growth of the magnetic field inside the core (see figures \ref{fig2} and \ref{fig3}) is mainly caused by advection and turbulent diffusion. To further assess the amplification and saturation of magnetic fields, we have computed the ratio of the magnetic to turbulent energy, as shown in figure \ref{fig7}. It increases with density both for the saturated and the non-saturated cases. At the strong accretion shock the magnetic field amplification time scale becomes very short and rapid amplification happens for both saturated and non-saturated cases until the magnetic energy becomes comparable to the turbulent energy. To further clarify the differences between the saturated and unsaturated cases, we have plotted the ratio of magnetic to turbulent energy for the central region.
This figure shows that the magnetic energy is in equipartition with the turbulent energy and the magnetic field gets saturated, as evident from the change in the slope of $\rm E_{B}/E_{turb}$. The ratio of the magnetic to the total kinetic energy is depicted in the right panel of figure \ref{fig7}. It initially increases with density, gets enhanced rapidly by accretion shocks, reaches a peak value of $\rm 10^{-1}$ and then declines, which is an indication of magnetic field saturation. It is further noted that saturation occurs around densities of $\rm 10^{-10}~g/cm^{3}$, which is deduced from the change in the slope of the magnetic to kinetic energy ratio. In the saturated cases, the amplification is many orders of magnitude lower compared to their non-saturated counterparts, which reach the same field strength from a much smaller initial value. This is expected, as the initial seed field is already in approximate equipartition with the turbulent energy, implying amplification predominantly by gravitational compression. \subsection{Implications for the formation of seed black holes} The central properties of the halos at their collapse redshifts are shown in figure \ref{figh2}. The density profile shows an $R^{-2}$ behavior, as expected from an isothermal collapse, and becomes flat in the central adiabatic core. This trend is observed for all cases. The small bumps in the density profiles for the non-saturated cases are due to the formation of additional clumps. The maximum value of the density is $\rm 7 \times 10^{-8}~g/cm^{3}$. The mass profile increases with $R^{2}$ in the center, becomes flat around 100 AU and then increases linearly with radius. The mass profiles are very similar for the saturated and non-saturated cases. The mass accretion rates are about 1 $\rm M_{\odot}/yr$ at larger radii and drop down to $\rm 10^{-3}~M_{\odot}/yr$ in the central adiabatic core.
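The radially binned, spherically averaged profiles discussed above can be produced with a reduction of the following form; the array shapes, the synthetic centrally peaked test cloud, and the bin count are illustrative and not the actual analysis pipeline:

```python
import numpy as np

def radial_profile(field, rho, dx, nbins=16):
    """Radially binned, spherically averaged profile of `field` around the
    density peak.  `field` and `rho` are 3D arrays on a uniform grid with
    cell size dx."""
    peak = np.unravel_index(np.argmax(rho), rho.shape)
    idx = np.indices(rho.shape)
    r = dx * np.sqrt(sum((idx[i] - peak[i]) ** 2 for i in range(3)))
    edges = np.linspace(0.0, r.max() + 1e-12, nbins + 1)
    which = np.digitize(r.ravel(), edges) - 1
    f = field.ravel()
    prof = np.array([f[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(nbins)])
    return 0.5 * (edges[1:] + edges[:-1]), prof

# Example: a centrally peaked cloud falling roughly as r^-2 should give a
# monotonically declining density profile.
n = 33
ax = np.arange(n) - n // 2
r2 = sum(np.meshgrid(ax, ax, ax, indexing="ij")[i] ** 2 for i in range(3))
rho = 1.0 / (1.0 + r2)
centers, prof = radial_profile(rho, rho, dx=1.0)
```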
The profile of the magnetic field strength shows that, irrespective of the initial seed field, the magnetic field reaches the saturation value in the presence of strong accretion shocks. Overall, the halo properties are in good agreement with previous studies \citep{2013MNRAS.433.1607L,2013arXiv1309.1097L}. The state of the simulations is shown by the density-weighted mean along the projection axis for the four distinct halos in figure \ref{figh3}. It is found that massive clumps of a few hundred solar masses are formed in every halo, both for the saturated and the non-saturated runs. In addition to this, fragmentation is observed in two halos for the non-saturated cases. The masses of these clumps are a few tens of solar masses (20 $\rm M_{\odot}$ and 30 $\rm M_{\odot}$) and they are gravitationally bound. The suppression of fragmentation in the saturated cases is attributed to the additional magnetic pressure on larger scales. The time evolution of the density structure for halo C is shown in figure \ref{figh31}. The initially turbulent cloud collapses to form a massive clump within a few years. It keeps accreting gas from its surroundings and the formation of an additional clump can be seen after 10 years of evolution. We have investigated the impact of magnetic fields on the fragmentation properties of these halos and computed the local support terms for magnetic fields. The local support is derived from the source terms of the differential equation for the rate of compression of the gas \citep{2013MNRAS.431.3196S}: \begin{equation} -{D d \over Dt}= 4 \pi G \rho_{0}\delta -\Lambda. \end{equation} Here, $\delta$ is the overdensity relative to the mean density $\rho_0$ and $\Lambda$ is the local support against gravitational compression. $\Lambda$ receives contributions from thermal pressure, resolved turbulence, and the magnetic fields.
The support by magnetic fields is \citep{2013MNRAS.431.3196S}: \begin{dmath} \Lambda_{\rm mag} = {1 \over 4 \pi \rho} \left[-{ \partial^{2} \over \partial x_{i} \partial x_{j}} \left( {1 \over 2} B^{2} \right) + {\partial B_{i} \over \partial x_{j}} {\partial B_{j} \over \partial x_{i}} \right] + \\ {1 \over 4 \pi \rho^{2}} {\partial \rho \over \partial x_{i}} \left[ { \partial \over \partial x_{i}} \left( {1 \over 2} B^{2} \right) + B_{j}{\partial B_{i} \over \partial x_{j}} \right] \end{dmath} For the definition of the thermal and turbulent support terms see \cite{2013MNRAS.431.3196S}, while a first application is presented by \cite{2013MNRAS.433.1607L}. Like the other support terms, the magnetic support has positive and negative components. The positive components provide support against gravity while the negative components aid gravitational compression. The contributions of the local support terms against gravity are shown for two representative cases in figure \ref{fig9}. It is important to note that the positive support by the magnetic field dominates over the turbulent and thermal pressure support in the vicinity of the accretion shocks at radii around 100 AU. The support by turbulence is dominated by the negative contribution from compression by accretion shocks. Negative support is a characteristic of compressible turbulence, particularly in the presence of shocks. For a detailed discussion of negative turbulent support see \cite{2013MNRAS.431.3196S}. Even stronger support comes from the thermal pressure, while the magnetic support is sub-dominant near the center. For the saturated field cases, the large positive support from magnetic fields helps in the suppression of fragmentation on radial scales ranging from less than 100 to about 1000 AU, which encompasses the fragmentation scale in figure \ref{figh3}. Particularly at radii outside the accretion shock, this is a result of the initially larger magnetic field.
For the unsaturated case, the amplification of the magnetic field produces support comparable to the saturated case only for a narrower range of scales around 100 AU. As numerical simulations tend to underestimate the physical amplification rate, a final conclusion on the role of initial field strength and dynamo support is, however, not possible at this stage. \section{Discussion} In total, we have performed 8 cosmological MHD simulations to investigate the role of magnetic fields during the formation of supermassive black holes. The simulations were carried out for four distinct halos with initial seed magnetic fields of $\rm 3 \times 10^{-20}$ G and $\rm 3 \times 10^{-11}$ G. The main motivation for the selection of the stronger magnetic field strength was to explore the impact of saturated magnetic fields on the fragmentation properties of so-called atomic cooling halos. To achieve this goal, we evolved the simulations adiabatically beyond the formation of the first peak for a few free-fall times until they reached the same peak density of $\rm 7 \times 10^{-10}~g/cm^{3}$. Our results show that irrespective of the initial seed field strength, the magnetic field gets amplified very rapidly in the presence of strong accretion shocks. This is indicated by the short time scale for compressive amplification compared to the free-fall time. The amplification is mainly caused by the shock fronts and the magnetic field is subsequently transported into the core by turbulent diffusion and similar processes until the magnetic energy grows to equipartition with kinetic energy. We therefore report a new mode of magnetic field amplification by the accretion shocks in atomic cooling halos as well as a possible contribution from the compressive turbulent modes driven by the accretion process. 
We further note that, while the adiabatic cores in our simulations were introduced to follow the collapse beyond the first peak, very similar cores are expected to form during the formation of protostars at higher densities \citep{2001ApJ...546..635O,2008ApJ...686..801O}. It is thus desirable to extend the calculations pursued here to the formation of protostars. We also emphasize that the turbulent amplification of magnetic fields depends strongly on the Reynolds number of the flow \citep{1968JETP...26.1031K,1998MNRAS.294..718S}. Since we cannot resolve all length scales down to the physical dissipation length scale, the actual amplification is probably even stronger. Such rapidly amplified magnetic fields may suppress fragmentation on even larger scales than shown in these simulations. Currently, however, fully resolved simulations are infeasible. A possible solution to this problem might be the application of subgrid-scale models for MHD turbulence. Our results indicate that magnetic fields are relevant for the formation of seed black holes, as they help in the suppression of fragmentation via additional magnetic pressure. The masses of the clumps at the end of our simulations are a few hundred solar masses and large accretion rates of about $\rm 1~M_{\odot}/yr$ are observed. Given such high accretion rates, these objects are expected to reach $\rm 10^5~M_{\odot}$ within a short time. The amount of fragmentation is significantly less compared to the hydrodynamical simulations performed in our previous study \citep{2013MNRAS.433.1607L}. The peak density reached in the MHD simulations is about 13 times lower than in the hydrodynamical case. Further differences may arise from the use of different Riemann solvers. We evolved these simulations only for a few free-fall times after the formation of the first peak. Further evolution of such high-resolution simulations becomes extremely costly due to the Courant constraints on the computation of the timestep.
However, we expect that the presence of the magnetic fields will be favorable for the formation of massive seed black holes as it suppresses the fragmentation. We have also shown in recent studies that subgrid scale turbulence helps in the formation of stable accretion disks and assembling massive objects of $\rm 10^5~M_{\odot}$ in 20,000 years via rapid accretion \citep{2013MNRAS.433.1607L,2013arXiv1309.1097L}. The presence of subgrid scale MHD turbulence may further help in the formation of accretion disks in magnetized halos. As our previous results indicated that accretion stalls when $\rm \sim 10^{5}~M_{\odot}$ are reached because of an increase in the rotational support, we speculate that magnetic fields may enhance angular momentum transport and increase the final mass scale. This requires cosmological MHD simulations following the accretion for even longer times. \section*{Acknowledgments} The simulations described in this work were performed using the Enzo code, developed by the Laboratory for Computational Astrophysics at the University of California in San Diego (http://lca.ucsd.edu). We acknowledge research funding by Deutsche Forschungsgemeinschaft (DFG) under grant SFB $\rm 963/1$ (projects A12, A15) and computing time from HLRN under project nip00029. DRGS thanks the DFG for funding via the Schwerpunktprogram SPP 1573 ``Physics of the Interstellar Medium'' (grant SCHL $\rm 1964/1-1$). The simulation results are analyzed using the visualization toolkit for astrophysical data YT \citep{2011ApJS..192....9T}.
\section{Introduction} Randomized benchmarking (RB) \cite{Knill:2008fi,Magesan:2011bx,Magesan:2012to,Magesan:2012tn,Epstein:2014oe} has become a standard method for characterizing gate error rates in quantum computing. The RB protocol is simple: prepare an initial state, apply a sequence of quantum gates, measure the fidelity of the final state, repeat with sequences of increasing length, and fit the fidelity versus sequence length to an exponential to obtain the average gate error rate. The simplicity and efficiency of the protocol has led to its widespread use in many qubit technologies, including superconductors \cite{Chow:2009kv,Magesan:2012to,Barends:2014bq}, ions \cite{Knill:2008fi,Gaebler:2012kt,Harty:2014df}, solid state quantum dots \cite{Veldhorst:2014pd,Fogarty:2015vo}, and atomic nuclei \cite{Pla:2013az}. Though the protocol was originally developed to characterize uncorrelated Markovian noise, the qubit systems where RB has been used are generally subject to non-Markovian correlated noise \cite{Wellstood:1987qt,Yoshihara:2006wk, Bylander:2011kl,Slichter:2012am,Anton:2013hm, Taylor:2007im,Veldhorst:2014pd,Eng:2015kr,Fogarty:2015vo}. In this paper we analyze the effects of correlated noise on RB. We show a surprising formal equivalence between single qubit RB and a long-range coupled spin-one Ising model. The connection to the Ising model leads to a determinant formula that gives RB sequence fidelity decays ranging from exponential to power law, depending on noise correlations. The effects of correlated noise on RB have previously been examined, both analytically and numerically. Magesan et al.\ \cite{Magesan:2012tn} as well as Wallman and Flammia \cite{Wallman:2014xc} demonstrated the robustness of the RB estimated error rate to weakly time dependent gate noise. For $1/f$ correlated noise, Epstein et al.\ \cite{Epstein:2014oe} showed through numerical simulations that RB gives an error rate within a factor of two of the true average gate error rate. 
For general Hamiltonian-driven correlated noise, Ball et al.\ \cite{Ball:2016cs} derived sequence fidelity probability density functions valid to linear order in the product of sequence length $N$ and average gate error rate $\varepsilon$. The determinant formula for the average sequence fidelity that we derive is valid for general $N$, allowing us to quantify the degree to which sequence fidelity is nonexponential. Our analysis shows that the expected RB sequence fidelity is given by a spin-one Ising model partition function. The Ising model has effective coupling strengths given by the covariance matrix of error phases accumulated in benchmarking intervals and an effective temperature given by the inverse of the gate error rate. We use the techniques of statistical field theory \cite{Amit:2005lk} to obtain a high temperature series expansion for the RB sequence fidelity, and also to make explicit the relationship between RB and random dynamical decoupling \cite{Viola:1999fe,Viola:2005rp}. The lowest order term in the high temperature expansion takes the form of a finite rank Toeplitz determinant \cite{Fisher:2007zp}, giving $N$ dependence in the sequence fidelity varying from exponential for uncorrelated noise to power law for quasistatic noise. Because fitting the power law decay to an exponential produces unreliable results, we propose an alternative fitting procedure based on the observation that the initial fidelity decay is independent of noise correlations \cite{Ball:2016cs}. Finally, we show that the formal equivalence between RB and long-range coupled Ising models extends to benchmarking of $d$-state systems, with Ising model site variables that are the $d^2 - 1$ weights of the adjoint representation of {SU($d$)}. The expected value of the RB sequence fidelity $P_0$ is given by the noise-averaged $N$-fold composition of the twirled (group-averaged) free evolution operator $\mathcal{R}$: \begin{equation} P_0 = \left\langle \mathrm{tr}\left(\rho_0. 
\mathcal{R}^{(N)}\circ\ldots\circ\mathcal{R}^{(2)}\circ\mathcal{R}^{(1)}(\rho_0)\right) \right\rangle_\mathrm{noise}, \label{eq:P0} \end{equation} for an initial density matrix $\rho_0$ that is a pure state. The twirled free evolution operator in interval $n$ is defined as \begin{equation} \mathcal{R}^{(n)}(\rho) = \int_{\mathrm{U}(d)} dU U^\dagger F_n U \rho U^\dagger F_n^\dagger U, \label{eq:RDefinition} \end{equation} where U($d$) is the $d$-dimensional unitary group, $dU$ is the Haar measure for U($d$), and $F_n$ is the noisy unitary free evolution in interval $n$. The twirled free evolution operator is implemented experimentally using a unitary 2-design for U($d$), converting the integral over the Haar measure in Eq.~(\ref{eq:RDefinition}) to a finite sum over the design. For qubit systems, the 2-design is usually the Clifford group. Eq.~(\ref{eq:P0}) then contains two averages: a noise average and a group (or sequence) average. We assume that these averages are independent so that each sequence sees all noise realizations. Experimentally, there may be noise correlations between sequence executions not fully captured by the present formalism. For a single qubit subject to Hamiltonian noise, the unitary free evolution operator is $F_n = e^{-i \frac{\theta_n}{2} \hat{m}.\boldsymbol{\sigma}}$, where $\theta_n$ is the error phase accumulated in free evolution interval $n$, $\hat{m}$ is a unit vector giving the axis of rotation in 3-space, and $\boldsymbol{\sigma}$ is the vector of Pauli matrices. Taking as a basis for the space of density matrices the identity matrix $\sigma_0$ and Pauli matrices $\{\sigma_1,\sigma_2,\sigma_3\}$, the matrix representation of the twirled free evolution operator Eq.~(\ref{eq:RDefinition}) is $R^{(n)}_{00}=1$, $R^{(n)}_{ij}=\frac{1}{3}(1+2\cos\theta_n)\delta_{ij}$ for $i,j = 1,2, \mathrm{or\ } 3$; all other components are 0 \cite{Epstein:2014oe}. 
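As a concrete numerical check of this matrix representation (a sketch added here, not part of the original analysis; the phase values are illustrative), the following plain-Python snippet applies the twirled map repeatedly to the Bloch vector of $\rho_0=|0\rangle\langle0|$ and confirms that the surviving fidelity is $\frac{1}{2}+\frac{1}{2}\prod_n\frac{1}{3}(1+2\cos\theta_n)$ for a single noise realization:

```python
import math

def twirled_map(bloch, theta):
    """Matrix representation of R^(n): identity component unchanged,
    spatial (Pauli) components scaled by (1 + 2*cos(theta)) / 3."""
    f = (1.0 + 2.0 * math.cos(theta)) / 3.0
    return [bloch[0]] + [f * c for c in bloch[1:]]

def sequence_fidelity(thetas):
    """tr(rho0 rho_N) for rho0 = |0><0|, Bloch components (1, 0, 0, 1)."""
    r = [1.0, 0.0, 0.0, 1.0]
    for th in thetas:
        r = twirled_map(r, th)
    return 0.5 * (r[0] + r[3])   # tr(rho0 rho) = (1 + z)/2

thetas = [0.11, -0.23, 0.05, 0.17]   # illustrative error phases
prod = 1.0
for th in thetas:
    prod *= (1.0 + 2.0 * math.cos(th)) / 3.0
print(sequence_fidelity(thetas), 0.5 + 0.5 * prod)
```

The noise average of the product then gives Eq.~(3).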
For any initial pure-state density matrix the repeated application of the twirled free evolution map results in a sequence fidelity of \begin{equation} P_0 = \frac{1}{2} + \frac{1}{2} \left\langle\prod_{n=1}^{N}\frac{1}{3}\left(1+2\cos\theta_n\right) \right\rangle_{\mathrm{noise}} \equiv \frac{1}{2} + \frac{1}{2}\mathcal{Z}, \label{eq:singleQubitP0} \end{equation} where we have defined $\mathcal{Z}$ as the noise-averaged product. Assuming that the length $N$ vector of accumulated error phases ${\ensuremath{\bm{\theta}}}= \{\theta_n\}_{n=1}^N$ is Gaussian distributed, \begin{equation} \mathcal{Z} = \int_{{\ensuremath{\mathbb{R}^N}}} d{\ensuremath{\bm{\theta}}} \left[ \prod_{n=1}^{N}\frac{1}{3}\left(1+2\cos\theta_n\right) \right] \frac{e^{-\frac{1}{2}({\ensuremath{\bm{\theta}}}-{\ensuremath{\bm{\mu}}}).{\ensuremath{\bm{\chi}}}^{-1}.({\ensuremath{\bm{\theta}}}-{\ensuremath{\bm{\mu}}})}}{\sqrt{(2\pi)^N |{\ensuremath{\bm{\chi}}}|}}, \label{eq:singleQubitZ} \end{equation} where ${\ensuremath{\bm{\mu}}}$ is the vector of mean accumulated error phases, ${\ensuremath{\bm{\chi}}}$ is the positive semi-definite covariance matrix of the noise, and $|{\ensuremath{\bm{\chi}}}|$ is the determinant of ${\ensuremath{\bm{\chi}}}$. In the case that ${\ensuremath{\bm{\chi}}}$ has zero eigenvalues, we take the distribution as giving Dirac delta functions in the directions of the eigenvectors whose associated eigenvalues are zero. We assume that the mean accumulated error phase in each interval is the same, so that ${\ensuremath{\bm{\mu}}} = \theta_0 {\ensuremath{\vec{\mathbf{1}}}}$, where ${\ensuremath{\vec{\mathbf{1}}}}$ is the length $N$ vector of all ones. Before evaluating $\mathcal{Z}$ for general covariance matrices, we examine the uncorrelated (Markovian) and quasistatic noise limits. 
For noise that is uncorrelated between intervals and identical on each interval, the covariance matrix is ${\ensuremath{\bm{\chi}}} = 2\beta {\ensuremath{\bm{I}}}$, where $\beta$ parameterizes the strength of the noise ($\beta=3\varepsilon$, with $\varepsilon$ the average gate error rate) and ${\ensuremath{\bm{I}}}$ is the $N\times N$ identity matrix. Because the covariance matrix is diagonal, the $N$-dimensional integral in Eq.~(\ref{eq:singleQubitZ}) becomes the product of $N$ identical integrals: \begin{equation} \mathcal{Z}_\mathrm{uncorrelated} = \left( \frac{1+2 e^{-\beta}\cos\theta_0}{3} \right)^N, \label{eq:exactUncorrelatedZ} \end{equation} leading to the standard exponential decay in sequence fidelity for DC and/or uncorrelated noise. For quasistatic noise, the covariance matrix ${\ensuremath{\bm{\chi}}} = 2\beta {\ensuremath{\vec{\mathbf{1}}}}\otimes{\ensuremath{\vec{\mathbf{1}}}}$ is singular: it has one eigenvalue of $2 \beta N$ and $N-1$ zero eigenvalues. Integrating over the delta functions associated with the zero eigenvalues leaves the single integral: \begin{eqnarray} \mathcal{Z}_\mathrm{quasistatic} &=& \int_\mathbb{R} d\theta \left[ \frac{1+2\cos\theta}{3} \right]^N \frac{e^{-\frac{(\theta-\theta_0)^2}{4 \beta}}}{\sqrt{4\pi \beta}}\\ &&\hspace*{-2cm}=\frac{1}{3^N}\left[ \binom{N}{0}_2 + 2 \sum_{k=1}^N \binom{N}{k}_2 e^{-\beta k^2}\cos k\theta_0 \right], \label{eq:exactQuasistaticZ} \end{eqnarray} where $\binom{N}{k}_2$ is a trinomial coefficient \cite{Euler:1911hs}. These exact expressions for uncorrelated and quasistatic noise are useful in assessing the accuracy of approximate expressions for the sequence fidelity presented below. For both uncorrelated and quasistatic noise, the case of a single free evolution interval $N=1$ allows us to identify $\beta$ as parameterizing the envelope decay of a driven oscillation $\cos\theta_0$.
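Both closed forms can be cross-checked numerically; the sketch below (illustrative parameter values, trapezoidal quadrature in place of the Gaussian averages) verifies the per-interval average behind Eq.~(\ref{eq:exactUncorrelatedZ}) and the equality of the quasistatic integral with its trinomial-coefficient sum, Eq.~(\ref{eq:exactQuasistaticZ}):

```python
import math

def gauss_average(func, mean, var, lo=-2.0, hi=2.0, steps=40001):
    """Trapezoidal average of func(theta) over a Gaussian N(mean, var)."""
    h = (hi - lo) / (steps - 1)
    total = 0.0
    for i in range(steps):
        th = lo + i * h
        w = math.exp(-(th - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
        val = func(th) * w
        total += val if 0 < i < steps - 1 else 0.5 * val
    return total * h

beta, theta0, N = 0.01, 0.1, 20

# Uncorrelated noise: each interval contributes (1 + 2 e^{-beta} cos theta0)/3.
one_interval = gauss_average(lambda th: (1 + 2 * math.cos(th)) / 3, theta0, 2 * beta)
closed_uncorr = (1 + 2 * math.exp(-beta) * math.cos(theta0)) / 3

# Quasistatic noise: single Gaussian average of the N-th power equals the
# trinomial sum.  Build coefficients of (1 + x + x^2)^N by convolution;
# binom(N, k)_2 is the coefficient at index N + k.
c = [1]
for _ in range(N):
    c = [sum(c[j] for j in range(max(0, i - 2), min(i, len(c) - 1) + 1))
         for i in range(len(c) + 2)]
z_grid = gauss_average(lambda th: ((1 + 2 * math.cos(th)) / 3) ** N, theta0, 2 * beta)
z_tri = (c[N] + 2 * sum(c[N + k] * math.exp(-beta * k * k) * math.cos(k * theta0)
                        for k in range(1, N + 1))) / 3 ** N
print(one_interval, closed_uncorr, z_grid, z_tri)
```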
Defining $\tau$ to be the length in time of the free evolution interval, $\beta$ is related to $T_2^*$ through $\beta \approx (\frac{\tau}{T_2^*})^\alpha$, with the exponent $\alpha$ and characteristic noise time $T_2^*$ depending on the noise process, while $\theta_0 = \omega \tau$ for a DC field characterized by strength $\omega$. Returning to the general expression Eq.~(\ref{eq:singleQubitZ}), we rewrite the product of $1+2\cos\theta_n$ terms as a sum of cosines: \begin{equation} \prod_{n=1}^N \left(1+2\cos\theta_n\right) = \sum_{{\ensuremath{\bm{g}}} \in \{-1,0,1\}^N} \cos {\ensuremath{\bm{g}}}.{\ensuremath{\bm{\theta}}}. \end{equation} Here ${\ensuremath{\bm{g}}}$ is a length $N$ vector each of whose components is $1$, $-1$, or $0$; the sum is over all $3^N$ configurations of ${\ensuremath{\bm{g}}}$. The validity of this equality can be shown by induction using the cosine addition formula. Eq.~(\ref{eq:singleQubitZ}) then takes the form \begin{eqnarray} \mathcal{Z} &=& \frac{1}{3^N}\sum_{{\ensuremath{\bm{g}}} \in \{-1,0,1\}^N} \int_{\ensuremath{\mathbb{R}^N}} d{\ensuremath{\bm{\theta}}} \cos{\ensuremath{\bm{g}}}.{\ensuremath{\bm{\theta}}} \frac{e^{-\frac{1}{2}({\ensuremath{\bm{\theta}}}-{\ensuremath{\bm{\mu}}}).{\ensuremath{\bm{\chi}}}^{-1}.({\ensuremath{\bm{\theta}}}-{\ensuremath{\bm{\mu}}})}}{\sqrt{(2\pi)^N |{\ensuremath{\bm{\chi}}}|}} \label{eq:HSTransformation} \\ &=& \frac{1}{3^N}\sum_{{\ensuremath{\bm{g}}} \in \{-1,0,1\}^N} e^{-\frac{1}{2}{\ensuremath{\bm{g}}}.{\ensuremath{\bm{\chi}}}.{\ensuremath{\bm{g}}}}e^{i {\ensuremath{\bm{g}}}.{\ensuremath{\bm{\mu}}}}. \label{eq:singleQubitPartitionFunctionSum} \end{eqnarray} The equality of Eqs.~(\ref{eq:HSTransformation}) and (\ref{eq:singleQubitPartitionFunctionSum}) is the Hubbard-Stratonovich transformation \cite{Stratonovich:1957uq,Hubbard:1959fk}, used to convert an Ising model to an associated field theory.
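The cosine-sum identity is easy to confirm by direct enumeration; the sketch below (added as an illustration, with arbitrary test phases) compares the product to the $3^N$-term sum for $N=6$:

```python
import math
from itertools import product

def lhs(thetas):
    """Product of (1 + 2 cos theta_n) over all intervals."""
    out = 1.0
    for th in thetas:
        out *= 1.0 + 2.0 * math.cos(th)
    return out

def rhs(thetas):
    """Sum of cos(g . theta) over all 3^N configurations g in {-1,0,1}^N."""
    total = 0.0
    for g in product((-1, 0, 1), repeat=len(thetas)):
        total += math.cos(sum(gi * ti for gi, ti in zip(g, thetas)))
    return total

thetas = [0.3, -1.1, 0.7, 2.0, -0.4, 0.9]   # arbitrary test phases
print(lhs(thetas), rhs(thetas))
```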
We use the inverse transformation here, converting the ``field theory'' of the sequence fidelity to an Ising model. The right hand side of Eq.~(\ref{eq:singleQubitPartitionFunctionSum}) is a partition function for an $N$-site spin-one Ising model with long-range coupling, normalized to an infinite temperature value of $3^N$. Interactions between site variables have coupling strength determined by the covariance matrix ${\ensuremath{\bm{\chi}}}$, and the $n^\mathrm{th}$ site couples to an imaginary magnetic field $\mu_n$. If both ${\ensuremath{\bm{\chi}}}$ and ${\ensuremath{\bm{\mu}}}$ scale with $\beta$, we can identify $\beta$ as the inverse temperature. Because it characterizes the noise strength, $\beta \approx (\frac{\tau}{T_2^*})^\alpha$ must be small, corresponding to the high temperature limit; otherwise an RB experiment would not be useful. Though Eq.~(\ref{eq:singleQubitPartitionFunctionSum}) has the form of a partition function sum, it is \emph{not} used in the same way that a normal partition function is used. The expected RB sequence fidelity is directly proportional to $\mathcal{Z}$: $\mathcal{Z}$ is not used as a probability density function whose normalization is immaterial, but instead the actual value of $\mathcal{Z}$ is its key property. Equation~(\ref{eq:singleQubitPartitionFunctionSum}) reveals a pleasing connection between single qubit RB and a spin-one Ising model partition function. However, using Eq.~(\ref{eq:singleQubitPartitionFunctionSum}) to determine the sequence fidelity for given ${\ensuremath{\bm{\chi}}}$ and ${\ensuremath{\bm{\mu}}}$ requires evaluating a sum with a number of terms exponential in the sequence length $N$. Using $\beta$ as an expansion parameter, we perform a high temperature/weak coupling expansion of Eq.~(\ref{eq:singleQubitZ}) to derive an approximate expression for the single qubit sequence fidelity, presented in the supplementary material.
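For modest $N$ the partition function sum can still be evaluated by brute force, which is useful for validating approximations; the sketch below (illustrative $\beta$, $\theta_0$, and $N$) evaluates Eq.~(\ref{eq:singleQubitPartitionFunctionSum}) for uncorrelated noise and compares with the closed form Eq.~(\ref{eq:exactUncorrelatedZ}):

```python
import cmath, math
from itertools import product

def partition_sum(chi, mu):
    """Brute-force Z = 3^{-N} sum_g exp(-g.chi.g / 2) exp(i g.mu)."""
    n = len(mu)
    total = 0.0 + 0.0j
    for g in product((-1, 0, 1), repeat=n):
        quad = sum(g[a] * chi[a][b] * g[b] for a in range(n) for b in range(n))
        total += cmath.exp(-0.5 * quad + 1j * sum(ga * ma for ga, ma in zip(g, mu)))
    return (total / 3 ** n).real   # imaginary parts cancel in pairs

beta, theta0, N = 0.05, 0.2, 6
chi = [[2 * beta if a == b else 0.0 for b in range(N)] for a in range(N)]
mu = [theta0] * N
closed = ((1 + 2 * math.exp(-beta) * math.cos(theta0)) / 3) ** N
print(partition_sum(chi, mu), closed)
```

For diagonal ${\ensuremath{\bm{\chi}}}$ the sum factorizes site by site, which is why the two numbers agree; for general ${\ensuremath{\bm{\chi}}}$ the $3^N$ cost motivates the expansion that follows.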
When the mean noise $\theta_0=0$, the approximate form for the partition function is \begin{equation} \mathcal{Z}_0 = \frac{1}{\sqrt{\left|{\ensuremath{\bm{I}}} + \frac{2}{3}{\ensuremath{\bm{\chi}}}\right|}} \left(1-\frac{1}{12}\sum_{n=1}^N\Sigma_{nn}^2+\ldots\right), \label{eq:singleQubitZ0Approximate} \end{equation} where $\Sigma\equiv{\ensuremath{\bm{\chi}}}({\ensuremath{\bm{I}}}+\frac{2}{3}{\ensuremath{\bm{\chi}}})^{-1}$. To leading order, the sequence fidelity depends only on the determinant of a symmetric positive definite matrix; for wide-sense stationary noise this matrix is also Toeplitz. Since the determinant expression is common to all terms in the $\mathcal{Z}$ series expansion, the correction terms within the parentheses specify relative errors compared to the leading order term. The first correction term nominally has $N\beta^2$ scaling, with the actual $N$ scaling dependent on the covariance matrix ${\ensuremath{\bm{\chi}}}$. For uncorrelated and quasistatic noise the Toeplitz determinants are readily computed analytically. For the case of uncorrelated noise ${\ensuremath{\bm{\chi}}}=2\beta{\ensuremath{\bm{I}}}$, \begin{equation} \mathcal{Z}_{0,\mathrm{uncorrelated}} = \left(1+\frac{4}{3}\beta\right)^{-\frac{N}{2}} \left(1-\frac{N\beta^2}{3(1+\frac{4}{3}\beta)^2} +\ldots \right), \label{eq:approximateUncorrelatedZ} \end{equation} and the sequence fidelity decays exponentially with increasing $N$, with error rate the same to $\mathcal{O}(\beta)$ as that given by Eq.~(\ref{eq:exactUncorrelatedZ}). The previously stated $N\beta^2$ dependence of the correction term is apparent. Though the relative error may become large for large $N\sim\beta^{-2}$, the absolute error will be exponentially small because of the prefactor.
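The leading-order determinant formula is straightforward to evaluate for any covariance matrix; the sketch below (a small Gaussian-elimination determinant, added for illustration) checks it against the closed-form determinants for uncorrelated noise, $(1+\frac{4}{3}\beta)^N$, and quasistatic noise, $1+\frac{4}{3}N\beta$:

```python
import math

def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in m]
    n, d = len(a), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def z0_leading(chi):
    """Leading-order Z0 = |I + (2/3) chi|^{-1/2} for zero-mean noise."""
    n = len(chi)
    m = [[(1.0 if a == b else 0.0) + (2.0 / 3.0) * chi[a][b] for b in range(n)]
         for a in range(n)]
    return 1.0 / math.sqrt(det(m))

beta, N = 0.01, 30
uncorr = [[2 * beta if a == b else 0.0 for b in range(N)] for a in range(N)]
quasi = [[2 * beta for _ in range(N)] for _ in range(N)]
print(z0_leading(uncorr), (1 + 4 * beta / 3) ** (-N / 2))
print(z0_leading(quasi), 1 / math.sqrt(1 + 4 * N * beta / 3))
```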
For the case of quasistatic noise, ${\ensuremath{\bm{\chi}}}=2\beta {\ensuremath{\vec{\mathbf{1}}}}\otimes{\ensuremath{\vec{\mathbf{1}}}}$, and \begin{equation} \mathcal{Z}_{0,\mathrm{quasistatic}} = \frac{1}{\sqrt{1+\frac{4}{3}N\beta}} \left(1-\frac{N\beta^2}{3\left(1+\frac{4N\beta}{3}\right)^2} +\ldots\right). \label{eq:approximateQuasistaticZ} \end{equation} The quasistatic noise sequence fidelity decays as the inverse square root of $N$, significantly slower than in the case of uncorrelated noise. The correction term now scales as $1/N$ for large $N$, so that both the relative and absolute errors of the leading order quasistatic series expansion compared to the exact expression Eq.~(\ref{eq:exactQuasistaticZ}) decrease with large $N$. One may compute additional terms of the series expansion in Eq.~(\ref{eq:singleQubitZ0Approximate}) if high accuracy in sequence fidelity is required. Uncorrelated and quasistatic noise display significantly different decay dependence on $N$. Figure \ref{fig:decayPlot} shows the decay of sequence fidelity for uncorrelated and quasistatic noise computed from the exact expressions Eqs.~(\ref{eq:exactUncorrelatedZ}) and (\ref{eq:exactQuasistaticZ}), respectively, for the same noise parameter $\beta=0.01$ and $\theta_0=0$. \begin{figure} \includegraphics[width=3in]{decayPlot} \caption{\label{fig:decayPlot} Sequence fidelity $P_0$ as a function of RB sequence length $N$. The orange data shows uncorrelated noise sequence fidelity corresponding to Eq.~(\ref{eq:exactUncorrelatedZ}) for $\beta=0.01$ and $\theta_0=0$; the black curve is the best fit to the orange data using $A+B(1-2\varepsilon)^N$. The dark blue data shows quasistatic noise sequence fidelity corresponding to Eq.~(\ref{eq:exactQuasistaticZ}) for $\beta=0.01$ and $\theta_0=0$; the cyan curve is the best fit to the dark blue data using the leading order term in Eq.~(\ref{eq:approximateQuasistaticZ}).
Magenta curves are best fits to the quasistatic dark blue data using $A+B(1-2\varepsilon)^N$ for four different fitting scenarios, discussed in the text. } \end{figure} Fitting the uncorrelated noise decay to the standard RB decay expression $A+B(1-2\varepsilon)^N$ recovers the original noise parameter $\beta=0.01$; fitting the quasistatic noise decay to the leading order decay in Eq.~(\ref{eq:approximateQuasistaticZ}) also recovers $\beta=0.01$. Fitting quasistatic noise decay to $A+B(1-2\varepsilon)^N$ gives inconsistent estimates of the noise parameter $\beta$. Magenta curves in Fig.~\ref{fig:decayPlot} show four fitting scenarios, with $A$ and $B$ held fixed or allowed to vary, and with equal weighting of sequence fidelity data or weighting by inverse sequence length in the least squares objective. All curves shown here underestimate the average gate error rate by factors of two to six; overestimation is also possible if only short RB sequence lengths are used for fitting. A more detailed analysis of fitting power law decay to an exponential is given in the supplementary information. This example demonstrates the danger in assuming that all benchmarking experiments can be fit to exponential decay. If the form but not the overall scale $\beta$ of the covariance matrix ${\ensuremath{\bm{\chi}}}$ is known, fitting benchmarking data to Eq.~(\ref{eq:singleQubitZ0Approximate}) estimates $\beta$. With no prior knowledge of ${\ensuremath{\bm{\chi}}}$, an alternative procedure for determining $\beta$ or $\varepsilon$ emerges from the observation that the sequence fidelity for small $N$ is independent of the noise correlations, a result previously noted in \cite{Ball:2016cs}. To lowest order in $N\beta$, Eq.~(\ref{eq:singleQubitZ0Approximate}) gives $\mathcal{Z}_0 \approx 1-\frac{2}{3} N\beta$, independent of all the off-diagonal elements of the covariance matrix. 
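The correlation independence of the initial decay can be demonstrated directly from the leading-order determinant; the sketch below (illustrative small $N\beta$, with an exponentially correlated covariance as a third test case chosen here for illustration) shows that uncorrelated, quasistatic, and intermediate noise all give $\mathcal{Z}_0 \approx 1-\frac{2}{3}N\beta$:

```python
import math

def det(m):
    """Gaussian-elimination determinant (no pivoting; these matrices
    are strongly diagonally dominant)."""
    a = [row[:] for row in m]
    n, d = len(a), 1.0
    for i in range(n):
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def z0_leading(chi):
    n = len(chi)
    return 1.0 / math.sqrt(det([[(1.0 if a == b else 0.0)
                                 + 2.0 / 3.0 * chi[a][b]
                                 for b in range(n)] for a in range(n)]))

beta, N = 0.001, 5
uncorr = [[2 * beta * (a == b) for b in range(N)] for a in range(N)]
quasi = [[2 * beta] * N for _ in range(N)]
expdec = [[2 * beta * 0.5 ** abs(a - b) for b in range(N)] for a in range(N)]
linear = 1 - 2 * N * beta / 3   # correlation-independent initial decay
for chi in (uncorr, quasi, expdec):
    print(z0_leading(chi), linear)
```

All three values agree with the linear form to $\mathcal{O}((N\beta)^2)$, independent of the off-diagonal structure.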
The initial decay in sequence fidelity can be fit to $A + B(1-2 N \varepsilon)$, where $A$ is determined by the asymptotic value of $P_0$ for large $N$. Fitting the quasistatic expression Eq.~(\ref{eq:exactQuasistaticZ}) shown in Fig.~\ref{fig:decayPlot} with $A=P_0(N=10^4)=0.543$ yields $\beta = 0.0107$, a 7\% relative error in the average gate error rate. Errors in fitting $\beta$ of $\mathcal{O}(\beta^2)$ and $\mathcal{O}(\beta\delta/B)$ arise from the linear approximation and the error $\delta$ in the asymptotic value for $A$, respectively. If we assume that the noise is limited to a single axis, the covariance matrix ${\ensuremath{\bm{\chi}}}$ can be expressed simply in terms of the noise power spectral density (PSD). For dephasing noise with Hamiltonian $H=\frac{1}{2} \sigma_z B(t)$, the error phase accumulated in a free evolution interval is $\theta_n = \int_{(n-1)\tau}^{n\tau}dsB(s)$. Using the Wiener-Khinchin theorem, the covariance matrix components for $m$,$n\in 1,\ldots,N$ are \begin{eqnarray} \chi_{mn} &=& \left\langle (\theta_m-\theta_0)(\theta_n-\theta_0) \right\rangle_\mathrm{noise} \nonumber\\ &=& \int_0^\infty df \frac{\left[\cos\left(2\pi f\tau(m-n)\right)\sin^2\pi f\tau\right]} {\pi^2f^2} S(f), \label{eq:covariancePSD} \end{eqnarray} where $S(f)$ is the PSD of $B(t)$. Define $\phi_{mn}$ to be the quantity in the square brackets in Eq.~(\ref{eq:covariancePSD}). With the effective Ising model Hamiltonian in Eq.~(\ref{eq:singleQubitPartitionFunctionSum}) as $H_\mathrm{Ising} =\frac{1}{2} g_m\chi_{mn}g_n -i g_n \mu_n$, we recognize $\frac{1}{2}g_m\phi_{mn}g_n$ as giving filter functions used in dynamical decoupling analyses \cite{Ball:2016cs,Cywinski:2008ys,Biercuk:2011zr,Green:2012rb,Green:2013cr}. For a dynamical decoupling pulse sequence of length $N$, a single configuration of site variables $\{g_n\}_{n=1}^N$ taking only values of $1$ or $-1$ specifies the decoupling sequence and associated filter function $F(f\tau)=\frac{1}{2}g_m\phi_{mn}g_n$. 
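The covariance matrix can also be estimated directly in the time domain, complementary to the frequency-domain formula Eq.~(\ref{eq:covariancePSD}); the Monte Carlo sketch below (noise models, sample counts, and parameters are illustrative choices, not from the analysis above) shows the two limiting structures: white dephasing noise gives a diagonal ${\ensuremath{\bm{\chi}}}$, while quasistatic noise gives a constant matrix:

```python
import random

random.seed(2)
samples, N, steps, sigma = 20000, 3, 10, 0.1

def covariance(phases):
    """Empirical covariance matrix of the accumulated error phases."""
    S = len(phases)
    mean = [sum(p[n] for p in phases) / S for n in range(N)]
    return [[sum((p[a] - mean[a]) * (p[b] - mean[b]) for p in phases) / S
             for b in range(N)] for a in range(N)]

# White noise: B(t) uncorrelated between time steps -> diagonal chi.
white = []
for _ in range(samples):
    b = [random.gauss(0.0, sigma) for _ in range(N * steps)]
    white.append([sum(b[n * steps:(n + 1) * steps]) for n in range(N)])
chi_white = covariance(white)

# Quasistatic noise: B constant within each realization -> constant chi.
quasi = []
for _ in range(samples):
    b0 = random.gauss(0.0, sigma)
    quasi.append([b0 * steps for _ in range(N)])
chi_quasi = covariance(quasi)
print(chi_white)
print(chi_quasi)
```

Here each interval accumulates `steps` white-noise increments, so the diagonal of `chi_white` is close to `steps * sigma**2 = 0.1` while its off-diagonal entries vanish statistically.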
In the RB context, we can interpret the partition function sum Eq.~(\ref{eq:singleQubitPartitionFunctionSum}) and corresponding sequence fidelity Eq.~(\ref{eq:singleQubitP0}) as the average fidelity over $3^N$ ``decoupling'' (really randomization) sequences. An additional value of $0$ for each site variable is permitted by the twirled free evolution map, in contrast to dynamical decoupling where only ``forward'' (1) and ``echoed'' ($-1$) intervals are allowed. This is because the twirled free evolution map takes $g_m$ values from the weights of the adjoint representation of SU(2) while single axis dynamical decoupling takes $g_m$ values from the non-trivial irreducible representation of $\mathbb{Z}_2$. \begin{table} \caption{\label{tab:PSDTable} PSD parameters for Eq.~(\ref{eq:PSD}) and corresponding free evolution decay parameters. Colors refer to Fig.~\ref{fig:PSDDecayPlot}. } \begin{ruledtabular} \begin{tabular}{llllll} & $f_L[\mathrm{Hz}]$ & $f_H[\mathrm{Hz}]$ & $A[\mathrm{Hz}]$ & $T_2^*[\mathrm{s}]$ & $\alpha$\\ orange (uncorrelated) &$\infty$ & -- & $2.0\times 10^6$ & $1.0\times10^{-6}$&1\\ brown & $10^8$ & $10^{10}$ & $4.2\times10^6$ & $9.5\times10^{-7}$ & 1\\ purple & $10^6$ & $10^{10}$ & $4.3\times10^7$ & $1.0\times10^{-7}$ & 2\\ red & $10^{-3}$& $10^{10}$ & $7.9\times10^{15}$ & $1.0\times10^{-7}$ & 2\\ green & $10^{-3}$& $10^{5}$ & $9.8\times10^{15}$ & $1.0\times10^{-7}$ & 2\\ blue (quasistatic) &-- &-- & -- & $1.0\times10^{-7}$ & 2 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \includegraphics[width=3in]{PSDDecayPlot} \caption{\label{fig:PSDDecayPlot} Sequence fidelity $P_0$ vs. RB sequence length $N$. All data has $\beta=0.01$, $\theta_0=0$, and $\tau=10^{-8}\mathrm{s}$. Orange data is for uncorrelated noise, Eq.~(\ref{eq:exactUncorrelatedZ}), and blue data for quasistatic noise, Eq.~(\ref{eq:exactQuasistaticZ}). 
Brown, purple, red and green data correspond to PSDs with parameters given in Table~\ref{tab:PSDTable} and functional form in Eq.~(\ref{eq:PSD}). Brown, purple, and orange data are nearly indistinguishable on the plot. } \end{figure} For a PSD of the form \cite{Paladino:2014kx} \begin{equation} S(f) = \begin{cases} A & 0\leq f<f_L\\ A\frac{f_L}{f} & f_L\leq f \leq f_H\\ A\frac{f_L f_H}{f^2} & f_H <f, \end{cases} \label{eq:PSD} \end{equation} a variety of different sequence fidelity decay behaviors is possible. This continuous, piecewise PSD is constant at low frequency, $1/f$ at medium frequencies, and $1/f^2$ at high frequencies. For a PSD of this functional form the covariance matrix integrals Eq.~(\ref{eq:covariancePSD}) have (large) closed form expressions as functions of $f_L$, $f_H$, $A$, and $\tau$, with which approximate sequence fidelities are computed using Eq.~(\ref{eq:singleQubitZ0Approximate}). Figure~\ref{fig:PSDDecayPlot} shows the sequence fidelity decay for four choices for the $A$, $f_L$, and $f_H$ parameters given in Table~\ref{tab:PSDTable}; all sequence fidelities are computed with $\tau=10^{-8}\mathrm{s}$. Uncorrelated (orange) and quasistatic (blue) sequence fidelity data are shown for comparison in Fig.~\ref{fig:PSDDecayPlot}, with all six curves having $\beta=0.01$ and $\theta_0=0$. Given $\beta=0.01\approx(\frac{\tau}{T_2^{*}})^\alpha$, one can determine $T_2^{*}$ and $\alpha$ via a $\tau$-series expansion of $\chi_{11}$. $T_2^{*}$ and $\alpha$ for the different PSDs are also given in Table~\ref{tab:PSDTable}. Since all six curves have the same $\beta$ value, small $N$ behavior is almost identical. For large $N$ the decays behave differently, with the brown and purple curves exhibiting nearly uncorrelated noise behavior, and the green curve exhibiting more quasistatic noise behavior.
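A direct implementation of the piecewise PSD makes the stated continuity at the crossover frequencies explicit; the sketch below uses the ``brown'' parameters from Table~\ref{tab:PSDTable} purely as an example:

```python
def psd(f, A, fL, fH):
    """Piecewise PSD of Eq. (17): flat, then 1/f, then 1/f^2."""
    if f < fL:
        return A
    if f <= fH:
        return A * fL / f
    return A * fL * fH / f ** 2

A, fL, fH = 4.2e6, 1e8, 1e10   # 'brown' parameters from the table
# Continuity at the two crossover frequencies:
print(psd(fL - 1.0, A, fL, fH), psd(fL, A, fL, fH))
print(psd(fH, A, fL, fH), psd(fH + 1.0, A, fL, fH))
```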
Free evolution decays for the brown and purple PSDs are different---exponential and Gaussian, respectively---but their benchmarking decays are nearly identical. The green curve has $1/f^2$ PSD behavior at the pulsing frequency, and the red curve has $1/f$ PSD at the pulsing frequency; both give rise to decays between uncorrelated and quasistatic. Finally, we note that the relationship between RB sequence fidelity and Ising model partition functions can be generalized to $d$-state systems. The progression from Eq.~(\ref{eq:singleQubitZ}) to Eq.~(\ref{eq:singleQubitPartitionFunctionSum}) for $d=2$ is structurally the same for general $d$: the non-trivial term in the twirled free evolution matrix is the sum of cosines over weights of the adjoint representation of {SU($d$)}; products of cosine terms are converted into sums using the cosine addition formula; the Hubbard-Stratonovich transformation converts the integral over all error phases at different intervals into a partition function sum over all possible configurations of {SU($d$)}\ adjoint representation weights at $N$ sites. The details of the analysis are given in the supplementary material. In summary, we have shown that RB on one or more qubits has a deep connection to the Ising model and admits long-range (power law) and short range (exponential) decay behavior for noise processes with different spectra. On the surface, this implies that fitting RB experiments to an exponential decay model can be dangerous and can lead to undependable estimates of the error rate. This can be mitigated by fitting only the short sequence (linear decay) and asymptotic regime data, which always gives a consistent estimate of the average single gate error rate. There is however a deeper issue. For generic noise, the average gate error rate does not uniquely determine the long time behavior. Even if we can correctly estimate the average gate error with RB we may learn nothing about a system's fault-tolerant behavior \cite{Kueng:2016jk}.
Nonexponential RB decay implies that there exist error correction/control procedures that are more favorable than simply comparing the average gate error rate to fault tolerance thresholds, thresholds that are generally calculated using Markovian error models. A trivial example is that for perfect quasistatic noise one could completely eliminate errors with decoupling sequences, but whether there generically exist error mitigation techniques for correlated noise is an interesting open problem. \section{Supplementary Information} \section{High Effective Temperature Expansion for Partition Function} Here we derive the approximate expression, Eq.~(11) of the main text, for the single qubit sequence fidelity partition function. Our starting point is Eq.~(4) of the main text, \begin{equation} \mathcal{Z} = \int_{{\ensuremath{\mathbb{R}^N}}} d{\ensuremath{\bm{\theta}}} \left[ \prod_{n=1}^{N}\frac{1}{3}\left(1+2\cos\theta_n\right) \right] \frac{e^{-\frac{1}{2}({\ensuremath{\bm{\theta}}}-{\ensuremath{\bm{\mu}}}).{\ensuremath{\bm{\chi}}}^{-1}.({\ensuremath{\bm{\theta}}}-{\ensuremath{\bm{\mu}}})}}{\sqrt{(2\pi)^N |{\ensuremath{\bm{\chi}}}|}}. \label{eq:singleQubitZ} \end{equation} We define new integration variables $\bm{s} = ({\ensuremath{\bm{\theta}}} - {\ensuremath{\bm{\mu}}})/\epsilon$, where $\epsilon\equiv\sqrt{2\beta}$ ($\epsilon$ is the standard deviation of the random noise in each free evolution interval). Here we are using $\beta$ to parameterize the strength of the random noise ${\ensuremath{\bm{\chi}}}$ only; ${\ensuremath{\bm{\mu}}}$ can take finite values, independent of the size of the random noise. With respect to the spin-one Ising model, $\beta$ only really represents an effective inverse temperature when ${\ensuremath{\bm{\mu}}}$ also scales with $\beta$.
By excluding $\beta$ scaling from ${\ensuremath{\bm{\mu}}}$, however, we can obtain approximate expressions that hold for small random noise (small $\beta$) and finite mean ${\ensuremath{\bm{\mu}}}=\theta_0{\ensuremath{\vec{\mathbf{1}}}}$. We expand the product in the square brackets in Eq.~(\ref{eq:singleQubitZ}) in an $\epsilon$ series, \begin{eqnarray} \prod_{n=1}^N\frac{1+2\cos\theta_n}{3} &=& \left(\frac{1+2\cos\theta_0}{3}\right)^N \exp\left(-\epsilon\frac{2\sin\theta_0}{1+2\cos\theta_0}{\ensuremath{\vec{\mathbf{1}}}}.\bm{s}\right) \exp\left(-\epsilon^2\frac{2+\cos\theta_0}{(1+2\cos\theta_0)^2}\bm{s}.\bm{s}\right) \nonumber\\ &&\times \left(1 -\epsilon^3 \frac{(7+2\cos\theta_0)\sin\theta_0}{3(1+2\cos\theta_0)^3} \sum_{n=1}^N s_n^3 - \epsilon^4 \frac{28+12\cos\theta_0-12\cos2\theta_0-\cos3\theta_0} {12(1+2\cos\theta_0)^4} \sum_{n=1}^N s_n^4 +\ldots \right). \end{eqnarray} This series expansion is obtained by taking the exponential of the Taylor series expansion of the logarithm of the left hand side product, keeping the first and second orders in $\epsilon$ in the exponential, and re-expanding in a Taylor series the exponential of all the remaining terms in the logarithm expansion. Such an expansion plays a similar role to the expansion of $e^{-\lambda \phi^4}$ in field theory. The expansion allows us to express all terms in the integral of Eq.~(\ref{eq:singleQubitZ}) as moments of a Gaussian distribution with modified covariance matrix ${\ensuremath{\bm{\Sigma}}}$ and modified mean ${\ensuremath{\bm{\nu}}}$, \begin{eqnarray} {\ensuremath{\bm{\Sigma}}} & \equiv & {\ensuremath{\bm{\chi}}}\left({\ensuremath{\bm{I}}} + \frac{2(2+\cos\theta_0)}{(1+2\cos\theta_0)^2} {\ensuremath{\bm{\chi}}}\right)^{-1},\\ \bm{y} & \equiv & 2\frac{\sin\theta_0}{1+2\cos\theta_0} {\ensuremath{\vec{\mathbf{1}}}}, \\ {\ensuremath{\bm{\nu}}} & \equiv & -{\ensuremath{\bm{\Sigma}}}.\bm{y}.
\end{eqnarray} Notice that ${\ensuremath{\bm{\Sigma}}}$ retains the $\beta$ scaling of ${\ensuremath{\bm{\chi}}}$ and that ${\ensuremath{\bm{\nu}}}$ also is first order in $\beta$. Performing the Gaussian moment integrations (Wick contractions) and setting $\epsilon=1$ results in the following series expansion for the partition function: \begin{eqnarray} \mathcal{Z} &=& \left(\frac{1+2\cos\theta_0}{3}\right)^N \frac{e^{\frac{1}{2}\bm{y}.{\ensuremath{\bm{\Sigma}}}.\bm{y}}} {\sqrt{\left|{\ensuremath{\bm{I}}} + \frac{2(2+\cos\theta_0)}{(1+2\cos\theta_0)^2}{\ensuremath{\bm{\chi}}} \right|}} \left(1 -\frac{(7+2\cos\theta_0)\sin\theta_0}{3(1+2\cos\theta_0)^3} \sum_{n=1}^N \left(3\Sigma_{nn}\nu_n+\nu_n^3\right)\right. \nonumber\\&& \quad \left. - \frac{28+12\cos\theta_0-12\cos2\theta_0-\cos3\theta_0} {12(1+2\cos\theta_0)^4} \sum_{n=1}^N \left(3\Sigma_{nn}^2+6\Sigma_{nn}\nu_n^2+\nu_n^4\right) +\ldots \right). \label{eq:singleQubitZApproximate} \end{eqnarray} In the limit of zero mean noise $\theta_0=0$, ${\ensuremath{\bm{\Sigma}}}={\ensuremath{\bm{\chi}}}(I+\frac{2}{3}{\ensuremath{\bm{\chi}}})^{-1}$ and $\bm{y}={\ensuremath{\bm{\nu}}}=0$. The approximate form for the partition function then becomes \begin{equation} \mathcal{Z}_0 = \frac{1}{\sqrt{\left|{\ensuremath{\bm{I}}} + \frac{2}{3}{\ensuremath{\bm{\chi}}}\right|}} \left(1-\frac{1}{12}\sum_{n=1}^N\Sigma_{nn}^2+\ldots\right), \label{eq:singleQubitZ0Approximate} \end{equation} Eq.~(11) in the main text. \section{Fitting Quasistatic Noise Sequence Fidelity Decay to an Exponential} In this section we illustrate the inconsistencies in fitting quasistatic sequence fidelity decay to an exponential. We assume that the quasistatic sequence fidelity decay is described by the lowest order term in the approximate partition function, Eq.~(13) of the main text, with the nominal average gate error rate $\varepsilon = \frac{1}{3}\beta$. 
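The accuracy of the leading order plus first correction can be checked against the exact quasistatic average by direct quadrature; the sketch below (illustrative $\beta=0.01$, $N=100$, $\theta_0=0$) compares the two:

```python
import math

beta, N = 0.01, 100
lead = 1 / math.sqrt(1 + 4 * N * beta / 3)
approx = lead * (1 - N * beta ** 2 / (3 * (1 + 4 * N * beta / 3) ** 2))

# "Exact" quasistatic Z by trapezoidal integration over the single error phase
lo, hi, steps = -1.0, 1.0, 200001
h = (hi - lo) / (steps - 1)
exact = 0.0
for i in range(steps):
    th = lo + i * h
    w = math.exp(-th * th / (4 * beta)) / math.sqrt(4 * math.pi * beta)
    v = ((1 + 2 * math.cos(th)) / 3) ** N * w
    exact += v if 0 < i < steps - 1 else 0.5 * v
exact *= h
print(exact, lead, approx)
```

The first correction term visibly tightens the agreement relative to the bare determinant prefactor.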
With ``data'' generated from Eq.~(13) of the main text, we perform a weighted least-squares fit to an exponential functional form minimizing the objective \begin{equation} \sum_{N=1}^{N_\mathrm{max}} \left( \frac{1}{2}+\frac{1}{2}\frac{1}{\sqrt{1+4N \varepsilon}} -A - B (1-2 K\varepsilon)^N \right)^2 w_N, \label{eq:fittingObjective} \end{equation} where $A$, $B$, and $K$ are the possible fitting parameters, and $N_\mathrm{max}$ is the maximum RB sequence length used in the fit. $K$ measures how far the fit deviates from the nominal average gate error rate $\varepsilon$. We consider four fitting scenarios: \begin{enumerate} \item $A$ and $B$ are fixed at $\frac{1}{2}$ and $w_N = 1$ \item $A$ and $B$ are fixed at $\frac{1}{2}$ and $w_N = \frac{1}{N}$ \item $A$ and $B$ are fitting parameters and $w_N=1$ \item $A$ and $B$ are fitting parameters and $w_N=\frac{1}{N}$ \end{enumerate} Scenarios 1 and 3 are equally weighted with respect to sequence length, while scenarios 2 and 4 have shorter sequences more heavily weighted than longer sequences, corresponding to fitting to sequence lengths that are evenly sampled in $\log N$. For the figures that follow, we take $\beta=0.01$ and correspondingly, $1/(2\varepsilon)=150$. For exponential decay, $1/(2\varepsilon)$ corresponds to the sequence length giving the $1/e$-point. Figures \ref{fig:fixedABUnderestimatePlot} and \ref{fig:floatingABUnderestimatePlot} show the inverse of the error rate deviation parameter $K$ for the four fitting scenarios, as a function of maximum RB sequence length $N_\mathrm{max}$ used in the fits. Both over- and underestimation of the average gate error rate are possible, depending on the maximum sequence lengths used for fitting and on whether $A$ and $B$ are allowed to vary. For scenarios 3 and 4, where $A$ and $B$ are allowed to vary, $A$ and $B$ can differ substantially from their nominal values of $\frac{1}{2}$.
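Scenario 1 can be reproduced with a simple grid search over $K$; in the sketch below the grid range and the choice $N_\mathrm{max}=10/(2\varepsilon)$ are illustrative choices made here, not values taken from the figures:

```python
import math

eps = 0.01 / 3                  # nominal average gate error rate (beta = 0.01)
N_max = int(10 / (2 * eps))     # fit out to 10x the 1/e sequence length

# quasistatic sequence fidelity "data" from the leading order expression
data = [0.5 + 0.5 / math.sqrt(1 + 4 * N * eps) for N in range(1, N_max + 1)]

def objective(K):
    """Scenario 1: A = B = 1/2 fixed, uniform weights w_N = 1."""
    return sum((y - 0.5 - 0.5 * (1 - 2 * K * eps) ** N) ** 2
               for N, y in zip(range(1, N_max + 1), data))

ks = [0.05 + 0.005 * i for i in range(291)]   # scan K over [0.05, 1.5]
K_best = min(ks, key=objective)
print(K_best)   # K < 1: the exponential fit underestimates the error rate
```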
For scenario 3, equally weighted data, as $N_\mathrm{max}$ increases, the objective is minimized by correctly fitting the nominal value of $A$, and severely underestimating both $B$ and the average gate error rate since most of the fitting data is at large RB sequence lengths. For scenario 4, which weights shorter sequences more heavily, $A$ and $B$ are approaching their nominal values as $N_\mathrm{max}$ increases, with the average gate error rate underestimated by a factor of $2.2$. Note that the objective Eq.~(\ref{eq:fittingObjective}) is almost scale invariant for small $\varepsilon$---the summand depends on the product $N\varepsilon$ rather than $N$ and $\varepsilon$ separately. Figures 1--4 are essentially unchanged for different values of (small) $\varepsilon$. Figure~\ref{fig:floatingABLogFitPlot} shows fits to the quasistatic sequence fidelity for scenario 4, for four different values of $N_\mathrm{max}= \{1,10,100,1000\}\times 1/(2\varepsilon)$. For $N_\mathrm{max}=1/(2\varepsilon)$ (blue curve in figure) the fit appears to be very good, but is incorrect: $A=0.75$, $B=0.24$, and $K=1.8$, giving a factor of $1.8$ overestimate of the average gate error rate. Because $A$ and $B$ are allowed to vary, the initial nonexponential decay can be fit by changing $A$ and $B$ from their nominal values. When longer sequences are included in the fitting data, the exponential functional fitting form no longer can be made to match the full nonexponential decay. \begin{figure} \includegraphics[width=4in]{fixedABUnderestimatePlot} \caption{\label{fig:fixedABUnderestimatePlot} Plot of inverse fitting parameter $1/K$ versus maximum RB sequence length $N_\mathrm{max}$ in units of inverse error rate $1/(2\varepsilon)=150$. Blue dots are $1/K$ for fitting scenario 1 ($A=B=1/2$, $w_N=1$); orange dots are for scenario 2 ($A=B=1/2$, $w_N=1/N$). 
For fixed $A$ and $B$, the average gate error rate is always underestimated, by factors between $1.5$ and $5.6$, for maximum sequence lengths between $1/(2\varepsilon)$ and $1000/(2\varepsilon)$. } \end{figure} \begin{figure} \includegraphics[width=4in]{floatingABUnderestimatePlot} \caption{\label{fig:floatingABUnderestimatePlot} Plot of inverse fitting parameter $1/K$ versus maximum RB sequence length $N_\mathrm{max}$ in units of inverse error rate $1/(2\varepsilon)=150$. Blue dots are $1/K$ for fitting scenario 3 ($A$ and $B$ are fitting parameters, $w_N=1$); orange dots are for scenario 4 ($A$ and $B$ are fitting parameters, $w_N=1/N$). When $A$ and $B$ are allowed to vary, the average gate error rate is overestimated for short maximum sequence lengths $N_\mathrm{max}\sim1/(2\varepsilon)$ and underestimated for longer $N_\mathrm{max}$. } \end{figure} \begin{figure} \includegraphics[width=4in]{floatingABLinearPlot} \caption{\label{fig:floatingABLinearPlot} Fitting parameters $A$ and $B$ versus maximum RB sequence length $N_\mathrm{max}$ in units of inverse error rate $1/(2\varepsilon)=150$, for fitting scenario 3 ($A$ and $B$ are allowed to vary; $w_N=1$). Green dots are $A$ and red dots are $B$. } \end{figure} \begin{figure} \includegraphics[width=4in]{floatingABLogPlot} \caption{\label{fig:floatingABLogPlot} Fitting parameters $A$ and $B$ versus maximum RB sequence length $N_\mathrm{max}$ in units of inverse error rate $1/(2\varepsilon)=150$, for fitting scenario 4 ($A$ and $B$ are allowed to vary; $w_N=1/N$). Green dots are $A$ and red dots are $B$. } \end{figure} \begin{figure} \includegraphics[width=4in]{floatingABLogFitPlot} \caption{\label{fig:floatingABLogFitPlot} Sequence fidelity $P_0$ versus sequence length $N$, for quasistatic ``data'' and fits from scenario 4 ($A$ and $B$ are fitting parameters, $w_N=1/N$). Black curve is approximate quasistatic sequence fidelity given by Eq.~(13) of the main text. $\beta=0.01$ and $1/(2\varepsilon)=150$. 
Blue fit has $N_\mathrm{max}=1/(2\varepsilon)$, orange fit has $N_\mathrm{max}=10/(2\varepsilon)$, green fit has $N_\mathrm{max}=100/(2\varepsilon)$, and red fit has $N_\mathrm{max}=1000/(2\varepsilon)$. The fitting curves end at their respective values of $N_\mathrm{max}$. The blue fit lies on top of the black curve, but gives incorrect values for $A$, $B$, and $K$. } \end{figure} \section{Benchmarking of $d$-State Systems} Here we show that the analysis of single qubit benchmarking and its relationship to the Ising model can be extended to the benchmarking of $d$-state systems. Following the derivation in the main text, we compute the expected benchmarking sequence fidelity in Eq.~(1) of the main text via repeated application of the twirled free evolution operator. Because the Haar measure integration in the definition of the twirled free evolution operator (Eq.~(2) of the main text) is translation invariant, without loss of generality we assume a diagonal free evolution unitary in ${\ensuremath{U(d)}}$, \begin{equation} F_n = \mathrm{diag}\left(e^{-i \theta_1^n},e^{-i \theta_2^n},\ldots,e^{-i \theta_d^n}\right), \end{equation} where $\theta_i^n$ is the error phase accumulated on the $i^\mathrm{th}$ state in the $n^\mathrm{th}$ interval. (Note that the error phase angle convention here differs from the standard SU(2) expression by a factor of 2.) This free evolution matrix can be substituted into Eq.~(2) of the main text, and the integration over the Haar measure performed explicitly; however, a few group-theoretical observations obviate the need for explicitly performing $(d^2-1)$-dimensional integrals. A density matrix transforms under the tensor product of the defining and conjugate representations of {SU($d$)}, which in turn decomposes as the direct sum of the (irreducible) trivial and adjoint representations.
The twirled free evolution map in any irreducible representation (irrep) commutes with all elements in the irrep, again because of Haar measure invariance. By Schur's lemma, the twirled free evolution map must be proportional to the identity on each irrep. The proportionality constant $K^{(n,\gamma)}$ for each irrep $D^{(\gamma)}$ is given by \begin{equation} \int_{U(d)}dU D^{(\gamma)}\left(U F_n U^{-1}\right)=K^{(n,\gamma)} D^{(\gamma)}(I). \end{equation} Taking the trace of both sides gives $ K^{(n,\gamma)} = \chi^{(\gamma)}(F_n)/{d^{(\gamma)}}, $ where $\chi^{(\gamma)}(F_n)$ is the character of the free evolution operator on the irrep $\gamma$, and $d^{(\gamma)}$ is the dimension of irrep $\gamma$. The character $\chi^{(\gamma)}(F_n)$ can be written in terms of the weights $w$ of irrep $\gamma$ and the angles $\theta_i^n$ of $F_n$ \cite{Elliott:1979, Cahn:1984vn}. (The Weyl character formula is customarily used to express $\chi^{(\gamma)}(F_n)$ in terms of the highest weight of $\gamma$, the Cartan subalgebra, and a sum over the Weyl group, but to perform the integrations over Gaussian distributed $\theta_i^n$ variables, it is more convenient to express the character explicitly as a sum over all the weights.) 
For the trivial and adjoint irreps, in a basis of $d^2$ $d\times d$ Hermitian matrices $\{\lambda_i\}_{i=0}^{d^2-1}$ orthogonal with respect to the trace inner product, the matrix for the twirled free evolution map is then \begin{equation} R_{ij}^{(n)} = \begin{cases} 1& \mathrm{for}\ i=j=0,\\ \frac{1}{d^2-1}\sum_{w\in W(d^2-1)} \cos\left(\sum_{k=1}^d w_k\theta_k^n\right) &\mathrm{for}\ i=j\in\{1,2,\ldots,d^2-1\},\\ 0& \mathrm{otherwise}, \end{cases} \end{equation} where we assume that $\lambda_0 = I_{d\times d}$ (i.e., the upper left $1\times1$ block of $R^{(n)}$ corresponds to the trivial irrep, and the lower right $(d^2-1) \times (d^2-1)$ block of $R^{(n)}$ corresponds to the adjoint irrep), and $W(d^2-1)$ is the multiset of weights of the {SU($d$)}\ adjoint representation, in the standard basis. Adjoint representation weights in the standard basis are $d$-component vectors with either all zero entries (of which there are $d-1$ repeated weights), or two non-zero entries of $+1$ and $-1$ (of which there are $d(d-1)$ unique weights). Repeated application of the twirled free evolution map to a pure state, again assuming a Gaussian distribution for the noise, results in cosine-weighted Gaussian integrals for the expected sequence fidelity. As in the $d=2$ case, we convert products of cosines into sums using the product-to-sum identities, and we again use the (inverse) Hubbard-Stratonovich transformation to convert the continuous $\theta_k^n$ error phase variables to discrete $w_i^m$ weight variables. The final sequence fidelity expression is \begin{widetext} \begin{equation} P_0 = \frac{1}{d}+\frac{d-1}{d}\frac{1}{(d^2-1)^N} \sum_{w^1\in W} \sum_{w^2\in W}\ldots \sum_{w^N\in W} \exp\left(-\frac{1}{2}\sum_{m,n=1}^N\sum_{i,j=1}^d w_i^m\chi_{ij}^{mn}w_j^n + i \sum_{n=1}^N\sum_{i=1}^d w_i^n \theta_{0,i}^n\right).
\label{eq:generalP0} \end{equation} \end{widetext} The tensor $\chi_{ij}^{mn}$ gives the covariance between the error phase at state $i$ and free evolution interval $m$, and the error phase at state $j$ and free evolution interval $n$, and $\theta_{0,i}^n$ is the mean error phase accumulated in state $i$ in interval $n$. We again have an $N$-site Ising model partition function, where each site variable is a $d$-dimensional vector whose possible states are the weights of the adjoint representation of {SU($d$)}. Specializing to $d=2$ recovers the single qubit partition function of Eq.~(10) of the main text. Approximate expressions for the sequence fidelity for $d$-state randomized benchmarking can similarly be obtained through a high effective temperature expansion of Eq.~(\ref{eq:generalP0}), giving a lowest-order formula in terms of $\chi_{ij}^{mn}$ analogous to the determinant expression in Eq.~(\ref{eq:singleQubitZ0Approximate}).
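The weight-multiset description above can be checked numerically. The following sketch (our own illustration, not part of the supplement) builds the adjoint-weight multiset for $d=3$ and verifies that the weight sum reproduces the adjoint character, $\chi_{\mathrm{adj}}(F_n)=|\operatorname{tr}F_n|^2-1$, whose ratio to $d^2-1$ is the depolarizing factor $K^{(n,\mathrm{adj})}$ of the twirled map:

```python
import numpy as np

d = 3
theta = np.random.default_rng(0).normal(size=d)  # error phases of diagonal F_n

# Adjoint weights of SU(d) in the standard basis: d-1 repeated zero weights,
# plus d(d-1) weights with a single +1 and a single -1 entry.
weights = [np.zeros(d)] * (d - 1)
weights += [np.eye(d)[i] - np.eye(d)[j]
            for i in range(d) for j in range(d) if i != j]
assert len(weights) == d**2 - 1

# Character of F_n in the adjoint irrep, summed over the weight multiset.
chi_adj = sum(np.cos(w @ theta) for w in weights)

F = np.diag(np.exp(-1j * theta))
assert np.isclose(chi_adj, abs(np.trace(F)) ** 2 - 1)  # chi_adj = |tr F|^2 - 1

K_adj = chi_adj / (d**2 - 1)  # depolarizing factor K^(n,adj) = chi / d^(gamma)
```

The same check works for any $d$, since the zero weights contribute $d-1$ and the $\pm$ pairs contribute $\sum_{i\neq j}\cos(\theta_i-\theta_j)$, exactly matching $|\operatorname{tr}F_n|^2-1$.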
\section{Introduction} The classical methods of finding solutions to extrapolation, interpolation and filtering problems for stationary stochastic processes and sequences were developed by Kolmogorov (see the selected works of Kolmogorov (1992)), Wiener (see the book by Wiener (1966)), and Yaglom (see, for example, the books by Yaglom (1987a, 1987b)). The problem of estimation of the unknown values of harmonizable random sequences and processes was investigated in papers by Cambanis (1983), Cambanis and Soltani (1984), and Hosoya (1982). The interpolation problem for harmonizable symmetric $\alpha$-stable random sequences was investigated in papers by Weron (1985) and Pourahmadi (1984). Most results concerning estimation of the unknown (missed) values of stochastic processes are based on the assumption that the spectral densities of the processes are exactly known. In practice, however, complete information on the spectral densities is unavailable in most cases. In such situations one finds parametric or nonparametric estimates of the unknown spectral densities. Then the classical estimation method is applied under the assumption that the estimated densities are the true ones. This procedure can result in a significant increase of the value of the error, as Vastola and Poor (1983) have demonstrated with the help of some examples. This is a reason to search for estimates which are optimal for all densities from a certain class of admissible spectral densities. These estimates are called minimax since they minimize the maximal value of the error. A survey of results on minimax (robust) methods of data processing can be found in the paper by Kassam and Poor (1985). The paper by Grenander (1957) should be marked as the first one where the minimax extrapolation problem for stationary processes was formulated and solved. Later Franke and Poor (Franke and Poor, 1984; Franke, 1985) applied convex optimization methods to the investigation of the minimax-robust extrapolation and interpolation problems.
In the papers by Moklyachuk (1990 -- 2008a) the minimax-robust extrapolation, interpolation and filtering problems are studied for stationary processes. The papers by Moklyachuk and Masyutka (2006 -- 2012) are dedicated to minimax-robust extrapolation, interpolation and filtering problems for vector-valued stationary processes and sequences. Dubovetska et al. (2012) solved the minimax-robust interpolation problem for another generalization of stationary processes -- periodically correlated sequences. In the papers by Dubovetska and Moklyachuk (2013 -- 2014) and Moklyachuk and Golichenko (2016) the minimax-robust extrapolation, interpolation and filtering problems for periodically correlated processes are investigated. The minimax-robust extrapolation, interpolation and filtering problems for stochastic sequences and random processes with $n$th stationary increments are investigated by Luz and Moklyachuk (Luz and Moklyachuk, 2012 -- 2016; Moklyachuk and Luz, 2013). In this paper the problem of optimal estimation is investigated for the linear functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$ that depends on the unknown values of a random sequence $\{\xi_j,j\in\mathbb Z\}$ based on observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are mutually independent harmonizable symmetric $\alpha$-stable random sequences which have spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition. The problem is investigated under the condition of spectral certainty as well as under the condition of spectral uncertainty. Formulas for calculating the value of the error and the spectral characteristic of the optimal linear estimate of the functional are derived under the condition of spectral certainty, where the spectral densities of the sequences are exactly known.
In the case of spectral uncertainty, where the spectral densities of the sequences are not exactly known while sets of admissible spectral densities are available, relations which determine the least favorable densities and the minimax-robust spectral characteristics for different classes of spectral densities are derived. \section{Harmonizable symmetric $\alpha$-stable random sequences. Basic properties}\label{sec:fields} \begin{definition}[symmetric $\alpha$-stable random variable] A real random variable $\xi$ is said to be symmetric $\alpha$-stable, $S\alpha S$, if its characteristic function has the form $E\exp(it\xi) = \exp(-c|t|^{\alpha})$ for some $c \geq 0$ and $0 < \alpha \leq 2.$ The real random variables $\xi_1,\xi_2,\dots,\xi_n$ are jointly $S\alpha S$ if all linear combinations $\sum_{k=1}^{n}a_k\xi_k$ are $S\alpha S$, or, equivalently, if the characteristic function of $\vec{\xi}=(\xi_1,\dots,\xi_n)$ is of the form $\phi_{\vec{\xi}}(\vec{t}) = E\exp(i \sum t_k \xi_k) = \exp\{-\int|\sum t_k x_k|^{\alpha}d \Gamma_{\vec{\xi}}(\vec{x})\},$ where $t_1,\dots, t_n$ are real numbers and $\Gamma_{\vec{\xi}}$ is a symmetric measure defined on the unit sphere $S_n \subset \mathbb{R}^n$ (Cambanis, 1983). \end{definition} \begin{definition}[symmetric $\alpha$-stable stochastic sequence] A stochastic sequence $\{\xi_k,k\in\mathbb Z\}$ is called a symmetric $\alpha$-stable, $S\alpha S$, stochastic sequence if all linear combinations $\sum_{k=1}^{n}a_k\xi_k$ are $S\alpha S$ random variables. \end{definition} For jointly $S{\alpha}S$ random variables $\xi= \xi_1 + i\xi_2$ and $\eta= \eta_1 + i \eta_2$ the covariation of $\xi$ with $\eta$ is defined as (Cambanis, 1983) $$[\xi,\eta]_{\alpha} = \int_{S_4} (x_1 + i x_2)(y_1 + i y_2)^{<\alpha-1>} d \Gamma_{\xi_1,\xi_2,\eta_1,\eta_2}(x_1, x_2, y_1, y_2),$$ \noindent where $z^{<\beta>} = |z|^{\beta - 1} \bar{z} $ for a complex number $z$ and $\beta > 0.$ In general the covariation is not symmetric and is not linear in its second argument (Weron, 1985).
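Since the signed power $z^{<\beta>} = |z|^{\beta-1}\bar{z}$ is used throughout what follows, a quick numerical sanity check may be helpful. The sketch below (our own illustration, not part of the original text) verifies a few of its elementary identities:

```python
import numpy as np

def spow(z, beta):
    """Signed power z^<beta> = |z|**(beta - 1) * conj(z), with 0^<beta> = 0."""
    return 0.0 if z == 0 else abs(z) ** (beta - 1) * np.conj(z)

z, x, y = 1.5 - 2.0j, 0.3 + 0.7j, -1.1 + 0.4j
alpha, beta = 1.7, 2.4

assert np.isclose(spow(z, 1.0), np.conj(z))                 # z^<1> = conj(z)
assert np.isclose(z * spow(z, beta - 1.0), abs(z) ** beta)  # z * z^<beta-1> = |z|^beta
assert np.isclose(spow(x * y, alpha), spow(x, alpha) * spow(y, alpha))
assert np.isclose(abs(spow(z, alpha)) ** beta, abs(z) ** (alpha * beta))
```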
For $\xi, \xi_1, \xi_2, \eta$ jointly $S\alpha S$ we have $$[\xi_1 + \xi_2, \eta]_{\alpha} = [\xi_1, \eta]_{\alpha} + [\xi_2, \eta]_{\alpha},$$ \begin{equation}\label{eq:inequality} |[\xi, \eta]_{\alpha}| \leq ||\xi||_{\alpha} ||\eta||_{\alpha}^{\alpha - 1} \end{equation} and $||\xi||_{\alpha} = [\xi, \xi]_{\alpha}^{1/\alpha}$ is a norm in a linear space of $S\alpha S$ random variables, convergence in which is equivalent to convergence in probability. It should be noted that $||\cdot||_\alpha$ is not necessarily the usual $L^{\alpha}$ norm. Here are the simplest properties of the function $z^{<\beta>}.$ \begin{lemma} Let $z, x, y $ be complex numbers, $\beta > 0$. Then the following properties hold true: \begin{itemize} \item $|z|^{<\beta>} = z \cdot z^{<\beta - 1>},$ \item $\left||z|^{<\beta>}\right| = \left|z\right|^{<\beta>},$ \item if $z^{<\beta>} = v$, then $z = v^{<1/\beta>} = |v|^{(1-\beta)/\beta}\bar{v},$ \item $z^{<1>} = \bar{z},$ \item if $z \neq 0$, then $z^{<\alpha>} z^{<\beta>} = \frac{\bar{z}}{|z|} z^{<\alpha + \beta>},$ \item if $z \neq 0$, then $\frac{z^{<\alpha>}}{z^{<\beta>}} = \frac{z}{|z|} z^{<\alpha - \beta>},$ \item $(c z)^{<\alpha>} = c^{\alpha} z^{<\alpha>}, c \geq 0,$ \item $(z^{<\alpha>})^{<\beta>} = {\bar{z}}^{<\alpha \beta>},$ \item $(xy)^{<\alpha>} = x^{<\alpha>} y^{<\alpha>},$ \item $(z^{\alpha})^{<\beta>} = (z^{<\beta>})^{\alpha},$ \item $(z^{<\alpha>})^{\beta} = (z^{\beta})^{<\alpha>},$ \item $|z^{<\alpha>}|^{\beta} = |z|^{\alpha \beta},$ \item $(x + y)^{<\alpha>} = \bar{x}|x + y|^{\alpha - 1} + \bar{y}|x + y|^{\alpha - 1}.$ \end{itemize} \end{lemma} Let $Z =\{Z(t): -\infty < t < \infty\}$ be a complex $S{\alpha}S$ process with independent increments.
The spectral measure of the process $Z$ is defined as $\mu\{(s, t]\} =\|Z(t) - Z(s)\|_{\alpha}^{\alpha}.$ The integrals $\int f(t)dZ(t)$ can be defined for all $f \in L^{\alpha}(\mu)$ with the properties (see Cambanis, 1983; Cambanis and Soltani, 1984; Hosoya, 1982): \[ \left\|\int f(t) d Z(t)\right\|^{\alpha}_{\alpha} = \int |f(t)|^{\alpha}d \mu, \] \begin{equation}\label{eq:norm_equality} \left[\int f(t) d Z(t), \int g(t) d Z(t)\right]_\alpha = \int f (t) (g(t))^{<\alpha - 1>} d \mu. \end{equation} \begin{definition}[Harmonizable symmetric $\alpha$-stable stochastic sequence] A $S\alpha S$ stochastic sequence $\{\xi_n,n\in\mathbb Z\}$ is said to be harmonizable, $HS{\alpha}S$, if there exists a $S\alpha S$ process $Z = \{Z(\theta); \theta \in [-\pi, \pi]\}$ with independent increments and finite spectral measure $\mu$ such that the sequence $\xi_n$ has the spectral representation $$\xi_n = \int_{-\pi}^{\pi}e^{in\theta}dZ(\theta), \quad n \in \mathbb{Z},$$ \noindent and the covariation has the representation $$[\xi_n, \xi_m]_\alpha = \int_{-\pi}^{\pi}e^{i(n-m)\theta}d \mu(\theta), \quad m, n \in \mathbb{Z}.$$ \end{definition} Note that a $HS\alpha S$ stochastic sequence is not necessarily stationary, not even second-order stationary, but for $\alpha = 2$ the $HS\alpha S$ sequences are stationary with Gaussian distribution. In this article we consider the case where $1<\alpha \leq 2$. Denote by $H(\xi)$ the time domain of the $HS\alpha S$ sequence $\{\xi_n,n\in\mathbb Z\}$, that is, the linear manifold generated by all values of the sequence, closed in the norm $\|\cdot\|_{\alpha}$. It follows from the spectral representation of the $HS\alpha S$ sequence $\{\xi_n,n\in\mathbb Z\}$ that the mapping $\xi_n\leftrightarrow e^{in\theta},n\in\mathbb Z,$ extends to an isomorphism between the spaces $H(\xi)$ and $L^{\alpha}(\mu)$.
Under this isomorphism to each $\eta \in H(\xi)$ corresponds a unique $f\in L^{\alpha}(\mu)$ such that $\eta=\int_{-\pi}^{\pi}f(\theta)dZ(\theta)$. For a closed linear subspace $M \subseteq L^{\alpha}(\mu)$ and $f \in L^{\alpha}(\mu)$, there exists a unique element of $M$ which minimizes the distance to $f$. This element is called the projection of $f$ onto $M$, or the best approximation of $f$ in $M$. This projection is denoted by $P_M f$ and is uniquely determined by the condition (Singer, 1970) \begin{equation} \int_{-\pi}^{\pi} g \left(f - P _M f\right)^{<\alpha - 1>}d \mu = 0,\quad g \in M. \end{equation} Similarly, for a $HS \alpha S$ stochastic sequence $\{\xi_n,n\in\mathbb Z\}$ and a closed linear subspace $H^-(\xi)$ of the space $H(\xi)$ there is a uniquely determined element $\hat{\xi}_n \in H^-(\xi)$ which minimizes the distance to $\xi_n$; it is uniquely determined by the condition \begin{equation}\label{eq:ortogonal} \left[\eta, \xi_n - \hat{\xi}_n\right]_{\alpha} = 0,\quad \eta \in H^-(\xi). \end{equation} By the linearity of the covariation with respect to the first argument, this relation yields \begin{equation}\label{eq:linearity} ||\xi_n - \hat{\xi}_n||_{\alpha}^{\alpha} = \left[\xi_n,\xi_n-\hat{\xi}_n\right]_{\alpha}-\left[\hat{\xi}_n, \xi_n-\hat{\xi}_n\right]_{\alpha} = \left[\xi_n, \xi_n-\hat{\xi}_n\right]_{\alpha}. \end{equation} This relation plays a fundamental role in the characterization of minimal $HS \alpha S$ stochastic sequences $\{\xi_n,n\in\mathbb Z\}$. \section{Extrapolation problem.
Projection approach} Consider the problem of optimal estimation of the linear functional $$A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j = \int_{-\pi}^{\pi} A(e^{i\theta})dZ^{\xi}(\theta),$$ $$A(e^{i\theta})=\sum_{j = 0}^{\infty}a_j e^{ij\theta}, $$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a harmonizable symmetric $\alpha$-stable random sequence $\{\xi_j,j\in\mathbb Z\}$ from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$. We will suppose that the sequence $ \{ {a}_j: j=0,1, \ldots \}$ which determines the functional $A { \xi} $ satisfies the conditions \begin{equation} \label{cond-on-aj} \sum_{j=0}^{ \infty} \left|a_j \right| < \infty , \quad \sum_{j=0}^{ \infty}(j+1) \left | {a}_j \right | ^{2} < \infty. \end{equation} The first condition ensures that the functional $A\xi$ is well defined, with finite norm $\|A\xi\|_{\alpha}$. The second condition ensures the compactness in $\ell_2$ of the operators that will be defined below. We consider the problem for mutually independent harmonizable symmetric $\alpha$-stable random sequences $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ which have absolutely continuous spectral measures and spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition (Kolmogorov, 1992; Rozanov, 1967; Salehi, 1979; Pourahmadi, 1984; Weron, 1985) \begin{equation}\label{eq:minimality} \int_{-\pi}^{\pi} (f(\theta)+g(\theta))^{-1/(\alpha-1)}d\theta<\infty. \end{equation} Denote by $H^{-}(\xi+\eta)$ the closed in the $||\cdot||_\alpha$ norm linear manifold generated by the values of the harmonizable symmetric $\alpha$-stable random sequence $\xi_k+\eta_k, k = -1, -2, \dots$ in the space $H(\xi+\eta)$ generated by all values of the $HS \alpha S$ sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$. The optimal estimate $\hat{A} \xi$ of the functional ${A} \xi$ is the projection of ${A} \xi$ on the manifold $H^{-}(\xi+\eta)$.
It is determined by the relations $$[\zeta, A \xi - \hat{A} \xi]_{\alpha} = 0, \quad \forall \zeta \in H^{-}(\xi+\eta),$$ or, equivalently, by the relations \begin{equation}\label{ortog1} [\xi_k+\eta_k, A \xi - \hat{A} \xi]_{\alpha} = 0, \quad \forall k = -1, -2, \dots. \end{equation} It follows from the isomorphism between the spaces $H(\xi+\eta)$ and $L^{\alpha}(f+g)$ that the optimal estimate $\hat{A} \xi$ of the functional ${A} \xi$ is of the form \begin{equation}\label{estim2} \hat{A} \xi = \int_{-\pi}^{\pi} {h}(\theta) \left(d Z^{\xi}(\theta) + dZ^{\eta}(\theta) \right). \end{equation} It is determined by the spectral characteristic ${h}(\theta)$ of the estimate, which belongs to the subspace $L^{\alpha}_{-}(f+g)$ of the space $L^{\alpha}(f+g)$ generated by the functions $e^{ik\theta},k = -1, -2, \dots$. The spectral characteristic ${h}(\theta)$ of the optimal estimate satisfies the equations \begin{equation}\label{ortog2} \int_{-\pi}^{\pi} e^{i\theta k} \left[\left( A(e^{i\theta}) - {h}(\theta) \right)^{<\alpha - 1>}f(\theta) - \left({h}(\theta) \right)^{<\alpha - 1>}g(\theta)\right] d\theta = 0, \quad k = -1, -2, \dots. \end{equation} It follows from these equations that the spectral characteristic ${h}(\theta)$ of the estimate is determined by the relation \begin{equation}\label{sp-char2} \left( A(e^{i\theta}) - {h}(\theta) \right)^{<\alpha - 1>}f(\theta) - \left({h}(\theta) \right)^{<\alpha - 1>}g(\theta) = \overline{C(e^{i\theta})}, \end{equation} $$ C(e^{i\theta})=\sum_{j = 0}^{\infty} c_j e^{i j \theta},$$ where $c_j$ are unknown coefficients. These unknown coefficients $c_j$ are determined from the condition ${h}(\theta)\in L^{\alpha}_{-}(f+g)$, which gives the system of equations \begin{equation}\label{speq2} \int_{-\pi}^{\pi} {e^{-i\theta k}\,\, {h}(\theta)} d\theta = 0,\quad k = 0, 1,\dots.
\end{equation} The variance of the optimal estimate of the functional is calculated by the formula \begin{equation}\label{var2} \left\|{A} \xi- \hat{A} \xi \right\|_\alpha^\alpha = \int_{-\pi}^{\pi} \left| A(e^{i\theta}) - {h}(\theta) \right|^{\alpha}f(\theta)d \theta + \int_{-\pi}^{\pi} \left|{h}(\theta) \right|^{\alpha}g(\theta)d \theta. \end{equation} We can conclude that the following theorem holds true. \begin{thm}\label{thm1} Let $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ be mutually independent harmonizable symmetric $\alpha$-stable random sequences which have absolutely continuous spectral measures and the spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality}. The optimal linear estimate $\hat{A} \xi$ of the functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ is calculated by formula \eqref{estim2}. The spectral characteristic ${h}(\theta)$ of the estimate is determined by equation \eqref{sp-char2}, where the unknown coefficients $c_j$ are determined from the system of equations \eqref{speq2}. The variance of the optimal estimate of the functional is calculated by formula \eqref{var2}. \end{thm} \subsection{Extrapolation problem. Observations without noise} Consider the problem of optimal linear estimation of the functional $$A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j = \int_{-\pi}^{\pi} A(e^{i\theta})dZ^{\xi}(\theta),\quad A(e^{i\theta})=\sum_{j = 0}^{\infty}a_j e^{ij\theta}, $$ that depends on the unknown values of a harmonizable symmetric $\alpha$-stable random sequence $\{\xi_j,j\in\mathbb Z\},$ from observations of the sequence $\xi_k$ at points $k = -1, -2, \dots$. 
Let $\{\xi_k,k\in\mathbb Z\}$ be a harmonizable symmetric $\alpha$-stable random sequence which has an absolutely continuous spectral measure and a spectral density $f(\theta)>0$ satisfying the minimality condition \begin{equation}\label{minimality1} \int_{-\pi}^{\pi} (f(\theta))^{-1/(\alpha-1)}d\theta<\infty. \end{equation} The optimal estimate $\hat{A} \xi$ of the functional ${A} \xi$ is of the form \begin{equation}\label{estim_f} \hat{A} \xi = \int_{-\pi}^{\pi} {h}(\theta) dZ^{\xi}(\theta). \end{equation} The spectral characteristic ${h}(\theta)$ of the optimal linear estimate $\hat{A} \xi$ of the functional is calculated by the formula \begin{equation}\label{spchar_f} {h}(\theta) = A(e^{i\theta}) - \left(\overline{C(e^{i\theta})} \right)^{<\frac{1}{\alpha - 1}>} \left(f(\theta)\right)^{\frac{-1}{\alpha - 1}}, \end{equation} where the unknown coefficients $c_j$, $j = 0, 1,\dots$, are determined from the system of equations \begin{equation}\label{eq_sphar_f} \int_{-\pi}^{\pi} e^{-i\theta k} \left( \left(\sum_{j = 0}^{\infty}a_j e^{i j \theta}\right) - \left(\sum_{j = 0}^{\infty} \overline{c}_j e^{-i j \theta} \right)^{<\frac{1}{\alpha - 1}>} \left( f(\theta) \right)^{\frac{-1}{\alpha - 1}}\right) d\theta = 0, \quad k = 0, 1,\dots \end{equation} The variance of the optimal estimate of the functional is calculated by the formula \begin{equation}\label{variance_f} \left\| {A} \xi- \hat{A} \xi \right\|_\alpha^\alpha = \int_{-\pi}^{\pi} \left|\left(\overline{C(e^{i\theta})} \right)^{<\frac{1}{\alpha - 1}>} (f(\theta))^{\frac{-1}{\alpha - 1}} \right|^{\alpha} f(\theta) d \theta. \end{equation} As a corollary of Theorem \ref{thm1} the following statement holds true. \begin{Corollary}\label{cor1} Let $\{\xi_k,k\in\mathbb Z\}$ be a harmonizable symmetric $\alpha$-stable random sequence which has an absolutely continuous spectral measure and a spectral density $f(\theta)>0$ satisfying the minimality condition \eqref{minimality1}.
The optimal linear estimate $\hat{A} \xi$ of the functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence, from observations of the sequence $\{\xi_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ is of the form \eqref{estim_f}. The spectral characteristic ${h}(\theta)$ of the optimal linear estimate $\hat{A} \xi$ of the functional is calculated by formula \eqref{spchar_f}, where the unknown coefficients $c_j$, $j = 0, 1,\dots$, are determined from the system of equations \eqref{eq_sphar_f}. The variance of the optimal estimate of the functional is calculated by formula \eqref{variance_f}. \end{Corollary} \subsection{Extrapolation problem. Stationary sequences} Consider the problem in the particular case where $\alpha=2$. In this case the harmonizable symmetric $\alpha$-stable random sequences $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are stationary sequences, and we have the problem of optimal estimation of the linear functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a stationary random sequence from observations of the stationary sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$. We will suppose that the stationary sequences $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ have spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition \begin{equation}\label{eq:minimality2} \int_{-\pi}^{\pi} (f(\theta)+g(\theta))^{-1}d\theta<\infty.
\end{equation} The optimal linear estimate $\hat{A} \xi$ of the functional $A\xi$ is of the form \eqref{estim2}, where the spectral characteristic ${h}(\theta)$ of the optimal estimate and the variance of the optimal estimate, determined by equations \eqref{sp-char2}, \eqref{var2}, are of the form \begin{equation} \label{sphar_a2} \begin{split} &h(\theta)=\frac{A(e^{i\theta})f(\theta)-C(e^{i\theta})}{f(\theta)+g(\theta)}=\\ &=A(e^{i\theta})-\frac{A(e^{i\theta})g(\theta)+C(e^{i\theta})}{f(\theta)+g(\theta)}, \end{split} \end{equation} \begin{equation} \label{var_a2} \begin{split} \Delta(h;f,g)=\left\|{A} \xi- \hat{A} \xi \right\|_2^2&=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi} \frac{\left|A(e^{i\theta})g(\theta)+C(e^{i\theta}) \right|^2}{(f(\theta)+g(\theta))^2}f(\theta)d\theta\\ &+\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}\frac{\left| A(e^{i\theta})f(\theta)-C(e^{i\theta}) \right|^2}{(f(\theta)+g(\theta))^2}g(\theta)d\theta.\\ \end{split} \end{equation} The unknown coefficients $c_j$, $j=0,1,\dots,$ are determined from the system of equations \eqref{speq2}, which in this case is of the form \begin{equation*} \begin{split} \int\limits_{-\pi}^{\pi}\left(A(e^{i\theta})\frac{f(\theta)}{f(\theta)+g(\theta)}-\frac{C(e^{i\theta})}{f(\theta)+g(\theta)}\right)e^{-ik\theta}d\theta=0, \quad k = 0, 1,\dots \end{split} \end{equation*} From this system of equations we get the equations \begin{equation}\label{3} \begin{split} \sum\limits_{j=0}^{\infty}a_j\int\limits_{-\pi}^{\pi}\frac{e^{i(j-k)\theta}f(\theta)}{f(\theta)+g(\theta)}d\theta -\sum\limits_{j=0}^{\infty}c_j\int\limits_{-\pi}^{\pi}\frac{e^{i(j-k)\theta}}{f(\theta)+g(\theta)}d\theta=0,\quad k = 0, 1,\dots \end{split} \end{equation} Denote by $\bold{B}$, $\bold{R}$, $\bold{Q}$ the operators in the space $\ell_{2}$ determined by the matrices with elements $$B_{k,j}=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}e^{i(j-k)\theta}\frac{1}{f(\theta)+g(\theta)}d\theta;$$ $$R_{k,j}=\frac{1}{2\pi}
\int\limits_{-\pi}^{\pi}e^{i(j-k)\theta}\frac{f(\theta)}{f(\theta)+g(\theta)}d\theta;$$ $$Q_{k,j}=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}e^{i(j-k)\theta}\frac{f(\theta)g(\theta)}{f(\theta)+g(\theta)}d\theta;$$ $$k,j = 0, 1,2,\dots $$ With the help of the introduced notations we can write equations (\ref{3}) in the form \begin{equation*}\begin{split} \sum\limits_{j=0}^{\infty}R_{k,j}a_j= \sum\limits_{j=0}^{\infty}B_{k,j}c_j,\quad k = 0, 1,2,\dots. \end{split} \end{equation*} These equations can be represented in the matrix-vector form \begin{equation*}\begin{split} \bold{R}\bold{a}=\bold{B} \bold{c}, \end{split} \end{equation*} where $\bold{a}=(a_0,a_1,\dots)$, $\bold{c}=(c_0,c_1,\dots)$. The unknown coefficients $c_j,j=0,1,\dots$ form a solution to this equation and can be represented in the form $$c_j=(\bold{B}^{-1}\bold{R}\bold{a})_j,$$ where $(\bold{B}^{-1}\bold{R}\bold{a})_j$ is the $j$-th element of the vector $\bold{B}^{-1}\bold{R}\bold{a}$. Finally, the spectral characteristic and the variance of the optimal estimate are determined by the formulas (for more details see the books by Moklyachuk (2008), Moklyachuk and Masyutka (2012), Moklyachuk and Golichenko (2016)) \begin{equation} \label{sphar_a12} \begin{split} h(\theta)&=\frac{ A(e^{i\theta})f(\theta)-\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{R}\bold{a})_je^{ij\theta} } {f(\theta)+g(\theta)}=\\ &=A(e^{i\theta})-\frac{ A(e^{i\theta})g(\theta)+\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{R}\bold{a})_je^{ij\theta} } {f(\theta)+g(\theta)}, \end{split} \end{equation} \begin{equation} \label{var_a12} \begin{split} \Delta(h;f,g)=\left\|{A} \xi- \hat{A} \xi \right\|_2^2 &=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}\frac{\left| A(e^{i\theta})g(\theta)+\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{R}\bold{a})_je^{ij\theta} \right|^2}{(f(\theta)+g(\theta))^2}f(\theta)d\theta\\ &+\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}\frac{\left| 
A(e^{i\theta})f(\theta)-\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{R}\bold{a})_je^{ij\theta} \right|^2}{(f(\theta)+g(\theta))^2}g(\theta)d\theta\\ &=\langle\bold{R}\bold{a},\bold{B}^{-1}\bold{R}\bold{a}\rangle+\langle\bold{Q}\bold{a},\bold{a}\rangle. \end{split} \end{equation} So, the following theorem holds true. \begin{thm}\label{thm2} Let $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ be mutually independent stationary random sequences which have absolutely continuous spectral measures and spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality} with $\alpha=2$. The optimal linear estimate $\hat{A} \xi$ of the functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ is calculated by formula \eqref{estim2}. The spectral characteristic ${h}(\theta)$ of the estimate is calculated by formula \eqref{sphar_a12}. The variance of the optimal estimate of the functional is calculated by formula \eqref{var_a12}. \end{thm} \subsection{Extrapolation problem. Stationary sequences. Observations without noise} Consider the problem of optimal linear estimation of the functional $$A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j = \int_{-\pi}^{\pi} A(e^{i\theta})dZ^{\xi}(\theta),\quad A(e^{i\theta})=\sum_{j = 0}^{\infty}a_j e^{ij\theta}, $$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a stationary stochastic sequence from observations of the sequence $\xi_k$ at points $k = -1, -2, \dots$. Suppose that the stationary stochastic sequence $\{\xi_k,k\in\mathbb Z\}$ has a spectral density $f(\theta)>0$ satisfying the minimality condition \eqref{minimality1} with $\alpha=2$.
The optimal linear estimate $\hat{A} \xi$ of the functional $A\xi$ is of the form \eqref{estim_f}, where the spectral characteristic ${h}(\theta)$ of the optimal linear estimate is calculated by the formula \begin{equation}\label{sphar_a22} {h}(\theta) = A(e^{i\theta}) - \left(\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{a})_je^{ij\theta} \right) \left(f(\theta)\right)^{-1}. \end{equation} The variance of the optimal estimate of the functional is calculated by the formula \begin{equation}\label{var_a22} \left\| {A} \xi- \hat{A} \xi \right\|_2^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left|\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{a})_je^{ij\theta} \right|^2 f^{-1}(\theta)d\theta=\langle\bold{B}^{-1}\vec{\bold{a}}, \vec{\bold{a}}\rangle, \end{equation} where $\bold{B}$ is the operator in the space $\ell_2$ determined by the matrix with elements \begin {equation*} B_{k,j}=\frac{1}{2\pi}\int\limits_{-\pi}^{\pi}f^{-1}(\theta)e^{i(j-k)\theta}d\theta, \quad k,j=0,1,\dots. \end{equation*} Since the spectral density $f(\theta)>0$ of the stationary sequence $\{\xi_k,k\in\mathbb Z\}$ satisfies the minimality condition \eqref{minimality1} with $\alpha=2$, the function $f^{-1}(\theta)$ admits the factorization \begin{equation}\label{factor} \frac{1}{f(\theta)}=\sum\limits_{p=-\infty}^{\infty}b_{p}e^{ip\theta}=\left|\sum\limits_{j=0}^{\infty}\psi_{j}e^{-ij\theta}\right|^2 =\left|\sum\limits_{j=0}^{\infty}\varphi_{j}e^{-ij\theta}\right|^{-2}. \end{equation} Denote by $\bold{\Psi}$ and $ \bold{\Phi}$ the linear operators in the space $\ell_2$ determined by the matrices with elements $\bold{\Psi}_{i,j}=\psi_{i-j}$, $\bold{\Phi}_{i,j}=\varphi_{i-j}$ for $0\leq j\leq i$, and $\bold{\Psi}_{i,j}=0$, $\bold{\Phi}_{i,j}=0$ for $0\leq i< j$. It can be shown that $\bold{\Psi}\bold{\Phi}=\bold{\Phi}\bold{\Psi}=I$.
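These identities are easy to check numerically. The following sketch (an illustration with a hypothetical factorization, not part of the derivation) takes $1/f(\theta)=|1-a e^{-i\theta}|^2$, so that $\psi_0=1$, $\psi_1=-a$ and $\varphi_j=a^j$, builds the truncated triangular matrices, and verifies $\bold{\Psi}\bold{\Phi}=\bold{\Phi}\bold{\Psi}=I$ together with the representation $\bold{B}=\bold{\Psi}^{'}\overline{\bold{\Psi}}$, $\bold{B}^{-1}=\overline{\bold{\Phi}}\bold{\Phi}^{'}$:

```python
import numpy as np

# Hypothetical example: 1/f(theta) = |1 - a e^{-i theta}|^2, so that
# psi = (1, -a, 0, ...) and phi_j = a^j; a and the truncation order n are ours.
a, n = 0.5, 8
psi = np.zeros(n); psi[0], psi[1] = 1.0, -a
phi = a ** np.arange(n)

def lower_toeplitz(c):
    """Lower-triangular Toeplitz matrix M[i, j] = c[i - j] for i >= j, else 0."""
    return np.array([[c[i - j] if i >= j else 0.0 for j in range(len(c))]
                     for i in range(len(c))])

Psi, Phi = lower_toeplitz(psi), lower_toeplitz(phi)

# Psi Phi = Phi Psi = I: the coefficient sequences are convolution inverses.
assert np.allclose(Psi @ Phi, np.eye(n))
assert np.allclose(Phi @ Psi, np.eye(n))

# Consequently B = Psi' conj(Psi) is inverted by conj(Phi) Phi'.
B = Psi.T @ Psi.conj()
assert np.allclose(B @ (Phi.conj() @ Phi.T), np.eye(n))
print("factorization identities verified at truncation order", n)
```

Because $\psi$ has only finitely many nonzero coefficients here, the identities hold exactly at every truncation order.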
The operator $\bold{B}$ can be represented in the form $\bold{B}=\bold{\Psi}^{'}\overline{\bold{\Psi}}$, and hence the operator $\bold{B}^{-1}$ can be represented in the form $\bold{B}^{-1}=\overline{\bold{\Phi}}\bold{\Phi}^{'}$. As a corollary we can represent formula \eqref{var_a22} in the form \begin{equation}\label{var_a222} \left\| {A} \xi- \hat{A} \xi \right\|_2^2 =\langle\bold{B}^{-1}\vec{\bold{a}}, \vec{\bold{a}}\rangle =\langle\overline{\bold{\Phi}}\bold{\Phi}^{'}\vec{\bold{a}}, \vec{\bold{a}}\rangle =\langle\bold{\Phi}^{'}\vec{\bold{a}}, \bold{\Phi}^{'}\vec{\bold{a}}\rangle =\langle\bold{A}\vec{\varphi}, \bold{A}\vec{\varphi}\rangle =\|\bold{A}\vec{\varphi}\|^2, \end{equation} where the linear operator $\bold{A}$ in the space $\ell_2$ is determined by the matrix with elements $\bold{A}_{i,j}=a_{i+j}$, $i,j=0,1,\dots$, and the vector $\vec{\varphi}=({\varphi}_0,{\varphi}_1, {\varphi}_2,\dots)$ is determined by the elements ${\varphi}_j,j=0,1,\dots,$ of the factorization \eqref{factor}. So, the following theorem holds true. \begin{thm}\label{thm3} Let $\{\xi_k,k\in\mathbb Z\}$ be a stationary stochastic sequence which has an absolutely continuous spectral measure and the spectral density $f(\theta)>0$ satisfying the minimality condition \eqref{minimality1} with $\alpha=2$. The optimal linear estimate $\hat{A} \xi$ of the functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence $\{\xi_k,k\in\mathbb Z\}$ from observations of the sequence $\{\xi_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ is of the form \eqref{estim_f}, where the spectral characteristic ${h}(\theta)$ of the optimal linear estimate $\hat{A} \xi$ of the functional is calculated by the formula \eqref{sphar_a22}. The variance of the optimal estimate of the functional can be calculated by formula \eqref{var_a22} as well as by formula \eqref{var_a222}.
\end{thm} \begin{exm} \label{example1} Consider the problem of optimal linear estimation of the functional $$A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j $$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a stationary stochastic sequence from observations of the sequence $\xi_k$ at points $k = -1, -2, \dots$. Suppose that the stationary stochastic sequence $\{\xi_k,k\in\mathbb Z\}$ has the spectral density $$f(\theta) = |1 - \alpha e^{-i\theta}|^{-2},\quad |\alpha|<1.$$ The function $f^{-1}(\theta)= |1 - \alpha e^{-i\theta}|^{2}$ admits the factorization $$f^{-1}(\theta) = b_{-1} e^{-i\theta} + b_{0}+b_{1} e^{i\theta} =\left|\sum\limits_{j=0}^{\infty}\psi_{j}e^{-ij\theta}\right|^2 =\left|\sum\limits_{j=0}^{\infty}\varphi_{j}e^{-ij\theta}\right|^{-2},$$ where $b_{0} = 1+|\alpha|^2$, $b_{-1} = -\alpha$, $b_{1} = -\bar{\alpha}$, $b_p=0$ for $|p|>1$, are the Fourier coefficients of the function $f^{-1}(\theta)$; $\psi_{0} = 1$, $\psi_{1} = -\alpha$, $\psi_j=0$, $j>1$; $\varphi_{j} = \alpha ^{j}$, $j\geq 0$; $b_p=\sum\limits_{k=0}^{\infty}\psi_{k}\overline{\psi}_{k+p}$, $p\geq0$, and $b_{-p}=\overline{b_p}$, $p\geq0$.
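These coefficients can be checked numerically. The following sketch (our illustration, with a hypothetical value of $\alpha$) computes the Fourier coefficients of $f^{-1}(\theta)$ on a uniform grid and compares them with the stated values:

```python
import numpy as np

a = 0.4 + 0.3j                     # hypothetical alpha with |alpha| < 1
N = 4096
theta = 2 * np.pi * np.arange(N) / N
finv = np.abs(1 - a * np.exp(-1j * theta)) ** 2   # f^{-1}(theta)

def b(p):
    """Fourier coefficient b_p of f^{-1}(theta) = sum_p b_p e^{ip theta}."""
    return np.mean(finv * np.exp(-1j * p * theta))

assert np.isclose(b(0), 1 + abs(a) ** 2)    # b_0 = 1 + |alpha|^2
assert np.isclose(b(1), -np.conj(a))        # b_1 = -conj(alpha)
assert np.isclose(b(-1), -a)                # b_{-1} = -alpha
assert np.isclose(b(2), 0, atol=1e-12)      # b_p = 0 for |p| > 1

# b_p = sum_k psi_k conj(psi_{k+p}) with psi = (1, -alpha):
psi = np.array([1.0, -a])
assert np.isclose(b(1), psi[0] * np.conj(psi[1]))
print("Fourier coefficients of f^{-1} match the factorization")
```

The grid average is exact here (up to rounding) because $f^{-1}(\theta)$ is a trigonometric polynomial of degree one.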
The optimal linear estimate $\hat{A} \xi$ of the functional $A\xi$ is of the form \eqref{estim_f}, where the spectral characteristic ${h}(\theta)$ of the optimal linear estimate $\hat{A} \xi$ of the functional $A\xi$ is calculated by the formula \begin{equation}\label{sphar_a2222} {h}(\theta) = \left(\sum_{j = 0}^{\infty}a_j e^{ij\theta}\right) - \left(\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{a})_je^{ij\theta} \right) \left(b_{-1} e^{-i\theta} + b_0+b_1 e^{i\theta}\right). \end{equation} Making use of the relation $\bold{B}^{-1}=\overline{\bold{\Phi}}\bold{\Phi}^{'}$ we find that the matrix $\bold{B}^{-1}$ is of the form \begin{equation*} B^{-1}= \left(\begin{array}{ccccc} 1 & \alpha & \alpha ^2 & \alpha ^3 & \ldots \\ \overline{\alpha} & \overline{\alpha}\alpha+1 & \overline{\alpha}\alpha^{2}+\alpha & \overline{\alpha}\alpha^{3}+\alpha^{2} & \ldots \\ \overline{\alpha^2} & \overline{\alpha^2}\alpha+\overline{\alpha} & \overline{\alpha^2}\alpha^2+\overline{\alpha}\alpha+1 & \overline{\alpha^2}\alpha^3+\overline{\alpha}\alpha^2+\alpha & \ldots \\ \overline{\alpha^3} & \overline{\alpha^3}\alpha+\overline{\alpha^2} & \overline{\alpha^3}\alpha^2+\overline{\alpha^2}\alpha+\overline{\alpha} & \overline{\alpha^3}\alpha^3+\overline{\alpha^2}\alpha^2+\overline{\alpha}\alpha +1& \ldots \\ \ldots \end{array}\right). \end{equation*} Consider the problem under the condition $a_j=0,j\geq3$. In this case the coefficients $c_j=\left(\bold{B}^{-1}\vec{\bold{a}}\right)_j$, $j =0, 1, 2,\ldots,$ are as follows: $$ c_0=a_0+a_1\alpha+a_2\alpha^2, $$ $$ c_1=a_0\overline{\alpha}+a_1(\overline{\alpha}\alpha+1)+a_2(\overline{\alpha}\alpha^{2}+\alpha), $$ $$ c_2=a_0\overline{\alpha^2}+a_1(\overline{\alpha^2}\alpha+\overline{\alpha})+a_2(\overline{\alpha^2}\alpha^2+\overline{\alpha}\alpha+1), $$ $$ c_j=a_0\overline{\alpha^j}+a_1(\overline{\alpha^j}\alpha+\overline{\alpha^{j-1}})+a_2(\overline{\alpha^j}\alpha^2+\overline{\alpha^{j-1}}\alpha+\overline{\alpha^{j-2}}),\quad j\geq3.
$$ The spectral characteristic ${h}_2(\theta)$ of the optimal linear estimate $\hat{A}_2 \xi$ of the functional $A_2\xi=a_0\xi_0+a_1\xi_1+a_2\xi_2$ is calculated by the formula $$ {h}_2(\theta) = \left(a_0+a_1 e^{i\theta} + a_2 e^{i2\theta} \right) - \left(\sum\limits_{j=0}^{\infty}c_je^{ij\theta} \right) \left(b_{-1} e^{-i\theta} + b_0+b_1 e^{i\theta}\right)= $$ $$ =-c_0b_{-1}e^{-i\theta}=\left(a_0\alpha +a_1\alpha^2+a_2\alpha^3 \right)e^{-i\theta}. $$ The mean square error of the estimate is calculated by the formula $$ \left\| {A}_2 \xi- \hat{A}_2 \xi \right\|_2^2=\langle\bold{A}\vec{\varphi}, \bold{A}\vec{\varphi}\rangle =\langle\bold{B}^{-1}\vec{\bold{a}}, \vec{\bold{a}}\rangle= $$ $$ = a_0^2+a_0a_1(\alpha+\overline{\alpha})+a_1^2(1+|\alpha|^2)+ a_0a_2(\alpha^2+\overline{\alpha^2})+ a_1a_2(\alpha+\overline{\alpha})(1+|\alpha|^2)+ a_2^2(1+|\alpha|^2+|\alpha|^4). $$ \end{exm} \section{Extrapolation problem. Minimax approach} The value of the error $$\Delta\left(h(f,g);f,g\right):= \left\| \hat{A} \xi- A \xi \right\|_\alpha^\alpha $$ and the spectral characteristic $h(f,g):={h}(\theta)$ of the optimal estimate $\hat{A}\xi$ of the functional $A\xi$ can be calculated by the proposed formulas only in the case where we know the spectral densities $f(\theta)$ and $g(\theta)$ of the harmonizable symmetric $\alpha$-stable stochastic sequences $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$. However, usually we do not have exact values of the spectral densities of stochastic sequences, while we often know a set $D=D_f\times D_g$ of admissible spectral densities. In this case we can apply the minimax-robust method of estimation to the extrapolation problem. This method lets us find an estimate that minimizes the maximum of the errors for all spectral densities from the given set $D=D_f\times D_g$ of admissible spectral densities simultaneously (see the books by Moklyachuk (2008), Moklyachuk and Masyutka (2012), Moklyachuk and Golichenko (2016) for more details).
\begin{definition} For a given class of spectral densities $D=D_f\times D_g$ the spectral densities $f_0(\theta)\in D_f$, $g_0(\theta)\in D_g$ are called the least favorable in $D=D_f\times D_g$ for the optimal linear estimation of the functional $A \xi$ if the following relation holds true $$\Delta\left(f_0,g_0\right)=\Delta\left(h\left(f_0,g_0\right);f_0,g_0\right)=\max\limits_{(f,g)\in D_f\times D_g}\Delta\left(h\left(f,g\right);f,g\right).$$ \end{definition} \begin{definition} For a given class of spectral densities $D=D_f\times D_g$ the spectral characteristic $h^0=h(f_0,g_0)$ of the optimal estimate $\hat{A}\xi$ of the functional $A\xi$ is called minimax (robust) for the optimal linear estimation of the functional $A \xi$ if the following relations hold true $$h^0\in H_D= \bigcap\limits_{(f,g)\in D_f\times D_g} L^{\alpha}(f+g),$$ $$\min\limits_{h\in H_D}\max\limits_{(f,g)\in D}\Delta\left(h;f,g\right)=\max\limits_{(f,g)\in D}\Delta\left(h^0;f,g\right).$$ \end{definition} The least favorable spectral densities $f_0(\theta)$, $g_0(\theta)$ and the minimax spectral characteristic $h^0=h(f_0,g_0)$ form a saddle point of the function $\Delta \left(h;f,g\right)$ on the set $H_D \times D$. The saddle point inequalities $$\Delta\left(h;f_0,g_0\right)\geq\Delta\left(h^0;f_0,g_0\right)\geq \Delta\left(h^0;f,g\right) $$ $$ \forall h \in H_D, \forall f \in D_f, \forall g \in D_g$$ hold true if $h^0=h(f_0,g_0)$ and $h(f_0,g_0)\in H_D,$ where $(f_0,g_0)$ is a solution to the constrained optimization problem \begin{equation} \label{condextr} \max\limits_{(f,g)\in D_f\times D_g}\Delta\left(h(f_0,g_0);f,g\right)=\Delta\left(h(f_0,g_0);f_0,g_0\right), \end{equation} \begin{equation}\label{delta41}\begin{split} \Delta\left(h(f_0,g_0);f,g\right)&= \left\|{A} \xi- \hat{A} \xi \right\|_\alpha^\alpha\\ & = \int_{-\pi}^{\pi} \left| A(e^{i\theta}) - {h}^0(\theta) \right|^{\alpha}f(\theta)d \theta + \int_{-\pi}^{\pi} \left|{h}^0(\theta) \right|^{\alpha}g(\theta)d \theta.
\end{split}\end{equation} The constrained optimization problem \eqref{condextr} is equivalent to the unconstrained optimization problem \begin{equation} \label{8} \Delta_D(f,g)=-\Delta(h(f_0,g_0);f,g)+\delta(f,g\left|D_f\times D_g\right.)\rightarrow \inf, \end{equation} where $\delta(f,g\left|D_f\times D_g\right.)$ is the indicator function of the set $D=D_f\times D_g$. A solution $(f_0,g_0)$ to the problem (\ref{8}) is characterized by the condition $0 \in \partial\Delta_D(f_0,g_0),$ where $\partial\Delta_D(f_0,g_0)$ is the subdifferential of the convex functional $\Delta_D(f,g)$ at the point $(f_0,g_0)$. This condition makes it possible to find the least favorable spectral densities in some special classes of spectral densities $D=D_f\times D_g$ (Ioffe and Tikhomirov, 1979; Pshenychnyj, 1971; Rockafellar, 1997; Moklyachuk, 2008b). Note that the form of the functional $\Delta(h(f_0,g_0);f,g)$ is convenient for applying the method of Lagrange multipliers to find a solution to the problem \eqref{condextr}. Making use of the method of Lagrange multipliers and the form of the subdifferentials of the indicator functions, we describe relations that determine the least favorable spectral densities in some special classes of spectral densities. Summing up the derived formulas and the introduced definitions, we come to the conclusion that the following lemmas hold true. \begin{lemma} \label{lem41} Let $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ be mutually independent harmonizable symmetric $\alpha$-stable random sequences which have absolutely continuous spectral measures and the spectral densities $f_0(\theta)>0$ and $g_0(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality}. Let the spectral densities $(f_0,g_0)\in D_f\times D_g$ give a solution to the constrained optimization problem \eqref{condextr}.
The spectral densities $(f_0,g_0)$ are the least favorable spectral densities in $D_f\times D_g$ and $h^0=h(f_0,g_0)$ is the minimax spectral characteristic of the optimal linear estimate $\hat{A} \xi$ of the functional $A \xi$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ if $h^0=h(f_0,g_0)\in H_D$. \end{lemma} \begin{lemma} \label{lem42} Let $\{\xi_k,k\in\mathbb Z\}$ be a harmonizable symmetric $\alpha$-stable random sequence which has an absolutely continuous spectral measure and the spectral density $f_0(\theta)>0$ satisfying the minimality condition \eqref{minimality1}. Let the spectral density $f_0\in D_f$ give a solution to the constrained optimization problem \begin{equation} \label{condextr42} \max\limits_{f\in D_f}\Delta\left(h(f_0);f\right)=\Delta\left(h(f_0);f_0\right), \end{equation} \begin{equation} \label{delta42} \Delta\left(h(f_0);f\right)=\left\| {A} \xi- \hat{A} \xi \right\|_\alpha^\alpha = \int_{-\pi}^{\pi} \left|\left(C^0(e^{i\theta}) \right)^{<\frac{1}{\alpha - 1}>} (f_0(\theta))^{\frac{-1}{\alpha - 1}} \right|^{\alpha} f(\theta) d \theta. \end{equation} The spectral density $f_0$ is the least favorable spectral density in $D_f$ and $h^0=h(f_0)$ is the minimax spectral characteristic of the optimal linear estimate $\hat{A} \xi$ of the functional $A \xi$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ if $h^0=h(f_0)\in H_D$. \end{lemma} \begin{lemma} \label{lem43} Let $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ be mutually independent stationary random sequences which have absolutely continuous spectral measures and the spectral densities $f_0(\theta)>0$ and $g_0(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality} with $\alpha=2$.
Let the spectral densities $(f_0,g_0)\in D_f\times D_g$ give a solution to the constrained optimization problem \begin{equation} \label{condextr43} \max\limits_{(f,g)\in D_f\times D_g}\Delta\left(h(f_0,g_0);f,g\right)=\Delta\left(h(f_0,g_0);f_0,g_0\right), \end{equation} \begin{equation} \label{delta43} \begin{split} \Delta\left(h(f_0,g_0);f,g\right) &=\frac{1}{2\pi}\int\limits_{-\pi}^{\pi}\frac{\left| A(e^{i\theta})g_0(\theta)+\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta} \right|^2}{(f_0(\theta)+g_0(\theta))^2}f(\theta)d\theta\\ &+\frac{1}{2\pi}\int\limits_{-\pi}^{\pi}\frac{\left| A(e^{i\theta})f_0(\theta)-\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta} \right|^2}{(f_0(\theta)+g_0(\theta))^2}g(\theta)d\theta.\\ \end{split} \end{equation} The spectral densities $(f_0,g_0)$ are the least favorable spectral densities in $D_f\times D_g$ and $h^0=h(f_0,g_0)$ is the minimax spectral characteristic of the optimal linear estimate $\hat{A} \xi$ of the functional $A \xi$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ if $h^0=h(f_0,g_0)\in H_D$. \end{lemma} \begin{lemma} \label{lem44} Let $\{\xi_k,k\in\mathbb Z\}$ be a stationary random sequence which has an absolutely continuous spectral measure and the spectral density $f_0(\theta)>0$ satisfying the minimality condition \eqref{minimality1} with $\alpha=2$. Let the spectral density $f_0\in D_f$ give a solution to the constrained optimization problem \begin{equation} \label{condextr44} \max\limits_{f\in D_f}\Delta\left(h(f_0);f\right)=\Delta\left(h(f_0);f_0\right), \end{equation} \begin{equation}\label{delta44} \Delta\left(h(f_0);f\right) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left|\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{a})_je^{ij\theta} \right|^2 f_0^{-2}(\theta)f(\theta)d\theta.
\end{equation} The spectral density $f_0$ is the least favorable spectral density in $D_f$ and $h^0=h(f_0)$ is the minimax spectral characteristic of the optimal linear estimate $\hat{A} \xi$ of the functional $A \xi$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ if $h^0=h(f_0)\in H_D$. \end{lemma} \section{Least favorable spectral densities in the class $D_f^{\beta}\times D_g^{\varepsilon}$} Consider the problem of optimal estimation of the linear functional $A \xi$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a random sequence from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are mutually independent harmonizable symmetric $\alpha$-stable random sequences which have spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality} from the class of admissible spectral densities $D=D_f^{\beta}\times D_g^{\varepsilon}$, where $$ D_f^{\beta} = \left\{f(\theta)\left|\int\limits_{-\pi}^{\pi} (f(\theta))^{\beta}d\theta= P_1\right.\right\},$$ $$D_g^{\varepsilon} = \left\{g(\theta)\left| g(\theta)=(1-{\varepsilon}) g_1(\theta)+ {\varepsilon}w(\theta),\int\limits_{-\pi}^{\pi} g(\theta)d\theta= P_2\right.\right\}.
$$ Assume that the spectral densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ and the functions $h_f(f_0,g_0)$, $h_g(f_0,g_0)$, determined by the equations \begin{equation} \label{hf51} h_f(f_0,g_0)=\left| A(e^{i\theta}) - {h}^0(\theta) \right|^{\alpha}, \end{equation} \begin{equation} \label{hg51} h_g(f_0,g_0)=\left|{h}^0(\theta) \right|^{\alpha}, \end{equation} \begin{equation}\label{sp51} \left( A(e^{i\theta}) - {h}^0(\theta) \right)^{<\alpha - 1>}f_0(\theta) - \left({h}^0(\theta) \right)^{<\alpha - 1>}g_0(\theta) = C^0(e^{i\theta}), \end{equation} \begin{equation}\label{speq51} \int_{-\pi}^{\pi} {e^{-i\theta k}\,\, {h}^0(\theta)} d\theta = 0,\quad k = 0, 1,\dots \end{equation} are bounded. Under these conditions the functional $\Delta\left(h(f_0,g_0);f,g\right)$, determined by formula \eqref{delta41}, is linear and continuous in the $L_1\times L_1$ space and we can apply the method of Lagrange multipliers to derive that the least favorable densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ satisfy the equations \begin{equation} \label{eqf51} \left| A(e^{i\theta}) - {h}^0(\theta) \right|^{\alpha}= \gamma_1 \left(f_0(\theta)\right)^{\beta - 1}, \end{equation} \begin{equation} \label{eqg51} \left|{h}^0(\theta) \right|^{\alpha}= \left(\varphi_1 (\theta)+\gamma_2 \right), \end{equation} where $\varphi_1 (\theta)\leq 0$ and $\varphi_1(\theta)= 0$ if $g_0(\theta)\geq (1-{\varepsilon}) g_1(\theta)$; $\gamma_1,$ $\gamma_2$ are the Lagrange multipliers which are determined from the conditions $$ \int\limits_{-\pi}^{\pi} (f_0(\theta))^{\beta}d\theta= P_1,\quad \int\limits_{-\pi}^{\pi} g_0(\theta)d\theta= P_2. $$ Thus, the following statement holds true.
\begin{thm}\label{thm51} Let the spectral densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ satisfy the minimality condition \eqref{eq:minimality} and let the functions $h_f(f_0,g_0)$, $h_g(f_0,g_0)$ determined by formulas \eqref{hf51}, \eqref{hg51}, \eqref{sp51}, \eqref{speq51} be bounded. The spectral densities $f_0(\theta)$ and $g_0(\theta)$ are the least favorable in the class $D=D_f^{\beta}\times D_g^{\varepsilon}$ for the optimal linear estimation of the functional $A \xi$ if they satisfy equations \eqref{eqf51}, \eqref{eqg51} and determine a solution to the optimization problem \eqref{condextr}. The minimax-robust spectral characteristic $h(f_0,g_0)$ of the optimal estimate of the functional $A \xi$ is determined by formulas \eqref{sp51}, \eqref{speq51}. \end{thm} \subsection{Least favorable spectral densities. Observations without noise} Consider the problem of optimal linear estimation of the functional $A \xi= \sum_{j = 0}^{\infty} a_j \xi_j$ that depends on the unknown values $\xi_j, j = 0, 1, \ldots$, from observations of the sequence $\{\xi_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ is a harmonizable symmetric $\alpha$-stable random sequence which has the spectral density $f(\theta)>0$ satisfying the minimality condition \eqref{minimality1} from the class of admissible spectral densities $D_f^{\beta}$. Assume that the spectral density $f_0\in D_f^{\beta}$ and that the function $h_f(f_0)$, determined by the equation \begin{equation} \label{hf52} h_f(f_0)= \left|\left(C^0(e^{i\theta}) \right)^{<\frac{1}{\alpha - 1}>} (f_0(\theta))^{\frac{-1}{\alpha - 1}} \right|^{\alpha} \end{equation} is bounded.
Under this condition the functional \eqref{delta42} is linear and continuous in the $L_1$ space and we can apply the method of Lagrange multipliers to find a solution to the constrained optimization problem \eqref{condextr42} and to derive that the least favorable density $f_0\in D_f^{\beta}$ satisfies the equation \begin{equation} \label{eqf52} \left|\left(C^0(e^{i\theta}) \right)^{<\frac{1}{\alpha - 1}>} (f_0(\theta))^{\frac{-1}{\alpha - 1}} \right|^{\alpha}= \gamma_1 \left(f_0(\theta)\right)^{\beta - 1}, \end{equation} where $\gamma_1$ is the Lagrange multiplier. From this equation we find that the least favorable density is of the form \begin{equation}\label{eqf522} f_0(\theta) = C\left|\sum_{j = 0}^{\infty} c_j e^{-i j \theta} \right|^\frac{\alpha}{\alpha + (\alpha-1)(\beta - 1)}. \end{equation} The unknown constants are determined from the optimization problem \eqref{condextr42} and from the condition $$ \int\limits_{-\pi}^{\pi} (f_0(\theta))^{\beta}d\theta= P_1. $$ In the case $\beta=1$ the least favorable density is of the form \begin{equation}\label{eqf5222} f_0(\theta) = C\left|\sum_{j = 0}^{\infty} c_j e^{-i j \theta} \right|. \end{equation} The following statement holds true. \begin{thm}\label{thm52} Let the spectral density $f_0\in D_f^{\beta}$ satisfy the minimality condition \eqref{minimality1} and let the function $h_f(f_0)$ determined by formula \eqref{hf52} be bounded. The spectral density $f_0(\theta)$ is the least favorable in the class $D_f^{\beta}$ for the optimal linear estimation of the functional $A \xi$ if it is of the form \eqref{eqf522} and determines a solution to the optimization problem \eqref{condextr42}. The minimax-robust spectral characteristic $h(f_0)$ of the optimal estimate of the functional $A \xi$ is determined by formula \eqref{spchar_f}. \end{thm} \subsection{Least favorable spectral densities.
Stationary sequences} Consider the problem of the optimal estimation of the linear functional $A \xi$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a random sequence $\{\xi_k,k\in\mathbb Z\}$ from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points of time $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are mutually independent stationary stochastic sequences which have spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality} with $\alpha=2$ from the class of admissible spectral densities $D=D_f^{\beta}\times D_g^{\varepsilon}$. Assume that the spectral densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ and the functions $h_f(f_0,g_0)$, $h_g(f_0,g_0)$, determined by the equations \begin{equation} \label{hf53} h_f(f_0,g_0)=\frac{\left|A(e^{i\theta})g_0(\theta)+\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta}\right|^2}{(f_0(\theta)+g_0(\theta))^2}, \end{equation} \begin{equation} \label{hg53} h_g(f_0,g_0)=\frac{\left|A(e^{i\theta})f_0(\theta)-\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta} \right|^2}{(f_0(\theta)+g_0(\theta))^2} \end{equation} are bounded.
Under these conditions the functional \eqref{delta43} is linear and continuous in the $L_1\times L_1$ space and we can apply the method of Lagrange multipliers to find a solution to the constrained optimization problem \eqref{condextr43} and to derive that the least favorable densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ satisfy the equations \begin{equation} \label{eqf53} {\left|A(e^{i\theta})g_0(\theta)+\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta}\right|^2}=\gamma_1 {(f_0(\theta)+g_0(\theta))^2} \left(f_0(\theta)\right)^{\beta - 1}, \end{equation} \begin{equation} \label{eqg53} {\left|A(e^{i\theta})f_0(\theta)-\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta} \right|^2}= {(f_0(\theta)+g_0(\theta))^2} \left(\varphi_1 (\theta)+\gamma_2 \right), \end{equation} where $\varphi_1 (\theta)\leq 0$ and $\varphi_1(\theta)= 0$ if $g_0(\theta)\geq (1-{\varepsilon}) g_1(\theta)$; $\gamma_1,$ $\gamma_2$ are the Lagrange multipliers which are determined from the conditions $$ \int\limits_{-\pi}^{\pi} (f_0(\theta))^{\beta}d\theta= P_1,\quad \int\limits_{-\pi}^{\pi} g_0(\theta)d\theta= P_2. $$ Thus, the following statement holds true. \begin{thm}\label{thm53} Let the spectral densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ satisfy the minimality condition \eqref{eq:minimality} with $\alpha=2$ and let the functions $h_f(f_0,g_0)$, $h_g(f_0,g_0)$ determined by formulas \eqref{hf53}, \eqref{hg53} be bounded. The spectral densities $f_0(\theta)$ and $g_0(\theta)$ are the least favorable in the class $D=D_f^{\beta}\times D_g^{\varepsilon}$ for the optimal linear estimation of the functional $A \xi$ if they satisfy equations \eqref{eqf53}, \eqref{eqg53} and determine a solution to the constrained optimization problem \eqref{condextr43}. The minimax-robust spectral characteristic $h(f_0,g_0)$ of the optimal estimate of the functional $A \xi$ is determined by formula \eqref{sphar_a12}.
\end{thm} \subsection{Least favorable spectral densities. Stationary sequences. Observations without noise} Consider the problem of the optimal linear estimation of the functional $A \xi$ that depends on the unknown values of a random sequence $\{\xi_j,j\in\mathbb Z\}$ from observations of the sequence at points $k = -1, -2, \dots$, where the stationary random sequence $\{\xi_k,k\in\mathbb Z\}$ has the spectral density $f(\theta)>0$ satisfying the minimality condition \eqref{minimality1} with $\alpha=2$ from the class of admissible spectral densities $D_f^{\beta}$. Assume that the spectral density $f_0\in D_f^{\beta}$ and that the function $h_f(f_0)$, determined by the equation \begin{equation} \label{hf54} h_f(f_0)=\left|\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{a})_je^{ij\theta} \right|^2f_0^{-2}(\theta) \end{equation} is bounded. Under this condition the functional \eqref{delta44} is linear and continuous in the $L_1$ space and we can apply the method of Lagrange multipliers to find a solution to the constrained optimization problem \eqref{condextr44} and to derive that the least favorable density $f_0\in D_f^{\beta}$ satisfies the equation \begin{equation} \label{eqf54} \left|\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{a})_je^{ij\theta} \right|^2f_0^{-2}(\theta)= \gamma_1 \left(f_0(\theta)\right)^{\beta - 1}, \end{equation} where $\gamma_1$ is the Lagrange multiplier which is determined from the condition $$ \int\limits_{-\pi}^{\pi} (f_0(\theta))^{\beta}d\theta= P_1. $$ Thus, the following statement holds true. \begin{thm}\label{thm54} Let the spectral density $f_0\in D_f^{\beta}$ satisfy the minimality condition \eqref{minimality1} with $\alpha=2$ and let the function $h_f(f_0)$ determined by formula \eqref{hf54} be bounded.
The spectral density $f_0(\theta)$ is the least favorable in the class $D_f^{\beta}$ for the optimal linear estimation of the functional $A \xi$ if it satisfies equation \eqref{eqf54} and determines a solution to the constrained optimization problem \eqref{condextr44}. The minimax-robust spectral characteristic $h(f_0)$ of the optimal estimate of the functional $A \xi$ is determined by formula \eqref{sphar_a22}. \end{thm} In the case of $\beta=1$ the set of admissible spectral densities $D=D_f$ is of the form $$ D_f = \left\{f(\theta)\left| \frac{1}{2\pi} \int\limits_{-\pi}^{\pi} f(\theta)d\theta= P\right.\right\}. $$ Stationary sequences with the spectral densities from such $D_f$ have finite variance $E|\xi_j|^2= P$ and can be represented as a sum of a regular sequence and a singular sequence. The least favorable spectral density in $D_f$ is the density of a regular sequence, since singular sequences have zero mean square error of extrapolation. Spectral densities from $D_f$ of the regular sequences admit the factorization \begin{equation}\label{factor2222} f(\theta) = \left|\sum\limits_{j=0}^{\infty}\varphi_{j}e^{-ij\theta}\right|^{2},\quad \sum\limits_{j=0}^{\infty}|\varphi_{j}|^{2}= P, \end{equation} and we can use the following optimization problem to find the least favorable spectral density in the set $D_f$ \begin{equation}\label{var_a2222} \|\bold{A}\vec{\varphi}\|^2\to\max,\quad \|\vec{\varphi}\|^2= \sum\limits_{j=0}^{\infty}|\varphi_{j}|^{2}= P, \end{equation} where the linear operator $\bold{A}$ in the space $\ell_2$ is determined by the matrix with elements $\bold{A}_{i,j}=a_{i+j}$, $i,j=0,1,\dots$, and the vector $\vec{\varphi}=({\varphi}_0,{\varphi}_1, {\varphi}_2,\dots)$ is determined by the elements ${\varphi}_j,j=0,1,\dots,$ of the factorization \eqref{factor2222}.
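For a functional with finitely many nonzero coefficients, this optimization problem reduces to a symmetric eigenvalue computation for the finite Hankel matrix with entries $a_{i+j}$. The following numerical sketch (our illustration, with hypothetical coefficients $a_0=1$, $a_1=2$ and $P=1$) finds the maximizing eigenvector and the maximal value of $\|\bold{A}\vec{\varphi}\|^2$:

```python
import numpy as np

# Hypothetical coefficients a_0 = 1, a_1 = 2 (a_j = 0 for j >= 2) and P = 1.
a0, a1 = 1.0, 2.0
A = np.array([[a0, a1],      # Hankel matrix A[i, j] = a_{i+j}
              [a1, 0.0]])

w, V = np.linalg.eigh(A)
i = np.argmax(np.abs(w))
nu, phi = w[i], V[:, i]      # greatest eigenvalue and its unit eigenvector

assert np.isclose(nu, (1 + np.sqrt(17)) / 2)
assert np.isclose(abs(phi[0]), np.sqrt(0.5 + 0.5 / np.sqrt(17)))
assert np.isclose(abs(phi[1]), np.sqrt(0.5 - 0.5 / np.sqrt(17)))

# The maximal value of ||A phi||^2 under ||phi||^2 = 1 is nu^2:
assert np.isclose(np.linalg.norm(A @ phi) ** 2, (9 + np.sqrt(17)) / 2)
print("nu =", nu)
```

Since $\vec{\varphi}$ is an eigenvector, $\|\bold{A}\vec{\varphi}\|^2=\nu^2\|\vec{\varphi}\|^2$, so for general $P$ the maximal mean square extrapolation error equals $\nu^2 P$.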
Solution to the optimization problem \eqref{var_a2222} gives the eigenvector $\vec{\varphi}^0=({\varphi}_0^0,{\varphi}_1^0, {\varphi}_2^0,\dots)$ which corresponds to the greatest eigenvalue $\nu^0$ of the linear operator $\bold{A}$. We present this result as a theorem. \begin{thm}\label{thm544} The spectral density $f_0(\theta)$ is the least favorable in the class $D_f$ for the optimal linear estimation of the functional $A \xi$ if it is of the form \eqref{factor2222}, where $\vec{\varphi}^0=({\varphi}_0^0,{\varphi}_1^0, {\varphi}_2^0,\dots)$ is the eigenvector of the linear operator $\bold{A}$ which corresponds to the greatest eigenvalue $\nu^0$ of the operator. The optimal minimax linear estimate $ \hat{A} { \xi}$ of the functional $A {\xi}=\sum_{j=0}^{ \infty} {a}_j\xi_j$ is of the form \[ \hat{A} { \xi}= \sum_{j=0}^{ \infty} {a}_j \left[ \sum_{u=- \infty}^{-1} \varphi_{j-u}^0 { \varepsilon}_u \right] , \] \noindent where $ \varepsilon_u$ is a standard stationary sequence with orthogonal values (a ``white noise'' sequence), and the sequence $ \{\varphi_u^0: u= 0,1,\dots \}$ is uniquely determined by the coordinates of the eigenvector of the operator $\bold{A}$ that corresponds to the greatest eigenvalue $\nu^0$ and by the condition $E \left| { \xi}_j \right|^{2} =P$. \end{thm} \begin{exm} \label{exm2.1} Consider the problem of optimal linear estimation of the functional \[A_{1} { \xi}= \xi (0)+ 2\xi (1) \] that depends on the unknown values $ { \xi}(0)$, $ { \xi}(1)$ of a stationary sequence $ { \xi}(j)$ that satisfies the conditions \[ E{ \xi}(j)=0,\quad E\left | { \xi}(j) \right |^{2} \le P, \] based on observations of the sequence $ { \xi}(j)$ at points $j=-1,-2,\dots$. \noindent The operator $\bold{A}_1$ is determined by the matrix with elements $a_{i+j}$, $i,j=0,1$, and its eigenvalues are equal to $(1\pm \sqrt{17})/2 $. So the greatest eigenvalue is $ \nu_{1} =(1 + \sqrt{17})/2 $.
The eigenvector corresponding to the eigenvalue $ \nu_{1} =(1+ \sqrt{17})/2 $ is of the form $ \vec\varphi = \left \{ \varphi (0), \varphi(1) \right \}$, where \[ \varphi (0)= \sqrt{\frac{1}{2}+ \frac{1}{2\sqrt{17}}},\quad \varphi (1)= \sqrt{\frac{1}{2}- \frac{1}{2\sqrt{17}}} . \] The spectral density which is least favourable in the class $D_f$ for the optimal linear estimation of the functional $A_1 \xi$ is of the form $$ f(\theta) = P\left|\varphi_{0}+\varphi_{1}e^{-i\theta}\right|^{2},\quad \left|\varphi_{0}\right|^{2}+\left|\varphi_{1}\right|^{2}= 1, $$ which is the spectral density of the least favourable stationary sequence $ { \xi}(j)$, a moving average sequence of the form \[ { \xi}(j)= \sqrt{P}\varphi (0) \varepsilon (j)+ \sqrt{P}\varphi (1) \varepsilon (j-1)= \] \[= \sqrt{P}\sqrt{\frac{1}{2}+ \frac{1}{2\sqrt{17}}} \,\,\varepsilon(j) + \sqrt{P}\sqrt{\frac{1}{2}- \frac{1}{2\sqrt{17}}}\,\,\varepsilon(j-1), \] where $\varepsilon(j)$ is a ``white noise'' sequence. \noindent The optimal linear minimax estimate $ \hat{A}_{1} { \xi}$ of the functional $A_{1} { \xi}$ is of the form \[ \hat{A}_{1} { \xi}= \sqrt{P}\varphi (1)\,\, \varepsilon (-1)= \sqrt{P}\sqrt{\frac{1}{2}- \frac{1}{2\sqrt{17}}}\,\,\varepsilon (-1). \] \noindent The mean-square error of the optimal estimate of the functional $A_{1} { \xi}$ does not exceed $\left((9 + \sqrt{17})/2\right)P$. \end{exm} \section{Conclusion} We propose methods for solving the optimal linear estimation problem for the linear functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$ which depends on the unknown values of a random sequence $\{\xi_j,j\in\mathbb Z\}$, based on observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are mutually independent harmonizable symmetric $\alpha$-stable random sequences whose spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfy the minimality condition. The problem is investigated under the condition of spectral certainty as well as under the condition of spectral uncertainty.
Formulas for calculating the value of the mean-square error and the spectral characteristic of the optimal linear estimate of the functional are derived under the condition of spectral certainty, where the spectral densities of the sequences are exactly known. In the case where the spectral densities are not exactly known while a set of admissible spectral densities is available, relations which determine the least favourable densities and the minimax-robust spectral characteristics are found for different classes of spectral densities.
\section{Introduction} \label{sec:intro} A microlensing event occurs when a luminous object is temporarily magnified by a massive body, such as a star or dark matter object, passing close to the line of sight and acting as a gravitational lens. \citet{Pac86} advocated searching for microlensing events towards the Large Magellanic Cloud in order to detect dark matter in the Galactic halo. Soon three separate collaborations were conducting systematic searches: OGLE \citep{Uda92}, EROS \citep{Aub93}, and MACHO \citep{Alc93}, between them observing the Large and Small Magellanic Clouds and the Galactic bulge. Other groups such as MOA (e.g. \citealt{Bon01}) have since joined the search, and thousands of microlensing events have now been detected (e.g., \citealt{Alc00a, Woz01, Sum03}), almost all towards the bulge. A much smaller number of microlensing candidates have also been identified toward the Large Magellanic Cloud (e.g., \citealt{Alc00b}) and M31 (e.g., \citealt{Nov05}). One of the main aims of all these observations is to accurately measure the optical depth, $\tau$ -- the probability of seeing a microlensing event at any given instant -- which can provide much information about the structure and mass distribution of the Galaxy and its halo. Since the first estimates of $\tau$ by \citet{Pac91} and \citet{Gri91}, predictions based on increasingly refined models have consistently and significantly disagreed with measurements based on increasingly large sets of observational data. However, there are now signs of convergence. \citet{HG03} -- hereafter HG03 -- used star counts from the \emph{Hubble Space Telescope} (\emph{HST}) to normalise their Galactic model, predicting $\tau = 1.63 \times 10^{-6}$ towards Baade's window (BW), based on lensing of red clump giants (RCGs). 
They noted reasonable agreement with two recent measurements towards the bulge, also based on RCGs, of $\tau = 2.0\, (2.13) \pm 0.4 \times 10^{-6}$ and $\tau = 0.94\, (1.08) \pm 0.30 \times 10^{-6}$, from the MACHO \citep{Pop01} and EROS \citep{Afo03} collaborations, respectively. The numbers in parentheses are from table 2 of \citet{Afo03}, who enabled a better comparison between all bulge optical depth measurements to be made by adjusting the values for their offset from BW. Now from 7 years of MACHO survey data, \citet{Pop04} report $\tau = 2.17^{+0.47}_{-0.38} \times 10^{-6}$ at $(l, b) = (1.50^\circ, -2.68^\circ)$, which is in excellent agreement with recent theoretical predictions, including the \citeauthor{HG03} result. Most recently, from the OGLE-II survey \citet{Sum05} find $\tau = 2.37^{+0.53}_{-0.43} \times 10^{-6}$ at $(l, b) = (1.16^\circ, -2.75^\circ)$, which is also consistent with the latest MACHO survey value. In this paper we generate Monte Carlo simulations of the Galaxy based on HG03. The outline of the paper is as follows. \S\ref{sec:model} describes the model and theory, and \S\ref{sec:results} presents our results: In \S\ref{sec:opdepth res} we reproduce the HG03 $\tau_{\rm BW}$, and then compare our predicted $\tau$ with the recent MACHO and OGLE results in various directions. \S\ref{sec:maps} presents maps of optical depth and average event time-scale (duration). These maps can be compared with observations in any direction. In \S\ref{sec:tscales} we predict the event rate as a function of time-scale and compare this to the distribution observed by OGLE. In \S\ref{sec:frac} we show how at both long and short times the time-scale distribution is directly related to the lens mass function. We summarise our results in \S\ref{sec:discussion}. 
\section{The Model} \label{sec:model} \subsection{Bulge and disc mass models} \label{sec:models} \citet{Dwe95} compared various hypothetical mass density models of the bulge to the infrared light density profile seen by the Cosmic Background Explorer (COBE) satellite. We use the G2 (barred) model from their table 1, with $R_{\rm max}$ = 5 kpc. The bar is inclined by $13.4^\circ$ to the Galactic centre line of sight, and the distance to the Galactic centre is set at 8 kpc. \citeauthor{Dwe95} used 8.5 kpc, so we adjust their model parameters accordingly. The model is then normalised by \emph{HST} star counts (see the end of \S\ref{sec:populations}). This independent constraint can be used to normalise any bulge model. For the disc, we use the local disc density model of \citet{Zhe01}, as extended to the whole disc by HG03. As the disc model is relatively secure (HG03), it will contribute only small uncertainties to predictions of the optical depth, so it is not renormalised as for the bulge model. \subsection{Source and lens populations} \label{sec:populations} The optical depths reported by \citet{Pop04} are based on lensing of RCGs in the bulge, and HG03 assume only bulge RCG sources in their model. \citet{Sum05} observed lensing of red giants and red super giants as well as RCGs. We assume that these different types of stars follow the same bar density distribution and are bright enough to be seen throughout the bar, which corresponds to the case with $\gamma=0$ in the following eq. (\ref{eq:opdepth}). Our lens mass function is generated as in HG03. Their unnormalised bulge mass function assumes initial star formation according to \begin{equation} {\rm d}N / {\rm d}M = k(M/M_{\rm brk})^\alpha, \label{eq:massfunc} \end{equation} where $M_{\rm brk} = 0.7\ \rm M_\odot$, $\alpha = -2.0$ for $M > M_{\rm brk}$, and $\alpha = -1.3$ for $M \leq M_{\rm brk}$, consistent with observations by \citet{Zoc00}. 
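For concreteness, the broken power law of eq. (\ref{eq:massfunc}) can be written as a short function; the normalisation $k=1$ below is an arbitrary placeholder for the \emph{HST} star-count normalisation used by HG03.

```python
import numpy as np

# Broken power-law initial mass function of eq. (massfunc):
# dN/dM = k (M / M_brk)^alpha, with alpha = -1.3 for M <= M_brk and
# alpha = -2.0 for M > M_brk. k = 1 is an arbitrary normalisation here;
# HG03 instead fix it with HST star counts.
M_BRK = 0.7  # break mass in solar masses

def dn_dm(m):
    m = np.asarray(m, dtype=float)
    alpha = np.where(m > M_BRK, -2.0, -1.3)
    return (m / M_BRK) ** alpha

# The two slopes share the same value at the break, so dN/dM is continuous.
print(dn_dm([0.03, 0.08, 0.7, 1.0, 8.0]))
```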
However HG03 extended this beyond the latter's lower limit of $M \sim 0.15\ \rm M_\odot$ to a brown dwarf cut-off of $M \sim 0.03\ \rm M_\odot$. We assume objects with masses 0.03--0.08 $\rm M_\odot$ and 0.08--1 $\rm M_\odot$ become brown dwarfs (BD) and main-sequence stars (MS) respectively, 1--8 $\rm M_\odot$ stars evolve into 0.6 $\rm M_\odot$ white dwarfs (WD), 8--40 $\rm M_\odot$ stars become 1.35 $\rm M_\odot$ neutron stars (NS), and anything more massive forms a 5 $\rm M_\odot$ black hole (BH). For MS stars we use the mass-luminosity relation of \citet{Cox99}, and take all other lenses to be dark. The model is then normalised by comparing extinction-adjusted MS counts to \emph{HST} star counts \citep{Hol98} as described in HG03. The same mass function and luminosity relation are also used for the disc. Strictly they should be independently estimated, but any uncertainties are small compared to others involved as we find disc stars account for only $\sim 20$ per cent of the total number of stars in BW. \subsection{Kinematic model} \label{sec:kinematics} To calculate the event rate, we must also specify the velocities of the lenses, sources and observer. The observer velocity $v_{\rm O}$ is assumed to follow the Galactic rotation, so the two velocity components in $l$ and $b$ are given by \begin{equation} v_{\rm O, l} = v_{\rm O,{\rm rot}} = 220 {\rm \ kms}^{-1}, \ \ \ \ \ \ v_{\rm O,b} = 0. 
\label{eq:vo} \end{equation} The lens and source velocities in the $l$ and $b$ directions are given by \begin{equation} v_{\rm l} = v_{\rm rot} + v_{\rm rand,l}, \ \ \ \ \ \ v_{\rm b} = v_{\rm rand,b}, \label{eq:v_lb} \end{equation} where the rotation velocity $v_{\rm rot}$ and the random velocity $v_{\rm rand}$ are from \citet{HG95}: for the disc $v_{\rm rot} = 220 {\rm \ kms}^{-1}$, and for the bar $v_{\rm rot}$ is given by projecting $v_{\rm max} = 100 {\rm \ kms}^{-1}$ across the line of sight according to \begin{eqnarray} v_{\rm rot} & = & v_{\rm max} \left( x \over 1\ {\rm kpc} \right) \ \ \ \ \ \ (R < 1\ {\rm kpc},\ {\rm solid\ body\ rotation}), \nonumber \\ v_{\rm rot} & = & v_{\rm max} \left( x \over R \right) \ \ \ \ \ \ \ \ \ \ \ (R \geq 1\ {\rm kpc},\ {\rm flat\ rotation}), \label{eq:vrot} \end{eqnarray} where $R=(x^2+y^2)^{1/2}$, and the coordinates $(x, y, z)$ have their origin at the Galactic centre, with the $x$ and $z$ axes pointing towards the Earth and the North Galactic Pole respectively. The random velocity components $v_{\rm rand,l}$ and $v_{\rm rand,b}$ are assumed to have Gaussian distributions. For the disc $\sigma_{\rm l,\, b} = (30,\,20) {\rm \ kms}^{-1}$, and for the bar we use $\sigma_{\rm x,\, y,\, z}$ = (110, 82.5, 66.3)${\rm \ kms}^{-1}$ as found by \citet{HG95} using the tensor virial theorem (see also \citealt {SEW03}, and \citealt{KR02}). These values should be altered slightly as HG03 used a different normalisation. This may affect our results slightly, but it is re-assuring that our results based on such a simple kinematic model appear to agree with the data quite well (see \S\ref{sec:results}). \subsection{Optical depth and event rate} \label{sec:taugamma} $\tau$ in any given direction is an average over the optical depths of all the source stars in that direction. The optical depth to a particular star is defined as the probability that it is within the Einstein radius (see below) of any foreground lenses. 
Hence more distant stars, although fainter and less likely to be detected, have higher optical depths \citep{Sta95}. HG03 accounted for this with the term $\gamma$ in the calculation of observed optical depth: \begin{equation} \langle \tau \rangle_\gamma = {4\pi G \over c^2} {\int_0^\infty {\rm d}D_{\rm s} D_{\rm s}^{2-\gamma} \rho(D_{\rm s}) \int_0^{D_{\rm s}} {\rm d}D_{\rm d} \rho(D_{\rm d}) D_{\rm d} (D_{\rm s} - D_{\rm d})/D_{\rm s} \over \int_0^\infty {\rm d}D_{\rm s} D_{\rm s}^{2-\gamma} \rho(D_{\rm s})}, \label{eq:opdepth} \end{equation} where $D_{\rm s}$ and $D_{\rm d}$ are the distances to the source and deflector (lens), and $\rho(D_{\rm s})$ and $\rho(D_{\rm d})$ are the source number density and lens mass density. RCGs and other bright stars in the bulge can be identified independently of their distance, so $\gamma = 0$. Eq. (\ref{eq:opdepth}) was originally presented (in a slightly different form) by \citet{KP94}, who also derived an expression for the lensing event rate $\Gamma$. 
We give this here in terms of $\gamma$, and account for variation in lens mass by bringing the term $M^{-1/2}$ inside the integral: \begin{eqnarray} \Gamma & = & {4 G^{1/2} \over c} \int_0^\infty {\rm d}D_{\rm s} D_{\rm s}^{2-\gamma} \rho(D_{\rm s}) \nonumber \\ & & \times {\int_0^{D_{\rm s}} {\rm d}D_{\rm d} \rho(D_{\rm d}) v [D_{\rm d} (D_{\rm s} - D_{\rm d})/M D_{\rm s}]^{1/2} \over \int_0^\infty {\rm d}D_{\rm s} D_{\rm s}^{2-\gamma} \rho(D_{\rm s})}, \label{eq:freq} \end{eqnarray} where $v$ is the lens-source relative transverse velocity, \begin{equation} v = ({v_{\rm l}}^2 + {v_{\rm b}}^2)^{1/2}, \end{equation} and its components in the Galactic $l$ and $b$ coordinates, $v_{\rm l}$ and $v_{\rm b}$, are related to the observer, lens and source velocities by \begin{equation} v_{\rm l, b} = \left( (v_{\rm D} - v_{\rm O}) + (v_{\rm O} - v_{\rm S}) {D_{\rm d} \over D_{\rm s}} \right)_{\rm l, b}, \end{equation} where $v_{\rm D}$ and $v_{\rm S}$ are the deflector (lens) and source transverse velocities; their components in the $l$ and $b$ directions are given in eq. (\ref{eq:v_lb}). The time-scale of an event $t_{\rm E}$ is defined as the time taken for a source to cross the Einstein radius of the lens $r_E$ \citep{Pac96}: \begin{equation} t_{\rm E} = {r_E \over v} \ \ \ \ \ \ \ \ r_E = \left( {4GM \over c^2} {D_{\rm d}(D_{\rm s} - D_{\rm d}) \over D_{\rm s}} \right)^{1/2}. \label{eq:tE} \end{equation} \section{Results} \label{sec:results} \subsection{Optical depth in MACHO and OGLE fields} \label{sec:opdepth res} HG03 calculated $\tau = (0.98, 0.65, 1.63) \times 10^{-6}$ towards BW for bulge, disc, and all lenses respectively. Our equivalent values are (1.06, 0.65, 1.71) $\times 10^{-6}$. HG03 noted that the value of $\gamma$ makes little difference to $\tau$ for disc lenses, but for bulge lenses $\tau$ becomes $0.86 \times 10^{-6}$ when $\gamma = 1$. We find $\tau = 0.92 \times 10^{-6}$ in this case. 
Our results for bulge lenses differ by 7--8 per cent from HG03's due to a slight difference in implementation of the bulge model normalisation. We find that allowing MS disc lenses to also act as sources themselves makes a negligible difference to the total value of $\tau$. The MACHO measurement \citep{Pop04} of $\tau = 2.17^{+0.47}_{-0.38} \times 10^{-6}$ at $(l, b) = (1.50^\circ, -2.68^\circ)$, was obtained from a sub-sample of their observed fields, the `Central Galactic Region' (CGR), which covers 4.5 deg$^2$ and contains 42 of the 62 RCG microlensing events seen. The coordinates $(1.50^\circ, -2.68^\circ)$ are a weighted average position of these fields; the unweighted average is $(l, b) = (1.55^\circ, -2.82^\circ)$. Optical depths were also given for a region `CGR+3' that contains 3 additional fields, and for all 62 events. In Table \ref{tab:opdepths_macho} we compare our expected values to each of these results, and to $\tau$ reported for each of the individual CGR fields. OGLE's measurement \citep{Sum05} of $\tau = 2.37^{+0.53}_{-0.43} \times 10^{-6}$ at $(l, b) = (1.16^\circ, -2.75^\circ)$ made use of 32 RCG events, in 20 of their 49 fields, where $(l, b) = (1.16^\circ, -2.75^\circ)$ is the weighted average field position. $\tau$ was also given for each field; we compare our values to all of these results in Table \ref{tab:opdepths_ogle}. Note that any significant disagreement occurs only in individual fields, and that in only 1 of the 6 fields (MACHO and OGLE) with $> 4$ events (OGLE \#30) does our value lie far outside the stated $1 \sigma$ uncertainty. Table \ref{tab:frac_opdepth} shows the percentage contributions to the total optical depth and event rate from the different types of lenses. The disc lenses contribute about 37 per cent of the optical depth and a slightly smaller fraction (31 per cent) of the event rate. We see that 62 per cent of all events have luminous (MS) lenses, the other 38 per cent are dark (BD, WD, NS and BH). 
The NSs and BHs contribute about 9 per cent of the optical depth but only 4 per cent of the event rate. This is because the events caused by stellar remnants on average have longer time-scales, and thus they occur less frequently. \begin{table} \centering \begin{tabular}{ccccc}\hline Region/field & ${N_{\rm events}}^*$ & ($l$, $b$) ($^\circ$) & $\tau_{\rm MACHO} (\times 10^{-6})$ & $\tau_{\rm model} (\times 10^{-6})$ \\ \hline CGR$^\dag$ & 42 & (1.50, -2.68) & $2.17^{+0.47}_{-0.38}$ & 2.43 \\ CGR$^\ddag$ & 42 & (1.55, -2.82) & -- & 2.33 \\ CGR+3 & 53 & (1.84, -2.73) & $2.37^{+0.47}_{-0.39}$ & 2.34 \\ All events & 62 & (3.18, -4.30) & $1.21^{+0.21}_{-0.21}$ & 1.32 \\ & & & & \\ 108 & 6 & (2.30, -2.65) & $2.04 \pm 0.92$ & 2.31 \\ 109 & 2 & (2.45, -3.20) & $0.58 \pm 0.41$ & 1.96 \\ 113 & 3 & (1.63, -2.78) & $0.55 \pm 0.35$ & 2.34 \\ 114 & 3 & (1.81, -3.50) & $1.19 \pm 0.74$ & 1.87 \\ 118 & 7 & (0.83, -3.07) & $2.85 \pm 1.35$ & 2.25 \\ 119 & 0 & (1.07, -3.83) & -- & 1.74 \\ 401 & 7 & (2.02, -1.93) & $5.13 \pm 2.16$ & 2.85 \\ 402 & 10 & (1.27, -2.09) & $3.95 \pm 1.50$ & 2.89 \\ 403 & 4 & (0.55, -2.32) & $1.16 \pm 0.66$ & 2.83 \\ \hline \end{tabular} \caption{Comparison of model and MACHO optical depths for the Central Galactic Region (CGR) and individual fields. $^*$Number of events seen by MACHO. $^\dag$Weighted average ($l$, $b$). 
$^\ddag$Unweighted average ($l$, $b$).} \label{tab:opdepths_macho} \end{table} \begin{table} \centering \begin{tabular}{ccccc}\hline Region/field & ${N_{\rm events}}^* $ & ($l$, $b$) ($^\circ$) & $\tau_{\rm OGLE} (\times 10^{-6})$ & $\tau_{\rm model} (\times 10^{-6})$ \\ \hline All fields$^\dag$ & 32 & (1.16, -2.75) & $2.37^{+0.53}_{-0.43}$ & 2.43 \\ & & & & \\ 1 & 0 & (1.08, -3.62) & -- & 1.87 \\ 2 & 1 & (2.23, -3.46) & $2.31 \pm 2.31$ & 1.85 \\ 3 & 4 & (0.11, -1.93) & $3.99 \pm 2.07$ & 3.20 \\ 4 & 5 & (0.43, -2.01) & $2.93 \pm 1.39$ & 3.09 \\ 20 & 1 & (1.68, -2.47) & $1.15 \pm 1.15$ & 2.54 \\ 21 & 0 & (1.80, -2.66) & -- & 2.39 \\ 22 & 1 & (-0.26, -2.95) & $0.79 \pm 0.79$ & 2.42 \\ 23 & 0 & (-0.50, -3.36) & -- & 2.13 \\ 30 & 6 & (1.94, -2.84) & $8.88 \pm 3.89$ & 2.26 \\ 31 & 1 & (2.23, -2.94) & $2.10 \pm 2.10$ & 2.15 \\ 32 & 1 & (2.34, -3.14) & $0.87 \pm 0.87$ & 2.02 \\ 33 & 2 & (2.35, -3.66) & $9.69 \pm 7.38$ & 1.73 \\ 34 & 2 & (1.35, -2.40) & $3.80 \pm 2.69$ & 2.65 \\ 35 & 2 & (3.05, -3.00) & $2.99 \pm 2.19$ & 1.98 \\ 36 & 0 & (3.16, -3.20) & -- & 1.85 \\ 37 & 2 & (0.00, -1.74) & $2.06 \pm 1.65$ & 3.39 \\ 38 & 2 & (0.97, -3.42) & $2.68 \pm 2.09$ & 2.01 \\ 39 & 3 & (0.53, -2.21) & $1.51 \pm 0.90$ & 2.92 \\ 45 & 0 & (0.98, -3.94) & -- & 1.68 \\ 46 & 0 & (1.09, -4.14) & -- & 1.56 \\ \hline \end{tabular} \caption{Comparison of model and OGLE optical depths. $^*$Number of events seen by OGLE. 
$^\dag$Weighted average ($l$, $b$).} \label{tab:opdepths_ogle} \end{table} \begin{table} \centering \begin{tabular}{cccccccc}\hline & \multicolumn{7}{c}{Location/type of lens} \\ & Bar & Disc & BD & MS & WD & NS & BH \\ \hline Optical depth & 63 & 37 & 7 & 62 & 22 & 6 & 3 \\ Event rate & 69 & 31 & 17 & 62 & 17 & 3 & 1 \\ \hline \end{tabular} \caption{Percentage contributions, to the total predicted $\tau$ and $\Gamma$, from different types of lens.} \label{tab:frac_opdepth} \end{table} \begin{figure} \centering \includegraphics[width = 8cm]{fig1a.eps} \includegraphics[width = 8cm]{fig1b.eps} \caption{Average optical depth in latitude (left panel) and longitude (right panel) strips, for $-5.5^\circ \leq b \leq 5.5^\circ$ and $-5.5^\circ \leq l \leq 5.5^\circ$ respectively. The solid line shows the model prediction, while the open and solid circles are data points from MACHO (fig. 14, \citealt{Pop04}) and OGLE (fig. 12, \citealt{Sum05}) respectively.} \label{fig:line} \end{figure} In their figs. 12 and 14 respectively, \citet{Sum05} and \citet{Pop04} plot average optical depths in latitude and longitude strips. We produce similar plots in Fig. \ref{fig:line}, with the OGLE and MACHO data points shown. In both sets of strips the model is in good agreement with both sets of data. The single data point at negative $l$ is based on only one microlensing event, so the discrepancy has low statistical significance. \subsection{Maps of optical depth and average event time-scale} \label{sec:maps} Figs. \ref{fig:map_opdepth} and \ref{fig:map_time} are maps of expected optical depth and average event time-scale. We can clearly see higher optical depths and longer time-scales at negative galactic longitude. This is due to the inclination of the bar to the line of sight. At positive longitude the bar is closer to us, and the line of sight cuts through the bar at a steeper angle. 
Hence there are fewer potential lenses, in either the disc or the bar, between us and any bar source, and so $\tau$ is smaller. Also, objects rotating around the Galactic centre have a smaller component of their velocity along the line of sight, so average transverse velocities will be greater, and average time-scales shorter. At negative longitude, the line of sight passes through more of the disc and cuts the bar at a shallower angle. Hence we see higher optical depths and smaller transverse velocities, and thus longer average time-scales. \begin{figure} \centering \includegraphics[width = 10cm]{fig2.eps} \caption{Map of expected optical depth. The MACHO and OGLE fields are shown by the large and small grey boxes, respectively. For the MACHO fields, the crosshatch pattern indicates the CGR subset listed in Table \ref{tab:opdepths_macho}. For OGLE, the crosshatch pattern denotes those fields \emph{not} listed in Table \ref{tab:opdepths_ogle}. The small square indicates BW. Contour levels are at $(0.1, 0.2, 0.3, 0.5, 1, 2, 3, 4, 5) \times 10^{-6}$.} \label{fig:map_opdepth} \end{figure} \begin{figure} \centering \includegraphics[width = 10cm]{fig3.eps} \caption{Map of expected average event time-scale. The MACHO and OGLE fields are shown by the large and small grey boxes, respectively. For the MACHO fields, the crosshatch pattern indicates the CGR subset listed in Table \ref{tab:opdepths_macho}. For OGLE, the crosshatch pattern denotes those fields \emph{not} listed in Table \ref{tab:opdepths_ogle}. The small square indicates BW. Contour levels are at 22.5, 25, 27.5, 30, 32.5, 35, 37.5, 40 and 42.5 d.} \label{fig:map_time} \end{figure} \begin{figure} \centering \includegraphics[width = 10cm]{fig4.eps} \caption{$\tau$ as a function of $b$, for $l$ = $3.9^\circ$. Our model slope is almost identical for bar angles of $13.4^\circ$ and $20^\circ$. 
The profile of \citet{BG02} diverges from ours at $b\approx -3^\circ$, increasing rapidly towards the mid-plane ($b=0^\circ$).} \label{fig:gerhard} \end{figure} We wish to compare our maps to others. \citet{EB02} produced red clump optical depth maps for three Galactic models, but while two of these appear similar to ours, they do not agree in detail. One of those models was also used to make a time-scale map, which is quite different from ours. This is not surprising: in addition to a different mass model, they used a different mass function, velocities and velocity dispersions. (In fact their time-scale map has two sets of contours, to show the effects of including and excluding bar streaming. Without streaming their mean time-scales are much shorter than ours, and with it they are greater by a factor $\sim3$, much longer than ours. Such a large variation is puzzling, and we are cautious about comparing their map to ours.) In their fig. 16, \citet{BG02} presented an optical depth map for RCG sources, with a bar angle of $20^\circ$. For $b \la -3^\circ$ it appears quite similar to ours, but moving towards the Galactic centre $\tau$ climbs far more steeply than in our map. This is best seen by comparison to their fig. 17, where they plot $\tau$ as a function of $b$, for $l = 3.9^\circ$. This is shown in Fig. \ref{fig:gerhard} with equivalent profiles for our model. We see how rapidly \citeauthor{BG02}'s profile diverges from ours towards $b=0^\circ$. We also see that changing the bar angle in our model from $13.4^\circ$ to $20^\circ$ does not explain this difference. Instead it is probably due to the density in their bulge mass model increasing much faster towards the mid-plane. The observational data for the mid-plane are limited due to heavy extinction, and so mass models are not well-constrained in this region. Given the difficulty in obtaining any measurement of $\tau$ at small latitude, it is difficult at present to test either profile there.
\subsection{Time-scale distributions} \label{sec:tscales} Fig. \ref{fig:tscales} shows the event rate as a function of time-scale towards the OGLE coordinates $(l, b) = (1.16^\circ, -2.75^\circ)$, for bar (thin line), disc (dashed line) and all (bold line) lenses. There is good agreement with the asymptotic power-law tails $d\Gamma / d({\rm log}\ t_{\rm E}) \propto$ ${t_{\rm E}}^3$, ${t_{\rm E}}^{-3}$ for very short and long time-scales, respectively \citep{MP96}. The disc lensing events have an average time-scale of 26.3 d, slightly longer than the bulge lensing events' average of 25.7 d, as also found by \citet{KP94}. The average time-scale for all events is 25.9 d. \begin{figure} \centering \includegraphics[width = 10cm]{fig5.eps} \caption{Predicted microlensing event rate as a function of time-scale, towards $(l, b) = (1.16^\circ, -2.75^\circ)$. The bold line represents all lenses. The thin and dashed lines represent the bar and disc lenses, respectively. The two dotted lines are asymptotic tails $d\Gamma / d({\rm log}\ t_{\rm E}) \propto$ ${t_{\rm E}}^3$, ${t_{\rm E}}^{-3}$ for very short and long time-scales, respectively.} \label{fig:tscales} \end{figure} \begin{figure} \centering \includegraphics[width = 10cm]{fig6.eps} \caption{Microlensing event rate as a function of time-scale, towards $(l, b) = (1.16^\circ, -2.75^\circ)$. The solid line shows the model prediction, and the OGLE observed distribution (corrected for detection efficiency) is shown as a histogram.} \label{fig:tscales2} \end{figure} In Fig. \ref{fig:tscales2} we renormalise our time-scale distribution (for all lenses) and compare it to that seen by OGLE, as corrected for detection efficiency (see fig. 14 in \citealt{Sum05}). 
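As a rough check on the magnitude of these time-scales, eq. (\ref{eq:tE}) can be evaluated directly; the parameter values below are illustrative choices for a bulge lens, not quantities fitted in this paper.

```python
import math

# Evaluate the Einstein radius and crossing time of eq. (tE) for one set of
# illustrative bulge-lensing parameters (these particular values are our
# choice, not fitted quantities from the paper).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m s^-1
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # kiloparsec, m

M = 0.3 * M_SUN        # lens mass
D_s = 8.0 * KPC        # source distance
D_d = 6.0 * KPC        # lens (deflector) distance
v = 200e3              # lens-source relative transverse velocity, m s^-1

r_E = math.sqrt(4 * G * M / C ** 2 * D_d * (D_s - D_d) / D_s)
t_E = r_E / v / 86400  # crossing time in days

print(r_E / 1.496e11)  # Einstein radius in au, ~2
print(t_E)             # ~17 d, of the order of the observed mean time-scale
```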
We do not compare to the time-scale distribution seen by \citet{Pop04} -- they assumed that the effect of blending on RCG sources is negligible, but \citet{Sum05} found $\approx$ 38 per cent of OGLE-II events with apparent RCG sources were really due to faint stars blended with a bright companion. Fortunately, they also showed that blending has little effect on estimates of $\tau$ due to partial cancellation of its different effects, a point also made by \citet{Pop04}. However, time-scale distributions will be significantly shifted towards shorter events. As a result, the MACHO time-scale distribution (not shown) has a significant excess at short time-scales compared with our model. Our time-scale distribution shows reasonable agreement with OGLE's. The Kolmogorov--Smirnov (KS) test shows that the predicted and observed distributions are consistent at a $\approx 52$ per cent confidence level. Our average time-scale of 25.9 d is in excellent agreement with OGLE's corrected average of $28.1 \pm 4.3$ d. Our median and quartiles are (19.2, 11.2, 31.7) d, respectively. The event time-scale distribution from the data still has large uncertainties due to the limited number of events. It is apparent that the data have not yet reached the predicted asymptotic behaviour at short and long time-scales, so a more stringent test on the model is not yet possible. \citet*[see also \citealt{Pea98}]{BDG04} have also modelled the time-scale distribution. They reproduced that from MACHO's 99 DIA events \citep{Alc00a} centred at $(l, b) = (2.68^\circ, -3.35^\circ)$. However, both distributions are clearly shifted towards short time-scales compared to our model prediction in the same direction\footnote{At first glance all three distributions may appear to be similar.
However, whereas we define the event time-scale as the Einstein-radius crossing time (see \S\ref{sec:taugamma}), MACHO plot the diameter-crossing time, a factor of 2 difference.} (this is not shown, as it is very close to the solid line in Fig. \ref{fig:tscales2}). Although the DIA method is less prone to the systematics of blending \citep{Sum05}, it is still possible that the MACHO DIA time-scale distribution is somewhat affected. The most important difference between our model and \citeauthor{BDG04}'s is that in order to match the data at short time-scales, they adopted a Schechter mass function, $n(M) \propto M^{-2.35}$ for $M \leq 0.35\ \rm M_\odot$ down to 0.04 $\rm M_\odot$, steeper than our mass function, $n(M) \propto M^{-1.3}$ for $M < 0.7\ \rm M_\odot$. As a result, their median lens mass is much smaller than ours (0.11 $\rm M_\odot$ vs. 0.35 $\rm M_\odot$, weighted by event rate). The different kinematics may also have a noticeable effect on the time-scales, but their more realistic dynamical model does not allow a simple comparison to be made. \subsection{Fractional contributions to event rate -- mass weightings} \label{sec:frac} Fig. \ref{fig:frac_tscales} shows the fractional contributions to the total event rate, as a function of event time-scale, for the different types of lens (BD, MS, WD, NS and BH) as indicated. At short time-scales ($t_{\rm E} \la 4$ d), the brown dwarfs dominate the event rate, while at long time-scales ($t_{\rm E} \ga 100$ d), the stellar remnants become increasingly important. There is asymptotic behaviour at both long and short time-scales. We find that the fractional contribution from a lens of mass $M$ is weighted by $M^2\, n(M)\, {\rm d}M$ and $M^{-1}\, n(M)\, {\rm d}M$, respectively. In the Appendix we derive these weightings from eq. (\ref{eq:freq}). (The scaling of the long-event tail has already been derived by \citealt{Ago02}.)
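The asymptotic weightings can be illustrated directly from the mass function. The sketch below uses the broken power law of eq. (\ref{eq:massfunc}) with the remnant prescription of \S\ref{sec:populations}; the normalisation details and the upper black-hole progenitor cut-off are our assumptions, so it reproduces only the trends of Table \ref{tab:frac_tscales}, not its exact values.

```python
import numpy as np

# Fractional event-rate contributions at long and short time-scales:
# weighted by M^2 n(M) dM (long) and M^-1 n(M) dM (short). IMF and
# remnant recipe follow the model above; the 1000 Msun upper cut-off
# for black-hole progenitors is our assumption.

def imf(m):
    # dN/dM = (M / 0.7)^alpha, alpha = -1.3 (-2.0) below (above) the break
    m = np.asarray(m, dtype=float)
    return np.where(m > 0.7, 0.49 * m ** -2.0, 0.7 ** 1.3 * m ** -1.3)

def integral(weight, lo, hi, n=200001):
    # trapezoidal integral of M^weight * n(M) over [lo, hi]
    m = np.linspace(lo, hi, n)
    y = imf(m) * m ** weight
    return float(np.sum(0.5 * (y[:-1] + y[1:]) * np.diff(m)))

# remnants: (label, remnant mass / Msun, progenitor mass range / Msun)
remnants = [("WD", 0.60, (1.0, 8.0)),
            ("NS", 1.35, (8.0, 40.0)),
            ("BH", 5.00, (40.0, 1000.0))]

for exponent, regime in [(2.0, "long"), (-1.0, "short")]:
    w = {"BD": integral(exponent, 0.03, 0.08),   # brown dwarfs
         "MS": integral(exponent, 0.08, 1.0)}    # main-sequence stars
    for label, mass, (lo, hi) in remnants:
        # remnants pile up at a single mass, with progenitor-range counts
        w[label] = mass ** exponent * integral(0.0, lo, hi)
    total = sum(w.values())
    print(regime, {k: round(100 * v / total, 3) for k, v in w.items()})
```

At short time-scales this yields roughly 72 per cent for brown dwarfs and 27 per cent for main-sequence stars, in line with the table.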
Table \ref{tab:frac_tscales} shows that direct calculation of these asymptotic fractions from the mass function gives results that clearly agree with the trends in Fig. \ref{fig:frac_tscales}. These weightings are independent of the density and kinematics of the lens population, and hence provide valuable information about the lens mass function. \begin{table} \centering \begin{tabular}{cccccc}\hline Time-scale & BD & MS & WD & NS & BH \\ \hline Long & 0.53 & 44 & 20 & 12 & 24 \\ Short & 72 & 27 & 1.5 & 0.078 & 0.0032 \\ \hline \end{tabular} \caption{Percentage contributions to the total predicted event rate, at long and short time-scales, from the different types of lens.} \label{tab:frac_tscales} \end{table} \begin{figure} \centering \includegraphics[width = 10cm]{fig7.eps} \caption{Fractional contributions to total expected event rate, as a function of event time-scale, from BD, MS, WD, NS and BH lenses as indicated. The solid and dashed lines represent the bar and disc lenses, respectively. The asymptotic fractions at long and short time-scales are a function of the lens mass only (see text).} \label{fig:frac_tscales} \end{figure} \section{Summary} \label{sec:discussion} In this paper, we have used a simple Galaxy model normalised by star counts \citep{HG03} to predict the microlensing optical depth. Combined with simple kinematic models, we also predict maps and distributions of the time-scale distributions. We have shown that the fraction of long and short events contributed by a lens of mass $M$ is weighted by $M^2\, n(M)\, {\rm d}M$ and $M^{-1}\, n(M)\, {\rm d}M$ respectively. If the tails of this distribution can be accurately determined from observations, we have a direct probe of the lens mass function. 
It is remarkable that this empirically-normalised model based on the COBE G2 model \citep{Dwe95} shows good agreement with data recently published by the MACHO and OGLE collaborations (\citealt{Sum05} and \citealt{Pop04}) for the optical depth in various Galactic fields, and its trends with $l$ and $b$. Our maps of optical depth and average event time-scale cover a large area of the sky, and can be compared to future determinations of $\tau$ in similar areas when they become available. The expected distribution of the event time-scale also appears to show good agreement with the recently published OGLE data \citep{Sum05}. However, the numbers of microlensing events used (42 and 32) in the recent MACHO and OGLE analyses are still small, so the test on the models is not yet stringent. When the much larger database of microlensing events ($\sim$ thousands) is analysed, then a full comparison with the models will become much more discriminating. \section*{Acknowledgments} We thank Drs. Vasily Belokurov, Nicholas Rattenbury and Martin Smith for many useful discussions. We thank the anonymous referee for their helpful comments. AW acknowledges support from a PPARC studentship. \bibliographystyle{mn2e}
\section{\label{Introduction}Introduction} The spin up and down states of an electron form the archetypal two-level system in quantum physics and make the electron a natural candidate for realizing a quantum bit. Quantum computing approaches use electrons in both condensed matter and atomic systems, for instance confined in quantum dots or bound to donors in semiconductors~\cite{Watson2018, Yoneda2018a, Laucht2017}, or bound as valence electrons in trapped atomic ions~\cite{Brown2016, Bruzewicz2019}. In these examples, the confinement to either the host solid-state environment or to a much heavier ion can limit the potential of the electron spin qubit: for trapped ions, entanglement is typically mediated by the slow motion of the heavy ions in a shared trapping potential~\cite{Sorensen1999, Leibfried2003a}, which limits the gate speed, while in condensed matter systems unwanted coupling of the electron's charge and magnetic moment to the imperfect environment limits coherence times. An approach which promises to remove these limitations is to confine individual free electrons in actual vacuum~\cite{Daniilidis2013electron,Peng2017,Kotler2017}. Here we show experimentally that this can be achieved with the type of traps used for the currently most advanced ion trap quantum computers, namely quadrupole Paul traps. Compared to commonly trapped ions, the electron's charge-to-mass ratio is larger by a factor $10^4-10^5$, such that motion-based gates and shuttling operations could be sped up by two orders of magnitude. Based on measurements for ion ground state qubits, which should experience similar decoherence mechanisms to trapped electron spin qubits, coherence times of at least a second are expected~\cite{Ruster2016}. Furthermore, reducing the complex level structure down to the minimum of two levels rules out qubit errors due to population leakage~\cite{Brown2018}. 
Adapting the quantum-CCD architecture developed for trapped ions~\cite{Kielpinski2002CCD} to trapped electrons offers the opportunity to build a fast, modular, and high-fidelity quantum computer using advanced microwave technology~\cite{Mintert2001a, Piltz2016, Ospelkaus2011, Harty2016}, which promises better compatibility with current microfabrication methods compared to laser technology and optical beam delivery. Beyond quantum computing, the experimental platform we introduce here may offer new avenues for creating and studying small cold plasma~\cite{Twedt2012}, highly controllable few- to single electron sources for electron optics applications~\cite{Kruit2016}, or single-electron mechanical oscillators~\cite{Burd2019}. Trapping single electrons in vacuum has previously been achieved in two other platforms. First, electrons have been confined in cryogenic Penning traps in the early 1970s~\cite{Wineland1973} by combining a large magnetic field and a constant electric quadrupole field. While several proposals have considered using single electrons in Penning traps as qubits~\cite{Ciaramicoli2003, Marzoli2009, Goldman2010}, limited work has been performed on experimental realisations so far~\cite{Bushev2008}. Electrons can also be trapped above the surface of liquid helium, offering quantum information applications in milli-Kelvin environments~\cite{Lyon2006, Schuster2010} and recent experimental efforts have reached the single-electron regime~\cite{Koolstra2019a}. Our approach to trapping electrons builds on the established quadrupole radiofrequency ion trap architecture, which is at the forefront of current quantum computing approaches with atomic ions~\cite{Brown2016, Bruzewicz2019}. Guiding electrons along a radiofrequency guide~\cite{Hoffrogge2011} has been achieved and electrons have been co-trapped with ions in a combined Paul and Penning trap~\cite{Walz1995}, but trapping electrons in a pure Paul trap has not been reported so far. 
While potential applications to quantum computing will require cryogenic environments~\cite{Daniilidis2013electron,Peng2017,Kotler2017}, we concentrate here on demonstrating electron trapping in a proof-of-principle experiment at room temperature. \begin{figure} \includegraphics[width = 3.375 in]{Fig1.png}% \caption{\label{fig:Fig1} Electron trap design. Exploded view of trap printed circuit boards. Driving the halfwave co-planar waveguide resonator (central board) gives rise to a quadrupole microwave trap inside the slot at the central end of the resonator. The two outside boards are identical and feature electrodes $e_{\mathrm{in,\;1-10}}$ ($e_{\mathrm{in,\;11-20}}$) on the bottom (top) to provide static confinement along the slot direction (see bottom inset for electrode labels). The boards are separated with alumina spacers of 1.27~mm height and have a footprint of about 5 by 10 cm.} \end{figure} Paul traps employ a rapidly oscillating quadrupole electric field to confine charged particles at the null of the quadrupole field in two or three dimensions. The effective confining potential can be described by the pseudopotential $U_{\mathrm{p}} = \frac{q^2 E^2}{4 m \Omega^2}$, where $E$ and $\Omega$ are the amplitude and frequency of the oscillating electric field, and $q$ and $m$ the charge and mass of the trapped particle(s). The spatial dependence $U_{\mathrm{p}}(x, y, z)$ derives from the quadrupole electric field amplitude $E(x, y, z)$. The stability of trajectories for a charged particle in a quadrupole trap can be described with the $a_{\mathrm{M}}$ and $q_{\mathrm{M}}$ parameters of the Mathieu equation, which are known as the stability parameters in the context of ion traps, and have been studied theoretically and experimentally~\cite{Alheit1997,Leibfried2003review}. 
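As a rough numerical check of the pseudopotential expression and of the drive-frequency scaling, the following sketch evaluates $U_{\mathrm{p}}$ for an electron at the experimental drive frequency and the Mathieu-parameter scaling between an electron and a typical trapped ion. The field amplitude used is an assumed, illustrative value, not a quoted trap parameter.

```python
import math

# Pseudopotential U_p = q^2 E^2 / (4 m Omega^2) for an electron at the
# experimental drive frequency. The field amplitude E is an assumed,
# illustrative value; the real amplitude depends on the trap geometry.
q = 1.602176634e-19          # electron charge (C)
m_e = 9.1093837015e-31       # electron mass (kg)
Omega = 2 * math.pi * 1.6e9  # drive frequency (rad/s)

def pseudopotential_eV(E_V_per_m):
    """Pseudopotential in eV for a quadrupole field amplitude E (V/m)."""
    return q * E_V_per_m**2 / (4 * m_e * Omega**2)

# A field amplitude of ~5.5e4 V/m would give a depth of order 1.3 eV.
depth = pseudopotential_eV(5.5e4)

# Stability parameters scale as (m Omega^2)^{-1}, so for fixed voltage
# and geometry an electron needs a drive frequency higher than a 40Ca+
# ion by sqrt(m_Ca / m_e) ~ 270, i.e. about two orders of magnitude.
m_Ca = 40 * 1.66053907e-27   # 40Ca+ mass (kg), a typical trapped ion
freq_scale = math.sqrt(m_Ca / m_e)
```

The same two lines of algebra motivate the GHz-scale resonator used here in place of the MHz-scale drives common in ion traps.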
Typically, trajectories in the trap are stable if the frequency of motion $\omega$ of the particles in the potential is much slower than the frequency~$\Omega$ of the confining field, and if the pseudopotential depth, defined as the maximum of the pseudopotential, is much larger than the kinetic energy of the charged particles. While the Mathieu equation provides useful intuition for quadrupole traps, it should be noted that the pseudopotential picture and the treatment with the Mathieu equation are no longer accurate when the potential deviates from a purely harmonic dependence. There are three main challenges to moving from ion to electron trapping. First, due to the lower electron mass the trapping field must be at higher frequencies. The stability parameters ($a_{\mathrm{M}}$ and $q_{\mathrm{M}}$) and depth of a quadrupole trap scale as $(m\Omega^2)^{-1}$, requiring the drive frequency~$\Omega$ for an electron trap to be about two orders of magnitude higher than for typical ion traps. Second, electrons must be created with energies low enough to stay confined by the trapping potential. We require both that cold electrons are injected directly into the trap center, and that the trap is sufficiently deep. Third, in the absence of fluorescence detection, we need a different mechanism to evidence trapping. \section{\label{Experiment}Experiment design} \subsection{\label{Trap}Electron trap} We begin by describing the microwave quadrupole trap engineered for this experiment, shown in an exploded view in Fig.~\ref{fig:Fig1}. It consists of three double-sided printed circuit boards (PCBs) separated by alumina spacers. The central board features a co-planar $\lambda/2$ waveguide resonator capacitively coupled to a microwave feedline (right-hand side of the PCB). 
The end of the resonator in the board center contains a slot and functions as the trap's microwave electrode, providing an AC quadrupole field which confines electrons inside the slot in the $x$ and $y$-directions. The quality factor of the resonator is about 35. When fully assembled and connected inside the ultra-high vacuum (UHV) chamber, we measure a resonance frequency of $2\pi\times\;1.60$~GHz, and find we can reach about 100~V on the microwave resonator with 5~W input power. Integrating a co-planar resonator into the trap design provides a convenient solution to reaching the high frequencies needed for electron trapping and future cryogenic experiments can take advantage of previous work on waveguide resonators in the context of superconducting qubits~\cite{Goeppl2008, Malissa2013}. The resonator is held at DC ground potential via a tap in its center which connects it to the grounded top surface of the board. The outside PCBs, mirroring each other about the central board, each feature ten rectangular electrodes along the slot on the inside board surface, labeled $e_{\mathrm{in,\;j}}$ with $j=1-20$. Electrodes $e_{\mathrm{in,\;1-10}}$ are visible on the lower board and magnified in the inset, while electrodes $e_{\mathrm{in,\;11-20}}$ are on the hidden side of the upper board. The traces delivering voltages to the electrodes are on the outside surfaces, visible for the top board, and linked to a ground electrode via 10~pF decoupling capacitors. Both boards also feature a single electrode which surrounds the slot on the outside surface, and is labeled $e_{\mathrm{out,\;1}}$ for the bottom board and $e_{\mathrm{out,\;2}}$ for the top board. Electrodes $e_{\mathrm{in,\;1-20}}$ are used to apply a static quadrupole field, confining electrons in the $z$ (axial) direction, while $e_{\mathrm{out,\;1-2}}$ are held at DC ground potential. 
Wires soldered to the outer boards supply DC voltages in the $\pm 28$~V range from a 16-bit digital-to-analog converter, while the microwave voltage is applied via SMA connectors to the central board. \begin{figure*} \includegraphics{Fig2.pdf}% \caption{\label{fig:Fig2} Electron trapping potential. (a) Contour plot of the trapping potential based on the pseudopotential approximation in the $xy$-plane in the center of the slot for 90~V amplitude at 1.6~GHz frequency on the resonator electrode. The circuit board substrate is indicated by the hashed areas, metal electrodes are highlighted in yellow. (b) Top: pseudopotential along the $x$-axis through the trap center (blue continuous curve), compared to an ideal harmonic potential (dashed orange). Bottom: deviation $\Delta$ of pseudopotential from harmonic potential. (c) Top: pseudopotential along the $y$-axis through the trap center (blue continuous curve), compared to an ideal harmonic potential (dashed orange). Location of trap substrate (electrodes) shown as hashed grey (solid yellow) area. Bottom: deviation $\Delta$ of pseudopotential from harmonic potential.} \end{figure*} \begin{figure*} \includegraphics{Fig3.png}% \caption{\label{fig:Fig3} Experimental setup and control schematic. (a) Simplified rendering of the setup inside the UHV chamber, showing the electron trap, a microchannel plate (MCP) detector, a mesh, the path of the PI beams, and an oven supplying calcium for ionization. (b) Key elements for experiment synchronization. An arbitrary waveform generator (AWG) provides the extraction pulses to the trap, and the start signal for a time-to-digital converter (TDC). It controls the timing for pulsing on and off the PI loading beams and a signal generator (SG), which excites the electron motion (`tickle'). Pulses from the MCP provide the stop signals to the TDC.
Low-pass filters prevent electronic pickup of the microwave trap drive and the extraction pulses by the MCP detection circuit which contains pulse shaping and amplification elements. The Ca oven and a digital-to-analog converter (DAC) for the DC trap voltages are operated with constant settings during an experiment.} \end{figure*} Fig.~\ref{fig:Fig2} details the trap pseudopotential experienced by an electron when the resonator supplies 90~V at $2\pi\times\;1.6$~GHz and all DC electrodes are grounded. Fig.~\ref{fig:Fig2}(a) displays a map of the pseudopotential for a cut through the trap in the $xy$-plane at the center of the slot, highlighting the trap substrate and electrodes as hashed grey and yellow areas, respectively. In Fig.~\ref{fig:Fig2}(b) and (c) we show that the trap depth based on the pseudopotential approximation (continuous blue curves) is about 1.3~eV (or 15,000~K), limited by the weaker confinement along the $x$-direction. We compare the pseudopotential to an ideal harmonic potential (orange dashed curve) and find they match closely to a distance of about 100~$\mathrm{\mu m}$ from the trap center, as exemplified by the green curves in the bottom panels which show their difference $\Delta$. The secular frequency for an electron moving in this potential (radial modes of motion) corresponds to about $2\pi\times\;300$~MHz. \subsection{\label{th_load}Electron loading and detection} With a suitable trap design in place, we address the challenges of injecting electrons into the trap and detecting them. Previous experiments involving trapping electrons in Penning traps, or guiding electrons in a linear quadrupole potential employed electron guns, either as primary electron source~\cite{Hoffrogge2011}, or to create secondary electrons through collision ionization of background gas~\cite{Wineland1975,Walz1995}. Here, we borrow the two-stage procedure for photoionization (PI) of calcium which is used for trapping ions from an atomic beam~\cite{Gulde2001}. 
It enables both the creation of very cold electrons by tuning the lasers close to the ionization threshold, and preferential ionization in the trapping region by optical alignment. Since fewer charged particles are introduced around the trap using this method, we also reduce accidental charging of the trap which would modify the trapping potential. Detection is accomplished by applying voltage pulses to several DC electrodes which distort the trapping potential to extract trapped electrons, and accelerate them into a microchannel plate detector (MCP). \subsection{\label{setup}Experiment setup and protocol} The main components of the experimental setup and their alignment in the UHV chamber are shown in Fig.~\ref{fig:Fig3}(a), omitting electric leads for simplicity. The base pressure in the chamber is below $1\times10^{-10}$~mbar. In addition to the trap itself, the chamber contains a resistive oven aligned to direct an atomic calcium beam through the trap slots when heated (steel tube labelled `oven'), the two-stage MCP, and a steel mesh which directs electrons extracted from the trap towards the MCP. The PI laser beams (423 and 390~nm wavelength) are overlapped and traverse the chamber at right angles to the Ca oven, focusing near the trapping region with a beam waist of about 30~$\mathrm{\mu m}$. The 423~nm single-mode laser is tuned to be on resonance with the neutral calcium $4 ^{1}S_{0}$-$4 ^{1}P_{1}$ transition, while the free-running multi-mode 390~nm laser diode is tuned by temperature and current to maximise the electron ionization rate. While the $4 ^{1}P_{1}$-continuum ionization threshold is at about 389.8~nm, we find the ionization rate to peak when the diode center wavelength is about $390.3\pm0.2$~nm, suggesting ionization is taking place via Rydberg states~\cite{Gulde2001}. 
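A quick photon-energy calculation (using $hc \approx 1239.84$~eV\,nm) illustrates why electrons created this way are expected to be cold: the 390.3~nm diode centre lies only a few meV below the 389.8~nm threshold, far less than the trap depth of about 1.3~eV.

```python
# Photon energies for the second photoionization step. The 390.3 nm
# diode centre sits only ~4 meV below the 389.8 nm threshold from the
# 4 1P1 state, consistent with ionization via Rydberg states and with
# electrons inheriting very little kinetic energy.
HC_EV_NM = 1239.84  # h*c in eV*nm

def photon_energy_eV(wavelength_nm):
    return HC_EV_NM / wavelength_nm

deficit_eV = photon_energy_eV(389.8) - photon_energy_eV(390.3)
```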
As such, we expect electrons to inherit only minimal kinetic energy from the ionization process and their energy in the trapping potential is rather determined by their ionization location and the phase of the microwave field. We note that since the PI lasers are co-propagating, Ca ionization conditions are met over an extended volume, which follows the laser beam path, and the majority of electrons are created outside of the trap. Constraints on optical access to the trap in our particular setup prevent us from aligning the two PI beams such that they only intersect inside the trap. Fig.~\ref{fig:Fig3}(b) shows a schematic of the electronics setup for synchronizing the experiment. An arbitrary waveform generator (AWG) functions as the experiment clock, providing the trap extraction pulse, and the start signal triggering a time-to-digital-converter (TDC). The extraction pulses are added to the static voltages for DC confinement which originate from a digital-to-analog converter (DAC). The AWG further controls the timing for switching on and off both the 390-nm PI laser for loading and a signal generator (SG) used to apply an rf tone (labeled `tickle') to the trap. The electron detection signal is picked off from the MCP anode supply voltage with a high-pass filter, shaped and amplified so that it can be used as the TDC stop signal. Low-pass filtering the extraction pulses ($2 \pi \times 50$~MHz cut-off) prevents electronic pickup at the MCP and a further low-pass filter ($2 \pi \times 200$~MHz cut-off) in front of the shaping and amplification circuit removes pickup at the frequency of the microwave drive. \begin{figure} \includegraphics{Fig4.pdf}% \caption{\label{fig:Fig4} Measurement protocol and typical data. (a) Illustration of one cycle of the experimental protocol. Free electrons are created during the loading phase when the 390-nm laser is switched on. An rf tone is applied to one DC electrode in some experiments during the `wait/excite' phase. 
Voltage pulses to three DC electrodes in the `readout' phase eject electrons in the direction of the mesh and MCP. (b) Histogram of MCP detection events for experiment with load and wait times $t_{\mathrm{load}}=t_{\mathrm{wait}}\approx 10~ \mathrm{\mu s}$. During loading, some untrapped electrons are accelerated into the MCP, replicating the 390~nm laser pulse shape. The extraction pulse empties the trap into the MCP, resulting in a large and sharply localized signal. Inset: Close-up of the histogram during the readout phase.} \end{figure} The timing for one experimental cycle is illustrated in Fig.~\ref{fig:Fig4}(a). It starts with a loading phase of variable duration $t_\mathrm{load}$ where the 390~nm PI laser is pulsed on. The 423~nm PI laser is kept on during the full cycle. Loading is followed by a variable time $t_\mathrm{wait}$, where we either keep all settings constant or apply an rf tone at frequency $\omega_{\mathrm{tickle}}$ to electrode $e_{\mathrm{in,\;17}}$. Finally, an extraction pulse of 20~ns duration is applied to three electrodes, $e_{\mathrm{out,\;1}}$ with 14~V amplitude, and $e_{\mathrm{in,\;3,8}}$ with 10~V amplitude, which ejects trapped charges from the trap. For the experiments presented here we supply a constant current to the calcium oven, and keep the microwave trap drive in continuous-wave mode such that the voltage amplitude on the microwave resonator corresponds to about 90~V. We use about 500~$\mathrm{\mu W}$ of 423~nm laser light and approximately 2.4~$\mathrm{mW}$ of 390~nm light for the photoionization process. The mesh is at 150~V potential while the first and second stage, and the anode of the MCP are kept at 200~V, 2200~V and 2500~V, respectively. In Fig.~\ref{fig:Fig4}(b) we show a histogram of MCP detection events, where the loading and wait times are $t_{\mathrm{load}}=t_{\mathrm{wait}}\approx~10~\mathrm{\mu s}$. 
Data are displayed as the probability of recording an event during a 1~ns time bin and we acquire data for $10^{7}$ experimental cycles. During the loading period we observe a small constant signal mirroring the shape of the 390~nm laser pulse, likely from just-created but not trapped electrons. Application of the extraction pulse at the end of the experimental cycle results in a large and sharply localized signal from the MCP, demonstrating that electrons remain in the trap 10~$\mathrm{\mu s}$ after the end of the loading pulse. The inset displays a close-up of the readout signal, which peaks with a full-width at half-maximum of about 2~ns. Note the inset uses a linear scale for the readout signal, while the full cycle is displayed using a semi-logarithmic scale to show the background during loading as well. \section{\label{results}Results} \subsection{\label{exp_load}Electron loading and storage} \begin{figure} \includegraphics{Fig5.pdf}% \caption{\label{fig:Fig5} Trapped electron loading and storage. (a) Electron trapping and detection probability as a function of loading time with $t_{\mathrm{wait}} = 10~ \mathrm{\mu s}$. The dashed curve shows an exponential fit to $1-\exp(-t/\tau_{1/e})$ with $\tau_{1/e}= 80.3\pm 0.5~\mathrm{\mu s}$. Error bars due to statistical uncertainty are too small to be visible. (b) Storage time measurement with a double-exponential fit, showing three quarters of electrons leave the trap with a decay constant $\tau_{1/e} = 30 \pm 7~\mathrm{ms}$, while the remaining quarter show no decay within measurement uncertainty. The horizontal dash-dotted line displays the background detection level based on an independent measurement. Error bars and the grey band correspond to one s.d. statistical uncertainty.} \end{figure} Having demonstrated electron trapping, we move on to quantify the trapping process.
To investigate electron loading we use the protocol introduced in Fig.~\ref{fig:Fig4}(a) and vary $t_{\mathrm{load}}$ for a fixed wait time $t_{\mathrm{wait}} = 10~\mathrm{\mu s}$. For ease of presentation, we sum detections over a 50-ns wide window around the readout signal; see Fig.~\ref{fig:Fig5}(a) for the results. The left ordinate displays the fraction of cycles with at least one detection pulse from the MCP, which approaches unity with a time constant $\tau_{1/e}\approx 80~\mathrm{\mu s}$ as $t_{\mathrm{load}}$ increases. We employ a simple threshold method to detect MCP pulses with the TDC and set a 60~ns deadtime following each detection to prevent double-counting events due to voltage ringing, which sets a natural limit of one detection event per cycle. Considering electron loss at the mesh and the MCP we estimate about one in eight extracted electrons results in a signal from the MCP. Taking into account the loss during the readout process and the fraction of detections per cycle, we can estimate the average number of electrons in the trap for each measurement setting (see Appendix \ref{sec:app_efficiency} for details). The right ordinate in Fig.~\ref{fig:Fig5}(a) shows that the electron number is proportional to the loading time: it takes on average $10~\mathrm{\mu s}$ to load one electron and we trap on average about 20 electrons for a loading time of $200~\mathrm{\mu s}$. To measure the electron storage time in the trap, we set the loading time such that the trap rarely contains more than a single electron, and record the readout signal as a function of the wait time; see Fig.~\ref{fig:Fig5}(b) for the data. The measurement shows two distinct regimes, where about three quarters of electrons are lost within 100~ms (exponential decay with $\tau_{1/e} = 30 \pm 7~\mathrm{ms}$ for this measurement), while the remaining one quarter show no detectable loss after 1~s.
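The conversion from per-cycle detection fraction to mean electron number can be sketched with simple Poisson statistics. This is a minimal illustration using the one-in-eight detection efficiency quoted above; the estimate of Appendix \ref{sec:app_efficiency} may include further corrections.

```python
import math

# With detection efficiency eta ~ 1/8 per extracted electron and at
# most one recorded detection per cycle, Poisson statistics relate the
# per-cycle detection probability p to the mean electron number n_bar:
#   p = 1 - exp(-eta * n_bar)  =>  n_bar = -ln(1 - p) / eta
# Simplified sketch; further corrections may apply in practice.
def mean_electron_number(p, eta=1.0 / 8.0):
    return -math.log1p(-p) / eta
```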
The dark horizontal dash-dotted line displays the background detection level measured independently to be about $1\times 10^{-4}$ detections per cycle. Long storage times in the trap are essential if trapped electrons are to be used as qubits, so understanding the mechanism behind loss is an important task. The dominant loss mechanism for laser-cooled ions is collisions with background gas. Collisions, in particular with heavier atoms and molecules, can provide sufficient energy to kick an ion out of its trapping potential or may lead to the formation of molecules. Given the light mass of the electron one might expect collisional loss and electron capture to play a major role in our trap too. In order to quantify this loss channel we have changed the pressure in the vacuum chamber by more than an order of magnitude and measured the decay constant at chamber pressures of $\sim5\times10^{-10}$~mbar and $\sim2\times10^{-8}$~mbar. We found no change within our measurement uncertainty (see Appendix \ref{sec:app_loss} for details), which rules out collisions with the background gas as the primary loss channel. The important difference between electrons in our trap and laser-cooled ions is their light mass as well as the absence of a cooling mechanism which dampens the motion and concentrates particles in the trap center. Electrons in our trap sample a much greater volume and hence experience anharmonicities in the trapping potential further away from the trap center. While the general motion of a charged particle in an anharmonic AC potential is non-trivial~\cite{Gerlich1992}, we can note two general differences from the harmonic case. First, the normal modes of motion become coupled; second, nonlinear resonances \cite{Wang1993, Alheit1996a} enable the transfer of energy from the driven micromotion to the secular motion, leading to heating and particle loss.
We illustrate particle loss for the motion of a single electron along one dimension of the trap in numerical simulations in Appendix \ref{sec:simulation} and find that trajectories for electrons ionized further than about 100~$\mathrm{\mu m}$ from the trap center are not generally stable. The electron motion amplitude in the trap exceeds 250~$\mathrm{\mu m}$ for those cases. This length scale is consistent with the onset of strong deviations of the pseudopotential from a purely harmonic form as shown in Fig. \ref{fig:Fig2}. Within $200~\mathrm{\mu m}$ of the trap center, the pseudopotential deviates by less than 2\% from the ideal harmonic potential and one may expect stable motion there. Coupling between the secular modes and repeated perturbations can still heat electrons from stable into unstable orbits and lead to loss, however, since there is no damping of the motion. Such perturbations to the trapping potential do occur in our experiments, for instance due to power fluctuations of the microwave field, charging of the trap substrate, or the interactions of multiple charges in the trap. We can experimentally probe some elements of this argument: by changing the focusing and alignment of the PI beams, which affects the average ionization distance from the trap center, we find larger PI beams in the trapping region are correlated with fewer long-lived electrons. We also observe storage times to decrease slightly when we increase the electron density in the trap by increasing the loading time. We therefore believe trap anharmonicity to be the driving force behind electron loss in our trap.
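The essence of such single-particle dynamics can be sketched in dimensionless units. The following is not the simulation of Appendix \ref{sec:simulation}; the Mathieu parameter $q_{\rm M}$ and the quartic coefficient are illustrative values, not fitted trap parameters.

```python
import math

# Dimensionless 1D sketch of the driven motion, x'' = -(q_M/2) cos(t) x,
# with drive frequency Omega = 1 and an optional quartic anharmonicity
# beta (illustrative values only).
def max_excursion(x0, q_M=0.3, beta=0.0, dt=1e-3, n_steps=200_000):
    """Semi-implicit Euler integration; returns the maximum |x|."""
    x, v = x0, 0.0
    x_max = abs(x0)
    for i in range(n_steps):
        t = i * dt
        # restoring force from the drive, with optional anharmonic term
        a = -0.5 * q_M * math.cos(t) * x * (1.0 + beta * x * x)
        v += a * dt
        x += v * dt
        x_max = max(x_max, abs(x))
    return x_max
```

For small $q_{\rm M}$ and $\beta = 0$ the trajectory stays bounded near its initial amplitude, while larger amplitudes in an anharmonic potential can transfer drive energy into the secular motion, mirroring the loss mechanism discussed above.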
Only the trajectories for single electrons confined within the harmonic trapping region are stable, so with the ionization volume dictated by the alignment of the photoionization beams we expect the majority of initially trapped electrons (which are created outside of the central $100~\mathrm{\mu m}$) to experience a sufficiently anharmonic potential and be driven out of the trap eventually. The tail of long-lived electrons in Fig.~\ref{fig:Fig5}(b) is then attributed to the single electrons that remain in a stable trajectory after all other particles have been heated out of the trap. Studying the loss mechanisms of trapped electrons in greater detail with this Paul trap will likely be an important subject for future work. However, the long lifetimes observed here show that heating effects, for instance due to collisions with background gas or the electron micromotion~\cite{Prestage1991, Chen2013a}, are not prohibitive to conducting experiments even at room temperature. This is an encouraging sign, in particular for the prospects of non-destructive electron detection and cooling via image current measurements in a cryogenic environment. Studying trap loss may also yield insights relevant for quadrupole ion traps, and since the electron motion is faster by a factor of a hundred compared to ion motion in a typical trap, experiments would take less time. \begin{figure} \includegraphics{Fig6.pdf}% \caption{\label{fig:Fig6} Trap frequencies. Measurements of the motional resonances for an average of four electrons loaded into the trap, and a wait time $t_{\mathrm{wait}} = 2~ \mathrm{ms}$ during which an rf tickle is switched on. The axial (radial) resonance is denoted $\omega_{z}$ ($\omega_{r}$). (a) 5~mV tickle excitation, (b) 20~mV tickle excitation.} \end{figure} \subsection{\label{frequencies}Trap frequencies} Finally, we are interested in the frequencies of the electron motion in the trap.
Again, we follow the experimental protocol from Fig.~\ref{fig:Fig4}(a), now loading about ten electrons on average, and setting the wait time to $t_{\mathrm{wait}} = 2~\mathrm{ms}$. During the wait time we try to excite the motion of trapped electrons with an rf tone at frequency $\omega_{\mathrm{tickle}}$. We step $\omega_{\mathrm{tickle}}$ in increments of 1~MHz from 20 to 350~MHz and monitor electron loss, which is indicative of a motional resonance. The top panel in Fig.~\ref{fig:Fig6} shows the electron loss spectrum for a tickle voltage of about 5~mV applied to $e_{\mathrm{in,\;17}}$ and features two prominent dips. We can identify the resonances based on their response to DC and microwave voltages, revealing the dip at about $2\pi\times 40$~MHz as the axial mode, while the $2\pi\times 300$~MHz resonance is due to one of the radial modes of motion. We find no evidence of multi-electron Wigner crystal modes, which is consistent with having weakly or non-interacting electrons with a range of energies in the trap. At low tickle powers only the fundamental resonances are visible. Exciting the system more strongly reveals a series of harmonics of the axial mode (see Fig.~\ref{fig:Fig6}(b)), and a small shift in the fundamental frequency. Changing the DC and microwave voltages, we can tune the axial mode frequency between 30 and 100~MHz and the radial mode between 200 and 380~MHz, limited by the voltage sources used in the experiment. \section{Conclusions and outlook} In summary, we have presented the first experiment to trap electrons in a microwave Paul trap. Electrons can be loaded in tens of microseconds and 25\% survive up to at least one second. Trap frequencies ranging from tens to several hundred MHz have been measured. Trapping electrons in a Paul trap opens the door to using their unique properties for quantum information processing.
One of the main challenges moving towards this goal is to cool the secular modes sufficiently to be able to perform quantum operations on them. The well-established method of detecting the electron image current with a resonant circuit provides a convenient cooling mechanism, as it thermalises the detected mode of motion with the temperature of the detection circuit~\cite{Wineland1973}. Some form of cryogenic cooling is then necessary to reach low electron temperatures. This may not appear straightforward considering the high voltages at microwave frequencies required for trapping, but several features may be used to our advantage. First, we do not, in principle, require the trap itself to be cold, just the detection circuit, which can be separate from the main body of the trap. Since the image current detection dissipates much less power than the microwave electrodes, cooling the detection circuit separately may be a sensible choice. Second, we do not need to reach the motional ground state to be in the quantum regime~\cite{Molmer1999}. For the sake of concreteness, reducing the dimensions of the current trap by a factor of ten, such that the trap center would be about $50~\mu\mathrm{m}$ from the nearest electrode, and increasing the drive frequency by a factor of 10 ($\Omega\sim16~\mathrm{GHz}$), would give radial modes at $\omega\sim3~\mathrm{GHz}$. With a detection circuit cooled to 1.5~K and tuned to a radial mode, we expect an average mode occupation of 10 quanta, which is comparable to the mode occupation of a Doppler-cooled ion. In Ref.~\cite{Peng2017} we have discussed schemes to cool other modes of motion and perform state readout under similar conditions.
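The quoted occupation follows from the Bose-Einstein distribution, taking the mode frequency as 3 GHz and the circuit temperature as 1.5 K:

```python
import math

# Mean thermal occupation of a 3 GHz radial mode thermalised with a
# 1.5 K detection circuit (Bose-Einstein distribution), reproducing
# the ~10 quanta quoted in the text.
H = 6.62607015e-34    # Planck constant (J s)
K_B = 1.380649e-23    # Boltzmann constant (J/K)

def mean_occupation(f_Hz, T_K):
    return 1.0 / math.expm1(H * f_Hz / (K_B * T_K))

n_bar = mean_occupation(3e9, 1.5)
```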
Along these lines, the next milestones towards quantum control of trapped electrons would be non-destructive electron detection~\cite{Wineland1973} and spin readout~\cite{Peng2017}, which benefit from motional frequencies in the GHz regime (that is, smaller traps) and integration into a cryogenic environment. Building on technology that has already been demonstrated for quantum control of trapped ion hyperfine~\cite{Mintert2001a, Piltz2016, Ospelkaus2011, Harty2016} and Zeeman~\cite{Ruster2016} qubits could accelerate the development of a trapped electron quantum computing platform. Distribution of entanglement over large distances is another challenge further in the future. Here, dipole-dipole coupling of single electrons, or electron crystals, in separate traps is an attractive option for realizing entanglement over intermediate distances~\cite{Harlander2011,Brown2011}. A path towards coupling electron qubits over longer distances could be via image currents in shared electrodes~\cite{Heinzen1990,Daniilidis2009}. Techniques like these may enable the creation of large entangled states. We also note that an electron in the harmonic potential of a Paul trap realizes an instance of the lightest possible electromechanical oscillator~\cite{Eisert2004}. The resonance frequency and quality factor can be engineered by controlling the confining potential. While we believe a platform operating with trapped electrons as the sole qubit modality is the least challenging route, the ability to fine-tune the frequency of motion {\em in-situ} and the electron's strong interaction with electric fields could be used for coupling to other quantum systems with resonances in the GHz range, such as superconducting qubits~\cite{Kurizki2015,Kotler2017}. Finally, electron Paul traps may also find applications outside the realm of quantum information science. Our trap could, for instance, trap positrons and be employed for the preparation of antihydrogen~\cite{Leefer2016,Ahmadi2020}.
Other applications include electric-field sensing at GHz frequencies~\cite{Brownnutt2015}, using cold trapped electrons for imaging~\cite{Kruit2016}, or for plasma physics studies~\cite{Twedt2012}. \begin{acknowledgments} We would like to thank S. Mouradian for manuscript feedback, the Yao lab at UC Berkeley for the loan of the TDC, and Dr David E. Root, Keysight Laboratories, for initiating financial support and technical assistance by Keysight Technologies through the Keysight University Research Collaborations Program. \end{acknowledgments}
\section{Introduction} Supersymmetric models for particle physics have been quite actively studied for decades, and they will attract much more attention during the second run of the Large Hadron Collider. The most famous and successful one is the minimal supersymmetric standard model (MSSM), which is indeed respected in many supersymmetric models. In generic model building, these supersymmetric models are accompanied by a sequestered hidden sector which breaks supersymmetry (SUSY) spontaneously, even when that is not mentioned explicitly. The SUSY breaking sector is certainly a key constituent of SUSY scenarios, because SUSY is broken in our real world at least below the electroweak scale. A wide variety of models for SUSY breaking sectors, solely or in association with the visible sector, have been proposed so far. In particular, many models of dynamical supersymmetry breaking (DSB) due to the strong dynamics of non-Abelian gauge theories were proposed after the Seiberg duality revealed the infrared behaviors of strongly coupled $\mathcal N=1$ SUSY theories~\cite{Seiberg:1994pq,Seiberg:1997vw}. These DSB scenarios are quite promising for completing SUSY models, because a large hierarchy between the Planck scale and the SUSY breaking scale (the intrinsic strong scale) is easily generated by the logarithmic running of strong gauge couplings. In this paper, we construct DSB models in a four-dimensional (4D) low-energy effective theory derived from ten-dimensional (10D) supersymmetric Yang-Mills (SYM) theories compactified on three 2-tori with magnetic fluxes. Extra-dimensional space with magnetic fluxes has been addressed as a promising candidate for the origin of the flavor structure of the quarks and leptons, which is a big mystery of the standard model and its SUSY extensions.
Magnetic fluxes on tori lead to gauge symmetry breaking and derive a product gauge group from a single large group, realizing generations of chiral fermions as degenerate zero-modes in the bi-fundamental representations~\cite{Bachas:1995ik,Cremades:2004wa} in the 4D effective theory. Indeed, a semi-realistic flavor structure was obtained in an MSSM-like model derived from the magnetized SYM theories~\cite{Abe:2012fj,Abe:2014soa}, where a suitable Yukawa hierarchy consistent with the observed masses and mixings of quarks and leptons is realized. This hierarchy is essentially due to the quasi-localization of wavefunctions in extra-dimensional space~\cite{ArkaniHamed:1999dc} caused by the magnetic fluxes. It was also shown that this model can be consistent with the recent experimental constraints on the Higgs boson mass and SUSY particle spectra, where a certain class of SUSY-breaking mediation mechanism is assumed~\cite{Abe:2014soa}. With $Z_2$ orbifolding~\cite{Abe:2008fi}, these attractive properties of magnetic fluxes persist, and three-generation models of the quarks were studied on orbifolds~\cite{Abe:2008sx}. In Ref.~\cite{Abe:2014vza}, realistic Yukawa hierarchies were indeed realized on magnetized $Z_2 \times Z'_2$ orbifolds. These magnetized orbifolds\footnote{ Recently, $Z_3$, $Z_4$ and $Z_6$ orbifold models were also studied~\cite{Abe:2014noa,Abe:2015yva,Matsumoto:2016okl,Fujimoto:2016zjs}.} lead to a flavor structure different from that of the magnetized tori without orbifolding~\cite{Abe:2009vi,Abe:2009uz,Abe:2013bba}. Besides that, the orbifold projection can eliminate extra adjoint fields (which remain massless on tori and are phenomenologically disfavored in many cases), which would be a great advantage in realistic model building. Thus, magnetized toroidal compactification with or without orbifolding offers an exciting possibility for realizing the suitable visible (MSSM) sector in extra-dimensional field theories.
As a second step towards completing these models, it is important to study SUSY breaking mechanisms on magnetized tori and orbifolds, which is the main purpose of this paper. The following sections are organized as follows. In Sec.~\ref{sec:2}, we review the 10D SYM theories compactified on magnetized tori. We adopt a 4D $\mathcal N=1$ description of 10D SYM theories, which is quite useful for the later model building. With this description, we give an overview of zero-mode configurations when the theory is compactified on three 2-tori with magnetic fluxes with/without $Z_2$ orbifolding. Sec.~\ref{sec:3} is the main part of this paper, where the construction of various DSB models is shown with several concrete magnetized backgrounds. In Sec.~\ref{sec:3.1}, we show certain aspects of DSB on magnetized tori with a simple configuration of magnetic fluxes which yields the gauge symmetry breaking $U(N) \rightarrow U(N_C) \times U(N_X)$, by assuming certain vacuum expectation values (VEVs) of the adjoint fields and mass terms for their fluctuations. First, a (metastable) DSB model is constructed in Sec.~\ref{sec:3.1.1} based on the Intriligator-Seiberg-Shih (ISS) model~\cite{Intriligator:2006dd}, that is, $SU(N_C)$ SYM theory with $N_F$ massive fundamental quarks satisfying $N_C+1 \le N_F<\frac32N_C$. We also construct a DSB model without massive quarks in Sec.~\ref{sec:3.1.2}, deriving a tadpole term from tri-linear couplings in the superpotential, via a suitable strong dynamics. In Sec.~\ref{sec:3.2}, we extend the flux configuration in such a way that the gauge symmetry breaking $U(N) \rightarrow U(N_C) \times U(N_X) \times U(N_Y)$ occurs. Then, we show a class of self-complete DSB models on magnetized orbifolds, where all the extra unwanted fields are eliminated by the orbifold projection and DSB successfully occurs within the given framework without any nontrivial assumptions.
In Sec.~\ref{sec:3.3}, we comment on some perspectives for embedding the obtained DSB models into a single whole system including the visible (MSSM) sector and another hidden sector for the moduli stabilization. We conclude with the future prospects in Sec.~\ref{sec:4}. In Appendix~\ref{sec:app}, the other flux configurations are shown for deriving the same class of DSB models as the one demonstrated in Sec.~\ref{sec:3.2}. \section{10D SYM theory on magnetized tori} \label{sec:2} We review 10D SYM theories on magnetized tori and orbifolds briefly, following Ref.~\cite{Abe:2012ya}. In this paper, the theories are compactified on $M^4\times T^2\times T^2\times T^2$ with/without $Z_2$ orbifolding. First, we introduce a 4D $\mathcal N=1$ description of higher-dimensional SUSY theories, which is quite useful for our model building. Using this description, we turn on Abelian magnetic fluxes in the extra-dimensional space and give an overview of zero-mode configurations on the magnetized tori. Finally, we explain magnetized $Z_2$ orbifolds, which are one of the key ingredients in some of our DSB models. \subsection{4D $\mathcal N=1$ decomposition} The 10D SYM theory consists of a 10D vector field $A_M$ ($M=0,1,\ldots,9$) and a 10D Majorana-Weyl spinor field $\lambda$. For the extra dimensional directions, we define complex coordinates~$z^i$ ($i=1,2,3$) and vectors~$A_i$ with complex structures~$\tau_i$ as \begin{equation*} z^i\equiv \frac12\left(x^{2+2i}+\tau_ix^{3+2i}\right),\qquad A_i\equiv -\frac1{\im\tau_i}\left(\tau_i^*A_{2+2i}-A_{3+2i}\right). \end{equation*} The periodic boundary conditions for the three 2-tori are given by $z^i\sim z^i+1$ and $z^i\sim z^i+\tau_i$. In this complex basis, the metric of the three 2-tori is represented by \begin{eqnarray*} ds_{\rm 6D}^2\equiv 2h_{\bar ij}d\bar z^{\bar i}dz^j,\qquad h_{\bar ij}=\delta_{\bar ij}2(2\pi R_i)^2, \end{eqnarray*} where $R_i$ determines the period of the $i$-th 2-torus.
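As a consistency check of these conventions (a sketch, assuming square tori with $\tau_i=i$ for simplicity), the line element of a single 2-torus reduces to the familiar flat form:

```latex
z^i=\tfrac12\left(x^{2+2i}+i\,x^{3+2i}\right)
\ \Rightarrow\
2h_{\bar ii}\,d\bar z^{\bar i}dz^i
=2\cdot2(2\pi R_i)^2\cdot
\frac{\left(dx^{2+2i}\right)^2+\left(dx^{3+2i}\right)^2}{4}
=(2\pi R_i)^2\left[\left(dx^{2+2i}\right)^2+\left(dx^{3+2i}\right)^2\right],
```

so the unit periods $z^i\sim z^i+1$ correspond to physical circumferences $2\pi R_i$, and the area of the $i$-th 2-torus is $\mathcal A^{(i)}=(2\pi R_i)^2\im\tau_i$.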
Now, the 10D vector field $A_M$ has been decomposed into a 4D vector and three complex scalar fields, $A_\mu$ and $A_i$. The spinor field can also be decomposed into four 4D Weyl spinors, which are distinguished by their chiralities on each 2-torus. We denote them as $\lambda_{+++}$, $\lambda_{+--}$, $\lambda_{-+-}$ and $\lambda_{--+}$ where the $i$-th subscript $\pm$ expresses the chirality on the $i$-th 2-torus, and the others (e.g., $\lambda_{---}$) are excluded by the 10D Weyl condition. We redefine these four spinors as \begin{equation*} \lambda_0\equiv\lambda_{+++},\quad \lambda_1\equiv\lambda_{+--},\quad \lambda_2\equiv\lambda_{-+-},\quad \lambda_3\equiv\lambda_{--+}, \end{equation*} for later convenience. These 4D component fields form 4D $\mathcal N=1$ supermultiplets, which are assigned to a vector~$V$ and three chiral superfields~$\phi_i$ as \begin{eqnarray} V&\equiv&-\theta\sigma^\mu\bar\theta A_\mu+i\bar\theta\bar\theta\theta\lambda_0 -i\theta\theta\bar\theta\bar\lambda_0+\frac12\theta\theta\bar\theta\bar\theta D,\label{eq:vdef}\\ \phi_i&\equiv&\frac1{\sqrt2}A_i+\sqrt2\theta\lambda_i+\theta\theta F_i. \label{eq:phidef} \end{eqnarray} The authors of Refs.~\cite{Marcus:1983wb,ArkaniHamed:2001tb} proposed an action in the 4D $\mathcal N=1$ superspace that is equivalent to the usual component action of 10D SYM theory with the definitions (\ref{eq:vdef}) and (\ref{eq:phidef}). Ref.~\cite{Abe:2012ya} showed its extension to toroidal compactifications where background magnetic fluxes are turned on. In the superspace formulation, a 4D $\mathcal N=1$ SUSY out of the full $\mathcal N=4$ SUSY possessed by 10D SYM theories becomes manifest, which is preserved by the configurations of magnetic fluxes. The $\mathcal N=1$ SUSY-preserving conditions are read from the field equations for the auxiliary fields $D$ and $F_i$, which are shown later. \subsection{Zero-modes on magnetized tori} Next we show the zero-mode structure on magnetized tori.
In $U(N)$ gauge theory, magnetic fluxes on the $i$-th 2-torus can be represented by an $N\times N$ matrix $M^{(i)}$ through \begin{equation*} \langle A_i\rangle=\frac{\pi}{\im\tau_i}M^{(i)}\bar z_{\bar i}. \end{equation*} We consider nonvanishing integer values only for the diagonal entries of $M^{(i)}$, i.e., Abelian magnetic fluxes. When some of them are degenerate, the gauge symmetry is broken as $U(N)\rightarrow U(N_a)\times U(N_b)\times\cdots$. We require these magnetic fluxes to satisfy the conditions $\langle F_i\rangle=\langle D\rangle=0$ to preserve 4D $\mathcal N=1$ SUSY. These can be rewritten simply as~\cite{Abe:2012ya,Troost:1999xn} \begin{equation} \frac1{\mathcal A^{(1)}}M^{(1)}+\frac1{\mathcal A^{(2)}}M^{(2)}+ \frac1{\mathcal A^{(3)}}M^{(3)}=0, \label{eq:susy} \end{equation} where $\mathcal A^{(i)}$ represents the area of the $i$-th 2-torus. If this is not satisfied, SUSY is broken at the compactification scale, which is in general much higher than the electroweak scale, and some SUSY particles get tachyonic masses due to $\langle D \rangle \ne 0$. In the following, we denote the $(a,b)$-entries of the $U(N)$ adjoint superfield $\phi_j$ by $\phi_j^{ab}$. For such bi-fundamental fields of $U(N_a) \times U(N_b)$, the zero-mode equations on the magnetized 2-tori are given by \begin{eqnarray} \left[\bar\partial_{\bar i} +\frac{\pi}{2\im\tau_i} M_{ab}^{(i)}z_i \right]\phi_j^{ab} &=& 0 \qquad{\rm for}\quad i=j, \label{eq:zeroii}\\ \left[\partial_i -\frac{\pi}{2\im\tau_i} M_{ab}^{(i)}\bar z_{\bar i} \right] \phi_j^{ab} &=& 0 \qquad{\rm for}\quad i\neq j, \label{eq:zeroij} \end{eqnarray} where $M_{ab}^{(i)} \equiv M_{a}^{(i)}-M_{b}^{(i)}$ expresses the difference between two diagonal entries in $M^{(i)}$. For positive values of $M_{ab}^{(i)}$, we find $M_{ab}^{(i)}$-degenerate zero-modes as solutions of Eq.~(\ref{eq:zeroii}), while Eq.~(\ref{eq:zeroij}) has no normalizable solution.
On the other hand, for $M_{ab}^{(i)}<0$, only Eq.~(\ref{eq:zeroij}) allows $|M_{ab}^{(i)}|$-degenerate zero-modes. Thus, magnetic fluxes yield generations of chiral fermions. \subsection{Magnetized orbifold} We now consider $Z_2$ orbifolding on magnetized tori. The superfield description introduced above is compatible with orbifold projections when we assign the same $Z_2$ parity to all the component fields contained in a single superfield. For example, we consider a $Z_2$ orbifold which acts on the first and the second 2-tori as $(z_1,z_2)\rightarrow (-z_1,-z_2)$. Under this $Z_2$ transformation, the superfields behave as \begin{eqnarray*} V(x_\mu,z_1,z_2,z_3)&=&~~PV(x_\mu,-z_1,-z_2,z_3)P^{-1},\\ \phi_1(x_\mu,z_1,z_2,z_3)&=&-P\phi_1(x_\mu,-z_1,-z_2,z_3)P^{-1},\\ \phi_2(x_\mu,z_1,z_2,z_3)&=&-P\phi_2(x_\mu,-z_1,-z_2,z_3)P^{-1},\\ \phi_3(x_\mu,z_1,z_2,z_3)&=&~~P\phi_3(x_\mu,-z_1,-z_2,z_3)P^{-1}, \end{eqnarray*} where the projection operator $P$ is given by an $N\times N$ matrix satisfying $P^2=1$. Then, each element is assigned either even or odd parity under this $Z_2$ transformation. Orbifold projections reduce the number of degenerate zero-modes generated by magnetic fluxes, or eliminate them completely. Ref.~\cite{Abe:2008fi} identified the degeneracy of each $Z_2$-eigenmode for a given amount of magnetic flux on $Z_2$ orbifolds, as shown in Table~\ref{tb:zeromode}. \begin{table}[th] \center \begin{tabular}{cccccccccccccc} $M$&$0$& $1$ &$2$ &$3$ &$4$ &$5$ &$6$ &$7$ &$8$ &$9$ & $10$&$2n$&$2n+1$ \\\hline Even& $1$ &$1$ &$2$ &$2$ &$3$ &$3$ &$4$ &$4$ &$5$ & $5$ &$6$&$n+1$&$n+1$\\ Odd& $0$ &$0$ &$0$ &$1$ &$1$ &$2$ &$2$ &$3$ &$3$ & $4$ &$4$&$n-1$&$n$\\ \end{tabular} \caption{The numbers of even- and odd-parity zero-modes on the magnetized $Z_2$ orbifold with $M$ units of magnetic flux.} \label{tb:zeromode} \end{table} From these results, we find that the orbifold background adds variety to magnetized model building.
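The counting in Table~\ref{tb:zeromode} can be reproduced by closed-form expressions; a sketch, where the formulas below are read off from the listed values rather than derived independently:

```python
def n_even(M):
    """Even-parity zero-modes for M units of flux (inferred from the table)."""
    return abs(M) // 2 + 1

def n_odd(M):
    """Odd-parity zero-modes for M units of flux (inferred from the table)."""
    return max(0, (abs(M) - 1) // 2)

# the explicit columns of the table: (M, even, odd)
table = [(0, 1, 0), (1, 1, 0), (2, 2, 0), (3, 2, 1), (4, 3, 1), (5, 3, 2),
         (6, 4, 2), (7, 4, 3), (8, 5, 3), (9, 5, 4), (10, 6, 4)]
for M, even, odd in table:
    assert (n_even(M), n_odd(M)) == (even, odd)
    # even and odd modes together recover the zero-modes of the covering torus
    assert n_even(M) + n_odd(M) == max(M, 1)
```

The last assertion checks that orbifolding only splits the $M$ torus zero-modes (one constant mode for $M=0$) into parity eigenstates, without creating or destroying any.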
In the next section, we construct various DSB models on magnetized tori and orbifolds based on them. \section{Dynamical supersymmetry breaking} \label{sec:3} In this section, we construct DSB models on a variety of magnetized backgrounds. First, we consider simple configurations of magnetic fluxes leading to a gauge symmetry breaking $U(N)\rightarrow U(N_C)\times U(N_X)$, and show some specific configurations with which the resultant zero-modes contain certain DSB models such as the ISS model~\cite{Intriligator:2006dd} and others. In the ISS model, for example, SUSY is broken by the strong dynamics of $SU(N_C)$ gauge theory with $N_F$ flavors. In our magnetized model building, $N_C$ and $N_F$ are determined by the degeneracies of Abelian magnetic fluxes and the degeneracies of the bi-fundamental zero-modes, respectively. These models seem quite simple, but there appear some extra massless modes, which should be eliminated or decoupled somehow to obtain successful DSB. As we will see, orbifold projections are not available for such a purpose, and we have to assume some extrinsic effects to eliminate the extra fields in this simple class of models. In the second part of this section, we consider more structured flux configurations that cause a gauge symmetry breaking $U(N) \rightarrow U(N_C)\times U(N_X)\times U(N_Y)$. A great advantage of such extended configurations is that all the extra fields can be eliminated by a combination of magnetic fluxes and a certain orbifold projection, within the given framework of magnetized orbifolds. They are really promising, at least as long as we focus on the SUSY breaking sector. We finally discuss prospects of our DSB models in association with other sectors, such as the visible (MSSM) and other hidden (especially moduli stabilization) sectors.
\subsection{Models with $U(N) \rightarrow U(N_C)\times U(N_X)$}\label{sec:adj} \label{sec:3.1} \subsubsection{ISS-type models} \label{sec:3.1.1} In the first type of our model building, we try to realize the ISS model~\cite{Intriligator:2006dd}, that is, the magnetized background is required to derive $SU(N_C)$ SYM theory with $N_F$ massive fundamental flavors from a single 10D $U(N)$ SYM theory. The IR description of such a model is given by \begin{equation*} W= \lambda \phi_{in}\Phi^{ij}\bar\phi_j^n+\mu^2 \Phi^{ii}, \end{equation*} where $\Phi$ corresponds to the mesons and $\phi$, $\bar\phi$ to the dual quarks ($i,j=1,2,\ldots,N_F$ and $n=1,2,\ldots,N_C$). We can see that all the F-terms of $\Phi^{ij}$, $F_{\Phi_{ij}}\sim\mu^2\delta_{ij}+\lambda\phi_{in}\bar\phi_j^n$, cannot vanish simultaneously for $N_F>N_C$. This is the so-called rank-condition mechanism of SUSY breaking. In generic $SU(N_C)$ theories with $N_F$ flavors, SUSY breaking metastable vacua are realized within the range $N_C+1\leq N_F<\frac32N_C$. In particular, they can be long-lived when the quark mass scale is much smaller than the dynamical scale. We consider the configurations of magnetic fluxes which break the gauge symmetry as $U(N) \rightarrow U(N_C)\times U(N_X)$. For the moment, we take both the $U(N_C)$ and $U(N_X)$ gauge groups to be non-Abelian ($N_C,N_X\geq2$) for the sake of generality. Such magnetic fluxes are given by \begin{eqnarray} M^{(1)}&=&\begin{pmatrix} 0\times {\bm1}_{N_C}&0\\ 0&M\times {\bm1}_{N_X} \end{pmatrix},\quad M^{(2)}=\begin{pmatrix} 0\times {\bm1}_{N_C}&0\\ 0&-1\times {\bm1}_{N_X} \end{pmatrix},\nonumber\\ &&\hspace{6em}M^{(3)}=\begin{pmatrix} 0\times {\bm1}_{N_C}&0\\ 0&0\times {\bm1}_{N_X} \end{pmatrix},\label{eq:mag1} \end{eqnarray} where these matrices represent the $U(N_C+N_X)$ gauge space. This configuration preserves at least a 4D $\mathcal N=1$ SUSY with $\mathcal A^{(1)}/\mathcal A^{(2)}=M$ fixed for a positive value of $M$.
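The relation between the fluxes (\ref{eq:mag1}) and the area ratio can be verified entry by entry in the SUSY condition (\ref{eq:susy}); a minimal numerical sketch:

```python
from fractions import Fraction

def susy_sum(M1, M2, M3, A1, A2, A3):
    """Entry-wise evaluation of M^(1)/A^(1) + M^(2)/A^(2) + M^(3)/A^(3)
    for diagonal flux matrices, cf. Eq. (susy)."""
    return [m1 / A1 + m2 / A2 + m3 / A3 for m1, m2, m3 in zip(M1, M2, M3)]

M = 4  # any positive integer number of flux units
# diagonal entries (U(N_C) block, U(N_X) block) of Eq. (mag1)
M1, M2, M3 = [0, M], [0, -1], [0, 0]
A1, A2, A3 = Fraction(M), Fraction(1), Fraction(3)  # A1/A2 = M; A3 is free
assert susy_sum(M1, M2, M3, A1, A2, A3) == [0, 0]
```

Since $M^{(3)}$ vanishes here, $\mathcal A^{(3)}$ is left unconstrained by this particular background.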
The chirality projection caused by these magnetic fluxes eliminates the zero-modes of certain elements of the $U(N)$ adjoint chiral superfields, and we find for $M>0$, \begin{eqnarray*} \phi_1&=&\begin{pmatrix} \Xi_1&0\\ q&\Omega_1 \end{pmatrix},\quad \phi_2=\begin{pmatrix} \Xi_2&\tilde q\\ 0&\Omega_2 \end{pmatrix},\quad \phi_3=\begin{pmatrix} \Xi_3&0\\ 0&\Omega_3 \end{pmatrix}. \end{eqnarray*} The magnetized background (\ref{eq:mag1}) induces $M$ pairs of vector-like quarks ($q,\tilde q$) in the off-diagonal entries of $\phi_1$ and $\phi_2$. The diagonal entries, $\Xi_i$ and $\Omega_i$, correspond to $U(N_C)$ and $U(N_X)$ adjoint fields, respectively. The 10D SYM theory allows couplings between $\phi_i$'s only in the form $\phi_1 \phi_2 \phi_3$ in the $\mathcal N=1$ superpotential. When the Wilson lines for $U(N_X)$ in the third 2-torus are somehow generated, they lead to a nonvanishing VEV of $\Omega_3$ and then the mass term $\langle\Omega_3\rangle q\tilde q$ is generated for the quarks. Then, the ISS-type DSB would occur if the quark mass scale is smaller than the dynamical scale of the $SU(N_C)$ SYM theory. However, in order to realize the ISS model exactly, we need to further assume that the fluctuations of the adjoint fields $\Omega_i$ and $\Xi_i$ around their VEVs ($\langle\Omega_3\rangle \ne 0$, $\langle\Omega_{1,2}\rangle =\langle \Xi_i \rangle =0$ in the present case) are eliminated or decoupled from the DSB dynamics. Orbifold projections are not useful for such a purpose, because nonvanishing (continuous) Wilson lines are generically incompatible with the orbifold background. Here, we just assume these extra fields obtain heavy masses due to some extrinsic effects from, e.g., supergravity/string corrections. Under the above assumption, we have two gauge theories with massive quarks: $SU(N_C)$ SYM with $M\times N_X$ fundamental flavors and $SU(N_X)$ SYM with $M\times N_C$ fundamental flavors.
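This flavor counting can be tracked with simple bookkeeping; the sketch below assumes the standard rule summarizing Eqs.~(\ref{eq:zeroii}) and (\ref{eq:zeroij}): $\phi_i^{ab}$ has zero-modes iff $M_{ab}^{(i)}\geq0$ and $M_{ab}^{(j)}\leq0$ for $j\neq i$, with degeneracy $\prod_i \max(|M_{ab}^{(i)}|,1)$.

```python
def zero_modes(flux_a, flux_b):
    """Which phi_i contains a zero-mode in the (a, b) bi-fundamental, and with
    what degeneracy, given the diagonal fluxes of blocks a and b on the
    three 2-tori (assumed counting rule, see the lead-in)."""
    d = [fa - fb for fa, fb in zip(flux_a, flux_b)]
    deg = 1
    for x in d:
        deg *= max(abs(x), 1)
    return {i + 1: deg
            for i in range(3)
            if d[i] >= 0 and all(d[j] <= 0 for j in range(3) if j != i)}

M = 3
C = (0, 0, 0)    # U(N_C) block of Eq. (mag1)
X = (M, -1, 0)   # U(N_X) block
assert zero_modes(X, C) == {1: M}  # q: M copies, sitting in phi_1
assert zero_modes(C, X) == {2: M}  # q-tilde: M copies, sitting in phi_2
```

For vanishing flux differences the rule returns one constant mode in each $\phi_i$, reproducing the adjoint fields $\Xi_i$ and $\Omega_i$ on the diagonal blocks.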
In the case with \begin{eqnarray} N_C+1 &\leq& M \times N_X \ < \ \frac 32N_C, \label{eq:iss} \end{eqnarray} the ISS model is realized by the $SU(N_C)$ gauge theory. In this scenario, we have another constraint on the values of $N_C$ and $N_X$. The running of the $SU(N_X)$ gauge coupling must be milder than that of $SU(N_C)$, which leads to the constraint \begin{eqnarray} M\times N_X-3N_C &<& M\times N_C-3N_X \ \Leftrightarrow \ N_X < N_C. \label{eq:cond1} \end{eqnarray} While one can easily see that both conditions~(\ref{eq:iss}) and (\ref{eq:cond1}) cannot be satisfied with $M=1$, for $M \geq 2$ it becomes easier to fulfill them and we can find many successful choices, e.g., \begin{eqnarray} N_C &=& 3,\qquad N_X \ = \ 2,\qquad M \ = \ 2. \nonumber \end{eqnarray} When the extra $U(N_X)$ gauge theory is Abelian, that is, $N_X=1$, we can realize similar models much more easily, because the second condition~(\ref{eq:cond1}) is not required in this case. Thus, we can construct DSB models taking care of only the first condition~(\ref{eq:iss}). \subsubsection{Models without massive quarks} \label{sec:3.1.2} We have another scenario on the magnetized background~(\ref{eq:mag1}) where nonvanishing Wilson lines are not required for DSB. As we have noticed, a key ingredient of this background is the following coupling, \begin{equation} g\Omega_3q\tilde q =g\left(\langle\Omega_3\rangle +\tilde\Omega_3\right) q\tilde q, \label{eq:omg3sp} \end{equation} where $g$ is a coupling constant. In the previous model, we have assumed a nonvanishing VEV $\langle\Omega_3\rangle$ and the absence of its fluctuation $\tilde\Omega_3$ at low energy to realize the ISS-type DSB, which has only the mass term for the vector-like quarks in the superpotential. Alternatively, here we consider the case that $\tilde\Omega_3(=\Omega_3)$ is active while the Wilson line $\langle\Omega_3\rangle$ is vanishing.
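These counting conditions are easy to scan numerically; a small sketch, using the ISS metastable window $N_C < N_F < \frac32 N_C$ with $N_F = M\times N_X$:

```python
def viable(NC, NX, M):
    """ISS metastable window N_C < N_F < 3/2 N_C with N_F = M * N_X,
    combined with the hierarchy condition N_X < N_C of Eq. (cond1)."""
    NF = M * NX
    return NC < NF < 1.5 * NC and NX < NC

assert viable(3, 2, 2)  # the example quoted in the text
# no non-Abelian U(N_X) works with a single pair of quarks (M = 1):
assert not any(viable(NC, NX, 1)
               for NC in range(2, 60) for NX in range(2, 60))
```

The $M=1$ failure is immediate: the window would require $N_C < N_X$ while Eq.~(\ref{eq:cond1}) requires $N_X < N_C$.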
Without the Wilson lines, the higher-dimensional SYM theory does not produce any mass terms for $q$ and $\tilde q$ (as well as $\Omega_3$), at least at the leading order. This suggests that turning on a VEV $\langle\Omega_3\rangle \ne 0$ breaks some global symmetries of the higher-dimensional SYM theory on magnetized tori which prohibit the masses of the bi-fundamental (as well as adjoint) fields. Thus, in the following, we can regard our models as chiral theories, as long as we do not introduce continuous Wilson lines. This will become clearer in the next subsection. To derive a DSB model without massive quarks, let us consider a situation where we can ignore the block-diagonal entries of $\phi_i$ other than $\Omega_3$ at low energy (i.e., $\Omega_1$, $\Omega_2$ and $\Xi_i$ are decoupled) for simplicity. Again, this cannot be realized by orbifolding, because $\Omega_3$ and $\Xi_3$ have the same orbifold parity and both of them survive or vanish simultaneously under the orbifold projection. We should consider some extrinsic mechanisms to make the extra fields heavy, as in the previous models. When they are somehow decoupled, the superpotential contains only the above Yukawa coupling~(\ref{eq:omg3sp}). In $SU(N_C)$ SYM theory with $N_F$ fundamental flavors, for $N_C>N_F$, the Affleck-Dine-Seiberg (ADS) potential~\cite{Davis:1983mz,Affleck:1983mk} \begin{eqnarray} W_{\rm ADS} &=& C_{N_C,N_F}\left(\frac{\Lambda^{3N_C-N_F}} {{\rm det}\,\hat M}\right)^{1/(N_C-N_F)}, \nonumber \end{eqnarray} is obtained, where $\Lambda$ is the dynamical scale, the $N_F\times N_F$ matrix $\hat M$ is defined as $\hat M^i_{~j}\equiv q^{in} \tilde q_{nj}$, and $C_{N_C,N_F}$ are constants. Our magnetized model contains $SU(N_C)$ SYM with $M\times N_X$ fundamental flavors and $SU(N_X)$ SYM with $M\times N_C$ fundamental flavors.
We consider the case that the dynamics of the former non-Abelian gauge theory produces the above ADS potential, that is, $N_C>M\times N_X$. The total effective superpotential can be written in terms of the operator $\hat M$ as \begin{equation*} W_{\rm effective}=g \tr\Omega_3\hat M +C_{N_C,N_F}\left(\frac{\Lambda^{3N_C-N_F}} {{\rm det}\,\hat M}\right)^{1/(N_C-N_F)}. \end{equation*} This is almost the simplest DSB model found by Affleck, Dine and Seiberg~\cite{Affleck:1984xz}. This potential makes the operator $\hat M$ develop a nonvanishing VEV, $\langle \hat M\rangle\sim \Lambda^2$, and the resulting low-energy superpotential for $\Omega_3$ is \begin{eqnarray} W=g\Lambda^2\Omega_3 +W_0, \nonumber \end{eqnarray} which is just like the Polonyi model~\cite{Polonyi:1977pj}. When the extra gauge theory is non-Abelian, $N_X\geq2$, we again have to take care of the condition~(\ref{eq:cond1}) on $N_C$ and $N_X$. However, this is always satisfied when the ADS potential is generated, i.e., $N_C>M\times N_X$, for any positive value of $M$. As for the Abelian case $N_X=1$, such an extra constraint is of course not required. Thus, we can obtain a wide variety of this class of models, as well as of the previous ISS-type models discussed in Sec.~\ref{sec:3.1.1}. For $N_C=N_F$, the ADS potential is not generated. In this case, however, it is known that the strong dynamics induces chiral condensation, yielding a vacuum with $\det\langle \hat M\rangle\neq0$. Therefore the Yukawa coupling~(\ref{eq:omg3sp}) produces a tadpole term for $\Omega_3$, breaking SUSY. Thus, a DSB model can also be obtained for $N_C= M\times N_X$. Here, the consistency condition~(\ref{eq:cond1}) requires $M\geq 2$. \subsection{Models with $U(N) \rightarrow U(N_C) \times U(N_X) \times U(N_Y)$} \label{sec:3.2} So far, we have assumed that the extra adjoint fields are somehow decoupled.
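Explicitly, the resulting Polonyi-type superpotential breaks SUSY through the F-term of $\Omega_3$; a one-line check in global SUSY:

```latex
-F_{\Omega_3}^{\ast}=\frac{\partial W}{\partial\Omega_3}=g\Lambda^2\neq0,
\qquad
V=\left|F_{\Omega_3}\right|^2=\left|g\Lambda^2\right|^2>0,
```

so no supersymmetric vacuum exists at finite field values, and the SUSY breaking scale is set by $\sqrt{|F_{\Omega_3}|}\sim\sqrt{g}\,\Lambda$.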
We here propose another class of DSB models on magnetized orbifolds, where DSB successfully occurs within the given framework without requiring any extrinsic effects. We will find that more structured configurations of magnetic fluxes, which cause a gauge symmetry breaking $U(N) \rightarrow U(N_C) \times U(N_X) \times U(N_Y)$, lead to self-complete DSB models on $Z_2 \times Z'_2$ orbifolds, where all the extra unwanted fields are eliminated below the compactification scale. We first give an overview of this new class of models before showing a concrete configuration of the magnetized background. Let us consider the gauge symmetry breaking due to magnetized backgrounds as $U(N)\rightarrow U(N_C)\times U(N_X)\times U(N_Y)$. The field contents responsible for the DSB dynamics here are ($i\neq j\neq k\neq i$) \begin{eqnarray} \phi_i&=&\left(\begin{array}{c|c|c} ~~&~~&~~\\\hline & &S\\\hline & &\\ \end{array}\right),\quad \phi_j=\left(\begin{array}{c|c|c} ~~&\tilde Q&~~\\\hline & &\\\hline & &\\ \end{array}\right),\quad \phi_k=\left(\begin{array}{c|c|c} ~~&~~&~~\\\hline & &\\\hline Q & &\\ \end{array}\right), \label{eq:fcon} \end{eqnarray} where the three diagonal blocks represent the product gauge group $U(N_C)\times U(N_X)\times U(N_Y)$ in $U(N)=U(N_C+N_X+N_Y)$ adjoint matrices, and the off-diagonal blocks in $\phi_i$'s are chiral multiplets in the corresponding bi-fundamental representations. Every mass term for $S$, $\tilde{Q}$ and $Q$ is forbidden by the (unbroken) gauge symmetry. In order to avoid chiral anomalies in the $U(N_X)$ and $U(N_Y)$ gauge theories, we simply set $N_X=N_Y=1$ in the following. Even in this simple setup, the number of flavors can be controlled because the magnetic fluxes produce the degeneracy of zero-modes, enhancing the effective number of flavors.
Chiral superfields $S$, $Q$ and $\tilde Q$ have a Yukawa coupling in the superpotential, \begin{eqnarray} W &=& g SQ\tilde Q, \label{eq:sqq} \end{eqnarray} where $g$ expresses the effective coupling constant, which is given by an overlap integral of wavefunctions determined by magnetic fluxes and is calculable on magnetized tori (see~\cite{Cremades:2004wa,Abe:2008sx} for reviews). In accordance with the discussion in the previous subsection, for $N_C \geq N_F$, the $U(N_C)$ gauge dynamics forces the operator $\hat M\equiv Q\tilde Q$ to develop a nonvanishing VEV, breaking SUSY. \subsubsection{The essential structure} We here aim to realize a minimal setup, that is, $N_F$ pairs of quarks ($Q$, $\tilde Q$) and a singlet $S$ in $U(N_C)$ SYM theory. We require the degeneracy of $S$ to be one in order to avoid the presence of extra massless fields. The generation structure of $Q$ and $\tilde Q$ should be produced on a single 2-torus. Otherwise, the rank of their Yukawa matrix is reduced and some fields become irrelevant to the DSB dynamics. Let us suppose that it is produced on the first 2-torus, and denote the magnetic fluxes felt by $Q$, $\tilde Q$ and $S$ by $M_1^Q$, $M_1^{\tilde Q}$ and $M_1^S$, respectively, where the subscript labels the first 2-torus. Gauge invariance requires them to satisfy $M_1^Q+M_1^{\tilde Q}+M_1^S=0$. Furthermore, we find that only one of the three is positive and the others have to be negative. The reason is that the Yukawa coupling~(\ref{eq:sqq}) originates from the 10D gauge coupling $\phi_1\phi_2\phi_3$, and positive (negative) magnetic fluxes are required in order to produce zero-modes in $\phi_1$ ($\phi_{2,3}$) on the first 2-torus, which can be seen from Eqs.~(\ref{eq:zeroii}) and (\ref{eq:zeroij}). On a magnetized orbifold, $Q$, $\tilde Q$ and $S$ are assigned either even or odd parity under the $Z_2$ transformation.
The numbers of their zero-modes are determined by the magnetic fluxes $(M_1^Q,M_1^{\tilde Q},M_1^S)$ and their $Z_2$ parity. We show the relation between magnetic fluxes and the number of zero-modes on magnetized orbifolds in Table~\ref{tb:zeromode}. The $Z_2$ invariance of the Yukawa coupling~(\ref{eq:sqq}) allows us to consider three cases for their $Z_2$ parity assignments, namely (even-even-even), (odd-odd-even) or (even-odd-odd) for $(Q,\tilde Q,S)$. Note that (odd-even-odd) is equivalent to (even-odd-odd) under the renaming $(Q,\tilde Q) \leftrightarrow (\tilde Q,Q)$, so we exclude the former. We eventually find that only six patterns satisfy these conditions, which are shown in Table~\ref{tb:sixconf}. \begin{table}[h] \center \begin{tabular}{c|cc} &$Z_2$ parity of $(Q,\tilde Q,S)$ &$(M_1^Q,M_1^{\tilde Q},M_1^S)$\\\hline Pattern~1 & (even,\,even,\,even) & ($-n,\,n,\,0$) \\ Pattern~2 & (even,\,even,\,even) & ($-2n,\,2n+1,\,-1$) \\ Pattern~3 & (even,\,odd,\,odd) & ($-n,\,n+3,\,-3$) \\ Pattern~4 & (even,\,odd,\,odd) & ($-2n,\,2n+4,\,-4$) \\ Pattern~5 & (odd,\,odd,\,even) & ($-n,\,n,\,0$) \\ Pattern~6 & (odd,\,odd,\,even) & ($-2n-1,\,2n+2,\,-1$) \\ \end{tabular} \caption{The six allowed combinations of $Z_2$ parity assignment and magnetic fluxes $(M_1^Q,M_1^{\tilde Q},M_1^S)$ are listed, where $n$ is an arbitrary positive integer. } \label{tb:sixconf} \end{table} In order to produce the singlet $S$ without multiplicity, $|M|=0,1$ units of flux are allowed for the even-parity mode and $|M|=3,4$ for the odd-parity mode. The condition $M_1^Q+M_1^{\tilde Q}+M_1^S=0$ (one is positive and the other two are negative) severely restricts the values of $M_1^Q$ and $M_1^{\tilde Q}$, because the zero-mode numbers of $Q$ and $\tilde Q$ have to be equal for a successful DSB. Therefore, we can conclude that any other configurations are excluded. We find some differences among these six patterns.
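Table~\ref{tb:sixconf} can be cross-checked against the orbifold counting of Table~\ref{tb:zeromode}; a sketch using closed-form counts inferred from that table:

```python
def n_even(M):
    return abs(M) // 2 + 1            # even-parity zero-modes (table-inferred)

def n_odd(M):
    return max(0, (abs(M) - 1) // 2)  # odd-parity zero-modes (table-inferred)

count = {"even": n_even, "odd": n_odd}

def check(parities, fluxes):
    (pQ, pQt, pS), (MQ, MQt, MS) = parities, fluxes
    assert MQ + MQt + MS == 0                # gauge invariance of S Q Q-tilde
    assert count[pQ](MQ) == count[pQt](MQt)  # equal numbers of Q and Q-tilde
    assert count[pS](MS) == 1                # exactly one singlet S

for n in range(3, 12):
    check(("even", "even", "even"), (-n, n, 0))             # Pattern 1
    check(("even", "even", "even"), (-2*n, 2*n + 1, -1))    # Pattern 2
    check(("even", "odd", "odd"), (-n, n + 3, -3))          # Pattern 3
    check(("even", "odd", "odd"), (-2*n, 2*n + 4, -4))      # Pattern 4
    check(("odd", "odd", "even"), (-n, n, 0))               # Pattern 5
    check(("odd", "odd", "even"), (-2*n - 1, 2*n + 2, -1))  # Pattern 6
```

All six patterns pass: the fluxes sum to zero, $Q$ and $\tilde Q$ come in equal numbers, and $S$ appears exactly once.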
The first is the values of the coupling constants in $\lambda_{ij}SQ_i\tilde Q_j$. The Yukawa matrix $\lambda_{ij}$ is proportional to the identity matrix for Patterns~1 and 5. In the other cases, $\lambda_{ij}$ has nonvanishing off-diagonal entries, which can be calculated explicitly. The second difference is a constraint on the torus areas given by the SUSY preserving condition~(\ref{eq:susy}). In general, Patterns~2, 3, 4 and 6 yield characteristic values of the ratios $\mathcal A^{(1)}/\mathcal A^{(2)}$ and $\mathcal A^{(1)}/\mathcal A^{(3)}$, as one can see in Appendix~\ref{sec:app}. This might be particularly important in combination with the visible sector. We will discuss this at the end of this section. These six patterns allow us to construct several realistic DSB models within the given framework of magnetized orbifolds without any nontrivial assumptions, as will be shown below. \subsubsection{A self-complete model} We propose concrete DSB models with explicit configurations of magnetic fluxes and orbifolds in the whole extra compact space. As discussed above, let us suppose that the main structure of our DSB models is produced on the first 2-torus. The configurations on the other two 2-tori are determined so as to eliminate all the extra field contents other than $Q$, $\tilde Q$ and $S$ in $\phi_1$, $\phi_2$ and $\phi_3$ without affecting their generation structure realized on the first 2-torus. The $Z_2$ parity assignments and the magnetic fluxes on the first 2-torus are selected from the six patterns shown in Table~\ref{tb:sixconf}, and the magnetic fluxes on the second and third 2-tori are required to satisfy the SUSY preserving condition~(\ref{eq:susy}). In the following, we construct an illustrative model on the basis of Pattern~1. That is, $Q$, $\tilde Q$ and $S$ are assigned to the even-parity mode on the first 2-torus with $(M_1^Q,M_1^{\tilde Q},M_1^S)=(-n,\,n,\,0)$.
With the other five patterns, we can also realize similar models, which we show in Appendix~\ref{sec:app}. Let us consider the following magnetized background, \begin{eqnarray} M^{(1)}&=&\begin{pmatrix} 0&0&0\\ 0&M&0\\ 0&0&M \end{pmatrix},\quad M^{(2)}=\begin{pmatrix} 0&0&0\\ 0&-1&0\\ 0&0&0 \end{pmatrix},\quad M^{(3)}=\begin{pmatrix} 0&0&0\\ 0&0&0\\ 0&0&-1 \end{pmatrix},\label{eq:realmag1} \end{eqnarray} which breaks the $U(N)$ gauge symmetry down to $U(N_C) \times U(1)_X \times U(1)_Y$, while preserving $\mathcal N=1$ SUSY with $\mathcal A^{(1)}/\mathcal A^{(2)}=\mathcal A^{(1)}/\mathcal A^{(3)}=M$. We take the value of $M$ to be positive. In this case, the zero-mode contents (before orbifolding) are given by \begin{eqnarray*} \phi_1&=&\begin{pmatrix} \Xi_1&0& 0\\ \tilde Q'&\Xi'_1&0\\ Q&0&\Xi''_1 \end{pmatrix},\quad \phi_2=\begin{pmatrix} \Xi_2&\tilde Q&0\\ 0&\Xi'_2&0\\ 0&S'&\Xi''_2 \end{pmatrix},\quad \phi_3=\begin{pmatrix} \Xi_3&0&Q'\\ 0&\Xi'_3&S\\ 0&0&\Xi''_3 \end{pmatrix}. \end{eqnarray*} On these magnetized tori, we consider two $Z_2$ orbifold projections, i.e., a $Z_2 \times Z'_2$ orbifold. The first one, the $Z_2$ orbifolding, acts on the first and the second 2-tori as $(z_1,z_2,z_3)\rightarrow(-z_1,-z_2,z_3)$ with the projection operator \begin{equation*} P_{+--}=\begin{pmatrix} +&0&0\\ 0&-&0\\ 0&0&- \end{pmatrix}. \end{equation*} This operator successfully assigns the even parity to all of $Q$, $\tilde Q$ and $S$ as in Pattern~1, while eliminating $S'$ and all the diagonal entries of $\phi_1$ and $\phi_2$. The second one is the $Z'_2$ projection, acting on the second and the third 2-tori as $(z_1,z_2,z_3) \rightarrow (z_1,-z_2,-z_3)$ with the projection operator \begin{equation*} P_{+-+}=\begin{pmatrix} +&0&0\\ 0&-&0\\ 0&0&+ \end{pmatrix}, \end{equation*} which eliminates $Q'$, $\tilde Q'$ and all the diagonal entries of $\phi_3$. Consequently, the remaining zero-mode contents are exactly the ideal ones~(\ref{eq:fcon}).
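As a cross-check of the quoted area ratios, note that they follow if the SUSY preserving condition~(\ref{eq:susy}) takes the familiar block-wise form $\sum_{i}M_a^{(i)}/\mathcal A^{(i)}=0$ for each unbroken diagonal block $a$ (an assumed form here, since Eq.~(\ref{eq:susy}) is not reproduced in this excerpt). The second and third blocks of the background~(\ref{eq:realmag1}) carry fluxes $(M,-1,0)$ and $(M,0,-1)$, respectively, so
\begin{equation*}
\frac{M}{\mathcal A^{(1)}}-\frac{1}{\mathcal A^{(2)}}=0
\;\Longrightarrow\;
\frac{\mathcal A^{(1)}}{\mathcal A^{(2)}}=M,
\qquad
\frac{M}{\mathcal A^{(1)}}-\frac{1}{\mathcal A^{(3)}}=0
\;\Longrightarrow\;
\frac{\mathcal A^{(1)}}{\mathcal A^{(3)}}=M,
\end{equation*}
while the first block is unfluxed and gives no condition.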
All the extra fields have been completely eliminated by the combination of magnetic fluxes and orbifolding. The total degeneracy of $S$ is certainly one. As for $Q$ and $\tilde Q$, their degeneracy is counted as the resulting number of $Z_2$ even modes with $|M|$ units of flux, which can be read off from Table~\ref{tb:zeromode}. We have obtained the desired $SU(N_C)$ SYM theory with $N_F=1,2,3,\ldots$ flavors, which can satisfy the condition $N_C=N_F\geq2$ or $N_C>N_F$ for a successful DSB. We show similar DSB models with Patterns~2 to 6 in Appendix~\ref{sec:app}. One might consider that the following configuration of magnetic fluxes is better than Eq.~(\ref{eq:realmag1}), \begin{eqnarray*} M^{(1)}&=&\begin{pmatrix} 0&0&0\\ 0&M&0\\ 0&0&M \end{pmatrix},\quad M^{(2)}=\begin{pmatrix} 0&0&0\\ 0&-1&0\\ 0&0&-1 \end{pmatrix},\quad M^{(3)}=\begin{pmatrix} 0&0&0\\ 0&0&0\\ 0&0&0 \end{pmatrix}. \end{eqnarray*} The last 2-torus is vacant, and the condition $\mathcal A^{(1)}/\mathcal A^{(2)}=M$ ensures that SUSY is preserved. As for the gauge symmetry, $U(N)$ is broken down to $U(N_C)\times U(2)$. This $U(2)$ symmetry can be further broken to $U(1)_X \times U(1)_Y$ by orbifold projections. We again consider $Z_2 \times Z'_2$ orbifolds for this magnetized background. The first $Z_2$ acts on the first and the second 2-tori with the operator $P_{+--}$, and the second $Z'_2$ on the second and the third 2-tori with $P_{+-+}$. The surviving zero-modes are described as \begin{eqnarray} \phi_1&=&\begin{pmatrix} 0&0&0\\ 0&0&0\\ Q&0&0 \end{pmatrix},\quad \phi_2=\begin{pmatrix} 0&\tilde Q&0\\ 0&0&0\\ 0&0&0 \end{pmatrix},\quad \phi_3=\begin{pmatrix} 0&0&0\\ 0&0&S\\ 0&S'&0 \end{pmatrix}, \label{eq:fcon2} \end{eqnarray} and the full superpotential is found to be \begin{equation} W=gSQ\tilde Q, \label{eq:wpote} \end{equation} which has the same form as Eq.~(\ref{eq:sqq}).
Although there is an extra massless field $S'$, it would not affect the DSB dynamics because it is a singlet under $U(N_C)$ and has no coupling in the superpotential.\footnote{We expect that this $S'$ can get a mass at loop-levels after DSB, because $Q$, $\tilde Q$ and $S$ have $U(1)_X$ gauge charges.} In this model, the zero-modes of both $S$ and $S'$ have no multiplicity, while the zero-mode degeneracies of $Q$ and $\tilde Q$ are the same as in the previous model. Thus, the strong dynamics of the $SU(N_C)$ gauge theory can generate DSB depending on the values of $N_C$ and $M$, without any nontrivial assumptions about extra field contents. This can be another self-sustained DSB model. Although this model contains a decoupled massless field $S'$, it has a clear advantage over the previous one. There are no magnetic fluxes on the third 2-torus, and thus its area $\mathcal A^{(3)}$ is not constrained by the SUSY conditions. This can be helpful for associating this model with the other sectors, as we will discuss in the next subsection. \subsection{Comments on the association with other sectors} \label{sec:3.3} We have constructed models for a SUSY breaking (hidden) sector, which must be combined with the MSSM (visible) sector and the other phenomenologically/cosmologically required sectors such as the moduli stabilization sector. In particular, when we consider a moduli stabilization mechanism based on non-perturbative effects such as gaugino condensations, like the Kachru-Kallosh-Linde-Trivedi (KKLT) scenario~\cite{Kachru:2003aw}, one or more strong gauge theories in the hidden sector are required for the non-perturbative dynamics. There are some key issues for combining these sectors altogether. Our models are based on SYM theories compactified on magnetized tori with/without orbifolds. The DSB models shown in subsection~\ref{sec:adj} are constructed without orbifolding, and thus all the associated sectors such as the visible sector must also be constructed without orbifolding.
On the other hand, the other DSB models on orbifolds have to be combined with the visible and the other sectors all constructed on the same orbifold. As promising candidates for the visible sector, realistic flavor structures of MSSM-like models on magnetized tori~\cite{Abe:2012fj} and orbifolds~\cite{Abe:2008sx,Abe:2009vi} have been derived so far. It is known that they are drastically different from each other. Therefore, we expect that models with or without orbifolding will be distinguishable phenomenologically. The values of the higher-dimensional gauge coupling $g$, the torus areas $\mathcal A^{(i)}$ and the complex structures $\tau_i$ are universal for all the sectors derived from the single higher-dimensional SYM theory. Thus, we have to choose common values for all sectors to be consistent with each other. We naively expect that most of these values are strongly constrained in the visible sector. First, the gauge coupling $g$ should be determined as follows. The 4D effective gauge coupling constant at the compactification scale, which is roughly given by a product of $g$ and the volume of the extra compact space, must be consistent with the experimental data in the visible sector, i.e., the observed values of the standard model (SM) gauge couplings. For example, if we consider the MSSM for the visible sector, it automatically leads to a unified value of the three SM gauge couplings at around $10^{16}$ GeV, which is usually identified with the compactification scale, and the 4D effective gauge coupling can be fixed by the unified value. Next, the complex structures of the tori are very important degrees of freedom to control the hierarchical structure of Yukawa couplings in the visible sector. Their values should be set to realize the quark and lepton masses and mixing angles~\cite{Abe:2012fj,Abe:2014vza}.
Finally, the configurations of magnetic fluxes in the visible sector are extremely limited in order to realize the three-generation structure of quarks and leptons, and the ratios of the three torus areas, $\mathcal A^{(1)}/\mathcal A^{(2)}$ and/or $\mathcal A^{(1)}/\mathcal A^{(3)}$, are determined through the SUSY preserving conditions depending on the flux configuration. We remark that these constraints on parameters from the visible sector inevitably affect the model building for hidden sectors. Indeed, the DSB models shown in this paper also restrict the ratios $\mathcal A^{(1)}/\mathcal A^{(2)}$ and/or $\mathcal A^{(1)}/\mathcal A^{(3)}$, which must be consistent with the constraints from the visible sector. Therefore, the existence of unconstrained parameters in each sector is a great advantage when we construct the whole system as a combination of independently constructed visible and hidden sectors. Note that some of our DSB models with a vanishing flux on the third 2-torus restrict only one of the above two ratios. The models with two magnetized 2-tori (or even with a single magnetized 2-torus) among the three are interesting from another point of view. We expect that our magnetized models based on 10D SYM theories would be completed by being embedded into magnetized D9-brane systems, while the economically fluxed models have the potential to be compatible with D7-branes (or even D5-branes).\footnote{ It is argued that lower-dimensional D-branes may be derived from a magnetized D-brane in higher dimensions with an infinite number of magnetic fluxes~\cite{Cremades:2004wa}. The effective field theory of such lower-dimensional branes can be derived based on such an argument~\cite{Abe:2015jqa}.} When we construct the whole system by combining our DSB sector with certain visible and other sectors, we also have to pay attention to the direct couplings among them.
All the sectors should be embedded into a single $U(N)$ gauge theory if we regard our models as D-brane models with a single stack. On the other hand, with multi-stacks of D-branes (e.g., D3/D7 or D5/D9 systems\footnote{The superfield formulation to describe such mixed D-brane systems was also constructed~\cite{Abe:2015jqa}.}), we can start from a product of multiple $U(N)$ gauge groups. In general, there exist bi-fundamental fields charged under two different sectors, depending on the configurations of magnetic fluxes and orbifolding. In particular, such bi-fundamental fields charged under the SM gauge groups are phenomenologically dangerous in many cases. We should also require that the strong dynamics of the DSB and moduli stabilization sectors do not disturb each other through light fields charged under both sectors. Although these bi-fundamental fields are troublesome in generic cases, vector-like fields charged under both the MSSM and DSB sectors can be interesting, because they behave as messenger fields which mediate SUSY breaking contributions to the visible sector. In the previous analyses~\cite{Abe:2012fj} of magnetized models, it has mostly been assumed that the SUSY spectra are dominated by the moduli-mediation and/or anomaly-mediation~\cite{Randall:1998uk,Giudice:1998xp}, which depends on how the moduli fields are stabilized in association with the DSB sector. For example, in the KKLT-like moduli stabilization scenarios~\cite{Kachru:2003aw} with some concrete DSB sectors~\cite{Dudas:2006gr,Abe:2006xp}, contributions from the above two mediations can be comparable, and the so-called mirage mediation scenario~\cite{Choi:2004sx,Choi:2005ge,Endo:2005uy,Choi:2005uz} is realized. Assuming such a mediation scenario, the SUSY spectrum was studied in concrete magnetized models of the visible sector and some generic features were obtained~\cite{Abe:2012fj}.
It is then interesting to employ one of our DSB models as the concrete hidden sector in this kind of scenario. The previous results can be altered by gauge-mediated contributions due to the appearance of messengers in the bi-fundamental representation between the MSSM and DSB sectors. We will study them elsewhere. \section{Summary} \label{sec:4} We have studied DSB models within the framework of 10D SYM theories compactified on magnetized tori and orbifolds. First, aspects of DSB on magnetized tori/orbifolds have been shown with the simple configurations of magnetic fluxes which cause the gauge symmetry breaking $U(N) \rightarrow U(N_C)\times U(N_X)$, by assuming (non)vanishing VEVs of adjoint fields and (non)decoupling of their fluctuations from the DSB dynamics around the VEVs. Then, in order for the strong dynamics of $SU(N_C)$ SYM theories with $N_F$ flavors to trigger a dynamical SUSY breaking, certain relations between $N_C$ and $N_F$ are required. It is remarkable that the number of flavors $N_F$ can be controlled by magnetic fluxes in our model; in other words, the background flux configuration determines whether DSB occurs or not. At the same time, however, we also find that the decoupling of some extra adjoint fields, which could not be eliminated by orbifold projections in the model building procedure, is necessarily assumed in this simple class of models. Otherwise, their existence could spoil the successful DSB and/or would already be ruled out by phenomenological/cosmological observations. In the case that some extrinsic mechanisms realize the assumed situations, these DSB models on magnetized tori are available for further model building, while the decoupling of extra adjoint fields is in general a challenging issue in model building based on SYM theories in higher-dimensional spacetime.
Next, we have proposed another class of DSB models on orbifolds by extending the previous configurations of magnetic fluxes which preserve $U(N_C) \times U(N_X)$ symmetry to those yielding $U(N_C) \times U(N_X) \times U(N_Y)$, especially to the simplest one, $U(N_C) \times U(1)_X \times U(1)_Y$. We have searched for flux configurations such that the $SU(N_C)$ SYM theory contains $N_F$ vector-like pairs $(Q,\,\tilde Q)$ with nonvanishing Yukawa couplings to a singlet $S$. As a result, we have found six patterns of suitable configurations. On the basis of one of these six patterns, we demonstrated the construction of a self-complete DSB model on a $Z_2 \times Z'_2$ orbifold, where all the extra fields below the compactification scale are eliminated by the combination of chiral projections due to magnetic fluxes and the orbifold projections. In Appendix~\ref{sec:app}, we also show that the other five patterns allow us to construct similar feasible models. Therefore, we conclude that, in this class of magnetized orbifold models, we can realize DSB without relying on any extrinsic mechanisms to eliminate extra fields. Furthermore, we have studied another choice of magnetic fluxes, where only two of the three 2-tori are fluxed. Although this permits the presence of one more singlet $S'$ without Yukawa couplings to quarks, it of course does not disturb the DSB dynamics, and this setup can serve as another self-sustained DSB model. It is remarkable that the existence of an unfluxed 2-torus can be an advantage when we combine the DSB (hidden) sector with the MSSM (visible) sector~\cite{Abe:2015jqa}. As discussed in the previous section, our DSB models should be embedded into a larger unified system compatible with the MSSM sector and the others, e.g., the moduli stabilization sector. This must be an important task from both theoretical and phenomenological points of view. We expect that some of the six patterns we found, and their extensions, will be suitable for such embeddings.
Finally, it is an interesting possibility that such a whole system is realized by magnetized D-branes. In this case, we should verify the stringy consistency of the full system containing all the sectors in order to complete our scenario on magnetized tori/orbifolds. These remain as future work. \subsection*{Acknowledgements} The authors would like to thank Yuji Omura and Tokihiro Watanabe for valuable discussions which initiated this work. H.~A. was supported in part by the Grant-in-Aid for Scientific Research No.~25800158 from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) in Japan. T.~K. was supported in part by the Grant-in-Aid for Scientific Research No.~25400252 from the MEXT in Japan.
\section{Introduction} Let $\Gamma$ be a cofinite discrete subgroup of ${\hbox{PSL}_2( {\mathbb R})} $ and consider the function \begin{equation} \Psi_\Gamma(X)=\sum_{N(P)\leq X}\Lambda(P), \end{equation} where the sum is over closed geodesics $P$ on ${\Gamma\backslash\mathbb H}$ with norm $N(P)\leq X$, and $\Lambda(P)=\log(N(P_0))$ where $P$ is a power of the primitive geodesic $P_0$. The prime geodesic theorem states that for a cofinite discrete subgroup $\Gamma$ of ${\hbox{PSL}_2( {\mathbb R})} $ we have \begin{equation}\label{best-known} \Psi_\Gamma(X)=X+O(X^{\delta+{\varepsilon}}) \end{equation} for some $\delta<1$. There are several non-trivial bounds for $\delta$: bounds for general groups, e.g. \cite{Huber:1960}, \cite[Thm 10.5]{Iwaniec:2002a}, and stronger bounds for congruence groups \cite{Iwaniec:1984a}, \cite{LuoSarnak:1995}, \cite{Cai:2002}, with the current record due to Soundararajan and Young \cite{SoundararajanYoung:2013}, who proved \begin{equation}\label{sy-bound} \delta=2/3+\theta/6,\end{equation} where $\theta$ is a subconvexity exponent for quadratic $L$-functions in the conductor-aspect. See also \cite{BalkanovaFrolenkov:2019}. Concerning the value of $\theta$, Conrey and Iwaniec have proved that $\theta=1/6$ is admissible \cite[Cor. 1.5]{ConreyIwaniec:2000}, and the Lindel\"of hypothesis predicts that $\theta=0$ is admissible. For the rest of the paper we let $\Gamma={\hbox{PSL}_2( {\mathbb Z})} $. One reason for the interest in the error term in this case is the striking relation between $\Psi_{\Gamma}$ and averages of class numbers of real quadratic fields ordered by the size of the regulator, see \cite{Sarnak:1982a}. Both \cite{SoundararajanYoung:2013} and \cite{BalkanovaFrolenkov:2019} use the relation to real quadratic fields.
This relation is encoded in the Zagier $L$-series \begin{equation}\mathscr{L}_n(s)= \frac{\zeta(2s)}{\zeta(s)}\sum_{q=1}^\infty \frac{1}{q}\sum_{\substack{1\leq r\leq 2q\\r^2\equiv n\, (4q)}}1, \quad \textrm{ when }\Re(s)>1.\end{equation} This series admits meromorphic continuation to the entire complex plane. If $n$ is non-zero and not a perfect square, it is an entire function. We refer to \cite[\S 4]{Zagier:1977}, \cite[Sec. 4]{BalkanovaFrolenkov:2018e} for some of its basic properties. There is a beautiful identity (see \cite[Prop 2.2]{SoundararajanYoung:2013}) linking $\Psi_\Gamma$ and the Zagier $L$-series, stating that \begin{equation} \Psi_\Gamma(X)=2\sum_{3\leq n\leq X}\sqrt{n^2-4}\mathscr{L}_{n^2-4}(1). \end{equation} A relation of this kind was first discovered by Kuznetsov \cite[Eq. 7.2]{Kuznetsov:1978a}. In this paper we investigate further the relation between these two objects. In \cite{BalkanovaFrolenkov:2017b} Balkanova and Frolenkov studied smooth averages of $\mathscr{L}_{n^2-4}(s)$. Their work suggests that $\mathscr{L}_{n^2-4}(1/2+it)$ has density function \begin{equation*}m_t(x):=\begin{cases} \frac{1}{2\zeta(3/2)}\left(\log(x^2-4)-\frac{\pi}{2}+3\gamma -2\frac{\zeta'(3/2)}{\zeta(3/2)}-\log 8\pi\right) & \text{ if }t=0\\ \frac{\zeta(1+2it)}{\zeta(3/2+it)}+\frac{2^{1/2+it}\sin(\pi (1/2+it)/2)}{\pi^{it}}\frac{\zeta(it)}{\zeta(3/2-it)}\Gamma(it)(x^2-4)^{-it}& \text{ if }t\neq0. \end{cases}\end{equation*} Indeed, a modification of their work (see Section \ref{sec:asymptotics}) shows the following result: \begin{thm}\label{asymptotic}The following asymptotic formula holds \begin{equation}\label{compromise} \sum_{2<n\leq X}\mathscr{L}_{n^2-4}(1/2+it)=\int_{2}^{X}m_t(u)du+O(X^{\alpha+{\varepsilon}}) \end{equation} uniformly for $\abs{t}\leq X^{\varepsilon}$. Here $\alpha=2(1+\theta)/3$, where $\theta$ is a subconvexity exponent for quadratic $L$-functions in the conductor-aspect.
\end{thm} In \cite[p.6]{BalkanovaFrolenkov:2017b} it was noted that if, for $\abs{t}\leq X^{\varepsilon}$, the numbers $\mathscr{L}_{n^2-4}(1/2+it)$ are bounded on average in windows of the form $X<n\leq X+T$ for $T\gg X^{2/3}$, then the error-term estimate in the prime geodesic theorem \eqref{best-known} holds with $\delta=2/3$. In this paper we go further and investigate what happens if we have good asymptotics in \eqref{compromise}: \begin{thm}\label{conditional-bound} Assume that \eqref{compromise} holds for some $\alpha >0$. Then $\delta=1/2+\alpha/4$ is valid, i.e., \begin{equation} \Psi_\Gamma(x)=x+O(x^{1/2+\alpha/4+{\varepsilon}}). \end{equation} \end{thm} Using the error term from Theorem \ref{asymptotic} recovers the best known bound \eqref{best-known} in the prime geodesic theorem. Considering whether this can be improved, it is tempting to speculate what the best possible error term in \eqref{compromise} is. We show below that if $\alpha=1/2$ is admissible then it is optimal. Hence the limit of what might be achieved using Theorem \ref{conditional-bound} is $$\delta=5/8.$$ This is far from the conjectured $\delta=1/2$, but spectacularly better than what we know even on the generalized Lindel\"of hypothesis, which would only give $\delta=2/3$. We note that the exponent 5/8 has been proven to hold on average by Cherubini and Guerreiro \cite[Thm. 1.4]{CherubiniGuerreiro:2018}. Very recently this was improved to 7/12 by Balog, Bir{\'o}, Harcos, and Maga \cite[Thm. 1]{BalogBiroHarcosMaga:2019}. To see that $\alpha=1/2$ would indeed be optimal we prove the following $\Omega$-result: \begin{thm}\label{omega}We have \begin{equation} \sum_{2<n\leq X}\mathscr{L}_{n^2-4}(1/2)=\int_{2}^{X}m_0(u)du+\Omega(X^{1/2}) \end{equation} as $X\to\infty$. \end{thm} As mentioned above it is expected that \begin{equation}\label{conjecture1} \Psi_\Gamma(x)=x+O(x^{1/2+{\varepsilon}}). \end{equation} This would indeed be optimal (see \cite[Theorem 3.8, p. 477]{Hejhal:1983a}).
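The exponent bookkeeping here is a one-line computation. Inserting the value $\alpha=2(1+\theta)/3$ from Theorem \ref{asymptotic} into Theorem \ref{conditional-bound} gives
\begin{equation*}
\delta=\frac12+\frac{\alpha}{4}=\frac12+\frac{1+\theta}{6}=\frac23+\frac{\theta}{6},
\end{equation*}
which is exactly the Soundararajan--Young exponent \eqref{sy-bound}. On the Lindel\"of hypothesis $\theta=0$ this gives $\delta=2/3$, while the optimal value $\alpha=1/2$ permitted by Theorem \ref{omega} would give $\delta=1/2+1/8=5/8$.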
Petridis and Risager proposed the following conjecture supporting the correctness of \eqref{conjecture1}: For any ${\varepsilon}>0$ and $X\geq 2$ we have \begin{equation}\label{conjecture2} \sum_{0<t_j\leq T} X^{it_j}\ll T(TX)^{\varepsilon}, \end{equation} where $\lambda_j=1/4+t_j^2$ are the eigenvalues of $\Delta$. Here $\Delta$ denotes the automorphic hyperbolic Laplacian for ${\hbox{PSL}_2( {\mathbb Z})} $. Iwaniec \cite[Lemma 1]{Iwaniec:1984a} showed that for $1\le T\le X^{1/2}\log^{-2}X$ the error term in the prime geodesic theorem can be expressed as follows \begin{equation}\label{PrimeGeodesic to spec.sum} \Psi_\Gamma(X)-X=2X^{1/2}\Re\left(\sum_{0<t_j\le T}\frac{X^{it_j}}{1/2+it_j}\right)+ O\left(\frac{X}{T}\log^2 X\right). \end{equation} Using summation by parts on the sum we easily see that the conjecture \eqref{conjecture2} implies \eqref{conjecture1}. The following additively twisted Selberg--Linnik conjecture was suggested by Iwaniec \cite[p.139]{Iwaniec:1984a}, \cite[p.189]{Iwaniec:1984}: For any ${\varepsilon}>0$ and $C, D, n \geq 1$ we have \begin{equation}\label{conjecture3} \sum_{c\leq C}\frac{e(D/c)}{c}S(n,n,c)\ll (nCD)^{\varepsilon}. \end{equation} We show that this conjecture is stronger than \eqref{conjecture2}: \begin{prop}\label{iw>>pr} Iwaniec's conjecture \eqref{conjecture3} implies the conjecture \eqref{conjecture2} on the spectral exponential sum. \end{prop} \begin{rem} The paper is organized as follows. In Section \ref{sec:asymptotics} we prove Theorem \ref{asymptotic} extending the method from \cite{BalkanovaFrolenkov:2017b}. Then in Section \ref{sec:conditional} we combine techniques from \cite{Iwaniec:1984a}, \cite{LuoSarnak:1995} and \cite{BalkanovaFrolenkov:2019a} to prove Theorem \ref{conditional-bound}. In Section \ref{sec:omega-results} we prove the omega result in Theorem \ref{omega}.
We do so by first showing that it suffices to show omega results for a smoothed problem, and then using an expansion from \cite{BalkanovaFrolenkov:2017b} combined with a lemma due to Ivi\'c and Motohashi \cite{IvicMotohashi:1990a}, to prove the desired result. Maybe surprisingly, in order to make the argument work we need the existence of a symmetric square $L$-function with non-vanishing central value. This is ensured by an asymptotic formula for the first moment of symmetric square $L$-functions, see \cite[Theorem 7.1.1]{Ng:2016a}, \cite{Tang:2012} and \cite{Balkanova:2019}. \end{rem} \section{Asymptotics for the average over $\mathscr{L}_{n^2-4}(1/2+it)$}\label{sec:asymptotics} In this section we prove Theorem \ref{asymptotic}. For $t=0$ the result was proved in \cite[Theorem 1.3]{BalkanovaFrolenkov:2017b}, the only difference being that the proof of \cite[Theorem 1.3]{BalkanovaFrolenkov:2017b} identified the asymptotics of the integral in the main term. The case $t\neq 0$ is similar. In particular, the main term can be evaluated using the proof of \cite[Theorem 6.3]{BalkanovaFrolenkov:2017b}. In order to estimate the error term, we follow the proof of \cite[Theorem 7.4]{BalkanovaFrolenkov:2017b}. We use the same notation as in \cite[Theorem 7.4]{BalkanovaFrolenkov:2017b}. To this end, we define $\omega(x)$ to be a smooth characteristic function of the interval $(X,2X)$ with the length of smoothing being equal to $T$. We assume also that $\omega^{(k)}(x)\ll_k T^{-k}$. Note that we assume that $X^{{\varepsilon}_0}<T<X^{1-{\varepsilon}_0}$ since the final choice of $T$ will be $T=X^{2/3-4\theta/3}$. 
Our first step is to estimate the function $h_1(\omega;1/2+it;r)$ defined as follows (see \cite[Lemma 5.2]{BalkanovaFrolenkov:2017b}): \begin{multline}\label{eq:hwsr} h_1(\omega;s;r)=\int_{0}^{\infty}\frac{\omega(x)}{2x^s}\frac{\sin{\pi(s/2-1/2+ir)}}{\sin{(\pi i r)}}\Gamma(1/2+s/2+ir)\\ \times \frac{\Gamma(s/2+ir)}{\Gamma(1+2ir)}\left( \frac{x}{2}\right)^{-2ir} F(s/2+ir,s/2+1/2+ir,1+2ir;4/x^2)dx. \end{multline} We need to estimate $h_1(\omega;s;r)$ for both positive and negative $r$, but using that $ h_1(\omega; s, r)=\overline {h_1(\omega; \overline s, -r)}$ we may assume that $r>0$. Assume first that $r\ll X^{3{\varepsilon}}.$ In this case we approximate the hypergeometric function in \eqref{eq:hwsr} by 1 using its series representation \begin{equation*} F(s/2+ir,s/2+1/2+ir,1+2ir;4/x^2)=1+O\left(\frac{1+\abs{t/2+r}^2}{(1+\abs{r})x^2}\right), \end{equation*} for $x$ sufficiently large (see e.g. \cite[p. 38]{Good:1981b}). Estimating all other factors in \eqref{eq:hwsr} in absolute value, we obtain \begin{equation}\label{hestimate1} h_1(\omega;s;r)\ll \frac{\sqrt{X}}{\abs{r}^{1/2}}. \end{equation} Now assume that \begin{equation}\label{secondrange}r\gg X^{3{\varepsilon}}>t^3.\end{equation} In this case we can use the integral representation \cite[15.6.1]{OlverLozierBoisvertClark:2010} for the hypergeometric function in order to apply the saddle point method. Take $\alpha:=t/r\ll X^{-2{\varepsilon}}$ and $Z:=x^2/4$. Using \cite[15.6.1]{OlverLozierBoisvertClark:2010} we show that \begin{multline*} h_1(\omega;1/2+2it;r)= \frac{\sin{\pi(-1/4+i(r+t))}}{2^{3/2+2it}\sin{(\pi i r)}}\frac{\Gamma(1/4+i(r+t))}{\Gamma(1/4+i(r-t))}\times\\ \int_{0}^{\infty}\omega(x)\int_{0}^{1} \frac{y^{-1/4+i(r+t)}(1-y)^{-3/4+i(r-t)}}{(Z-y)^{1/4+i(r+t)}}dydx.
\end{multline*} Consequently, bounding the factor in front of the integral by Stirling's asymptotics we find \begin{equation}\label{hest2} h_1(\omega;1/2+2it;r)\ll \left|\int_{0}^{\infty}\omega(x)\int_{0}^{1} \frac{y^{-1/4}(1-y)^{-3/4}}{(Z-y)^{1/4}}\exp(irf(\alpha,Z,y))dydx\right|, \end{equation} where \begin{equation}\label{fdef} f(\alpha,Z,y):=(1+\alpha)\log y+(1-\alpha)\log(1-y)-(1+\alpha)\log(Z-y). \end{equation} For simplicity, let us denote $f(y):=f(\alpha,Z,y)$ and the integral over $y$ in \eqref{hest2} by $I(\alpha,Z)$. For $X$ sufficiently large there is only one saddle point of the function $f(y)$ (the solution of the equation $f'(y)=0$ ) in $[0,1]$ and it is located at \begin{equation}\label{fsadle} y_0=\frac{Z-(Z^2-Z(1-\alpha^2))^{1/2}}{1-\alpha}=\frac{Z(1+\alpha)}{Z+(Z^2-Z(1-\alpha^2))^{1/2}}. \end{equation} We remark that since $Z$ is large and $\alpha$ is small, the saddle point is located near the point $1/2$. To localize the saddle point, we introduce the smooth partition of unity: \begin{equation*} \beta_1(y)+\beta_2(y)+\beta_3(y)=1 \end{equation*} such that $\beta_1(y)=0$ if $y>2\delta$, $\beta_3(y)=0$ if $y<1-2\delta$, $\beta_2(y)\neq 0$ if $\delta<y<1-\delta$ for some small fixed constant $\delta$. Accordingly, \begin{equation*} I(\alpha,Z)=\sum_{j=1}^{3}I_j(\alpha,Z), \end{equation*} where \begin{equation*} I_{j}(\alpha,Z)=\int_{0}^{1}\frac{y^{-1/4}(1-y)^{-3/4}}{(Z-y)^{1/4}}\exp(irf(\alpha,Z,y))\beta_j(y)dy. \end{equation*} Since the saddle point does not belong to the support of $\beta_1(y)$ and $\beta_3(y)$, integrating by parts $I_{1}(\alpha,Z)$ and $I_{3}(\alpha,Z)$ $n$ times, we show that \begin{equation}I_{1}(\alpha,Z)+I_{3}(\alpha,Z)\ll_n \frac{1}{Z^{1/4}r^n}. 
\end{equation} To deal with $I_{2}(\alpha,Z)$, we apply the saddle point method in the form of \cite[Theorem 1.2]{McKeeSunYe:2017} (see also \cite[Prop 8.2]{BlomerKhanYoung:2013}), getting that \begin{multline}\label{eq:I2(alpha,Z)} I_{2}(\alpha,Z)=\frac{y_0^{-1/4}(1-y_0)^{-3/4}}{(Z-y_0)^{1/4}}\frac{\exp(irf(y_0))}{\sqrt{rf''(y_0)}} \left(c_0+\sum_{j=1}^{n}\frac{c_j(\alpha, Z)}{r^j} \right)\\+O(Z^{-1/4}r^{-n-1}), \end{multline} where $c_0=\sqrt{2\pi}\exp(\pi i/4)$ and $c_j(\alpha,Z)\ll 1$ for $j\geq 1$. We only describe how to estimate the contribution of the main term in \eqref{eq:I2(alpha,Z)}. All other terms can be handled in the same way. After some simplifications, we obtain \begin{equation}\label{hest3} h_1(\omega;1/2+2it;r)\ll\frac{1}{r^{1/2}} \left|\int_{0}^{\infty}\frac{\omega(x)}{x^{1/2}}C(x,\alpha)\exp(irf(y_0))dx\right|+\frac{1}{Z^{1/4}r^A}, \end{equation} where \begin{equation} C(x,\alpha)=\frac{y_0^{-1/4}(1-y_0)^{-3/4}}{(1-y_0/Z)^{1/4}\sqrt{f''(y_0)}}. \end{equation} This function is bounded and smooth in $x$. In order to bound the integral we need to analyze the oscillatory part $\exp(irf(y_0))$. Using \eqref{fsadle} we write \begin{equation} y_0=\frac{Z(1+\alpha)}{d_1},\quad 1-y_0=\frac{d_2}{d_1},\quad Z-y_0=\frac{d_2}{1-\alpha} \end{equation} with \begin{align} d_1=&Z+(Z^2-Z(1-\alpha^2))^{1/2}\\d_2=&(Z^2-Z(1-\alpha^2))^{1/2}-\alpha Z\\\overline {d_2}=&(Z^2-Z(1-\alpha^2))^{1/2}+\alpha Z. \end{align} Using \eqref{fdef} we can therefore write \begin{multline} f(y_0)=(1+\alpha)\log(1-\alpha^2)+(1+\alpha)\log Z-2\log(d_1)-2\alpha\log(d_2). \end{multline} Using \begin{equation} d_2\overline d_2=(1-\alpha^2)Z(Z-1) \end{equation} we deduce \begin{multline}\label{f(y0)} f(y_0)=(1-\alpha)\log(1-\alpha^2)+(1-\alpha)\log Z\\-2\alpha\log(Z-1) -2\log d_1+2\alpha\log(\overline{d_2}). \end{multline} In order to simplify the expression for $\exp(irf(y_0))$, we expand the summands of $f(y_0)$ in power series with respect to $r$.
For example, \begin{equation*} r(1-\alpha)\log(1-\alpha^2)=r(-\alpha^2+\alpha^3-\ldots)=-\frac{t^2}{r}+\frac{t^3}{r^2}-\ldots \end{equation*} since $\alpha=t/r$. As a result, \begin{equation}\label{f simpify1} \exp(ir(1-\alpha)\log(1-\alpha^2))=1-\frac{it^2}{r}+\ldots \end{equation} By \eqref{secondrange} we can take sufficiently many terms in the series representation and we obtain a negligibly small error term. All remaining terms can be estimated in the same way and will be smaller. Therefore, it is enough to replace $\exp(ir(1-\alpha)\log(1-\alpha^2))$ by 1. Note further that \begin{align*} \frac{d_1}{Z+\sqrt{Z^2-Z}}&=\frac{Z+\sqrt{Z^2-Z(1-\alpha^2)}}{Z+\sqrt{Z^2-Z}}\\ &= 1+\frac{\sqrt{Z^2-Z}}{Z+\sqrt{Z^2-Z}}\left(\sqrt{1+\frac{\alpha^2Z}{Z^2-Z}}-1\right) \end{align*} as is easily checked. Note that the right-hand side is close to 1. Applying $e^{-2ir\log(\cdot)}$ to this expression, Taylor expanding $\sqrt{1+\frac{\alpha^2Z}{Z^2-Z}}$, and then Taylor expanding the exponential of the term close to 1, we find that \begin{align}\label{f simpify2} \exp\left(-2ir\log d_1 \right)= & \exp\left(-2ir\log\left(Z+\sqrt{Z^2-Z}\right)\right) \\ & \cdot \left(1+\frac{\sqrt{Z^2-Z}}{Z+\sqrt{Z^2-Z}}\frac{Z\alpha^2(-2ir)}{2(Z^2-Z)}+\ldots\right). \end{align} Similarly, we see that \begin{equation} \overline d_2= \sqrt{Z^2-Z}\left(\frac{Z\alpha}{\sqrt{Z^2-Z}}+\sqrt{1+\frac{\alpha^2Z}{Z^2-Z}}\right), \end{equation} and doing the same type of Taylor expansion we find \begin{equation}\label{f simpify3} \exp\left(2ir\alpha\log \overline{d_2}\right)=\exp\left(2ir\alpha\log\sqrt{Z^2-Z}\right) \left(1+\frac{2irZ\alpha^2}{\sqrt{Z^2-Z}}+\ldots\right).
\end{equation} Combining \eqref{f(y0)} and \eqref{f simpify1}, \eqref{f simpify2}, \eqref{f simpify3}, we get an expression \begin{equation}\label{exp f_0} \exp\left(irf(y_0)\right)= \exp\left(irF(x,\alpha)\right) \left(1+\ldots\right), \end{equation} where \begin{align} F(x, \alpha)&=(1\!-\!\alpha)\log Z\!-\!2\alpha\log(Z\!-\!1)\!-\!2\log(Z\!+\!\sqrt{Z^2\!-\!Z})\!+\!2\alpha\log\sqrt{Z^2\!-\!Z}\\ &=\log Z-\alpha\log(Z-1)-2\log(Z+\sqrt{Z^2-Z})\\ &=-(2\log(x+\sqrt{x^2-4})+\alpha\log(x^2-4)+(6+2\alpha)\log(2)). \end{align} Here we have used that $Z=x^2/4$. All the terms after 1 in the last expression of \eqref{exp f_0} will be smaller in the final analysis and we ignore them below. Using the expression \eqref{exp f_0} in \eqref{hest3} we see that we need to bound \begin{equation}\label{hest4} \mathcal{I}:=\frac{1}{r^{1/2}} \abs{\int_{0}^{\infty}\frac{\omega(x)}{x^{1/2}}C(x,\alpha)\exp\left(irF(x, \alpha) \right)dx}. \end{equation} Integrating \eqref{hest4} by parts $n$ times, we infer \begin{equation} \mathcal{I}\ll \frac{1}{\sqrt{r}}\left(\frac{\sqrt{X}}{r}\right)^n. \end{equation} Therefore, for $r>X^{1/2+\epsilon}$ the integral is negligible. In order to estimate $\mathcal{I}$ for $r\leq X^{1/2+\epsilon}$, we specialize the choice of the function $\omega$: \begin{equation}\label{omega def} \omega(x)=\frac{1}{T\pi^{1/2}}\int_X^{2X}\exp\left(-\frac{(x-K)^2}{T^2}\right)dK. \end{equation} For an arbitrary $A>1$ and some $c>0$ we have (see \cite{IvicJutila:2003}) \begin{equation} \omega(x)=\begin{cases} 1+O(x^{-A})& \text{ if } x\in ]X+cT\sqrt{\log X},2X-cT\sqrt{\log X}[\\ O((|x|+X)^{-A})& \text{ if } x\not\in [X-cT\sqrt{\log X},2X+cT\sqrt{\log X}] \end{cases} \end{equation} and $\omega(x)=1+O(T^3(T+\min(|x-X|,|x-2X|))^{-3})$ otherwise. Using \eqref{omega def} and making the change of variables $x=K+yT$, we show that \begin{equation} \mathcal{I}\ll\frac{1}{r^{1/2}}\int_X^{2X} \int_{-\infty}^{\infty}\exp(-y^2)\frac{C(K+yT,\alpha)}{(K+yT)^{1/2}}\exp\left(irF(K+yT, \alpha) \right)dydK.
\end{equation} Using the rapid decay of the function $\exp(-y^2)$, we truncate the integral over $y$ at the point $v_0:=\log{(rX)}$. After that we expand all functions under the integral sign in Taylor series. In particular, as $x\rightarrow \infty$, we have the following Taylor series: \begin{equation} F(x,\alpha)=-(6+2\alpha)\log{2}-\left( 2(\alpha+1)\log{x}+2\log{2}-\frac{2+4\alpha}{x^2}+\ldots\right). \end{equation} Thus we show that \begin{equation} \mathcal{I}\ll\frac{1}{r^{1/2}}\int_X^{2X} \int_{-v_0}^{v_0}\frac{\exp(-y^2)}{K^{1/2}}\exp\left(2ir(1+\alpha)\log(K+yT) \right)dydK. \end{equation} It follows from the Taylor series expansion of the logarithm that \begin{equation} r\log(K+yT)=r\log{K}+\frac{ryT}{K}-\frac{r}{2}\left( \frac{yT}{K}\right)^2+\ldots \end{equation} Since $|y|\ll \log{X}$ and $K\sim X$, we obtain for $T\ll X^{3/4-{\varepsilon}_1}\ll X^{1-{\varepsilon}}r^{-1/2}$ that \begin{equation} \mathcal{I}\ll\frac{1}{r^{1/2}}\int_X^{2X}\frac{1}{\sqrt{K}}\exp(2ir(1+\alpha)\log{K}) \int_{-v_0}^{v_0}\exp(-y^2)\exp(2ir(1+\alpha)yT/K)dydK. \end{equation} Due to the rapid decay of the function $\exp(-y^2)$, we can enlarge the integral over $y$ to $(-\infty, \infty)$ at the cost of a negligible error term. Evaluating the resulting integral over $y$ we have \begin{equation}\label{eq:mathcalI} \mathcal{I}\ll\frac{1}{r^{1/2}}\int_X^{2X}\frac{1}{\sqrt{K}}\exp(2ir(1+\alpha)\log{K}) \exp(-(r(1+\alpha)T/K)^2)dK. \end{equation} Note that for $r>T^{-1}X\log{X}$ the integral above is negligible. Now consider the case $r\leq T^{-1}X\log{X}$. To estimate \eqref{eq:mathcalI}, we apply the following inequality. Since we have not been able to find a proper reference we also provide a short proof: \begin{lemma}\label{stupid-lemma} Let $-\infty\leq a\leq b \leq \infty$. Let $p(x)$ be integrable on $[a,b]$, and let $q(x)$ be a differentiable real function satisfying $q'(x)\neq 0$ in the support of $p(x)$.
Assume further that $p(x)/q'(x)$ is differentiable with its derivative integrable on $[a,b]$. If $]a,b[$ is the union of $m$ intervals $[a_i,b_i]$ such that $\frac{p(x)}{q'(x)}$ is monotonic on each interval $[a_i,b_i]$, we have \begin{equation}\label{eq:sp1} \abs{\int_{a}^b p(x)\exp(iq(x))dx} \leq 2 (m+1) \max_{x\in [a,b]}\abs{\frac{p(x)}{q'(x)}}. \end{equation} \end{lemma} \begin{proof} Using integration by parts we see that \begin{equation}\abs{\int_a^b p(x)\exp(iq(x))dx}\leq \abs{\left. \frac{p(x)}{q'(x)}\exp(iq(x))\right\vert_a^b}+\abs{\int_a^b \frac{d}{dx}\left(\frac{p(x)}{q'(x)}\right)\exp(iq(x))dx}. \end{equation} If $\frac{p(x)}{q'(x)}$ is monotonic on $[a_i,b_i]$ then \begin{equation} \int_{a_i}^{b_i}\abs{\frac{d}{dx}\left(\frac{p(x)}{q'(x)}\right)}dx=\abs{\int_{a_i}^{b_i}\frac{d}{dx}\left(\frac{p(x)}{q'(x)}\right)dx}=\abs{\left.\frac{p(x)}{q'(x)}\right\vert_{a_i}^{b_i}}\leq 2 \max_{x\in [a,b]}\abs{\frac{p(x)}{q'(x)}}, \end{equation} from which the lemma follows. \end{proof} Using Lemma \ref{stupid-lemma} with $p(K)=K^{-1/2}\exp(-(r(1+\alpha)T/K)^2)$ and $q(K)=2r(1+\alpha)\log{K}$, so that $p(K)/q'(K)\ll\sqrt{K}/r\ll\sqrt{X}/r$ (and $p/q'$ is piecewise monotonic with a bounded number of pieces), we prove that $\mathcal{I}\ll X^{1/2}r^{-3/2}$. Therefore, \begin{equation}\label{hestimate2} h_1(\omega;1/2+2it;r)\ll \begin{cases} X^{1/2}r^{-3/2} & r<T^{-1}X\log{X},\\ X^{-100}&T^{-1}X\log{X}\leq r\leq X^{1/2+\epsilon},\\ r^{-1/2}(X^{1/2}/r)^n & r>X^{1/2+\epsilon}. \end{cases} \end{equation} Now we apply \eqref{hestimate1} for $r_j\ll X^{2{\varepsilon}}$ and \eqref{hestimate2} for $r_j\gg X^{2{\varepsilon}}$ to estimate the contribution of the discrete spectrum: \begin{equation*} Z_D(1/2+it)\ll \sqrt{X}\left(\frac{X}{T}\right)^{1/2+{\varepsilon}}. \end{equation*} See \cite[(1.6)]{BalkanovaFrolenkov:2017b} for the definition of $Z_D(s)$. The continuous part $Z_C(s)$ \cite[(1.4)]{BalkanovaFrolenkov:2017b} can be estimated in the same way. We are left to estimate the holomorphic part $Z_H(1/2+it)$, see \cite[(1.5)]{BalkanovaFrolenkov:2017b} for the definition.
With this goal, we study the function \begin{multline}\label{gdef} g(\omega;s;k)=2^{2k-1}\sin{\frac{\pi s}{2}}\int_{0}^{\infty}\omega(x)\Gamma(k+s/2-1/2)\\ \times\frac{\Gamma(k+s/2)}{\Gamma(2k)} x^{1-2k-s}F(k+s/2-1/2,k+s/2,2k;4/x^2)dx \end{multline} for $s=1/2+2it.$ To deal with the hypergeometric function, we apply an approach based on the Mellin-Barnes integrals \cite{Frolenkov:2015}. To this end, we first use the property \cite[15.8.1]{OlverLozierBoisvertClark:2010}, getting \begin{align}\label{Ftransform} F(k+s/2&-1/2,k+s/2,2k;\frac{4}{x^2})=\\ &\left(1-\frac{4}{x^2}\right)^{-k-s/2+1/2}F(k+s/2-1/2,k-s/2,2k;-\frac{4}{x^2-4}). \end{align} In the same way as in \cite[Theorem 2]{Frolenkov:2015}, we prove that \begin{equation}\label{Fasympt} F(k+s/2-1/2,k-s/2,2k;-\frac{4}{x^2-4})=1+O\left(\frac{k+t}{\sqrt{k}(x^2-4)^{1/2}}\right). \end{equation} Using \eqref{gdef}, \eqref{Ftransform} and \eqref{Fasympt}, we obtain \begin{equation}\label{gest} g(\omega;1/2+2it;k)\ll \int_{0}^{\infty}\frac{\omega(x) 2^{2k}}{x^{2k-1/2}}\exp(\pi|t|)\frac{|\Gamma(k+it)|^2}{\Gamma(2k)} dx. \end{equation} Applying the results of \cite[section 2]{Frolenkov:2015} it can be shown that \begin{equation}\label{Gamma est small k} \exp(\pi|t|)\frac{|\Gamma(k+it)|^2}{\Gamma(2k)}\ll\frac{t^{2k-1}\exp(tg_2(k/t))}{\Gamma(2k)},\quad\hbox{for}\quad k\le t, \end{equation} \begin{equation}\label{Gamma est big k} \exp(\pi|t|)\frac{|\Gamma(k+it)|^2}{\Gamma(2k)}\ll\frac{\Gamma^2(k)}{\Gamma(2k)}\exp(t(\pi-g_1(t/k))),\quad\hbox{for}\quad k>t, \end{equation} where \begin{equation}\label{g1} g_1(z):=\int_0^z\log(1+x^2)\frac{dx}{x^2}=2\arctan z-\frac{\log(1+z^2)}{z}, \end{equation} \begin{equation}\label{g2} g_2(z):=\int_0^z\log(1+x^2)dx=z\log(1+z^2)-2z+2\arctan z. 
\end{equation} Substituting \eqref{Gamma est small k} in \eqref{gest} and using the Stirling formula \cite[5.11.3]{OlverLozierBoisvertClark:2010}, we have for $k\le t$ \begin{equation*} g(\omega;1/2+2it;k)\ll\frac{\sqrt{k}}{t} \int_{0}^{\infty}\omega(x)\sqrt{x}\exp(kh_2(k,t,x))dx, \end{equation*} where \begin{equation}\label{h2def} h_2(k,t,x):=2+2\log(t/k)-2\log x+g_2(k/t)t/k. \end{equation} Since $g_2(z)-zg_2'(z)=2\arctan z-2z$ we obtain that \begin{equation*} \frac{\partial}{\partial k}h_2(k,t,x)=-\frac{2t}{k^2}\arctan\frac{k}{t}<0. \end{equation*} Consequently, for $k\le t$ there exist constants $c_1,c_2>0$ such that \begin{equation}\label{gest2} g(\omega;1/2+2it;k)\ll\frac{\sqrt{k}}{t} \int_{0}^{\infty}\omega(x)\sqrt{x}(c_1t/x)^kdx\ll X^{-c_2k}. \end{equation} The case of $k>t$ can be treated in the same way and leads to the same estimate. Finally, we show that $Z_H(1/2+it)\ll X^{1/2}$. Combining the above estimates with \cite[Thm. 6.2]{BalkanovaFrolenkov:2017b} and the bound $\mathscr{L}_{n^2-4}(1/2+it)\ll n^{2\theta+{\varepsilon}}X^{{\varepsilon}} $ we find \begin{align} \sum_{X<n\leq 2X}\mathscr{L}_{n^2-4}(1/2+it)&=\sum_{n=1}^\infty\omega(n)\mathscr{L}_{n^2-4}(1/2+it)+O(X^{2\theta+{\varepsilon} }T)\\ \label{end-game} &=\int_{X}^{2X}m_t(u)du+Z_C(1/2+it)+Z_H(1/2+it)\\ &\quad\quad +Z_D(1/2+it)+O(X^{2\theta+{\varepsilon} }T)\\ &=\int_{X}^{2X}m_t(u)du+O(X^{1/2}(X/T)^{1/2+{\varepsilon}}+X^{2\theta+{\varepsilon} }T). \end{align} Choosing $T=X^{2/3-4\theta/3}$ and applying a dyadic partition of unity we complete the proof of Theorem \ref{asymptotic}. We notice that the dependence on the Lindel\"of hypothesis comes from the unsmoothing in the first line of \eqref{end-game}. \section{Conditional improvements in the prime geodesic theorem}\label{sec:conditional} In this section we prove Theorem \ref{conditional-bound}.
By Iwaniec's explicit formula \eqref{PrimeGeodesic to spec.sum} and summation by parts it suffices to bound \begin{equation}\sum_{0< t_j\leq T}t_jX^{it_j}.\end{equation} It turns out to be useful to approximate this by a smooth sum \begin{equation}\label{relevant-sum}\sum_{0< t_j}t_jX^{it_j}\exp(-t_j/T).\end{equation} Define \begin{equation}\label{phi def0} \varphi(x)=\frac{\sinh^2\beta}{2\pi}x^2\exp(ix\cosh\beta) \end{equation} with \begin{equation}\label{beta def} \beta=\frac{1}{2}\log X+\frac{i}{2T}. \end{equation} The reason for this choice is that the integral transform of $\varphi$, call it $\hat\varphi$, appearing in the Kuznetsov trace formula, satisfies \begin{equation}\label{test-function}\hat\varphi(t)=tX^{it}\exp(-t/T)(1+O(\abs{t}^{-1})) \end{equation} for $X,T$ sufficiently large, and $t$ bounded away from zero. The function $\hat\varphi$ is therefore well-suited to study \eqref{relevant-sum}. Let us introduce the following notation: \begin{equation}\label{c def} c:=-i\cosh\beta=a-ib, \end{equation} \begin{equation}\label{a,b def} \begin{cases} a:=\sinh(\log\sqrt{X})\sin((2T)^{-1}),\\ b:=\cosh(\log\sqrt{X})\cos((2T)^{-1}). \end{cases} \end{equation} We remark that \begin{equation}\label{argc} \arg{c}=-\pi/2+\gamma,\quad \gamma>0, \quad T^{-1}\ll\gamma \ll T^{-1}. \end{equation} Let $h(x)$ be a smooth function supported in $[N,2N]$ for some $N>1$ such that \begin{equation}\label{h conditions} |h^{(j)}(x)|\ll N^{-j}\ \hbox{for}\ j=0,1,2,\ldots,\quad \int_{-\infty}^{\infty}h(x)dx=N. \end{equation} Iwaniec \cite[Lemma 8]{Iwaniec:1984a} showed that by using the Lindel\"of conjecture on average (proved by Luo and Sarnak \cite[(5)]{LuoSarnak:1995}) one can prove \begin{equation}1=\frac{\pi^2}{12N}\left(\sum_{n\in {\mathbb N}}h(n)\abs{\nu_j(n)}^2- r(t_j,N)\right)\end{equation} with \begin{equation} \sum_{t_j\leq T}\abs{r(t_j,N)}\ll T^{2}N^{1/2}\log^2(T).
\end{equation} Here $\nu_j(n)$ is the $n$th Fourier coefficient of the (appropriately normalized) Laplace eigenfunction with eigenvalue $1/4+t_j^2$. Using this and \eqref{test-function} we see that in order to understand \eqref{relevant-sum} it suffices to understand \begin{equation} \sum_{n}h(n)\sum_{t_j}\hat\varphi(t_j)\abs{\nu_j(n)}^2. \end{equation} This can be analyzed using the Kuznetsov formula, which leads to the sum of Kloosterman sums \begin{equation}\label{intermediate} \sum_{n}h(n)\sum_{q=1}^{\infty}\frac{S(n,n;q)}{q} \varphi\left(\frac{4\pi n}{q}\right). \end{equation} We show that an asymptotic formula for \begin{equation}\sum_{2<n\leq X}\mathscr{L}_{n^2-4}(1/2+it)\end{equation} yields a bound on \eqref{intermediate}, hence on \eqref{relevant-sum}, and ultimately on the error term in the prime geodesic theorem. We start by quoting a result from \cite{BalkanovaFrolenkov:2019a} which shows a relation between the sum \eqref{intermediate} and $\mathscr{L}_{n^2-4}(1)$: \begin{lem}\label{lemma estimates on the first error} For $N,X,T\gg1$ the following asymptotic formula holds \begin{multline}\label{sum Kloosterman decomposition2} \sum_{n}h(n)\sum_{q=1}^{\infty}\frac{S(n,n;q)}{q} \varphi\left(\frac{4\pi n}{q}\right)=\frac{2 \int_0^\infty h(t)dt } {\zeta(2)} \sum_{n=3}^{\infty}\mathscr{L}_{n^2-4}(1)\Phi(n)\\+ O\left(N\log(NX)+N^{1/2}X^{1/4+\theta}T^{3/2}\left(1+\frac{T}{X^{1/2}}\right)\right), \end{multline} where \begin{equation}\label{Phi(n)def} \Phi(x)=\frac{\sinh^2\beta}{2\pi c^2} \frac{1-x^2/(4c^2)}{(1+x ^2/(4c^2))^{2}}. \end{equation} \end{lem} \begin{proof} See \cite[Lemma 3.4]{BalkanovaFrolenkov:2019a}. \end{proof} \subsection{Relation between $\mathscr{L}_{n^2-4}(1)$ and $\mathscr{L}_{n^2-4}(1/2+it)$} We denote the $q$th coefficient in the Dirichlet series of $\mathscr{L}_n(s)$ by $\lambda_q(n)$, i.e.\ for $\Re(s)>1$ we have \begin{equation} \mathscr{L}_n(s)=\sum_{q=1}^\infty\frac{\lambda_q(n)}{q^s}.
\end{equation} For any $n$ and some constant $A>0$ one has \begin{equation}\label{eq:subconvexity} \mathscr{L}_n(1/2+it)\ll (1+|n|)^{\theta}(1+|t|)^{A}, \end{equation} where we may take $\theta$ to be any subconvexity exponent for Dirichlet $L$-functions of real primitive characters in the conductor aspect, and $A$ any subconvexity exponent in the $t$-aspect. (See \cite[Sec. 4]{BalkanovaFrolenkov:2018e} for this and other properties of $\mathscr{L}_n(s)$.) Conrey and Iwaniec \cite{ConreyIwaniec:2000} proved that we can take $\theta=1/6+{\varepsilon}$, and Young \cite{Young:2017b} proved a hybrid bound $\theta=A=1/6+{\varepsilon}$. For $V\geq1$ define the series \begin{equation}\label{SV def} S_V(n^2-4):=\sum_{q=1}^{\infty}\frac{\lambda_q(n^2-4)}{q}\exp(-q/V), \end{equation} which is a smoothed out analogue of \begin{equation} \sum_{q\leq V}^{}\frac{\lambda_q(n^2-4)}{q}. \end{equation} It follows from the Mellin inversion formula for the exponential function and a contour shift (see \cite[Eqs. (1.6)-(1.8), p. 723]{Bykovskiui:1994}, \cite[p. 116, line 2]{SoundararajanYoung:2013} for details) that for $V>0$ and $n\neq2$ we have \begin{equation}\label{approximate func.eq.} \mathscr{L}_{n^2-4}(1)= S_V(n^2-4)-\frac{1}{2\pi i}\int_{(-1/2)}\mathscr{L}_{n^2-4}(1+s)V^s\Gamma(s)ds. \end{equation} Using this identity we find that \begin{multline}\label{application of approx fun.eq} \sum_{n=3}^{\infty}\mathscr{L}_{n^2-4}(1)\Phi(n)= \sum_{n=3}^{\infty}\Phi(n)S_V(n^2-4)-\\ \frac{1}{2\pi i}\int_{(-1/2)}\sum_{n=3}^{\infty}\Phi(n)\mathscr{L}_{n^2-4}(1+s)V^s\Gamma(s)ds. \end{multline} The first summand on the right was estimated in \cite[Eq. 3.75]{BalkanovaFrolenkov:2019a} where it was found that \begin{equation}\label{application of approx fun.eq5} \sum_{n=3}^{\infty}\Phi(n)S_V(n^2-4)\ll V^{1/2}T^2\log^2V. \end{equation} We now consider the second summand on the right of \eqref{application of approx fun.eq}.
Due to the rapid decay of $\Gamma(-1/2+it)$ we can truncate the integral over $t$ to the range $|t|\leq X^{{\varepsilon}}$ at the cost of an error term of $O_B(V^{-1/2}X^{-B})$ for any $B>0$. \begin{lem}\label{intermed} For $|t|\ll X^{{\varepsilon}}\ll T $ we have \begin{equation}\label{eq:est:1/2} \sum_{n=3}^{\infty}\mathscr{L}_{n^2-4}(1/2+it)\Phi(n)\ll X^{\alpha/2+{\varepsilon}}T^{2}. \end{equation} \end{lem} \begin{proof} By partial summation we have \begin{equation}\label{eq:abelsumm} \sum_{n=3}^{\infty}\mathscr{L}_{n^2-4}(1/2+it)\Phi(n)=-\int_{1}^{\infty}\Phi'(x)\sum_{3\leq n\leq x} \mathscr{L}_{n^2-4}(1/2+it)dx. \end{equation} Applying Theorem \ref{asymptotic} to the inner sum, \eqref{eq:abelsumm} can be written as \begin{equation}\label{mt+et} -\int_{1}^{\infty}\Phi'(x)M_t(x)dx+O\left(\int_{1}^{\infty}\abs{\Phi'(x)}x^{\alpha+{\varepsilon}}dx\right), \end{equation} where $M_t(x)=\int_{2}^{x}m_t(u)du$. To estimate the error term we argue as in \cite[Eqs. 3.70, 3.71]{BalkanovaFrolenkov:2019a}, and find \begin{multline}\label{est:mtprime0} \int_{1}^{\infty}|\Phi'(x)|x^{\alpha+{\varepsilon}}dx\ll |c|^{\alpha+{\varepsilon}}\int_{0}^{\infty}\min\left( T^3,\frac{1}{|1-x|^3}\right)x^{\alpha+{\varepsilon}}dx\\ \ll |c|^{\alpha+{\varepsilon}}T^2\ll X^{\alpha/2+{\varepsilon}}T^{2}, \end{multline} where $c$ is defined by equations \eqref{c def}, \eqref{a,b def}. The next step is to estimate the contribution of the main term in \eqref{mt+et}. Here we use integration by parts to see that \begin{equation} -\int_{1}^{\infty}\Phi'(x)M_t(x)dx=\int_{1}^{\infty}\Phi(x)M_t'(x)dx+O(1). \end{equation} We then use that $M_t'(x)=m_t(x)$ together with the specific form of $m_t(x)$. Combining this with \cite[Eqs.
3.56, 3.57]{BalkanovaFrolenkov:2019a} we see that \begin{align}\label{est:mtprime} \int_{1}^{\infty}&\Phi(x)m_t(x)dx\ll \log(X)\int_0^\infty\abs{\Phi(x)}(\log(x)+1)dx\\ &\ll \log X\int_0^\infty\frac{((1-u^2)^2+4u^2\cos^2\gamma)^{1/2}}{(1-u^2)^2+4u^2\sin^2\gamma}\abs{1+\log(u/4\abs{c})}du. \end{align} Using \eqref{argc} we see that the fraction in the integrand is bounded by a constant times $\min (T^2, \abs{u-1}^{-2})$. Using this estimate we see that the full contribution of \eqref{est:mtprime} is $\ll TX^{\varepsilon}$. The statement follows by combining this with \eqref{mt+et}, \eqref{est:mtprime0}. \end{proof} \begin{lem}\label{lemma estimate on the main error} For $X^{{\varepsilon}}\ll T\ll X^{1/2}$ the following estimate holds \begin{equation}\label{estimate on sum L(1)} \sum_{n=3}^{\infty}\mathscr{L}_{n^2-4}(1)\Phi(n)\ll X^{\alpha/4+{\varepsilon}}T^2. \end{equation} \end{lem}\begin{proof} Using \eqref{application of approx fun.eq}, \eqref{application of approx fun.eq5}, the comment before Lemma \ref{intermed}, as well as Lemma \ref{intermed} we obtain \begin{equation} \sum_{n=3}^{\infty}\mathscr{L}_{n^2-4}(1)\Phi(n)\ll V^{1/2}T^2\log^2V+\frac{X^{\alpha/2+{\varepsilon}}T^{2}}{V^{1/2}}. \end{equation} Choosing $V=X^{\alpha/2}$, the statement follows. \end{proof} \begin{proof}[Proof of Theorem \ref{conditional-bound}] By using Lemmata \ref{lemma estimates on the first error} and \ref{lemma estimate on the main error} we find that for $X^{{\varepsilon}}\ll T\leq X^{1/2}$ we have \begin{align}\frac{1}{N}\sum_{n}h(n)\sum_{q=1}^{\infty}&\frac{S(n,n;q)}{q} \varphi\left(\frac{4\pi n}{q}\right)\\ & \ll X^{\alpha/4+{\varepsilon}}T^2+\log(NX)+N^{-1/2}X^{1/4+\theta}T^{3/2}. \end{align} Replacing \cite[Lemma 3.5]{BalkanovaFrolenkov:2019a} in the proof of \cite[Theorem 1.2]{BalkanovaFrolenkov:2019a} by the estimate above, we show that for $X^{{\varepsilon}}\ll T\leq X^{1/2}$ the following estimate holds \begin{equation} \sum_{t_j\ll T}t_jX^{it_j}\ll X^{\alpha/4+{\varepsilon}}T^2.
\end{equation} By using partial summation, we see that when $T\leq X^{1/2}$ we have \begin{equation} \sum_{0<t_j\ll T}\frac{X^{it_j}}{1/2+it_j}\ll X^{\alpha/4+{\varepsilon}}\log T. \end{equation} Finally, using this to evaluate \eqref{PrimeGeodesic to spec.sum} we may choose $T=X^{1/2-{\varepsilon}}$ and conclude that \begin{equation} E(X)=O(X^{1/2+\alpha/4+{\varepsilon}}) \end{equation} which proves Theorem \ref{conditional-bound}. \end{proof} \section{Omega results for averages of $\mathscr{L}_{n^2-4}(1/2)$}\label{sec:omega-results} We recall that $f=\Omega(g)$ as $x\to\infty$ if $f=o(g)$ does not hold, and that $f=\Omega_\pm(g)$ as $x\to\infty$ if $\limsup_{x\to\infty}f/g>0$ and $\liminf_{x\to\infty}f/g<0$. When proving Omega results for the error term it is useful to smooth it out by integrating against a suitable test function. We now describe how this can be done. We denote the indicator function of a set $A$ as $1_A(t)$. Consider a bump function, i.e. a smooth non-negative function $\phi$ on ${\mathbb R}$ supported in $[-1,1]$ satisfying $\int_{\mathbb R}\phi(t)dt=1$. Let $\delta>0$ be some small parameter and define $$\phi_{\delta}(t):=\delta^{-1}\phi(\delta^{-1}t).$$ Then $\phi_{\delta}$ is supported in $[-\delta, \delta]$ but otherwise has the same characteristics as $\phi$. We now define a smooth indicator function of an interval by the convolution \begin{equation*} 1^{sm}_{\delta,]y,2y]}(t)=\phi_\delta*1_{]y,2y]}(t)=\int_{{\mathbb R}}\phi_\delta(t-v)1_{]y,2y]}(v)dv. \end{equation*} It is straightforward to verify that for $y>\delta$ \begin{enumerate} \item $1^{sm}_{\delta,]y,2y]}(t)\in [0,1]$, \item $1^{sm}_{\delta,]y,2y]}(t)=0$ for $t$ outside $]y-\delta,2y+\delta]$, and \item $1^{sm}_{\delta,]y,2y]}(t)=1$ for $t\in]y+\delta,2y-\delta]$. \end{enumerate} We also consider another smooth non-negative function $\psi$ supported in $]1/2,2]$ and satisfying $\int_{{\mathbb R}_+}\psi(v)\frac{dv}{v}=1$.
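For concreteness, one admissible choice of such a $\psi$ is the following normalized bump; this particular formula is only an illustration and is not used elsewhere, as any smooth non-negative function with the stated support and normalization would do.

```latex
% An admissible choice of \psi (illustration only): a normalized bump
% supported in ]1/2,2[, smooth because every derivative of \psi_0
% vanishes at the endpoints v=1/2 and v=2.
\begin{equation*}
\psi(v)=\frac{\psi_0(v)}{\int_{1/2}^{2}\psi_0(u)\,\frac{du}{u}},
\qquad
\psi_0(v)=
\begin{cases}
\exp\left(-\dfrac{1}{(v-1/2)(2-v)}\right), & 1/2<v<2,\\
0, & \text{otherwise}.
\end{cases}
\end{equation*}
```

By construction $\psi\geq 0$, $\operatorname{supp}\psi\subset]1/2,2]$ and $\int_{{\mathbb R}_+}\psi(v)\frac{dv}{v}=1$.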
Define \begin{equation*} \psi_{\delta}(v)=\delta^{-1}\psi(v^{1/\delta}). \end{equation*} Now $\psi_{\delta}$ is supported in $[2^{-\delta}, 2^{\delta}]$ but otherwise shares the same characteristics as $\psi$. Fix now small positive parameters $0<\delta_1, \delta_2\leq 1$, and define the function \begin{equation}\label{omega_X} \omega_X(t)=\int_{{\mathbb R}_+}\psi_{\delta_1}\left(\frac{y}{X}\right)1^{sm}_{\delta_2,]y,2y]}(t)\frac{dy}{y}. \end{equation} \begin{tikzpicture} \draw[->] (-0.2,0) -- (10,0) node[right] {$t$}; \draw[->] (0,-0.2) -- (0,1.5) node[above] {$\omega_X(t)$}; \draw (2pt,1) -- (-2pt,1) node[anchor=east] {1}; \draw (0,2pt) -- (0,-2pt) node[anchor=north] {0}; \draw (2,2pt) -- (2,-2pt) node[anchor=west, rotate=-35] {$\frac{X}{2^{\delta_1}}-\delta_2$}; \draw (4,2pt) -- (4,-2pt) node[anchor=west, rotate=-35] {$2^{\delta_1}X+\delta_2$}; \draw (6,2pt) -- (6,-2pt) node[anchor=west, rotate=-35] {$2^{1-\delta_1}X-\delta_2$}; \draw (8,2pt) -- (8,-2pt) node[anchor=west, rotate=-35] {$2^{1+\delta_1}X+\delta_2$}; \draw (1,0) -- (2,0); \draw (2,0) .. controls (3,0) and (3,1) .. (4,1); \draw (4,1) -- (6,1); \draw (6,1) .. controls (7,1) and (7,0) .. (8,0); \draw (8,0) -- (9,0); \draw[dashed,very thin,color=gray] (0,1) -- (4,1); \draw[dashed,very thin,color=gray] (4,0) -- (4,1); \draw[dashed,very thin,color=gray] (6,0) -- (6,1); \end{tikzpicture} \begin{prop}\label{omega-prop}For $X>2$ the function $\omega_{X}$ satisfies that \begin{enumerate} \item \label{one} $\omega_X(t)$ is smooth and $0\leq \omega_X(t)\leq 1$, \item \label{two} $\omega_X(t)=0$ for $t\notin[2^{-\delta_1}X-\delta_2, 2^{1+\delta_1}X+\delta_2]$, \item \label{three} $\omega_X(t)=1$ for $t\in[2^{\delta_1}X+\delta_2, 2^{1-\delta_1}X-\delta_2]$, \item \label{four} for $k\geq 1$ we have $\omega_X^{(k)}\ll_{k}\frac{1}{(\delta_1X)^k}$ with support in two intervals of length $O(\delta_1X+\delta_2)$. \end{enumerate} \end{prop} \begin{proof} Observe that \eqref{one} is clear from the definition.
We then note that \eqref{two} follows by noticing that if $t\notin[2^{-\delta_1}X-\delta_2, 2^{1+\delta_1}X+\delta_2]$, then $1^{sm}_{\delta_2,]y,2y]}(t)=0$ for all $y$ with $2^{-\delta_1}\leq y/X\leq 2^{\delta_1}$, that is, for all $y$ in the support of $\psi_{\delta_1}\left(\frac{y}{X}\right)$. To see \eqref{three} we note that for $2^{-\delta_1}\leq y/X\leq 2^{\delta_1}$ and $t\in[2^{\delta_1}X+\delta_2, 2^{1-\delta_1}X-\delta_2]$ we have $1^{sm}_{\delta_2,]y,2y]}(t)=1$. Finally the claim on the support of $\omega_X^{(k)}$ follows from \eqref{two} and \eqref{three}. To bound $\omega_X^{(k)}$ we note that repeated integration by parts gives \begin{align*} \omega_X^{(k)}(t)&=\int_{{\mathbb R}_+}\psi_{\delta_1}\left(\frac{y}{X}\right)\left.1^{sm}_{\delta_2,]y,2y]}\right.^{(k)}(t)\frac{dy}{y}\\ &=(-1)^{k}\int_{{\mathbb R}_+}\left[y^{-1}\psi_{\delta_1}\left(\frac{y}{X}\right)\right]^{(k)}1^{sm}_{\delta_2,]y,2y]}(t)dy. \end{align*} It is straightforward to see that for $y$ in the support of the integrand we have $$\left[y^{-1}\psi_{\delta_1}\left(\frac{y}{X}\right)\right]^{(k)}\ll \frac{1}{(\delta_1 X)^{k}y}H_{k, \delta_1}\left(\frac{y}{X}\right)$$ for some smooth function $H_{k, \delta_1}$ satisfying $\int_{0}^{\infty}H_{k, \delta_1}\left(\frac{y}{X}\right)\frac{dy}{y}\ll_k 1.$ This finishes the proof. \end{proof} We let $$E_t(X):=\sum_{2<n\leq X}{\mathscr{L}_{n^2-4}(1/2+it)}-\int_{2}^Xm_t(u)du$$ and consider \begin{equation} \label{smooth-error} E_{\delta_1,\delta_2}(X)=\int_{{\mathbb R}_+}\psi_{\delta_1}\left(\frac{y}{X}\right)\int_{\mathbb R}\phi_{\delta_2}(v)\left(E_0(2y+v)-E_0(y+v)\right)dv\frac{dy}{y}. \end{equation} Note that the integrand is supported in $2^{-\delta_1}X\leq y \leq 2^{\delta_1}X$, $-\delta_2\leq v\leq \delta_2$. It suffices to prove that this smoothed-out error term is $\Omega(X^{1/2})$, as the following lemma shows. \begin{lem}\label{from-unsmooth-to-smooth} If $E_{\delta_1,\delta_2}(X)=\Omega(X^{1/2})$ then $E_0(X)=\Omega(X^{1/2})$.
\end{lem} \begin{proof} Assume to the contrary that $E_0(X)/X^{1/2}\to 0$ as $X\to \infty$. For a given ${\varepsilon}>0$ we may choose $X_0$ large enough such that for $X>X_0$ $$\abs{E_0(2y+v)-E_0(y+v)}\leq {\varepsilon} X^{1/2},$$ whenever $(y,v)$ is in the support of the integrand. Combining this with $\int_{{\mathbb R}_+}\psi_{\delta_1}\left(\frac{y}{X}\right)\frac{dy}{y}=\int_{\mathbb R}\phi_{\delta_2}(v)dv=1$ we may conclude that $\abs{E_{\delta_1,\delta_2}(X)}\leq {\varepsilon} X^{1/2}$, which contradicts $E_{\delta_1,\delta_2}(X)=\Omega(X^{1/2})$. \end{proof} We now show how $E_{\delta_1,\delta_2}(X)$ relates to $\omega_X$ defined in \eqref{omega_X}. \begin{lem}\label{herewego}For $X\geq 3$ we have \begin{equation*} E_{\delta_1,\delta_2}(X)=\sum_{n=1}^\infty\mathscr{L}_{n^2-4}(1/2)\omega_X(n)-\int_{0}^\infty m_0(u)\omega_X(u)du. \end{equation*} \end{lem} \begin{proof} Let $X\geq 3$. For $(y,v)$ in the support of the integrand of $E_{\delta_1,\delta_2}(X)$ we have \begin{align*} E_0(2y+v)-E_0(y+v)&=\sum_{y+v<n\leq 2y+v}\mathscr{L}_{n^2-4}(1/2)-\int_{y+v}^{2y+v}m_0(u)du\\ =&\sum_{n=1}^{\infty}1_{]y, 2y]}(n-v)\mathscr{L}_{n^2-4}(1/2)-\int_0^\infty1_{]y, 2y]}(u-v) m_0(u)du. \end{align*} Inserting this in \eqref{smooth-error}, changing order of integration and making a simple change of variables $r=u-v$ gives the result. \end{proof} We now want to prove that \begin{equation}E_{\delta_1,\delta_2}=\Omega_\pm(\sqrt{X}).\end{equation} This implies clearly that $E_{\delta_1,\delta_2}=\Omega(\sqrt{X}), $ and therefore, when combined with Lemma \ref{from-unsmooth-to-smooth}, proves Theorem \ref{omega}. 
Applying \cite[Theorem 6.3]{BalkanovaFrolenkov:2017b} and Lemma \ref{herewego} we have \begin{equation}\label{expression} E_{\delta_1,\delta_2}(X)=Z_C(1/2)+Z_H(1/2)+Z_D(1/2), \end{equation} where \begin{equation}\label{eq:zc} Z_C(1/2)=\frac{\zeta(1/2)}{\pi}\int_{-\infty}^{\infty} \frac{\zeta(1/2+2ir)\zeta(1/2-2ir)}{\abs{\zeta(1+2ir)}^2}h(\omega_X;r)dr, \end{equation} \begin{equation}\label{eq:zh} Z_H(1/2)=\sum_{k\geq 6}g(\omega_X;k)\sum_{j\leq \vartheta(k)}\alpha_{j,k}L(\sym^2u_{j,k}, 1/2), \end{equation} \begin{equation} Z_D(1/2)=\sum_{j}\alpha_j L(\sym ^2 u_j,1/2)h(\omega_X;r_j), \end{equation} where, according to \cite[Lemmata 5.1 and 5.2]{BalkanovaFrolenkov:2017b}, the functions $h(\omega_X;r)$, $g(\omega_X;k)$ are defined as follows: \begin{equation}\label{eq:defh} h(\omega_X;r)=h_1(\omega_X;r)+h_1(\omega_X;-r), \end{equation} \begin{multline}\label{eq:defh1} h_1(\omega_X;r)=\frac{\sin{(\pi(-1/4+ir))}}{2\sin(\pi i r)} \frac{\Gamma(1/4+ir)\Gamma(3/4+ir)}{\Gamma(1+2ir)}\\ \times 2^{2ir}\int_{0}^{\infty}\frac{\omega_X(x)}{x^{1/2+2ir}}F(1/4+ir,3/4+ir,1+2ir,4/x^2)dx, \end{multline} \begin{multline}\label{g1def} g(\omega_X;k)=2^{2k-3/2}\int_{0}^{\infty}\omega_X(x)\Gamma(k-1/4)\\ \times\frac{\Gamma(k+1/4)}{\Gamma(2k)} x^{1/2-2k}F(k-1/4,k+1/4,2k;4/x^2)dx. \end{multline} We refer to \cite[Section 2]{BalkanovaFrolenkov:2017b} for the definition of $\alpha_j$ and $\alpha_{j,k}$. \begin{lem}\label{hypasymp1} There is an $x_0>2$ such that, as $r \rightarrow \infty$, we have \begin{multline} F\left( \frac{1}{4}+ir,\frac{3}{4}+ir,1+2ir; \frac{4}{x^2}\right)=x^{2ir}\exp(-2ir\acosh{x/2})\\ \times \left( \frac{x^2}{x^2-4}\right)^{1/4} \left(1+\frac{1}{16ir }\left( 1-\frac{x^2-2}{x\sqrt{x^2-4}}\right) \right)+O\left(\frac{1}{x^2r^2} \right), \end{multline} uniformly for all $x>x_0$. \end{lem} \begin{proof} See \cite[Corollary 7.3]{BalkanovaFrolenkov:2017b}.
\end{proof} We can now use this asymptotic expansion to estimate $Z_D(1/2)$: \begin{lem}\label{lem:discspectr} We have \begin{equation} Z_D(1/2)=\Omega_{\pm}(X^{1/2}). \end{equation} \end{lem} \begin{proof} We must analyze \begin{equation}Z_D(1/2)=\sum_{j}\alpha_j L(\sym ^2 u_j,1/2)h(\omega_X;r_j).\end{equation} Let $R$ be a real positive parameter (it will be chosen later to be $X^{{\varepsilon}}$) and split the sum over $r_j$ into two parts: $r_j>R$ and $r_j\leq R$. {\bf Assume first that $r_j>R$.} In this case we apply the asymptotic formula from Lemma \ref{hypasymp1} in order to approximate the hypergeometric function appearing in $h_1(\omega_X; r_j)$. The contribution of the error term in Lemma \ref{hypasymp1} can be handled as follows. We use the Stirling asymptotics on the gamma factors, and bound everything else trivially. This gives that the contribution to $h_{1}(\omega_X;r_j)$ can be bounded as follows: \begin{equation} \int_{0}^{\infty}\frac{\omega_X(x)}{x^{5/2}r_{j}^{5/2}}dx \ll \frac{1}{r_{j}^{5/2}(X2^{-\delta_1})^{3/2}}\ll \frac{1}{r_{j}^{5/2}X^{3/2}}. \end{equation} Consequently, this contributes to $Z_D(1/2)$ as \begin{equation} \sum_{r_j>R}\alpha_j |L(\sym^2 u_j,1/2)|\frac{1}{r_{j}^{5/2}X^{3/2}}\ll \frac{1}{R^{1/2-{\varepsilon}}X^{3/2}}. \end{equation} Note that in order to get the estimate above we used the Cauchy--Schwarz inequality and the upper bound for the second moment of symmetric-square $L$-functions (see \cite[Thm 3.2]{LuoSarnak:1995}), combined with well-known bounds on $\alpha_j$ (see \cite[Cor. 0.3]{HoffsteinLockhart:1994}). Now we estimate the contribution of the main term in Lemma \ref{hypasymp1}. To this end, consider the integral \begin{equation} \int_{0}^{\infty}\omega_X(x)v(x,r)\exp(-2ir_j\acosh{x/2})dx, \end{equation} where \begin{equation} v(x,r)=\frac{1}{(x^2-4)^{1/4}}\left(1+\frac{1}{16ir }\left( 1-\frac{x^2-2}{x\sqrt{x^2-4}}\right)\right).
\end{equation} Using that the derivative of $\exp(-2ir_j\acosh{x/2})$ with respect to $x$ equals $$\exp(-2ir_j\acosh{x/2})(-2ir_j)(x^2-4)^{-1/2},$$ we may integrate by parts twice in the above integral and obtain \begin{equation} \frac{1}{(2ir_j)^2}\int_{0}^{\infty}\Biggl(\sqrt{x^2-4}\left(\sqrt{x^2-4} \omega_X(x)v(x,r)\right)' \Biggr)' \exp(-2ir_j\acosh{x/2})dx. \end{equation} Using Proposition \ref{omega-prop} and estimating the integral trivially, this can be bounded by $\sqrt{X}r_{j}^{-2}\delta_{1}^{-1}$ where we have assumed that $X\delta_1\gg 1$. As a result, the main term in Lemma \ref{hypasymp1} contributes to $Z_D(1/2)$ as \begin{equation} \sum_{r_j>R}\frac{\alpha_j }{\sqrt{r_j}}|L(\sym^2 u_j,1/2)|\frac{\sqrt{X}}{r_{j}^2\delta_1}\ll \frac{\sqrt{X}}{R^{1/2-{\varepsilon}}\delta_1}. \end{equation} {\bf Next assume that $r_j\leq R$.} In this case, we apply the following asymptotic formula \begin{equation}\label{asymp:hygeom2} F(1/4+ir,3/4+ir,1+2ir,4/x^2)=1+O\left( \frac{r}{x^2}\right) \end{equation} when $r/x^2\ll1$, see \cite[Eq (4.13)]{Good:1981b}. Note that \begin{equation} \int_{0}^{\infty}O\left(\frac{r_j}{x^2}\right)\frac{\omega_X(x)}{\sqrt{x}}dx=O\left(\frac{r_j}{X^{3/2}} \right). \end{equation} Therefore, the contribution of the error term in the asymptotic formula \eqref{asymp:hygeom2} to $Z_D(1/2)$ can be bounded by \begin{equation} \sum_{r_j\leq R}\alpha_j |L(\sym^2 u_j,1/2)|\frac{\sqrt{r_j}}{X^{3/2}}\ll \frac{R^{5/2+{\varepsilon}}}{X^{3/2}}, \end{equation} where we have again used Stirling's asymptotics on the Gamma factors. We proceed to estimate the contribution of the main term in \eqref{asymp:hygeom2}. 
Therefore we consider \begin{equation} \int_{0}^{\infty}\frac{\omega_X(x)}{x^{\frac{1}{2}+2ir_j}}dx = \int_{-\infty}^{+\infty}\phi_{\delta_2}(v)\int_{{\mathbb R}_{+}}\psi_{\delta_1}\left( \frac{y}{X}\right)\int_{0}^{\infty} \frac{1_{]y,2y]}(x-v)}{x^{\frac{1}{2}+2ir_j}}dx\frac{dy}{y}dv, \end{equation} where we have inserted the definition of $\omega_X(x)$ and interchanged the order of integration. Consider now the two inner integrals. Evaluating the integral in $x$ and making the change of variables $y \rightarrow Xy$, we have \begin{equation}\label{eq:doubleint} \frac{X^{\frac{1}{2}-2ir_j}}{\frac{1}{2}-2ir_j}\int_{{\mathbb R}_{+}}\psi_{\delta_1}(y)\left(\left(2y+\frac{v}{X} \right)^{\frac{1}{2}-2ir_j}-\left(y+\frac{v}{X}\right)^{\frac{1}{2}-2ir_j}\right)\frac{dy}{y}. \end{equation} Applying the asymptotic formula \begin{equation} \left(2y+\frac{v}{X} \right)^{\frac{1}{2}-2ir_j}-\left(y+\frac{v}{X}\right)^{\frac{1}{2}-2ir_j}=(2y)^{\frac{1}{2}-2ir_j}-y^{\frac{1}{2}-2ir_j}+O\left(\frac{r_jv}{X\sqrt{y}}\right), \end{equation} and using the definition of $\psi_{\delta_1}$ and $\phi_{\delta_2}$ we find that \begin{equation}\label{eq:asympdoubleint} \int_{0}^{\infty}\frac{\omega_X(x)}{x^{\frac{1}{2}+2ir_j}}dx=\frac{X^{\frac{1}{2}-2ir_j}}{\frac{1}{2}-2ir_j}(2^{\frac{1}{2}-2ir_j}-1)\int_{{\mathbb R}_{+}}\psi_{\delta_1}(y)y^{\frac{1}{2}-2ir_j}\frac{dy}{y}+O(X^{-\frac{1}{2}}). \end{equation} The error term in \eqref{eq:asympdoubleint} contributes to $Z_D(1/2)$ as $O(R^{3/2}/\sqrt{X})$. In order to estimate the contribution of the main term, it is required to evaluate the integral \begin{equation} \int_{{\mathbb R}_{+}}\psi_{\delta_1}(y)y^{\frac{1}{2}-2ir_j}\frac{dy}{y}=\int_{0}^{\infty}\frac{1}{\delta_1}\psi(y^{1/\delta_1})y^{\frac{1}{2}-2ir_j}\frac{dy}{y}. \end{equation} Making the change of variables $y=v^{\delta_1}$ and integrating by parts, this is equal to \begin{equation}\label{eq:hw2} -\frac{1}{\delta_1(\frac{1}{2}-2ir_j)}\int_{0}^{\infty}\psi'(v)v^{\delta_1(\frac{1}{2}-2ir_j)}dv. 
\end{equation} Noticing that $\overline{h_1(\omega_X;r)}=h_1(\omega_X;-r)$, as follows from the definition \eqref{eq:defh1}, we deduce from the above considerations, namely \eqref{eq:asympdoubleint} and \eqref{eq:hw2}, that for $r\leq R$ \begin{equation}\label{eq:defhwr} h(\omega_X;r)=-2\Re\Biggl[ X^{\frac{1}{2}-2ir}H(\omega_X;r)\Biggr]+O(X^{-1/2}+R^{1/2}X^{-3/2}), \end{equation} where \begin{multline} H(\omega_X;r):=\frac{\sin(\pi(-1/4+ir))}{\sin(\pi ir)}2^{2ir-1}\frac{\Gamma(1/4+ir)\Gamma(3/4+ir)}{\Gamma(1+2ir)}\\ \times \frac{1}{(\frac{1}{2}-2ir)^2\delta_1}(2^{\frac{1}{2}-2ir}-1)\int_{0}^{\infty}\psi'(v)v^{\delta_1(\frac{1}{2}-2ir)}dv. \end{multline} The contribution to $Z_D(1/2)$ from the error term in \eqref{eq:defhwr} is $R^2/\sqrt X+R^{5/2}/X^{3/2}$, so, anticipating that $R$ will be much smaller than $X$, it suffices to show that \begin{equation}\label{eq:maincontr} -2\sum_{r_j \leq R}\alpha_j L(\sym^2 u_j,1/2) \Re\Biggl[ X^{1/2-2ir_j}H(\omega_X;r_j)\Biggr]=\Omega_{\pm}(X^{1/2}). \end{equation} In order to prove \eqref{eq:maincontr} we would like to apply \cite[Lemma 3]{IvicMotohashi:1990a} (see also \cite[Lemma 3]{Ivic:1999}). With this goal, we first extend the sum over $r_j$ to all $r_j$ at the cost of the error term $O(X^{1/2}R^{-1/2+{\varepsilon}})$. Furthermore, it is required to show that there is a non-zero term in the sum \eqref{eq:maincontr}. Let $r_{j_{0}}$ be the smallest of all $r_j$ such that $L(\sym^2 u_j,1/2)\neq 0$. The existence of such $j_0$ is guaranteed by \cite[Thm 1.2]{Balkanova:2019}, \cite[Thm 1]{Tang:2012}, \cite[Thm 7.1.1]{Ng:2016a}. We are therefore left to show that the integral \begin{equation}\int_{0}^{\infty}\psi'(v)v^{\delta_1(1/2-2ir_{j_{0}})}dv\neq 0. \end{equation} Integrating by parts, we need to prove that \begin{equation}\int_{0}^{\infty}\psi(v)v^{\delta_1/2-1}\exp(-2ir_{j_{0}}\delta_1\log{v})dv\neq 0. \end{equation} For this, it is enough to show that the real part of the integral is non-zero. 
But the real part equals \begin{equation} \int_{1/2}^{2}\psi(v)v^{\delta_1/2-1}\cos(2r_{j_{0}}\delta_1\log{v})dv, \end{equation} which is positive for $\delta_1$ small enough (depending on $r_{j_0}$) since the integrand is continuous and positive. Finally, we choose the parameter $R=X^{{\varepsilon}}$ such that all error terms are negligible: \begin{equation} R^{{\varepsilon}}\left(\frac{1}{R^{1/2}X^{3/2}}+\frac{X^{1/2}}{R^{1/2}\delta_1}+\frac{R^{5/2}}{X^{3/2}}+\frac{R^2}{\sqrt{X}}+\frac{R^{3/2}}{X^{1/2}}+\frac{X^{1/2}}{R^{1/2}}\right)\ll X^{1/2-{\varepsilon}}. \end{equation} \end{proof} \begin{lem}\label{lemma: holom-cont-contrib} For any ${\varepsilon}>0$, the following estimate holds: \begin{equation}\label{eq:contspect} Z_H(1/2)+Z_{C}(1/2)\ll \frac{X^{1/2}}{\log{X}}. \end{equation} \end{lem} \begin{proof} The contribution of $Z_H(1/2)$ is negligible due to the rapid decay of $g(\omega_X;k)$; see \eqref{gest2}. Let us consider $Z_C(1/2)$. Similarly to the proof of Lemma \ref{lem:discspectr}, the contribution of $|r|>R$ is negligible; therefore it is sufficient to consider \begin{equation} \frac{\zeta(1/2)}{\pi}\int_{-\infty}^{\infty} \frac{\zeta(1/2+2ir)\zeta(1/2-2ir)}{\left|\zeta(1+2ir)\right|^2}h(\omega_X;r)\theta\left(\frac{r}{R}\right)dr, \end{equation} where $\theta(y)$ is a smooth characteristic function of the interval $(-1,1)$. In view of \eqref{eq:defhwr}, in order to prove \eqref{eq:contspect}, it suffices to show that \begin{equation}\label{eq:ingt1} \int_{-\infty}^{\infty} \frac{\zeta(1/2+2ir)\zeta(1/2-2ir)}{\left|\zeta(1+2ir)\right|^2}H(\omega_X;r)\theta\left(\frac{r}{R}\right)X^{-2ir}dr\ll \frac{1}{\log{X}}. \end{equation} To this end, we integrate by parts, obtaining \begin{equation}\label{eq:ingt2} \frac{1}{\log{X}}\int_{-\infty}^{\infty}\Biggl( \frac{\zeta(1/2+2ir)\zeta(1/2-2ir)}{\left|\zeta(1+2ir)\right|^2}H(\omega_X;r)\theta\left(\frac{r}{R}\right)\Biggr)'X^{-2ir}dr. 
\end{equation} Estimating everything in the standard way, we prove \eqref{eq:ingt1}. \end{proof} To summarize, we see from \eqref{expression} combined with Lemma \ref{lem:discspectr} and Lemma \ref{lemma: holom-cont-contrib} that \begin{equation}E_{\delta_1,\delta_2}=\Omega_\pm(\sqrt{X}).\end{equation} Combining this with Lemma \ref{from-unsmooth-to-smooth}, we have proved Theorem \ref{omega}. \section{Relation between conjectures}\label{sec:iw>>pr} In this final section we prove Proposition \ref{iw>>pr}. Our goal is to show that the conjecture \eqref{conjecture3} yields the following estimate: \begin{equation}\label{eq:newestimate} \frac{1}{N}\sum_{n}h(n)\sum_{q=1}^{\infty}\frac{S(n,n;q)}{q} \varphi\left(\frac{4\pi n}{q}\right)\ll T^2(NX)^{{\varepsilon}}, \end{equation} where $\varphi$ is as in \eqref{phi def0}. Indeed, replacing \cite[Theorem 1.1]{BalkanovaFrolenkov:2019a} by \eqref{eq:newestimate} in \cite[Section 4]{BalkanovaFrolenkov:2019a}, we infer that \begin{equation} \sum_{t_j}t_jX^{it_j}\exp(-t_j/T)\ll T^2(TX)^{{\varepsilon}}, \end{equation} which implies the conjecture \eqref{conjecture2}. Now we proceed to prove \eqref{eq:newestimate}. We can write \begin{equation} \exp(ix\cosh{\beta})=\exp(ixb)\exp(-xa), \end{equation} which transforms \eqref{eq:newestimate} into \begin{multline} \frac{8\pi}{N}\sinh^2{\beta} \sum_{n}h(n)\sum_{q=1}^{\infty}\frac{S(n,n;q)}{q} \frac{n^2}{q^2}\exp\left( ib\frac{4\pi n}{q}\right)\exp\left( -a\frac{4\pi n}{q}\right). \end{multline} Applying partial summation, we have \begin{multline}\label{exp:partsum} \frac{8\pi}{N}\sinh^2{\beta} \sum_{n}h(n)n^2\int_{1}^{\infty}\Biggl( \sum_{q<y}\frac{S(n,n,q)}{q}\exp\left( ib\frac{4\pi n}{q}\right)\Biggr)\\ \times \frac{\partial}{\partial y}\Biggl(\frac{1}{y^2} \exp\left( -a\frac{4\pi n}{y}\right)\Biggr)dy. 
\end{multline} Since $b\asymp \sqrt{X}/2$, conjecture \eqref{conjecture3} implies that the inner sum can be estimated as follows \begin{equation} \sum_{q<y}\frac{S(n,n,q)}{q}\exp\left( ib\frac{4\pi n}{q}\right)\ll (n^2\sqrt{X}y)^{{\varepsilon}}. \end{equation} Then \eqref{exp:partsum} can be bounded by a constant times \begin{multline} \frac{\sinh^2{\beta}}{N} \sum_{n}h(n)n^2\int_{1}^{\infty}(n^2\sqrt{X}y)^{{\varepsilon}}\Biggl[\frac{1}{y^3}\exp\left( -a\frac{4\pi n}{y}\right)+\frac{an}{y^4}\exp\left( -a\frac{4\pi n}{y}\right)\Biggr]dy \\ \ll \frac{X}{N}\sum_{n}h(n)n^2\frac{(nX)^{{\varepsilon}}}{(na)^2}\ll T^2(NX)^{{\varepsilon}}, \end{multline} where we have used that $a \asymp \frac{\sqrt{X}}{4T}$. This concludes the proof of Proposition \ref{iw>>pr}. \section*{Acknowledgement} The research of Olga Balkanova was funded by RFBR, project number 19-31-60029. The research of Morten S. Risager was supported by the Independent Research Fund Denmark DFF-7014-00060B. \bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction} \IEEEPARstart{R}{ecognition} of the modulation format of unknown received signals is an important functionality of intelligent, or cognitive, radios for both military and civilian applications \cite{Dobre}. In most practical scenarios, the modulation classification task must cope with the fact that transmission takes place over an unknown frequency-selective channel. A generalized likelihood ratio test is proposed in \cite{N.lay} to tackle this problem. The method, however, fails to identify nested signal constellations such as \textcolor{black}{QPSK and 8-PSK} \cite{Panagiotou}, \cite{Hameed}. An alternative approach is to use a preliminary blind equalization stage in order to compensate for the frequency-selective channel \cite{Dobre}, \cite{Thomas}. The joint estimation of channel and modulation is analytically intractable, and hence Markov Chain Monte Carlo (MCMC) methods \cite{Doucet} provide viable solutions. A classification method based on MCMC was proposed in \cite{Thomas} for a single-antenna system by leveraging Gibbs sampling and by introducing a ``superconstellation'' in order to solve the convergence problem of conventional Gibbs sampling (see Sec. \ref{sec:Preliminaries}). In this paper, a novel Bayesian modulation classification scheme is proposed for a single-antenna system over frequency-selective fading channels. The method is based on Gibbs sampling as applied to a latent Dirichlet Bayesian network (BN). The use of the proposed latent Dirichlet BN provides a systematic solution to the convergence problem encountered by the conventional Gibbs sampling approach. The method in \cite{Thomas} based on ``superconstellation'' can be interpreted as an approximation of the proposed approach. Furthermore, with an appropriate selection of the prior distribution of the latent variable, our technique is shown to improve over \cite{Thomas}. 
\textit{Notation}: The superscripts \textit{T} and \textit{H} are used to denote matrix or vector transpose and Hermitian (conjugate) transpose, respectively. We use the notation $\propto$ to denote proportionality. The cardinality of a set $\mathcal{S}$ is denoted as $\left|\mathcal{S}\right|$. Notation $\mathrm{\mathbf{1}}\left(\cdot\right)$ represents the indicator function. The following notations for some important distributions will be used: we denote by $\mathcal{CN}(\mathbf{\boldsymbol{\mu}},\mathbf{C})$ the circularly symmetric complex Gaussian distribution with mean vector $\boldsymbol{\mu}$ and covariance matrix $\mathbf{C}$; we use $\mathcal{U}(\mathcal{S})$ to denote a uniform distribution on a set $\mathcal{S}$, i.e., all elements of $\mathcal{S}$ are equiprobable; notation $\mathrm{Cat}{}_{\mathcal{S}}\left(\mathbf{p}\right)$ is used for a categorical distribution on a set $\mathcal{S}$ with a length-$\left|\mathcal{S}\right|$ vector $\mathbf{p}$ of probabilities; the inverse gamma distribution with shape parameter $a$ and scale parameter $b$ is denoted as $\mathcal{IG}\left(a,b\right)$. \section{System Model} We consider a single-antenna system over a frequency-selective fading channel. The transmitted symbols $s_{n}$ are taken from a finite constellation $A$, such as $M$-PSK or $M$-QAM. We assume that the constellation $A$ belongs to a known finite set $\mathcal{A}$ of possible constellations. The received baseband signal $r\left(t\right)$ at the output of the matched filter is given by \begin{equation} r\left(t\right)=\sum_{n=-\infty}^{\infty}s_{n}h\left(t-nT\right)+w(t),\label{eq:original system} \end{equation} where $h\left(t\right)$ represents the overall pulse shape, which includes the effect of transmit and receive filters and of the channel \cite{Barry}, and $w\left(t\right)$ is complex white Gaussian noise with zero mean and variance $\sigma^{2}$. 
Without loss of generality, the average power of the transmitted symbols is assumed to be unity, i.e., $E(\left|s_{n}\right|^{2})=1$. Moreover, the pulse shape $h\left(t\right)$ is assumed to be of finite duration $LT$, for a given integer $L$. Sampling at multiples of the symbol period $T$, the $k$th received baseband sample is \begin{equation} r\left(kT\right)=\sum_{n=-\infty}^{\infty}s_{n}h\left(kT-nT\right)+w(kT).\label{eq: integer samples} \end{equation} Processing is performed in blocks of size $N$. Within this block, the received samples (\ref{eq: integer samples}) can be collected in an $N\times1$ vector $\mathbf{r}=[r(0),r(T),\cdots,r(\left(N-1\right)T)]^{T},$ which, using (\ref{eq: integer samples}), can be written as \begin{equation} \mathbf{r}=\mathbf{Sh\mathbf{\textrm{+}w}}.\label{eq:matrix rep} \end{equation} In (\ref{eq:matrix rep}), we have defined the $L\times1$ channel vector $\mathbf{h}$ as $\mathbf{h}=[h(0),h(T),\cdots,h(\left(L-1\right)T)]^{T}$; the vector $\mathbf{w}=[w(0),w(T),\cdots,w(\left(N-1\right)T)]^{T}\sim\mathcal{CN}(\mathbf{0},\sigma^{2}\mathbf{I})$ is the complex white Gaussian noise; and the $N\times L$ transmitted symbol convolution matrix is defined as \begin{equation} \mathbf{S}=\left[\mathbf{s}_{0},\mathbf{s}_{1},\cdots,\mathbf{s}_{N-1}\right]^{T}, \end{equation} where $\mathbf{s}_{n}=[s_{n},s_{n-1},\cdots,s_{n-L+1}]^{T}.$ Given the received signal vector $\mathbf{r}$ in (\ref{eq:matrix rep}), the goal of the receiver is to classify the modulation format $A$ while being uninformed about the transmitted symbols $\mathbf{s}$, defined as $\mathbf{s}=\left\{ s_{n}\right\} _{n=-L+1}^{N-1}$, the channel vector $\mathbf{h}$ and the noise power $\sigma^{2}$. \section{Preliminaries\label{sec:Preliminaries}} In this work, as in \cite{Thomas}, we perform the modulation classification task outlined above by using a Bayesian approach via MCMC methods. In this section, we review some key preliminary concepts. 
\subsection{Bayesian Approach} The joint posterior probability density function (pdf) of the unknown quantities $(A,\mathbf{s},\mathbf{\mathrm{\mathrm{\mathbf{h}},}\sigma}^{2})$ can be expressed as \begin{equation} p\left(A,\mathbf{s},\mathbf{h\mathrm{,\sigma^{2}}}\Big|\mathbf{r}\right)\propto p\left(\mathbf{r}\Big|A,\mathbf{s},\mathbf{h\mathrm{,\sigma^{2}}}\right)p\left(A,\mathbf{s},\mathbf{h},\mathbf{\mathrm{\sigma^{2}}}\right),\label{eq:posterior} \end{equation} where the likelihood function $p(\mathbf{r}|A,\mathbf{s},\mathbf{h\mathrm{,\sigma^{2}}})$ is such that \begin{equation} \mathbf{r}\Big|\left(A,\mathbf{s},\mathbf{h\mathrm{,\sigma^{2}}}\right)\sim\mathcal{CN}(\mathbf{Sh},\sigma^{2}\mathbf{I}),\label{eq:likelihood r} \end{equation} and the term $p(A,\mathbf{s},\mathbf{h},\mathbf{\mathrm{\sigma^{2}}})$ represents the available prior information on the unknown variables. We assume that this prior distribution factorizes as \begin{equation} p(A,\mathbf{s},\mathbf{h},\mathbf{\mathrm{\sigma^{2}}})=p\left(A\right)\left\{ \prod_{n=-L+1}^{N-1}p\left(s_{n}|A\right)\right\} p\left(\mathbf{h}\right)p\left(\sigma^{2}\right),\label{eq:7priors} \end{equation} where we have $A\sim\mathcal{U}\left(\mathcal{A}\right)$, $s_{n}|A\sim\mathcal{U}(A)$, $\mathbf{h}\sim\mathcal{CN}(\mathbf{0},\alpha\mathbf{I})$ and $\sigma^{2}\sim\mathcal{IG}\left(\alpha_{0},\beta_{0}\right),$ where $(\alpha,\alpha_{0},\beta_{0})$ are fixed parameters. One typically selects $\alpha$ and $\beta_{0}$ to be sufficiently large and $\alpha_{0}$ to be sufficiently small in order to obtain uninformative priors \cite{Thomas}. 
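For concreteness, the generative model defined by the likelihood (\ref{eq:likelihood r}) and the priors (\ref{eq:7priors}) can be simulated as follows. This is an illustrative sketch only (not the authors' code): QPSK is taken as the active constellation, and $N$, $L$ and the hyperparameters are placeholder values rather than the broad, uninformative settings discussed above.

```python
import math
import random

random.seed(0)
N, L = 16, 3
alpha, alpha0, beta0 = 10.0, 2.0, 2.0     # placeholder hyperparameters

def crandn(var):
    """One circularly symmetric complex Gaussian sample with variance var."""
    s = math.sqrt(var / 2.0)
    return complex(random.gauss(0.0, s), random.gauss(0.0, s))

# Unit-power QPSK alphabet and i.i.d. symbols s_{-L+1}, ..., s_{N-1}
qpsk = [complex(a, b) / math.sqrt(2.0) for a in (1, -1) for b in (1, -1)]
s = [random.choice(qpsk) for _ in range(N + L - 1)]

h = [crandn(alpha) for _ in range(L)]                    # h ~ CN(0, alpha I)
sigma2 = 1.0 / random.gammavariate(alpha0, 1.0 / beta0)  # sigma^2 ~ IG(a0, b0)

# r_k = sum_l h_l s_{k-l} + w_k; the index shift maps s_n to s[n + L - 1]
r = [sum(h[l] * s[k - l + L - 1] for l in range(L)) + crandn(sigma2)
     for k in range(N)]
print(len(r))   # N = 16 received samples
```

The inverse-gamma draw uses the standard fact that if $X\sim$ Gamma$(a,\mathrm{scale}=1/b)$ then $1/X\sim\mathcal{IG}(a,b)$.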
The Bayesian approach to modulation classification aims at estimating the posterior probability of the modulation $A$ when conditioned on the received signal $\mathbf{r}$, namely \begin{equation} p\left(A|\mathbf{r}\right)=\sum_{\mathbf{s}}\int p\left(A,\mathbf{s},\mathbf{h\mathrm{,\sigma^{2}}}|\mathbf{r}\right)d\mathbf{h}d\sigma^{2}.\label{eq:original Bayesian query} \end{equation} The computation of (\ref{eq:original Bayesian query}) involves a multidimensional integration, which is generally infeasible. In the following, we illustrate how this task can be accomplished by MCMC techniques. \subsection{Bayesian Network } In order to facilitate the introduction of MCMC methods for modulation classification in the next section, we first recall some basic facts about BNs \cite{Koller}. A BN is a directed graph, whose nodes are the random variables in the domain of interest and whose edges encode the direct probabilistic influence of one variable on another. Specifically, for a set of random variables $\left\{ X_{k}\right\} _{k=1}^{K}$, a BN encodes a factorization of the joint distribution of the variables at hand of the form \begin{equation} p\left(X_{1},\cdots,X_{K}\right)=\prod_{k=1}^{K}p\left(X_{k}|\mathrm{P}\mathrm{a}_{X_{k}}\right),\label{eq:BN' chain rule} \end{equation} where $\mathrm{P}\mathrm{a}_{X_{k}}$ represents a subset of the variables $\left(X_{1},\cdots,X_{k-1}\right)$. By the chain rule, the factorization (\ref{eq:BN' chain rule}) states that, when conditioning on all the variables $\left(X_{1},\cdots,X_{k-1}\right)$, each variable $X_{k}$ is only influenced by the ``parent'' variables $\mathrm{P}\mathrm{a}_{X_{k}}$ (i.e., we have the Markov chain $\left(X_{1},\cdots,X_{k-1}\right)-\mathrm{P}\mathrm{a}_{X_{k}}-X_{k}$). This statistical dependence between variable $X_{k}$ and the set of parent variables $\mathrm{P}\mathrm{a}_{X_{k}}$ is encoded in the BN by introducing a directed edge between all variables $\mathrm{P}\mathrm{a}_{X_{k}}$ and $X_{k}$. 
As an example, the BN encoding factorization (\ref{eq:posterior}), (\ref{eq:7priors}) is shown in Fig. \ref{fig:BN-for-modulation}. \subsection{Markov Chain Monte Carlo } MCMC methods provide a general approach for generating $M$ samples $\mathbf{x}^{\left(1\right)},\cdots,\mathbf{x}^{\left(M\right)}$ from an arbitrary target distribution $p\left(\mathbf{X}\right)$ with the aim of estimating ensemble averages, and hence multidimensional integrals such as in (\ref{eq:original Bayesian query}). MCMC methods simulate a Markov chain whose equilibrium distribution is $p\left(\mathbf{X}\right)$ in order to produce such samples \cite{Koller}. For instance, the marginal of a joint distribution $p\left(\mathbf{X}\right)$ with respect to any variable $X_{i}$ in $\mathbf{X}$ can be estimated by MCMC as \begin{equation} p\left(X_{i}=x_{i}\right)\approx\frac{1}{M}\sum_{m=M_{0}+1}^{M_{0}+M}\mathrm{\mathbf{1}}\left(\mathbf{x}_{i}^{\left(m\right)}=x_{i}\right),\label{eq:marginalization} \end{equation} where $\mathbf{x}_{i}^{\left(m\right)}$ is the $i$th element of the $m$th sample of the simulated Markov chain and $x_{i}$ is a value in the domain of $X_{i}$. Note that, in (\ref{eq:marginalization}), the first $M_{0}$ samples generated by the Markov chain are not used in order to limit the impact of the initialization. Gibbs sampling \cite{Koller} is a classical MCMC algorithm, whereby, at each step $m$, a new sample of a given random variable $X_{i}$ is generated according to the conditional distribution $p(X_{i}|\mathbf{X}_{-i}=\mathbf{x}_{-i}^{\left(m-1\right)})$, where $\mathbf{X}_{-i}$ denotes all variables in $\mathbf{X}$ except $X_{i}$, which are fixed to the current value $\mathbf{x}_{-i}^{\left(m-1\right)}$. Gibbs sampling is known to provide asymptotically correct estimates (\ref{eq:marginalization}) (with probability one) under appropriate conditions. 
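As a toy illustration of the estimator (\ref{eq:marginalization}) (a two-variable example with made-up weights, unrelated to the modulation problem), consider Gibbs sampling on $\{0,1\}^{2}$ with strictly positive conditionals:

```python
import random

random.seed(1)
w = [[4.0, 2.0], [1.0, 3.0]]      # unnormalized joint p(x1, x2); all positive

def draw(p0):
    """Return 0 with probability p0, else 1."""
    return 0 if random.random() < p0 else 1

M0, M = 1000, 20000               # burn-in and retained samples
x1 = x2 = 0
count = 0
for m in range(M0 + M):
    x1 = draw(w[0][x2] / (w[0][x2] + w[1][x2]))   # sample x1 | x2
    x2 = draw(w[x1][0] / (w[x1][0] + w[x1][1]))   # sample x2 | x1
    if m >= M0 and x1 == 0:
        count += 1

estimate = count / M              # estimates p(X1 = 0) = 6/10
print(estimate)                   # close to the true marginal 0.6
```

Discarding the first $M_{0}$ sweeps and counting over the remaining $M$ reproduces the true marginal to within Monte Carlo error.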
A sufficient condition for convergence is that the conditional distributions $p(X_{i}|\mathbf{X}_{-i})$ are strictly positive in their domains for all $i$ \cite[Ch. 12]{Koller}. As we will see, this condition is not satisfied by the distribution (\ref{eq:posterior}) for the problem under study. This suggests that more sophisticated strategies than conventional Gibbs sampling are needed, as discussed in the next section. \section{Gibbs Sampling for Modulation Classification\label{sec:Bayesian-Modulation-Classificati}} In this section, we design a Gibbs sampler that performs modulation classification in the presence of frequency-selective fading. As outlined in the previous section, the goal is to estimate the posterior probability $p\left(A|\mathbf{r}\right)$ in (\ref{eq:original Bayesian query}). \subsection{Conventional Gibbs Sampling} We first elaborate on the conventional Gibbs sampler for the calculation of the posterior $p\left(A|\mathbf{r}\right)$ that is based directly on the joint distribution (\ref{eq:posterior})-(\ref{eq:7priors}). The corresponding BN $\mathcal{G}_{1}$ is shown in Fig. \ref{fig:BN-for-modulation}. \begin{figure}[htbp] \begin{centering} \textsf{\includegraphics[width=7.5cm,height=4cm]{Fig1}} \par\end{centering} \caption{\label{fig:BN-for-modulation}BN \textsl{$\mathcal{G}_{1}$} for modulation classification based on the factorization (\ref{eq:posterior}).} \end{figure} As per the discussion in the previous section, Gibbs sampling requires the knowledge of the conditional distributions of each variable given all others. Calculating these conditional probabilities requires multiplying all the factors in the factorization (\ref{eq:posterior})-(\ref{eq:7priors}) that contain the variable of interest and then normalizing the resulting distribution \cite[Ch. 12]{Koller}. 
This leads to the following (see also \cite{Thomas}): \begin{equation} p\left(A\Big|\mathbf{s},\mathbf{\mathbf{\mathbf{h\mathrm{,\sigma^{2}}},\mathbf{r}}}\right)\propto p\left(A\right)\left\{ \prod_{n=-L+1}^{N-1}p\left(s_{n}|A\right)\right\} ,\label{eq:conditonal A} \end{equation} \begin{equation} p\left(s_{n}\Big|A,\mathbf{s}_{-n},\mathbf{h\mathrm{,\sigma^{2},}}\mathbf{r}\right)\propto p\left(s_{n}|A\right)p\left(\mathbf{r}\Big|A,\mathbf{s},\mathbf{h\mathrm{,\sigma^{2}}}\right),\label{eq:conditional sn} \end{equation} \begin{equation} \mathbf{h}\Big|\left(A,\mathbf{s},\mathrm{\sigma^{2},}\mathbf{r}\right)\sim\mathcal{CN}(\mathbf{h}_{*},\boldsymbol{\Sigma}_{*}),\label{eq:conditional h} \end{equation} \begin{equation} \mathrm{and}\,\,\sigma^{2}\Big|\left(A,\left\{ s_{n}\right\} _{n=-L+1}^{N-1},\mathbf{\mathbf{\mathbf{h\mathrm{,}}\mathbf{r}}}\right)\sim\mathcal{IG}\left(\alpha,\beta\right),\label{eq:contional sigma} \end{equation} where $\mathbf{h}_{*}=\boldsymbol{\Sigma}_{*}(\mathbf{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{h}_{0}+\frac{\mathbf{S}^{H}\mathbf{r}}{\sigma^{2}}),$ $\mathbf{\boldsymbol{\Sigma}}_{*}^{-1}=\mathbf{\boldsymbol{\Sigma}}_{0}^{-1}+\frac{\mathbf{S}^{H}\mathbf{S}}{\sigma^{2}},$ $\alpha=\alpha_{0}+N$ and $\beta=\beta_{0}+\left\Vert \mathbf{r}-\mathbf{Sh}\right\Vert ^{2}$. Note that (\ref{eq:conditional h}) follows from standard MMSE estimation results (see, e.g., \cite{Koller}) and that (\ref{eq:contional sigma}) is a consequence of the fact that the inverse gamma distribution is the conjugate prior for the Gaussian likelihood \cite{Mruphy}. Gibbs sampling starts with an arbitrary feasible initialization for all variables. In particular, one needs to initialize the constellation $A$ to some value $A=a$ and, correspondingly, the transmitted symbols $\mathbf{s}$ to symbol values belonging to the constellation $a$. 
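Among the updates above, the scalar draw (\ref{eq:contional sigma}) is easy to sketch in isolation. The snippet below is illustrative only: `residual_energy` stands for the current value of $\left\Vert \mathbf{r}-\mathbf{Sh}\right\Vert ^{2}$, and the numerical values are placeholders.

```python
import random

random.seed(3)
alpha0, beta0, N = 0.01, 10.0, 100
residual_energy = 52.0            # placeholder for ||r - S h||^2

alpha_post = alpha0 + N
beta_post = beta0 + residual_energy
# A draw X ~ Gamma(shape=a, scale=1/b) gives 1/X ~ IG(a, b)
sigma2 = 1.0 / random.gammavariate(alpha_post, 1.0 / beta_post)
print(sigma2 > 0)                 # True; posterior mean is b / (a - 1)
```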
Using (\ref{eq:conditonal A}) and (\ref{eq:conditional sn}), it can be easily seen that conventional Gibbs sampling will never select values of $A$ different from the initial value $a$. This is due to the fact that the conditional distribution $p\left(s_{n}|A\right)$ gives zero probability to all values of $s_{n}$ not belonging to $A$. As a result, Gibbs sampling fails to converge to the posterior distribution (see also Sec. \ref{sec:Preliminaries}). Next, we demonstrate how this problem can be solved by the proposed approach based on the latent Dirichlet BN. \subsection{Gibbs Sampling Based on Latent Dirichlet BN} In order to avoid the problem described above, we propose to base the Gibbs sampler on the BN $\mathcal{G}_{2}$ shown in Fig. \ref{fig:New BN}. In this BN, each transmitted symbol $s_{n}$ is distributed according to a random mixture of uniform distributions on the different constellations. Specifically, we introduce a random vector $\mathbf{P}_{A}$ to represent the mixture weights, so that $\mathbf{P}_{A}\left(a\right)$ is the probability that $s_{n}$ takes values in the constellation $a\in\mathcal{A}$. The prior distribution of $\mathbf{P}_{A}$ is Dirichlet, so that we have $\mathbf{P}_{A}\sim\mathrm{Dirichlet}\left(\boldsymbol{\gamma}\right)$ for a given set of nonnegative parameters $\boldsymbol{\gamma}=[\gamma_{1},\cdots,\gamma_{\left|\mathcal{A}\right|}]^{T}$ \cite{Koller}\footnote{Intuitively, the parameter $\gamma_{a}$ can be interpreted as the number of symbols in constellation $a\in\mathcal{A}$ observed during some preliminary measurements.}. When conditioned on $\mathbf{P}_{A}$, the transmitted symbol variables $s_{n}$ are independent and distributed according to a mixture of uniform distributions, i.e., $p\left(s_{n}|\mathbf{P}_{A}\right)=\sum_{a:\, s_{n}\in a}\mathbf{P}_{A}\left(a\right)/\left|a\right|$. 
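The failure mode, and the way the mixture avoids it, can be seen in a toy numerical contrast (two disjoint made-up "constellations", not those of the paper): the conventional conditional (\ref{eq:conditonal A}) vanishes for any constellation that disagrees with a single current symbol, whereas $p\left(s_{n}|\mathbf{P}_{A}\right)$ is strictly positive for every symbol of every constellation with positive weight.

```python
A1, A2 = [1.0, -1.0], [1.0j, -1.0j]      # two disjoint toy "constellations"
constellations = [A1, A2]

def p_A_given_s_conventional(A, s):
    # proportional to prod_n p(s_n | A); zero once any s_n lies outside A
    prob = 1.0
    for sym in s:
        prob *= (1.0 / len(A)) if sym in A else 0.0
    return prob

def p_s_given_PA(sym, P_A):
    # mixture of uniforms: sum over the constellations containing sym
    return sum(P_A[i] / len(a)
               for i, a in enumerate(constellations) if sym in a)

s = [1.0, -1.0, 1.0]                     # chain initialized inside A1
print(p_A_given_s_conventional(A2, s))   # 0.0: A2 can never be sampled
print(p_s_given_PA(1.0j, [0.5, 0.5]))    # 0.25: never zero under the mixture
```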
The BN $\mathcal{G}_{2}$, while departing from the original model (\ref{eq:posterior})-(\ref{eq:7priors}), has the advantage that a Gibbs sampler based on it is not limited by the zeros present in the distribution (\ref{eq:posterior})-(\ref{eq:7priors}). In particular, thanks to the introduction of the latent variable $\mathbf{P}_{A}$, Gibbs sampling is able to explore different constellations irrespective of its initialization. The idea of introducing the latent Dirichlet variable $\mathbf{P}_{A}$ is inspired by \cite{Blei}, where a similar quantity was used to account for the distribution of topics within a document. According to the BN in Fig. \ref{fig:New BN}, the joint pdf $p\left(\mathbf{P}_{A},\mathbf{s},\mathbf{h}\mathrm{,\sigma^{2}},\mathbf{r}\right)$ can be factorized as \begin{align} & p\left(\mathbf{P}_{A},\mathbf{s},\mathbf{h}\mathrm{,\sigma^{2}},\mathbf{r}\right)\nonumber \\ = & p\left(\mathbf{P}_{A}\right)\left\{ \prod_{n=-L+1}^{N-1}p\left(s_{n}|\mathbf{P}_{A}\right)\right\} p\left(\mathbf{h}\right)\cdot\nonumber \\ & \cdot p\left(\sigma^{2}\right)p\left(\mathbf{r}\Big|\mathbf{s},\mathbf{h\mathrm{,\sigma^{2}}}\right),\label{eq: factorization 2} \end{align} where, as discussed above, we have $\mathbf{P}_{A}\sim\mathrm{Dirichlet}(\boldsymbol{\gamma})$ and $p\left(s_{n}|\mathbf{P}_{A}\right)=\sum_{a:\, s_{n}\in a}\mathbf{P}_{A}\left(a\right)/\left|a\right|$, while the remaining conditional distributions are as in (\ref{eq:likelihood r}) and (\ref{eq:7priors}). 
\begin{figure}[htbp] \begin{centering} \textsf{\includegraphics[width=7.5cm,height=4cm]{Fig2}} \par\end{centering} \caption{\label{fig:New BN}BN \textsl{$\mathcal{G}_{2}$} for the proposed modulation classification scheme based on the Dirichlet latent variable $\mathbf{P}_{A}$.} \end{figure} To apply Gibbs sampling based on the factorization (\ref{eq: factorization 2}), the conditional distributions for $\mathbf{P}_{A}$, $s_{n}$, $\mathbf{h}$ and $\mathrm{\sigma^{2}}$ conditioned on all other variables are required. It can be shown that the conditional distributions for $\mathbf{h}$ and $\sigma^{2}$ are (\ref{eq:conditional h}) and (\ref{eq:contional sigma}), respectively. The other required conditional distributions of $\mathbf{P}_{A}$ and $s_{n}$ are as follows: \begin{equation} \mathbf{P}_{A}\Big|\left(\mathbf{s},\mathbf{h}\mathrm{,\sigma^{2}},\mathbf{r}\right)\sim\mathrm{Dirichlet}\left(\boldsymbol{\gamma}+\mathbf{c}\right),\label{eq:conditonal Pa} \end{equation} where $\mathbf{c}=\left[c_{1},\cdots,c_{\left|\mathcal{A}\right|}\right]^{T}$ and $c_{a}$ is the number of symbols that belong to constellation $a\in\mathcal{A}$; \begin{align} \mathrm{and}\,\, & p\left(s_{n}\Big|\mathbf{P}_{A},\left\{ s_{k}\right\} _{k\neq n}^{N-1},\mathbf{h\mathrm{,\sigma^{2},}}\mathbf{r}\right)\nonumber \\ \propto & p\left(s_{n}|\mathbf{P}_{A}\right)p\left(\mathbf{r}\Big|\mathbf{s},\mathbf{h\mathrm{,\sigma^{2}}}\right). \end{align} Note that (\ref{eq:conditonal Pa}) follows from the fact that the mixture component indicator is categorically distributed and that the Dirichlet distribution is the conjugate prior for the categorical distribution \cite{Koller}. The task of modulation classification is achieved by computing the posterior distribution $p\left(\mathbf{P}_{A}|\mathbf{r}\right)$ following the Gibbs procedure discussed above. 
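A single update of the latent weights according to (\ref{eq:conditonal Pa}) can be sketched as follows (illustrative counts for a three-constellation set; the Dirichlet draw uses the standard normalized-Gamma construction rather than any particular library routine):

```python
import random

random.seed(7)

def sample_dirichlet(params):
    """Dirichlet sample via normalized independent Gamma draws."""
    draws = [random.gammavariate(p, 1.0) for p in params]
    total = sum(draws)
    return [d / total for d in draws]

gamma = [1.0, 1.0, 1.0]        # prior hyperparameters (placeholder values)
counts = [80, 15, 5]           # c_a: symbols currently in each constellation
P_A = sample_dirichlet([g + c for g, c in zip(gamma, counts)])
print(abs(sum(P_A) - 1.0) < 1e-9)   # True: a valid probability vector
```

With these counts, the posterior mass of $\mathbf{P}_{A}$ concentrates on the first constellation, as expected from the conjugacy argument above.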
From the posterior $p\left(\mathbf{P}_{A}|\mathbf{r}\right)$, we can then obtain an estimate $\hat{A}$ for the constellation as \begin{equation} \hat{A}=\arg\max_{a\in\mathcal{A}}E\left[\mathbf{P}_{A}\left(a\right)\mid\mathbf{r}\right], \end{equation} where the expectation is taken over the distribution $p\left(\mathbf{P}_{A}|\mathbf{r}\right)$. \textit{Remark 1}: The method proposed in \cite{Thomas}, based on the introduction of a ``superconstellation'', can be seen as an approximation of the approach presented above. Specifically, the scheme of \cite{Thomas} is obtained by setting $\boldsymbol{\gamma}=\mathbf{0}$ and by choosing $\mathbf{P}_{A}$ to be equal to $\mathbf{c}/\sum_{a=1}^{\left|\mathcal{A}\right|}c_{a}$ at each Gibbs iteration, where we recall that $c_{a}$ is the number of symbols that belong to constellation $a\in\mathcal{A}$. Furthermore, the computational complexity of the proposed scheme is comparable to that of the superconstellation Gibbs sampler \cite[Table I]{Thomas}. \textit{Remark 2}: At high signal-to-noise ratios (SNR), the relationship between $\mathbf{r}$ and $\mathbf{s}$ defined by (\ref{eq:likelihood r}) is almost deterministic. Following the discussion in the previous section, this may create convergence problems. This issue can be tackled via the idea of annealing \cite[Ch. 12]{Koller}. Accordingly, the distribution (\ref{eq:likelihood r}) is modified as $\mathbf{r}\Big|\left(A,\mathbf{s},\mathbf{h\mathrm{,\sigma^{2}}}\right)\sim\mathcal{CN}(\mathbf{Sh},\rho\sigma^{2}\mathbf{I})$, where $\rho$ is a ``temperature'' parameter. The procedure starts with a high value of $\rho$ to prevent the mentioned convergence problems, and then cools to a lower temperature to produce the desired target distribution. Effective cooling schedules include logarithmic and linear decreases of the temperature, whose parameters can be determined based on preliminary runs \cite{Nourani}. 
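The two cooling schedules mentioned above can be sketched as follows (the starting temperature $\rho_{0}=10$ and the sweep counts are illustrative; in practice they would be tuned in the preliminary runs):

```python
import math

def rho_logarithmic(m, rho0=10.0):
    # slow logarithmic cooling toward rho = 1 (the target distribution)
    return max(1.0, rho0 / math.log(m + math.e))

def rho_linear(m, rho0=10.0, steps=100):
    # linear decrease from rho0 to 1 over a fixed number of Gibbs sweeps
    return max(1.0, rho0 - (rho0 - 1.0) * m / steps)

print(rho_linear(0), rho_linear(100))   # 10.0 1.0
```

Both schedules clamp at $\rho=1$, at which point the annealed likelihood coincides with the original model (\ref{eq:likelihood r}).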
\section{Numerical Results and Concluding Remarks} In this section, we evaluate the performance of the proposed scheme for the recognition of three modulation formats, namely QPSK, 8-PSK and 16-QAM. We assume Rayleigh fading channels, which are normalized so that $E[\left\Vert \mathbf{h}\right\Vert ^{2}]=1$. The average SNR is defined as SNR$=1/\mathrm{\sigma^{2}}$. The number of samples used by Gibbs sampling are $M=300$ and $M_{0}=100$ in (\ref{eq:marginalization}). No annealing is used. The performance criterion of interest is probability of correct classification (PCC). In Fig. \ref{fig:PCC1}, we plot the PCC for $L=3$ independent taps with relative powers given by $\left[0\mathrm{dB},-0.9\mathrm{dB},-4.9\mathrm{dB}\right]$. The performance of the proposed method is compared to the superconstellation Gibbs sampler of \cite{Thomas}. The prior distribution $\mathrm{Dirichlet}\left(\boldsymbol{\gamma}\right)$ for $\mathbf{P}_{A}$ is selected so that all elements of the vector $\mathbf{\mathbf{\boldsymbol{\gamma}}}$ are identical and equal to a parameter $\gamma$. In order to investigate the impact of prior distributions for $\mathbf{P}_{A}$ on the classification performance, five values, $0.1$, $0.5$, $1$, $10$ and $15$ are considered for $\gamma$. \textcolor{black}{It is observed that the performance is enhanced with a larger $\gamma$, especially for higher SNR values. This is because increasing $\gamma$ enhances the relative importance of the prior distribution and helps improve the convergence properties of the algorithm (see Remark 2).} \textcolor{black}{We observe that in practice the value of the hyperparameter $\gamma$, as is the case for all of MCMC methods \cite{Neal}, can be determined based on offline preliminary runs.} \textcolor{black}{Finally, for sufficiently high SNRs, the superconstellation Gibbs sampler achieves a PCC of about $83\%$, while the proposed scheme achieves a PCC of $95\%$ with $\gamma=15$. 
} \begin{figure}[htbp] \begin{centering} \includegraphics[width=8.5cm,height=6.5cm]{Fig3} \par\end{centering} \caption{\label{fig:PCC1}Probability of correct classification for the proposed method and the superconstellation Gibbs sampler of \cite{Thomas} versus SNR with $L=3$ independent taps.} \end{figure} Fig. \ref{fig:PCC1-1} shows the PCC for channels with two paths with non-integer delays $\left[0,1.3T\right]$ and relative powers $\left[0\,\mathrm{dB},-0.9\,\mathrm{dB}\right]$ \cite[Ch. 3]{Goldsmith}. A raised cosine pulse shape with roll-off factor 0.3 is assumed and we set $L=6$. For $N=100$, while there is a performance degradation as compared to the 3-tap channels due to the more severe frequency selectivity, the proposed scheme can still achieve a PCC above $90\%$ at sufficiently large SNR, while the method of \cite{Thomas} achieves a PCC of $80\%$. For $N=400$, both classification schemes obtain performance gains, but the proposed scheme attains a PCC of $95.6\%$, while the superconstellation method achieves a PCC of $90\%$. \begin{figure}[htbp] \begin{centering} \includegraphics[width=8.5cm,height=6.5cm]{Fig4} \par\end{centering} \caption{\label{fig:PCC1-1}Probability of correct classification for the proposed method and the superconstellation Gibbs sampler of \cite{Thomas} versus SNR with two paths and non-integer delays ($L=6$).} \end{figure} In summary, as demonstrated by the numerical results above, the proposed Gibbs sampling method based on a latent Dirichlet Bayesian network provides significant performance gains over the state of the art.
\section{Introduction} Of all the generators of Earth-detectable gravitational waves, core collapse supernovae (CCSN) are among the most interesting. These explosions will produce signals well within Advanced LIGO's \cite{AdvLIGO} sensitivity across a broad range of frequencies, carrying information about a number of phenomena, such as fluid instabilities in the proto-neutron star, neutrino-driven convection beneath the supernova shock wave, the standing accretion shock instability (SASI), deceleration at an aspherical shock, and aspherical neutrino emission (see Ott's review \cite{Ott2009}). This information is invaluable given that gravitational waves and neutrinos are the only carriers of information that can penetrate the heavy material expelled by a SN explosion. We present here preliminary results from two-dimensional simulations carried out with the code CHIMERA, with emphasis on gravitational waveform (GW) generation. \section{Two-Dimensional Simulations with CHIMERA} The CHIMERA code is composed of different modules that handle the hydrodynamics, neutrino transport, self-gravity, a nuclear equation of state, and a nuclear reaction network. Details of these are given in \cite{Messer08, Bruenn09, Yakunin10}. Of special interest in this article is the gravitational physics module, which consists of a spectral Poisson solver used to determine the gravitational field \cite{Muller95} with general relativistic corrections to the spherical component \cite{Marek06}. The gravitational waves produced by CCSN are due to fluid dynamics and neutrino radiation. The details of the GW extraction methods employed in the post-processing analysis of CHIMERA data are given in \cite{Yakunin10}. \begin{figure} \begin{center} \includegraphics[scale=0.8]{figure1.jpg} \caption{{\it Left:} Gravitational wave strain $h_+$ times the distance to the observer $r$ vs. post-bounce time for the 15 $M_{\odot}$ non-rotating progenitor model.
Below, entropy distribution snapshots typical of the {\it prompt}, {\it strong}, and {\it tail} stages of the signal. Note the difference in scale between the left snapshot and the other two.} \label{figure1} \end{center} \end{figure} The results presented here are based on axisymmetric simulations of CCSN for non-rotating 12, 15, and 25 $M_\odot$ progenitors \cite{Woosley07}. The gravitational signals produced by neutrino radiation and matter motion were followed for up to 530 ms after core bounce. The GWs are quantitatively different for every model, but all of them show the same qualitative four stages of development: \begin{itemize} \item A {\bf prompt signal}: an initial and relatively weak signal that starts at bounce and ends between 50 and 75 ms post-bounce. \item A {\bf quiescent stage} that immediately follows the prompt signal and ends somewhere between 125 ms and 175 ms after bounce. \item A {\bf strong signal}, which follows the quiescent stage and is the most energetic part of the GW signal. This stage ends somewhere between 350 ms and 450 ms after bounce. \item A {\bf tail}, which starts before the end of the strong signal, at about 300 ms after bounce, and consists of a slow increase in $rh_{+}$. This tail continues to rise at the end of our runs. \end{itemize} Gravitational waveforms for core collapse supernovae have been obtained by other groups: Marek \etal \cite{Marek09} showed the presence of the first three phases (prior to explosion) and Murphy \etal \cite{Murphy09} presented the same four stages based on parameterized explosions. Since our models are non-parameterized, the GWs presented here allow us to determine the amplitudes and timescales more precisely. Figure \ref{figure1} shows a typical GW (this one corresponding to the 15 $M_\odot$ progenitor) together with snapshots of the entropy distribution corresponding to the three active stages of the signal.
The first stage is composed of the contributions of two signals generated in different places: a high-frequency component produced inside the proto-neutron star (PNS), which has a radius of less than 30 km, and a low-frequency component originating at the shock radius (located at about 100 km at this time). The latter is due to the deflection of infalling matter through the shock, as determined through the use of tracer particles \cite{Yakunin10}. Previous work attributed the prompt signal to PNS convection only \cite{Marek09, Murphy09}. The second stage is a period of relative calm that precedes the stronger phase of the signal: a third stage dominated by neutrino-driven convection and the SASI. During this stage, low-entropy downflows impinge on the PNS surface, creating the spikes shown in Fig. \ref{figure1}. The modulation of these funnels is driven by the SASI, affecting their kinetic energy and, consequently, the amplitude of the GWs generated when these flows hit the PNS. The models end with GW tails of positive slope. These tails are associated with matter deflected at the expanding shock (see Fig. 5 in \cite{Murphy09}); they are characteristic of an explosion and are consistent with the prolate nature of the shock in the final stages of the simulations. (An oblate shock would give rise to a tail of the opposite sign.) \section{Conclusions and Future Work} Our results mark a step forward in GW astrophysics, but it is important to remark on their preliminary nature: we are currently performing a new set of runs using an enhanced version of CHIMERA that will provide an essential test of the validity of the GWs presented here \cite{Bruenn10}. Moreover, the most important limitation of these models is their restriction to axisymmetry. We are currently evolving three-dimensional models, and we anticipate that the greatest change to our GW predictions will be in the phase-4 tail.
This is because prolate explosions are usually present in axisymmetric simulations, where artificial boundary conditions must be imposed that prevent the turnover of material along the symmetry axis. Additionally, the amplitude and timescales associated with the early stages are bound to change too, since three-dimensional simulations of the SASI have shown spiraling flows that cannot be captured by our current 2D models \cite{Blondin07}. \section{Acknowledgements} The authors would like to acknowledge the computational resources provided at the Leadership Computing Facility in the National Center for Computational Sciences at ORNL (INCITE Program) and at TACC (TG-MCA08X010). PM acknowledges partial support from NSF-PHYS-0855315, and PM and SWB acknowledge partial support from an NSF-OCI-0749204 award. AM, OEBM, PM, SWB, and WRH acknowledge partial support from a NASA ATFP award (07-ATFP07-0011). AM and WRH acknowledge support from the Office of Nuclear Physics, U.S. Department of Energy, and AM and OEBM acknowledge support from the Office of Advanced Scientific Computing Research, U.S. Department of Energy. \section*{References}
\section{Introduction} It is often the case that adding some randomness to a combinatorial structure can lead to a significant positive change. Perhaps the most important example of this, and the inspiration for much of what has followed, is the seminal result of Spielman and Teng \cite{ST} on the performance of the simplex algorithm; see also Vershynin \cite{V} and Dadush and Huiberts \cite{DH}. The paper \cite{ST} inspired the following model of Bohman, Frieze and Martin \cite{BFM}. They consider adding random edges to an arbitrary member $G$ of $\mathcal{G}(\alpha)$. Here $\alpha$ is a positive constant and $\mathcal{G}(\alpha)$ is the set of graphs with vertex set $[n]$ and minimum degree at least $\alpha n$. They show that adding $O(n)$ random edges to $G$ is enough to create a Hamilton cycle w.h.p. This is in contrast to the approximately $\frac12n\log n$ edges needed if we rely only on the random edges. Research on this model and its variations has been quite substantial; see for example \cite{BFKM}, \cite{KST}, \cite{SV}, \cite{KKS}, \cite{KKS1}, \cite{BMPP}, \cite{BTW}, \cite{MM}, \cite{BHY}, \cite{HZ}, \cite{DT}, \cite{P}, \cite{DRRS}, \cite{P1}. Anastos and Frieze \cite{AF} introduced a variation on this theme by adding color to the edges. They consider rainbow Hamiltonicity and rainbow connection in the context of a randomly colored dense graph with the addition of randomly colored random edges. Aigner-Horev and Hefetz \cite{AHH} strengthened the Hamiltonicity result of \cite{AF}. In this paper we introduce another variation. We start with a dense graph in which each edge $e$ has a random weight $X(e)$ and add randomly weighted random edges. We study the effect on the minimum weight of various combinatorial structures.
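To convey the flavor of the results that follow, the classical fact that the minimum spanning tree weight of a randomly weighted complete graph tends to $\zeta(3)$ can be checked in a small simulation. The sketch below is our own illustration only (it plays no role in the arguments): it runs Prim's algorithm on $K_n$, essentially the case of maximal degree, with independent uniform $[0,1]$ edge weights.

```python
import random

def mst_weight_complete(n, rng):
    """Prim's algorithm on K_n with i.i.d. uniform [0,1] edge weights.

    Weights are generated lazily: when a vertex u joins the tree, the
    weights of the edges {u, v} to the vertices v still outside the tree
    are drawn, so every edge weight is drawn exactly once.
    """
    in_tree = [False] * n
    cheapest = [float("inf")] * n  # cheapest edge from the tree to v
    cheapest[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]),
                key=cheapest.__getitem__)
        in_tree[u] = True
        total += cheapest[u]
        for v in range(n):
            if not in_tree[v]:
                w = rng.random()  # weight of edge {u, v}
                if w < cheapest[v]:
                    cheapest[v] = w
    return total

rng = random.Random(0)
avg = sum(mst_weight_complete(200, rng) for _ in range(5)) / 5
# avg should be close to zeta(3) = 1.2020569...
```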
We will for simplicity restrict our attention to what we will call $\mathcal{G}_{reg}(\alpha)$, the graphs in $\mathcal{G}(\alpha)$ that are $\alpha n$-regular. \subsection{Spanning Trees} We start with spanning trees. Suppose that $G\in \mathcal{G}_{reg}(\alpha)$ and each edge $e$ of $G$ is given an independent random weight $X(e)$ chosen uniformly from $[0,1]$. Let $mst(G)$ denote the expected minimum weight of a spanning tree of $G$, assuming it is connected. Beveridge, Frieze and McDiarmid \cite{BFM0} and Frieze, Ruszinko and Thoma \cite{FRT} show that, assuming certain connectivity conditions on $G$, \beq{eq1a}{ mst(G)\approx \frac{\zeta(3)}{\alpha}\text{ as $n\to\infty$}, } where for a positive integer $k\geq 2$ we have $\zeta(k)=\sum_{n=1}^\infty n^{-k}$. Here $A_n\approx B_n$ if $A_n=(1+o(1))B_n$ as $n\to \infty$, $A_n\lesssim B_n$ if $A_n\leq (1+o(1))B_n$ as $n\to \infty$, and $A_n\gg B_n$ if $A_n/B_n\to \infty$. Now let $G(m)$ be obtained from $G$ by adding $m$ random edges to $G$. Each added random edge also has an independent uniform $[0,1]$ weight. Also, let $G(p)$ be obtained from $G$ by independently adding randomly weighted copies of edges not in $G$, each with probability $p$. We let $R_m,R_p$ denote the added edges. Our first theorem is a simple extension of \eqref{eq1a}. \begin{theorem}\label{th1} Suppose that $G\in \mathcal{G}_{reg}(\alpha)$, that $n\log n\ll m\ll n^{5/3}$, and that the edges of $G(m)$ have independent weights chosen uniformly from $[0,1]$. Then w.h.p.
\beq{eq2a}{ mst(G(m))\approx \frac{\zeta(3)}{\alpha}\text{ as $n\to\infty$}. } In addition, if $\alpha>1/2$ then \eqref{eq2a} holds without the use of random edges. \end{theorem} This theorem is very easy to prove. One simply verifies that certain conditions in \cite{BFM0} hold w.h.p. On the other hand, it sets the stage for what we are trying to prove in other scenarios. The upper bound on $m$ is not essential; we could most likely replace it by $o(n^2)$, but this would require us to re-do the calculations in \cite{BFM0}. Without the addition of random edges, all that can be claimed (assuming $G$ is connected) is that \beq{known}{ \frac{\zeta(3)}{\alpha}\lesssim mst(G)\lesssim \frac{\zeta(3)+1}{\alpha}. } See \cite{FRT}.\\ {\bf Conjecture:} The $+1$ in \eqref{known} can be replaced by $+1/2$ (which is best possible). The example giving $1/2$ is a collection of $n/r$ copies of $H=K_r-e$, $r=\alpha n$, where a perfect matching on the vertices of degree $r-2$ is added so that the copies of $H$ are connected in a cycle by bridges. \subsection{Shortest paths} We turn our attention next to shortest paths. Janson \cite{Jan} considered the following scenario: the edges of $K_n$ are given independent exponential mean one random lengths, denoted by $E(1)$. Let $d_{i,j}$ denote the shortest distance between vertex $i$ and vertex $j$. He shows that w.h.p. \[ d_{1,2}\approx \frac{\log n}{n},\quad \max_{j\in [n]}d_{1,j}\approx \frac{2\log n}{n},\quad \max_{i,j}d_{i,j}\approx \frac{3\log n}{n}.
\] Bhamidi and van der Hofstad \cite{BH} proved an equivalent expression for $d_{1,2}$ for a much wider class of distributions. They actually determined an asymptotic limiting distribution. (See also Bhamidi, van der Hofstad and Hooghiemstra \cite{BHH}.) We prove the following: \begin{theorem}\label{th2} Suppose that $n^2/\log n\ll m\ll n^2$, that $G\in \mathcal{G}_{reg}(\alpha)$, and that the edges of $G(m)$ are given independent exponential mean one random lengths. Let $d_{i,j}$ denote the shortest distance between vertex $i$ and vertex $j$. Then w.h.p. \[ d_{1,2}\approx \frac{\log n}{\alpha n},\quad \max_{j\in [n]}d_{1,j}\approx \frac{2\log n}{\alpha n},\quad \max_{i,j\in [n]}d_{i,j}\approx \frac{3\log n}{\alpha n}. \] In addition, if $\alpha>1/2$ then the above holds without the use of random edges. \end{theorem} \subsection{Bipartite matchings}\label{bip} We turn our attention next to bipartite matchings. For background, consider the following well-studied problem: each edge of the complete bipartite graph $K_{n,n}$ is given an independent edge weight $X(e)$. Let $C_n$ denote the minimum weight of a perfect matching in this context. Walkup \cite{W80a} considered the case where $X(e)$ is uniform $[0,1]$ and proved that $\EE{C_n}\leq 3$. Later Karp \cite{K87} proved that $\EE{C_n}\leq 2$. Aldous \cite{A92,A01} proved that if the $X(e)$ are independent exponential mean one random variables then $\lim_{n\to\infty}\EE{C_n}=\zeta(2)=\sum_{k=1}^\infty\frac{1}{k^2}$. Parisi \cite{P98} conjectured that in fact $\EE{C_n}= \sum_{k=1}^{n}\frac{1}{k^2}$. This was proved independently by Linusson and W\"astlund \cite{LW04} and by Nair, Prabhakar and Sharma \cite{NPS05}.
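The Parisi formula can be checked directly for small $n$ by brute force over all $n!$ matchings. The following sketch (our own illustration; the function names are ours) estimates $\EE{C_n}$ for $n=5$ with exponential mean one weights and compares it with $\sum_{k=1}^{5}1/k^2\approx 1.4636$.

```python
import random
from itertools import permutations

def min_matching_cost(weights):
    """Minimum weight perfect matching in K_{n,n} by brute force (small n)."""
    n = len(weights)
    return min(sum(weights[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def estimate_expected_cost(n, trials, rng):
    """Monte Carlo estimate of E[C_n] with exponential mean one weights."""
    total = 0.0
    for _ in range(trials):
        w = [[rng.expovariate(1.0) for _ in range(n)] for _ in range(n)]
        total += min_matching_cost(w)
    return total / trials

rng = random.Random(0)
n = 5
parisi = sum(1.0 / k ** 2 for k in range(1, n + 1))  # = 1.4636...
est = estimate_expected_cost(n, trials=2000, rng=rng)
# est agrees with the Parisi value to within Monte Carlo error
```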
A short elegant proof was given by W\"astlund \cite{W1,W2}. We now consider $G(m)$, where $G$ is an $\alpha n$-regular bipartite graph with vertex set $A\cup B$, $|A|=|B|=n$. Unfortunately, our proof only works if $G$ is {\em pseudo-random}, as defined by Thomason \cite{T}. By this we mean that for some $0<\varepsilon<1$ we have \beq{quasi}{ |\text{co-degree}(u,v)-\alpha^2n|\leq \mu=O(n^{1-\varepsilon})\quad\text{ for all }u,v\in A. } Here, as usual, $\text{co-degree}(u,v)=|\set{w\in B:(u,w),(v,w)\in E(G)}|$. \begin{theorem}\label{th3} Let $G$ be a pseudo-random $\alpha n$-regular bipartite graph with vertex set $A\cup B$, $|A|=|B|=n$. Suppose that $n^{49/25}\ll m=o(n^2)$. Let $C_n$ denote the minimum weight of a perfect matching when the weights of the edges of $G(m)$ are independent exponential mean one random variables. (To be clear, the $m$ added random edges have endpoints in the different vertex classes $A$ and $B$, so that $G(m)$ is bipartite.) Then \beq{Cn}{ \mathbb{E}(C_n)\approx \frac{\zeta(2)}{\alpha}=\frac{\pi^2}{6\alpha}. } \end{theorem} {\bf Conjecture:} Equation \eqref{Cn} holds for $G(m)$, $m=o(n^2)$ growing sufficiently quickly, but without the assumption of pseudo-randomness. Frieze and Johansson \cite{FJ} showed that if $G$ is the random bipartite graph $K_{n,n,p}$ with $np\gg \log^2n$ then \beq{psi}{ \mathbb{E}(C_n)\approx \frac{\pi^2}{6p}. } That paper also conjectured that if $(G_n)$ is a sequence of $r=r(n)$-regular bipartite graphs with $n+n$ vertices, then $\mathbb{E}(C_n)\approx \frac{n\pi^2}{6r}$ as $r,n\to\infty$. This conjecture is false.
Instead we have:\\ {\bf Conjecture:} $\mathbb{E}(C_n)\approx \frac{n\pi^2}{6r}$ if the connectivity of $G_n$ tends to infinity. Also, in general $\mathbb{E}(C_n)\lesssim \frac{n}{r}\brac{\frac{\pi^2}{6}+\frac12}$. The $1/2$ here is best possible in general: we take $n/r$ copies of $H=K_{r,r}-e$, where a perfect matching on the vertices of degree $r-1$ is added so that the copies of $H$ are connected in a cycle by bridges. In what follows we will sometimes treat large values as integers when strictly speaking we should round up or down. In all cases the choice of up or down has a negligible effect on the proof. \section{Spanning Trees} Theorem 2 of Beveridge, Frieze and McDiarmid \cite{BFM0} yields the following. Suppose that \beq{eq3}{ \alpha n\leq \delta(G)\leq\Delta(G)\leq \alpha(1+O(n^{-1/3}))n. } Let $S:\bar{S}$ denote the set of edges of $G$ with exactly one endpoint in $S$. Then \eqref{eq2a} holds if \beq{expand}{ \frac{|S:\bar{S}|}{|S|}\geq n^{2/3}\log^{3/2}n\text{ for all }S\subseteq [n],\ \frac{\alpha n}{2}\leq |S|\leq \frac{n}{2}. } Now if we add $m$ random edges satisfying the conditions of the theorem then all degrees will be $\alpha n+o(n^{2/3})$ and this will satisfy \eqref{eq3}. So, to prove Theorem \ref{th1}, all we need to do is to verify \eqref{expand}. Now let $p=\frac{m}{\binom{n}{2}}\gg\frac{\log n}{n}$. The probability that $G(p)$ contains a set failing to satisfy \eqref{expand} can be bounded by \beq{eq4}{ \sum_{s=\alpha n/2}^{n/2}\binom{n}{s}\mathbb{P}(Bin(sn/2,p)\leq sn^{2/3}\log^{3/2}n)\leq \sum_{s=\alpha n/2}^{n/2}\bfrac{ne}{s}^s e^{-snp/10}=o(1), } where we have only used the edges $R_p$ to satisfy \eqref{expand}.
The property described in \eqref{expand} is monotone increasing and so the $o(1)$ upper bound in \eqref{eq4} holds in $G(m)$ as well; see for example Lemma 1.3 of \cite{FK}. Finally, note that if $\alpha>1/2$, say $\alpha=1/2+\varepsilon$, and $S$ is as in \eqref{expand}, then each $v\in S$ has fewer than $n/2$ neighbors in $S$ and hence at least $\varepsilon n$ neighbors in $\bar{S}$. Therefore $|S:\bar{S}|/|S|\geq \varepsilon n$. This completes the proof of Theorem \ref{th1}. \section{Shortest Paths} We use the ideas of Janson \cite{Jan}. Sometimes we make a small tweak, and in one case we shorten his proof considerably. The case $\alpha>1/2$ will be discussed at the end of this section. We note that the lower bounds hold a fortiori if we do not have the random edges $R_p$. \subsection{$d_{1,2}$}\label{sec1} We set $S_1=\set{1}$ and $d_1=0$ and consider running Dijkstra's shortest path algorithm \cite{Dijk}. At the end of Step $k$ we will have computed $S_k=\set{1=v_1,v_2,\ldots,v_k}$ and $0=d_1,d_2,\ldots,d_k$, where $d_i$ is the minimum length of a path from $1$ to $v_i$, $i=1,2,\ldots,k$. Let there be $\nu_k$ edges from $S_k$ to $[n]\setminus S_k$. Arguing as in \cite{Jan} we see that $d_{k+1}-d_k=Z_k$, where $Z_k$ is the minimum of $\nu_k$ independent exponential mean one random variables, independent of $d_k$. We note that \beq{Zk}{ \mathbb{E}(Z_k\mid \nu_k)=\frac{1}{\nu_k}\text{ and }\mathbb{V\text{ar}}(Z_k\mid \nu_k)=\frac{1}{\nu_k^2}. } Suppose now that \[ m=\frac{\omega n^2}{\log n}\text{ where }1\ll \omega\ll \log n. \] It follows that w.h.p. $\delta(G(m))\approx\Delta(G(m))\approx \alpha n$. Now \[ k\delta(G(m))-2\binom{k}{2}\leq \nu_k\leq k\Delta(G(m)) \] and so \beq{nuk}{ \text{w.h.p.
$\nu_k\approx k\alpha n$ for $k=o(n)$.} } Conditioning on the set of added edges and taking expectations with respect to the edge weights, we see that if $1\ll k=o(n)$ then \beq{meank0}{ \mathbb{E}(d_k)=\mathbb{E}\brac{\sum_{i=1}^{k-1}\frac{1}{\nu_i}} \approx \sum_{i=1}^{k-1}\frac{1}{i\alpha n} \approx \frac{\log k}{\alpha n}. } By the same token, \beq{vark0}{ \mathbb{V\text{ar}}(d_k)\approx \sum_{i=1}^{k-1}\frac{1}{i^2\alpha^2n^2}=O(n^{-2}). } \subsubsection{Upper Bound} If $k_0=n^{1/2}\omega^{1/2}$ then w.h.p. $d_k\lesssim \frac{\log n}{2\alpha n}$ for $0\leq k\leq k_0$. Now execute Dijkstra's algorithm from vertex 2 and let $\bar{d}_k,T_k$ correspond to $d_k,S_k$. If $S_{k_0}\cap T_{k_0}\neq \emptyset$ then we already have $d_{1,2}\lesssim \frac{\log n}{\alpha n}$. If $S_{k_0},T_{k_0}$ are disjoint then we use the random edges $R_m$ or $R_p$. Let $p=m/\binom{n}{2}\approx 2\omega/\log n$. Then, \mult{SP1}{ \mathbb{P}\brac{\not\exists e\in R_p\cap (S_{k_0}:T_{k_0}):X(e)\leq \frac{\log n}{\omega n}}\leq \brac{1-p\brac{1-\exp\set{-\frac{\log n}{\omega n}}}}^{k_0^2}\\ =\brac{1-(1+o(1))\frac{p\log n}{\omega n}}^{k_0^2}\leq \exp\set{-\frac{k_0^2p\log n}{2\omega n}}=e^{-\omega}. } So, in this case too we see that w.h.p. \[ d_{1,2}\leq (1+o(1))\brac{\frac{\log n}{2\alpha n}+\frac{\log n}{2\alpha n}}+\frac{\log n}{\omega n}\approx \frac{\log n}{\alpha n}. \] \subsubsection{Lower Bound} We now consider a lower bound for $d_{1,2}$. Let $k_1=n^{1/2}/\log n$. We observe that because w.h.p.
all vertices have degree $\approx \alpha n$ and because the edge joining $v_{k+1}$ to $S_k$ is uniform among the $S_k:\bar{S}_k$ edges, we see that $\mathbb{P}(2\in S_{k_1})=O(k_1/n)=o(1)$. By the same token, $\mathbb{P}(T_{k_1}\cap S_{k_1}\neq \emptyset)=O(k_1^2/n)=o(1)$. It follows that w.h.p. \[ d_{1,2}\gtrsim 2\frac{\log k_1}{\alpha n}\approx \frac{\log n}{\alpha n}. \] \subsection{$\max_jd_{1,j}$} \subsubsection{Lower Bound} For this we run Dijkstra's algorithm until all vertices have been included in the shortest path tree. We can therefore immediately see that if $k_2=n/\log n$ then \beq{low2}{ \mathbb{E}(\max_jd_{1,j})\gtrsim\sum_{i=1}^{k_2}\frac{1}{i\alpha n}+\sum_{i=n-k_2+1}^{n-1}\frac{1}{(n-i)\alpha n}\approx \frac{2\log n}{\alpha n}. } The second sum in \eqref{low2} is the contribution from adding the final $k_2$ vertices and uses $\nu_i\approx (n-i)\alpha n$ w.h.p. for $n-i=o(n)$. Equation \eqref{vark0} allows us to claim the lower bound w.h.p. \subsubsection{Upper Bound} For an upper bound we use the fact that w.h.p.
there are approximately $i(n-i)p$ $R_p$ edges between $S_i$ and $\bar{S}_i$, in order to show that if $k_2=n/\omega$ then \mult{upp1}{ \mathbb{E}(\max_jd_{1,j})\lesssim\brac{\frac{2\log n}{\alpha n}+\sum_{i=k_2+1}^{n-k_2}\frac{1}{i(n-i)p}}\\ \approx \frac{2\log n}{\alpha n}+\frac{\log n}{2\omega n}\sum_{i=k_2+1}^{n-k_2}\brac{\frac{1}{i}+\frac{1}{n-i}} = \frac{2\log n}{\alpha n}\brac{1+\frac{(\alpha+o(1))\log \omega}{2\omega}}\approx \frac{2\log n}{\alpha n}. } Equations \eqref{low2} and \eqref{upp1} imply that $\mathbb{E}(\max_jd_{1,j})\approx \frac{2\log n}{\alpha n}$ and we can use equation \eqref{vark0} to get concentration around the mean. \subsection{$\max_{i,j}d_{i,j}$} \subsubsection{Lower Bound} Our proof here is somewhat shorter than that in \cite{Jan}, but it is based on the same idea. We begin with a lower bound. Let $Y_v=\min\set{X(e):e=\set{v,w}\in G(m)}$. Let $A=\set{v:Y_v\geq \frac{(1-\varepsilon)\log n}{\alpha n}}$. Then, given that all vertex degrees are asymptotically equal to $\alpha n$, we have that for $v\in [n]$, \beq{A}{ \mathbb{P}(v\in A)=\exp\set{-(\alpha n+o(n))\frac{(1-\varepsilon)\log n}{\alpha n}}=n^{-1+\varepsilon+o(1)}. } An application of the Chebyshev inequality shows that $|A|\approx n^{\varepsilon+o(1)}$ w.h.p. and we can assume the existence of $a_1\neq a_2\in A$.
Now the expected number of paths from $a_1$ to $a_2$ of length at most $\frac{(3-2\varepsilon)\log n}{\alpha n}$ can be bounded by \beq{a1a2path}{ n^{2\varepsilon+o(1)}\times n^2\times n^{-3\varepsilon+o(1)}\times \frac{\log^2n}{\alpha^2n^2}=n^{-\varepsilon+o(1)}. } {\bf Explanation for \eqref{a1a2path}:} The first factor $n^{2\varepsilon+o(1)}$ is the expected number of pairs of vertices $a_1,a_2\in A$. The second factor is a bound on the number of choices $b_1,b_2$ for the neighbors of $a_1,a_2$ on the path. The third factor $F_3$ is a bound on the expected number of paths of length at most $\frac{\beta\log n}{\alpha n}$ from $b_1$ to $b_2$, where $\beta=1-3\varepsilon$. This factor comes from \[ F_3\leq \sum_{\ell\geq 0}((\alpha+o(1))n)^\ell \bfrac{\beta\log n}{\alpha n}^{\ell+1}\frac{1}{(\ell+1)!}. \] Here $\ell$ is the number of internal vertices on the path. There will be $((\alpha+o(1))n)^\ell$ choices for the sequence of vertices on the path. We then use the fact that the exponential mean one random variable stochastically dominates the uniform $[0,1]$ random variable $U$. The final two factors are the probability that the sum of $\ell+1$ independent copies of $U$ is at most $\frac{\beta\log n}{\alpha n}$.
Continuing, we have \[ F_3\leq \sum_{\ell\geq 0}\frac{\beta\log n}{\alpha n(\ell+1)}\bfrac{e^{1+o(1)}\beta\log n}{\ell}^\ell \leq \frac{\beta\log n}{\alpha n} \brac{\sum_{\ell=0}^{10\log n}n^{\beta+o(1)}+\sum_{\ell>10\log n}e^{-\ell}}=n^{-1+\beta+o(1)}=n^{-3\varepsilon+o(1)}. \] The final factor in \eqref{a1a2path} is a bound on the probability that $X_{a_1b_1}+X_{a_2b_2}\leq \frac{(2+\varepsilon)\log n}{\alpha n}$. For this we use the fact that $X_{a_ib_i}$, $i=1,2$, is distributed as $\frac{(1-\varepsilon)\log n}{\alpha n}+E_i$, where $E_1,E_2$ are independent exponential mean one random variables. Now $\mathbb{P}(E_1+E_2\leq t)\leq (1-e^{-t})^2\leq t^2$ and taking $t=\frac{3\varepsilon\log n}{\alpha n}$ justifies the final factor of \eqref{a1a2path}. It follows from \eqref{a1a2path} and the Markov inequality that w.h.p. the shortest distance between any pair of vertices in $A$ is at least $\frac{(3-2\varepsilon)\log n}{\alpha n}$, completing our proof of the lower bound in Theorem \ref{th2}. \subsubsection{Upper Bound} We now consider the upper bound. Let $Y_1=d_{k_3}$, where $d_k$ is from Section \ref{sec1} and $k_3=n^{1/2}\log n$. For $t<1-\frac{1+o(1)}{\alpha n}$ we have that w.h.p.
over our choice of $R_m$, that \[ \mathbb{E}(e^{t\alpha nY_1})=\mathbb{E}\brac{\exp\set{\sum_{i=1}^{k_3}t\alpha nZ_i}}= \prod_{i=1}^{k_3}\brac{1-\frac{(1+o(1))t}{i}}^{-1}, \] where the $Z_i$ are as in \eqref{Zk}. Then for any $\beta>0$ we have \mults{ \mathbb{P}\brac{Y_1\geq \frac{\beta\log n}{\alpha n}}\leq \mathbb{E}(e^{t\alpha nY_1-t\beta\log n}) \leq e^{-t\beta\log n}\prod_{i=1}^{k_3}\brac{1-\frac{(1+o(1))t}{i}}^{-1}\\ =e^{-t\beta\log n}\exp\set{\sum_{i=1}^{k_3}\frac{(1+o(1))t}{i}+O\bfrac{t}{i^2}} = \exp\set{\brac{\frac12+o(1)-\beta}t\log n}. } It follows, on taking $\beta=3/2+o(1)$ and applying a union bound over the analogously defined $Y_j$, that w.h.p. \[ Y_j\leq \frac{(3+o(1))\log n}{2\alpha n}\text{ for all }j\in [n]. \] Letting $T_j$ be the set corresponding to $S_{k_3}$ when we execute Dijkstra's algorithm starting at $j$, we have that for $j\neq k$ with $T_j\cap T_k=\emptyset$, \beq{SP2}{ \mathbb{P}\brac{\not\exists e\in R_p\cap (T_j:T_k):X(e)\leq \frac{\log n}{\omega n}}\leq \exp\set{-\frac{(1+o(1))k_3^2p\log n}{\omega n}}=e^{-(2+o(1))\log^2n}=o(n^{-2}) } and this is enough to complete the proof of Theorem \ref{th2}, except for the case where $\alpha>1/2$ and we do not add random edges. \subsection{$\alpha>1/2$} The $R_p$ edges are needed for \eqref{SP1}, \eqref{upp1} and \eqref{SP2}. In each case we are given two sets $S,T$ of size $s=o(n)$, say, and we need to argue for a short edge between them. In our case we look for a short path of length two. So, let $X$ denote the number of triples $a,b,x$ where $a\in S$, $b\in T$, $x\notin S\cup T$, and the lengths of the edges $\set{a,x},\set{b,x}$ are both at most $p=\frac{\log n}{\omega n}$. Let ${\mathcal T}$ denote the set of such triples, so that $X=|{\mathcal T}|$.
The lengths of candidate edges will not be conditioned by the history of the process. We use Janson's inequality \cite{Jan1}. Each pair $a\in S,b\in T$ has at least $2\varepsilon n$ common neighbors. It follows that \[ \mathbb{E}(X)\geq s^2\varepsilon np^2. \] We then estimate \[ \D=\sum_{(a,b,x)\sim(a',b',x')}\mathbb{P}((a,b,x),(a',b',x')\in{\mathcal T}), \] where $(a,b,x)\sim(a',b',x')$ if $\set{a,x}=\set{a',x'}$ or $\set{b,x}=\set{b',x'}$. Then, \[ \D\leq \mathbb{E}(X)+2s^2np^2\times sp. \] Janson's inequality then implies \beq{X=0}{ \mathbb{P}(X=0)\leq \exp\set{-\frac{\mathbb{E}(X)^2}{2\D}}\leq \exp\set{-\frac{s^4\varepsilon^2n^2p^4}{s^2np^2+4s^3np^3}} =e^{-\Omega(snp)}=e^{-\Omega(s\log n/\om)}. } In all cases considered, $s\geq n^{1/2+o(1)}$ and so the RHS of \eqref{X=0} is $o(n^{-1})$, completing the proof of Theorem \ref{th2} for the case where $\alpha>1/2$. \section{Bipartite matchings}\label{BIP} We find, just as in \cite{FJ}, that the proofs in \cite{W1}, \cite{W2} can be adapted to our current situation. Suppose that the vertices of $G$ are denoted $A=\set{a_i,i\in [n]}$ and $B=\set{b_j,j\in [n]}$. We will need to assume that \[ a_1,a_2,\ldots,a_n\text{ constitutes a random ordering of the vertices in }A. \] We will use the notation $(a,b)$ for edges of $G$, where $a\in A$ and $b\in B$. We will let $w(a,b)$ denote the weight of $(a,b)$. Let $A_r=\set{a_1,a_2,\ldots,a_r}$ and let $C(n,r)$ denote the weight of the minimum weight matching $M_r$ of $A_r$ into $B$. ($M_r$ is unique with probability one.) Suppose also that $\f_r$ is defined by $M_r=\set{(a_i,\f_r(a_i)):\,i=1,2,\ldots,r}$. Let $B_r=\set{\f_r(a_i):\,i=1,2,\ldots,r}$.
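As a sanity check on the limiting value $\frac{\p^2}{6\alpha}$ derived below, note that in the complete bipartite case $\alpha=1$ with independent $E(1)$ weights the expectation of $C(n,n)$ is known exactly: it equals $\sum_{i=1}^n1/i^2$ (Parisi's formula), which tends to $\p^2/6$. The following Python sketch (brute force over permutations, so feasible only for tiny $n$; all names are ours) checks this by simulation:

```python
import itertools, math, random

def min_matching_cost(w):
    """Brute-force minimum weight perfect matching of K_{n,n} (tiny n only)."""
    n = len(w)
    return min(sum(w[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

random.seed(1)
n, trials = 5, 4000
total = 0.0
for _ in range(trials):
    # independent E(1) edge weights
    w = [[random.expovariate(1.0) for _ in range(n)] for _ in range(n)]
    total += min_matching_cost(w)

estimate = total / trials
exact = sum(1.0 / i ** 2 for i in range(1, n + 1))  # Parisi's formula
print(estimate, exact, math.pi ** 2 / 6)
```

For $n=5$ the exact value is $\approx 1.4636$, already some way towards $\p^2/6\approx 1.6449$, and the simulated mean agrees with it to within a few percent.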
We will prove that \beq{eq1}{ \mathbb{E}(C(n,r)-C(n,r-1))\approx \frac1{\alpha}\sum_{i=1}^{r}\frac{1}{r(n-i+1)} } for $r=1,2,\ldots,n-o(n)$. Using this and a simple argument for $r\geq n-o(n)$ we argue that \beq{eq2}{ \mathbb{E}(C_n)=\EE{C(n,n)}\approx \frac{1}{\alpha} \sum_{r=1}^{n}\sum_{i=1}^{r}\frac{1}{r(n-i+1)} \approx\frac{1}{\alpha}\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\p^2}{6\alpha}. } \subsection{Proof details} We add a special vertex $b_{n+1}$ to $B$, with edges to all $n$ vertices of $A$. Each edge incident to $b_{n+1}$ is assigned an $E(\l)$ weight independently, $\l>0$. Here $E(\l)$ is an exponential random variable of rate $\l$, i.e., $\mathbb{P}(E(\l)\geq x)=e^{-\l x}$. We now consider the minimum weight matching of $A_r$ into $B^*=B\cup \set{b_{n+1}}$. (As $\l\to 0$ it becomes increasingly unlikely that any of the extra edges are actually used in the minimum weight matching.) We denote this matching by $M_r^*$ and we let $B_r^*$ denote the corresponding set of vertices of $B^*$ that are covered by $M_r^*$. Define $P(n, r)$ as the normalized probability that $b_{n+1}$ participates in $M_r^*$, i.e. \begin{equation} P(n, r) = \lim_{\l \rightarrow 0} \frac{\mathbb{P}(b_{n+1}\in B_r^*)}{\l}. \end{equation} Its importance lies in the following lemma: \begin{lemma}\label{lem10a} \begin{equation} \EE{C(n, r) - C(n, r-1)} = \frac{P(n, r)}{r}. \end{equation} \end{lemma} \begin{proof} Choose $i$ randomly from $[r]$ and let $\widehat{B}_i\subseteq B_r$ be the set of $B$-vertices in the minimum weight matching of $A_r\setminus\set{a_i}$ into $B^*$. Let $X=C(n,r)$ and let $Y=C(n,r-1)$.
Let $w_i$ be the weight of the edge $(a_i, b_{n+1})$, and let $I_i$ denote the indicator variable for the event that the minimum weight of an $A_{r}$ matching that contains this edge is smaller than the minimum weight of an $A_{r}$ matching that does not use $b_{n+1}$. We can see that $I_i$ is the indicator variable for the event $\{Y_i + w_i< X\}$, where $Y_i$ is the minimum weight of a matching from $A_r\setminus \set{a_i}$ to $B$. Indeed, if $(a_i, b_{n+1}) \in M_r^*$ then $w_i < X - Y_i$. Conversely, if $w_i < X - Y_i$ and no other edge from $b_{n+1}$ has weight smaller than $X - Y_i$, then $(a_i, b_{n+1})\in M_r^*$, and when $\l \to 0$, the probability that there are two distinct edges from $b_{n+1}$ of weight smaller than $X - Y_i$ is of order $O(\l^2)$. Indeed, let ${\cal F}$ denote the event that there exist two distinct edges from $b_{n+1}$ of weight smaller than $X$ and let ${\cal F}_{i,j}$ denote the event that $(a_i,b_{n+1})$ and $(a_j,b_{n+1})$ both have weight smaller than $X$. Then, \beq{no2}{ \mathbb{P}({\cal F})\leq n^2\mathbb{E}_X(\max_{i,j}\mathbb{P}({\cal F}_{i,j}\mid X))=n^2\mathbb{E}((1-e^{-\l X})^2)\leq n^2\l^2\mathbb{E}(X^2), } and since $\mathbb{E}(X^2)$ is finite and independent of $\l$, this is $O(\l^2)$. Note that $Y$ and $Y_i$ have the same distribution. They are both equal to the minimum weight of a matching of a random $(r-1)$-set of $A$ into $B$. As a consequence, $\mathbb{E}(Y)=\mathbb{E}(Y_i)=\frac{1}{r}\sum_{j\in A_r}\mathbb{E}(Y_j)$. Since $w_i$ is $E(\l)$ distributed, as $\l\to 0$ we have from \eqref{no2} that \mults{ P(n,r)= \lim_{\l \rightarrow 0} \brac{\frac{1}{\l}\sum_{j\in A_r}\mathbb{P}(w_j<X-Y_j)+O(\l)}=\lim_{\l \rightarrow 0}\mathbb{E}\brac{ \frac{1}{\l}\sum_{j\in A_r} \brac{1- e^{-\l(X-Y_j)}}}\\ =\sum_{j\in A_r}\EE{X - Y_j}=r\EE{X - Y}. } \end{proof} We now proceed to estimate $P(n,r)$. Fix $r$ and assume that $b_{n+1}\notin B_{r-1}^*$.
Suppose that $M_{r}^*$ is obtained from $M_{r-1}^*$ by finding an augmenting path $P=(a_{r},\ldots,a_{\s},b_\tau)$ from $a_{r}$ to $B\setminus B_{r-1}$ of minimum additional weight. We condition on (i) $\s$, (ii) the lengths of all edges other than $(a_\s,b_j),b_j\in B\setminus B_{r-1}$ and (iii) $\min\set{w(a_\s,b_j):b_j\in B\setminus B_{r-1}}$. With this conditioning $M_{r-1}=M_{r-1}^*$ will be fixed and so will $P'=(a_{r},\ldots,a_\s)$. We can now use the following fact: Let $X_1,X_2,\ldots,X_M$ be independent exponential random variables of rates $\l_1,\l_2,\ldots,\l_M$. Then the probability that $X_i$ is the smallest of them is $\l_i/(\l_1+\l_2+\cdots+\l_M)$. Furthermore, this probability stays the same if we condition on the value of $\min\set{X_1,X_2,\ldots,X_M}$. Thus $$\mathbb{P}(b_{n+1}\in B_{r}^*\mid b_{n+1}\notin B_{r-1}^*)=\mathbb{E}\bfrac{\l}{\gd_r + \l}$$ where $\gd_r=d_{r-1}(a_\s)$ is the number of neighbors of $a_\s$ in $B\setminus B_{r-1}$. \begin{lemma}\label{lem3} \begin{equation}\label{13} P(n,r)=\mathbb{E}\brac{\frac{1}{\gd_1} + \frac{1}{\gd_2} + \dots + \frac{1}{\gd_r}}. \end{equation} \end{lemma} \begin{proof} \begin{align} \lim_{\l\to 0}\l^{-1}\mathbb{P}(b_{n+1}\in B_r^*)&= \lim_{\l\to 0}\l^{-1}\mathbb{E}\brac{1 - \frac{\gd_1}{\gd_1 + \l} \cdot \frac{\gd_2}{\gd_2 + \l} \cdots \frac{\gd_r}{\gd_r+\l}}\nonumber\\ &=\lim_{\l\to 0}\l^{-1}\mathbb{E}\brac{1 - \left(1 + \frac{\l}{\gd_1}\right)^{-1}\cdots \left(1 + \frac{\l}{\gd_r}\right)^{-1}} \nonumber\\ &=\lim_{\l\to 0}\l^{-1}\mathbb{E}\brac{ \left(\frac{1}{\gd_1} + \frac{1}{\gd_2} + \dots + \frac{1}{\gd_r}\right)\l + O(\l^2) }\nonumber\\ &=\mathbb{E}\left(\frac{1}{\gd_1} + \frac{1}{\gd_2} + \dots + \frac{1}{\gd_r}\right).\label{sumdelta} \end{align} \end{proof} It is at this point that we need to assume that $G$ is pseudo-random; we use this assumption to control the values of the $\gd_i$.
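The fact about minima of independent exponentials used above is easily checked by simulation; the following Python sketch (rates chosen arbitrarily for illustration) compares the empirical frequency with which each $X_i$ is the minimum against $\l_i/(\l_1+\cdots+\l_M)$:

```python
import random

random.seed(0)
rates = [1.0, 2.0, 3.0]          # lambda_1, ..., lambda_M (illustrative values)
trials = 200_000
wins = [0] * len(rates)
for _ in range(trials):
    samples = [random.expovariate(lam) for lam in rates]
    wins[samples.index(min(samples))] += 1

empirical = [w / trials for w in wins]
predicted = [lam / sum(rates) for lam in rates]  # lambda_i / (lambda_1+...+lambda_M)
print(empirical, predicted)
```

The empirical proportions match $(1/6,2/6,3/6)$ to well within Monte Carlo error.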
We now state (part of) Theorem 2 of Thomason \cite{T} in terms of our notation. Assume that $G(m)$ is as in Theorem \ref{th3}. \begin{theorem}\label{Thom} If $X\subseteq A,Y\subseteq B$ with $\alpha|X|>1$ and $x=|X|,y=|Y|$, then \[ |e(X,Y)-\alpha xy|\leq (xy(\alpha n+\m x))^{1/2}, \] where $e(X,Y)$ is the number of edges with one end in $X$ and the other in $Y$. \end{theorem} \subsubsection{Upper bound} We begin with an upper bound estimate for \eqref{sumdelta}. This means finding lower bounds for the $\gd_i$. Let \beq{values}{ r_0=n^{\b},\quad\om=n^{\g},\quad \th=\frac{1}{\om^2},\quad \varepsilon=\frac{1}{\om}, \quad k=\om^3. } We will establish bounds on $0<\b,\g<1$ as we proceed through the proof. We will then choose suitable values for these parameters. We have the trivial bound $\gd_r\geq \alpha n-r$, which implies that \beq{dg1}{ \sum_{r=1}^{r_0}\frac{1}{r}\sum_{i=1}^r\frac{1}{\gd_i}\leq \sum_{r=1}^{r_0}\frac{1}{\alpha n-r_0}=o(1). } Now suppose that $r\geq r_0$ and let \[ {\cal E}_r=\set{\exists S\subseteq [r,r+\th r]:\;|S|=k,\gd_i\leq \alpha(1-\varepsilon)(n-r-i)\text{ for }i\in S}. \] We claim that Theorem \ref{Thom} implies that ${\cal E}_r$ cannot occur for $r\leq n-r_0$. Indeed, suppose that ${\cal E}_r$ occurs. Then \beq{geq}{ e(S,B_{r+\th r})\geq \alpha\sum_{i\in S}(r+i+\varepsilon(n-r-i))\geq \alpha k(r+\varepsilon(n-r)).
} On the other hand, Theorem \ref{Thom} implies that \beq{leq}{ e(S,B_{r+\th r})\leq \alpha k(r+\th r)+(k(r+\th r)(\alpha n+\m k))^{1/2}. } Plugging the values from \eqref{values} into \eqref{geq} and \eqref{leq}, we see that after subtracting $\alpha kr$ the RHS of \eqref{geq} is $\Omega\bfrac{kn^{\b}}{\om}$ and the RHS of \eqref{leq} is $O\bfrac{kn}{\om^{3/2}}$, a contradiction, assuming \beq{bg1}{ \frac{n^\b}{n^\g}\gg \frac{n}{n^{3\g/2}},\text{ i.e. }\b>1-\frac{\g}{2}. } Let \beq{bg2}{ p=\frac{m}{\binom{n}{2}}=\frac{1}{n^\eta}. } Let $\zeta_a$ denote the number of times that vertex $a$ takes the role of $a_\s$. We will show that, with probability $1-o(n^{-1})$, \beq{smallone}{ \zeta_a\leq \n_2=\frac{\log^3n}{p},\text{ for all }a\in A. } We will prove below that if $r\leq n-r_0$ then with probability $1-O(n^{-2\eta})$ \beq{dlarge}{ \gd_r\geq \n_1=\frac{n^{9\b/10}}{\n_2^2}=n^{9\b/10-2\eta+o(1)}\text{ except for at most }\frac{n^{9\b/10+2\eta+o(1)}}{\n_2}\text{ indices }r. } Let $\xi(r)$ be the indicator for the exceptions in \eqref{dlarge}. Let $I_1,I_2,\ldots,I_s,s=\rdup{\frac{n-2r_0}{\th n}}$ be an equitable partition of $[r_0,n-r_0]$ into consecutive intervals of length $\approx \th n$. By equitable we mean that $\card{|I_k|-|I_l|}\leq 1$ for all $k\neq l$. Given that ${\cal E}_r$ does not occur and \eqref{dlarge} holds, we see that with probability $1-O(n^{-3\eta/2})$ we have \beq{anot}{ \sum_{j\in I_\ell}\frac{1}{\gd_j}\leq \frac{1}{\alpha}\sum_{j\in I_\ell}\frac{1}{(1-\varepsilon)(n-j)}+\frac{k\n_2}{\n_1}+\xi(I_\ell).
} Consequently, if $\g_t=|I_1|+\cdots+|I_t|$ and \beq{bg3}{ \frac{k\n_2}{\th\n_1r_0}=n^{5\g+3\eta-2\b+o(1)}=o(1), } then with probability $1-O(n^{-3\eta/2})$ we have \mult{bigsum}{ C(n,n-r_0)\leq o(1)+\frac{1+o(1)}{\alpha}\sum_{t=1}^s\sum_{r=r_0+\g_{t-1}+1}^{r_0+\g_t}\frac{1}{r} \sum_{j=1}^{r}\frac{1}{n-j+1}+\frac{k\n_2s}{\n_1r_0}+\frac{\n_2n^{9\b/10+2\eta+o(1)}}{\n_2r_0}\\ =o(1)+\frac{1+o(1)}{\alpha}\sum_{r=r_0}^{n-r_0}\frac{1}{r} \sum_{j=1}^{r}\frac{1}{n-j+1}\approx \frac{\p^2}{6\alpha}, } assuming that \beq{bg3a}{ \eta<\frac{\b}{20}. } {\bf Explanation:} The first $o(1)$ term in \eqref{bigsum} comes from \eqref{dg1}. For a proof of the final estimate in \eqref{bigsum} we refer the reader to Section \ref{X} in an appendix. (The calculation is taken from \cite{FJ}.) We show later that with probability $1-o(n^{-1})$ we have \beq{later}{ C(n,n)-C(n,n-r_0)=o(1). } To get an upper bound on $\mathbb{E}(C_n)$ we have to deal with the possibility of failure of \eqref{dlarge}. So we think of adding $R_p$ as adding the union of two independent copies of $R_{p/2}$, a red copy and a blue copy. With just the red copy, we see that with probability $1-O(n^{-3\eta/2})$ we have $C_n\lesssim \frac{\p^2}{6\alpha}$, and we use \eqref{psi} for the blue copy. Thus, \[ \mathbb{E}(C_n)\leq \frac{\p^2}{6\alpha}+O(n^{-2\eta}\times p^{-1})\approx \frac{\p^2}{6\alpha}. \] \subsubsection{Lower bound} Let $\ell=n^{1/3}$ and $n_1=n-r_0$ and $s=\rdup{\frac{n_1}{\ell}}\approx n^{2/3}$, and equitably partition $[n_1]$ into intervals $I_j,j=1,2,\ldots,\ell$ of length $\approx s$ and define $\g_t=|I_1|+\cdots+|I_t|$ as before. Fix $j$ and let $I=I_j$.
Next, for $i\leq \n_2$, let $S_i$ denote the set of elements of $A$ that appear exactly $i$ times as $a_\s$ in $I$ and let $s_i=|S_i|$. Let $T_i$ denote the subset of $I_j$ corresponding to $S_i$. Partition $T_i=U_1\cup\cdots \cup U_i$ into $i$ copies of $S_i$ in a natural way. Then it follows from Theorem \ref{Thom} that if $s_i\alpha>1$ then for $1\leq k\leq i$, \[ \card{\sum_{u\in U_k}\gd_{u}-s_i\alpha(n-(j-1)s)}\leq (s_i(n-(j-1)s)(\alpha n+\m s_i))^{1/2}. \] Therefore, \mults{ \sum_{i:s_i>1/\alpha}\brac{\sum_{u\in T_i}\gd_{u}-is_i\alpha(n-(j-1)s)}\leq \sum_{i:s_i>1/\alpha}i(s_i(n-(j-1)s)(\alpha n+\m s_i))^{1/2}\\ \leq n^{1/2}(n-(j-1)s)^{1/2}\sum_{i:s_i>1/\alpha}is_i^{1/2}\leq \n_2^2s^{1/2}n^{1/2}(n-(j-1)s)^{1/2}. } It follows then that \beq{hope}{ \sum_{k\in I_j}\gd_k\leq \alpha s(n-(j-1)s)+\n_2^2s^{1/2}n^{1/2}(n-(j-1)s)^{1/2}+\alpha^{-1}\n_2(n-(j-1)s).
} We have from \eqref{hope} and the fact that the harmonic mean is at most the arithmetic mean that if \beq{bg4}{ \eta<\frac{1}{6},\text{ which implies that }\n_2^2=o(s^{1/2}), } then, assuming \beq{bdx}{ \b>\frac{2}{3}, } \mults{ \sum_{i\in I_j}\frac{1}{\gd_i}\geq \frac{s^2}{\sum_{i\in I_j}\gd_i}\geq \frac{s}{\alpha(n-(j-1)s)\brac{1+\frac{\n_2^2}{\alpha}\bfrac{n}{s(n-(j-1)s)}^{1/2}+\frac{\n_2}{\alpha s}}}\\ =\frac{s}{\alpha(n-(j-1)s)}\brac{1+O\bfrac{\n_2^2}{s^{1/2}\brac{1-\frac{(j-1)s}{n}}}} =\\ \frac{1}{\alpha}\sum_{i\in I_j}\frac{1}{n-i+1} \brac{1+O\bfrac{\n_2^2}{s^{1/2}\brac{1-\frac{(j-1)s}{n}}}+O\bfrac{s}{n-(j-1)s}} \approx \frac{1}{\alpha}\sum_{i\in I_j}\frac{1}{n-i+1}. } Therefore, \mult{aa}{ \mathbb{E}(C(n,n_1))=\mathbb{E}\brac{ \sum_{r=1}^{n_1}\frac{1}{r}\sum_{i=1}^r\frac{1}{\gd_i}}= \mathbb{E}\brac{\sum_{j=1}^\ell\sum_{r\in I_j}\frac{1}{r}\sum_{i=1}^{r}\frac{1}{\gd_i}}\gtrsim \frac{1}{\alpha}\sum_{j=1}^{\ell} \sum_{r\in I_j}\frac{1}{r}\sum_{i=1}^{\g_{j-1}}\frac{1}{n-i+1} \geq\\ \frac{1}{\alpha}\sum_{r=1}^{n_1}\frac{1}{r} \sum_{i=1}^r\frac{1}{n-i+1}- \frac{s}{n-s}-\sum_{j=2}^{\ell}\frac{s}{(j-1)s(n-js)} = \frac{1}{\alpha}\sum_{r=1}^{n_1}\frac{1}{r} \sum_{i=1}^r\frac{1}{n-i+1}-o(1)\approx \frac{\p^2}{6\alpha}. } {\bf Explanation:} The term $-\sum_{j=2}^{\ell}\frac{s}{(j-1)s(n-js)}$ accounts for the third summation in the last term of the first line only going as far as $\g_{j-1}$ instead of $\g_j$.
The term $-\frac{s}{n-s}$, which is small due to \eqref{bdx}, accounts for the $j=1$ summand in the last term of the first line being missing, for the same reason. This gives the correct lower bound for Theorem \ref{th3}. The final estimate in \eqref{aa} is proved in the same way as the final estimate in \eqref{bigsum}. We now have to verify \eqref{dlarge} and \eqref{later}. These claims rest on a bound on the maximum weight of an edge in the minimum weight perfect matching. \subsubsection{No long edges} The aim of this section is to show that w.h.p. no edges of weight more than $w_1=2w_0\log n$ (where $w_0=\frac{c_1\log n}{np}$) are used in the construction of $M_n$. Here $c_1$ is a sufficiently large constant. For a set $S\subseteq A$ with $|S|\geq 2$ we let \[ N_0(S)=\set{b\in B:(a,b)\in R_p\text{ and }w(a,b)\leq w_0\text{ for some }a\in S}, \] and for $a\in A$ let \[ N_0(a)=\set{b\in B: (a,b)\in E(G)\text{ and }w(a,b)\leq w_0}. \] Let \[ r_1=p^{-1}\log^{1/2}n;\qquad r_2=\frac{n}{10c_1\log n};\qquad r_3=\frac{n}{2000};\qquad r_4=n-\frac{4000n}{c_1\log n}. \] \begin{lemma}\label{lemy} W.h.p.
we have \begin{align} |N_0(a)|&\geq 2r_1\quad\text{ for all $a\in A$}.\label{k1}\\ |N_0(S)|&\geq \frac{c_1|S|\log n}{4}\quad\text{ for all }S\subseteq A, r_1<|S|\leq r_2.\label{k2}\\ |N_0(S)|&\geq \frac{n}{40}\quad\text{ for all }S\subseteq A, r_2<|S|\leq r_3.\label{k3}\\ |N_0(S)|&\geq n-\frac{3000n}{c_1\log n}\quad\text{ for all }S\subseteq A, r_3<|S|.\label{k4}\\ n-|N_0(S)|&\leq \frac12(n-|S|)\quad\text{for all }S\subseteq A, |S|\geq r_4.\label{k5} \end{align} \end{lemma} \begin{proof} We first observe that $|N_0(a)|$ is distributed as $Bin(\alpha n,1-e^{-w_0})$, that $\alpha n(1-e^{-w_0})\gtrsim \alpha r_1\log^{1/2}n$, and so the Chernoff bounds imply that \[ \mathbb{P}(\exists a:|N_0(a)|\leq 2r_1)\leq ne^{-\alpha nw_0/4}=o(n^{-1}). \] We next observe that for a fixed $S\subseteq A$ we have \[ |N_0(S)|\sim Bin(n,q)\qquad \text{where }q=1-(1-p(1-e^{-w_0}))^{|S|}=1-(1-(1+o(1))w_0p)^{|S|}. \] (Here $\sim$ is used to indicate the distribution of $|N_0(S)|$.) If $r_1<|S|\leq r_2$ then $q\geq w_0p|S|/2$. So, \[ \mathbb{P}\brac{\neg\eqref{k2}}\leq \sum_{s=r_1}^{r_2}\binom{n}{s} \mathbb{P}\brac{Bin(n,q)\leq \frac{c_1s\log n}{4}} \leq \sum_{s=r_1}^{r_2}\bfrac{ne}{s}^s e^{-c_1s\log n/4}=\sum_{s=r_1}^{r_2}\bfrac{n^{1-c_1/4}e}{s}^s=o(1). \] If $r_2<|S|\leq r_3$ then $q>1/20$. So, \[ \mathbb{P}\brac{\neg\eqref{k3}}\leq \sum_{s=r_2}^{r_3}\binom{n}{s} \mathbb{P}\brac{Bin\brac{n,\frac{1}{20}}\leq \frac{n}{40}} \leq \sum_{s=r_2}^{r_3}\bfrac{ne}{s}^s e^{-n/160}\leq 2(2000)^{n/2000} e^{-n/160}=o(1). \] If $r_3<|S|\leq r_4$ then $q\geq 1-n^{-c_1/3000}$. So, \[ \mathbb{P}\brac{\neg\eqref{k4}}\leq 2^nn^{-(n-n_0)c_1/3000}=o(1), \] where $n_0=n-\frac{3000n}{c_1\log n}$. If $r_4< |S|$, let $t=n-|S|$.
Then, $q\geq 1-n^{-c_1/2}$ and so \[ \mathbb{P}(\neg\eqref{k5})\leq \sum_{t=1}^{n-r_4}\binom{n}{t}\binom{n}{t/2}(1-q)^{t/2}\leq \sum_{t=1}^{n-r_4} \bfrac{ne}{t}^{2t}n^{-c_1t/4}=o(1). \] \end{proof} \begin{lemma}\label{lemx} W.h.p., no edge of length at least $w_1$ appears in any $M_r,r\leq n$. \end{lemma} \begin{proof} We first consider $r=1,2,\ldots,r_1=p^{-1}\log^{1/2}n$. If $a\in A_r$ and $w(a,\f_r(a))>w_0$ then \eqref{k1} implies that there are at least $r_1$ choices of $b\in B\setminus \f(A_r)$ such that we can reduce the matching cost by replacing $(a,\f_r(a))$ by $(a,b)$. We now consider $r>r_1$. Choose $a\in A_r$ and let an alternating path $P=(a=u_1,v_1,\ldots,v_{k-1},u_k,\ldots)$ be {\em acceptable} if (i) $u_1,\ldots,u_k,\ldots\in A$, $v_1,\ldots,v_{k-1},\ldots\in B$, (ii) $(u_{i+1},v_i)\in M_r, i=1,2,\ldots$ and (iii) $w(u_i,v_i)\leq w_0,\,i=1,2,\ldots$. Now consider the sequence of sets $S_0=\set{a},S_1,S_2,\ldots,S_i,\ldots$ defined as follows, where $T_i=N_0(S_i)$: {\bf Case (a):} $T_i\subseteq \f(A_r)$. In this case we define $S_{i+1}=\f_r^{-1}(T_i)$. By construction then, every vertex in $S_j,j\leq i+1$ is the endpoint of some acceptable alternating path. {\bf Case (b):} $T_i\setminus \f(A_r)\neq \emptyset$. In this case there exists $b\in T_i$ which is the endpoint of some acceptable augmenting path. It follows from \eqref{k2} applied to $S_i$ that w.h.p. there exists $k=O\bfrac{\log n}{\log\log n}$ such that $|N_0(S_k)|>r$ and so Case (b) holds. This implies that if $r_1\leq r\leq r_2$ then $w(a,\f_r(a))\leq w_0\log n$ for all $a\in A_r$. For if $w(a,\f_r(a))>w_0\log n$ then there are at least $\Omega(r\log n)$ choices of $b\in B\setminus \f(A_r)$ such that we can reduce the matching cost by deleting $(a,\f_r(a))$ and changing $M_r$ via an acceptable augmenting path from $a$ to $b$. The extra cost of the edges added in this path is $o(w_0\log n)$. Now consider $r_2<r\leq r_3$. We know that w.h.p.
there is $k=o(\log n)$ such that $|S_k|>r_2$, and then by \eqref{k3} we have that w.h.p. $|N_0(S_{k+1})|>n/40>r$, so we are in Case (b) and there is a low cost augmenting path for every $a$, as in the previous case. When $r_3<|S_k|\leq r_4$ we use the same argument and find by \eqref{k4} that w.h.p. $|N_0(S_{k+1})|>r_4 \geq r$ and there is a low cost augmenting path. Similarly for $r>r_4$, using \eqref{k5}. Finally note that the number of edges in the augmenting paths we find is always at most $o(\log n)+\log_2n \leq 2\log n$. \end{proof} This also proves that \[ \mathbb{E}(C(n,n)-C(n,n-r_0))=O\brac{n^{\b}w_1}=o(1), \] provided \beq{bg5}{ \b<1-\eta. } This verifies \eqref{later}. To prove \eqref{smallone} we argue \mult{ff}{ \mathbb{P}\brac{\exists a\in A:\card{\set{e:a\in e,X_e\leq w_1}}\geq \frac{\log^3n}{p}}\leq \mathbb{P}\brac{Bin\brac{\alpha n,w_1}\geq \frac{\log^3n}{p}}\\ \leq \binom{\alpha n}{p^{-1}\log^3n}w_1^{p^{-1}\log^3n}\leq \bfrac{e\alpha npw_1}{\log^3n}^{p^{-1}\log^3n}=o(1). } This verifies \eqref{smallone}. We finally consider \eqref{dlarge}. Consider how a vertex $a\in A$ loses neighbors in $B\setminus B_r$. It can lose up to $\n_2$ for the times when $a=a_\s$. Otherwise, it loses a neighbor when $a_\s\neq a$ chooses a common neighbor with $a$. The important point here is that this choice depends on the structure of $G$, but not on the weights of edges incident with $a$. It follows that the cheapest neighbors at any time are randomly distributed among the current set of available neighbors. To get to the point where $a_\s=a$ and $\gd_r\leq \n_1$, we must have at least one of the $\n_2$ original cheapest neighbors occurring in a random $\n_1$-subset of a set of size $\approx \m_r=\min\set{\alpha n,n-r}$. This has probability $O(\n_1\n_2/\m_r)$ and \eqref{dlarge} follows from the Markov inequality.
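The constraints \eqref{bg1}, \eqref{bg3}, \eqref{bg3a}, \eqref{bg4}, \eqref{bdx} and \eqref{bg5} accumulated in the proof are simultaneously satisfiable; the choice made below can be checked mechanically, e.g. with the following Python sketch using exact rational arithmetic:

```python
from fractions import Fraction as F

eta = F(1, 25)
beta = F(5, 6) + eta
gamma = F(1, 3) - eta

constraints = {
    "bg1":  beta > 1 - gamma / 2,                 # \eqref{bg1}
    "bg3":  5 * gamma + 3 * eta - 2 * beta < 0,   # exponent in \eqref{bg3} is negative
    "bg3a": eta < beta / 20,                      # \eqref{bg3a}
    "bg4":  eta < F(1, 6),                        # \eqref{bg4}
    "bdx":  beta > F(2, 3),                       # \eqref{bdx}
    "bg5":  beta < 1 - eta,                       # \eqref{bg5}
}
print(constraints)  # all values are True for this choice
```

The margins are not large: for instance $\eta=1/25=0.04$ sits just below $\b/20\approx0.0437$ in \eqref{bg3a}.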
We finally choose $\b,\g,\eta$ such that \eqref{bg1}, \eqref{bg3}, \eqref{bg3a}, \eqref{bg4}, \eqref{bdx} and \eqref{bg5} hold. We let $\b=5/6+\eta$ and then we choose $\g=1/3-\eta$ and $\eta=1/25$. \section{Final remarks} We have shown that adding sufficiently many random edges is enough to ``smooth out'' the optimal value in certain optimization problems. There are several questions that remain. The first is to remove the pseudo-randomness requirement from Theorem \ref{th3}. The problem is to control the sizes of the $\gd_r$. Another possibility is to consider matchings and 2-factors in arbitrary regular graphs, not just bipartite ones. Then one can consider the Travelling Salesperson problem. We could also consider relaxing $\alpha$ to be $o(1)$, and we could consider more general distributions than $E(1)$.
\section{Introduction} \label{sec:intro} The study of $B$-meson decays, especially the exclusive semileptonic and two-body nonleptonic decays, provides rich information for testing and understanding the standard model (SM). In the past two decades, with the running and upgrading of the $B$-factories, a great amount of experimental data has been accumulated. Although a lot of models and/or approaches have been developed in theory, the poor knowledge of nonperturbative quantum chromodynamics (QCD) effects still limits theoretical predictions severely. For two-body nonleptonic decays of the $B$-meson, the QCD factorization \cite{Beneke:1999br,Beneke:2000ry,Beneke:2001ev,Beneke:2003zv} and perturbative QCD approaches \cite{Keum:2000ph,Keum:2000wi,Lu:2000em,Keum:2002cr,Keum:2003js} have been developed, which allow us to separate the nonperturbative effects out as universal quantities, such as the light-cone distribution amplitudes (LCDA) and/or form factors. The $B$-meson LCDA has been studied extensively. Several forms of the distribution amplitudes have been proposed or obtained by theoretical methods such as solving the equations of motion in the literature \cite{Grozin:1996pq,Beneke:2000wa,Lange:2003ff,Lee:2005gza,Kawamura:2001jm,Huang:2004na,Huang:2005kk,Hwang:2010hw,Bell:2013tfa}. Inspired by the construction of the initial bound state in Ref.\cite{Leutwyler:1984je} and based on our previous works on the mass spectrum and wave functions of the $B$-meson \cite{Yang:2011ie,Liu:2013maa,Liu:2015lka}, we try an alternative way to study the distribution amplitudes with the help of the wave functions obtained in the relativistic potential model \cite{Liu:2013maa,Liu:2015lka}. Considering the recent experimental data on the pure leptonic decays of $B$ mesons, we focus on a careful investigation of the decay constants and the distribution amplitudes (DAs) of $B$-mesons in this paper.
In general, the decay constants of charged heavy-light mesons are related directly to the pure leptonic decay widths, and thus measuring decay constants provides a chance to check different theoretical models and may also give some hints of physics beyond the SM. During the past decades, many methods have been applied to the study of the decay constants, such as QCD sum rules \cite{Penin:2001ux,Bordes:2004vu,Bordes:2005wi,Lucha:2011zp,Narison:2012xy,Gelhausen:2013wia,Narison:2015nxh}, the Bethe-Salpeter equation \cite{Wang:2004xs,Cvetic:2004qg}, the field correlator method \cite{Badalian:2007km}, the soft-wall holographic approach \cite{Branz:2010ub}, the potential models \cite{Godfrey:1985xj,Colangelo:1990rv,DiPierro:2001dwf,Ebert:2006hj,Yang:2011ie}, and the lattice QCD simulations \cite{Davies:2003ik,Gray:2005ad,Davies:2010ip,McNeile:2011ng,Bazavov:2011aa,Becirevic:2012ti,Na:2012kp,Dowdall:2013tga}, etc. Up to now there are still large uncertainties in the value of $\abs{V_{ub}}$ \cite{Agashe:2014kda}, and only the pure leptonic decay mode of the $B$ meson with a $\tau$ lepton in the final state has been measured in experiment \cite{Adachi:2012mm,Kronenbitter:2015kls,Lees:2012ju,Aubert:2009wt} (also with large uncertainties). Our result for the branching ratio of the $B\to\tau \nu$ decay lies well within the experimental error bars \cite{Adachi:2012mm,Kronenbitter:2015kls,Lees:2012ju,Aubert:2009wt,Rosner:2015wva}. Further experimental tests with enhanced precision are needed in the future (most likely from the Belle II / SuperKEKB collaboration \cite{Barrett:2015gvz,Wang:2015kmm}). We study the $B$-meson distribution amplitudes in this work. The analytical forms both in coordinate and momentum space are obtained. When they are transformed to the commonly used form of the LCDA, the figures show that they obey the model-independent constraints \cite{Lee:2005gza}.
We also consider the pure leptonic decays of the $B$-meson up to one-loop level in the QCD corrections. We find that the one-loop corrections to the hard-scattering kernel in QCD vanish after subtracting the infrared divergence by using the factorization method. The paper is organized as follows. In Sec.\ref{sec:dc}, we calculate the decay constants of the $B$ and $B_s$ mesons. The branching ratios of the leptonic decays of the $B$ meson are also calculated and compared with experimental data. In Sec.\ref{sec:da}, the matrix element between the $B$ meson and the vacuum state, which defines the distribution amplitudes (DAs), is studied. The analytical forms of the matrix element and the DAs are obtained and figures are shown as illustrations. We finally obtain a compact expression for the matrix element. Section \ref{sec:qcdf} is devoted to the study of the pure leptonic decay of the $B$-mesons up to one-loop level in QCD, and Sec.\ref{sec:concl-or-disc} contains the conclusion and discussion. \section{Decay Constants of $B$ and $B_s$ Mesons} \label{sec:dc} Recently, the spectra of the heavy-light quark-antiquark systems have been studied in the relativistic potential model in our previous works \cite{Yang:2011ie,Liu:2013maa,Liu:2015lka}, where hyperfine interactions are included \cite{Liu:2013maa,Liu:2015lka}. The whole spectra of the $B$ and $D$ systems are in good agreement with experimental measurements. Hence in this work, we extend our previous works \cite{Liu:2013maa,Liu:2015lka} by studying the decay properties of the $B$ meson with the wave functions obtained in the relativistic potential model. We first study the decay constants of the $B$ and $B_s$ mesons, and then give a compact form of the distribution amplitudes of the $B$-meson, which shall be useful for studying $B$ decays.
The decay constant of a pseudoscalar meson is defined by the matrix element of the axial current between the meson and the vacuum state \begin{IEEEeqnarray}{cx} \mel{0}{\bar{q}\gamma^{\mu}\gamma^5 Q}{P}=if_PP^{\mu},\label{eq:1} \end{IEEEeqnarray} where the axial current is composed of a light antiquark field $\bar{q}$ and a heavy quark field $Q$. The pseudoscalar meson as a bound state of a quark and antiquark system can be described by~\cite{Leutwyler:1984je,Yang:2011ie} \begin{IEEEeqnarray}{rl} \ket{P(\vv{P})}=\frao{\sqrt{N_L}}\frao{\sqrt{3}} \sum_i&\int\!\dd[3]{k_q}\dd[3]{k_Q}\dlt{3}{\vv{P}-\vv{k}_q -\vv{k}_Q}\varPsi_0(\vv{k}_q)\IEEEnonumber\\ &\frao{\sqrt{2}}\qty[c^{i\dagger}(\vv{k}_Q,\uparrow) b^{i\dagger}(\vv{k}_q,\downarrow)- c^{i\dagger}(\vv{k}_Q,\downarrow) b^{i\dagger}(\vv{k}_q,\uparrow)] \ket{\stackrel{\phantom{\rightarrow}}{0}},\label{eq:2} \end{IEEEeqnarray} where $N_L$ is the normalization factor, and the normalization conditions will be shown explicitly below. $i$ stands for the QCD color index and $\frao{\sqrt{3}}$ is the corresponding normalization factor. The factor $\frao{\sqrt{2}}$ is the normalization factor for the quark spin states, which are indexed by up or down arrows. Inside the square parenthesis, $b^{i\dagger}$ and $c^{i\dagger}$ are the creation operators of the light antiquark $\bar{q}$ and the heavy quark $Q$, respectively. The function $\varPsi_0(\vv{k}_q)$ is the normalized wave function of the pseudoscalar meson in the ground state in momentum space, which describes the wave function of the quark and antiquark constituents in the meson. It is noted here that these quark constituents are effective quarks carrying a gluon cloud, and therefore the quarks have constituent masses~\cite{Wirbel:1985ji}.
The wave function can be obtained by solving the Schr\"odinger-type wave equation with relativistic dynamics \begin{IEEEeqnarray}{cx} (H_0+H')\varPsi(\vv{r})=E\varPsi(\vv{r}),\label{eq:3} \end{IEEEeqnarray} where $H_0+H'$ is the effective Hamiltonian (its explicit expression can be found in Ref.~\cite{Liu:2015lka}) and $E$ is the energy of the meson. The first term $H_0$ contains the kinetic part and the effective potential, which is taken as a combination of a Coulomb term and a linear confining term inspired by QCD~\cite{Godfrey:1985xj,Eichten:1978tg,Eichten:1979ms}. The second term $H'$ is the spin-dependent part of the Hamiltonian, including the contributions of the one-gluon-exchange diagram in the nonrelativistic approximation \cite{DeRujula:1975smg,Godfrey:1985xj} and new terms which account for the contributions of nonperturbative dynamics in the bound state system and relativistic corrections for the light quark in the heavy meson~\cite{Liu:2013maa,Liu:2015lka}. The normalization conditions for the wave function are \begin{IEEEeqnarray}{cx} \int\dd[3]{k}\abs{\varPsi_0(\vv{k})}^2=1, \IEEEyesnumber\IEEEyessubnumber\label{eq:4a}\\ \qty{c(\vv{k},s),c^{\dagger}(\vv{k}',s')} =\delta_{ss'}\dlt{3}{\vv{k}-\vv{k}'},\IEEEyessubnumber\label{eq:4b}\\ \ip{P(\vv{P})}{P(\vv{P}')}=(2\pi)^32E \dlt{3}{\vv{P}-\vv{P}'}.\IEEEyessubnumber\label{eq:4c} \end{IEEEeqnarray} Note that we omit the color index of the operator $c$ and use $s,s'$ to denote the spin states. Substituting Eq.\eqref{eq:2} into Eq.\eqref{eq:4c} and using Eq.\eqref{eq:4a} and Eq.\eqref{eq:4b}, we obtain the normalization factor \begin{IEEEeqnarray}{cx} N_L=\frao{(2\pi)^32E}.\label{eq:5} \end{IEEEeqnarray} The wave function has been solved numerically in our previous work \cite{Liu:2015lka}.
For the $B$ meson, the wave function can be expressed as \begin{IEEEeqnarray}{cx} \varPsi_0(\vv{k})= \frac{\varphi_0(|\vv{k}|)}{|\vv{k}|} Y_{00}(\theta,\phi),\label{eq:6} \end{IEEEeqnarray} where $\varphi_0(|\vv{k}|)$ is the reduced wave function. The numerical result for $\varphi_0(|\vv{k}|)$ is shown in Fig.\ref{fig:1}. \begin{figure}[tbp] \centering \includegraphics[width=0.5\textwidth]{Fig-1.pdf} \caption{\label{fig:1} Reduced wave functions for $B$-meson.} \end{figure} Since it is convenient to have an analytical form of the wave function $\varPsi_0(\vv{k})$ for the numerical calculation, we fit the wave function obtained in our previous work \cite{Liu:2015lka} with an exponential function and obtain the fitted form of the $B_{(s)}$-meson wave function, with combined theoretical uncertainties, as \begin{IEEEeqnarray}{cx} \varPsi_0(\vv{k})= a_1e^{a_2|\vv{k}|^2+a_3|\vv{k}|+a_4}\label{eq:7}, \end{IEEEeqnarray} where the parameters including uncertainties for the $B$ meson are \begin{IEEEeqnarray}{llx} a_1=4.55_{-0.30}^{+0.40}\,\mathrm{GeV}^{-3/2},\quad& a_2=-0.39_{-0.20}^{+0.15}\,\mathrm{GeV}^{-2};\nonumber\\ a_3=-1.55\!\pm\!0.20\,\mathrm{GeV}^{-1},\quad& a_4=-1.10_{-0.05}^{+0.10}, \end{IEEEeqnarray} and for the $B_s$ meson: \begin{IEEEeqnarray}{llx} a_1=1.60_{-0.18}^{+0.15}\,\mathrm{GeV}^{-3/2},\quad& a_2=-0.43_{-0.10}^{+0.15}\,\mathrm{GeV}^{-2};\nonumber\\ a_3=-1.28_{-0.20}^{+0.18}\,\mathrm{GeV}^{-1},\quad& a_4=-0.22_{-0.08}^{+0.06}. \end{IEEEeqnarray} The uncertainties of the parameters ensure that the deviation of the wave function from its central value is approximately 8\%. The fits of the wave functions are illustrated in Fig.\ref{fig:2}, where the grey bands denote the relevant uncertainties for the wave functions of the $B$ and $B_s$ mesons. \begin{figure}[tbp] \centering \includegraphics[width=\textwidth]{Fig-2.pdf} \caption{\label{fig:2}The wave functions (W.F.)
of $B$-meson.} \end{figure} In the calculation of the decay constants, four-momentum conservation should hold, \begin{IEEEeqnarray}{cx} k_q+k_Q=P,\label{eq:8} \end{IEEEeqnarray} where $k_{q,Q}$ and $P$ are the momenta of the quark constituents and the meson, respectively. With the restriction above, we consider the ACCMM scenario \cite{Altarelli:1982kh,Colangelo:1998eb}, where the light quark is kept on-shell while the heavy quark is off-shell, \begin{IEEEeqnarray}{cx} E_q+E_Q=m_P,\IEEEyesnumber\IEEEyessubnumber\label{eq:9a}\\ E_q^2=m_q^2+|\vv{k}|^2,\IEEEyessubnumber\label{eq:9b}\\ m_Q^2(\vv{k})=E_Q^2-|\vv{k}|^2.\IEEEyessubnumber\label{eq:9c} \end{IEEEeqnarray} Equation \eqref{eq:9a} is the energy conservation in the meson rest frame. We require the running mass of the heavy quark to be positive, $m_Q(\vv{k})\geq 0$. Thus the momentum $|\vv{k}|$ is restricted to below a particular value, which is shown by the cut lines in Figs.\ref{fig:1} and \ref{fig:2}. Substituting Eq.\eqref{eq:2} into Eq.\eqref{eq:1} in the rest frame and contracting the quark (antiquark) creation operators with the annihilation operators in the quark field of the axial current $\bar{q}\gamma^{\mu}\gamma^5Q$, we obtain \begin{IEEEeqnarray}{cx} f_P=\sqrt{\frac{3}{(2\pi)^3m_P}}\int\!\dd[3]{k}\varPsi_0(\vv{k}) \frac{\qty(E_q+m_q)\qty(E_Q+m_Q)-|\vv{k}|^2} {\sqrt{E_qE_Q\qty(E_q+m_q)\qty(E_Q+m_Q)}},\label{eq:10} \end{IEEEeqnarray} where the integral over the variable $\vv{k}$ is limited to the finite range allowed by Eqs.\eqref{eq:9a}--\eqref{eq:9c}.
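As an illustration, Eq.\eqref{eq:10} can be evaluated numerically with the central values of the fitted wave function in Eq.\eqref{eq:7} and the kinematics of Eqs.\eqref{eq:9a}--\eqref{eq:9c}. The following is a minimal Python sketch (not the code used for the quoted results); it also cross-checks the normalization condition of Eq.\eqref{eq:4a} for the fit:

```python
import numpy as np

# Central values of the fitted B-meson wave function, Eq. (7), and the
# masses of Eqs. (11)-(12); all numbers are the quoted central values.
a1, a2, a3, a4 = 4.55, -0.39, -1.55, -1.10   # GeV^(-3/2), GeV^(-2), GeV^(-1), 1
m_q, m_B = 0.06, 5.28                        # GeV

def psi0(k):
    """Fitted wave function Psi_0(|k|) of Eq. (7)."""
    return a1 * np.exp(a2 * k**2 + a3 * k + a4)

def trapz(y, x):
    """Trapezoidal rule (avoids the np.trapz/np.trapezoid rename)."""
    return 0.5 * float(np.sum((y[1:] + y[:-1]) * np.diff(x)))

# Kinematics, Eqs. (9a)-(9c): requiring m_Q(k) >= 0 cuts the integration
# off at |k|_max = (m_B^2 - m_q^2) / (2 m_B).
k_max = (m_B**2 - m_q**2) / (2.0 * m_B)
k = np.linspace(1e-8, k_max, 20000)
E_q = np.sqrt(m_q**2 + k**2)
E_Q = m_B - E_q
m_Q = np.sqrt(np.maximum(E_Q**2 - k**2, 0.0))   # running heavy-quark mass

# Check of the normalization condition, Eq. (4a): should be close to 1.
norm = trapz(4.0 * np.pi * k**2 * psi0(k)**2, k)

# Decay constant, Eq. (10); spherical symmetry gives d^3k -> 4 pi k^2 dk.
num = (E_q + m_q) * (E_Q + m_Q) - k**2
den = np.sqrt(E_q * E_Q * (E_q + m_q) * (E_Q + m_Q))
f_B = np.sqrt(3.0 / ((2.0 * np.pi)**3 * m_B)) * trapz(
    4.0 * np.pi * k**2 * psi0(k) * num / den, k)

print(f"normalization = {norm:.2f}, f_B = {1e3 * f_B:.0f} MeV")
```

With the central fit parameters, the normalization comes out close to unity and $f_B$ close to the value quoted in Eq.\eqref{eq:13}.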
The parameters used in this work are \cite{Liu:2015lka} \begin{IEEEeqnarray}{ll} m_s=0.32\,\mathrm{GeV}\qc&m_u=m_d=0.06\,\mathrm{GeV},\;\; m_b=4.99\,\mathrm{GeV},\label{eq:11} \end{IEEEeqnarray} and the meson masses are taken from the PDG~\cite{Agashe:2014kda} \begin{IEEEeqnarray}{cx} m_B=5.28\,\mathrm{GeV}\qc m_{B_s}=5.37\,\mathrm{GeV}.\label{eq:12} \end{IEEEeqnarray} The errors are estimated by varying the parameters in the allowed ranges. The total errors are around 7\% for the decay constants of the $B$ and $B_s$ mesons. We also calculate the ratio of the decay constants of the $B$ and $B_s$ mesons, $f_{B_s}/f_B$. The final results obtained are \begin{IEEEeqnarray}{cx} f_B=219\pm 15\,\mathrm{MeV}\qc f_{B_s}=266\pm 19\,\mathrm{MeV}\qc f_{B_s}/f_B=1.21\pm 0.09\,.\label{eq:13} \end{IEEEeqnarray} During the past decades, many theoretical methods and models have been developed for the calculation of the $B$-meson decay constants. In this paper, we list some of the results for comparison in Table \ref{tab:i}, where one can see that our results are consistent with most of the theoretical predictions. \begin{table}[tbp] \centering \begin{tabular}{l@{\quad}l@{\quad}l@{\quad}l@{\quad}l} \hline \hline \textbf{Reference}&\textbf{Method} &$f_B$\,(MeV)&$f_{B_s}$\,(MeV)&$f_{B_s}/f_B$\\ \hline this work&RPM$^{\ast}$ &$219\pm 15$&$266\pm 19$&$1.21\pm 0.09$\\ Colangelo 91 \cite{Colangelo:1990rv}&RPM &$230\pm 35$&$245\pm 37$&$1.07\pm 0.17$\\ Cveti\v{c} 04 \cite{Cvetic:2004qg}&QM BS$^{\ddag}$ &$196\pm 29$&$216\pm 32$&$1.10\pm 0.18$\\ Badalian 07 \cite{Badalian:2007km}&FCM$^{\pounds}$ &$182\pm 8$&$216\pm 8$&$1.19\pm 0.03$\\ Hwang 09 \cite{Hwang:2009qz}&LFQM$^{\S}$ &$204\pm 31$&$270.0\pm 42.8$&$1.32\pm 0.08$\\ HPQCD 11 \cite{McNeile:2011ng}&LQCD (2+1)$^{\P}$ &\qquad --&$225\!\pm\! 3\!\pm\! 3$&\qquad --\\ FNAL/MILC 11 \cite{Bazavov:2011aa}&LQCD (2+1) &$196.9\!\pm\! 5.5\!\pm\! 7.0$&$242.0\!\pm\! 5.1\!\pm\! 8.0$ &$1.229\!\pm\! 0.013\!\pm\! 0.023$\\ HPQCD 12 \cite{Na:2012kp}&LQCD (2+1) &$191\!\pm\!
1\!\pm\! 8$&$228\!\pm\! 3\!\pm\! 10$ &$1.188\!\pm\! 0.012\!\pm\! 0.013$\\ Narison 12 \cite{Narison:2012xy}&QCD SR$^{\dag}$ &$206\pm 7$&$234\pm 5$&$1.14\pm 0.03$\\ Gelhausen 13 \cite{Gelhausen:2013wia}&QCD SR &$207_{-9}^{+17}$&$242_{-12}^{+17}$&$1.17_{-0.04}^{+0.03}$\\ HPQCD 13 \cite{Dowdall:2013tga}&LQCD (2+1+1) &$184\pm 4$&$224\pm 5$&$1.217\pm 0.008$\\ ETM 13 \cite{Carrasco:2013naa}&LQCD (2+1+1) &$196\pm 9$&$235\pm 9$&$1.201\pm 0.025$\\ Aoki 14 \cite{Aoki:2014nga}&LQCD (2+1) &$218.8\!\pm\! 6.4\!\pm\! 30.8$&$263.5\!\pm\! 4.8\!\pm\! 36.7$ &$1.193\!\pm\! 0.020\!\pm\! 0.044$\\ RBC/UKQCD 14 \cite{Christ:2014uea}&LQCD (2+1) &$195.6\!\pm\! 6.4\!\pm\! 13.3$&$235.4\!\pm\! 5.2\!\pm\! 11.1$ &$1.223\!\pm\! 0.014\!\pm\! 0.070$\\ Wang 15 \cite{Wang:2015mxa}&QCD SR &$194\pm 15$&$231\pm 16$&$1.19\pm 0.10$\\ \hline \hline \end{tabular} \\\vspace{3mm} \begin{minipage}{1.0\linewidth} \begin{itemize} {\footnotesize \item[$^{\ast}$] Relativistic potential model. \\\vspace{-5mm} \item[$^{\dag}$] QCD sum rules. \\\vspace{-5mm} \item[$^{\ddag}$] Quark model based on the Bethe-Salpeter equation. \\\vspace{-5mm} \item[$^{\P}$] Lattice QCD with the number of dynamical quark flavors $N_f$ in the parentheses. \\\vspace{-5mm} \item[$^{\S}$] Light-front quark model. \\\vspace{-5mm} \item[$^{\pounds}$] Field correlator method. } \end{itemize} \end{minipage} \caption{\label{tab:i}Theoretical results for the decay constants of $B$-mesons.} \end{table} The branching ratio of the leptonic decay of the $B$ meson can be calculated by the following formula \begin{IEEEeqnarray}{cx} \mathcal{B}\qty(B^{\pm}\rightarrow l^{\pm}\nu)=\frac{G_F^2m_l^2m_B}{8\pi} \qty(1-\frac{m_l^2}{m_B^2})^2f_B^2\abs{V_{ub}}^2\tau_B,\label{eq:14} \end{IEEEeqnarray} where $G_F$ is the Fermi constant, $V_{ub}$ the Cabibbo-Kobayashi-Maskawa (CKM) matrix element, $m_B$ and $m_l$ the masses of the $B^{\pm}$ meson and the lepton, respectively, and $\tau_B$ is the lifetime of the $B^{\pm}$ meson.
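Equation \eqref{eq:14} is simple to evaluate numerically. A minimal sketch for the $\tau$ channel, with assumed central inputs ($f_B$ from Eq.\eqref{eq:13}, $|V_{ub}|$ from Eq.\eqref{eq:16}, and PDG-style values for $G_F$, the masses, and the $B^{\pm}$ lifetime):

```python
import math

# Branching ratio of B+ -> tau+ nu_tau from Eq. (14). All inputs are
# assumed central values, not part of the fit itself.
G_F   = 1.1664e-5                  # Fermi constant, GeV^-2
m_B   = 5.279                      # B+ mass, GeV
m_tau = 1.777                      # tau mass, GeV
f_B   = 0.219                      # decay constant, GeV (Eq. (13))
V_ub  = 4.09e-3                    # CKM element (Eq. (16))
tau_B = 1.638e-12 / 6.582e-25      # B+ lifetime, s -> GeV^-1 via hbar

br_tau = (G_F**2 * m_tau**2 * m_B / (8.0 * math.pi)
          * (1.0 - m_tau**2 / m_B**2)**2
          * f_B**2 * V_ub**2 * tau_B)
print(f"B(B+ -> tau nu) = {br_tau:.2e}")
```

The result lands near the central value quoted in Eq.\eqref{eq:15c}.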
In this work, we obtain \begin{IEEEeqnarray}{rCl} \mathcal{B}\qty(B^+\rightarrow e^+\nu_e)&= &\qty(1.17\pm 0.18)\times 10^{-11},\IEEEyesnumber\IEEEyessubnumber\label{eq:15a}\\ \mathcal{B}\qty(B^+\rightarrow \mu^+\nu_{\mu})&= &\qty(5.01\pm 0.78)\times 10^{-7},\IEEEyessubnumber\label{eq:15b}\\ \mathcal{B}\qty(B^+\rightarrow \tau^+\nu_{\tau})&= &\qty(1.41\pm 0.22)\times 10^{-4},\IEEEyessubnumber\label{eq:15c} \end{IEEEeqnarray} where the errors mainly come from the uncertainties of the decay constant $f_B$ and the CKM matrix element $|V_{ub}|$~\cite{Agashe:2014kda} \begin{IEEEeqnarray}{cx} |V_{ub}|=\qty(4.09\pm 0.39)\times 10^{-3}.\label{eq:16} \end{IEEEeqnarray} The branching ratio of the $B^+\rightarrow\tau^+\nu_{\tau}$ channel has been measured by the Belle and \emph{BABAR} collaborations \cite{Adachi:2012mm,Kronenbitter:2015kls,Lees:2012ju,Aubert:2009wt}. The results are shown in Table \ref{tab:ii}. \begin{table}[tbp] \centering \begin{tabular}{|c|c|l|} \hline \textbf{Experiment}&\textbf{Tag} &$\mathcal{B}$(units of $10^{-4}$)\\ \hline Belle\cite{Adachi:2012mm}&Hadronic &\rule{0pt}{3ex}$0.72_{-0.25}^{+0.27}\pm 0.11$\\ Belle\cite{Kronenbitter:2015kls}&Semileptonic &\rule{0pt}{3ex}$1.25\pm 0.28\pm 0.27$\\ \hline \emph{BABAR}\cite{Lees:2012ju}&Hadronic &\rule{0pt}{3ex}$1.83_{-0.49}^{+0.53}\pm 0.24$\\ \emph{BABAR}\cite{Aubert:2009wt}&Semileptonic &\rule{0pt}{3ex}$1.7\pm 0.8\pm 0.2$\\ \hline \end{tabular} \caption{\label{tab:ii}Experimental results for $\mathcal{B}(B^+\rightarrow \tau^+\nu_{\tau})$.} \end{table} Taking the large uncertainties of the experimental data into consideration, our predicted branching ratio for the decay channel $B^+\rightarrow\tau^+\nu_{\tau}$ [Eq.\eqref{eq:15c}] is consistent with the experimental results. As an upgrade of the Belle / KEKB experiment, Belle II / SuperKEKB will start taking data in 2018.
With a design luminosity of $8\times 10^{35}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$, about 40 times larger than that of its predecessor, a data sample corresponding to $50\,\mathrm{ab}^{-1}$ will be accumulated within five years of operation~\cite{Wang:2015kmm}. It is expected to reduce both the statistical and systematic errors of the $B^+\rightarrow\tau^+\nu_{\tau}$ decay mode by a factor of about 7~\cite{Pakhlov:2016yzk}. \section{$B$-Mesons Distribution Amplitudes} \label{sec:da} Based on the success of our predictions for the mass spectra \cite{Yang:2011ie,Liu:2013maa,Liu:2015lka} and the decay constants of $B$-mesons, we continue to study the matrix element of the $B$ meson which defines the DAs. The matrix element and DAs are generally used in studying hadronic decays of the $B$ meson. Generalizing the current in the definition of the decay constant in Eq.\eqref{eq:1} from local to nonlocal operators and making use of the Fierz identity, we obtain the matrix element between the $B$ meson and the vacuum state in coordinate space \begin{IEEEeqnarray}{rCl} \tilde{\Phi}_{\alpha\beta}(z)&\equiv&\mel{0} {\bar{q}_{\beta}(z)[z,0]Q_{\alpha}(0)}{\bar{B}(P)} \IEEEyesnumber\IEEEyessubnumber\label{eq:17a}\\ &=&\frao{4}\mel{0}{\bar{q}(z)Q(0)}{\bar{B}}I_{\alpha\beta} +\frao{4}\mel{0}{\bar{q}(z)\gamma^5Q(0)} {\bar{B}}(\gamma^5)_{\alpha\beta}\IEEEnonumber\\ &&+\frao{8}\mel{0}{\bar{q}(z)\sigma^{\mu\nu}\gamma^5Q(0)} {\bar{B}}(\sigma_{\mu\nu}\gamma^5)_{\alpha\beta} +\frao{4}\mel{0}{\bar{q}(z)\gamma^{\mu}Q(0)} {\bar{B}}(\gamma_{\mu})_{\alpha\beta}\IEEEnonumber\\ &&-\frao{4}\mel{0}{\bar{q}(z)\gamma^{\mu}\gamma^5Q(0)} {\bar{B}}(\gamma_{\mu}\gamma^5)_{\alpha\beta}, \IEEEyessubnumber\label{eq:17b} \end{IEEEeqnarray} where $\sigma^{\mu\nu}=\frac{i}{2}\qty[\gamma^{\mu},\gamma^{\nu}]$, and $[z,0]$ stands for the path-ordered exponential, called the Wilson line, which connects the points $0$ and $z$.
The definition of the Wilson line is \begin{IEEEeqnarray}{cx} [z,0]\equiv\mathrm{P}\mathrm{exp} \qty(i\!\int_0^z\dd{x^{\mu}}\!A_{\mu}(x)).\label{eq:18} \end{IEEEeqnarray} According to the discrete symmetries $C$, $P$, and $T$, the matrix elements on the right-hand side of Eq.\eqref{eq:17b} are related to four DAs $\tilde{\phi}_i$ ($i=P,T,A1,A2$) as defined in Ref.\cite{Grozin:1996pq} \begin{IEEEeqnarray}{rCl} \mel{0}{\bar{q}(z)Q(0)}{\bar{B}}&=&0, \IEEEyesnumber\IEEEyessubnumber\label{eq:19a}\\ \mel{0}{\bar{q}(z)\gamma^5Q(0)}{\bar{B}}&=&-if_Bm_B\tilde{\phi}_P, \IEEEyessubnumber\label{eq:19b}\\ \mel{0}{\bar{q}(z)\sigma^{\mu\nu}\gamma^5Q(0)}{\bar{B}}&=& -if_B\tilde{\phi}_T\qty(P^{\mu}z^{\nu}-P^{\nu}z^{\mu}), \IEEEyessubnumber\label{eq:19c}\\ \mel{0}{\bar{q}(z)\gamma^{\mu}Q(0)}{\bar{B}}&=&0, \IEEEyessubnumber\label{eq:19d}\\ \mel{0}{\bar{q}(z)\gamma^{\mu}\gamma^5Q(0)}{\bar{B}}&=& f_B\qty(i\tilde{\phi}_{A1}P^{\mu}-m_B\tilde{\phi}_{A2}z^{\mu}), \IEEEyessubnumber\label{eq:19e} \end{IEEEeqnarray} where the DAs $\tilde{\phi}_i$ are functions of the coordinate $z$. In our scenario, we calculate these five matrix elements in the $B$-meson rest frame by using the $B$-meson state defined in Eq.\eqref{eq:2}. We confirm that the matrix elements in Eq.\eqref{eq:19a} and Eq.\eqref{eq:19d} are indeed zero, \begin{IEEEeqnarray}{cx} \mel{0}{\bar{q}(z)Q(0)}{\bar{B}}= \mel{0}{\bar{q}(z)\gamma^{\mu}Q(0)}{\bar{B}}=0.\label{eq:20} \end{IEEEeqnarray} For the pseudoscalar DA in Eq.\eqref{eq:19b}, we obtain \begin{IEEEeqnarray}{cx} \tilde{\phi}_P(z)=N_B\int\!\dd[3]{k}\varPsi_0(\vv{k}) \frac{-\qty[(E_q+m_q)(E_Q+m_Q)+|\vv{k}|^2]} {\sqrt{E_qE_Q(E_q+m_q)(E_Q+m_Q)}}e^{-ik_q\cdot z},\label{eq:21} \end{IEEEeqnarray} where $k_q^{\mu}=(E_q,\vv{k})$ is the four-momentum of the light quark in the meson rest frame, and \begin{equation} N_B\equiv\frac{i}{f_B}\sqrt{\frac{3}{(2\pi)^3m_B}}.
\end{equation} It should be understood that the wave function $\varPsi_0(\vv{k})$ may have an arbitrary phase, which can be adjusted to obtain a positive real decay constant according to the definition in Eq.\eqref{eq:1}. For the other DAs in Eqs.\eqref{eq:19c} and \eqref{eq:19e} (the detailed derivation can be found in Appendix A), we first introduce two functions $A_T$ and $A$, \begin{IEEEeqnarray}{rCl} A_T(k^1,k^2,k^3)&\equiv&\varPsi_0(\vv{k}) \frac{E_Q+m_Q+E_q+m_q}{\sqrt{E_qE_Q(E_q+m_q)(E_Q+m_Q)}}, \IEEEyesnumber\IEEEyessubnumber\label{eq:22a}\\ A(k^1,k^2,k^3)&\equiv&\varPsi_0(\vv{k}) \frac{E_Q+m_Q-E_q-m_q}{\sqrt{E_qE_Q(E_q+m_q)(E_Q+m_Q)}}, \IEEEyessubnumber\label{eq:22b} \end{IEEEeqnarray} where $k^1,k^2,k^3$ are the components of the light quark momentum $\vv{k}$, i.e., $\vv{k}=(k^1,k^2,k^3)$. Then we obtain the DAs as \begin{IEEEeqnarray}{rCl} \tilde{\phi}_T(z)&=&N_B \int\!\dd[3]{k}\qty[\frao{3}\sum_i\int_0^{k^i}\!\! A_T(\eta,\dots)\eta\dd{\eta}]e^{-ik_q\cdot z}, \IEEEyesnumber\IEEEyessubnumber\label{eq:23a}\\ \tilde{\phi}_{A2}(z)&=&N_B \int\!\dd[3]{k}\qty[\frao{3}\sum_i\int_0^{k^i}\!\! A(\eta,\dots)\eta\dd{\eta}]e^{-ik_q\cdot z}, \IEEEyessubnumber\label{eq:23b}\\ \tilde{\phi}_{A1}(z)&=&-N_B \int\!\dd[3]{k}e^{-ik_q\cdot z}\IEEEnonumber\\ &&\cdot \qty[\varPsi_0(\vv{k})\frac{(E_q+m_q)(E_Q+m_Q)-|\vv{k}|^2} {\sqrt{E_qE_Q(E_q+m_q)(E_Q+m_Q)}}+E_qA(k^1,k^2,k^3)]. \IEEEyessubnumber\label{eq:23c} \end{IEEEeqnarray} For the details of the summation in the square brackets containing the ellipsis, see Eq.\eqref{eq:A3}.
Now, with Eqs.(\ref{eq:19a})$\sim$(\ref{eq:19e}), the matrix element for $B$-meson in Eq.(\ref{eq:17a}) can be rewritten as \begin{IEEEeqnarray}{rl} \tilde{\Phi}_{\alpha\beta}(z)=\frac{-if_B}{4}\Big\{ \Big[m_B\tilde{\phi}_P&+\frao{2}\tilde{\phi}_{T} \qty(P^{\mu}z^{\nu}-P^{\nu}z^{\mu}) \sigma_{\mu\nu}\IEEEnonumber\\ &+\qty(\tilde{\phi}_{A1}P^{\mu}+im_B\tilde{\phi}_{A2}z^{\mu}) \gamma_{\mu}\Big]\gamma^5\Big\}_{\alpha\beta},\label{eq:24} \end{IEEEeqnarray} where the DAs are given in Eqs.\eqref{eq:21} and \eqref{eq:23a}--\eqref{eq:23c}. In order to obtain the expressions of the DAs in momentum space, we make use of the amplitude of a decay process which can be expressed as a convolution \cite{Beneke:2000wa} \begin{IEEEeqnarray}{cx} F=\int\!\dd[4]{z}\tilde{\Phi}_{\alpha\beta}(z) \tilde{T}_{\beta\alpha}(z).\label{eq:25} \end{IEEEeqnarray} Substituting Eq.\eqref{eq:24} into Eq.\eqref{eq:25} and with a few steps of calculation (see Appendix B for details), we obtain \begin{IEEEeqnarray}{rCl} \Phi_{\alpha\beta}(l)&=\Bigg\{\frac{-if_Bm_B}{4}\Bigg[\phi_P(l) &+\frac{i}{2}\phi_{T}(l)\sigma_{\mu\nu} \qty(v^{\mu}\pdv{l_{\nu}}-v^{\nu}\pdv{l_{\mu}})\IEEEnonumber\\ &&+\qty(\phi_{A1}(l)\slashed{v}-\phi_{A2}(l)\gamma_{\mu}\pdv{l_{\mu}}) \Bigg]\gamma^5\Bigg\}_{\alpha\beta}\label{eq:26} \end{IEEEeqnarray} and \begin{IEEEeqnarray}{cx} F=\int\!\dd[3]{l}\Phi_{\alpha\beta}(l) \eval{T_{\beta\alpha}(l)}_{l^2=m_q^2}.\label{eq:27} \end{IEEEeqnarray} It is understood that the derivative $\pdv{l_{\mu,\nu}}$ in Eq.\eqref{eq:26} (which is called the momentum space projector \cite{Beneke:2000wa,Wei:2002iu}) acts on the hard-scattering kernel $T_{\beta\alpha}(l)$ before $l=k_q$ is taken. 
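The decompositions in Eqs.\eqref{eq:17b} and \eqref{eq:24} rest on the completeness of the 16 Dirac bilinears $\{I,\gamma^5,\gamma^{\mu},\gamma^{\mu}\gamma^5,\sigma^{\mu\nu}\gamma^5\}$; the coefficients $\frao{4}$, $\frao{8}$ and the relative minus sign of the axial term can be checked numerically. A minimal sketch in the Dirac representation (an illustration, not part of the derivation):

```python
import numpy as np

# Check: for any 4x4 matrix M (playing the role of Phi in Eq. (17b)),
#   M = 1/4 tr(M) I + 1/4 tr(g5 M) g5 + 1/8 tr(s^{mn} g5 M) s_{mn} g5
#       + 1/4 tr(g^m M) g_m - 1/4 tr(g^m g5 M) g_m g5.
I2 = np.eye(2); Z2 = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])                    # Dirac representation
gu = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]  # gamma^mu
g5 = np.block([[Z2, I2], [I2, Z2]])
metric = np.diag([1.0, -1.0, -1.0, -1.0])
gl = [metric[m, m] * gu[m] for m in range(4)]           # gamma_mu (lowered)

def sigma(m, n):                                        # sigma^{mu nu}
    return 0.5j * (gu[m] @ gu[n] - gu[n] @ gu[m])

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

R = 0.25 * np.trace(M) * np.eye(4) + 0.25 * np.trace(g5 @ M) * g5
for m in range(4):
    R += 0.25 * np.trace(gu[m] @ M) * gl[m]
    R -= 0.25 * np.trace(gu[m] @ g5 @ M) * (gl[m] @ g5)
    for n in range(4):
        s_up = sigma(m, n)
        s_lo = metric[m, m] * metric[n, n] * s_up       # lower both indices
        R += 0.125 * np.trace(s_up @ g5 @ M) * (s_lo @ g5)

assert np.allclose(R, M)
print("Dirac-basis completeness relation verified")
```

Any representation of the gamma matrices would do equally well; only the trace orthogonality of the basis is used.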
For the DAs in momentum space, we obtain \begin{IEEEeqnarray}{rCl} \phi_P(k_q^{\mu})&=&-N_B\frac{(E_q+m_q)(E_Q+m_Q)+|\vv{k}|^2} {\sqrt{E_qE_Q(E_q+m_q)(E_Q+m_Q)}}\varPsi_0(\vv{k}), \IEEEyesnumber\IEEEyessubnumber\label{eq:28a}\\ \phi_T(k_q^{\mu})&=&\frac{N_B}{3}\sum_i\int_0^{k^i}\!\!A_T(\eta,\dots) \eta\dd{\eta},\IEEEyessubnumber\label{eq:28b}\\ \phi_{A2}(k_q^{\mu})&=&\frac{N_B}{3}\sum_i\int_0^{k^i}\!\!A(\eta,\dots) \eta\dd{\eta},\IEEEyessubnumber\label{eq:28c}\\ \phi_{A1}(k_q^{\mu})&=&-N_B\qty[\varPsi_0(\vv{k}) \frac{(E_q+m_q)(E_Q+m_Q)-|\vv{k}|^2}{\sqrt{E_qE_Q(E_q+m_q)(E_Q+m_Q)}} +E_qA(k^1,k^2,k^3)].\IEEEyessubnumber\label{eq:28d} \end{IEEEeqnarray} In general, these DAs play an important role in the study of $B$-meson decays \cite{Lee:2005gza}. Thus it is necessary and useful to give a numerical illustration of them. For simplicity, we take $\vv{k}=(0,0,k^3)$, and the DAs as functions of $|k^3|$ are shown in Fig.\ref{fig:3}. The grey bands are the possible uncertainties caused by the uncertainty of the wave function. In the heavy-quark limit, one can show that the axial-vector DA $\phi_{A2}$ is equal to the axial-tensor DA $\phi_T$ \cite{Grozin:1996pq}. In our results, as shown in Figs.\ref{fig:3} (c) and (e), (d) and (f), these two DAs are indeed very close, which indicates that our scenario is reasonable; their difference reflects the influence of the finite heavy-quark mass. One can also see that the figures for the $B$ and $B_s$ mesons are very similar, but in detail, for the same values of $|k^3|$, the absolute values of the DAs of the $B$ meson are always a bit larger than those of the $B_s$ meson. This is consistent with the fact that the DAs are inversely proportional to the decay constant and to the square root of the meson mass.
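The near-equality of $\phi_{A2}$ and $\phi_T$ can be read off directly from Eqs.\eqref{eq:22a}--\eqref{eq:22b}: their integrands differ only by the ratio $(E_Q+m_Q-E_q-m_q)/(E_Q+m_Q+E_q+m_q)$, which tends to $1$ as $m_Q\to\infty$. A small numerical sketch with the central inputs of this work:

```python
import numpy as np

# Pointwise ratio A / A_T of the integrands in Eqs. (22a)-(22b); its
# deviation from 1 measures the finite heavy-quark-mass correction that
# separates phi_A2 from phi_T.
m_q, m_B = 0.06, 5.28                         # GeV, central values
k = np.linspace(0.1, 1.5, 8)                  # typical |k| range of Fig. 1
E_q = np.sqrt(m_q**2 + k**2)
E_Q = m_B - E_q                               # Eq. (9a)
m_Q = np.sqrt(E_Q**2 - k**2)                  # Eq. (9c), running heavy mass
ratio = (E_Q + m_Q - E_q - m_q) / (E_Q + m_Q + E_q + m_q)
print(np.round(ratio, 3))   # below 1, approaching 1 at small |k|
```

The ratio stays below but not far from unity over the momentum range where the wave function has support.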
\begin{figure}[tbp] \centering \includegraphics[width=\textwidth]{Fig-3.pdf} \caption{\label{fig:3}Distribution amplitudes as functions of $|k^3|$, where the grey bands are uncertainties caused by the wave function.} \end{figure} In addition, the light-cone coordinate is widely used in the study of the DAs, for example, in the works of Refs.\cite{Beneke:2000wa,Lee:2005gza,Grozin:2005iz,Wei:2002iu,Bell:2008er,Bell:2013tfa,Feldmann:2014ika} and references therein, where the DAs depend on a single variable $k_+$ or $k_-$, the light-cone projections of the momentum of the light antiquark in the rest frame of the meson. The definitions of the light-cone projections of the momentum of the light antiquark are \begin{equation} k_{\pm}=\frac{E_q\pm k^3}{\sqrt{2}},\quad{} k_{\perp}^{\mu}=(0,k^1,k^2,0). \end{equation} Performing the integration over the transverse momentum $k_{\perp}$, we can obtain the light-cone distribution amplitudes (LCDAs) in our scenario. Usually, the $k_{\perp}$-integral is restricted by a scale $\mu$, i.e., $|k_{\perp}|<\mu$ \cite{Ball:1998je,Beneke:2000ry}. In our model, the wave function is spherically symmetric with respect to $k^1$, $k^2$, and $k^3$. The integration region of $k_{\perp}$ has an upper limit, which is determined by Eqs.\eqref{eq:9a}--\eqref{eq:9c}. The upper limits are shown clearly by the cut lines in Fig.\ref{fig:1}. The distribution amplitude $\phi_{A1}$ as a function of $k_+$ is shown in Fig.\ref{fig:4}. $\phi_{A1}$ is related to the LCDA $\phi_B^+$ in the heavy-quark limit, which is generally used in the study of $B$ decays. Our results are consistent with the general analysis given in Ref.\cite{Lee:2005gza}.
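For an on-shell light quark the two light-cone projections are not independent of the transverse momentum: from the definitions above, $2k_+k_-=E_q^2-(k^3)^2=m_q^2+|\vv{k}_{\perp}|^2$, so fixing $k_+$ (or $k_-$) together with $k_{\perp}$ determines the momentum. A one-line numerical check (with the constituent mass $m_q=0.06\,\mathrm{GeV}$ assumed):

```python
import numpy as np

# On-shell light-cone identity: 2 k+ k- = m_q^2 + |k_perp|^2.
m_q = 0.06                                    # GeV, light constituent mass
rng = np.random.default_rng(0)
k1, k2, k3 = rng.uniform(-1.0, 1.0, size=3)   # arbitrary sample momentum
E_q = np.sqrt(m_q**2 + k1**2 + k2**2 + k3**2)
kp = (E_q + k3) / np.sqrt(2.0)
km = (E_q - k3) / np.sqrt(2.0)
assert np.isclose(2.0 * kp * km, m_q**2 + k1**2 + k2**2)
print("on-shell light-cone identity holds")
```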
\begin{figure}[tbp] \centering \includegraphics[width=\textwidth]{Fig-4.pdf} \caption{\label{fig:4}Distribution amplitudes as functions of $k_+$, where the grey bands are uncertainties caused by the wave function.} \end{figure} Next we try to give a compact form of the matrix element $\tilde{\Phi}_{\alpha\beta}(z)=\mel{0} {\bar{q}_{\beta}(z)[z,0]Q_{\alpha}(0)}{\bar{B}(P)}$. Substituting Eq.\eqref{eq:21} and Eqs.\eqref{eq:23a}--\eqref{eq:23c} into Eq.\eqref{eq:17b} and after a few steps of simplification, we obtain \begin{IEEEeqnarray}{rcl} \tilde{\Phi}_{\alpha\beta} (z)&=&\frac{-1}{4} \sqrt{\frac{3m_B}{(2\pi)^3}}\!\int\!\dd[3]{k} \frac{\varPsi_0(\vv{k})e^{-i k_q\cdot z}} {\sqrt{E_qE_Q(E_q+m_q)(E_Q+m_Q)}} \qty{\mqty(b\\c)\mqty(c&a)}_{\alpha\beta}\text{(\,D.R.)} \IEEEyesnumber\IEEEyessubnumber\label{eq:29a}\\ &=&\frac{-1}{4}\sqrt{\frac{3m_B}{(2\pi)^3}}\int\dd[3]{k} \cdot{}\IEEEnonumber\\ &&\quad\frac{\varPsi_0(\vv{k}) e^{-ik_q\cdot z}}{\sqrt{E_qE_Q(E_q+m_q)(E_Q+m_Q)}} \qty{\mqty(b-c\\b+c)\mqty(c-a&c+a)}_{\alpha\beta}\text{(\,W.R.)}, \IEEEyessubnumber\label{eq:29b} \end{IEEEeqnarray} where $a$, $b$, and $c$ are three $2\!\times{}\!2$ matrices defined as \begin{equation*} a=(E_q+m_q)I_{2\times2}, \; b=(E_Q+m_Q)I_{2\times 2},\; c=\vv{k}\cdot\vv{\sigma}, \end{equation*} and $\vv{\sigma}$ denotes the Pauli matrices. These two expressions in Eqs.\eqref{eq:29a} and \eqref{eq:29b} are derived with different representations of the gamma matrices $\gamma^{\mu}$: the label D.R. denotes the Dirac representation, and W.R. the Weyl representation.
For simplicity, we define \begin{IEEEeqnarray}{cx} K(\vv{k})\equiv\frac{-N_B\varPsi_0(\vv{k})} {\sqrt{E_qE_Q(E_q+m_q)(E_Q+m_Q)}}.\label{eq:30} \end{IEEEeqnarray} Then the convolution formula of Eq.\eqref{eq:27} can be rewritten as \begin{IEEEeqnarray}{cx} F=\int\!\dd[3]{k}\frac{-if_Bm_B}{4}K(\vv{k}) \qty{\mqty(b\\c)\mqty(c&a)}_{\alpha\beta} \eval{T_{\beta\alpha}(k_q)}_{k_q^2=m_q^2}\label{eq:31} \end{IEEEeqnarray} where the spinor matrices are given in Dirac representation (D.R.). Next we introduce two light-like vectors $n_{\pm}^{\mu}=(1,0,0,\mp 1)$ and define $\slashed{n}_+\equiv{}n_+^{\mu}\gamma_\mu=\left(\smqty{1 & \sigma{}^3 \\ -\sigma{}^3 & -1}\right)$, $\slashed{n}_-\equiv n_-^{\mu}\gamma_{\mu}=\left(\smqty{1 & -\sigma^3 \\ \sigma^3 & -1}\right)$. With these two vectors $n_{\pm}^{\mu}$, the matrix element $\Phi_{\alpha\beta}(k_q^\mu)$ can be expressed in another form \begin{IEEEeqnarray}{rCl} \Phi_{\alpha\beta}(k_q^{\mu})&=&\frac{-if_Bm_B}{4}K(\vv{k}) \qty{\mqty(b\\c)\mqty(c&a)}_{\alpha\beta}\IEEEnonumber\\ &=&\frac{-if_Bm_B}{4}K(\vv{k})\IEEEnonumber\\ &&\cdot\Bigg\{(E_Q+m_Q)\frac{1+\slashed{v}}{2}\Bigg[\qty(\frac{k_+}{\sqrt{2}} +\frac{m_q}{2})\slashed{n}_++\qty(\frac{k_-}{\sqrt{2}} +\frac{m_q}{2})\slashed{n}_--k_{\perp}^{\mu}\gamma_{\mu} \Bigg]\gamma^5\IEEEnonumber\\ &&-(E_q+m_q)\frac{1-\slashed{v}}{2}\Bigg[ \qty(\frac{k_+}{\sqrt{2}}-\frac{m_q}{2})\slashed{n}_+ +\qty(\frac{k_-}{\sqrt{2}}-\frac{m_q}{2})\slashed{n}_- -k_{\perp}^{\mu}\gamma_{\mu}\Bigg]\gamma^5 \Bigg\}_{\alpha\beta}.\label{eq:32} \end{IEEEeqnarray} Compared with the commonly used results (for instance, see Eq.(109) in Ref.\cite{Beneke:2000wa} and Eq.(2.48) in Ref.\cite{Bell:2013tfa}), this new form includes the whole spinor structure of the momentum projector. The part containing $\frac{1+\slashed{v}}{2}$ is proportional to the heavy quark's mass and is the only term in the heavy quark limit. 
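The equality of the block-matrix form and the $n_{\pm}$ decomposition in Eq.\eqref{eq:32} can be verified numerically in the Dirac representation. A sketch at one sample momentum (the momentum values are arbitrary; only the quark masses are the central values of this work):

```python
import numpy as np

# Check of the rest-frame identity Eq. (32): the Dirac-representation block
# matrix {(b;c)(c a)} of Eq. (29a) equals the n+/n- decomposition.
I2 = np.eye(2); Z2 = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz))
g5 = np.block([[Z2, I2], [I2, Z2]])

m_q, m_B = 0.06, 5.28
k1, k2, k3 = 0.3, -0.2, 0.4                  # arbitrary sample momentum, GeV
E_q = np.sqrt(m_q**2 + k1**2 + k2**2 + k3**2)
E_Q = m_B - E_q
m_Q = np.sqrt(E_Q**2 - (k1**2 + k2**2 + k3**2))

a = (E_q + m_q) * I2
b = (E_Q + m_Q) * I2
c = k1 * sx + k2 * sy + k3 * sz              # k . sigma
lhs = np.block([[b @ c, b @ a], [c @ c, c @ a]])   # {(b;c)(c a)}

kp = (E_q + k3) / np.sqrt(2.0)               # light-cone projections
km = (E_q - k3) / np.sqrt(2.0)
np_slash = g0 + g3                           # n+ = (1, 0, 0, -1)
nm_slash = g0 - g3                           # n- = (1, 0, 0, +1)
kperp_slash = -(k1 * g1 + k2 * g2)           # k_perp^mu gamma_mu, k_perp^0 = 0
Pp = 0.5 * (np.eye(4) + g0)                  # (1 + vslash)/2, v = (1,0,0,0)
Pm = 0.5 * (np.eye(4) - g0)

rhs = ((E_Q + m_Q) * Pp @ ((kp / np.sqrt(2) + m_q / 2) * np_slash
                           + (km / np.sqrt(2) + m_q / 2) * nm_slash
                           - kperp_slash) @ g5
       - (E_q + m_q) * Pm @ ((kp / np.sqrt(2) - m_q / 2) * np_slash
                             + (km / np.sqrt(2) - m_q / 2) * nm_slash
                             - kperp_slash) @ g5)
assert np.allclose(lhs, rhs)
print("Eq. (32) block-matrix identity verified")
```

The common scalar factor $-\frac{i}{4}f_Bm_BK(\vv{k})$ multiplies both sides and is omitted from the comparison.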
When the heavy-quark mass $m_Q$ goes to infinity, the contribution of the other part in Eq.\eqref{eq:32} is relatively small and can be ignored. Therefore, as we take a finite heavy-quark mass, the part with $(E_q+m_q)$ will give an extra contribution and may be an important correction in the study of $B$-meson decays. \section{QCD One-Loop Corrections to Leptonic Decays of $B$-Meson} \label{sec:qcdf} In Sec.\ref{sec:dc}, we study the leptonic decays of the $B$ meson at tree level. In this section, we extend this study by including QCD one-loop corrections. When considering one-loop corrections in QCD, if one naively calculates the loop diagrams, one encounters not only ultraviolet divergences but also infrared divergences. The factorization method can be applied to obtain the infrared-safe amplitude at the quark level. To obtain the infrared-safe transition amplitude at the quark level, let us first consider the free quark state $\ket{\bar{u}^r(k)b^s(p-k)}$ as the initial state. Factorization means that the matrix element of a physical transition process $F^\mu$ can be expressed as the convolution of the wave function of the initial state and the hard transition amplitude $T$, \begin{equation} F^\mu=\Phi{}\otimes{}T, \end{equation} where the symbol $\otimes$ denotes the convolution in Eq.\eqref{eq:25}, and $\mu$ denotes the Lorentz index that may appear in the physical transition matrix element. All the infrared contributions are absorbed into the wave function $\Phi$, while the hard amplitude $T$ is infrared safe. In perturbation theory, the matrix element $F^{\mu}$, which is relevant to the quark transition process, the wave function $\Phi$, and the hard-scattering kernel $T$ can all be expanded in powers of $\alpha_s$.
Therefore the factorization formula takes the form \cite{DescotesGenon:2002mw} \begin{IEEEeqnarray}{rCl} F^{\mu}&=&F^{(0)\mu}+F^{(1)\mu}+\cdots=\Phi\otimes T\IEEEnonumber\\ &=&\qty[\Phi^{(0)}\otimes T^{(0)}]+\qty[\Phi^{(0)}\otimes T^{(1)}+\Phi^{(1)}\otimes T^{(0)}]+\cdots,\label{eq:33} \end{IEEEeqnarray} where the superscript $(n)$ indicates the order in perturbation theory. After calculating both the matrix element $F^{(1)\mu}$ and the wave function $\Phi^{(1)}$ at one-loop order, one can extract the hard amplitude $T^{(1)}$ by using Eq.\eqref{eq:33}, that is, \begin{equation} \Phi^{(0)}\otimes{}T^{(1)}=F^{(1)\mu}-\Phi^{(1)}\otimes T^{(0)}. \label{eq:34} \end{equation} At one-loop level, both the matrix element $F^{(1)\mu}$ and the wave function $\Phi^{(1)}$ are infrared divergent. Through the subtraction on the right-hand side of Eq.\eqref{eq:34}, the infrared divergence is cancelled. Then the hard amplitude $T^{(1)}$ obtained through Eq.\eqref{eq:34} is infrared safe. At tree level, the factorization can be achieved straightforwardly, and we first show the results briefly. The matrix element $F^\mu$ at tree level, as shown in Fig. \ref{fig:5}, can be obtained as \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{Fig-5.pdf} \caption{\label{fig:5}Factorization at tree level.} \end{figure} \begin{IEEEeqnarray}{rCl} F_{b\bar{u}}^{(0)\mu}&=&\mel{0}{\bar{u}\gamma_L^{\mu}b} {\bar{u}^r(k)b^s(p-k)}\IEEEnonumber\\ &=&\frao{(2\pi)^3}\sqrt{\frac{m_um_b}{k^0(p-k)^0}} \bar{v}^r(k)\gamma_L^{\mu}u^s(p-k),\label{eq:35} \end{IEEEeqnarray} where the coefficient $\frao{(2\pi)^3}\sqrt{\frac{m_um_b}{k^0(p-k)^0}}$ comes from our normalization convention, and $\bar{v}$ and $u$ are the spinors of the quarks $\bar{u}$ and $b$, respectively. The superscripts $r$ and $s$ are the spin labels.
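The order-by-order matching of Eqs.\eqref{eq:33}--\eqref{eq:34} can be illustrated with a scalar toy model in which the convolution $\otimes$ is replaced by ordinary multiplication; all numbers below are invented for illustration only:

```python
# Toy version of the matching in Eqs. (33)-(34): given the full matrix
# element F and the wave function Phi order by order in alpha_s, the hard
# kernel T follows by subtraction. Invented numbers, illustration only.
F0, F1     = 2.0, 0.7     # F   = F0   + F1   + ...  (F1 of order alpha_s)
Phi0, Phi1 = 0.5, 0.3     # Phi = Phi0 + Phi1 + ...

T0 = F0 / Phi0                  # tree level:  F0 = Phi0 * T0
T1 = (F1 - Phi1 * T0) / Phi0    # one loop, Eq. (34): the Phi1*T0 subtraction
                                # removes the infrared piece contained in F1

# Consistency: re-expanding Phi * T reproduces F through first order.
assert abs(Phi0 * T0 - F0) < 1e-12
assert abs(Phi0 * T1 + Phi1 * T0 - F1) < 1e-12
print(f"T0 = {T0}, T1 = {T1}")
```

In the actual calculation the infrared divergences reside in $F^{(1)\mu}$ and $\Phi^{(1)}$ and cancel in the combination defining $T^{(1)}$, exactly as the subtraction above suggests.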
The wave function of the free quark state $\ket{\bar{u}^r(k)b^s(p-k)}$ at tree level is \begin{IEEEeqnarray}{rCl} \Phi_{\alpha\beta}^{(0)b\bar{u}}(\tilde{k})&=& \int\!\dd[4]{z}e^{i\tilde{k}\cdot z} \mel{0}{\bar{u}_{\beta}(z)[z,0]b_{\alpha}(0)}{\bar{u}^r(k)b^s(p-k)} \IEEEnonumber\\ &=&\frao{(2\pi)^3}\sqrt{\frac{m_um_b}{k^0(p-k)^0}} (2\pi)^4\dlt{4}{\tilde{k}-k}\bar{v}^r_{\beta}(k)u^s_{\alpha}(p-k) .\label{eq:36} \end{IEEEeqnarray} Inserting the matrix element in Eq.\eqref{eq:35} and the wave function in Eq.\eqref{eq:36} into the factorization formula \begin{IEEEeqnarray}{rCl} F^{(0)\mu}_{b\bar{u}}&=&\int\!\frac{\dd[4]{\tilde{k}}}{(2\pi)^4} \Phi_{\alpha\beta}^{(0)b\bar{u}}(\tilde{k})T_{\beta\alpha}^{(0)}(\tilde{k}) \IEEEnonumber\\ &=&\frao{(2\pi)^3}\sqrt{\frac{m_um_b}{k^0(p-k)^0}}\bar{v}_{\beta}^r(k) T_{\beta\alpha}^{(0)}(k)u_{\alpha}^s(p-k),\label{eq:37} \end{IEEEeqnarray} we can obtain the hard-scattering kernel at tree level \begin{IEEEeqnarray}{cx} T_{\beta\alpha}^{(0)}(k)=\qty(\gamma_L^{\mu})_{\beta\alpha}.\label{eq:38} \end{IEEEeqnarray} This tree-level result is independent of the quark momentum $k$. It plays an important role in the calculation of the hard amplitude at one-loop level. Next we shall establish the factorization at one-loop level. The Feynman diagram for the matrix element $F^{(1)\mu}$ at one-loop level is shown in Fig. \ref{fig:6}(a). The renormalization factor $\sqrt{Z_2^{\bar{u}} Z_2^b}$ must appear in the contribution of Fig. \ref{fig:6}(a) due to the renormalization of the external quark fields, where $\sqrt{Z_2^{\bar{u}}}$ and $\sqrt{Z_2^b}$ are the renormalization constants of the external quark fields $\bar{u}$ and $b$, respectively. Since the factors $\sqrt{Z_2^{b}}$ and $\sqrt{Z_2^{\bar{u}}}$ correspond to the self-energy diagrams of the external quarks $b$ and $\bar{u}$, the factor $\sqrt{Z_2^{\bar{u}}Z_2^b}$ can be represented by the contributions of Figs. \ref{fig:6}(b) and (c).
\begin{figure}[bp] \centering \includegraphics[width=1.0\textwidth]{Fig-6.pdf} \caption{\label{fig:6}Feynman diagrams at one-loop level for $F$.} \end{figure} The contribution of Fig. \ref{fig:6}(a) is \begin{IEEEeqnarray}{rCl} F_V^{(1)\mu}&=&\frao{(2\pi)^3}\sqrt{\frac{m_um_b}{k^0(p-k)^0}} \qty(-ig_s^2)C_F\bar{v}^r(k)\IEEEnonumber\\ &&\cdot\int\!\frac{\dd[4]{l}}{(2\pi)^4} \gamma^{\rho}\frao{m_u-\qty(\slashed{l}-\slashed{k})}\gamma_L^{\mu} \frao{m_b-\qty(\slashed{p}-\slashed{k}+\slashed{l})}\gamma_{\rho}\frao{l^2} u^s(p-k), \label{eq:39} \end{IEEEeqnarray} where $g_s$ is the strong coupling constant, and all the momenta of the quarks and the gluon are labelled in Fig. \ref{fig:6}(a). The explicit result after the loop integration is given in Appendix C. The contributions of Figs. \ref{fig:6}(b) and (c) are \begin{IEEEeqnarray}{cx} F_{bR}^{(1)\mu}=\frao{2}\qty(Z_2^b-1)F_{b\bar{u}}^{(0)\mu}\qc F_{\bar{u}R}^{(1)\mu}=\frao{2}\qty(Z_2^{\bar{u}}-1)F_{b\bar{u}}^{(0)\mu}. \label{eq:40} \end{IEEEeqnarray} The renormalization constants (the explicit expressions are listed in Appendix C) are defined in terms of the one-particle irreducible (1PI) diagrams $\Sigma$ by \begin{IEEEeqnarray}{cx} Z_2^{b,\bar{u}}=1+i\eval{\dv{\Sigma}{\slashed{p}}}_{\slashed{p}=m}.\label{eq:41} \end{IEEEeqnarray} The corrections to the wave functions at one-loop order contain six Feynman diagrams, which have been divided into two groups. They are shown in Figs. \ref{fig:7} and \ref{fig:8}. It will be shown later that, when the contribution of the diagrams in Fig. \ref{fig:8} is convoluted with the hard-scattering kernel at tree level, the result is zero. \begin{figure}[tbp] \centering \includegraphics[width=1.0\textwidth]{Fig-7.pdf} \caption{\label{fig:7}Feynman diagrams at one-loop level for WF (1).} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=1.0\textwidth]{Fig-8.pdf} \caption{\label{fig:8}Feynman diagrams at one-loop level for WF (2).} \end{figure} The contribution of the diagram Fig.
\ref{fig:7}(a) to the wave function is \begin{IEEEeqnarray}{rCl} \Phi_{\alpha\beta}^{(1)V}(\tilde{k})&=&(2\pi)^4\dlt{4}{l-k+\tilde{k}} \frao{(2\pi)^3}\sqrt{\frac{m_um_b}{k^0(p-k)^0}} (-ig_s^2)C_F\int\!\frac{\dd[4]{l}}{(2\pi)^4}\cdot\IEEEnonumber\\ &&\qty[\bar{v}^r(k)\gamma^{\rho}\frao{m_u-\qty(\slashed{l}-\slashed{k})}]_{\beta} \qty[\frao{m_b-\qty(\slashed{p}-\slashed{k}+\slashed{l})}\gamma_{\rho}\frao{l^2} u^s(p-k)]_{\alpha},\label{eq:42} \end{IEEEeqnarray} and the contributions of the wave function renormalization of the heavy quark field [Fig. \ref{fig:7}(b)] and the light quark field [Fig. \ref{fig:7}(c)] are \begin{IEEEeqnarray}{cx} \Phi_{\alpha\beta}^{(1)b}=\frao{2}\qty(Z_2^b-1)\Phi_{\alpha\beta}^{(0)b\bar{u}} \qc\Phi_{\alpha\beta}^{(1)\bar{u}}= \frao{2}\qty(Z_2^{\bar{u}}-1)\Phi_{\alpha\beta}^{(0)b\bar{u}}.\label{eq:43} \end{IEEEeqnarray} Then it is straightforward to obtain the results after the convolution with the hard-scattering kernel at tree level, $T_{\beta\alpha}^{(0)}=\qty(\gamma_L^{\mu})_{\beta\alpha}$, and we find that \begin{IEEEeqnarray}{rCl} F_V^{(1)\mu}&=&\Phi_{\alpha\beta}^{(1)V}\otimes T_{\beta\alpha}^{(0)} ,\IEEEyesnumber\IEEEyessubnumber\label{eq:44a}\\ F_{bR}^{(1)\mu}&=&\Phi_{\alpha\beta}^{(1)b}\otimes T_{\beta\alpha}^{(0)} ,\IEEEyessubnumber\label{eq:44b}\\ F_{\bar{u}R}^{(1)\mu}&=&\Phi_{\alpha\beta}^{(1)\bar{u}}\otimes T_{\beta\alpha}^{(0)}.\IEEEyessubnumber\label{eq:44c} \end{IEEEeqnarray} It is noted that there are two scales in the above equations, i.e., the factorization scale $\mu_F$ in the wave functions $\Phi_{\alpha\beta}^{(1)b,\bar{u}}$ and the renormalization scale $\mu_R$ in the matrix elements $F_{b,\bar{u}R}^{(1)\mu}$. Here we take $\mu_F=\mu_R$. Finally, we turn to the contributions of the Feynman diagrams in Fig. \ref{fig:8}. The contribution of Fig. \ref{fig:8}(a) contains a gluon propagator with both the starting and ending points on the Wilson line.
In the light-cone approximation and working in the Feynman gauge, this propagator vanishes \cite{DescotesGenon:2002mw} since $z$ is a null vector on the light-cone ($z^2=0$). As for our case, the result is still zero. First, we obtain \begin{IEEEeqnarray}{rCl} \Phi_{\alpha\beta}^{(1)0a}(\tilde{k})&=&\frao{(2\pi)^3} \sqrt{\frac{m_um_b}{k^0(p-k)^0}}(ig_s^2)C_F \bar{v}^r(k)_{\beta}\int\!\frac{\dd[4]{l}}{(2\pi)^4} \int\!\dd[4]{z}e^{i\tilde{k}\cdot z}\cdot\IEEEnonumber\\ &&2\int_0^1\!\dd{x}z^{\mu}\int_0^x\dd{y}z^{\nu} e^{-ik\cdot z}e^{ixl\cdot z}e^{iy(-l)\cdot z}\frac{g_{\mu\nu}}{l^2} u^s(p-k)_{\alpha}\IEEEyesnumber\IEEEyessubnumber\label{eq:45a}\\ &=&\frac{-i2g_s^2C_F}{(2\pi)^3}\sqrt{\frac{m_um_b}{k^0(p-k)^0}} \bar{v}^r(k)_{\beta}\int\!\frac{\dd[4]{l}}{(2\pi)^4}\int\! \dd[4]{z}\int_0^1\!\dd{x}\int_0^x\!\dd{y}\IEEEnonumber\\ &&\frac{e^{i(xl-k-yl)\cdot z}}{l^2} \qty[\pdv{}{\tilde{k}_{\mu}}{\tilde{k}^{\mu}} e^{i\tilde{k}\cdot z}]u^s(p-k)_{\alpha}.\IEEEyessubnumber\label{eq:45b} \end{IEEEeqnarray} In Eq.\eqref{eq:45a} we make the substitution $x^{\mu}=xz^{\mu}$ and $y^{\nu}=yz^{\nu}$ in the Wilson-line. Next, we can substitute Eq.\eqref{eq:45b} into the convolution formula, and perform the partial integration. By noting that the hard-scattering kernel is a constant Dirac matrix, we can demonstrate \begin{IEEEeqnarray}{rCl} \Phi_{\alpha\beta}^{(1)0a}\otimes T_{\beta\alpha}^{(0)} &=&\int\!\frac{\dd[4]{\tilde{k}}}{(2\pi)^4} \frac{i2g_s^2C_F}{(2\pi)^3}\sqrt{\frac{m_um_b}{k^0(p-k)^0}} \bar{v}^r(k)_{\beta}\int\!\frac{\dd[4]{l}}{(2\pi)^4}\int\! \dd[4]{z}\int_0^1\!\dd{x}\int_0^x\!\dd{y}\IEEEnonumber\\ &&\frac{e^{i(xl-k-yl)\cdot z}}{l^2}e^{i\tilde{k}\cdot z} \qty[\pdv{}{\tilde{k}_{\mu}}{\tilde{k}^{\mu}} T_{\beta\alpha}^{(0)}]u^s(p-k)_{\alpha}\IEEEnonumber\\ &=&0.\label{eq:46} \end{IEEEeqnarray} For the other two diagrams in Fig. \ref{fig:8}, the contribution of Fig. 
\ref{fig:8}(b) is \begin{IEEEeqnarray}{rCl} \Phi_{\alpha\beta}^{(1)0b}(\tilde{k})&=&\frac{i2g_s^2C_F}{(2\pi)^3} \sqrt{\frac{m_um_b}{k^0(p-k)^0}} \int\!\frac{\dd[4]{l}}{(2\pi)^4}\int\!\dd[4]{z}\int_0^1\! \dd{x}\frac{e^{i(xl-k)\cdot z}}{l^2}\IEEEnonumber\\ &&\bar{v}^r(k)_{\beta} \qty[\pdv{}{\tilde{k}_{\rho}}e^{i\tilde{k}\cdot z}] \qty[\frao{m_b-(\slashed{p}-\slashed{k}+\slashed{l})} \gamma_{\rho}u^s(p-k)]_{\alpha}\label{eq:47} \end{IEEEeqnarray} Then the convolution is \begin{IEEEeqnarray}{rCl} \Phi_{\alpha\beta}^{(1)0b}\otimes T_{\beta\alpha}^{(0)} &=&\int\!\frac{\dd[4]{\tilde{k}}}{(2\pi)^4} \frac{-i2g_s^2C_F}{(2\pi)^3} \sqrt{\frac{m_um_b}{k^0(p-k)^0}} \int\!\frac{\dd[4]{l}}{(2\pi)^4}\int\!\dd[4]{z}\int_0^1\! \dd{x}\frac{e^{i(xl-k)\cdot z}}{l^2}\IEEEnonumber\\ &&\bar{v}^r(k)_{\beta}e^{i\tilde{k}\cdot z} \qty[\pdv{}{\tilde{k}_{\rho}}T_{\beta\alpha}^{(0)}] \qty[\frao{m_b-(\slashed{p}-\slashed{k}+\slashed{l})} \gamma_{\rho}u^s(p-k)]_{\alpha}\IEEEnonumber\\ &=&0.\label{eq:48} \end{IEEEeqnarray} Similarly, we can obtain that the contribution of Fig. \ref{fig:8} (c) is also zero. Finally, combining Eqs.\eqref{eq:44a}--\eqref{eq:44c}, Eq.\eqref{eq:46} and Eq.\eqref{eq:48} together, we can demonstrate that $F_{b\bar{u}}^{(1)\mu}=\Phi_{\alpha\beta}^{(1)b\bar{u}}\otimes T_{\beta\alpha}^{(0)}$ and thus considering Eq.\eqref{eq:34}, the total contribution to the hard-scattering kernel at one-loop level $T_{\beta\alpha}^{(1)}$ is zero. Therefore the QCD one-loop corrections to the hard amplitude of the leptonic decay of $B$ meson are zero in the factorization scheme. A brief remark about this result should be given here. The vanishment of QCD one-loop correction to the hard decay amplitude of the pure leptonic decay of $B$ meson does not mean that the naive calculation of the QCD one-loop correction diagrams in Fig. \ref{fig:6} will result in zero. 
The results in Eqs.\eqref{eq:39}, \eqref{eq:40} and those given in Appendix C show that the contributions of these diagrams are not zero. They include both hard contributions and infrared singularities. The infrared singularities come from the limit in which the mass of the light quark approaches zero and/or the momentum of the gluon vanishes, i.e., $m_q\to 0$ and $l\to 0$. It is well known that a conserved current requires no renormalization because of gauge invariance \cite{boyer}. Here the axial current $\bar{q}\gamma^\mu\gamma_5 b$ inducing the leptonic decay of the $B$ meson is partially conserved. Our calculation shows that the axial current as a composite operator still does not require renormalization. Only the external quark field renormalization is needed. Although the naive contributions of the diagrams in Fig. \ref{fig:6} are not zero, when QCD corrections to the wave function are also considered up to one-loop order, the infrared singularities and the hard contribution in the short-distance amplitude are simultaneously subtracted by those in the wave function by using Eq.\eqref{eq:34}. This implies that the infrared contribution in the short-distance amplitude can be absorbed into the wave function; the hard terms are also absorbed, and they contribute to the evolution of the wave function. The factorization and the result that the hard amplitude receives no QCD correction are proved up to one-loop order in this work. We expect, however, that this result may hold to all orders in QCD, because the gluons are always restricted between the heavy quark and the light antiquark lines, both in the QCD corrections to the wave function and in those to the hard amplitude of the pure leptonic decay process. Therefore the subtraction may happen to all orders in the perturbative expansion. Then the formula that expresses the decay rate of the leptonic decay in Eq.\eqref{eq:14} holds to all orders in perturbation theory.
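The subtraction described above can be summarized schematically. As a sketch, assuming Eq.~\eqref{eq:34} has the standard matching structure $F_{b\bar{u}}^{(1)\mu}=\Phi_{\alpha\beta}^{(1)b\bar{u}}\otimes T_{\beta\alpha}^{(0)}+\Phi_{\alpha\beta}^{(0)b\bar{u}}\otimes T_{\beta\alpha}^{(1)}$ (this explicit form is an assumption, not quoted from the equation itself), the one-loop hard kernel is obtained by subtraction:

```latex
\begin{IEEEeqnarray*}{rCl}
\Phi_{\alpha\beta}^{(0)b\bar{u}}\otimes T_{\beta\alpha}^{(1)}
&=& F_{b\bar{u}}^{(1)\mu}
    -\Phi_{\alpha\beta}^{(1)b\bar{u}}\otimes T_{\beta\alpha}^{(0)}\\
&=& \qty(F_V^{(1)\mu}+F_{bR}^{(1)\mu}+F_{\bar{u}R}^{(1)\mu})
    -\qty(\Phi_{\alpha\beta}^{(1)V}+\Phi_{\alpha\beta}^{(1)b}
    +\Phi_{\alpha\beta}^{(1)\bar{u}})\otimes T_{\beta\alpha}^{(0)}
    = 0,
\end{IEEEeqnarray*}
```

where the first bracket cancels against the second term by term according to Eqs.~\eqref{eq:44a}--\eqref{eq:44c}, and the Wilson-line contributions $\Phi_{\alpha\beta}^{(1)0a,0b,0c}$ drop out by Eqs.~\eqref{eq:46} and \eqref{eq:48}; hence $T_{\beta\alpha}^{(1)}=0$, including both the infrared and the hard pieces.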
QCD corrections can only change the theoretical prediction of the decay constant. \section{Discussion and Conclusion} \label{sec:concl-or-disc} Using the wave function obtained in the relativistic potential model in our previous work \cite{Liu:2015lka}, where the hyperfine interactions are included, the decay constants and pure leptonic decays of the $B$ meson are studied in this work. To maintain four-momentum conservation between the quark-antiquark pair and the meson, we use the ACCMM scenario \cite{Altarelli:1982kh,Colangelo:1998eb} to treat the constituent quarks, where the heavy quark is taken to be off-shell, while the light antiquark is kept on shell. Compared with our earlier work \cite{Yang:2011ie}, the difference is that the wave function used here is obtained by considering the hyperfine interactions in the wave equation, and the heavy quark is treated off-shell in the decay process. The off-shellness of the heavy quark can be explained as absorbing the effective contribution of the gluon cloud around the heavy quark. With such a treatment, the branching ratios of leptonic decays of the $B$ meson obtained in this work are consistent with experimental data. Based on the success of studying the leptonic decays of the $B$ meson, we further obtain the distribution amplitudes of the $B$ meson both in coordinate and momentum space. The distribution amplitudes of the $B$ meson are widely used in the study of $B$-meson decays. In addition, we obtain another form of the nonlocal matrix element in Eq.\eqref{eq:32}. Considering the success of the ACCMM scenario in studying the leptonic decays, the heavy quark in the distribution amplitude needs to be treated as off-shell to maintain energy-momentum conservation. The new form of the nonlocal matrix element obtained in this work, Eqs.
(\ref{eq:29a}) or (\ref{eq:29b}) and Eq.(\ref{eq:32}), should be useful in the study of the semileptonic and nonleptonic $B$ decays, where the longitudinal and transverse components are automatically included. We finally study the QCD one-loop corrections within the framework of the factorization approach. We find that, after subtracting the infrared divergence, the QCD one-loop corrections to the hard transition amplitude are zero. This implies that the infrared contributions in the hard amplitude can be absorbed into the wave function; the hard terms originating from one-loop diagrams are also absorbed by the wave function and contribute to its evolution. The formula expressing the leptonic decay rate of the $B$ meson in Eq. (\ref{eq:14}) is not affected by QCD corrections.
\section{Introduction} Misinformation is the act of accidentally spreading false or inaccurate information \cite{online2019}. Typical cases of misinformation are false rumors and pranks. Contrary to this, disinformation describes the dissemination of malicious content, like, but not limited to, hoaxes, spear-phishing, and propaganda \cite{wooley}. The problem of the quality of information and how it reaches European citizens has become enormous nowadays. It is important to note that a false opinion expressed by anybody, even without an intention to manipulate facts, could potentially fuel disinformation. An even more significant fact is that the structure of the internet allows for a snowballing effect, potentially reaching a huge audience \cite{renda}. The phenomenon of misinformation is emerging across Europe and has many manifestations. An insight into whether and to what extent European citizens trust their national information network comes from the fact that 23 out of the 28 EU states (82\%) have at least a medium level of trust in the information provided by their national media \cite{ebu}. Another statistic that should worry us regarding the confidence of European citizens in the media is that social networks are the least trusted media in 29 out of the 33 countries surveyed (88\%). Citizens of the EU are more likely to trust radio and television over the internet and social media \cite{ebu}. Not enough in-depth research has been conducted on networking patterns, which are a very valuable tool \cite{koulas2}. On a more specific level, few studies have been carried out on misinformation and fake news spreading, and they primarily measured the level of the phenomenon and its impact on the political and social environment. In the domain where our research was conducted, there are no relevant studies dealing with the correlation among websites containing misinformation content.
Nonetheless, few studies have focused on the analysis of misinformation across Europe. A limited number of systematic reviews have examined the key actors that cause and encourage the spread of misinformation. This study provides insights into the role of European Governmental Organizations (GOs), Non-Governmental Organizations (NGOs), International Organizations (IOs), and International Institutes (IIs) in the spreading of that phenomenon. To construct our website collection sheet, we searched key phrases about misinformation on the Google and Yahoo search engines. On Bing, we faced some difficulties due to the lack of websites returned when we searched for specific key phrases. We collected and analyzed 49 seed websites, using high ranking in the search engine as the selection criterion. Our study provides a detailed webometric network analysis, based on the seed websites enlisted as misinformation-related in Europe. It fills a gap in the literature for reviews of the correlation between sites. This study aims to assist, empirically and methodologically, in combating misinformation both in Europe and at the global level. The remainder of the paper is structured as follows. In Section 2, we perform an extensive literature review, comprising the misinformation and disinformation effects in Europe, related webometric studies, and actions taken by European stakeholders on the issue. In Section 3, we formulate our research questions and describe the procedure we follow regarding the selection of the seed sites and their analysis. Results of the webometric analysis follow in Section 4. Finally, in Section 5 we discuss our findings, and we provide our concluding remarks in Section 6. \section{Literature Review} \subsection{Misinformation and Disinformation effects in Europe} Misinformation and disinformation are both phenomena that have existed in everyday life almost forever.
It is their digital form that is novel, having emerged only a few years ago. Recently, the misinformation effect has been the source of significant concern in the Member States and has provoked discussions and investigations on this issue. In June 2017, the European Parliament required the European Commission to analyze misinformation thoroughly. Furthermore, it asked the Commission to formulate strategies for the effective mitigation of the problem. The Parliament even considered the possibility of legislative intervention to accomplish that mitigation, using as justification that fake news and disinformation constitute a considerable threat to the freedom of opinion, expression, and democracy, which are of paramount importance under the European Union's Charter of Fundamental Rights. In 2018 research conducted by the Robert Schuman Foundation, 68\% of European citizens claimed they encounter fake news at least once a week, while 37\% claimed they encounter fake news daily \cite{schuman}. In order to address the issue, on the 26th of April 2018 the European Commission published a communication titled “Communication on tackling Disinformation: A European approach” \cite{commission}. This communication contains self-regulatory tools, which constitute the first step towards effectively countering online disinformation in Europe. President Juncker, in his speech on the State of the Union, presented in detail the actions that would take place based on this Communication and noted that he would do everything within his power to protect civil rights and democracy. The purpose of this communication is to ensure that all citizens have access to objective, quality information. The Communication's work plan was a series of actions leading to the creation of a robust mechanism that prevents fake news and wrong information from spreading.
A team of fact-checkers was created, who reviewed information coming from reputable public sources and evaluated it. Another task was to promote non-toxic, quality journalism and to punish any news media channel that does not provide valid information to the public. At the same time, actions to educate people about choosing the right online and media information sources were taken in order to raise awareness of the issues of misinformation and disinformation and how these can affect people. The most significant success of this Communication was the acceptance and adoption of the Code of good practice to fight online disinformation \cite{commission1}, on the 26th of September 2018, which represents all the points and goals of the Communication. This Code successfully classifies all the prerequisites for a trustworthy online campaign, while at the same time enhancing fundamental principles, like the freedom of expression and media pluralism. Misinformation can be tackled, and its effects can be mitigated, through ICT and monitored via the use of the internet. The internet is a useful tool to accomplish that feat, and ICT is the medium that gives the capability to collect, accumulate, interpret, and display data in order to make crucial decisions and formulate strategies. As the initiator of Europe's democratic system, Robert Schuman, has said: "Technology is changing, but our fundamental values remain. A citizen that is equipped with the necessary skills and that can listen, watch, and read critically is a prior condition for the success of these values". \subsection{Webometrics related studies} The application of bibliometric and infometric approaches to studying the web, its information resources, structures, and technologies is known as webometrics. The term webometrics is a coinage from the English word "web" and the ancient Greek "metric", which means to measure.
Since the term was coined in 1997 by Tomas Almind and Peter Ingwersen, the value of webometrics quickly became established through the Web Impact Factor, the critical metric for measuring and analyzing website hyperlinks \cite{thelwall12}. A more specific definition is the one given by Mike Thelwall in 2004: "… the study of web-based content with primarily quantitative methods for social science research goals using techniques that are not specific to one field of study". The purpose of this alternative definition was not to replace the description within Information Science \cite{thelwall09}; rather, its actual use is to support the publishing of appropriate methods outside the scope of Information Science \cite{kunosic}. Since the emergence of webometrics, this tool has become a useful methodology that applies in many fields, such as the ranking of universities and scientometric evaluations or investigations of research areas. For the effective analysis of webometric data, it is of paramount importance to use known, credible sources for every category of webometrics. The study by Roghaye Tafaroji, Iman Tahamtan, Masoud Roudbari, and Shahram Sedghi, conducted in 2012, aimed to present the findings of a webometric analysis of the websites of medical universities of Iran. They investigated the webometric ranking of Iranian Universities of Medical Sciences. In comparison to other similar studies conducted before, which used inlinks and size as the main webometric criteria, the authors of this study examined rich text format files (doc, pdf, etc.) as a webometric indicator and measured the impact of this new indicator on webometric ranks. The main findings indicate that Iranian Medical Universities performed poorly with regard to the number of webpages, external links, and rich files.
This observation is very useful because it can explain the anemic presence of these universities on the web \cite{tafaroji}. Another study presented a ranking of Alternative Search Engines (ASEs). Using indicators to evaluate a large amount of data that can be retrieved effortlessly and effectively, Bernd Markscheffel and Bastian Eine managed to create a picture of the most popular ASEs currently available. This approach allows further investigation in other studies, exploring the dynamics of the search engine market while assessing the categories of ASEs \cite{markscheffel}. A recent study conducted by the University of Burdwan measured and gave a clear idea of the information provided by the websites of the 25 High Courts, this time using just the Google search engine, in contrast with the previous studies. This paper highlighted the various web impact factors, scores, and rankings of the websites of High Courts in India, and its final results opened the door to further studies of new areas of webometric analysis \cite{mahji}. Another study examined information originating from the websites of 71 universities in Bangladesh. The results indicate that the universities of Bangladesh do not have great web visibility, and it is clear that these universities need to focus on several issues to increase the visibility of their websites \cite{islam}. Furthermore, webometric analysis was also used to measure the Web Impact Factor (WIF) of the websites of 8 National Libraries in South Asian countries. The WIF provides tools for quantitative research in several categories, like the ranking, evaluation, categorization, and comparison of websites, both on top-level and sub-level domains. The results showed that the National Library of India leads with the highest Domain Authority and Page Authority, followed by the National Library of Sri Lanka and the National Library of Bhutan among the other National Libraries' websites.
Users visit the websites of the National Libraries for their information needs \cite{verma}. Last but not least, one of the significant studies regarding webometrics is "Open Data in Nepal: A Webometric Analysis", which measures the impact of Open Data in the Nepalese cyber domain \cite{acharya}. Acharya and Park's study serves as a guide for this study. Taking into account the related studies above, it is clear that webometrics is a tool used in many studies to examine the World Wide Web and give specific results about the construction and use of information resources. This is the reason why we decided to use webometric analysis to search the web and examine misinformation in Europe. \subsection{Anti-misinformation stakeholders inside and outside Europe} As the volume of information flowing on the internet snowballs, the phenomenon of misinformation is becoming more and more intense. For this reason, in recent years, many Governmental Organizations (GOs), Non-Governmental Organizations (NGOs), and International Organizations (IOs), inside and outside of the European borders, have been mobilized to deal with misinformation. \subsubsection{European Level} The anti-misinformation concern of the European Commission increased after some cases of intense manipulation of public opinion on political issues. These phenomena occurred during the U.S. and French presidential elections (2016-2017), as well as the Italian constitutional referendum (2016) \cite{service}. After a two-day conference in Brussels, the European Commission concluded that experts' help is vital in order to combat misinformation \cite{commission17}. Finally, a High-Level Expert Group (HELG) on Artificial Intelligence was formed in January 2018 to reduce misinformation, fake news, and disinformation at any level within Europe.
The main deliverable of the HELG was a report containing the best strategies and solutions for every disinformation issue, based on the same set of fundamental principles \cite{connect}. \subsubsection{Member states of European Union} Besides the European Commission, there are also many mechanisms within the states of Europe that try to combat misinformation. In Germany and Croatia, strict laws about misinformation and hate speech were applied. Websites that do not comply with the law within a specific period after a warning must pay a considerable fine \cite{bbc1, vintof}. In Italy, a portal was set up where people could report fake news occurrences to the authorities, but unfortunately it did not work as intended because of the lack of knowledge about fake news \cite{commissario}. However, when a man was sent to prison for using a false identity in TripAdvisor reviews, the government's intentions became very clear. After that, the country's communications authority released a report on misinformation \cite{agcom}. Sweden, Denmark, Belgium, and the Netherlands launched campaigns on websites and social media, on the initiative of their governments (2018-2019), in order to inform citizens about misinformation and fake news \cite{funke}. In Spain, after national authorities blamed Russia for spreading misinformation concerning the Catalan referendum, the two countries signed an agreement to create a joint cybersecurity team to prevent misinformation \cite{funke}. Moreover, a task force of about 100 officials was activated during the 2019 elections, with the aim of combating fake political posts, especially on social media. In France, an innovative press law was enacted, which gives the authorities the power to act against sites that illegally spread fake news and misinformation, enabling them to shut those sites down.
However, this measure was criticized, especially by opposition senators and journalists, who argue that it is against the principles of proportional justice and press freedom \cite{ricci}. In Greece, there are nowadays many NGOs dealing with refugees coming from regions where the situation is turbulent. Because of the significant number of migrants, this situation has become very sensitive. As a result, misinformation and fake news about it are spread very fast, both by people and by the media. In order to help the migrants, many mechanisms collaborated to protect refugees' rights and fight misinformation about this topic. They also take care of their housing and education. These mechanisms are the "United Nations High Commissioner for Refugees (UNHCR)", together with the "Emergency Support to Integration and Accommodation (ESTIA)" program, funded by the European Union Civil Protection and Humanitarian Aid, and some other local NGOs. In the United Kingdom, a parliamentary report was published with the purpose of helping citizens avoid fake news and misinformation spreading, because the country has suffered from a democracy crisis since the idea of Brexit came to the surface \cite{waterson}. Furthermore, the National Security Communications Unit was launched with the task of fighting disinformation from state actors and others, after Russia got involved in Brexit by spreading fake news \cite{bbc2}. \subsubsection{NGOs} At the same time, even though the mobilization of governmental organizations is essential, the contribution of non-governmental organizations to combating misinformation is just as remarkable. An international NGO, "Reporters Without Borders (RSF)", launched an innovative medium to deal with disinformation online \cite{boulay}. It is called the "Journalism Trust Initiative (JTI)" and is designed to encourage journalism by applying agreed transparency standards to protect information and combat misinformation.
Moreover, charity NGOs, which deal either with refugees or with citizens who need help, try, besides their main actions, to protect people's rights and to publish only accurate information about the issues in which they are involved. "ActionAid" is one notable example of an international NGO that has already contributed much in this sector. \subsubsection{Anti-misinformation stakeholders outside Europe} In addition to the efforts made to combat misinformation within the European Union, efforts are also being made by countries outside of Europe. In some cases, Europe is firmly connected to, and immediately affected by, the efforts made to fight misinformation outside of Europe. Russia has been a very important player in spreading fake news in recent years, across Europe and the whole world, at both a political and a social level. Especially in the U.S. presidential elections, many campaigns using bots were set up in order to manipulate the result. For this reason, media outlets continuously publishing fake news and misinformation are punished: first with a fine; if they still do not conform with the law, the people accountable go to prison; and, if that is not enough, their website is shut down \cite{stanglin}. In the Americas, fines are the most widely used method of combating fake news and misinformation. In the USA, Brazil, and Chile, when someone is found to be disseminating fake news, the responsible party will be fined and may even face prison time. This applies to everyone, whether they are citizens writing on the net, journalists, webpages, or even politicians who spread fake news \cite{funke}. In Asia, many countries have adopted strict laws to deal with misinformation. In China and Malaysia, spreading fake news is considered a crime. Those who spread fake news that can be harmful to public order are punished by law with up to seven years in prison or public surveillance \cite{zhang, leong}.
Moreover, the Asian countries Thailand and Indonesia have also enforced laws in cases of misinformation detection. Many people were arrested for fabricating fake news, especially on social media, and many others paid huge fines \cite{funke}. \section{Methods} \subsection{Goals and Research Questions} The rise of the internet and of computational power has allowed a disproportional growth of misinformation phenomena in recent years. In this study we discuss the measures taken by stakeholders in Europe to tackle those incidents and assess their effectiveness. For this, we formulated two research questions: (RQ1) Which are the key stakeholders and how do they fight the phenomenon of misinformation? (RQ2) Do the actions of the key stakeholders have a palpable impact? The first question aims at mapping the key stakeholders, as well as assessing their actions and cooperation. The second question tries to showcase the impact, if any, those actions have. \subsection{Data Collection} Our method for gathering information and seed sites regarding misinformation was to use the Google, Bing, and Yahoo search engines. To obtain a variety of results and a complete picture of misinformation, we used different keyword combinations to get more accurate results. These are the search queries we used: \begin{itemize} \item Misinformation in Europe \item Role of NGOs in Europe to tackle Misinformation \item Governmental Organizations tackling Misinformation in the Europe Area \item Expanding Misinformation in society \item The Consequences of Misinformation \item Fighting Misinformation \item Misinformation in Belgium \item Misinformation Tackling \end{itemize} The search results included most of the European and global NGOs, IOs, GOs, and IIs. The homepages of these organizations were visited and read individually to assess their importance for researching the phenomenon of misinformation in Europe.
We opted to include Belgium as a separate search query, due to the fact that the European Union's presence there has led to the creation of a variety of think tanks, NGOs, and corporations. Since exposure to large-scale disinformation is proliferating, fighting misinformation is a top priority for the European Commission. Therefore, we emphasize identifying the key factors that encourage the spread of this phenomenon throughout Europe. It is highly essential to understand the role of all the European stakeholders and institutions, such as the European Commission, the Council of Europe, etc., that are responsible for measuring all the appropriate criteria that give us a more analytic point of view on the issue. Furthermore, it is significant to measure the impact and the consequences of misinformation in order to find new and more efficient ways to counter misleading information in the European area. Subsequently, we should emphasize socio-economic elements that can lead us to determine a general framework for the protection of information throughout Europe, in the same way the General Data Protection Regulation has been defined. We chose to collect data from three Top Level Domain categories and eight specific Country Code Top Level Domains. In this context, to make it more understandable, we must observe the following hierarchy tree: \begin{figure}[h] \caption{Hierarchy tree} \centering \includegraphics[width=0.5\textwidth]{tree.png} \end{figure} From the hierarchy tree in Figure 1, we notice the Top-Level Domain categories that will be shown in the webometric analysis and the corresponding Country Code Top Level Domains. Let us now consider the provenance of each domain. Firstly, .com derives from "commercial", indicating its original intended purpose for domains registered by commercial organizations. Secondly, .org is truncated from "organizations".
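The grouping of seed sites into the TLD categories of the hierarchy tree in Figure 1 can be sketched with a few lines of Python. The URLs below are illustrative examples; the actual seed list is the one given in Table 2, and the category labels are our own naming, not part of the study's tooling:

```python
from urllib.parse import urlparse
from collections import defaultdict

# Illustrative seed sites; the real list comes from Table 2.
SEED_SITES = [
    "https://www.coe.int/",       # sponsored TLD (.int)
    "https://www.dw.com/",        # generic TLD (.com)
    "https://www.poynter.org/",   # generic TLD (.org)
    "https://www.lesoir.be/",     # country-code TLD (.be)
    "https://today.rtl.lu/",      # country-code TLD (.lu)
]

COUNTRY_CODES = {"eu", "de", "be", "lu"}  # ccTLDs used in this study
SPONSORED = {"int"}                       # sponsored TLDs (sTLDs)

def tld_category(url: str) -> str:
    """Classify a URL by its top-level domain, mirroring Figure 1."""
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1].lower()
    if tld in COUNTRY_CODES:
        return f"ccTLD:.{tld}"
    if tld in SPONSORED:
        return f"sTLD:.{tld}"
    return f"gTLD:.{tld}"

# Group the seed sites by their TLD category.
groups = defaultdict(list)
for site in SEED_SITES:
    groups[tld_category(site)].append(site)

for category, sites in sorted(groups.items()):
    print(category, sites)
```

Such a grouping makes it straightforward to report, for each category, how many seed sites fall under generic, sponsored, and country-code domains.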
Following the same way of thinking, we recognize that all Country Code Top Level Domains are reserved for specific countries. In our case, we ended up with seed sites having the country codes of the European Union, Germany, Belgium, and Luxembourg. Finally, it is meaningful to recognize the role of Sponsored Top Level Domains. This category of domain name is supported by a community or organization and is considered to have the strictest application policies of all TLDs, as it implies that the holder is a subject of international law. Table 2 lists the organizations, their establishment date, the sector to which they belong, their website address, and their URL. \subsection{Process and Assessment} The above websites are analyzed using Webometric Analyst 2.0 (http://lexiurl.wlv.ac.uk) and the Bing Application Program Interface, which is capable of carrying out advanced Boolean searches and tracking external websites linked to the URL citations under study. Thus, the lists of external sites corresponding to the base query, i.e., the websites mentioned above, were obtained. Interlinkage and co-mention, as explained in Table 1, will be used for the data analysis. \begin{table}[] \caption{Analytical techniques and concepts of Webometrics\cite{acharya}} \label{tab:tools} \begin{tabular}{p{0.2\textwidth}p{0.8\textwidth}} \hline Inter-mention network analysis & Network diagrams illustrate the accompanying networks of the communication strength of the provided pairs of websites. These are indicators based on asymmetric (directed) inter-mention counts between pairs of websites. A diagram illustrates the pattern of interconnectivity between collections of sites. This analysis gives a proxy for the hyperlinks between the websites under study \\ \\ Co-mention network analysis & Network diagrams and their indicators based on the number of external sites referring to a pair of target sites. The co-mentions show something important in common but are not directly related to each other.
Competitors that are also considered stakeholders show a different pattern in the webometric analysis. Co-mention does not have a direction \\ \hline \end{tabular} \end{table} For the assessment we use the inlink degree, the outlink degree, and the betweenness centrality. The inlink degree shows how many links from the network are inbound for each specific node, while the outlink degree shows the outbound links towards other nodes of the network. Betweenness centrality counts the number of times a particular node lies on the shortest paths between other nodes of the network \cite{disney, valente, friedkin}. The higher the metrics, the more influential a node, and thus an organization, is in the network. \section{Results} Figure 2 and Tables 2 and 3 answer our first research question. Figure 3 and Table 4 answer our second research question. Figure 2 depicts a network diagram that demonstrates the inter-mention between websites. The red nodes and arrows show the linkage between the websites, whereas the green nodes indicate no connectivity with any of the seed sites. Every website domain and URL was converted to meet the requirements of Bing classification, so that Bing can make the connections between the URLs and seed websites. According to the results, the following websites are firmly connected and form the core of our findings: www.theverge.com, www.nytimes.com, www.cnet.com, www.washingtonpost.com, www.reuters.com, and www.bloomberg.com. The core URLs are oriented to the private company sector and have the .com TLD. Most of the government-based websites are connected and have a substantial presence on the web, for example, www.coe.int and www.poynter.org.
\begin{figure}[h] \caption{Inter-mention Network Diagram} \centering \includegraphics[width=0.95\textwidth]{mininfo-interlinks.png} \end{figure} We also observe that www.goethe.de (a non-profit organization), together with www.zeit.de and www.welt.de, both private organizations, are strongly connected to www.dw.com, the most central website among all NPOs. Other seed sites with many connections to the central nodes of the diagram, and vice versa, are www.technologyreview.com and www.newscientist.com; both have a .com TLD. Additionally, among the websites with the .be and .lu TLDs, only today.rtl.lu is not connected to any website. On the contrary, www.lesoir.be and www.knack.be are connected to each other and also to the most central websites. Furthermore, the interlinkage was investigated in order to analyze the online networking patterns in different networking scenarios. The network density is 0.0791 for the directed network and 0.1182 for the undirected network, i.e., higher than for the directed network. Density is calculated by dividing the number of relations by the maximum number of possible relations. For a weighted and valued network matrix, density is the average value over all cells. Next, the 'degree' and 'betweenness' network centrality values are calculated. Degree centrality refers to the number of ties that are immediately connected to a node (i.e., website), rather than indirect ties to all others in the network \cite{acharya}. The two degree centralities, indegree and outdegree, are distinguished by the direction of the connection between two nodes. Betweenness centrality, on the other hand, measures how important a node is in the network. It is calculated from how effectively a specific node acts as a broker connecting pairs of other nodes.
In this instance, the number of the shortest paths via the node is considered. Our network metrics were calculated utilizing built-in functions within Webometric Analyst. \begingroup \footnotesize \begin{longtable}{p{0.1\textwidth}p{0.3\textwidth}p{0.1\textwidth}p{0.1\textwidth}p{0.3\textwidth}} \caption{Description of the Selected Seed Sites} \label{tab:seedsites}\\ \hline No. & Organization Name & Est. date & Sector & URL \\ \hline \endfirsthead \multicolumn{5}{c}% {{\bfseries Table \thetable\ continued from previous page}} \\ \hline No. & Organization Name & Est. date & Sector & URL \\ \hline \endhead \hline \endfoot \endlastfoot 1 & Council of Europe & 1999 & GOV & coe.int \\ 2 & European Commission & 1958 & GOV & ec.europa.eu \\ 3 & Public Data Lab & 2017 & IO & publicdatalab.org \\ 4 & Reporters Without Borders & 1985 & IO & rsf.org \\ 5 & European External Action Service's East StratCom Task Force & 2015 & NGO & euvsdisinfo.eu \\ 6 & Center for Strategic and International Studies (CSIS) & 1962 & Non-profit Organization & csis.org \\ 7 & Investigate Europe & 2014 & NGO & investigate-europe.eu \\ 8 & Journalismfund.eu & 2008 & NGO & journalismfund.eu \\ 9 & CNN Digital & 1980 & Private Company & edition.cnn.com \\ 10 & European Data Journalism Network (EDJNet) & 2017 & Private Organization & europeandatajournalism.eu \\ 11 & European Centre for Press and Media Freedom (ECPMF) & 2009 & NGO & ecpmf.eu \\ 12 & Poynter Institute & 1975 & GOV & poynter.org \\ 13 & Social Observatory for Disinformation and Social Media Analysis (SOMA) & 2018 & Project & disinfobservatory.org \\ 14 & Media Freedom Resource Centre & 2015 & NGO & rcmediafreedom.eu \\ 15 & WAN-IFRA - World Association of News Publishers & 1948 & ORG & wan-ifra.org \\ 16 & European Parliament & 1952 & Int'l Institution & europarl.europa.eu \\ 17 & EU Open Data Portal & 2012 & Portal & data.europa.eu \\ 18 & Euractiv & 1999 & Network of Media & www.euractiv.com \\ 19 & Fandango Project & 2018 & Project &
fandango-project.eu \\ 20 & The Guardian & 2011 & Private Company & www.theguardian.com \\ 21 & Sunlight Foundation & 2006 & Non-profit Organization & sunlightfoundation.com \\ 22 & New Scientist & 1956 & NGO & newscientist.com \\ 23 & Fipp & 1920 & Private Company & fipp.com \\ 24 & The New York Times & 1851 & Private Company & nytimes.com \\ 25 & Washington Post & 1877 & Private Company & washingtonpost.com \\ 26 & Euronews & 1993 & Portal & euronews.com \\ 27 & MIT Technology Review & 1899 & Private Company & technologyreview.com \\ 28 & Media Frenzy Global & 2006 & Private Company & mediafrenzyglobal.com \\ 29 & Singularity University & 2008 & University & singularityhub.com \\ 30 & The Atlantic & 1857 & Private Organization & theatlantic.com \\ 31 & The Verge & 2011 & NGO & theverge.com \\ 32 & Civil Beat & 2010 & NGO & civilbeat.org \\ 33 & The World Bank Group & 1944 & NGO & blogs.worldbank.org \\ 34 & The Social Science Research Council (SSRC) & 1923 & NGO & ssrc.org \\ 35 & Bloomberg & 1981 & Private Company & bloomberg.com/europe \\ 36 & Star Tribune & 1897 & Private Organization & startribune.com \\ 37 & Science Direct & 1997 & Private Organization & sciencedirect.com \\ 38 & NetGov & 2008 & GOV & nextgov.com \\ 39 & Le Soir & 1928 & Private Organization & lesoir.be \\ 40 & Knack & 1971 & Private Organization & knack.be \\ 41 & Deutsche Welle & 1953 & Non-profit Organization & dw.com \\ 42 & Zeit & 1946 & Private Organization & zeit.de \\ 43 & Welt & 1946 & Private Organization & welt.de \\ 44 & Goethe Institute & 1951 & Non-profit & goethe.de \\ 45 & RTL & 1929 & Private Organization & today.rtl.lu \\ 46 & American Press Institute & 1946 & Institute & americanpressinstitute.org \\ 47 & Reuters & 1851 & Private Organization & reuters.com \\ 48 & Computer Network & 1994 & Private Organization & cnet.com \\ 49 & Pew Research Center & 2004 & Research Center & pewresearch.org \\ \hline \end{longtable} \endgroup In particular, the 14 of
our seed sites with the highest indegree and outdegree centralities are presented in Table 3. Reuters (www.reuters.com) has the highest indegree centrality (74), and the private company The Guardian (www.theguardian.com) has the highest outdegree centrality (66). The balance between big organizations, like the European Parliament, and case-specific seed sites, like EUvsDisinfo, ensures that the metrics accurately depict connectivity around misinformation. \begin{table}[] \caption{Top 14 websites with the highest indegree and outdegree centralities.} \label{tab:centralities} \resizebox{\textwidth}{!}{% \begin{tabular}{lllllll} \cline{1-7} Organization & Sector & Indegree & & Organization & Sector & Outdegree \\ \hline Reuters & Private Company & 74 & & The Guardian & Private Company & 66 \\ Washington Post & Private Company & 54 & & The New York Times & Private Company & 60 \\ The Atlantic & Private Organization & 50 & & Washington Post & Private Company & 56 \\ The Guardian & Private Company & 44 & & The Atlantic & Private Organization & 28 \\ Bloomberg & Private Company & 36 & & Reuters & Private Company & 24 \\ The New York Times & Private Company & 26 & & European Commission & GOV & 24 \\ The Verge & NGO & 24 & & Science Direct & Private Organization & 24 \\ Pew Research Center & Research Center & 20 & & The Verge & NGO & 20 \\ Deutsche Welle & Non-profit Organization & 16 & & CNET (Computer Network) & Private Company & 20 \\ CNET (Computer Network) & Private Company & 14 & & Deutsche Welle & Non-profit Organization & 14 \\ MIT Technology Review & Private Company & 14 & & Poynter & GOV & 14 \\ New Scientist & NGO & 14 & & CNN World News & Private Company & 14 \\ The Star Tribune & Online Media Company & 10 & & MIT Technology Review & Private Company & 12 \\ Euronews & Portal & 10 & & The Star Tribune & Online Media Company & 10 \\ \hline \end{tabular}% } \end{table} Concerning the Private Organizations, like theguardian.com,
theatlantic.com, washingtonpost.com, and reuters.com, these have high betweenness centralities (248, 185, 184, 180, 137). The NGO www.theverge.com has the highest betweenness centrality among all NGOs (11.217). The governmental organization ec.europa.eu has a high betweenness centrality (113.867). The non-profit organization Deutsche Welle (www.dw.com) has a betweenness centrality of 42.35. The betweenness centrality of a website indicates the amount of control that this website exerts over the interactions of other websites in the network. The Pew Research Center has the minimum betweenness centrality (0.2). Thus, it is noticeable that the private organizations have higher betweenness centralities than the NGOs. In addition to the above conclusions, we see several weakly connected websites, such as www.disinfobservatory.org, a technology-based project, and www.publicdatalab.org, the only IO with no connections. Six NGO websites have a minor presence on the web and no connections at all with the rest of the websites. Finally, www.fipp.com and www.mediafrenzyglobal.com are two private companies with no connection between them. Figure 3 shows the co-mention links of the websites. All of the nodes are colored red because all websites have at least one co-mention with another website. Each line's width is drawn in proportion to the number of co-linking websites. The paramount importance of the issues related to misinformation is highlighted by the vast number of co-mentions among the websites analyzed. It is observed that the European Commission spearheads the efforts for tackling misinformation. The European Commission's role in effectively promoting European initiatives for tackling misinformation is shown by the fact that ec.europa.eu and www.disinfobservatory.org are co-mentioned 15 times, and ec.europa.eu and euvsdisinfo.eu are co-mentioned 185 times.
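The density and centrality measures used above can be illustrated on a small example. The following Python sketch uses purely illustrative sites and links (not the study's actual data) and a plain breadth-first search rather than the built-in routines of Webometric Analyst; its betweenness count follows one shortest path per node pair, a simplification of the standard definition:

```python
from collections import defaultdict, deque

# Toy directed inter-mention network (illustrative sites and links only);
# an edge points from the mentioning site to the mentioned site.
edges = [
    ("theguardian.com", "reuters.com"),
    ("nytimes.com", "reuters.com"),
    ("theguardian.com", "washingtonpost.com"),
    ("washingtonpost.com", "reuters.com"),
    ("reuters.com", "theguardian.com"),
]
nodes = sorted({v for e in edges for v in e})
succ = defaultdict(list)
for a, b in edges:
    succ[a].append(b)

# Density of a directed network: relations / maximum possible relations.
n = len(nodes)
density = len(edges) / (n * (n - 1))

# Inlink (indegree) and outlink (outdegree) counts per node.
indeg = {v: sum(1 for _, b in edges if b == v) for v in nodes}
outdeg = {v: sum(1 for a, _ in edges if a == v) for v in nodes}

def shortest_path(src, dst):
    """Return one shortest directed path from src to dst, or None."""
    q, prev = deque([src]), {src: None}
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for w in succ[u]:
            if w not in prev:
                prev[w] = u
                q.append(w)
    return None

# Unnormalized betweenness: how often a node lies strictly inside a
# shortest path between two other nodes (one shortest path per pair here).
betweenness = {v: 0 for v in nodes}
for s in nodes:
    for t in nodes:
        if s != t:
            p = shortest_path(s, t)
            if p:
                for v in p[1:-1]:
                    betweenness[v] += 1

print(density, indeg, outdeg, betweenness)
```

In this toy network the most-mentioned site accumulates both the highest indegree and the highest betweenness, mirroring the pattern reported for the central seed sites above.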
\begin{figure}[h] \caption{Co-mention Network Diagram} \centering \includegraphics[width=0.95\textwidth]{misinfo_comention.png} \end{figure} \begin{table}[] \caption{Seed site distribution by TLD (N=49). Values are the number of seed sites under each TLD.} \label{tab:domains} \begin{tabular}{llll} \hline No & Seed site TLDs/GTLDs/CCTLDs & Total Number of Seed Sites & Percentage (\%) \\ \hline 1 & .com & 21 & 42.85\% \\ 2 & .org & 11 & 22.44\% \\ 3 & .eu & 10 & 20.40\% \\ 4 & .de & 3 & 6.12\% \\ 5 & .be & 2 & 4.08\% \\ 6 & .int & 1 & 2.04\% \\ 7 & .lu & 1 & 2.04\% \\ \hline \end{tabular} \end{table} At the same time, the Commission is used as a reference alongside the world's most influential think tanks, as it is observed that ec.europa.eu and www.theatlantic.com are co-mentioned 305 times; ec.europa.eu and www.csis.org are co-mentioned 359 times; ec.europa.eu and pewresearch.org are co-mentioned 541 times. Lastly, the Commission is used to set a paradigm with other major organizations, since ec.europa.eu and www.wan-ifra.org are co-mentioned 180 times; ec.europa.eu and www.rsf.org are co-mentioned 479 times; and ec.europa.eu and www.coe.int are co-mentioned 721 times. Furthermore, it is observed that resources produced by European initiatives are often used alongside resources produced by leading think tanks. For example, euvsdisinfo.eu and www.csis.org are co-mentioned 49 times; euvsdisinfo.eu and www.pewresearch.org are co-mentioned 81 times; and euvsdisinfo.eu and www.theatlantic.com are co-mentioned 178 times. There are, however, European projects that fall behind. For instance, www.disinfobservatory.org and www.theatlantic.com are co-mentioned once, while www.disinfobservatory.org and www.csis.org are co-mentioned once, and www.disinfobservatory.org and pewresearch.org are co-mentioned twice.
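As a quick check of the arithmetic in Table 4, the percentage column can be recomputed from the raw counts; note that the table appears to truncate rather than round (e.g., 21/49 = 42.86\% when rounded, printed as 42.85\%):

```python
# Seed-site counts per TLD, taken from Table 4 (N = 49).
counts = {".com": 21, ".org": 11, ".eu": 10, ".de": 3, ".be": 2, ".int": 1, ".lu": 1}

total = sum(counts.values())                               # 49 seed sites in total
shares = {tld: 100.0 * c / total for tld, c in counts.items()}

for tld, share in shares.items():
    print(f"{tld}: {share:.2f}%")
```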
Last but not least, European NGOs play an important role in the efforts to tackle misinformation, since www.investigate-europe.eu and ec.europa.eu are co-mentioned 33 times; www.investigate-europe.eu and www.theatlantic.com are co-mentioned 661 times; www.investigate-europe.eu and www.csis.org are co-mentioned 841 times; www.investigate-europe.eu and www.pewresearch.org are co-mentioned 845 times; www.investigate-europe.eu and www.rsf.org are co-mentioned 849 times; and www.investigate-europe.eu and fandango-project.eu are co-mentioned 979 times. All these findings show the massive effort and the resources pooled by various stakeholders to tackle misinformation. \section{Discussion} Misinformation is a constantly evolving threat that requires rigorous checks and balances to address. The results can be categorized as those that stem from the theoretical part of the paper and those that derive from the webometric analysis. On the theoretical side, it is interesting that in many cases Webometrics is used as an evaluation system for a wide range of universities around the world. This is known as a ``ranking'' system, where ranking describes a process in which the position of each element within a group is determined by comparing the elements against one another. The ranking process appears in many areas besides academia. For example, there has also been a study on the ranking of Alternative Search Engines (ASEs) \cite{markscheffel}. Webometrics is a tool used in many studies to examine the World Wide Web for different purposes. In this project, we chose to use this tool to explore a quite different issue that also has an enormous impact on people, and to interpret the data exported from the analysis. From the research on combating misinformation at the state level, it can be concluded that the use of strict laws or regulations to punish the people accountable for this phenomenon is not an effective strategy.
Informing people how to detect misinformation, thus preventing them from reproducing it, is much more effective. Democratic societies ought to help their citizens learn how to acquire information only from proven reputable sources, question what they read, examine its accuracy, avoid reading only the headlines of articles, and, when something is fake, avoid sharing it. Last but not least, we observe the involvement in the effort to tackle misinformation and disinformation of international organizations whose original mandate was entirely unrelated. For instance, the Council of Europe is heavily investing in the cyberspace domain to remain relevant in a shifting world \cite{koulas}, and this change in perspective can foster meaningful collaborations \cite{brass}. On the webometric side, this study aims to investigate, from a webometric perspective, the role of all the public and private bodies, in Europe and at the international level, that have taken action as stakeholders to tackle the misinformation effect in Europe. We explore the structure of these stakeholders' portals and websites, their sources, the organizations' visions, their methods of gathering and cross-checking data and information, and their actions on the matter of misinformation in Europe. We searched the seed sites countrywide and at the international level for URL-based hyperlinks, title mentions, and external links that refer to the seed sites of the stakeholders. According to the results, European governmental websites and portals cooperate with the concerned NGOs inside Europe, but not as much as IOs do among themselves. In the co-mention network, all European portals show a strong connection with international websites and vice versa. Also, news organizations and other organizations of a similar work nature appear to have very strong co-mention relationships among themselves. Besides, diverse organizations are also well co-mentioned.
This could be because of the severe and sensitive nature of the issue and the urgency to counter it. Overall, European IOs and governmental portals seem to have the most interlinkage and co-mentions, as legal bodies that officially take action to counter misinformation in Europe at the national governmental level with legislation, new laws, and various efforts to raise awareness. In our study we have identified 49 different stakeholders that took action in the fight against misinformation in Europe, as shown in Table 2. We found that these efforts had some success in terms of networking patterns. The limitations we faced are similar to those faced by Acharya and Park \cite{acharya}. Firstly, from a total of 71 possible websites, we manually selected the ones we deemed most important and most relevant to our study and used them as seed sites. Early attempts to use all 71 of the original findings resulted in cluttered diagrams, and due to the high number of irrelevant sites, the various metrics, especially the betweenness centrality, were inflated without ensuring the misinformation focus we opted for. Secondly, many of our seed sites are not solely focused on misinformation and its tackling but have made contributions to that field. Thirdly, the seed site analysis was conducted with third-party software and the search engine www.bing.com. \section{Conclusions} Misinformation is one of the significant challenges that modern societies need to address effectively, because it severely impacts multiple aspects of our lives. We observe that there have been attempts to tackle misinformation, with mixed results. The European Union spearheads these efforts in cooperation with other organizations, but it is possible to further enhance and improve them. The ease with which the World Wide Web allows misinformation to spread is a significant factor that increases the complexity of the situation.
Further study should address the limitations of this study, as well as the use of new technologies, such as artificial intelligence, to process more data and yield better results.
\section{Introduction} Supermassive black hole accretion is affected by the properties of the accretion flow on a wide range of length scales. Gravity and gas dynamics play an important role from the gravitational radius out to much larger distances where the gravitational potential energy of the gas is comparable to its kinetic energy. The properties of the infalling gas at even larger radii, where gravity is weak, can also in principle affect the global properties of the accretion flow, an important point that is sometimes overlooked in studies of accretion physics. In the unmagnetized, spherically symmetric case with no angular momentum and an adiabatic equation of state, the global accretion flow is given by the Bondi solution \citep{bondi52}. The Bondi radius $R_{\rm B}\equiv GM_{\rm BH}/c_{s,\infty}^2$ sets the scale at which gravity becomes important, and the mass accretion rate is given by $\dot{M}_{\rm B}\equiv \Lambda 4\pi R_{\rm B}^2\rho_\infty c_{s,\infty}$. Here $M_{\rm BH}$ is the black hole mass, $\rho_\infty$ is the mass density of gas at infinity, $c_{s,\infty}$ is the gas sound speed at infinity, and $\Lambda\equiv[2/(5-3\gamma)]^{(5-3\gamma)/[2(\gamma-1)]}/4$ \citep{frank02}, where $\gamma$ is the adiabatic index. The introduction of angular momentum fundamentally modifies this picture, however, and for sufficiently high angular momentum the gas can no longer accrete onto the central black hole. For gas with specific angular momentum $j$, the centrifugal radius $R_{\rm c}\equiv j^2/GM_{\rm BH}$ defines the radius interior to which the gas has difficulty penetrating. Accretion in this case requires some form of angular momentum transport. Turbulent stresses associated with the nonlinear development of the MagnetoRotational Instability (MRI) are thought to be capable of transporting angular momentum \citep{velikhov59,chandrasekhar60,balbus91}.
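The Bondi scalings just defined are straightforward to evaluate numerically. The sketch below (cgs units; the function names are ours, not from any standard package) computes $\Lambda$, $R_{\rm B}$, and $\dot{M}_{\rm B}$; for $\gamma=3/2$ it gives $\Lambda=1/2$, and for $\gamma=7/5$ the familiar $\Lambda=0.625$:

```python
import math

G = 6.674e-8  # gravitational constant in cgs units

def bondi_lambda(gamma):
    """Dimensionless eigenvalue Lambda = [2/(5-3g)]^{(5-3g)/[2(g-1)]}/4 of the Bondi solution."""
    e = 5.0 - 3.0 * gamma
    return (2.0 / e) ** (e / (2.0 * (gamma - 1.0))) / 4.0

def bondi_radius(M_bh, c_s_inf):
    """Bondi radius R_B = G M_BH / c_{s,inf}^2."""
    return G * M_bh / c_s_inf**2

def bondi_rate(M_bh, rho_inf, c_s_inf, gamma):
    """Bondi accretion rate Mdot_B = Lambda 4 pi R_B^2 rho_inf c_{s,inf}."""
    R_B = bondi_radius(M_bh, c_s_inf)
    return bondi_lambda(gamma) * 4.0 * math.pi * R_B**2 * rho_inf * c_s_inf
```

Note that $\dot{M}_{\rm B}\propto M_{\rm BH}^2\rho_\infty/c_{s,\infty}^3$, since $R_{\rm B}^2$ contributes two powers of $M_{\rm BH}$ and four inverse powers of $c_{s,\infty}$.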
The angular momentum transport properties of the MRI are often parameterized with an effective shear viscosity $\nu\equiv \alpha c_s H_p$, where numerical simulations suggest $\alpha$ is typically of order $0.01$ \citep{stone96,hawley11,mckinney12}. Observations actually suggest a vertically and azimuthally averaged $\alpha \sim 0.1-0.4$ \citep{king07}, a discrepancy that to date has not yet been fully addressed. There are a number of general classes of analytic solutions for rotating flows with viscous angular momentum transport. The standard solution in the limit of a geometrically thin and optically thick viscous disk was given by Shakura \& Sunyaev (1973; see also Lynden-Bell \& Pringle 1974, Novikov \& Thorne 1973), commonly referred to as an $\alpha$-disk. Thin disks typically neglect global heat transport and only consider effects that are first order in the disk thickness $H/R$, where $H$ is the disk thickness at cylindrical radius $R$. The next order of approximation is slim disks, which account for terms in the equations of motion that are second order in the disk thickness $H/R$ (Abramowicz et al. 1988; see also Abramowicz et al. 1986, Kato et al. 1988, Chen \& Taam 1993, Narayan \& Popham 1993, Katz 1977, Begelman 1978, Begelman \& Meier 1982, Eggum et al. 1988). Slim disks allow for non-negligible radial velocity, horizontal pressure gradients, and advective heat transport, and they have been used to model geometrically thick, hot disks accreting at super-Eddington rates. Advection Dominated Accretion Flows \citep[ADAFs,][]{narayan94,narayan95a, narayan95b,abramowicz95,ichimaru77,rees82,quataert99a,narayan11} describe radiatively inefficient sub-Eddington flows. The accreting low density gas is hot and geometrically extended in the polar direction, and the thermal energy is advected onto the central object. 
The closely related Adiabatic Inflow Outflow Solutions \citep[ADIOS,][]{blandford99,begelman11} can accrete at much lower rates by allowing for outflows that can carry off energy and angular momentum \citep[see also][]{jiao11}. At present it is unclear which, if any, of these solutions real supermassive black hole accretion flows will select. Certainly for hot disks with $H\sim R$, the standard approximations of thin disk accretion theory break down \citep{pringle81}, and we should consider terms of all orders in $H/R$. Further study of the detailed properties of geometrically thick disks is required. Numerical simulations have provided further insight into accretion physics, and Convection Dominated Accretion Flows \citep[CDAFs;][]{igumenshchev99,igumenshchev00,igumenshchev00b,igumenshchev03,stone99,narayan00,quataertgruzinov00}, which operate in the weak viscosity regime for flows with no cooling, have been proposed to explain one class of simulations. Recent work by \cite{yuan12b} tends to favor ADIOS-type models over CDAFs, however. Other authors have studied the geometry of two-dimensional magnetohydrodynamical flows accreting from torus-like initial reservoirs of gas (\cite{stone01,hawley01,machida01,igumenshchev02,hawley02}; see also \cite{narayan12} for 3D simulations with general relativity). Many previous works neglect the radiative properties of the flow, however, which may play an important role in determining the geometry of the flow \citep{dibi12,fragile12}. For accretion flows with radiative cooling, the properties of the infalling gas at infinity determine the strength of the radiative cooling and can affect the global properties of the flow (Yuan et al. 2000).
Several authors mimic cooling by varying the adiabatic index of the gas \citep{moscibrodzka08,moscibrodzka09,janiuk09} or include radiative transfer (\citealt{ohsuga09,ohsuga11}; see also \citealt[][and references therein]{zanotti11}), but accretion in these works is aided either by at least some inflowing gas that has low angular momentum or by starting with gas that is bound to the black hole. Another important consequence of radiation from gas near the black hole is that it can drastically transform the outer parts of the accretion flow via heating \citep{park01,park07}. It is well known that observed black holes can accrete at rates that are only a small fraction of the Bondi accretion rate. For example, Sgr ${\rm A}^*$ appears to be accreting at a rate of $10^{-3}-10^{-2}\dot{M}_{\rm B}$ \citep{yuan03,quataert04}. A number of authors have studied the transition between the cooling flow at large radii and the accretion flow near and below the Bondi radius \citep{quataert99b,quataertnarayan00}, and the implications for accretion rates. Various energy transport mechanisms have been suggested that may be able to reduce accretion rates well below the Bondi rate \citep{ciotti01,quataert04,sharma08,shcherbakov10,igumenshchev06}. Other authors have invoked Radiatively Inefficient Accretion Flows to explain the low luminosity of observed accreting systems \citep[][and references therein]{narayan08}. We consider here the alternative view that it is simply the angular momentum of the gas that slows accretion. There has been some analytic and semi-analytic work exploring this idea, including angular momentum transport \citep{xu97,blandford99}, and both angular momentum transport and cooling \citep[e.g.,][]{chakrabarti96,park09,inogamov10}, as well as numerical work without cooling \citep[e.g.,][]{stone99,proga03a,proga03b,yuan12a,yuan12b} and with cooling \citep{yuan10}.
This paper will treat the accretion problem in the domain where the angular momentum is not sufficient to prevent inflow at the Bondi rate but ample to inhibit accretion without the operation of some viscous process. That is, we will work exclusively in the domain \begin{equation} R_s \ll R_{\rm c} \ll R_{\rm B}, \label{eq:rcent} \end{equation} where $R_s\equiv 2GM_{\rm BH}/c^2$ is the Schwarzschild radius. Since the ratio of the bounding radii is $R_{\rm B}/R_s \sim 10^{8}/T_4$, where $T_4$ is the gas temperature in units of $10^4$ K, this domain is well defined. Moreover, many elliptical galaxies, including M87, have stellar populations with small average spin that is insufficient to create Keplerian disks near the Bondi radius \citep[see e.g.,][]{inogamov10}. We run hydrodynamical simulations to follow infalling, radiating gas that starts from well beyond the Bondi radius with $Be>c_{s,\infty}^2/(\gamma-1)$ (see Equation \ref{eq:bernoulli}) down to the vicinity of the black hole. We include shear viscosity to capture qualitatively the angular momentum transport properties of the MRI, so that accretion can proceed at varying rates, depending on the flow parameters. Since the gravitational and centrifugal forces do not have the same scaling with radial distance in our setup, we do not expect our solutions to be self-similar as in standard ADAF \citep{narayan94} and CDAF models \citep{quataertgruzinov00,narayan00}. Thermal Bremsstrahlung cooling can also play an essential role in the energetics of the flow and determine whether gas is bound to the black hole or not. We provide accretion flow solutions both in the very sub-Eddington, nearly adiabatic domain and in the domain approaching super-Eddington rates. Section \ref{sec:method} provides the mathematical framework for our problem and describes our numerical method. Section \ref{sec:results} gives our main results, and the broader implications of our results are discussed in Sections \ref{sec:discussion} and \ref{sec:conclusions}.
The Appendix provides details on the convergence properties of our solutions. We will designate the low inflow solutions that we find as ``RRIOS'' - Radiating Rotating Inflow-Outflow Solutions. \section{Method}\label{sec:method} \subsection{Equations}\label{sec:eqs} We use ZEUS-2D v2.0 (Stone \& Norman 1992) to solve the equations of hydrodynamics, \begin{align} \left(\frac{\partial }{\partial t}+ \boldsymbol{v\cdot\nabla} \right)\rho+ \rho \boldsymbol{\nabla\cdot v} =0, \\ \rho\left(\frac{\partial }{\partial t}+ \boldsymbol{v\cdot\nabla} \right)\boldsymbol{v}=-\boldsymbol{\nabla}p - \rho \boldsymbol{\nabla}\Phi, \\ \rho\left(\frac{\partial }{\partial t}+ \boldsymbol{v\cdot\nabla} \right)\frac{e}{\rho}=-p\boldsymbol{\nabla\cdot v}, \end{align} where $\rho$ is mass density, $\boldsymbol{v}$ is flow velocity, $e$ is energy density, gas pressure $p\equiv (\gamma-1)e$, where $\gamma$ is the adiabatic index, and the gravitational potential $\Phi\equiv -GM_{\rm BH}/r$. We run global hydrodynamical simulations of axisymmetric rotating accretion flows in spherical polar coordinates. The simulations include thermal Bremsstrahlung cooling, shear viscosity to capture drag due to turbulent stresses, and modified boundary conditions to allow for disk and wind outflows. 
The source terms for momentum and energy in the ZEUS hydrocode are modified by the addition of the terms within square brackets below: \begin{equation} \label{eq:NavierStokesNumerical} \rho\left(\frac{\partial \boldsymbol{v}}{\partial t}\right)_{\rm sources} = -\boldsymbol{\nabla} p -\rho \boldsymbol{\nabla} \Phi -\nabla \cdot \boldsymbol{Q}+ \left[\boldsymbol{\nabla\cdot\sigma'}\right]\,, \end{equation} and \begin{equation} \label{eq:energy} \begin{split} \left(\frac{\partial e}{\partial t}\right)_{\rm sources}= &-p \boldsymbol{\nabla\cdot v} -\boldsymbol{Q\cdot\nabla v} + \\ & \left[\boldsymbol{\sigma'\cdot\nabla v} - \dot{e}_{\rm Brem} \right]\,, \end{split} \end{equation} in which $\boldsymbol{\sigma'}$ is the viscous shear tensor. ZEUS's standard tensor artificial viscosity $\boldsymbol{Q}$ is applied with shocks spread over $\approx 2$ zones, and we use a Forward Time Centered Space differencing scheme for our modified source terms. We neglect the non-azimuthal components of the viscous shear tensor $\boldsymbol{\sigma'}$, as the poloidal shear is believed to be subdominant to the azimuthal shear in the non-linear regime of the MRI \citep[see][]{stone99}. The azimuthal components of the viscous shear tensor are \begin{equation} \boldsymbol{\sigma'}_{r\phi}=\nu \rho r \frac{\partial}{\partial r}\left(\frac{v_{\phi}}{r}\right), \end{equation} \begin{equation} \boldsymbol{\sigma'}_{\theta\phi}=\frac{\nu \rho \sin \theta}{r} \frac{\partial}{\partial \theta}\left(\frac{v_{\phi}}{\sin \theta}\right), \end{equation} where $\nu$ is the kinematic viscosity. The Bremsstrahlung cooling term (Svensson 1982, Ball et al. 2001) is \begin{equation} \dot{e}_{\rm Brem}\equiv \alpha_f r_e^2m_ec^3n^2(32/3)(2/\pi)^{1/2}\left(\frac{k_BT}{m_ec^2}\right)^{1/2}, \label{eq:brem} \end{equation} where $\alpha_f$ is the fine structure constant and $r_e$ is the classical electron radius.
We use this abbreviated form for the Bremsstrahlung cooling to capture the qualitative effects of thermal emission; in fact, in high temperature regions the electron component approaches relativistic speeds and Bremsstrahlung radiation increases substantially above equation (\ref{eq:brem}). The cooling is limited by a floor in the temperature, equal to the gas temperature at the outer boundary, in order to maintain stability. The advective transport terms in ZEUS are unmodified. \subsection{Setup \& Flow Properties} In our simulations gas flows in through the outer boundary, located at $R_{\rm out}=10R_{\rm B}$, at the Bondi rate, $\dot{M}_{\rm B}$. However, in whichever grid zones at the outer boundary the flow is moving radially outward, we locally switch to an outflow boundary condition. We note that several other authors have studied accretion flows with outer boundary near the Bondi radius \citep[e.g.,][]{pen03,pang11}. Our inner boundary, located at $R_{\rm in}=10^{-3} R_{\rm B}$, is a standard no-torque outflow boundary condition. $R_{\rm in}$ is located at $91R_s$, and we have checked the convergence of our results with respect to the location of the inner boundary (see Appendix). The flow is initialized throughout with the Bondi profile for density, energy density, and radial velocity. Our standard simulations are run with $T_{\infty}=2\times 10^7$ K. This is close to the temperature of gas heated radiatively by a typical quasar output spectrum \citep{sazonov05}. The simulations are run with adiabatic index $\gamma=1.5$, except where otherwise noted, and we verify explicitly that this assumption does not affect our main results (cf Figure \ref{fig:gamma}). A real gas would have $\gamma=5/3$ above $R\sim 300-500 R_s$; at smaller radii, in the non-collisional case, electrons become relativistic while protons remain nonrelativistic, leading to a lower effective $\gamma$.
Our choice of $\gamma$ allows us to separate well the sonic and centrifugal radii for supersonic infall. The density at infinity, $\rho_\infty$, is a parameter that determines the dimensionless quantity $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$, where $\dot{M}_{\rm Edd}\equiv 4\pi G M_{\rm BH}m_p /(\epsilon \sigma_T c)$ is the Eddington accretion rate and we assume a radiative efficiency $\epsilon=0.1$. This dimensionless mass accretion rate parameter scales as $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}\propto M_{\rm BH}\rho_\infty /c_{s,\infty}^3$. The dimensionless cooling fraction $\dot{e}_{\rm Brem} \Delta t_{\rm char}/nk_BT \propto M_{\rm BH}\rho_\infty /c_{s,\infty}^4$, where we have taken characteristic time $\Delta t_{\rm char}=GM_{\rm BH}/c_{s,\infty}^3$. The black hole mass and gas density at infinity appear together in the above two quantities, and the free parameter in the accretion flow is actually the product $M_{\rm BH}\rho_{\infty}$ (see Chang \& Ostriker 1985). This parameter effectively determines the strength of the Bremsstrahlung cooling, with $\dot{M}_{\rm B}/\dot{M}_{\rm Edd} \ll 1$ corresponding to no cooling and $\dot{M}_{\rm B}/\dot{M}_{\rm Edd} \gg 1$ corresponding to strong cooling. For convenience we pick $M_{\rm BH}=2\times10^8 M_{\odot}$ and allow $\rho_\infty$ to vary freely. At $\rho_\infty=2.3\times10^{-21}\,{\rm g\,cm^{-3}}$, the Eddington ratio $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=0.1$. Gas is initialized with constant specific angular momentum throughout, except near the rotation axis, where it tapers to zero. The inflowing gas at the outer boundary has the same angular momentum, without tapering, ensuring that after initial transients, the gas has uniform angular momentum throughout.
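The quoted normalization can be verified directly. The following sketch assumes the standard Bondi rate $\dot{M}_{\rm B}=4\pi\lambda(GM_{\rm BH})^2\rho_\infty/c_{s,\infty}^3$ with eigenvalue $\lambda=1/2$ for $\gamma=3/2$, mean molecular weight $\mu=0.5$, and standard cgs constants (all assumptions on our part); it recovers $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}\approx0.1$ for the quoted $\rho_\infty$:

```python
import math

# Standard cgs constants (assumed)
G, m_p, sigma_T, c, k_B = 6.674e-8, 1.673e-24, 6.652e-25, 2.998e10, 1.381e-16
M_sun = 1.989e33

# Parameters quoted in the text
M_BH    = 2e8 * M_sun
T_inf   = 2e7                  # K
rho_inf = 2.3e-21              # g cm^-3
gamma   = 1.5
mu      = 0.5                  # mean molecular weight (quoted later in the text)
eps     = 0.1                  # radiative efficiency

c_s = math.sqrt(gamma * k_B * T_inf / (mu * m_p))   # adiabatic sound speed at infinity

# Bondi accretion eigenvalue lambda(gamma); equals 1/2 for gamma = 3/2
lam = 0.25 * (2.0 / (5.0 - 3.0 * gamma)) ** ((5.0 - 3.0 * gamma) / (2.0 * (gamma - 1.0)))

Mdot_B   = 4.0 * math.pi * lam * (G * M_BH)**2 * rho_inf / c_s**3
Mdot_Edd = 4.0 * math.pi * G * M_BH * m_p / (eps * sigma_T * c)

print(Mdot_B / Mdot_Edd)   # ~0.1
```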
For adiabatic inviscid flows, the Bernoulli constant \begin{equation} Be\equiv \frac12\left(v_r^2+v_\theta^2\right)+\frac12\frac{j^2}{\varpi^2}-\frac{GM}{r}+\frac{c_s^2}{\gamma-1}, \label{eq:bernoulli} \end{equation} where $\varpi$ is cylindrical radius, is a conserved quantity (see e.g., Narayan \& Yi 1994). It immediately follows that if gas starts from rest at infinity with $Be=c_{s,\infty}^2/(\gamma-1)$, there is a centrifugal radius interior to which the gas cannot penetrate. The specific angular momentum is another free parameter in our simulations, but we pick $j_\infty=\sqrt{0.02}R_{\rm B}c_{s,\infty}$ so that the centrifugal radius $R_{\rm c}=0.02 R_{\rm B}$ is well resolved and lies at a radius $20$ times that of the inner boundary. Hence we are computing within the domain given by equation (\ref{eq:rcent}). The limit of weak angular momentum with $R_{\rm c}\ll R_{\rm in}$ approaches the Bondi solution. For our choice of specific angular momentum, the introduction of shear viscosity and angular momentum transport allows for gas to migrate from the centrifugal radius down to the inner boundary and to accrete. The accreting gas carries only a small fraction ($<20\%$) of the initial angular momentum $j_\infty$ through the inner boundary. In our simulations we set the kinematic viscosity $\nu$ to a constant nonzero value $\nu=10^{-3}c_{s,\infty}R_{\rm B}$. The effective $\alpha$ coefficient, $\alpha\equiv \nu/(c_s H_p)$, where $c_s$ is the local sound speed and $H_p$ the local pressure scale height (see Section \ref{sec:radcool}), is then spatially variable, but is typically of order $\sim 0.01$ near the centrifugal barrier where the density peaks. \cite{stone99} found that the properties of their non-radiative hydrodynamic flow do not depend strongly on the assumed form for kinematic viscosity, and \cite{stone01} found weakly magnetized flows to be qualitatively similar to the viscous hydrodynamic ones.
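The quoted centrifugal radius follows from balancing gravity against the centrifugal term in the equatorial plane, $j_\infty^2/\varpi^3=GM_{\rm BH}/\varpi^2$, giving $R_{\rm c}=j_\infty^2/GM_{\rm BH}$. A minimal check in code units (an illustrative sketch, with $GM_{\rm BH}=c_{s,\infty}=1$):

```python
import math

# Code units: G*M_BH = 1 and c_s_inf = 1, so R_B = G*M_BH/c_s_inf^2 = 1
GM, c_s_inf = 1.0, 1.0
R_B = GM / c_s_inf**2

j_inf = math.sqrt(0.02) * R_B * c_s_inf    # quoted specific angular momentum

# Equatorial balance j^2/R^3 = GM/R^2 gives R_c = j^2/GM
R_c = j_inf**2 / GM
print(R_c / R_B)   # 0.02, i.e. R_c = 0.02 R_B, 20 times the inner boundary radius
```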
Incidentally, the limit of large specific angular momentum with $R_{\rm c} \gtrsim R_{\rm B}$ should approach that of a giant viscous disk with large radial extent, but we have not explored this regime in any detail. The Bremsstrahlung cooling also modifies the flow profile, and for cold disks the inflow can be highly supersonic. In these cold solutions the sonic radius is typically intermediate between but still well separated from the centrifugal and Bondi radii. \subsection{Numerics} \label{sec:numerics} Our standard simulations are run with $21$ equally spaced grid zones in polar angle $\theta$ varying from $\theta=0$ to $\theta=\pi/2$, enforcing symmetry across the equatorial plane. In the radial direction we use a non-uniform logarithmic grid with $32$ zones per decade ranging from $R_{\rm in}=10^{-3}R_{\rm B}$ to $R_{\rm out}=10R_{\rm B}$, with $128$ total grid zones. As a code check, we ran the spherically symmetric test problem with no cooling. Our code is able to maintain the Bondi profile for mass density and radial velocity to within $5\%$ for various values of the adiabatic index $\gamma$. The internal energy density is less well conserved in the very inner portions of the flow ($\approx 10$ innermost cells) due to artificial viscous heating, but is still within a factor of $\approx 2$ of the Bondi solution. The simulations are run until all time-averaged gross properties of the flow structure approach quasi-steady values. All diagnostics of the flow are averaged over many tens of Bondi times $t_{\rm B}\equiv R_{\rm B}/c_{s,\infty}$, typically between times $t=40\, t_{\rm B}$ and $t=90\,t_{\rm B}$ (see Appendix). Flow patterns are illustrated at time $t=90\,t_{\rm B}$ except where otherwise noted. The code is mass conservative, so mass flowing inwards from large radii must either accumulate on the grid or leave it, through the inner boundary or back out to large radii via the combination of polar and equatorial outflow.
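For concreteness, the radial grid just described can be reconstructed as follows; this is an illustrative sketch (not the actual ZEUS input deck), assuming zone edges with constant logarithmic spacing:

```python
import math

# Radial grid parameters from the text
R_in, R_out = 1e-3, 10.0              # in units of R_B
zones_per_decade = 32

n_decades = math.log10(R_out / R_in)                 # 4 decades
N = int(round(zones_per_decade * n_decades))         # 128 radial zones

# Zone edges with a constant ratio between neighbours (logarithmic spacing)
edges = [R_in * (R_out / R_in) ** (i / N) for i in range(N + 1)]

print(N, edges[1] / edges[0])   # 128 zones, edge ratio 10**(1/32) per zone
```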
We also run a number of tests to check the proper implementation of the additional physics we have added to ZEUS. We verified the proper thermal cooling of a stationary constant density gas, as well as the diffusion of thermal energy when a temperature gradient exists. Finally, we have reproduced qualitatively the viscous evolution of an initial ring of matter \citep{pringle81}. The ring spreads in radius as matter moves inwards, but the bulk of the angular momentum is transferred outwards. Kinetic energy lost to viscous dissipation is added to the internal energy of the gas, with the viscous substep of the code conserving energy to within $10$\%. \section{Results}\label{sec:results} \subsection{Adiabatic Inviscid Flows} For adiabatic flows with sufficiently high angular momentum that the centrifugal barrier is located far from the black hole, equation (\ref{eq:bernoulli}) immediately tells us that there can be no accretion onto the black hole. Any inflowing high angular momentum material must either accumulate beyond the centrifugal radius or be deflected back outwards and ejected in outflows (\citealt[see][and references therein]{proga03a}; see also \citealt{hawley84a,hawley84b,hawley86}). In the adiabatic regime with constant angular momentum $j$ throughout, there is a stationary ($v_r=v_\theta=0$) settling solution satisfying the momentum equation \citep[see][]{papaloizou84,fishbone76}. Starting from the momentum equation, \begin{equation} -\frac{1}{\rho}\boldsymbol{\nabla}p - \boldsymbol{\nabla}\Phi +\frac{j^2}{\varpi^3} \boldsymbol{\hat{\varpi}}=0, \end{equation} the adiabatic relation $p\propto \rho^{\gamma}$ gives \begin{equation} \boldsymbol{\nabla} \left(\frac{c_s^2}{\gamma-1} + \Phi +\frac{j^2}{2\varpi^2} \right) =0. 
\end{equation} This yields an equation for the sound speed, \begin{equation} \frac{c_s^2}{c_{s,\infty}^2}=1+(\gamma-1)\frac{GM}{c_{s,\infty}^2r}-\frac{\gamma-1}{2}\frac{j^2}{c_{s,\infty}^2\varpi^2}, \label{eq:cs} \end{equation} valid where the right hand side is positive, as well as the density profile, \begin{equation} \frac{\rho}{\rho_{\infty}}=\left[1+(\gamma-1)\frac{GM}{c_{s,\infty}^2r}-\frac{\gamma-1}{2}\frac{j^2}{c_{s,\infty}^2\varpi^2}\right]^{1/(\gamma-1)}. \label{eq:dens} \end{equation} The density is zero near the polar axis in the region where the right hand side of equation (\ref{eq:cs}) is negative. This solution is actually dynamically unstable to non-axisymmetric perturbations \citep{papaloizou84}, which we do not allow for in our axisymmetric code. Neglecting these instabilities is likely a reasonable assumption, however, as we have angular momentum gradients that can stabilize the flow \citep{papaloizou87}. Our multidimensional adiabatic simulations closely resemble this stationary settling solution, with sound speed profile matching equation (\ref{eq:cs}) to within a few percent exterior to the centrifugal barrier. Figure~\ref{fig:inviscid}a shows the structure of the flow within the Bondi radius at Eddington ratio $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=10^{-4}$. The red contours are logarithmic contours of density, given in cgs units, and we overlay black contours of logarithmic density for the adiabatic settling solution. The solid black line shows the zero density surface in the stationary solution. Blue arrows indicate vectors of mass flux in the $R-z$ plane, with length proportional to the magnitude of the mass flux vector. Subsequent diagrams of flow patterns showing logarithmic density contours and mass flux vectors are illustrated in a similar fashion. The pressure associated with the accumulation of matter in an extended torus impedes the inflow of matter, as in the stationary solution.
Indeed, the mass inflow rate at the Bondi radius is reduced by more than one order of magnitude to $\sim 0.06\dot{M}_{\rm B}$ at $t=45 t_{\rm B}$. There is evidence of time-dependent circulation, however, partially due to our choice of initial conditions, and the flow is not purely stationary with zero inflow rate at the Bondi radius. Matter flows inwards at intermediate latitudes and turns around at radii exterior to the centrifugal barrier, flowing back outwards in the polar and equatorial regions. Figure~1b shows the flow structure interior to $0.1R_{\rm B}$, with centrifugal radius located at $R_{\rm c}=0.02 R_{\rm B}$. There is again evidence of time-dependent circulation, but the largest differences in the flow relative to the stationary solution occur near the polar axis where the stationary solution has zero mass density. There are very small inflows in the polar region due to angular momentum transport associated with numerical viscosity, but the accretion rate is extremely low, $\dot{M}_{\rm accretion}< 10^{-3}\dot{M}_{\rm B}$, i.e. any accretion is due to numerical error. In the absence of physical angular momentum transport the gas in the centrifugal barrier is stationary in the $r$ and $\theta$ directions, cannot accrete, and its pressure prevents new gas from falling down to small radii. Our results indicate that in the limit of very long and accurate integrations the solution would approach the stationary solution given in equations (\ref{eq:cs}) and (\ref{eq:dens}), since there is no engine to drive circulation for adiabatic flows. \begin{figure*}[htp] \centering \includegraphics[width=.87\textwidth]{f1a.eps}\\ \includegraphics[width=.87\textwidth]{f1b.eps} \caption{(a) The structure of the inviscid flow with no cooling, interior to $R_{\rm B}$ at $t=45 t_{\rm B}$. Red contours are logarithmic contours of density, and blue arrows indicate mass flux in the $R-z$ plane. 
The flow approaches a stationary settling solution (black contours, cf equation (\ref{eq:dens})) in which the inflow rate through the Bondi radius is reduced by an order of magnitude below the Bondi rate. The solid black line shows the zero density surface in the stationary solution. (b) Flow structure interior to $5R_{\rm c}=0.1R_{\rm B}$. The bulk of the gas in the disk cannot accrete and the accretion through the polar funnel is negligible and due to numerical error.}\label{fig:inviscid} \end{figure*} \subsection{Hot and Cold Disks}\label{sec:hotcold} We find two general states for rotating axisymmetric accretion flows with radiative cooling and viscous angular momentum transport: hot thick disks with low accretion rates and both polar and disk outflows, and cold thin disks with high accretion rates and weak polar and disk outflows. Figure~2a shows the flow structure interior to the Bondi radius for the hot disk with $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=10^{-4}$. Red contours are logarithmic contours of density, blue vectors represent mass flux in the $R-z$ plane, and the solid black line indicates the zero density surface in the adiabatic settling solution. Gas flows inwards over a wide range of intermediate polar angles, and there are equatorial and polar outflows. The mass ejected in the polar outflow, $\sim 0.02\dot{M}_{\rm B}$, is similar in magnitude to the adiabatic case, and is again caused by inflowing matter turning around exterior to the centrifugal torus. The detailed properties of this circulation may depend on the length and accuracy of our numerical integrations. The equatorial outflow is driven primarily by azimuthal stresses associated with the shear viscosity, and it has magnitude $\sim 0.25\dot{M}_{\rm B}$ at the Bondi radius. The mass inflow rate through the Bondi radius increases over that in the inviscid case and essentially balances this equatorial outflow.
Figure~2b shows red contours of specific angular momentum in units of $j_\infty=\sqrt{0.02}R_{\rm B}c_{s,\infty}$, with blue vectors indicating the flux of angular momentum. The purple contour shows where $j=j_\infty$; equatorward the gas has higher specific angular momentum, and poleward the gas has lower specific angular momentum. The equatorial outflow carries off the angular momentum of the gas that was accreted \citep{kolykhalov79,inogamov10} and some of the angular momentum lost by the polar outflow. Figure~3a shows logarithmic density contours and mass flux vectors interior to $0.1R_{\rm B}$. The black line indicates the zero density surface in the stationary solution. Inflowing matter dumps angular momentum into the equatorial regions, and an outflow is driven beginning just beyond the centrifugal radius, located at $R_{\rm c}=0.02R_{\rm B}$. The inflowing gas is able to penetrate below the centrifugal radius only due to the angular momentum transport, i.e. the assumed weak MRI-induced viscosity. In Figure~3b, which shows logarithmic density contours and mass flux vectors interior to half the centrifugal radius, we see that the gas accretes over a range of latitudes from a time-dependent, vertically extended, sub-Keplerian disk. The accretion rate is very low, however, with magnitude $\sim 4 \times 10^{-3}\dot{M}_{\rm B}$, of order $\alpha$ times the mass inflow rate. There is some accretion due to the angular momentum transport, but the bulk of the gas is too energetic, with $Be>c_{s,\infty}^2/(\gamma-1)$, and has too much angular momentum for much of it to accrete. This conclusion is in close agreement with that of \cite{stone99}.
\begin{figure*}[htp] \centering \includegraphics[width=.87\textwidth]{f2a.eps}\\ \includegraphics[width=.87\textwidth]{f2b.eps} \caption{(a) The structure of the hot disk with angular momentum transport designed to mimic the MRI, shown interior to $R_{\rm B}$ in the low accretion limit where $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=10^{-4}$. Cooling is negligible. Red contours are logarithmic contours of density, blue arrows indicate mass flux in the $R-z$ plane, and the black line demarcates the zero density surface in the adiabatic settling solution. Matter flows in through the Bondi radius and back out in disk and conical polar outflows. (b) Contours of specific angular momentum in units of $j_\infty=\sqrt{0.02}R_{\rm B}c_{s,\infty}$ and blue vectors of angular momentum flux. The black line demarcates the zero density surface in the stationary solution, and the purple contour marks where the specific angular momentum $j=j_\infty$. Angular momentum of inflowing material is transferred to the equatorial outflow.}\label{fig:viscoushot} \end{figure*} \begin{figure*}[htp] \centering \includegraphics[width=.87\textwidth]{f3a.eps} \includegraphics[width=.87\textwidth]{f3b.eps} \caption{Closeup of the structure of the hot disk with angular momentum transport at Eddington ratio $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=10^{-4}$. This is the same model as Figure~2. Red contours are logarithmic contours of density, and blue arrows represent mass flux in the $R-z$ plane. (a) Flow structure interior to $5R_{\rm c}=0.1R_{\rm B}$. The black line shows the zero density surface in the adiabatic settling solution. Angular momentum transport allows a small amount of matter to inflow below the centrifugal radius, and viscous stresses drive a strong equatorial outflow. (b) Flow structure interior to half the centrifugal radius or $0.01R_{\rm B}$. 
Accretion proceeds at very low rates over a range of polar angles.}\label{fig:viscoushotsmall} \end{figure*} In the cold disk solutions, the non-adiabatic gas radiates energy and most of the gas cannot travel back out to infinity on energetic grounds since $Be<c_{s,\infty}^2/(\gamma-1)$. In the absence of angular momentum transport the gas continuously accumulates near the centrifugal barrier. Gas pressure is insufficient to drive matter inward onto the central black hole and there is no steady solution. The introduction of angular momentum transport allows for significant accretion from the centrifugal torus, however. Figure~4a shows logarithmic density contours and mass flux vectors for the cold disk solution at Eddington ratio $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=0.1$, interior to the Bondi radius. The solid black line shows the zero density surface in the adiabatic settling solution given in equation (\ref{eq:dens}). Matter accumulates in an extended torus whose pressure slows mass inflow, similar in spirit to the adiabatic settling solution. Inflow proceeds at low polar angles $\theta\lesssim \pi/4$, however, with total inflow rate $\sim 0.45\dot{M}_{\rm B}$ at the Bondi radius. Equatorial gas exterior to the centrifugal barrier is still driven outwards by the viscous engine, but the outward propagation of the equatorial outflow is slowed by the strong radiative cooling. In the absence of external pressure, the equatorial outflow would continue to propagate outwards indefinitely \citep{kolykhalov80}. Our constant pressure outer boundary condition slows the outflow, however, and we find that the equatorial outflow is unable to propagate to the Bondi radius on time scales of order $\sim 100\,t_{\rm B}$. This solution is not technically a steady-state solution, as angular momentum is continuously dumped into the radially extended torus, which acts as a large sink of angular momentum. 
If we integrated for much longer times, however, the equatorial outflow would have to continue propagating outwards to carry away the deposited angular momentum in a steady state. Figure~4b shows logarithmic density contours and mass fluxes interior to $0.1R_{\rm B}$. The flow forms a geometrically thin disk, and angular momentum transport allows matter to continue to flow inwards from this disk. We illustrate the cold thin disk near the centrifugal barrier in greater detail in Figure 5. Red contours are again logarithmic density contours and blue vectors represent mass flux. Accretion proceeds through the inner boundary from the cold, thin, sub-Keplerian disk, at a rate nearly equal to the mass inflow rate through the Bondi radius. This case resembles the standard thin disk \citep{shakura73}. \begin{figure*}[htp] \centering \includegraphics[width=.87\textwidth]{f4a.eps}\\ \includegraphics[width=.87\textwidth]{f4b.eps}\\ \caption{(a) The structure of the cold disk with angular momentum transport interior to $R_{\rm B}$, at Eddington ratio $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=0.1$. Red contours are logarithmic contours of density, and blue arrows indicate mass flux in the $R-z$ plane. Inflow is impeded by gas pressure at large polar angles but proceeds through the Bondi radius at polar angles $\theta \lesssim \pi/4$. Angular momentum is dumped into the extended torus. (b) Flow structure interior to $5R_{\rm c}=0.1R_{\rm B}$. Matter continues to flow inwards from a geometrically thin disk. This is the case which most closely approximates the standard Shakura-Sunyaev disk.}\label{fig:viscouscold} \end{figure*} \begin{figure*}[htp] \centering \includegraphics[width=.87\textwidth]{f5.eps} \caption{A closeup view of the high inflow case with $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=0.1$. We show logarithmic density contours and mass flux vectors for the cold disk with angular momentum transport, interior to half the centrifugal radius.
Matter accretes from the thin disk as in the standard \cite{shakura73} picture with angular momentum transport due to the MRI.}\label{fig:viscouscoldtiny} \end{figure*} \subsection{Radiative Cooling} \label{sec:radcool} Figure~\ref{fig:accretion_wind} shows the quasi-steady mass accretion rate through the inner boundary, the equatorial and polar wind mass outflow rates through the Bondi radius, and the mass inflow rate through the Bondi radius, all as functions of the parameter $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$. Crosses on the accretion rate curve indicate the Eddington ratios for which our simulations have converged with respect to temporal duration. For $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}\lesssim 0.01$, the Bremsstrahlung cooling is weak and plays little dynamic role in the flow. The flow converges to a low accretion rate solution for $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}\lesssim 10^{-3}$. The non-zero viscosity allows only a small fraction, $\sim 0.4$\%, of the inflowing matter at the Bondi radius to accrete onto the central black hole. The mass inflow rate at the Bondi radius $\dot{M}_{\rm inflow}$ is itself only $\sim 0.29 \dot{M}_{\rm B}$ because the centrifugal torus and outflows significantly impede the inflow (cf Figures 2 and 3). The gas density on the equator is much larger than in the polar region, and the equatorial outflow dominates the outward mass and momentum flux at the Bondi radius. The equatorial outflow nearly matches the mass inflow rate at the Bondi radius, whereas the polar outflow is much less significant and an order of magnitude weaker than either. It is important to note, however, that viscous heating near the centrifugal radius leads to a buildup of thermal energy and can trigger intermittent episodes of convective overturning. The overturning episodes can last for $\sim10 t_B$, with outward mass fluxes over a broad range of polar angles and a significant fraction of the characteristic mass inflow rates.
The filled boxes in Figure~\ref{fig:accretion_wind} represent estimates of the time-averaged polar mass outflows including these overturning episodes, which by definition are not included in our quasi-steady average mass fluxes (see Appendix for more detail). The strength of the radiative cooling increases with the parameter $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$, and in the range $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}\sim 0.01 - 0.1 $ there is an abrupt transition in the properties of the flow to the strong cooling regime. For $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}\sim 0.1$ the inflowing gas is impeded in the equatorial regions by gas pressure in an extended centrifugal torus (cf Figure 4), and matter flows in through the Bondi radius at roughly half the Bondi rate. Nearly the entirety of this inflowing gas is accreted onto the central black hole and the solutions approach the standard \cite{shakura73} picture. Most of the inflowing gas does not have enough energy to travel back out to infinity, and only a very small portion bounces off the centrifugal barrier and is ejected in a polar wind. The persistence of the polar wind is uncertain, and much higher resolution studies with additional physics such as magnetic fields and radiation are necessary to study the detailed properties of polar outflows. We emphasize again that the high Eddington ratio solutions are only quasi-steady (see Section \ref{sec:hotcold}), and the equatorial outflows would be stronger if we integrated for times $\gtrsim 1000 R_{\rm B}/c_{s,\infty}$. \begin{figure*}[thbp] \centering \includegraphics[width=1.\textwidth]{f6.eps} \caption{Mass inflow rate from large radii, equatorial outflow rate, polar outflow rate (boxes are averages that include episodes of convective overturning), and accretion rate as a function of the Eddington parameter $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$. At high $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$ mass inflows at $\sim1/2$ the Bondi rate and nearly all of it is accreted. 
At low $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$, mass again inflows at $\sim1/2$ the Bondi rate, but nearly all of this matter is deflected outwards in disk outflows. There is an abrupt transition between these distinct solutions in the range $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}\sim 0.01 - 0.1$.}\label{fig:accretion_wind} \end{figure*} We have further verified that the low accretion rate solutions depend weakly on our choice of adiabatic index. Figure~\ref{fig:gamma} shows the mass inflow rate from large radii, equatorial and polar outflow rates at $R_{\rm B}$, and accretion rate as a function of $\gamma$. Mass fluxes depend weakly on $\gamma$ in the range $1.4-1.65$, and our ansatz $\gamma=1.5$ should not significantly affect our results. \begin{figure*}[thbp] \centering \includegraphics[width=.5\textwidth]{f7.eps} \caption{Mass inflow rate from large radii, equatorial and polar outflow rates at $R_{\rm B}$, and accretion rate as a function of adiabatic index $\gamma$. All solutions are computed for fixed Eddington ratio $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=10^{-4}$, the hot disk solutions. Mass fluxes have relatively weak dependence on adiabatic index in the range illustrated, and we set $\gamma=1.5$ throughout the rest of this work.}\label{fig:gamma} \end{figure*} The high accretion rate flows from Figure~\ref{fig:accretion_wind} with $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}>0.1$ are cold and form a geometrically thin disk, and the low accretion rate flows with $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}<0.01$ are hot and form vertically extended disks. Figure~8a shows the emission weighted temperature, \begin{equation} \langle T\rangle\equiv \frac{\int \dot{e}_{\rm Brem} T\, dV}{\int\dot{e}_{\rm Brem}\, dV}, \end{equation} as a function of $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$. We show three characteristic temperature curves, from top to bottom averaged over the region interior to the centrifugal barrier, the total region within $0.1R_{\rm B}$, and in the centrifugal torus.
The centrifugal torus is extremely dense and cools to a lower temperature (the floor that we set) than other parts of the flow at high Eddington ratios. The gas interior to the centrifugal barrier is hottest because the gas has penetrated deeper into the gravitational potential well. For low $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}<0.01$ the gas in the inner parts of the flow is hot, with temperature a sizable fraction of the centrifugal temperature \begin{equation} T_{\rm c}\equiv \frac{\mu m_p}{k_B \gamma}\frac{2 GM_{\rm BH}}{ R_{\rm c}}=100T_\infty, \end{equation} where $\mu=0.5$ is the mean molecular weight. Electrons become weakly relativistic at such temperatures, but we neglect the corresponding increase of the Bremsstrahlung energy losses in our computations. We measure the pressure scale height of the disk as the vertical distance at the centrifugal radius over which the pressure drops by a factor of $e$. The pressure scale height of the disk for the hot solutions with vertically extended disks is $H_p/R_{\rm c}\sim 0.75$ (see Figure~8b), and they have $\alpha\sim0.01$ at the centrifugal barrier. The high accretion rate solutions with cold thin disks have pressure scale height at the centrifugal radius $H_p/R_{\rm c}\sim0.15$, covering $\sim 5$ cells in the highest resolution simulation (see Appendix). For a cold thin disk in hydrostatic equilibrium and with angular momentum $j=\sqrt{0.02}R_{\rm B}c_{s,\infty}$ we expect $H_p/R_{\rm c}=c_s/v_\phi=0.07$. Since we fix the kinematic viscosity $\nu$ in our code, our cold disks have $\alpha\sim0.4$ rather than $\alpha\sim0.01$ at the centrifugal barrier. The code has difficulty running the cold solutions at lower viscosity, as there is significantly greater mass deposited near the centrifugal radius before accretion can proceed. The strongly cooled gas cannot energetically reach infinity, however, and we expect it to accrete independently of the strength of the viscosity, as long as it is non-zero.
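The equality $T_{\rm c}=100T_\infty$ above is exact for $R_{\rm c}=0.02R_{\rm B}$, since $2GM_{\rm BH}/R_{\rm c}=100\,c_{s,\infty}^2$ and $T_\infty=\mu m_p c_{s,\infty}^2/(\gamma k_B)$. A minimal check in code units (illustrative only):

```python
# Code units: G*M_BH = 1, c_s_inf = 1, mu*m_p/k_B = 1, so R_B = 1
gamma = 1.5
R_B = 1.0
R_c = 0.02 * R_B

# T_inf from c_s_inf^2 = gamma k_B T_inf / (mu m_p) = 1
T_inf = 1.0 / gamma
# Centrifugal temperature: T_c = (mu m_p / (k_B gamma)) * 2 G M_BH / R_c
T_c = (1.0 / gamma) * (2.0 / R_c)

print(T_c / T_inf)   # 100.0
```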
In any case, radiative transfer becomes important at Eddington ratios of $\sim {\rm few} \times 0.01$, physics that we do not include here. The precise location of the transition between thin and thick disks, as well as the density, luminosity, and optical depth of the thin disks, will depend on these choices. The conical polar outflows, driven by inflowing matter bouncing off the centrifugal barrier, are in general weak, but they exist over a wide range of Eddington ratios. Figure~8c gives the half-opening angle of the polar outflow at the Bondi radius as a function of $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$. The half-opening angle is typically $\theta_{\rm half-open}\sim 5^{\circ}$. We show also the half-angles at which half of the mass and half of the momentum in the polar outflows are ejected, again computed at the Bondi radius. The zero density surface in the adiabatic settling solution lies at an angle of $\sim 3^{\circ}$ at the Bondi radius. Figure~8d shows the mass-flux weighted average wind velocity in the polar outflow, measured at the Bondi radius, as a function of $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$. The wind velocity can be a significant fraction of the centrifugal velocity \begin{equation} v_{\rm c}\equiv (2GM_{\rm BH}/R_{\rm c})^{1/2}, \end{equation} with typical wind velocity $\langle v_{\rm wind}\rangle\sim 0.15\, v_{\rm c}$. This value is similar to that found by \cite{yuan12b}. The outflow opening angle and velocity depend on the radius at which we measure these quantities. The outflow propagates into a constant density and pressure medium in our setup, and at sufficiently large distances the outflow will transfer its momentum and energy to the external medium. Beyond this region the outflow would become subsonic and probably circulate back to join the inflowing gas at intermediate latitudes. We quote values at the Bondi radius to give a quantitative estimate for the outflow properties at a definite radius.
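The $\sim3^{\circ}$ angle quoted above for the zero density surface can be recovered by setting the right hand side of equation (\ref{eq:cs}) to zero at $r=R_{\rm B}$. A short numerical sketch (illustrative, in units $R_{\rm B}=c_{s,\infty}=1$ with $j_\infty^2=0.02$):

```python
import math

gamma = 1.5
j2 = 0.02   # j_inf^2 in units of (R_B c_s_inf)^2

# Right hand side of the sound speed equation at r = R_B,
# as a function of polar angle theta measured from the rotation axis
def cs2(theta):
    return 1.0 + (gamma - 1.0) - 0.5 * (gamma - 1.0) * j2 / math.sin(theta)**2

# Bisect for the zero-density surface (cs^2 = 0) between pole and equator
lo, hi = 1e-6, math.pi / 2
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if cs2(mid) < 0.0:
        lo = mid
    else:
        hi = mid

theta_deg = math.degrees(0.5 * (lo + hi))
print(theta_deg)   # ~3.3 degrees
```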
We now discuss in greater detail the radiative properties of our accretion flows. If a spherically symmetric distribution of gas is in free-fall with velocity \begin{equation} v_{\rm ff}=(2GM_{\rm BH}/r)^{1/2} \end{equation} and number density \begin{equation} n(r)=\frac{\dot{M}_{\rm B}}{4\pi c R_s^2 \eta}(r/R_s)^{-3/2}, \end{equation} where $\eta$ is the mean mass per electron, then the optical depth \begin{equation} \tau\equiv \sigma_T \int n(r) dr \propto \dot{M}_{\rm B}/\dot{M}_{\rm Edd}. \end{equation} We compute the optical depth along three different sightlines with fixed polar angle $\theta=0^{\circ},45^{\circ},90^{\circ}$, starting from the inner boundary at $R_{\rm in}=91R_s$. We find that $\tau$ scales linearly with $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$ (see Figure~8e). The optical depths approach $10^{-2}$ just below $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=10^{-2}$, where the flow switches between the hot, vertically extended disk and the cold, thin disk, and there is a jump by a factor of $\sim 100$ in $\tau_{\rm 90}$ at this transition. Further, making use of the temperature profile of an adiabatic gas in free-fall, \begin{equation} T\simeq T(R_s)(r/R_s)^{-3(\gamma-1)/2}, \end{equation} the total integrated Bremsstrahlung luminosity \begin{align} L_{\rm Brem}/L_{\rm Edd} &\propto \frac{1}{L_{\rm Edd}}\int n(r)^2 T^{1/2} r^2 dr \\ &\propto (\dot{M}_{\rm B}/\dot{M}_{\rm Edd})^2. \end{align} Figure~8f shows the Bremsstrahlung luminosity in our simulations integrated over the regions between $R_{\rm in}=91R_s$ and $0.1R_{\rm B}$, and between $R_{\rm in}=91R_s$ and the centrifugal torus. We obtain the scaling $L_{\rm Brem}/L_{\rm Edd} \propto (\dot{M}_{\rm B}/\dot{M}_{\rm Edd})^2$ for both integrated luminosities, and $L_{\rm Brem}\to 10^{-7}L_{\rm Edd}$ at $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=10^{-2}$. 
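As a sanity check on these scalings, the following short numerical sketch (ours, not part of the simulation code; all dimensional prefactors set to unity) integrates the free-fall profiles above and recovers $\tau \propto \dot{M}_{\rm B}/\dot{M}_{\rm Edd}$ and $L_{\rm Brem} \propto (\dot{M}_{\rm B}/\dot{M}_{\rm Edd})^2$:

```python
# Sketch: check tau ∝ Mdot and L_Brem ∝ Mdot^2 for adiabatic gas in free-fall,
# using n(r) = Mdot * (r/Rs)^(-3/2) and T(r) = (r/Rs)^(-3(gamma-1)/2).
# All dimensional prefactors are set to one; only the Mdot dependence matters.
import numpy as np

gamma = 5.0 / 3.0
r = np.logspace(0.0, 5.0, 4000)          # radius in units of Rs

def trapezoid(f, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def tau(mdot):
    """Electron-scattering optical depth (arbitrary units)."""
    n = mdot * r**-1.5
    return trapezoid(n, r)

def l_brem(mdot):
    """Integrated Bremsstrahlung luminosity (arbitrary units)."""
    n = mdot * r**-1.5
    T = r**(-1.5 * (gamma - 1.0))
    return trapezoid(n**2 * np.sqrt(T) * r**2, r)

# Doubling the accretion rate doubles tau and quadruples L_Brem.
assert np.isclose(tau(2.0) / tau(1.0), 2.0)
assert np.isclose(l_brem(2.0) / l_brem(1.0), 4.0)
```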
As we move the inner boundary at $R_{\rm in}$ closer to the black hole, we expect the luminosity, temperature, and optical depth interior to the centrifugal radius to increase. However, other work that includes general relativistic and magnetohydrodynamic physics and extends down to the horizon for non-spinning black holes suggests that essentially all matter flowing inwards at our inner boundary $R_{\rm in}=91 R_s$ will ultimately accrete \citep{narayan12}. \begin{figure*}[htp] \centering \includegraphics[width=1.\textwidth]{f8.eps}\\ \caption{Six characteristic quantities of our solutions, each as a function of $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$. (a) Emission-weighted temperature, averaged over the region interior to the centrifugal torus, the region interior to $0.1 R_{\rm B}$, and the centrifugal torus. The low accretion rate solutions are hot, with temperature a sizable fraction of the centrifugal temperature, whereas the high accretion rate solutions are cold. (b) Pressure scale height. The hot disks are vertically extended, with $H_p/R_{\rm c}\sim 0.75$, whereas the cold disks are geometrically thin, with $H_p/R_{\rm c}\ll 1$. (c) Polar wind half-opening angle, measured at $R_{\rm B}$, typically of order $5^{\circ}$. We show also the half-angles at which half the mass and momentum in the wind are ejected. (d) Mass-flux weighted polar wind velocity, measured at $R_{\rm B}$, typically a sizable fraction of the centrifugal velocity. (e) Optical depth along three sightlines with fixed $\theta=0^{\circ},45^{\circ},90^{\circ}$. $\tau$ scales linearly with $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$, but there is a jump in $\tau_{90}$ when the disk switches to a dense, cold thin disk. The optical depths approach $10^{-2}$ at $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=10^{-2}$. (f) Total Bremsstrahlung luminosity integrated over the region between the inner boundary and $0.1R_{\rm B}$, and between the inner boundary and the centrifugal torus. 
Luminosity scales as the second power of $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}$ and approaches $10^{-7}L_{\rm Edd}$ at $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}=10^{-2}$.}\label{fig:mdot} \end{figure*} \section{Discussion}\label{sec:discussion} Observations suggest that active galactic nuclei (AGN) in nature form a bimodal distribution, with broad-line AGN having accretion rates $\sim 2$ orders of magnitude higher than narrow-line and lineless AGN \citep{trump09,trump11,kollmeier06,russell12}. The transition between these classes occurs at an Eddington ratio between $0.01$ and $0.1$, in remarkable agreement with our transition between hot and cold flows. Further, the luminosities of the cold and hot disk states suggest a natural feedback loop that can result in switching between the two solutions. From Figure~\ref{fig:mdot} we see that the Bremsstrahlung emission from a cold thin disk can approach $10^{-2}L_{\rm Edd}$ for $\dot{M}_{\rm B}/\dot{M}_{\rm Edd}\sim0.1$. We do not include the radiative forcing or thermal heating effects of this emission on infalling gas in our code \citep{park01,park07,yuan09}, but the Bremsstrahlung luminosity would significantly heat the inflowing matter. The disk could transition to the hot, vertically extended state with low accretion, effectively shutting off the Bremsstrahlung emission. In the absence of irradiating photons, the hot disk could then cool and begin accreting at high rates again, thus forming a cycle. This process may be an important detail in explaining the high and low accretion states of quasars \citep{ciotti01}. The physics of radiative feedback on supermassive black hole accretion flows has been studied numerically in some detail \citep{sazonov05,proga07,nagamine11,ciotti09,ciotti10,kurosawa09,proga08}, and a potentially significant effect here would be to increase the strength of the polar outflows. 
Independently of any radiative feedback loops, our results indicate that temporal variations in the external boundary conditions at infinity would lead to strongly time-dependent accretion flows. Observations are now approaching sufficient accuracy that, for nearby supermassive black holes, both the external boundary conditions surrounding the black hole and the reaction of the flow to these boundary conditions can be measured. Indeed, \cite{gillessen12} recently discovered a gas cloud of only several Earth masses orbiting Sgr ${\rm A}^*$ with a radius of closest approach of $\sim 3100 R_s$. The computations provided here are detailed enough to make concrete predictions concerning the geometrical, dynamical, and thermal properties of accretion disks and outflows of nearby supermassive black holes from currently feasible observations. We do not provide detailed predictions here, but may further explore these issues in a separate paper addressing observational consequences. The lack of magnetic fields is a limitation of our work \citep{pessah08,beckwith09,penna10,tchekhovskoy11,mckinney12}. \cite{proga03b} considered the non-radiative two-dimensional magnetohydrodynamic accretion problem, and found angular momentum transport and accretion driven by the MRI. The two-dimensional structure of the flow determines the accretion properties, with polar outflows quenching the accretion rate. Similar qualitative behavior is present in the Adiabatic Inflow Outflow Solutions \citep{blandford99}, and we also see circulation and polar outflows in our hydrodynamical simulations with weak cooling. In principle the nonlinear properties of the MRI could modify the detailed structure of the flow in the weak cooling regime. As the cooling rate increases, however, the total energy of the fluid determines its fate and the detailed kinetic properties are less important. Another extension of our work relates to the enforcement of reflection symmetry through the equatorial plane. 
The flow does not necessarily need to preserve this symmetry, and in principle there could be asymmetric, unstable modes that seed turbulence and/or convection and modify the properties of the accretion flow. We have explored this idea in some detail, and we find the main difference from the presented results to be that the strong equatorial outflow of high angular momentum gas need no longer be centered exactly on the equator, as in \cite{stone99} and \cite{yuan12a}. Asymmetries in bipolar outflows would have clear observational consequences as well, and the behavior of these outflows in different cooling regimes is an avenue for future research. In any case, future studies of the global structure of accretion flows should consider improvements such as fully 3D simulations, general relativistic corrections, self-gravity, and more detailed heat and radiation transport mechanisms. One heat transport mechanism we have studied in some detail at low Eddington ratios is conduction due to thermal electrons \citep{inogamov10}. At lower Prandtl number, convective overturning of hot gas can be suppressed, and the strength of the inflow and outflow can vary by a factor of a few. The distribution of outflow between polar and equatorial outflow also depends on the strength of the conduction, but the essential feature of mass accretion rates nearly two orders of magnitude below the Bondi rate remains unchanged. \section{Conclusions} \label{sec:conclusions} Our high inflow rate solutions are very similar to the standard and widely adopted \cite{shakura73} results. Our low inflow results are, to zeroth order, the stationary \cite{papaloizou84} solution. To next order in the small assumed viscosity they show patterns of circulation, with outflow almost balancing inflow and the net accretion rate through a geometrically thick disk falling to only the order of $\alpha$ (the dimensionless viscous parameter) times the inflow rate. 
We label this behavior an RRIOS solution, for ``Radiating, Rotating Inflow-Outflow Solution'', as it is a significant generalization of the ADIOS (Adiabatic Inflow-Outflow Solution of \cite{blandford99}) applicable to the common case wherein the angular momentum load of the inflowing gas strongly restricts the level of net accretion at low inflow rates. In this low inflow state, viscous forces can drive an equatorial outflow that propagates out beyond the Bondi radius. Further, entropy released by viscous stresses can accumulate near the centrifugal barrier and lead to episodes of convective overturning. Observationally, the solutions that we find appear to be consistent with the fact that most nearby massive black holes show very low bolometric luminosity, considerably below that anticipated for spherically symmetric (non-rotating) Bondi-Hoyle flows, but also show occasional outbursts indicating a much higher rate of episodic outflow. We thank Jim Stone for many discussions and for help with the ZEUS code. We also thank Daniel Proga, Feng Yuan, Jim Pringle, Sasha Tchekhovskoy, Yan-Fei Jiang, Bob Penna, Eve Ostriker, Patrick Hall, and Anatoly Spitkovsky for useful comments. The simulations presented in this article were performed on the Orbital and Della clusters supported by the PICSciE-OIT High Performance Computing Center and Visualization Laboratory.
\section{Background}\label{Background} This section presents some preliminary material on single- and multi-agent RL. \subsection{Single-agent Reinforcement Learning} An MDP \cite{puterman2014markov} is a tuple $(S,A,\mathbb{T},\rho_0, R)$. $S$ is the set of states, $A$ the set of actions, and $\mathbb{T}:S \times A \times S \rightarrow [0,1]$ encodes $\mathbb{P}(s_{t+1}|s_t,a_t)$, the probability of transitioning to $s_{t+1}$, given current state $s_t$ and action $a_t$. $\rho_0$ is a probability distribution over the initial states. $R : S \times A \rightarrow \mathbb{R}$ denotes the reward that the agent receives when transitioning from $s_t$ while taking action $a_t$. In this paper, we assume $R$ is bounded. The goal for an RL agent \cite{sutton2018reinforcement} is to learn a \emph{policy} $\pi$ that maximizes $J:=\mathbb{E}_{\tau \sim \pi}[\sum_{t=0}^{\infty}\gamma^t R(s_t,a_t)]$. Here, $\gamma$ is a discounting factor, and the expectation is taken over the trajectory $\tau=(s_0,a_0,r_0,s_1,\dots)$ induced by the policy $\pi$. If $\pi: S \rightarrow A$, the policy is \emph{deterministic}. On the other hand, a randomized policy returns a probability distribution over the set of actions, and is denoted $\pi: S \times A \rightarrow [0,1]$. The value of a state-action pair $(s,a)$ under policy $\pi$ is represented by the \emph{Q-function}, written $Q^{\pi}(s,a) = \mathbb{E}_{\tau \sim \pi}[\sum_{t=0}^{\infty}\gamma^t R(s_t,a_t)|s_0=s,a_0=a]$. The Q-function allows us to calculate the state value $V^{\pi}(s) = \mathbb{E}_{a \sim \pi}[Q^{\pi}(s,a)]$. The advantage of a particular action $a$ over other actions at a state $s$ is defined by $A^{\pi}(s,a) := Q^{\pi}(s,a)-V^{\pi}(s)$. \subsection{Stochastic Games for Multi-agent Reinforcement Learning} A \emph{stochastic game} with $n$ players is a tuple $\mathcal{G} = (S, A^1, \dots, A^n, \mathbb{T}, R^1, \dots, R^n, O^1,\dots,O^n,\rho_0, \gamma)$. 
$S$ is the set of states, $A^i$ is the action set of player $i$, $\mathbb{T}: S \times A^1 \times \dots \times A^n \times S \rightarrow [0,1]$ encodes $\mathbb{P}(s_{t+1}|s_t, a^1_t, \dots, a^n_t)$, the probability of transition to state $s_{t+1}$ from $s_t$, given the respective player actions. $R^i: S \times A^1 \times \dots \times A^n \rightarrow \mathbb{R}$ is the reward obtained by agent $i$ when transitioning from $s_t$ while each player takes action $a^i_t$. $O^i$ is the set of observations for agent $i$. At every state, each agent receives an observation correlated with the state: $o^i : S \rightarrow O^i$. $\rho_0$ is a distribution over the initial states, and $\gamma \in [0,1]$ is a discounting factor. A \emph{policy} for agent $i$ is a distribution over actions, defined by $\pi^i: O^i \times A^i \rightarrow [0,1]$. Let $\bm{\pi}:=\{\pi^1, \dots, \pi^n\}$ and $s:=(s^1,\dots,s^n)$. Following \cite{lowe2017multi}, in the simplest case, $s^i = o^i$ for each agent $i$, and we use this for the remainder of the paper. Additional information about states of agents can be included since we compute \emph{centralized} value functions. Let $V_i ^{\bm{\pi}} (s)=V_i (s, \pi^1, \dots, \pi^n):= \mathbb{E}_{\bm{\pi}} [\sum_t \gamma^t R^i_t | s_0 = s, \bm{\pi}]$ and $Q_i ^{\bm{\pi}} (s, a^1,\dots,a^n):=R^i + \gamma \mathbb{E}_{s'}[V_i ^{\bm{\pi}} (s')]$, where $V_i ^{\bm{\pi}} (s)= \mathbb{E}_{\{a^i \sim \pi^i\}_{i=1}^n}[Q_i ^{\bm{\pi}} (s, a^1,\dots,a^n)]$. \subsection{Reward Shaping in Reinforcement Learning} Reward shaping methods augment the environment reward $R$ with an additional reward $F \in \mathbb{R}$, $F< \infty$. This changes the structure of the original MDP $M(=(S,A,\mathbb{T},\rho_0, R))$ to $M'=(S,A,\mathbb{T},\rho_0, R+F)$. The goal is to choose $F$ so that an optimal policy for $M'$, $\pi^{*}_{M'}$, is also optimal for the original MDP $M$. 
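As a concrete illustration of the single-agent definitions above (a toy example of ours, not from the text), the following sketch evaluates $Q^{\pi}$ for a fixed randomized policy on a small random MDP by iterating the Bellman evaluation operator, then recovers $V^{\pi}$ and the advantage $A^{\pi}$:

```python
# Toy policy evaluation on an assumed random tabular MDP.
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma = 4, 2, 0.95
P = rng.random((nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)      # transition kernel T(s'|s,a)
R = rng.random((nS, nA))               # reward R(s,a)
pi = rng.random((nS, nA))
pi /= pi.sum(axis=1, keepdims=True)    # randomized policy pi(a|s)

# Iterate Q(s,a) <- R(s,a) + gamma * E_{s'}[ E_{a'~pi}[ Q(s',a') ] ]
Q = np.zeros((nS, nA))
for _ in range(2000):
    V = (pi * Q).sum(axis=1)           # V(s) = E_{a~pi}[Q(s,a)]
    Q = R + gamma * P @ V              # matmul contracts over s'

V = (pi * Q).sum(axis=1)
A = Q - V[:, None]                     # advantage A(s,a) = Q(s,a) - V(s)

# By definition, the advantage averages to zero under the policy.
assert np.allclose((pi * A).sum(axis=1), 0.0)
```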
\emph{Potential-based reward shaping} (PBRS) schemes were shown to be able to preserve the optimality of deterministic policies in \cite{ng1999policy}. PBRS was used in model-based RL in [20], in episodic RL in \cite{grzes2017reward}, and was extended to planning in partially observable domains in \cite{eck2016potential}. These works focused on the finite-horizon case. In comparison, we consider the infinite horizon, discounted cost setting. The additional reward $F$ in PBRS is defined as a difference of \emph{potentials}, $\phi(\cdot)$. Specifically, $F(s_t,a_t,s_{t+1}) := \gamma \phi(s_{t+1}) - \phi(s_t)$. Then, the Q-function, $Q^{*}_{M}(s,a)$, of the optimal greedy policy for $M$ and the optimal Q-function $Q^{*}_{M'}(s,a)$ for $M'$ are related by: $Q^{*}_{M'}(s,a)= Q^{*}_{M}(s,a) - \phi(s)$. Therefore, the optimal greedy policy is not changed \cite{ng1999policy, devlin2012dynamic}, since: \begin{align*} &\pi^{*}_{M'}(s) \in \arg\!\max_{a\in A}~Q^{*}_{M'}(s,a) \\ &\qquad = \arg\!\max_{a\in A}~\big(Q^{*}_{M}(s,a) - \phi(s)\big) = \arg\!\max_{a\in A}~Q^{*}_{M}(s,a). \end{align*} The authors of \cite{wiewiora2003principled} augmented $\phi(s)$ to include action $a$ as an argument and termed this \emph{potential-based advice} (PBA). They defined two forms of PBA, \emph{look-ahead PBA} and \emph{look-back PBA}, respectively defined by: \begin{align} F(s_{t},a_{t},s_{t+1},a_{t+1}) &= \gamma \phi(s_{t+1},a_{t+1}) - \phi(s_{t},a_{t})\label{lookaheadPBA}\\ F(s_{t},a_{t},s_{t-1},a_{t-1}) &= \phi(s_{t},a_{t}) - {\gamma}^{-1}\phi(s_{t-1},a_{t-1}).\label{lookbackPBA} \end{align} For the look-ahead PBA scheme, the state-action value function for $M$ following policy $\pi$ is given by: \begin{align}\label{PBAq} Q^{\pi}_{M}(s,a) = Q^{\pi}_{M'}(s,a)+\phi(s,a). 
\end{align} The optimal greedy policy for $M$ can be recovered from the optimal state-action value function for $M'$ using the fact \cite{wiewiora2003principled}: \begin{align}\label{PBAp} \pi^*_{M}(s_t) &\in \arg\!\max_{a \in A} \big(Q^{*}_{M'}(s_t,a)+\phi(s_t,a)\big). \end{align} The optimal greedy policy for $M$ using look-back PBA can be recovered similarly. In the multi-agent case, the \emph{shaping advice} for an agent at each time is a function of observations and actions of all agents. The shaping advice is augmented to the environment reward during training, and can take one of two forms, \emph{look-ahead} and \emph{look-back}, respectively given by: \begin{align} &F^i_t(s_t,a^i_t,a^{-i}_t,s_{t+1},a^i_{t+1},a^{-i}_{t+1})\label{LAPBA}\\ &\qquad \qquad :=\gamma \phi_i(s_{t+1},a^i_{t+1},a^{-i}_{t+1}) - \phi_i(s_t,a^i_t,a^{-i}_t)\nonumber \end{align} \begin{align} &F^i_t(s_t,a^i_t,a^{-i}_t,s_{t-1},a^i_{t-1}, a^{-i}_{t-1})\label{LBPBA}\\&\qquad \qquad:=\phi_i(s_t,a^i_t,a^{-i}_t) - \gamma^{-1} \phi_i(s_{t-1},a^i_{t-1}, a^{-i}_{t-1})\nonumber \end{align} We will denote by $\mathcal{G}'$ the $n$ player stochastic game that is identical to $\mathcal{G}$, but with rewards $R'^{i}:=R^i + F^i$ for each $i$. When the value of the potential function is identical for all actions in a particular state, we will term this \textbf{uniform advice}. On the other hand, when the value of the potential function depends on the action taken in a state, we will term this \textbf{nonuniform advice}. We will explicitly distinguish between uniform and nonuniform variants of shaping advice in the single and multi-agent settings subsequently. The shaping advice is a heuristic that uses knowledge of the environment and task, along with information available to the agent \cite{gupta2017cooperative}. For example, in the particle world tasks that we study, each agent has access to positions of other agents and of landmarks, relative to itself. 
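The policy-invariance argument above can be verified numerically. The sketch below (a toy example of ours; the MDP and potential are random, not from the paper) runs Q-value iteration with and without the shaping term $F = \gamma\phi(s') - \phi(s)$ and checks that $Q^{*}_{M'}(s,a) = Q^{*}_{M}(s,a) - \phi(s)$, so the greedy policy is unchanged:

```python
# Potential-based reward shaping preserves the greedy optimal policy:
# shaped rewards R'(s,a,s') = R(s,a) + gamma*phi(s') - phi(s) yield
# Q'*(s,a) = Q*(s,a) - phi(s), leaving argmax_a unchanged.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9
P = rng.random((nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)        # transitions T(s'|s,a)
R = rng.random((nS, nA))                 # rewards R(s,a)
phi = rng.random(nS)                     # arbitrary potential phi(s)

def q_iteration(reward_sas):
    """Q-value iteration; reward_sas[s,a,s'] is the transition reward."""
    Q = np.zeros((nS, nA))
    for _ in range(2000):
        V = Q.max(axis=1)
        Q = (P * (reward_sas + gamma * V[None, None, :])).sum(axis=2)
    return Q

R_sas = np.repeat(R[:, :, None], nS, axis=2)             # R(s,a) for all s'
F_sas = gamma * phi[None, None, :] - phi[:, None, None]  # shaping term

Q_orig = q_iteration(R_sas)
Q_shaped = q_iteration(R_sas + F_sas)

assert np.allclose(Q_shaped, Q_orig - phi[:, None], atol=1e-6)
assert (Q_shaped.argmax(axis=1) == Q_orig.argmax(axis=1)).all()
```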
This is used to design shaping advice for individual agents at each time step. In this paper, we develop a framework for incorporating shaping advice in RL environments with continuous action spaces. Moreover, in the multi-agent case, unlike prior works that emphasize learning joint policies for agents, we learn decentralized policies for each agent. \section{Conclusion}\label{Conclusion} This paper presented a comprehensive framework to incorporate domain knowledge through shaping advice in single- and multi-agent reinforcement learning environments with sparse rewards. The shaping advice for each agent was a heuristic specified as a difference of potential functions, and was augmented to the reward provided by the environment. The modified reward signal provided agents with immediate feedback on the quality of the actions taken at each time-step. In the single-agent case, our algorithm SAS enabled the agent to obtain higher rewards, and to reach a target state more successfully than without a shaping reward. For the multi-agent setting, SAM used the centralized training with decentralized execution paradigm to efficiently learn decentralized policies for each agent that used only their individual local observations. We showed through theoretical analyses and experimental validation that shaping advice provided by SAS and SAM did not distract agents from accomplishing task objectives specified by the environment reward. Using SAS or SAM allowed the agents to obtain a higher average reward in a smaller number of training episodes. Future research will aim to extend the scope of shaping-advice-based techniques. Two research questions are: (i) can we design principled methods based on SAS and SAM to adaptively learn shaping advice, rather than keeping it fixed for the duration of training? (ii) how might such an adaptive procedure affect the sample efficiency and the number of training episodes required? 
We believe that answers to these questions will be a step towards broadening the application of shaping advice to more challenging real-world RL environments. \section{Experiments}\label{Experiments} Our experiments study the incorporation of shaping advice in both single-agent and multi-agent environments. The code for our experimental evaluations is available at \textcolor{blue}{\url{https://github.com/baicenxiao/Shaping-Advice}}. \subsection{Shaping advice for single-agent RL} In the single-agent case, we compare the performance of actor-critic without shaping advice, SAS-Uniform, and SAS-NonUniform. For both SAS-Uniform and SAS-NonUniform, actor-critic \cite{sutton2018reinforcement} is adopted as the base RL algorithm. We consider two setups. The first is a \emph{Puddle-Jump Gridworld} \cite{marom2018belief}, where state and action spaces are discrete. The second environment is the continuous state and action space \emph{mountain car} \cite{brockman2016openai}. In each experiment, we compare the rewards received by the agent when it uses the following schemes: \emph{(i)} actor-critic with sparse rewards (Sparse); \emph{(ii)} SAS-Uniform; and \emph{(iii)} SAS-NonUniform. We apply SAS-NonUniform in its look-back form, since this does not require the estimation of future actions. \subsubsection{Puddle-Jump Gridworld} \begin{figure} \centering \includegraphics[width=1.5 in]{gridworld.pdf} \caption{Schematic of the puddle-jump gridworld. The state of the agent is its position $(x,y)$. The shaded row (row $2$) represents the puddle the agent should jump over. The two blue grids denote states that are indistinguishable to the agent. The agent can choose an action from the set $\{up, down, left, right, jump\}$ at each step.}\label{GridWorld} \end{figure} Figure \ref{GridWorld} depicts the \emph{Puddle-jump gridworld} environment as a $10\times10$ grid. The state $s=(x,y)$ denotes the position of the agent in the grid, where $x,y \in \{0,1,\dots,9\}$. 
The goal of the agent is to navigate from the start state $S= (0,0)$ to the goal $G=(9,9)$. At each step, the agent can choose from actions in the set $A = \{up, down, left, right, jump\}$. There is a \emph{puddle} along row $2$ which the agent should jump over. Further, the states $(9,8)$ and $(8,9)$ (blue squares in Figure \ref{GridWorld}) are indistinguishable to the agent. As a result, any optimal policy for the agent will be a stochastic policy. If the $jump$ action is chosen in rows $3$ or $1$, the agent will land on the other side of the puddle with probability $p_j$, and remain in the same state otherwise. Choosing this action in any other row keeps the agent in its current state. Any action that would move the agent off the grid leaves its state unchanged. The agent receives a reward of $-0.05$ for each action, and $+1000$ for reaching $G$. When using SAS-Uniform, we set $\phi^{U}(s):=u_0$ for states in rows $0$ and $1$, and $\phi^{U}(s):=u_1$ for all other states. We need $u_1>u_0$ to encourage the agent to jump over the puddle. Unlike SAS-Uniform, SAS-NonUniform can provide the agent with more information about the actions it can take. We set $\phi^{NU}(s,a)$ to a `large' value if action $a$ at state $s$ moves the agent closer to the goal in the $\ell_1$ distance, $\big(|9-x|+|9-y| \big)$. We additionally stipulate that $\frac{1}{|A|}\sum_{a\in A}\phi^{NU}(s,a) = \phi^{U}(s)$. That is, the state potential of SAS-NonUniform is the same as that of SAS-Uniform under a uniform distribution over actions. This ensures a fair comparison between SAS-Uniform and SAS-NonUniform. In our experiment, we set the discount factor $\gamma=1$. Since the state and action spaces are not large, we do not use a function approximator for the policy $\pi$. A parameter $\theta_{s,a}$ is associated with each state-action pair, and the policy is computed as: $\pi_{\theta}(a|s)=\frac{\exp(\theta_{s,a})}{\sum_{a\in A}\exp(\theta_{s,a})}$. 
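A minimal sketch of this tabular softmax parameterization (illustrative values of ours; not the paper's code):

```python
# Tabular softmax policy: theta[(s, a)] is a free parameter per
# state-action pair, with states encoded as (x, y) tuples.
import numpy as np

ACTIONS = ["up", "down", "left", "right", "jump"]

def policy(theta, s):
    """pi_theta(a|s) = exp(theta[s,a]) / sum_a' exp(theta[s,a'])."""
    logits = np.array([theta.get((s, a), 0.0) for a in ACTIONS])
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

theta = {((0, 0), "up"): 2.0}              # favour "up" in the start state
p = policy(theta, (0, 0))
assert np.isclose(p.sum(), 1.0)            # valid probability distribution
assert p[0] == p.max()                     # "up" is the most probable action
```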
We fix $\alpha^{\omega}= 0.001$, and $\alpha^{\theta}= 0.2$ for all cases. From Figure \ref{cliff_results}, we observe that the SAS-NonUniform scheme performs best, in that the agent converges to the goal in \textbf{five times} fewer episodes ($25$ vs. $125$ episodes) than A2C without advice (Sparse). When A2C is augmented with SAS-Uniform, convergence to the goal is slightly faster than without any reward shaping. \begin{figure} \centering \includegraphics[width=2.8 in]{cliff_jump.png} \caption{Average rewards in puddle-jump gridworld when jump success probability $p_j=0.2$. The baseline is the advantage actor-critic without advice.}\label{cliff_results} \end{figure} A smaller jump success probability $p_j$ indicates that it is more difficult for the agent to reach the goal state $G$. Figure \ref{cliff_results2} shows that SAS-NonUniform results in the highest reward for a more difficult task (lower $p_j$), when compared with the other reward schemes. \begin{figure} \centering \includegraphics[width=2.8 in]{cliff_jump2.png} \caption{Average reward for the first 100 episodes with respect to the jump success probability $p_j$.}\label{cliff_results2} \end{figure} \subsubsection{Continuous Mountain Car} In the mountain car (MC) environment, an under-powered car in a valley has to drive up a steep hill to reach the goal. To achieve this, the car should learn to accumulate momentum. A schematic for this environment is shown in Figure \ref{MountainCarFig}. \begin{figure} \centering \includegraphics[width=3 in]{MountainCar.png} \caption{Schematic of the mountain-car environment. The agent's state is represented by its position $p_t$ (along the $x$-coordinate) and velocity $v_t$. The action $a_t$ is a force applied to the car. The goal is marked as a flag.}\label{MountainCarFig} \end{figure} This environment has continuous state and action spaces. The state $s=(p,v)$ denotes position $p \in [-1.2,0.6]$ and velocity $v \in [-0.07,0.07]$. 
The action $a \in [-1,+1]$. The continuous action space makes it difficult to use classic value-based methods, such as Q-learning and SARSA. The reward provided by the environment depends on the action and on whether the car reaches the goal. Specifically, once the car reaches the goal it receives $+100$; before that, the reward at time $t$ is $-|a_t|^2$. This reward structure discourages the waste of energy, but it also acts as a barrier to learning, because there is a sub-optimal solution in which the agent remains at the bottom of the valley. Moreover, the reward for reaching the goal is significantly delayed, which makes it difficult for the conventional actor-critic algorithm to learn a good policy. One choice of a potential function while using SAS-Uniform in this environment is $\phi^{U}(s_t):=p_t+P$. Since the position $p_t$ takes values in $[-1.2,0.6]$, the offset $P$ is chosen so that the potential $\phi^{U}(s_t)$ is always positive. We use $P = 2$. An interpretation of this scheme is: \emph{`the state value is larger when the car is horizontally closer to the goal.'} The SAS-NonUniform scheme we use for this environment encourages the car to accumulate momentum---the direction of the action is encouraged to be the same as the current direction of the car's velocity. At the same time, we discourage inaction: the potential advice function is larger when the action pushes along the current velocity. We let $\phi^{NU}(s_t,a_t)=1$ if $a_tv_t >0$, and $\phi^{NU}(s_t,a_t)=0$ otherwise, so $a_t=0$ earns no advice. In our experiments, we set $\gamma= 0.99$. To deal with the continuous state space, we use a neural network (NN) as a function approximator. The policy distribution $\pi_{\theta}(a|s)$ is approximated by a normal distribution, the mean and variance of which are the outputs of the NN. The value function is also represented by an NN. 
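The two potentials and the look-back advice of Equation (\ref{lookbackPBA}) can be sketched as follows (illustrative code of ours, not the paper's implementation):

```python
# Mountain-car potentials: state s = (p, v) with p in [-1.2, 0.6],
# action a in [-1, 1]. The offset P = 2 keeps phi_U positive everywhere.
def phi_uniform(s, P=2.0):
    p, v = s
    return p + P

def phi_nonuniform(s, a):
    _, v = s
    # Advice only for pushing along the current velocity; a = 0 earns none.
    return 1.0 if a * v > 0 else 0.0

def lookback_advice(phi, s, a, s_prev, a_prev, gamma=0.99):
    """Look-back PBA: F = phi(s_t, a_t) - phi(s_{t-1}, a_{t-1}) / gamma."""
    return phi(s, a) - phi(s_prev, a_prev) / gamma

# Accelerating along the direction of motion (after previously pushing
# against it) earns positive advice.
F = lookback_advice(phi_nonuniform, (-0.5, 0.02), 1.0, (-0.5, 0.02), -1.0)
assert F == 1.0
```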
We set $\alpha^{\theta}=1\times 10^{-5}$ and $\alpha^{\omega}=5.6\times 10^{-4}$, and use Adam \cite{Adam} to update the NN parameters. The results we report are averaged over 10 different environment seeds. \begin{figure} \centering \includegraphics[width=2.6 in]{MC.png} \caption{Average rewards for the continuous mountain car problem (averaged over 10 different environment random seeds). The baseline is A2C without advice.}\label{MountainCar_results} \end{figure} Our experiments indicate that the learned policy makes the agent converge to one of two behaviors: reaching the goal, or remaining stationary at the bottom of the valley. We observe that when learning with the base algorithm (A2C with sparse rewards), the agent reached the goal in only $10\%$ of the trials (out of 10 trials), and remained stuck at the sub-optimal solution in the remaining trials. With SAS-Uniform, the agent converged correctly in only $20\%$ of the trials. This is because the agent might have to take an action that moves it away from the goal in order to accumulate momentum; however, the potential function $\phi^{U}(\cdot)$ discourages such actions. SAS-NonUniform performs the best: the agent was able to reach the goal in $100\%$ of the trials. Figure \ref{MountainCar_results} reflects these observations, where we see that SAS-NonUniform results in the agent obtaining the highest rewards. 
\begin{table*}[] \centering \begin{tabular}{|c|c|c|} \hline \textbf{Task} & \textbf{$\phi_i(s_t,a^i_t,a^{-i}_t)$: SAM-Uniform} & \textbf{$\phi_i(s_t,a^i_t,a^{-i}_t)$: SAM-NonUniform} \\ \hline CN & $\alpha_{1}\exp(-\beta_{1}\sum_{j=1}^N dist(s_t^j,L_j))$ & $ -M_{1}\theta_{{a^i_tL_i}}+\alpha_{2}\exp(-\beta_{2}\sum_{j=1}^N dist(s_t^j,L_j))$ \\ \hline PD & $ \alpha_{3}\exp(-\beta_{3}\sum_{j=1}^N dist(s_t^j,L_j))$ & $-M_{2}\theta_{{a^i_tL_i}}+\alpha_{4}\exp(-\beta_{4}\sum_{j=1}^N dist(s_t^j,L_j))$\\ \hline PP & $ \alpha_{5}\exp(-\beta_{5}\sum_{j=1}^N dist(s_t^{pred_j},s_t^{prey}))$ & $-M_{3}\sum_{j=1}^N \theta_{{a^{pred_j}_t s_t^{prey}}}+\alpha_{6}\exp(-\beta_{6}\sum_{j=1}^N dist(s_t^{pred_j},s_t^{prey}))$ \\ \hline \end{tabular}\caption{The shaping advice $F^i_t$ provided by SAM is given by Equation (\ref{LAPBA}) or (\ref{LBPBA}). The table lists the potential functions used in the Cooperative Navigation (CN), Physical Deception (PD), and Predator-Prey (PP) tasks. $L_j$ is the landmark to which agent $j$ is \emph{anchored}. $dist(\cdot,\cdot)$ denotes the Euclidean distance. $\theta_{{a^j_tL_j}} \in [0, \pi]$ is the angle between the direction of the action taken by agent $j$ and the vector directed from its current position to $L_j$. In \emph{SAM-Uniform}, the advice for every action of the agents at a particular $s_t$ is the same. In \emph{SAM-NonUniform}, agents are additionally penalized if their actions are not in the direction of their target. In each case, $F^i_t$ is positive when agents take actions that move them towards their targets.}\label{TableSAM} \end{table*} \subsection{Shaping Advice for Multi-agent RL} This section describes the multi-agent tasks that we evaluate SAM on; these include tasks with cooperative and competitive objectives. In each case, the rewards provided to the agents are sparse, which affects the agents' ability to obtain immediate feedback on the quality of their actions at each time-step. 
Shaping advice provided by SAM is used to guide the agents to obtain higher rewards than in the case without advice. We conclude the section by presenting the results of our experiments evaluating SAM on these tasks. \subsubsection{Task Descriptions and Shaping Advice} \begin{figure}[!h] \centering \includegraphics[width=3.30 in]{ParticleWorld.pdf} \caption{Representations of tasks from the Particle World Environment \cite{lowe2017multi} that we study. (\emph{Left to Right}) Predator-Prey (PP), Cooperative Navigation (CN), and Physical Deception (PD). In PP, predators (red) seek to catch the prey (green) while avoiding obstacles (grey). In CN, agents (green) each seek to navigate to a different landmark ($\times$) and are penalized for collisions with each other. In PD, one of the agents (green) must reach the true landmark (red $\times$), while preventing the adversary from reaching this landmark. In all tasks, rewards are \emph{sparse}. Agents receive a reward or penalty only when a corresponding reachability or collision criterion is satisfied.}\label{PartWorldEnv} \end{figure} We examine three tasks from the \emph{Particle World} environment \cite{lowe2017multi}, where multiple agents share a two-dimensional space with continuous states and actions. An illustration of the tasks is shown in Figure \ref{PartWorldEnv}, and we describe them below. \emph{Predator-Prey}: This task has $N$ predator agents who cooperate to capture $1$ faster-moving prey. Predators are rewarded when one of them collides with the prey, while the prey is penalized for the collision. The reward at all other times is zero. Two landmarks impede the movement of the agents. \emph{Cooperative Navigation}: This task has $N$ agents and $N$ landmarks. Each agent is rewarded $r$ whenever any agent reaches a landmark, and penalized for collisions with other agents. The reward at all other times is zero. Therefore, the maximum reward agents can obtain is $rN$. 
Thus, agents must learn to \emph{cover} the landmarks, and not collide with each other. \emph{Physical Deception}: This task has $1$ adversary, $N$ agents, and $N$ landmarks. Only one landmark is the true target. Agents are rewarded when any one reaches the target, and penalized if the adversary reaches the target. At all other times, the agents get a reward of zero. The adversary also wants to reach the target, but it does not know which landmark is the target landmark. Thus, agents have to learn to split up and cover the landmarks to deceive the adversary. In each environment, SAM provides shaping advice to guide agents to obtain a higher positive reward. This advice is augmented to the reward received from the environment. The advice is a heuristic given by a difference of potential functions (Equation (\ref{LAPBA}) or (\ref{LBPBA})), and only needs to be specified \emph{once} at the start of the training process. In the \emph{Cooperative Navigation} and \emph{Physical Deception} tasks, we \emph{anchor} each agent to a (distinct) landmark. The shaping advice then depends on the distance of an agent to the landmark it is anchored to. Although distances computed in this manner depend on the order in which the agents and landmarks are chosen, we observe that this \emph{empirically} works across multiple training episodes where positions of landmarks and initial positions of agents are generated randomly. The advice provided by SAM is positive when agents move closer to the landmarks they are anchored to. In the absence of anchoring, they may get distracted and move towards different landmarks at different time steps. Anchoring results in agents learning to cover landmarks faster. We consider two variants of advice for each task. In \textbf{\emph{SAM-Uniform}}, the advice for every action taken is the same. In \textbf{\emph{SAM-NonUniform}}, a higher weight is given to some `good' actions over others for each $s_t$.
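To make the Cooperative Navigation potentials from Table \ref{TableSAM} concrete, they can be sketched in a few lines of Python. The constants and helper names (\texttt{dist}, \texttt{angle\_to\_target}) below are illustrative assumptions, not part of SAM itself:

```python
import math

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_to_target(action, agent_pos, landmark):
    """Angle in [0, pi] between the action direction and the
    agent-to-landmark vector (the theta term in the table)."""
    v = (landmark[0] - agent_pos[0], landmark[1] - agent_pos[1])
    na, nv = math.hypot(*action), math.hypot(*v)
    if na == 0.0 or nv == 0.0:
        return 0.0
    cos = (action[0] * v[0] + action[1] * v[1]) / (na * nv)
    return math.acos(max(-1.0, min(1.0, cos)))

def phi_cn_uniform(positions, landmarks, alpha=100.0, beta=1.0):
    """SAM-Uniform CN potential: identical for every action at s_t."""
    total = sum(dist(p, l) for p, l in zip(positions, landmarks))
    return alpha * math.exp(-beta * total)

def phi_cn_nonuniform(positions, landmarks, i, action_i,
                      M=1.0, alpha=100.0, beta=1.0):
    """SAM-NonUniform CN potential: additionally penalizes agent i when
    its action does not point at the landmark it is anchored to."""
    theta = angle_to_target(action_i, positions[i], landmarks[i])
    return -M * theta + phi_cn_uniform(positions, landmarks, alpha, beta)
```

The shaping advice $F^i_t$ is then a discounted difference of these potentials at consecutive time-steps, per Equation (\ref{LAPBA}) or (\ref{LBPBA}).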
We enumerate the advice for each task in Table \ref{TableSAM}. We use MADDPG as the base RL algorithm \cite{lowe2017multi}. We compare the performance of agents trained with SAM (SAM-Uniform or SAM-NonUniform) to the performance of agents trained using the sparse reward from the environment. We also compare SAM with a state-of-the-art reward redistribution technique introduced in \cite{gangwani2020learning} called Iterative Relative Credit Assignment (IRCR). \subsubsection{Implementation details} When we tested SAM-NonUniform, we used look-back advice following Equation (\ref{LBPBA}). This was done to avoid estimating a `future' action, i.e., $a^i_{t+1}$, at each time-step (since the replay buffer contains tuples of the form $(s_t, a^1_t,\dots,a^n_t, r^1_t,\dots,r^n_t, s_{t+1})$). Noisy estimates of $a^i_{t+1}$ can cause oscillations in the rewards obtained by the agent. We adopt the same hyperparameter values and network architectures as used in \cite{lowe2017multi}. We let $\Gamma_i$ in \emph{Line 18} of the SAM Algorithm be the identity matrix. We list the values of $\alpha_{\cdot}, \beta_{\cdot}, M_{\cdot}$ (from Table \ref{TableSAM}) that were used in our experiments.
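To illustrate why look-back advice suits the replay-buffer setting, the advice can be computed purely from transitions that are already stored. A minimal sketch, assuming the look-back form $\phi(s_t,a_t)-\gamma^{-1}\phi(s_{t-1},a_{t-1})$ that appears in the SAS/SAM updates, with a generic potential function \texttt{phi(s, a)}:

```python
def lookback_advice(phi, s_prev, a_prev, s_curr, a_curr, gamma):
    """Look-back shaping advice for one transition:
    F_t = phi(s_t, a_t) - (1/gamma) * phi(s_{t-1}, a_{t-1}).
    Both state-action pairs are already in the replay buffer, so no
    future action a_{t+1} has to be estimated."""
    return phi(s_curr, a_curr) - (1.0 / gamma) * phi(s_prev, a_prev)

# Toy check: with a state-only potential phi(s, a) = s and gamma = 0.5,
# the advice at time t is s_t - 2 * s_{t-1}.
phi = lambda s, a: float(s)
advice = [lookback_advice(phi, t - 1, None, t, None, 0.5) for t in (1, 2, 3)]
# advice == [1.0, 0.0, -1.0]
```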
\textbf{Sparse Reward Setting}: For results reported in Figures \ref{BarPlotScore} and \ref{FigGraphs}, we use the following parameters for the shaping advice: \begin{center} \begin{tabular}{l l l l} CN ($N=3$) & $\alpha_1 = \alpha_2 = 100$ & $\beta_1 = \beta_2 =1$ & $M_1 = 1$ \\ CN ($N=6$) & $\alpha_1 = \alpha_2 = 1000$ & $\beta_1 = \beta_2 =1$ & $M_1 = 10$ \\ PD ($N=2$) & $\alpha_3 = \alpha_4 = 500$ & $\beta_3 = \beta_4 = 1$ & $M_2 = 1$\\ PD ($N=4$) & $\alpha_3 = \alpha_4 = 500$ & $\beta_3 = \beta_4 = 1$ & $M_2 = 10$\\ PP ($N=3$)& $\alpha_5 = \alpha_6 = 100$ & $\beta_5 = \beta_6 = 1$ & $M_3 = 1$\\ \end{tabular} \end{center} \textbf{Other forms of shaping rewards}: In the three tasks that we evaluated, we observed that a `linear' distance-based advice (i.e., advice of the form $\phi_i(s_t,a^i_t,a^{-i}_t) :=\sum_j dist(s_t^j,L_j)$) did not work. From Equations (\ref{LAPBA}) and (\ref{LBPBA}), with this form of advice, an agent gets the same additional reward for taking a step towards the target regardless of its distance to the target. For example, an agent that takes one step towards the target from a state $100$ steps away receives the same shaping advice as an agent that takes one step towards the target from a state $50$ steps away. \subsubsection{Results} \begin{figure}[!h] \centering \includegraphics[width=3.20 in]{bar_plot_v1.pdf} \caption{Comparison between SAM (SAM-NonUniform (blue) or SAM-Uniform (purple)) augmented to MADDPG and classical MADDPG policies (orange) on cooperative and competitive tasks with sparse rewards. The \textbf{score} for a task is the average agent reward in cooperative tasks, and the \emph{average agent advantage} ($=$ agent reward $-$ adversary reward) in competitive tasks. Each bar cluster shows normalized $0-1$ scores, averaged over the last $1000$ training episodes. Higher score is better.
SAM-NonUniform outperforms SAM-Uniform and the classical MADDPG baseline by a larger margin when there are more agents in the cooperative navigation and physical deception tasks. The scores of IRCR are not shown here, since IRCR performs consistently worse than the other approaches in these three tasks. }\label{BarPlotScore} \end{figure} \begin{figure*} \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{simple_spread_N6.png} \caption{Cooperative Navigation ($N=6$)} \label{fig:1a} \end{subfigure}% \hspace*{\fill} \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{physical_deception_N5.png} \caption{Physical Deception ($N=4$)} \label{fig:1b} \end{subfigure}% \hspace*{\fill} \begin{subfigure}{0.31\textwidth} \includegraphics[width=\linewidth]{simple_tag.png} \caption{Predator-Prey ($3$ pred., $1$ prey)} \label{fig:1c} \end{subfigure} \caption{Average and variance of scores when agents use SAM-NonUniform (blue), SAM-Uniform (purple), IRCR (green) and sparse rewards (orange). SAM-NonUniform results in the highest average scores. SAM-Uniform compares favorably, and both significantly outperform agents trained using only sparse rewards. IRCR is not able to guide agents to obtain higher rewards in any of the three tasks.} \label{FigGraphs} \end{figure*} Figure \ref{BarPlotScore} shows $0-1$ normalized scores, averaged over the last 1000 training episodes, comparing SAM augmented to MADDPG and classical MADDPG policies. The \textbf{score} for a task is the average agent reward in cooperative tasks, and the \emph{average agent advantage} ($=$ agent $-$ adversary reward) in competitive tasks \cite{wen2019probabilistic}. Agents equipped with \emph{SAM-NonUniform} have the best performance. This is because SAM-NonUniform provides specific feedback on the quality of agents' actions. \emph{SAM-Uniform} also performs well in these tasks.
SAM-NonUniform outperforms SAM-Uniform and the classical MADDPG baseline by a significant margin when there are more agents. In cooperative navigation, when rewards are sparse, the agents are not able to learn policies that allow them to even partially cover the landmarks using the MADDPG baseline. In comparison, SAM guides agents to learn to adapt to each others' policies, and cover all the landmarks. SAM-NonUniform results in much higher rewards than other methods in the complex task with $N=6$ agents. We observe a similar phenomenon in physical deception, where SAM guides agents to learn policies to cover the landmarks. This behavior is useful in preventing the adversary from moving towards the true landmark, thereby resulting in lower rewards for the adversary. Therefore, agent advantage is higher with SAM. In the predator-prey task, we see that the performance of SAM is comparable to MADDPG. We believe that this is because this task might have a well-defined and unique \emph{equilibrium} to which agent policies eventually converge. Figure \ref{FigGraphs} shows the average and variance of the \textbf{score} during different stages of the training process. The trends mirror those in Figure \ref{BarPlotScore}: in terms of scores averaged over the last $1000$ training episodes, agents equipped with \emph{SAM-NonUniform} have the best performance, and \emph{SAM-Uniform} also performs well. In cooperative navigation, agents that use only the sparse rewards from the environment are not able to learn policies that even partially cover the landmarks, whereas SAM guides agents to adapt to each others' policies and cover all the landmarks.
A similar phenomenon is observed in physical deception, where SAM guides agents to learn policies to cover the landmarks. This behavior is useful in preventing the adversary from moving towards the true landmark, thereby resulting in lower final rewards for the adversary. We additionally compare the performance of SAM with a technique called IRCR that was introduced in \cite{gangwani2020learning}. We observe that agents using IRCR receive the lowest scores in all three tasks. We believe that a possible reason for this is that in each training episode, IRCR accumulates rewards until the end of the episode, and then uniformly redistributes the accumulated reward along the length of the episode. A consequence of this is that an agent may find it more difficult to identify the time-step at which it reaches a landmark or at which a collision occurs. For example, in the \emph{Predator-Prey} task, suppose that the length of an episode is $T_{ep}$. Consider a scenario where one of the predators collides with the prey at a time $T < T_{ep}$, and subsequently moves away from the prey. When IRCR is applied to this scenario, the redistributed reward at time $T$ will be the same as that at other time steps before $T_{ep}$. This makes it difficult to identify the critical time-steps at which collisions between agents happen. The authors of \cite{lowe2017multi} observed that agent policies being unable to adapt to each other in competitive environments resulted in oscillations in rewards. Figure \ref{fig:1b} indicates that SAM is able to alleviate this problem. Policies learned by agents using SAM in the physical deception task result in much smaller oscillations in the rewards than when using sparse rewards alone. \section{Introduction} Reinforcement learning (RL) is a framework that allows agents to complete tasks in an environment, even when a model of the environment is not known \cite{sutton2018reinforcement}.
An RL agent `learns' to complete tasks by maximizing an expected long-term reward, where the reward signal is provided by the environment. RL algorithms have been successfully implemented in many fields, including games \cite{mnih2015human, silver2016mastering, tampuu2017multiagent}, robotics \cite{lillicrap2016continuous}, autonomous vehicle coordination \cite{sallab2017deep}, analysis of social dilemmas \cite{leibo2017multi}, and resource allocation in cellular networks \cite{yin2021resource}. When the environment is not known, the availability of immediate feedback on the quality of actions taken at each time-step is critical to learning behaviors that successfully complete a task. This is termed \emph{credit assignment} \cite{sutton2018reinforcement}. When reward signals provided by the environment are sparse, it becomes difficult to perform effective credit assignment at intermediate steps of the learning process. One approach that has been shown to improve learning when rewards are sparse is \emph{reward shaping} \cite{agogino2008analyzing, devlin2011empirical, devlin2014potential}. Reward shaping techniques augment the reward provided by the environment with an additional \emph{shaping reward}. The shaping reward can be designed to be \emph{dense} (i.e., not sparse), and agents learn policies (which action to take in a particular state) using the augmented reward. Any additional reward can distract an agent from completing a task specified by the reward provided by the environment, and therefore needs to be provided in a systematic manner \cite{randlov1998learning}. In this paper, we term the additional reward given to agents at each time-step \emph{\textbf{shaping advice}}. The shaping advice is specified by a difference of potential functions at consecutive time-steps, where each potential function depends on observations and actions of the agents.
Potential functions satisfy the critical property that the total potential accumulated when starting from a state and returning to the same state is zero \cite{ng1999policy}. This ensures that an agent will not be distracted from completing a task specified by the reward provided by the environment. There are additional challenges when there are multiple RL agents. In such a setting, each agent will have to interact not only with its environment, but also with other agents. As behaviors of agents evolve in the environment, the environment becomes non-stationary from the perspective of any single agent. Thus, agents that independently learn behaviors by assuming other agents to be part of the environment can end up in unstable learning regimes \cite{foerster2017stabilising, matignon2012independent, tan1993multi}. When multiple trained agents are deployed independently, or when communication among agents is costly, the agents need to be able to learn decentralized policies. Decentralized policies can be efficiently learned by adopting the \emph{centralized training with decentralized execution (CTDE)} paradigm, first introduced in \cite{lowe2017multi}. An agent using CTDE can make use of information about other agents' observations and actions to aid its own learning during training, but will have to take decisions independently at test-time. However, the ability of an agent to learn decentralized policies can be affected if reward signals from the environment are sparse. Reward shaping techniques that use potential functions satisfy an additional property that the identity of the optimal policies with and without the shaping reward is the same \cite{ng1999policy}. The state of the art in potential-based reward shaping for RL \cite{agogino2008analyzing, devlin2011empirical, devlin2014potential} is focused on environments with finite and discrete action spaces.
To the best of our knowledge, adapting solution techniques proposed in the above papers to more general settings with continuous action spaces has not been explored. Moreover, in the multi-agent case, these works emphasize learning joint policies for agents. In comparison, we propose to learn decentralized policies for each agent, which will enable application of our method to environments with large numbers of agents. In this paper, we develop a comprehensive framework to enable effective credit assignment in RL environments with sparse rewards. The framework incorporates information about the task and environment to define \emph{shaping advice}. We term our algorithms \emph{Shaping Advice in Single-agent reinforcement learning (SAS)} when there is only a single agent, and \emph{Shaping Advice in Multi-agent reinforcement learning (SAM)}, when there are multiple agents. The shaping advice in SAS and SAM can be interpreted as \emph{domain knowledge} that aids credit assignment \cite{mannion2018reward}. This advice needs to be specified only once at the start of the training process. SAS and SAM can be applied in environments with continuous or discrete state and action spaces. We demonstrate that both algorithms do not distract agents from completing tasks specified by rewards from the environment. Specifically, our contributions are: \begin{itemize} \item We introduce SAS and SAM to incorporate potential-based shaping advice in single- and multi-agent deep RL environments with continuous action spaces. \item We demonstrate that shaping advice provided by SAS and SAM does not distract agents from completing tasks specified by the environment reward. We accomplish this by theoretically establishing that convergence of policy gradients and values when using the shaping advice implies convergence of these quantities in the absence of the shaping advice. \item We verify our theoretical results through extensive experimental evaluations.
Specifically, \begin{itemize} \item we evaluate SAS on two environments with sparse rewards: the puddle-jump gridworld and continuous mountain car; \item we evaluate SAM on three tasks in the multi-agent Particle World environment \cite{lowe2017multi}. All these tasks have sparse rewards. We show that using shaping advice allows agents to learn policies to complete the tasks faster, and obtain higher rewards than: i) using sparse rewards alone, and ii) a state-of-the-art reward redistribution technique from \cite{gangwani2020learning}. \end{itemize} \end{itemize} Compared to a preliminary version that appeared in \cite{xiao2019potential}, in this paper, we develop a comprehensive framework for providing shaping advice in both single- and multi-agent RL. We provide detailed theoretical analyses and experimental evaluations for each setting. The remainder of this paper is organized as follows: Section \ref{RelatedWork} presents related work and Section \ref{Background} provides an introduction to single- and multi-agent RL and potential-based shaping advice. Sections \ref{Methods1}--\ref{Experiments} present the main contributions of this paper. We provide details on SAS and SAM and present theoretical analysis of their convergence in Sections \ref{Methods1} and \ref{Methods2}. Experiments validating the use of SAS and SAM are reported in Section \ref{Experiments}, and Section \ref{Conclusion} concludes the paper. \section{Shaping Advice in Single-Agent RL}\label{Methods1} This section presents our results when potential-based shaping advice is used to learn stochastic policies in single-agent RL. We term this \textbf{\emph{Shaping Advice in Single-agent RL (SAS)}}. This generalizes and extends the use of potential-based methods in the literature, which has hitherto focused on augmenting value-based methods to learn optimal deterministic policies.
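Before presenting the two variants, the key property underlying uniform advice can be checked numerically: augmenting the reward with $F_t=\gamma\phi(s_{t+1})-\phi(s_t)$ shifts the discounted return of every trajectory by the same constant $-\phi(s_0)$, so the maximizing policy is unchanged (Proposition \ref{PBRSResult}). A minimal Python sketch, with arbitrary rewards and potential values:

```python
import random

def discounted_return(rewards, gamma):
    """Plain discounted return of one trajectory."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def shaped_return(rewards, phis, gamma):
    """Return when each reward is augmented by F_t = gamma*phi(s_{t+1}) - phi(s_t)."""
    return sum(gamma ** t * (r + gamma * phis[t + 1] - phis[t])
               for t, r in enumerate(rewards))

random.seed(0)
gamma = 0.9
rewards = [random.uniform(-1.0, 1.0) for _ in range(50)]
# one potential value per visited state; the terminal potential is zero
phis = [random.uniform(-5.0, 5.0) for _ in range(50)] + [0.0]

shift = shaped_return(rewards, phis, gamma) - discounted_return(rewards, gamma)
# the telescoping sum leaves only -phi(s_0), independent of the rewards
# (and hence of the policy that generated the trajectory)
assert abs(shift + phis[0]) < 1e-9
```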
We consider two variants: (i) \emph{SAS-Uniform}, where the advice for all actions at a particular state is the same, and (ii) \emph{SAS-NonUniform}, where a higher weight might be given to some `good' actions over others at each state. We first prove that the optimality of stochastic policies is preserved when using \emph{SAS-Uniform}. Then, we describe an approach to integrate \emph{SAS-NonUniform} into policy-gradient algorithms to enable effective learning of stochastic policies in single-agent RL. \subsection{Uniform Advice}\label{PBRSSection} The following result shows that \emph{SAS-Uniform} preserves optimality even when the optimal policy is stochastic. \begin{prop}\label{PBRSResult} Let ${\pi}_M^*$ denote the optimal policy for an MDP $M$, and suppose that ${\pi}_M^*$ is a stochastic policy. Let ${\pi}_{M'}^*$ denote the optimal policy for the MDP $M'$ whose reward is augmented by $F:=\gamma\phi(s_{t+1})-\phi(s_t)$. Then, \emph{SAS-Uniform} preserves the optimality of stochastic policies; i.e., ${\pi}_M^* = {\pi}_{M'}^*$. \end{prop} \begin{proof} The goal in the original MDP $M$ is to find a policy $\pi$ that maximizes the expected discounted return: \begin{align}\label{PBRS_M} {\pi}_M^* &= \arg\!\max_{\pi} \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty}\gamma^t R(s_t,a_t)\right].
\end{align} When using \emph{SAS-Uniform}, the goal is to determine a policy so that: \begin{align} &{\pi}_{M'}^* = \arg\!\max_{\pi} \mathbb{E}_{\tau \sim \pi}\big[\sum_{t=0}^{\infty}\gamma^t \big(R(s_t,a_t)+ F(s_t,a_t,s_{t+1},a_{t+1})\big)\big] \nonumber\\ &= \arg\!\max_{\pi} \mathbb{E}_{\tau \sim \pi}\big[\sum_{t=0}^{\infty}\gamma^t \big(R(s_t,a_t)+\gamma\phi(s_{t+1})-\phi(s_t)\big)\big] \nonumber\\ &= \arg\!\max_{\pi} \bigg[\mathbb{E}_{\tau \sim \pi}\big[\sum_{t=0}^{\infty}\gamma^t R(s_t,a_t)\big]-\mathbb{E}_{\tau \sim \pi}\big[\phi(s_0)\big]\bigg]\nonumber\\ &=\arg\!\max_{\pi}~ \mathbb{E}_{\tau \sim \pi}\big[\sum_{t=0}^{\infty}\gamma^t R(s_t,a_t)\big]-\int_{s}\rho_0(s)\phi(s)\text{d}s.\label{PBRS_M'} \end{align} The last term in Equation (\ref{PBRS_M'}) is a constant, and does not affect the identity of the maximizing policy of (\ref{PBRS_M}). \end{proof} \subsection{Nonuniform Advice}\label{PBASection} Although \emph{SAS-Uniform} preserves optimality of policies in several settings, it suffers from the drawback of being unable to encode richer information, such as desired relations between states and actions. The authors of \cite{wiewiora2003principled} proposed \emph{potential-based nonuniform advice}, a scheme that augmented the potential function by including actions as an argument together with states. In this section, we show that when using \emph{SAS-NonUniform}, recovering the optimal policy can be difficult if the optimal policy is stochastic. To overcome this barrier, we propose a novel way to impart prior information about the environment in order to use \emph{SAS-NonUniform} to learn a stochastic policy. \subsubsection{Stochastic policy learning with nonuniform advice} Assume that we can compute $Q^{*}_{M}(s,a)$, the optimal value for state-action pair $(s,a)$ in MDP $M$. The optimal stochastic policy for $M$ is $\pi^*_M = \arg\!\max_{\pi}\mathbb{E}_{\tau \sim \pi}\big[Q^{*}_{M}(s,a)\big]$. From Eqn.
(\ref{PBAq}), the optimal stochastic policy for the modified MDP $M'$ is given by $\pi^*_{M'} = \arg\!\max_{\pi}\mathbb{E}_{\tau \sim \pi}\big[Q^{*}_{M}(s,a)-\phi(s,a)\big]$. In general, $\pi^*_M \neq \pi^*_{M'}$. If the optimal policy is deterministic, then the policy for $M$ can be recovered easily from that for $M'$ using Eqn. (\ref{PBAp}). However, for stochastic optimal policies, we will need to average over trajectories of the MDP, which makes it difficult to recover the optimal policy for $M$ from that of $M'$. In what follows, we propose a novel way to take advantage of \emph{SAS-NonUniform} in the policy gradient framework in order to directly learn a stochastic policy. \subsubsection{Imparting nonuniform advice in policy gradient} Let $J_M(\theta)$ denote the value of a parameterized policy $\pi_{\theta}$ in MDP $M$. That is, $J_M(\theta) = \mathbb{E}_{\tau \sim \pi_{\theta}}\left[\sum_{t=0}^{\infty}\gamma^t R(s_t,a_t)\right]$. Following the policy gradient theorem \cite{sutton2018reinforcement}, and defining $G(s_t,a_t):=\sum_{i=t}^{\infty}\gamma^{i-t}r_i$, the gradient of $J_M(\theta)$ with respect to $\theta$ is: \begin{align}\label{REINFORCE} \nabla_{\theta}J_M(\theta) = \mathbb{E}_{\tau \sim \pi_{\theta}}\big[G(s_t,a_t)\nabla_{\theta}\log\pi_{\theta}(a_t|s_t)\big]. \end{align} Then, $\mathbb{E}_{\tau \sim\pi_{\theta}}\big[G(s_t,a_t)\big]=Q^{\pi_{\theta}}(s_t,a_t)$. REINFORCE \cite{sutton2018reinforcement} is a policy gradient method that uses Monte Carlo simulation to learn $\theta$, where the parameter update is performed only at the end of an episode (a trajectory of length $T$). If we apply the look-ahead variant of \emph{SAS-NonUniform} as in Equation (\ref{lookaheadPBA}) along with REINFORCE, then the total return from time $t$ is given by: \begin{align} \begin{split} G^{a}(s_t,a_t)&=\sum_{i=t}^{T}\gamma^{i-t}r_i+\gamma^{T-t}\phi(s_T,a_T)-\phi(s_t,a_t) \\ &=G(s_t,a_t)+\gamma^{T-t}\phi(s_T,a_T)-\phi(s_t,a_t).
\end{split} \end{align} Notice that if $G^{a}(s_t,a_t)$ is used in Eqn. (\ref{REINFORCE}) instead of $G(s_t,a_t)$, then the policy gradient is biased. One way to resolve the problem is to add the difference $-\gamma^{T-t}\phi(s_T,a_T)+\phi(s_t,a_t)$ to $G^{a}(s_t,a_t)$. However, this makes the learning process identical to the original REINFORCE and nonuniform advice is not used. When using nonuniform advice in a policy gradient setup, it is important to add the term $\phi(s,a)$ so that the policy gradient is unbiased, and also leverage the advantage that nonuniform advice offers during learning. \subsection{Analysis and Algorithm} To integrate \emph{SAS-NonUniform} with policy gradient-based techniques, we turn to temporal difference (TD) methods. TD methods update estimates of the accumulated return based in part on other learned estimates, before the end of an episode. A popular TD-based policy gradient method is the actor-critic framework \cite{sutton2018reinforcement}. In this setup, after performing action $a_t$ at step $t$, the accumulated return $G(s_t,a_t)$ is estimated by $Q_M(s_t,a_t)$ which, in turn, is estimated by $r_t+\gamma V_M(s_{t+1})$. It should be noted that the estimates are unbiased. When the reward is augmented with look-ahead \emph{SAS-NonUniform}, the accumulated return is changed to $Q_{M'}(s_t,a_t)$, which is estimated by $r_t+\gamma\phi(s_{t+1},a_{t+1})-\phi(s_t,a_t)+\gamma V_{M'}(s_{t+1})$. From Eqn. (\ref{PBAq}), at steady state, $Q_M(s_t,a_t)-Q_{M'}(s_t,a_t)=\phi(s_t,a_t)$. Intuitively, to keep policy gradient unbiased when augmented with look-ahead nonuniform advice, we can add $\phi(s_t,a_t)$ at each training step. In other words, we can use $r_t+\gamma\phi(s_{t+1},a_{t+1})+\gamma V_{M'}(s_{t+1})$ as the estimated return. It should be noted that before the policy reaches steady state, adding $\phi(s_t,a_t)$ at each time step will not cancel out the effect of nonuniform advice. 
This is unlike in REINFORCE, where the addition of this term negates the effect of using nonuniform advice. In the advantage actor-critic, an advantage term is used instead of the Q-function in order to reduce the variance of the estimated policy gradient. In this case also, the potential term $\phi(s_t,a_t)$ can be added in order to keep the policy gradient unbiased. \begin{algorithm} \small \caption{SAS: Shaping Advice in Single-agent RL} \begin{algorithmic}[1] \label{Algo1} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \REQUIRE Differentiable policy and value functions $\pi_{\theta}(a|s)$, $V_{\omega}(s)$; shaping advice $\phi(s,a)$; maximum episode length $T_{max}$. \textbf{Initialization}: \\ policy parameter $\theta$, value parameter $\omega$, learning rates $\alpha^{\theta}$ and $\alpha^{\omega}$, discount factor $\gamma$. \STATE{$T = 0$} \REPEAT \STATE Initialize state $s_0$, $t \leftarrow 0$ \REPEAT \STATE Sample action $a_t \sim \pi_{\theta}(\cdot|s_t)$ \STATE Take action $a_t$, observe reward $r_t$, next state $s_{t+1}$ \STATE $R= \begin{cases} 0, & \text{if } s_{t+1} \text{ is a terminal state}, \\ V_{\omega}(s_{t+1}), & \text{otherwise.} \end{cases}$ \IF{use look-ahead advice} \STATE $\delta_t=r_t + \gamma\phi(s_{t+1},a_{t+1})-\phi(s_t,a_t)+\gamma R - V_{\omega}(s_t)$ \STATE Update $\theta \leftarrow \theta + \alpha^{\theta} \big(\delta_t+\phi(s_t,a_t)\big)\nabla_{\theta}\log\pi_{\theta}(a_t|s_t)$ \ELSE \STATE $\delta_t = r_t + \phi(s_{t},a_{t})-\gamma^{-1}\phi(s_{t-1},a_{t-1})+\gamma R - V_{\omega}(s_t)$ \STATE Update $\theta \leftarrow \theta + \alpha^{\theta} \delta_t\nabla_{\theta}\log\pi_{\theta}(a_t|s_t)$ \ENDIF \STATE Update $\omega \leftarrow \omega - \alpha^{\omega} \delta_t\nabla_{\omega}V_{\omega}(s_t)$ \UNTIL{$s_{t+1}$ is a terminal state} \STATE $T \leftarrow T+1$ \UNTIL{$T>T_{max}$} \end{algorithmic} \end{algorithm} A procedure for augmenting the advantage actor-critic with \emph{SAS-NonUniform} is
presented in Algorithm \ref{Algo1}. $\alpha^{\theta}$ and $\alpha^{\omega}$ denote learning rates for the actor and critic respectively. When applying look-ahead nonuniform advice, at training step $t$, the parameter $\omega$ of the critic $V_{\omega}(s)$ is updated as follows: \begin{align} \delta^a_t &= r_t + \gamma\phi(s_{t+1},a_{t+1})-\phi(s_t,a_t)+\gamma V_{\omega}(s_{t+1}) - V_{\omega}(s_t)\nonumber \\ \omega &= \omega - \alpha^{\omega} \delta^a_t\nabla_{\omega}V_{\omega}(s_t),\nonumber \end{align} where $\delta^a_t$ is the estimation error of the state value after receiving the new reward $[r_t + \gamma\phi(s_{t+1},a_{t+1})-\phi(s_t,a_t)]$ at step $t$. To ensure an unbiased estimate of the policy gradient, the potential term $\phi(s_t,a_t)$ is added while updating $\theta$ as \cite{xiao2019potential}: \begin{align} \theta = \theta + \alpha^{\theta} \big(\delta^a_t+\phi(s_t,a_t)\big)\nabla_{\theta}\log\pi_{\theta}(a_t|s_t).\nonumber \end{align} A similar method can be used when learning with look-back nonuniform advice. In this case, the critic and the policy parameter are updated as follows: \begin{align} \delta^b_t &= r_t + \phi(s_{t},a_{t})-\gamma^{-1}\phi(s_{t-1},a_{t-1})+\gamma V_{\omega}(s_{t+1}) - V_{\omega}(s_t)\nonumber\\ \omega &= \omega - \alpha^{\omega} \delta^b_t\nabla_{\omega}V_{\omega}(s_t),\nonumber\\ \theta &= \theta + \alpha^{\theta} \big(\delta^b_t+\gamma^{-1}\mathbb{E}\big[\phi(s_{t-1},a_{t-1})|s_t\big]\big)\nabla_{\theta}\log\pi_{\theta}(a_t|s_t).\label{addback} \end{align} In the above case, the potential term need not be added to ensure an unbiased estimate. Then, the policy parameter update becomes: \begin{align}\label{noaddback} \theta = \theta + \alpha^{\theta} \delta^b_t\nabla_{\theta}\log\pi_{\theta}(a_t|s_t), \end{align} which is exactly the policy update of the advantage actor-critic.
This is formally stated in Proposition \ref{PBAprop}. \begin{prop} \label{PBAprop} When the actor-critic is augmented with the look-back variant of \emph{SAS-NonUniform}, Equations (\ref{addback}) and (\ref{noaddback}) are equal in the sense of expectation. That is, \begin{align} \mathbb{E}_{(s_t,a_t) \sim \rho^{\pi_\theta}}\big[\big(\delta^b_t+&\gamma^{-1}\mathbb{E}\big[\phi(s_{t-1},a_{t-1})|s_t\big]\big)\nabla_{\theta}\log\pi_{\theta}(a_t|s_t)\big] \nonumber\\ = \quad&\mathbb{E}_{(s_t,a_t) \sim \rho^{\pi_\theta}}\big[\delta^b_t\nabla_{\theta}\log\pi_{\theta}(a_t|s_t)\big], \end{align} where $\rho^{\pi_\theta}$ is the distribution induced by the policy $\pi_\theta$. \end{prop} \begin{proof} It suffices to show that: \begin{align} \mathbb{E}_{(s_t,a_t) \sim \rho^{\pi_\theta}}\big[\mathbb{E}\big[\phi(s_{t-1},a_{t-1})|s_t\big]\nabla_{\theta}\log\pi_{\theta}(a_t|s_t)\big] = 0. \end{align} The inner expectation $\mathbb{E}\big[\phi(s_{t-1},a_{t-1})|s_t\big]$ is a function of $s_t$, the policy $\pi_{\theta}$, and the transition probability $\mathbb{T}$. Denoting this expectation by $f(s_t,\pi_{\theta},\mathbb{T})$, we obtain: \begin{align} &\mathbb{E}_{(s_t,a_t) \sim \rho^{\pi_\theta}}\big[f(s_t,\pi_{\theta},\mathbb{T})\nabla_{\theta}\log\pi_{\theta}(a_t|s_t)\big]\nonumber \\= &\mathbb{E}_{s_t \sim \rho^{\pi_\theta}}\bigg[\mathbb{E}_{a_t\sim \pi_\theta}\big[f(s_t,\pi_{\theta},\mathbb{T})\nabla_{\theta}\log\pi_{\theta}(a_t|s_t)\big]\bigg]\nonumber \\= &\mathbb{E}_{s_t \sim \rho^{\pi_\theta}}\bigg[\int_{A}\pi_{\theta}(a_t|s_t)f(s_t,\pi_{\theta},\mathbb{T})\frac{\nabla_{\theta}\pi_{\theta}(a_t|s_t)}{\pi_{\theta}(a_t|s_t)}\text{d}a\bigg] \nonumber\\= &\mathbb{E}_{s_t \sim \rho^{\pi_\theta}}\bigg[f(s_t,\pi_{\theta},\mathbb{T})\nabla_{\theta}\int_{A}\pi_{\theta}(a_t|s_t)\text{d}a\bigg] = 0. \end{align} The last equality follows from the fact that the integral evaluates to $1$, and its gradient is $0$.
\end{proof} \begin{rem} Look-back nonuniform advice could result in better performance than look-ahead nonuniform advice, since it does not involve estimating a future action. \end{rem} \section{Shaping Advice in Multi-Agent RL}\label{Methods2} \begin{figure}[!h] \centering \includegraphics[width=3.05 in]{CTDESchematic.pdf} \caption{Schematic of SAM. A centralized critic estimates value functions $V_{\omega_1},\dots,V_{\omega_n}$. Actions for an agent $i$ are sampled from its policy $\pi_{\theta_i}$ in a decentralized manner. Actions and observations of all agents are used to determine \emph{shaping advice} $F^1,\dots,F^n$. The advice $F^i$ is augmented to the reward $r^i$ from the RL environment. The workflow shown by blue arrows in the outer box is required only during training. During execution, only the workflow shown by the red arrows inside the inner boxes is needed.}\label{SAMARITANSchematic} \end{figure} This section introduces shaping advice in multi-agent reinforcement learning (\emph{SAM}). The goal of SAM is to augment the reward supplied by the MARL environment with \emph{shaping advice} that provides immediate feedback to agents on their actions. SAM uses the CTDE paradigm wherein agents share parameters with each other during the training phase, but execute decentralized policies using only their own observations at test-time. Figure \ref{SAMARITANSchematic} shows a schematic of SAM. We subsequently detail how the shaping advice is provided to the agents, and analyze the optimality and convergence of policies when using SAM. \subsection{Centralized Critic} SAM uses a centralized critic during the training phase. Information about observations and actions of all agents is used to learn a decentralized policy for each agent. One way to do this is by using an actor-critic framework, which combines policy gradients with \emph{temporal difference (TD)} techniques.
At time $t$, the joint action $(a^1_t,\dots,a^n_t)$ is used to estimate the accumulated return for each agent $i$ as $r^i_t + \gamma V^i (s_{t+1})$. This quantity is called the \emph{TD-target}. Subtracting $V^i(s_t)$ from the TD-target gives the \emph{TD-error}, which is an unbiased estimate of the agent's advantage \cite{sutton2018reinforcement}. Each actor can then be updated following a gradient based on this TD-error. We learn a separate critic for each agent, as in \cite{lowe2017multi}. However, the learning process can be affected when rewards provided by the environment are sparse. SAM uses a \emph{potential-based} heuristic as shaping advice that is added to the reward received from the environment. The resulting reward is less sparse and can be used by the agents to learn policies. \subsection{Shaping Advice in Multi-Agent Actor-Critic} We describe how shaping advice is incorporated into the multi-agent policy gradient to assign credit. We use the actor-critic framework with a centralized critic and decentralized actors. For an agent $i$, shaping advice $F^i$ is added to the environment reward $r^i$ at each time step. $F^i$ is specified by a difference of potentials (Eqn. (\ref{LAPBA}) or (\ref{LBPBA})). The centralized critic allows using observations and actions of all agents to specify $F^i$. Using look-ahead advice, $Q$-values in the modified game $\mathcal{G}'$ with reward $R^i+F^i$ and the original game $\mathcal{G}$ with reward $R^i$ are related as \cite{wiewiora2003principled}: \begin{align} [Q^{\bm{\pi_\theta}}_i(s_t,a^i_t,a^{-i}_t)]_{\mathcal{G}}&=[Q^{\bm{\pi_\theta}}_i(s_t,a^i_t,a^{-i}_t)]_{\mathcal{G}'}\nonumber\\&\qquad \qquad+\phi_i(s_t,a^i_t,a^{-i}_t) \label{RelnGG'} \end{align} The accumulated return in $\mathcal{G}'$ for agent $i$ is then estimated by $r^i_t+ \gamma \phi_i(s_{t+1},a^i_{t+1},a^{-i}_{t+1}) - \phi_i(s_t,a^i_t,a^{-i}_t)+ \gamma V^i (s_{t+1})$. From Eqn.
(\ref{RelnGG'}), we can add $\phi_i(s_t,a^i_t,a^{-i}_t)$ to the TD-target in $\mathcal{G}'$ at each time $t$ to keep the policy gradient unbiased in $\mathcal{G}$. Let the critic and actor in SAM for agent $i$ be parameterized by $\omega_i$ and $\theta_i$, respectively. When the actor is updated at a slower rate than the critic, the asymptotic behavior of the critic can be analyzed by keeping the actor fixed, using \emph{two time-scale stochastic approximation} methods \cite{borkar2009stochastic}. For agent $i$, the TD-error at time $t$ is given by \begin{align} \delta^i_t&:=r^i_t+F^i_t+\gamma V_{\omega_i} (s_{t+1}) - V_{\omega_i} (s_t). \label{TDErrori} \end{align} The update of the critic can be expressed as a first-order ordinary differential equation (ODE) in $\omega_i$, given by: \begin{align} \dot{\omega}_i &= \mathbb{E}_{\bm{\pi_\theta}}[\delta^i \nabla_{\omega_i} V_{\omega_i} (s_t)] \label{CriticODE} \end{align} Under an appropriate parameterization of the value function, this ODE converges to an asymptotically stable equilibrium, denoted $\omega_i (\bm{\theta})$. At this equilibrium, the TD-error for agent $i$ is $\delta_{t,\omega_i(\bm{\theta})}^i = r^i_t+F^i_t +\gamma V_{\omega_i(\bm{\theta})} (s_{t+1}) - V_{\omega_i(\bm{\theta})} (s_t)$. The update of the actor can then be determined by solving a first-order ODE in $\theta_i$. With look-ahead advice, a term corresponding to the shaping advice at time $t$ must be added to ensure an unbiased policy gradient (Eqn. (\ref{RelnGG'})). This ODE can be written as: \begin{align} \dot{\theta}_i &= \mathbb{E}_{\bm{\pi_\theta}}[(\delta_{t,\omega_i(\bm{\theta})}^i +\phi_i(s_t,a^i_t,a^{-i}_t)) \nabla_{\theta_i} \log~\pi_{\theta_i}(a_t^i|o^i_t)] \label{ActorODE} \end{align} With look-back advice, no potential term needs to be added to ensure an unbiased policy gradient. This insight follows from Proposition 3 in \cite{xiao2019potential}, since we consider decentralized policies.
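The shaped TD-error and the look-ahead correction above can be sketched in a few lines. The advice forms used below are assumptions for illustration (Eqns. (\ref{LAPBA}) and (\ref{LBPBA}) are defined earlier in the paper): look-ahead advice $F_t=\gamma\phi_{t+1}-\phi_t$ and look-back advice $F_t=\phi_t-\gamma^{-1}\phi_{t-1}$, consistent with the difference-of-potentials description and the $\gamma^{-1}$ term in Proposition \ref{PBAprop}; all other quantities are scalar placeholders, not SAM's actual networks.

```python
GAMMA = 0.99  # discount factor (illustrative)

def shaped_td_error(r, F, v_s, v_s_next, gamma=GAMMA):
    # Eqn. (TDErrori): delta^i_t = r^i_t + F^i_t + gamma*V(s_{t+1}) - V(s_t)
    return r + F + gamma * v_s_next - v_s

def actor_td(delta, phi_sa, look_ahead):
    # With look-ahead advice, phi(s_t, a_t) is added back so the policy
    # gradient stays unbiased in G (Eqn. (RelnGG')); look-back needs no correction.
    return delta + phi_sa if look_ahead else delta

# One illustrative step with scalar placeholders for potentials and values.
r = 0.0                                   # sparse environment reward
phi_prev, phi_now, phi_next = 0.3, 0.5, 0.2
v_s, v_s_next = 1.0, 1.1                  # critic estimates V(s_t), V(s_{t+1})

F_ahead = GAMMA * phi_next - phi_now      # assumed look-ahead advice form
F_back = phi_now - phi_prev / GAMMA       # assumed look-back advice form

td_actor_ahead = actor_td(shaped_td_error(r, F_ahead, v_s, v_s_next),
                          phi_now, look_ahead=True)
td_actor_back = actor_td(shaped_td_error(r, F_back, v_s, v_s_next),
                         phi_now, look_ahead=False)
```

Note how the $-\phi_t$ inside the look-ahead advice and the $+\phi_t$ correction cancel in the actor's TD term, which is exactly why the correction restores unbiasedness.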
\subsection{Analysis and Algorithm} In this section, we prove convergence of the actor and critic parameters when learning with shaping advice. We also demonstrate that convergence of policy gradients and values when using SAM implies convergence of these quantities in the absence of SAM. This guarantees that policies learned in the modified stochastic game $\mathcal{G}'$ are locally optimal in the original game $\mathcal{G}$. For agent $i$, the update dynamics of the critic can be expressed by the ODE in Eqn. (\ref{CriticODE}). Assuming a parameterization of $V(s)$ over a linear family, this ODE converges to an asymptotically stable equilibrium \cite{borkar2009stochastic}. The actor update is then given by the ODE in Eqn. (\ref{ActorODE}). The parameters associated with the critics are assumed to be updated on a faster timescale than those of the actors. Then, the behaviors of the actor and critic can be analyzed separately using two time-scale stochastic approximation techniques \cite{borkar2009stochastic}. \begin{ass} We make the following assumptions: \begin{enumerate} \item At any time $t$, an agent is aware of the actions taken by all other agents. Rewards received by the agents at each time step are uniformly bounded. \item The Markov chain induced by the agent policies is irreducible and aperiodic. \item For each agent $i$, the update of its policy parameter $\theta_i$ includes a projection operator $\Gamma_i$, which projects $\theta_i$ onto a compact set $\Theta_i$. We assume that $\Theta_i$ includes a stationary point of $\nabla_{\theta_i} J_i(\bm{\theta})$ for each $i$. \item For each agent $i$, its value function is parameterized by a linear family. That is, $V_{\omega_i} (s) = \Phi_i \omega_i$, where $\Phi_i$ is a known, full-rank feature matrix for each $i$.
\item For each agent $i$, the TD-error at each time $t$ and the gradients $\nabla_{\omega_i} V_{\omega_i}(s)$ are bounded, and the gradients $\nabla_{\theta_i} \log ~\pi_{\theta_i} (\cdot|s_t)$ are Lipschitz with bounded norm. \item The learning rates satisfy $\sum_t \alpha^\theta_t = \sum_t \alpha^\omega_t = \infty$, $\sum_t[(\alpha^\theta_t)^2 + (\alpha^\omega_t)^2] < \infty$, and $\lim_{t \rightarrow \infty} \frac{\alpha^\theta_t}{\alpha^\omega_t}=0$. \end{enumerate} \end{ass} We first state a useful result from \cite{kushner2012stochastic}. \begin{lm}[\cite{kushner2012stochastic}]\label{KushnerClarkLemma} Let $\Gamma:\mathbb{R}^k \rightarrow \mathbb{R}^k$ be a projection onto a compact set $K \subset \mathbb{R}^k$. Define \begin{align} \hat{\Gamma}(h(x)):&=\lim_{\epsilon \downarrow 0} \frac{\Gamma(x+\epsilon h(x))-x}{\epsilon} \nonumber \end{align} for $x \in K$ and $h:\mathbb{R}^k \rightarrow \mathbb{R}^k$ continuous on $K$. Consider the update $x_{t+1} = \Gamma (x_t + \alpha_t(h(x_t) + \xi_{t,1}+\xi_{t,2}))$ and its associated ODE $\dot{x} = \hat{\Gamma}(h(x))$. Assume that: i) $\{\alpha_t\}$ is such that $\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$; ii) $\{\xi_{t,1}\}$ is such that for all $\epsilon > 0$, $\lim_t \mathbb{P} (\sup_{n \geq t} || \sum_{\tau = t}^n \alpha_{\tau} \xi_{\tau,1}|| \geq \epsilon) = 0$; iii) $\{\xi_{t,2}\}$ is an almost surely bounded random sequence with $\xi_{t,2} \rightarrow 0$ almost surely. Then, if the set of asymptotically stable equilibria of the ODE, denoted $K_{eq}$, is compact, the updates $x_{t+1}$ converge almost surely to $K_{eq}$. \end{lm} Let $\{\mathcal{F}^\omega_t\}$ be the filtration where $\mathcal{F}^\omega_t:=\sigma(s_{\tau},$ $r^1_{\tau},\dots,r^n_{\tau},\omega_{1_\tau},\dots,\omega_{n_{\tau}}:\tau \leq t)$ is an increasing $\sigma$-algebra generated by the iterates of $\omega_i$ up to time $t$. We first analyze the behavior of the critic when the parameters of the actor are fixed.
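The step-size conditions in item 6 of the assumptions above can be illustrated concretely. The schedules below, $\alpha^\theta_t=(t+1)^{-1}$ and $\alpha^\omega_t=(t+1)^{-2/3}$, are one standard choice (an assumption for illustration, not prescribed by the paper): both partial sums diverge, the squared sums converge, and the ratio $\alpha^\theta_t/\alpha^\omega_t=(t+1)^{-1/3}\to 0$, so the critic runs on the faster timescale.

```python
# Illustrative two time-scale step sizes (a common choice, not from the paper).
def alpha_theta(t):            # actor: slow timescale
    return 1.0 / (t + 1)

def alpha_omega(t):            # critic: fast timescale
    return 1.0 / (t + 1) ** (2.0 / 3.0)

# Ratio alpha_theta/alpha_omega = (t+1)^(-1/3) decreases monotonically to 0.
ratios = [alpha_theta(t) / alpha_omega(t) for t in (10, 1000, 100000)]
assert ratios[0] > ratios[1] > ratios[2] and ratios[2] < 0.025

# Squared sums converge (p-series with p = 2 and p = 4/3, both > 1), while the
# plain sums diverge like log t and t^(1/3), respectively.
sq_sum = sum(alpha_theta(t) ** 2 + alpha_omega(t) ** 2 for t in range(10 ** 5))
assert sq_sum < 6.0
```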
\begin{thm}\label{ThmCriticConv} For a fixed policy $\bm{\pi_\theta}$, the update $\omega_i \leftarrow \omega_i - \alpha_t^\omega \delta^i_t \nabla_{\omega_i} V_{\omega_i} (s_t)$ converges almost surely to the set of asymptotically stable equilibria of the ODE $\dot{\omega}_i=h_i(\omega_i):=\mathbb{E}_{\bm{\pi_\theta}}[\delta^i_t \nabla_{\omega_i}V_{\omega_i}(s_t)|\mathcal{F}^\omega_t]$. \end{thm} \begin{proof} Let $\xi^i_{t,1}:=\delta^i_t \nabla_{\omega_i} V_{\omega_i} (s_t)-\mathbb{E}_{\bm{\pi_\theta}}[\delta^i_t \nabla_{\omega_i}V_{\omega_i}(s_t)|\mathcal{F}^\omega_t]$. Then, the $\omega_i$ update can be written as $\omega_i \leftarrow \omega_i- \alpha_t^\omega[h_i(\omega_i)+\xi^i_{t,1}]$, where $h_i(\omega_i)$ is continuous in $\omega_i$. Since $\delta^i_t$ and $\nabla_{\omega_i} V_{\omega_i}(s)$ are bounded, $\xi^i_{t,1}$ is almost surely bounded. Let $M^i_t:= \sum_{\tau = 0}^t \alpha^\omega_{\tau} \xi^i_{\tau,1}$. Then $\{M^i_t\}$ is a martingale\footnote{A martingale \cite{williams1991probability} is a stochastic process $S_1,S_2,\dots$ that satisfies $\mathbb{E}(|S_n|) < \infty$ and $\mathbb{E}(S_{n+1}|S_1,\dots,S_n) = S_n$ for each $n = 1,2,\dots$.}, and $\sum_t ||M^i_t-M^i_{t-1}||^2 = \sum_t ||\alpha^\omega_t \xi^i_{t,1}||^2 < \infty$ almost surely. Therefore, from the martingale convergence theorem \cite{williams1991probability}, the sequence $\{M^i_t\}$ converges almost surely, and the conditions in Lemma \ref{KushnerClarkLemma} are satisfied. Since $V_{\omega_i} = \Phi_i \omega_i$, with $\Phi_i$ a full-rank matrix, $h_i(\omega_i)$ is a linear function, and the ODE has a unique equilibrium point. This equilibrium is asymptotically stable since the ODE dynamics are governed by a matrix of the form $(\gamma T_{\pi} - I)$, where $I$ is the identity matrix and $T_{\pi}$ is the stochastic state-transition matrix under policy $\pi$; the eigenvalues of $(\gamma T_{\pi} - I)$ have strictly negative real parts \cite{prasad2015two}.
Denote this asymptotically stable equilibrium by $\omega_i(\bm{\theta})$. \end{proof} We can now analyze the behavior of the actor, assuming that the critic parameters have converged to an asymptotically stable equilibrium. With $\omega_i (\bm{\theta})$ a limit point of the critic update, let $\delta_{t,\omega_i(\bm{\theta})}^i = r^i_t+F^i_t +\gamma V_{\omega_i(\bm{\theta})} (s_{t+1}) - V_{\omega_i(\bm{\theta})} (s_t)$. When using look-ahead or look-back advice, define $\tilde{\delta}^i_{t,\omega_i(\bm{\theta})}$ as \begin{align} \begin{split} &\text{look-ahead:}\quad\tilde{\delta}^i_{t,\omega_i(\bm{\theta})}:= (\delta_{t,\omega_i(\bm{\theta})}^i+\phi_i(s_t,a^i_t,a^{-i}_t)) \\ &\text{look-back:} \quad\tilde{\delta}^i_{t,\omega_i(\bm{\theta})}:= \delta_{t,\omega_i(\bm{\theta})}^i . \end{split}\label{delta} \end{align} Let $\{\mathcal{F}^\theta_t\}$ be a filtration where $\mathcal{F}^\theta_t:=\sigma(\bm{\theta}_{\tau}:=[\theta_{1_\tau}\dots \theta_{n_\tau}]:\tau \leq t)$ is an increasing $\sigma$-algebra generated by the iterates of $\theta_i$ up to time $t$. \begin{thm} \label{ThmActorConv} The update $\theta_i \leftarrow \Gamma_i[\theta_i + \alpha_t^\theta \tilde{\delta}^i_t \nabla_{\theta_i} \log~\pi_{\theta_i}(a_t^i|o^i_t)]$ converges almost surely to the set of asymptotically stable equilibria of the ODE $\dot{\theta}_i = \hat{\Gamma}_i(h_i(\theta_i))$, where $h_i(\theta_i) = \mathbb{E}_{\bm{\pi_\theta}}[\tilde{\delta}^i_{t,\omega_i(\bm{\theta})}\nabla_{\theta_i} \log~\pi_{\theta_i}(a_t^i|o^i_t)|\mathcal{F}^\theta_t]$. \end{thm} \begin{proof} Let $\xi^i_{t,1} := \tilde{\delta}^i_t \nabla_{\theta_i} \log \pi_{\theta_i}(a_t^i|o^i_t) - \mathbb{E}_{\bm{\pi_\theta}}[\tilde{\delta}^i_t \nabla_{\theta_i} \log\pi_{\theta_i}(a_t^i|o^i_t) | \mathcal{F}^\theta_t]$ and $\xi^i_{t,2}:= \mathbb{E}_{\bm{\pi_\theta}}[(\tilde{\delta}^i_t - \tilde{\delta}^i_{t,\omega_i(\bm{\theta})})\nabla_{\theta_i} \log \pi_{\theta_i}(a_t^i|o^i_t) | \mathcal{F}^\theta_t]$.
Then, the update of $\theta_i$ can be written as $\theta_i \leftarrow \theta_i + \alpha^\theta_t [h_i(\theta_i) +\xi^i_{t,1} + \xi^i_{t,2}]$, where $h_i(\theta_i)$ is continuous in $\theta_i$. We now verify that the conditions in Lemma \ref{KushnerClarkLemma} are satisfied. Since the critic parameters converge almost surely to a fixed point, $\tilde{\delta}^i_t - \tilde{\delta}^i_{t,\omega_i(\bm{\theta})} \rightarrow 0$ almost surely. Therefore, $\xi^i_{t,2} \rightarrow 0$ almost surely, verifying Condition iii) in Lemma \ref{KushnerClarkLemma}. Since $\tilde{\delta}^i_t$ and $\nabla_{\theta_i} \log \pi_{\theta_i}(a_t^i|o^i_t)$ are bounded, $\xi^i_{t,1}$ is continuous in $\theta_i$, and $\theta_i$ belongs to a compact set, the sequence $\{\xi^i_{t,1}\}$ is bounded almost surely \cite{rudin1964principles}. If $M^i_t:= \sum_{\tau = 0}^t \alpha^\theta_{\tau} \xi^i_{\tau,1}$, then $\{M^i_t\}$ is a martingale, and $\sum_t ||M^i_t-M^i_{t-1}||^2 = \sum_t ||\alpha^\theta_t \xi^i_{t,1}||^2 < \infty$ almost surely. Then, $\{M^i_t\}$ converges almost surely \cite{williams1991probability}, satisfying Condition ii) of Lemma \ref{KushnerClarkLemma}. Condition i) holds by assumption, completing the proof. \end{proof} Theorems \ref{ThmCriticConv} and \ref{ThmActorConv} demonstrate the convergence of the critic and actor parameters in the stochastic game with the shaped reward, $\mathcal{G}'$. However, our objective is to provide a guarantee of convergence in the original game $\mathcal{G}$. We establish such a guarantee when the parameterization of the value function results in small errors and the policy gradients in $\mathcal{G}'$ are bounded. \begin{df} For a probability measure $\mu$ on a finite set $\mathcal{M}$, the $\ell_2$-norm of a function $f$ with respect to $\mu$ is defined as $||f||_{\mu}:=\bigg[\int_{\mathcal{M}} |f(X)|^2 d\mu (X)\bigg]^{\frac{1}{2}}=\bigg[\mathbb{E}_{\mu}(|f(X)|^2)\bigg]^{\frac{1}{2}}$.
\end{df} \begin{thm}\label{BoundInOrigGame} In the stochastic game $\mathcal{G}'$, let $(\gamma+1)||V_i^{\pi_{\bm{\theta}}}(s) - V_{\omega_i(\bm{\theta})}(s)||_{\pi_{\bm{\theta}}} \leq \mathcal{E}_i(\bm{\theta})$, and let $||\nabla_{\theta_i} \log\pi_{\theta_i}||_{\pi_{\bm{\theta}}} \leq C_i(\bm{\theta})$. Let $(\bm{\theta}^*, \omega(\bm{\theta})^*)$ be the set of limit points of SAM. Then, in the original stochastic game $\mathcal{G}$, for each agent $i$, $||\nabla_{\theta_i}J_i(\bm{\theta}^*)||_2 \leq C_i(\bm{\theta}^*)\mathcal{E}_i(\bm{\theta}^*)$. \end{thm} \begin{proof} Let $\Theta_{i_{eq}}$ denote the set of asymptotically stable equilibria of the ODE in $\theta_i$, and let $\Theta_{eq}:=\Theta_{1_{eq}} \times \dots \times \Theta_{n_{eq}}$. Then, in the set $\Theta_{eq}$, $\dot{\theta}_i = 0$ for each agent $i$. Consider a policy $\pi_{\bm{\theta}}$ with $\bm{\theta} \in \Theta_{eq}$. In the original game $\mathcal{G}$, \begin{align} \nabla_{\theta_i}J_i(\bm{\theta}) &= \mathbb{E}_{\bm{\pi_\theta}} [ \nabla_{\theta_i} \log\pi_{\theta_i} (a^i_t|o^i_t)Q^{\bm{\pi_\theta}}_i(s_t,a^i_t,a^{-i}_t)]\label{MAPolGrad} \end{align} From Equation (\ref{RelnGG'}), $[Q^{\bm{\pi_\theta}}_i(s_t,a^i_t,a^{-i}_t)]_{\mathcal{G}}=[Q^{\bm{\pi_\theta}}_i(s_t,a^i_t,a^{-i}_t)]_{\mathcal{G}'}+\phi_i(s_t,a^i_t,a^{-i}_t)$. Since we use an advantage actor-critic, we replace $[Q^{\bm{\pi_\theta}}_i(s_t,a^i_t,a^{-i}_t)]_{\mathcal{G}'}$ with an advantage term, defined as $[Q^{\bm{\pi_\theta}}_i(s_t,a^i_t,a^{-i}_t)]_{\mathcal{G}'}-V^{\bm{\pi_\theta}}_i(s_t)$.
Substituting these quantities in Equation (\ref{MAPolGrad}), \begin{align} \nabla_{\theta_i}J_i(\bm{\theta}) &= \mathbb{E}_{\bm{\pi_\theta}}[\nabla_{\theta_i} \log\pi_{\theta_i} (a^i_t|o^i_t)\cdot\label{MAPolGradG'}\\&\qquad \qquad (r^i_t+F^i_t+\gamma V_i^{\pi_{\bm{\theta}}}(s_{t+1})\nonumber\\&\qquad \qquad-V^{\bm{\pi_\theta}}_i(s_t)+ \phi_i(s_t,a^i_t,a^{-i}_t))]\nonumber \end{align} At equilibrium, $\dot{\theta}_i= 0$ in Equation (\ref{ActorODE}). Subtracting this from Equation (\ref{MAPolGradG'}), \begin{align} &\nabla_{\theta_i}J_i(\bm{\theta}) - \dot{\theta}_i =\nabla_{\theta_i}J_i(\bm{\theta}) \nonumber\\ &=\mathbb{E}_{\bm{\pi_\theta}}[\nabla_{\theta_i} \log\pi_{\theta_i} (a^i_t|o^i_t)\cdot \nonumber \\&\qquad \qquad(\gamma (V_i^{\pi_{\bm{\theta}}}(s_{t+1}) - V_{\omega_i(\bm{\theta})} (s_{t+1}))\nonumber\\&\qquad \qquad- (V_i^{\pi_{\bm{\theta}}}(s_{t}) - V_{\omega_i(\bm{\theta})}(s_t)))] \nonumber \end{align} Using the Cauchy-Schwarz inequality, \begin{align} ||\nabla_{\theta_i}J_i(\bm{\theta}^*)||_2 &\leq (\gamma +1) \cdot||V_i^{\pi_{\bm{\theta}}}(s) - V_{\omega_i(\bm{\theta})}(s)||_{\pi_{\bm{\theta}}}\cdot\nonumber\\&\qquad \qquad \qquad ||\nabla_{\theta_i} \log\pi_{\theta_i}||_{\pi_{\bm{\theta}}}\nonumber\\ &\leq C_i(\bm{\theta}^*)\mathcal{E}_i(\bm{\theta}^*)\label{CauchSchwIneq} \end{align} Each term on the right side of Eqn. (\ref{CauchSchwIneq}) is bounded. Thus, $J_i(\bm{\theta})$ converges for each agent $i$ in the original game $\mathcal{G}$, even though policies are synthesized in the modified game $\mathcal{G}'$. \end{proof} Theorem \ref{BoundInOrigGame} demonstrates that the additional reward $F^i$ provided by SAM to guide the agents does not distract them from accomplishing the task objective that is originally specified by the environment reward $R^i$.
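The relation in Eqn. (\ref{RelnGG'}) that underpins the proof above can be verified numerically for a fixed policy in a randomly generated toy MDP. In the sketch below the multi-agent indices are dropped for brevity, and the look-ahead advice is assumed to take the form $F=\gamma\phi(s',a')-\phi(s,a)$; the MDP itself is illustrative and unrelated to the paper's experiments.

```python
import numpy as np

# Verify Q_G(s,a) = Q_G'(s,a) + phi(s,a) (Eqn. (RelnGG')) for a fixed policy,
# where G' uses the shaped reward r + gamma*E[phi(s',a')] - phi(s,a).
rng = np.random.default_rng(1)
nS, nA, gamma = 4, 3, 0.9

P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)   # transitions
R = rng.random((nS, nA))                                          # rewards
pi = rng.random((nS, nA)); pi /= pi.sum(axis=1, keepdims=True)    # fixed policy
phi = rng.random((nS, nA))                                        # potential

# Flatten (s,a) pairs; M[(s,a),(s',a')] = gamma * P(s'|s,a) * pi(a'|s').
M = gamma * np.einsum('ijk,kl->ijkl', P, pi).reshape(nS * nA, nS * nA)

def q_values(reward_vec):
    # Solve the linear Bellman equation Q = r + M Q for the fixed policy.
    return np.linalg.solve(np.eye(nS * nA) - M, reward_vec)

r = R.reshape(-1)
phi_v = phi.reshape(-1)
r_shaped = r + M @ phi_v - phi_v   # expected look-ahead advice added to r

Q_G = q_values(r)
Q_Gprime = q_values(r_shaped)

assert np.allclose(Q_G, Q_Gprime + phi_v)   # Eqn. (RelnGG') holds
```

The check works because the advice telescopes: $Q_{\mathcal{G}'}=(I-M)^{-1}(r+(M-I)\phi)=Q_{\mathcal{G}}-\phi$.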
Given similar assumptions, we can obtain Corollary \ref{convergence} for the single-agent case and show guarantees on the convergence of Algorithm \ref{Algo1} using the theory of two time-scale stochastic approximation. Corollary \ref{convergence} gives a bound on the error introduced as a result of approximating the value function $V_{M'}$ with $V_{M'}^{\omega}$ in the MDP $M'$. This error term is small if the linear function family for $V_{M'}^{\omega}$ is rich. In fact, if the critic is updated in batches, a tighter bound can be achieved, as shown in Proposition 1 of \cite{Yang2018finite}. \begin{cor}\label{convergence} Let $\mathcal{E}(\theta):=\norm{V^{\omega(\theta)}_{M'}(s)-V^{\pi_{\theta}}_{M'}(s)}_{\rho^{\pi_{\theta}}}$. Then, for any limit point $(\theta^*,\omega^*) :=\lim\limits_{T_{max} \to \infty}(\theta_{T_{max}},\omega_{T_{max}})$ of Algorithm \ref{Algo1}, $\norm{\nabla_{\theta}J_M(\theta^*)}_2\leq C\mathcal{E}(\theta^*)$. \end{cor} Algorithm \ref{algo:MAAC+PBA} describes SAM. The shaping advice is specified as a difference of potential functions (\emph{Line 15}), and is added to the reward received from the environment. We use an advantage-based actor-critic, and use the TD-error to estimate this advantage (\emph{Line 16}). This is used to update the actor and critic parameters for each agent (\emph{Lines 18-19}). \begin{algorithm}[!h] \small \caption{SAM: Shaping Advice in Multi-agent RL} \label{algo:MAAC+PBA} \begin{algorithmic}[1] \REQUIRE{For each agent $i$: differentiable policy $\pi_{\theta_i}$; differentiable value function $V_{\omega_i}$; shaping advice $\phi_i(s,a^i,a^{-i})$. Maximum episode length $T_{max}$.} \textbf{Initialization}: \\ policy and value function parameters $\theta_i, \omega_i$ for all agents $i$, learning rates $\alpha^\theta, \alpha^\omega$.
\STATE{$T = 0$} \REPEAT \STATE{$t \leftarrow -1$; $\phi_i(s_{-1},a^i_{-1},a^{-i}_{-1}) = 0$ for all $i$} \STATE{Initialize information $s_0 = [o_0^1,\dots,o_0^n]$} \REPEAT \STATE{$t \leftarrow t+1$} \FOR{agent $i=1$ to $n$} \STATE{sample $a^i_t \sim \pi_{\theta_i}(\cdot|o^i_t)$} \ENDFOR \STATE{Take action $a_t = [a^1_t,\dots,a^n_t]$, observe new information $s_{t+1}$ and obtain reward $r^i_t$ for each agent. Use $a_t$ to determine $\phi_i(s_t,a_t)$ for all agents} \IF{$s_{t+1}$ is terminal} \STATE{$V_{\omega_i} (s_{t+1}) =0$} \ENDIF \FOR{agent $i=1$ to $n$} \STATE{compute $F^i_t$ based on Equations (\ref{LAPBA}) and (\ref{LBPBA})} \STATE{TD-error: $\delta^i_t:=r^i_t+F^i_t+\gamma V_{\omega_i} (s_{t+1}) - V_{\omega_i} (s_t)$} \STATE{compute $\tilde{\delta}^i_t$ based on Equation (\ref{delta})} \STATE{Update actor: $\theta_i \leftarrow \Gamma_i[\theta_i + \alpha_t^\theta \tilde{\delta}^i_t \nabla_{\theta_i} \log~\pi_{\theta_i}(a_t^i|o^i_t)]$} \STATE{Update critic: $\omega_i \leftarrow \omega_i - \alpha_t^\omega \delta^i_t \nabla_{\omega_i} V_{\omega_i} (s_t)$} \ENDFOR \UNTIL{$s_{t+1}$ is terminal} \STATE{$T \leftarrow T+1$} \UNTIL{$T>T_{max}$} \end{algorithmic} \end{algorithm} \begin{rem} We note that our objective is to maximize the rewards that can be obtained by agents equipped with shaping advice. Our algorithms are said to converge when the values of these rewards reach a `steady state'. This is distinct from game-theoretic notions of equilibrium (e.g., Nash equilibrium), as studied in \cite{jiang2020enhanced, yang2018mean}. The flavor of theoretical analysis of convergence that we adopt also allows better interpretability of our experimental evaluations, reported in Section VI. \end{rem} \section{Related Work}\label{RelatedWork} Techniques to improve credit assignment using feedback signals provided by a human operator have been studied in single-agent RL.
Demonstrations provided by a human operator were used to synthesize a `baseline policy' that guided learning in \cite{taylor2011integrating, wang2017improving}. When expert demonstrations were available, imitation learning was used to guide exploration of the RL agent in \cite{kelly2019hg, ross2011reduction}. Feedback provided by a human operator was converted to a shaping reward to aid training a deep RL agent in environments with delayed rewards in \cite{xiao2020fresh}. These techniques, however, presume the availability of a human operator, which might limit their applicability. In the multi-agent setting, decentralized and distributed control techniques are a popular area of research. A widely studied problem in such systems is the design of algorithms that specify how information can be exchanged among agents so that they can jointly complete tasks. A technique to ensure fixed-time consensus for multi-agent systems whose interactions were specified by a directed graph was studied in \cite{zuo2015nonsingular, tian2018fixed}. The authors of \cite{li2020consensus} proposed an adaptive distributed event-triggering protocol to guarantee consensus for multi-agent systems specified by linear dynamics and interactions specified by an undirected graph. We direct the reader to \cite{qin2016recent} for a survey of recent developments in consensus of multi-agent systems. These works, however, assumed the availability of models of individual agents' dynamics, and of the interactions between agents. Data-driven techniques are being increasingly adopted to solve problems in multi-agent systems. These methods do not require complete knowledge of the system model; rather, they use information about the state, input, and output of each agent to establish feasibility of solutions and guarantees on convergence of reinforcement learning-based algorithms.
Input and state data were used in an online manner to design a distributed control algorithm to solve a cooperative optimal output regulation problem in leader-follower systems in \cite{gao2021reinforcement}. Information obtained from trajectories of each player was used in \cite{vamvoudakis2017game} to develop real-time solutions to multi-player games through the design of an actor-critic-based adaptive learning algorithm. The authors of \cite{qu2020scalable} identified a class of networked MARL problems where enforcing specific local interactions among players permitted exploiting an \emph{exponential decay property}, which enabled the development of scalable, distributed algorithms for optimization and control. A comprehensive overview of algorithms in cooperative and competitive MARL with a focus on theoretical analyses of their convergence and complexity was presented in \cite{zhang2021multi}. Cooperative MARL tasks are one instance of the setup described above, wherein all agents share the same global reward. The authors of \cite{sunehag2018value} introduced value decomposition networks that decomposed a centralized value into a sum of individual agent values to assess contributions of individual agents to a shared global reward. An additional assumption on monotonicity of the centralized value function was imposed in QMIX \cite{rashid2018qmix} to assign credit to an agent by enumerating a value for each valid action at a state. The action spaces of agents in the above-mentioned works were discrete and finite, and these techniques cannot be easily adapted to settings with continuous action spaces. In comparison, we study reward shaping in MARL tasks in environments with continuous action spaces. An alternative approach to improve credit assignment is potential-based reward shaping.
Although this requires prior knowledge of the problem domain, potential-based techniques have been shown to offer guarantees on optimality and convergence of policies in both single-agent \cite{ng1999policy} and multi-agent \cite{devlin2011empirical, devlin2011theoretical, lu2011policy} cases. The aforementioned works focused on the use of potential-based methods in environments with discrete action spaces. A preliminary version of this paper \cite{xiao2019potential} introduced potential-based techniques to learn stochastic policies in single-agent RL with continuous states and actions. In this paper, we adapt and extend methods from \cite{xiao2019potential} to develop a comprehensive framework for providing potential-based shaping advice in both single- and multi-agent RL. The authors of \cite{gangwani2020learning} developed a method called iterative relative credit refinement (IRCR), which used a `surrogate objective' to uniformly redistribute a sparse reward along the length of a trajectory in single- and multi-agent RL. We empirically compare SAM with IRCR, and explain why SAM is able to guide agents to learn policies that result in higher average rewards than IRCR. Methods to learn a potential-based reward shaping function have been investigated in recent research. When domain knowledge was available, the authors of \cite{harutyunyan2015expressing} proposed a temporal-difference learning method to transform a given reward function to potential-based advice. In the absence of domain knowledge, graph convolutional networks were used in \cite{klissarov2020reward} to generate potential functions `automatically' by performing message passing between states at which the agent received a reward. Different from the above works, an approach to perform credit assignment by conditioning the value function on future events from a trajectory was examined by the authors of \cite{mesnard2020counterfactual}. We note that these works have focused on the single-agent case.
An analogous approach for the multi-agent case, while interesting, is beyond the scope of the present paper, and remains a promising direction for future research.
\section{Introduction} Transition metal oxides form a fascinating class of compounds with interesting electronic, magnetic, and crystallographic structures. The phase diagram of niobium oxide is especially rich, with a large number of reported phases for \ce{Nb2O5}~\cite{kato1975,kato1976,schafer1966}, in addition to \ce{NbO} and \ce{NbO2}. The high-temperature \ce{Nb2O5} polymorph (H-\ce{Nb2O5}) can be regarded as the parent compound of a family known as crystallographic shear (or Wadsley--Roth) phases~\cite{roth1965a,roth1965d}. In these phases, niobium is present in octahedral coordination, but the Nb/O ratio of \ce{Nb2O5} prevents the formation of purely corner-sharing octahedra. Instead, the structure must include some amount of edge-sharing connections between octahedra. The crystal structures of these compounds consequently consist of {\it blocks} of corner-sharing octahedra of size $n\times m$ that are connected to neighbouring blocks via crystallographic shear planes of edge-sharing connections. In the direction perpendicular to the $n\times m$ plane the units connect infinitely, and tetrahedrally coordinated ions are sometimes present to fill voids in the structure. By reduction of \ce{Nb2O5}, small amounts of Nb$^{4+}$ can be incorporated, and a series of \ce{Nb2O_{5-\delta}} compounds form. These suboxides include \ce{Nb25O62}, \ce{Nb47O116}, \ce{Nb22O54}, and two polymorphs of \ce{Nb12O29} with different crystal symmetries (Fig.~\ref{fig:xtalstrucs}, Table~\ref{tab:strucinfo}). The metal-oxygen octahedra in these compounds are strongly distorted due to a combination of electrostatic repulsion between transition metal ions and the second-order Jahn--Teller effect~\cite{kunz1995,bersuker2006}. Niobium sites in the center of the block are less distorted than those at the periphery. 
The structural principle of blocks as the building unit, as introduced by Wadsley and Roth, also applies to phases in the \ce{TiO2}-\ce{Nb2O5} and \ce{WO3}-\ce{Nb2O5} phase diagrams~\cite{roth1965d}. \begin{figure*} \centering \subfloat[\ce{Nb22O54}]{\includegraphics[scale=0.3]{Nb22O54xtal}} \subfloat[H-\ce{Nb2O5}]{\includegraphics[scale=0.2]{HNb2O5xtal}} \subfloat[{\it m}-\ce{Nb12O29}]{\includegraphics[scale=0.25]{mNb12O29xtal}} \subfloat[{\it o}-\ce{Nb12O29}]{\includegraphics[scale=0.25]{oNb12O29xtal}} \subfloat[\ce{Nb25O62}]{\includegraphics[scale=0.25]{Nb25O62xtal}} \subfloat[\ce{Nb47O116}]{\includegraphics[scale=0.27]{Nb47O116xtal}} \caption{a) Idealised (left) and locally distorted (right) crystal structure of \ce{Nb22O54}. The tetrahedral site is shown as a black dot in the idealised structure. Crystal structures of b) H-\ce{Nb2O5}, c) monoclinic \ce{Nb12O29}, d) orthorhombic \ce{Nb12O29}, e) \ce{Nb25O62}, and f) \ce{Nb47O116}. Light and dark colored blocks are offset by half of the lattice parameter perpendicular to the plane of the page. Unit cells are outlined in black.} \label{fig:xtalstrucs} \end{figure*} The fully oxidised parent compound \ce{Nb2O5} is a wide bandgap insulator. Low concentrations of valence electrons are introduced through $n$-type doping to form the \ce{Nb2O_{5-$\delta$}} phases. This reduction changes the crystal structure, but the structural motif of the blocks is retained, which makes the niobium suboxides an excellent series of phases to study the interplay between charge state and crystal structure. Magnetic susceptibility measurements show that all \ce{Nb2O_{5-$\delta$}} phases are paramagnetic, with the number of localised moments increasing with $\delta$~\cite{cava1991a,ruscher1991}. Spin interactions are antiferromagnetic and their strength increases with the level of reduction, as indicated by their Curie--Weiss constants. 
However, only the monoclinic \ce{Nb12O29} phase is found to exhibit long-range antiferromagnetic order, with an ordering temperature of \SI{12}{\kelvin}~\cite{cava1991,andersen2005}. Electrical conductivity measurements show that all \ce{Nb2O_{5-$\delta$}} phases show thermally-activated transport, except for \ce{Nb12O29}, which is metallic down to \SI{2}{\kelvin}~\cite{ruscher1988,cava1991a}. Both electrical and optical measurements indicate that the electron transport in the \ce{Nb2O_{5-$\delta$}} phases is effectively one-dimensional along the block columns~\cite{ruscher1992}. Despite the evidence for localised electrons, single crystal X-ray diffraction studies on \ce{Nb22O54} and {\it o}-\ce{Nb12O29} have not found evidence for charge ordering~\cite{mcqueen2007}. Additional studies have been performed on \ce{Nb12O29} indicating the presence of localised as well as delocalised electrons~\cite{ruscher1988,lappas2002,cheng2009}. Despite their interesting physical properties, the niobium suboxides have not been investigated with first-principles methods, and the relationship between the level of reduction and the electronic and magnetic properties remains unclear. Two previous first-principles studies have examined the two polymorphs of \ce{Nb12O29}, with rather different conclusions regarding the electronic structure, despite their very strong structural similarity~\cite{fang2014,lee2015}. Additionally, there is new interest in crystallographic shear phases due to their excellent performance as electrode materials in batteries~\cite{griffith2018,li2018}, and the electronic structure of the suboxides is likely to be similar to that of other shear phases. In this article, we study six different niobium (sub)oxides using density-functional theory calculations and establish common principles governing their electronic and magnetic behaviour. 
As the 4$d$ band in these materials is highly complex, we first examine \ce{Nb22O54} in detail, and then present results on \ce{Nb12O29}, \ce{Nb47O116}, \ce{Nb25O62}, and H-\ce{Nb2O5}. We show that all of these structures host flat and dispersive bands, which correspond to localised and delocalised electronic states. Each block can host a single localised state in the block plane that is spread over multiple niobium sites. Delocalised states are present along the shear planes. These results are independent of the $n$-type doping, and alkali metal doped (lithiated) shear phases show similar features to the suboxides, which has implications for their use as battery electrodes. We discuss experimental studies of electrical and magnetic properties of the suboxides in terms of a consecutive filling of localised and delocalised states. Finally, based on these results, we suggest new directions for theory and experiment. \begin{table*} \centering \begin{ruledtabular} \begin{tabular}{cccccccccc} Compound & O/Nb & $N_{e^{-}}/$block & Space group & Source & $a$ & $b$ & $c$ & $\beta$ & Block size(s) \\ \hline H-\ce{Nb2O5} & 2.500 & 0 & $P2/m$ & Expt.~\cite{kato1976} & $21.153$ & $3.823$ & $19.356$ & $119.80^o$ & $3\times 4, 3\times 5$ \\ & & & $P2$ & PBE & $21.433$ & $3.841$ & $19.614$ & $119.85^o$ & \\ \hline \ce{Nb25O62} & 2.480 & $\nicefrac{1}{2}$ & $C2$ & Expt.~\cite{cava1991a} & 29.78 & 3.825 & 21.14 & $94.7^o$ & $3\times 4$ \\ & & & & PBE & $30.224$ & $3.84$ & $21.44$ & $95.0^o$ & \\ \hline \ce{Nb47O116} & 2.468 & $\nicefrac{2}{3}$ & $C2$ & Expt.~\cite{cava1991a} & 57.74 & 3.823 & 21.18 & $105.3^o$ & $3\times 3, 3\times 4$ \\ & & & & PBE & $58.43$ & $3.84$ & $21.44$ & $105.3^o$ & \\ \hline \ce{Nb22O54} & 2.455 & 1 & $P2/m$ & Expt.~\cite{mcqueen2007} & 15.749 & 3.824 & 17.852 & $102.029^o$ & $3\times 3, 3\times 4$ \\ & & & $P2$ & PBE & $15.931$ & $3.842$ & $18.036$ & $102.06^o$ & \\ & & & $P2$ & PBE+U & $15.935$ & $3.836$ & $18.061$ & $101.99^o$ & \\ \hline {\it 
m}-\ce{Nb12O29} & 2.417 & 2 & $A2/m$ & Expt.~\cite{waldron2004} & 15.695 & 3.831 & 20.723 & $113.103^o$ & $3\times 4$ \\ & & & & PBE & $15.903$ & $3.832$ & $20.967$ & $113.1^o$ & \\ & & & & PBE+U & $15.885$ & $3.837$ & $20.950$ & $113.09^o$ & \\ \hline {\it o}-\ce{Nb12O29} & 2.417 & 2 & $Cmcm$ & Expt.~\cite{mcqueen2007} & 3.832 & 20.740 & 28.890 & - & $3\times 4$ \\ & & & & PBE & $3.833$ & $20.955$ & $29.241$ & - & \\ & & & & PBE+U & $3.836$ & $20.961$ & $29.204$ & - & \\ \end{tabular} \end{ruledtabular} \caption{Structural properties of niobium (sub)oxides. Experimental and DFT optimised lattice parameters $a$, $b$ and $c$ are given in \si{\angstrom}. Structural optimisations with DFT+U were performed with a $U_{\mathrm{eff}}$ value of 4.0 eV on niobium $d$-orbitals. $N_{e^-}$ denotes number of electrons introduced by doping. Difference between experimental and DFT space group choices related to ordering of tetrahedral sites (see text).} \label{tab:strucinfo} \end{table*} \section{Methods} All density-functional theory calculations were performed using the planewave DFT code CASTEP~\cite{clark2005} (version 18.1). Pseudopotentials including Nb $4s$, $4p$, $4d$ and $5s$, O $2s$ and $2p$, and Li $1s$ and $2s$ states were used for all calculations. Calculations using hybrid functionals employed norm-conserving pseudopotentials~\cite{hamann1979}, all other calculations were performed using Vanderbilt ultrasoft pseudopotentials~\cite{vanderbilt1990}. Crystal structures were obtained from the Inorganic Crystal Structure Database~\cite{hellenbrandt2004} (ICSD) when available. The structure of \ce{Nb47O116} was constructed as described in Ref.~\onlinecite{cava1991a} as a unit cell intergrowth of \ce{Nb25O62} and \ce{Nb22O54} since no crystallographic data, other than the lattice parameters, was available. The space groups of H-\ce{Nb2O5} and \ce{Nb22O54} are reported as both $P2$ and $P2/m$ in the literature~\cite{kato1976,cava1991a,mcqueen2007}. 
These two space group choices differ only in the full or partial occupancy of the tetrahedral site. For modelling purposes, the ion on the tetrahedral site has to be ordered, resulting in space group $P2$. Atomic positions and lattice parameters of the structures were relaxed using the gradient-corrected Perdew--Burke--Ernzerhof (PBE) functional~\cite{perdew1996}, until the maximum force on any atom was smaller than $0.01$ eV/$\si{\angstrom}$. The calculations used a planewave kinetic energy cutoff of 800 eV for ultrasoft pseudopotentials, and 900 eV for norm-conserving pseudopotentials, unless otherwise stated. The Brillouin zone was sampled with a Monkhorst--Pack grid~\cite{monkhorst1976} finer than $2\pi \times 0.03$ \si{\angstrom}$^{-1}$. Lattice parameters obtained from the structural relaxations are listed in Table~\ref{tab:strucinfo}, and agree very well with the experimental values. Crystallographic information files (CIF) of the PBE optimised structures are available in the Supplemental Material of this article~\footnote{See Supplemental Material at \url{http://link.aps.org/supplemental/10.1103/PhysRevB.99.075151} for crystallographic information files (CIF) of the structures dealt with in this article.}. All electronic structure calculations were performed for antiferromagnetic spin arrangements in the conventional unit cells, as antiferromagnetic spin interactions are observed experimentally~\cite{cava1991a}. Semilocal density functionals suffer from self-interaction error, which can be alleviated by the use of DFT+U. For calculations in this work, the DFT+U implementation in CASTEP~\cite{dudarev1998} was used, which defines an effective $U$ value $U_{\mathrm{eff}}=U-J$. A value of $U_{\mathrm{eff}}=4.0$ eV was chosen for the Nb $d$ orbitals, in line with other studies on niobium oxides that employed similar implementations of DFT+U within planewave codes~\cite{nahm2008}. 
The results presented herein are mostly insensitive to the exact value of the $U_{\mathrm{eff}}$ parameter if it lies in the range 3--5 eV, even though the value of the bandgap does depend on the choice of the $U_{\mathrm{eff}}$ parameter. The structures of \ce{Nb22O54} and the \ce{Nb12O29} polymorphs were additionally optimised with PBE+U, and the results are listed in Table \ref{tab:strucinfo}. PBE and PBE+U lattice parameters agree closely, and PBE+U bandstructure and density of states calculations for compounds other than \ce{Nb22O54} and \ce{Nb12O29} were performed on PBE optimised structures. Hybrid functionals are another way to correct the self-interaction error of semilocal functionals. The range-separated HSE06 functional~\cite{heyd2003} was used to calculate the bandstructure for \ce{Nb22O54}. Due to the significant additional expense incurred by the use of hybrid functionals, the computational parameters for the calculations of bandstructures at the HSE06 level are coarser. The unit cell of \ce{Nb22O54} contains 610 valence electrons, but since the cell is rather short in one particular dimension and extended in the other two, one cannot use only the $\Gamma$-point in the Brillouin zone (BZ) sampling. Instead, a $1\times 5\times 1$ $\Gamma$-centered $\mathbf{k}$-point grid was used in the HSE06 self-consistent field calculations for \ce{Nb22O54}. Bandstructure calculations were performed for high-symmetry Brillouin zone directions according to those obtained from the SeeK-path package~\cite{hinuma2017}, which relies on spglib~\cite{togo2018}. A spacing between $\mathbf{k}$-points of $2\pi\times 0.025$ \si{\angstrom}$^{-1}$ was used. Density of states calculations were performed with a grid spacing of $2\pi\times 0.01$ \si{\angstrom}$^{-1}$, and the results were postprocessed with the OptaDOS package~\cite{morris2014a}, using the linear extrapolative scheme~\cite{pickard1999,pickard2000}. 
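The reciprocal-space sampling criteria quoted above translate into grid dimensions in the usual way: along each reciprocal lattice vector, the number of divisions is the smallest integer that brings the spacing below the threshold. A minimal sketch of this bookkeeping (the helper function is illustrative, not CASTEP's own implementation, and the orthogonal cell below ignores the monoclinic angle):

```python
import math
import numpy as np

def mp_grid(lattice, spacing):
    """Smallest Monkhorst--Pack grid whose k-point spacing along each
    reciprocal lattice vector is finer than `spacing` (in 1/angstrom)."""
    lattice = np.asarray(lattice, dtype=float)
    # Reciprocal lattice vectors as rows, satisfying a_i . b_j = 2*pi*delta_ij
    recip = 2.0 * math.pi * np.linalg.inv(lattice).T
    lengths = np.linalg.norm(recip, axis=1)
    return [max(1, math.ceil(length / spacing)) for length in lengths]

# Hypothetical orthogonal cell using the PBE lattice parameters of Nb22O54
# (the real cell is monoclinic; the angle is ignored in this sketch).
cell = np.diag([15.931, 3.842, 18.036])
print(mp_grid(cell, 2.0 * math.pi * 0.03))  # -> [3, 9, 2]
```

The short $\mathbf{b}$ axis thus demands the densest sampling, which is why the $1\times 5\times 1$ grid used for the HSE06 calculations concentrates its $\mathbf{k}$-points along $\mathbf{b}^{\ast}$.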
The c2x~\cite{rutter2018} utility and VESTA~\cite{momma2011} were used for visualisation of wavefunction and density data. Data analysis and visualisation were performed with the matador~\cite{evans} package. \section{Results} \subsection{\texorpdfstring{\ce{Nb22O54}}{Nb22O54}} \ce{Nb22O54} crystallises in space group $P2/m$~\cite{mcqueen2007}, and shows an ordered mixture of $3\times 3$ and $3\times 4$ blocks of octahedra, in addition to a tetrahedral site (Figure~\ref{fig:xtalstrucs}a). Assuming an ionic model, the compound can be described as $(\mathrm{Nb}^{5+})_{20}(\mathrm{Nb}^{4+})_2(\mathrm{O}^{2-})_{54}$, with two $4d$ electrons per 22 Nb atoms (1 $e^{-}$ per block, Table~\ref{tab:strucinfo}). \begin{figure}[!htb] \centering \includegraphics[scale=0.23]{Nb22O54_PBE+U_BSDOS} \caption{Spin-polarised bandstructure and electronic density of states of \ce{Nb22O54} (PBE+U, $U_{\mathrm{eff}}=4$ eV). Up and down spins colored in red and blue. High symmetry points are marked on slices through the first Brillouin zone. The flat bands below the Fermi level (dashed line) represent localised states.} \label{fig:Nb22O54_DFT+U_BS} \end{figure} \begin{figure}[!htb] \centering \subfloat[]{\includegraphics[scale=0.225]{Nb22O54_SitePDOS}} \subfloat[]{\includegraphics[scale=0.225]{Nb22O54_BlockPDOS}} \caption{Spin-summed projected density of states (PBE+U) for \ce{Nb22O54}. Fermi level is indicated by the dashed line. a) PDOS for central (gold) and peripheral (green) niobium sites. b) DOS projected for sites in different blocks, demonstrating separate localisation of electrons in $3\times 3$ and $3\times 4$ blocks. Contributions from sites are proportional to the shaded area.} \label{fig:Nb22O54_PDOS} \end{figure} The $\mathbf{a}$ and $\mathbf{c}$ lattice vectors of \ce{Nb22O54} are longer than $\mathbf{b}$, which is perpendicular to the block plane (Fig.~\ref{fig:xtalstrucs}, Table~\ref{tab:strucinfo}).
The Brillouin zone (BZ) therefore has one long (along $\mathbf{b}^{\ast}$) and two short directions. The PBE+U spin-polarised bandstructure and electronic density of states (DOS) of \ce{Nb22O54} show a large gap between the valence and conduction bands, which are of oxygen $2p$ and niobium $4d$ character, respectively (Fig.~\ref{fig:Nb22O54_DFT+U_BS}). Two fully occupied flat bands (one for each spin) lie within the band gap, leading to the peaks in the DOS below the Fermi level. The flat bands have a very small one-dimensional dispersion, as evidenced by the shapes of the corresponding peaks in the DOS, and represent localised states. In addition to the flat bands, a set of dispersive bands exists just above the Fermi level, showing the largest dispersion along $\mathbf{b}^{\ast}$. The separation between the flat bands and the rest of the conduction states is smallest at special points lying in the $\mathbf{a}^{\ast}$-$\mathbf{c}^{\ast}$ plane containing $\Gamma$ ($Y$, $A$, $B$), and largest in the parallel plane at the BZ boundary ($Z$, $C$, $D$, $E$). Due to this pattern, the dispersive bands are also effectively one-dimensional. With 12 inequivalent niobium sites in the unit cell of \ce{Nb22O54}, site-resolved projected densities of states (PDOS) are complicated and difficult to interpret. More insight is gained by summing PDOS for sets of sites. Figure~\ref{fig:Nb22O54_PDOS}a shows the projection onto different types of niobium sites within the structure, which are classified as central and peripheral, depending on where they sit within the block. We note two things: (1) Both peripheral and central niobium sites contribute to the localised states, even though the contribution of the central sites is disproportionately large given the ratio of the two site types; and (2) only peripheral niobium sites contribute to the unoccupied density of states above the Fermi level (up to 0.5 eV above it); the contribution from the central sites is exactly zero. The PDOS resolved by block in Fig.
\ref{fig:Nb22O54_PDOS}b demonstrates that one localised state is contained in the $3\times 4$ block, and the other, lower energy one, in the $3\times 3$ block. Both blocks contribute roughly equally to the density of unoccupied conduction states. \begin{figure}[!htb] \centering \includegraphics[scale=0.25]{Nb22O54blockspinden} \caption{a) Spin density plot of \ce{Nb22O54}. Niobium and oxygen shown in dark blue and orange, respectively. Purple and light blue represent up and down spin density, respectively. The rectangles outline the $3\times 4$ and $3\times 3$ blocks. Spin density isosurface drawn at a value of 0.03 $e^-/\si{\angstrom}^3$. b) Kohn-Sham orbitals associated with localised states (flat bands in Fig. \ref{fig:Nb22O54_DFT+U_BS}) in $3\times4$ and $3\times3$ blocks, different phases of the orbitals shown in yellow and light green.} \label{fig:Nb22O54_Spindensity} \end{figure} Spin density in \ce{Nb22O54} is predominantly located on the central niobium sites (Fig.~\ref{fig:Nb22O54_Spindensity}a), which also dominate the relevant states as seen from the PDOS (Fig.~\ref{fig:Nb22O54_PDOS}a). One spin is located in each block, and the spin arrangement is antiferromagnetic between the two blocks. However, the ferromagnetic arrangement is only marginally higher in energy (less than 1 meV), indicating very weak spin interactions that are likely a result of the long (nanometre-scale) magnetic interaction distances. Kohn--Sham orbitals that are occupied by these localised electrons span the entire block, but have contributions only from niobium sites in the same block (Fig.~\ref{fig:Nb22O54_Spindensity}b). The flat dispersion is a result of the very weak face-on overlap ($\delta$-overlap) between these orbitals along $\mathbf{b}$. Both localised orbitals are similar in appearance, despite the different sizes of the blocks. This suggests that the presence of these states is a general feature of block-type structures.
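The connection between weak inter-block overlap and flat dispersion can be illustrated with a one-dimensional tight-binding sketch: for a nearest-neighbour hopping amplitude $t$, the bandwidth is $4t$, so the small $\delta$-overlap along $\mathbf{b}$ yields a nearly flat band. The hopping values below are illustrative only and are not fitted to our DFT results:

```python
import numpy as np

def band(k, onsite, t):
    """1D tight-binding dispersion E(k) = onsite - 2 t cos(k b),
    with k b given in reduced units."""
    return onsite - 2.0 * t * np.cos(k)

k = np.linspace(-np.pi, np.pi, 201)

# Weak delta-overlap between block orbitals along b (hypothetical t, in eV)
flat = band(k, 0.0, 0.005)
# Stronger overlap along the shear planes (hypothetical t, in eV)
disp = band(k, 0.5, 0.25)

print(f"flat bandwidth: {flat.max() - flat.min():.3f} eV")        # flat bandwidth: 0.020 eV
print(f"dispersive bandwidth: {disp.max() - disp.min():.3f} eV")  # dispersive bandwidth: 1.000 eV
```

A two-orders-of-magnitude reduction in $t$ thus turns a dispersive band into one that is flat on the scale of the plots in Fig.~\ref{fig:Nb22O54_DFT+U_BS}.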
\begin{figure}[!htb] \centering \includegraphics[scale=0.4]{Nb22O54_PBE_BS} \includegraphics[scale=0.4]{Nb22O54_PBE+U_BS} \includegraphics[scale=0.4]{Nb22O54_HSE06_BS} \caption{$\Gamma\to Z$ segment of the bandstructure of \ce{Nb22O54} calculated with different levels of theory. PBE, PBE+U ($U_{\mathrm{eff}}=4.0$ eV), and HSE06, from left to right. Only one spin component is shown for clarity.} \label{fig:Nb22O54_Theorycomp} \end{figure} The results presented above were obtained from PBE+U ($U_{\mathrm{eff}}=4$ eV) calculations. The $\Gamma\to Z$ segment of the \ce{Nb22O54} bandstructure computed with HSE06 and PBE is compared to the PBE+U result in Fig.~\ref{fig:Nb22O54_Theorycomp}. Only the up-spin component is shown, which is associated with the localised electron in the $3\times 3$ block. The bandstructure looks similar for all functionals, and importantly, the relevant feature of localised states, i.e. the flat bands, is present even at the PBE level. However, PBE places the flat bands within the dispersive conduction bands, and both are partially occupied, while both DFT+U and the HSE06 functional place the flat bands below the other conduction bands. PBE also places the opposite spin partner of the localised state in the other block much lower in energy than either PBE+U or HSE06. The precise placement of the flat bands depends on the $U$ value, but in the tested range of 2--5 eV the flat bands are placed below the conduction bands, and the gap between them increases by approximately 200 meV per 1 eV increase in $U$. The degree of electron localisation depends on the presence of a gap between flat and dispersive bands. PBE implies metallic behaviour with localised electrons, while HSE06 and PBE+U show full localisation of the electrons. A major difference between the HSE06 and PBE or PBE+U calculations is the size of the gap between valence and conduction bands, which is larger by approximately 1.2 eV for HSE06 compared to PBE+U.
The spin density and Kohn-Sham orbitals were plotted from the output of PBE+U calculations, but we note that the results from PBE and HSE06 are visually indistinguishable from the PBE+U results. \subsection{\texorpdfstring{\ce{Nb12O29}}{Nb12O29}} \ce{Nb12O29} is more reduced than \ce{Nb22O54} and hosts two $4d$ electrons per 12 niobium sites (i.e. 2 per block, Table~\ref{tab:strucinfo}). The two \ce{Nb12O29} polymorphs are structurally similar, and only differ in the long-range arrangement of the blocks; in the monoclinic polymorph the blocks form a ribbon along $a$, while in the orthorhombic structure the blocks zig-zag along $c$ (Fig.~\ref{fig:xtalstrucs}). \begin{figure}[!htb] \centering \includegraphics[scale=0.52]{m-Nb12O29_PBE+U4_BSDOS} \caption{Bandstructure and density of states of monoclinic \ce{Nb12O29} (PBE+U). Fermi level indicated by a dashed line. Up and down spins colored in red and blue, respectively. Flat and dispersive bands are present, with strong similarity to those in \ce{Nb22O54}.} \label{fig:mNb12O29_DFT+U_BS_DOS} \end{figure} The bandstructure of monoclinic \ce{Nb12O29} shows two flat bands (one for each spin), which lead to two peaks in the DOS (Fig.~\ref{fig:mNb12O29_DFT+U_BS_DOS}). The shape of the real-space unit cell results in a Brillouin zone with two short and one long dimension, and the bandstructure path segments are similar to those in \ce{Nb22O54}. The bands for both spins lie exactly on top of each other due to the symmetry of the crystal structure, even though there is a spatial separation of spins (Fig.~\ref{fig:mNb12O29_wvfn}a). The flat bands coexist with more dispersive conduction bands, whose dispersion is largest in the $\mathbf{b}^{\ast}$ direction, making them effectively one-dimensional. Independent of the position of the flat bands, the larger number of electrons per block requires that some of the electrons fill dispersive conduction bands. This indicates that the structural capacity for localised electrons is one per block.
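The per-block electron counts in Table~\ref{tab:strucinfo} follow from simple valence bookkeeping: each Nb contributes five valence electrons and each O takes two, so a formula unit Nb$_x$O$_y$ retains $5x-2y$ $4d$ electrons, distributed over the blocks it contains. A minimal sketch (the block counts per formula unit are our reading of the structures in Fig.~\ref{fig:xtalstrucs}):

```python
from fractions import Fraction

def d_electrons(n_nb, n_o):
    """Nb 4d electrons per formula unit in an ionic model
    (each Nb gives 5 valence electrons, each O takes 2)."""
    return 5 * n_nb - 2 * n_o

# (n_Nb, n_O, blocks per formula unit); block counts assumed from Fig. 1
compounds = {
    "Nb2O5":   (2, 5, None),  # fully oxidised reference, d0
    "Nb25O62": (25, 62, 2),   # two 3x4 blocks plus a tetrahedral site
    "Nb22O54": (22, 54, 2),   # one 3x3 and one 3x4 block plus a tetrahedral site
    "Nb12O29": (12, 29, 1),   # a single 3x4 block
}

for name, (x, y, blocks) in compounds.items():
    n_d = d_electrons(x, y)
    per_block = Fraction(n_d, blocks) if blocks else 0
    print(name, n_d, per_block)
```

The intergrowth \ce{Nb47O116} likewise yields three $4d$ electrons per formula unit, consistent with the fractional per-block filling listed in Table~\ref{tab:strucinfo}.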
In \ce{Nb12O29} flat and dispersive bands are interspersed, while in \ce{Nb22O54}, the flat bands lie below the rest of the $d$-bands (cf. Fig.~\ref{fig:Nb22O54_DFT+U_BS}). Similar to \ce{Nb22O54}, the central Nb sites contribute exclusively to the occupied density of states in a narrow region that is associated with the flat bands (Fig.~\ref{fig:mNb12O29_PDOS}). The remainder of the conduction states involve contributions from the peripheral sites. \begin{figure}[!htb] \centering \includegraphics[scale=0.22]{mNb12O29_SitePDOS} \caption{Projected density of states (PBE+U) for central (gold) and peripheral (green) niobium sites in monoclinic \ce{Nb12O29}. Central niobium sites contribute to the density of occupied states only in a narrow window that is associated with the flat bands.} \label{fig:mNb12O29_PDOS} \end{figure} Kohn-Sham orbitals of the localised states (Fig.~\ref{fig:mNb12O29_wvfn}c) look remarkably similar to those in \ce{Nb22O54}, and are predominantly made up of Nb $d$-orbitals lying within the plane of the block. Electrons occupying these localised states are responsible for the non-zero spin density (Fig.~\ref{fig:mNb12O29_wvfn}a). Orbitals associated with dispersive bands (Fig.~\ref{fig:mNb12O29_wvfn}b,d) are made up of $d_{xy}$ and $d_{yz}$ atomic orbitals that are parallel to the crystallographic shear planes. The band dispersion along $\Gamma\to Z$ is explained by a reduction of in-phase overlap of the constituent atomic orbitals along the real-space $\mathbf{b}$ direction. The fact that the contributing atomic orbitals are parallel to the crystallographic shear planes and overlap face-on (Fig.~\ref{fig:mNb12O29_wvfn}b,d), rather than end-on, can be understood from a crystal field argument. For a transition metal ion in an ideal octahedral crystal field, the $t_{2g}$ orbitals form a degenerate set. The \ce{MO6} octahedra in shear phases, however, are far from ideal. 
When the degeneracy of the $t_{2g}$ orbitals is lifted by a distortion, those $d$-orbitals that do not overlap with any $\sigma$-type ligand orbitals will be lowest in energy and contribute to the low-energy $d$-bands. \begin{figure}[!htb] \centering \includegraphics[scale=0.23]{mNb12O29-wvfn} \caption{Spin density plot (a) and orbitals associated with localised (c) and delocalised (b,d) states in monoclinic \ce{Nb12O29}. Spin density (a) is predominantly located on the central niobium sites, and results from the occupation of localised states (c). Delocalised states have no contribution from the central niobium sites.} \label{fig:mNb12O29_wvfn} \end{figure} The bandstructure of the orthorhombic \ce{Nb12O29} polymorph (Fig.~\ref{fig:oNb12O29}) looks very similar to that of the monoclinic polymorph. In fact, the results on the monoclinic polymorph presented above are transferable to the orthorhombic one. Experimentally, both polymorphs are metallic and show antiferromagnetic spin interactions, but only the monoclinic polymorph shows long-range magnetic order~\cite{andersen2005}. The magnetic susceptibility of \textit{m}-\ce{Nb12O29} can be fit using the Bonner-Fisher form, possibly indicating one-dimensional magnetism~\cite{lappas2002}. The differences are clearly subtle, and the small energy scales involved (10--20 K, around 1 meV) make comparisons based on density-functional theory total energies difficult. However, the picture of the electronic structure of \ce{Nb12O29} that emerges is clear: for both polymorphs, conductivity and local moment magnetism are provided by different sets of electrons. Our conclusions on the orthorhombic polymorph are broadly in line with the first-principles study of Lee and Pickett~\cite{lee2015}. Those authors also found a coexistence of localised and delocalised electrons, with the localised spin residing in a large orbital dominated by the central niobium sites of the blocks, and the delocalised electrons forming another subset.
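The susceptibility analyses referred to above rest on fitting $\chi(T)$; in the simplest Curie--Weiss form, $\chi = C/(T-\theta)$, antiferromagnetic interactions appear as $\theta < 0$. A minimal sketch of such a fit on synthetic data (the numerical values are illustrative, not the measured ones; $C=0.375$ emu K/mol is the ideal $S=1/2$, $g=2$ Curie constant):

```python
import numpy as np

def fit_curie_weiss(T, chi):
    """Fit chi = C / (T - theta) via linear regression on
    1/chi = T/C - theta/C."""
    slope, intercept = np.polyfit(T, 1.0 / chi, 1)
    C = 1.0 / slope
    theta = -intercept * C
    return C, theta

# Synthetic paramagnet with C = 0.375 emu K/mol and theta = -12 K
T = np.linspace(50.0, 300.0, 26)
chi = 0.375 / (T + 12.0)

C, theta = fit_curie_weiss(T, chi)
print(f"C = {C:.3f}, theta = {theta:.1f} K")  # -> C = 0.375, theta = -12.0 K
```

In practice the fit is performed only in the high-temperature regime, where the Curie--Weiss form is valid.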
Our results as well as experimental studies using heat capacity measurements~\cite{cheng2009} and $\mu$SR spectroscopy~\cite{lappas2002} establish the presence of localised magnetic electrons in \textit{m}-\ce{Nb12O29}. We note that a previous study suggested the presence of itinerant moments in \textit{m}-\ce{Nb12O29} on the basis of GGA calculations~\cite{fang2014}. However, the high density of states at the Fermi level that was invoked to explain the itinerant magnetism in fact arises from the flat band representing a localised state. \begin{figure}[!htb] \centering \includegraphics[scale=0.275]{oNb12O29} \caption{Bandstructure (PBE+U) (a) and spin density (b) of orthorhombic \ce{Nb12O29}. $Y=\mathbf{a}^{\ast}/2$. The orthorhombic and monoclinic \ce{Nb12O29} polymorphs show a strong similarity in their bandstructure and spin density distribution (cf. Figs. \ref{fig:mNb12O29_DFT+U_BS_DOS}, \ref{fig:mNb12O29_wvfn}a).} \label{fig:oNb12O29} \end{figure} \subsection{\texorpdfstring{\ce{Nb25O62} and \ce{Nb47O116}}{Nb25O62 and Nb47O116}} The compounds \ce{Nb25O62} and \ce{Nb47O116} are less reduced than \ce{Nb22O54} and host less than one electron per structural block unit (Fig.~\ref{fig:xtalstrucs}, Table~\ref{tab:strucinfo}). The coexisting localised and delocalised electronic states that were found above for \ce{Nb22O54} and monoclinic \ce{Nb12O29} are also present in \ce{Nb25O62} and \ce{Nb47O116}. Localised states in blocks of the same size are nearly degenerate, and since only a fraction of the localised states is occupied (less than 1 electron per block), it is very difficult in a first-principles calculation to localise the electrons within a specific block. This could be done if the occupation of particular bands were constrained. Similarly, since the energy of the localised states depends on their occupation, judging the relative position of dispersive and localised states in these two compounds is very difficult.
Charge densities for the localised states in \ce{Nb47O116} and \ce{Nb25O62} are shown in Fig. \ref{fig:Nb25O62_Nb47O116_Elec}. It seems very likely that both \ce{Nb25O62} and \ce{Nb47O116} possess only localised electrons, occupying a fraction of these localised states. Since \ce{Nb47O116} is a unit cell level intergrowth of \ce{Nb22O54} and \ce{Nb25O62}, and \ce{Nb22O54} shows complete localisation of electrons, it is very likely that electrons should also fully localise in \ce{Nb47O116}, at least in those parts of the structure that derive from \ce{Nb22O54}. \begin{figure}[!htb] \centering \includegraphics[scale=0.225]{Nb25O62_Nb47O116-Charge} \caption{Summed charge densities from bands in (a) \ce{Nb25O62} and (b) \ce{Nb47O116}. Selected empty and filled localised states within blocks are framed by rectangles. The same (conventional) unit cell as in Fig. \ref{fig:xtalstrucs} is shown for \ce{Nb25O62}, but a smaller primitive cell for \ce{Nb47O116}.} \label{fig:Nb25O62_Nb47O116_Elec} \end{figure} \subsection{\texorpdfstring{H-\ce{Nb2O5}}{H-Nb2O5}} H-\ce{Nb2O5} is the high-temperature phase of niobium pentoxide, and crystallises in space group $P2/m$ (Table \ref{tab:strucinfo}, Fig. \ref{fig:xtalstrucs}). As the parent compound of the crystallographic shear structures, its electronic structure provides a reference. However, since it is fully oxidised, all niobium ions have a $d^0$ configuration and there are no electrons occupying the conduction band. \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{HNb2O5-BS_spectral} \caption{Bandstructure of H-\ce{Nb2O5} (PBE). Valence and conduction bands are colored in orange and blue, respectively. Flat and dispersive bands are present above the Fermi level (dashed line) similar to those in \ce{Nb22O54} and \ce{Nb12O29}, but are unoccupied.} \label{fig:HNb2O5_PBE_BS_DOS} \end{figure} The bandstructure of H-\ce{Nb2O5} shows the presence of flat and dispersive bands (Fig.
\ref{fig:HNb2O5_PBE_BS_DOS}), similar to those found for the other shear phases above. However, the relative position of these bands depends on whether or not they are occupied. Doping by alkali metal ions is one way to introduce electrons into the conduction band, and in the particular case of H-\ce{Nb2O5} this has a practical relevance. Transition metal oxides in general, and the niobium-based oxide shear phases of this work, are used as electrodes within lithium-ion batteries. Like oxygen removal, lithium intercalation is a method to $n$-dope the material. Similar behaviour can often be observed from charge doping and ion insertion, for example in \ce{Na_xWO3}\cite{walkingshaw2004}. The H-\ce{Nb2O5} phase has been studied extensively for lithium-ion battery applications and it is closely related to other shear phases that have been examined for the same purpose~\cite{griffith2016,griffith2018}. Inserting a single lithium per unit cell into the middle of the $3\times 4$ block results in a localised state similar to those present in the niobium suboxides (Fig. \ref{fig:HNb2O5_Spindensity}). Note that the electron is entirely localised within the $3\times 4$ block, with the $3\times 5$ block remaining empty. Oxygen removal (as in the suboxides above) and lithium intercalation (examined here) clearly result in similar electronic structure features. \begin{figure}[!htb] \centering \includegraphics[scale=0.25]{Li1-H-Nb2O5-spinden} \caption{Spin density plot of lithiated H-\ce{Nb2O5}. A single lithium is located in the middle of the smaller block, inducing a localised state.} \label{fig:HNb2O5_Spindensity} \end{figure} \section{Discussion} Our results establish that the presence of defect-like flat bands and metallic conduction states is an innate feature of block-type structures. 
This coexistence arises due to the two different types of niobium sites present in the crystal structures; the central \ce{NbO6} octahedra are purely corner-shared, so the distance between niobium atoms is larger and orbital overlap is reduced. This isolation results in localised electronic states, while along the crystallographic shear planes, where Nb-Nb distances are smaller and orbitals overlap more strongly, delocalised states are present. Each block can host one localised electron that is, rather unusually, spread over multiple niobium sites. This spread over multiple sites explains why single crystal X-ray diffraction studies on \ce{Nb22O54} and {\it o}-\ce{Nb12O29} do not show the presence of charge ordering~\cite{mcqueen2007}, despite the detection of localised electrons by magnetic measurements~\cite{cava1991a}. As the electronic structure features are ultimately a result of the blocks as structural units, the same principles are likely to apply to other crystallographic shear phases in the \ce{WO3}-\ce{Nb2O5} and \ce{TiO2}-\ce{Nb2O5} phase diagrams. \begin{figure}[!hb] \centering \subfloat[$\nu\leq1$]{\includegraphics[scale=0.22]{semicon}} \subfloat[$\nu>1$]{\includegraphics[scale=0.22]{metal}} \caption{Schematic of bandstructure for (a) filling fraction $\nu\leq$ 1 $e^{-}$/block and (b) $\nu>$ 1 $e^{-}$/block. O $2p$ and Nb $4d$ dominated bands are colored in orange and blue, respectively. Fermi level is indicated by a dashed line; $\mathbf{k}_{\perp}$ designates the reciprocal space vector associated with the real space direction perpendicular to the block plane.
The relative position of flat and dispersive bands changes with the filling fraction $\nu$.} \label{fig:Nb2O5-x_Scheme} \end{figure} \begin{figure*} \centering \subfloat[]{\includegraphics[scale=0.325]{maginteractions}} \subfloat[]{\includegraphics[scale=0.25]{magorderings}} \caption{a) Possible spin-spin interactions $J_i$ in monoclinic \ce{Nb12O29} along crystallographic $a$, $b$ and $c$ directions. b) Different spin arrangements in monoclinic \ce{Nb12O29}. AFM along $c$ (top left), fully FM (top right), AFM along $a$ (bottom left), AFM along $b$ (bottom right). White and grey blocks are offset by 0.5 $b$ throughout.} \label{fig:mNb12O29_Magnetism} \end{figure*} Regarding possible electronic conduction mechanisms in the niobium suboxides, the relevant quantities are the filling fraction $\nu$ (number of $e^-$ per block), and the energy gap between the flat and dispersive bands. For filling fractions of less than one, only localised states are filled (\ce{Nb25O62} and \ce{Nb47O116}, Fig.~\ref{fig:Nb25O62_Nb47O116_Elec}) and electrons can hop from one filled block to another empty one by a polaron hopping mechanism. The hopping process will have an activation energy. However, in \ce{Nb22O54} all localised states are filled and this hopping mechanism becomes impossible. With no metallic conduction electrons present, thermal excitation from the defect-like flat bands into the dispersive conduction bands might provide the dominant mechanism, as illustrated in Fig.~\ref{fig:Nb2O5-x_Scheme}a. This mechanism is reminiscent of doped semiconductors, and the activation energy associated with this process will depend on the separation between the flat and dispersive bands (cf. Figs~\ref{fig:Nb22O54_Theorycomp}, \ref{fig:Nb2O5-x_Scheme}a). Thermal excitation from flat into dispersive bands is also possible in \ce{Nb25O62} and \ce{Nb47O116}, and could coexist with a polaron hopping mechanism. 
Finally, in the case of \ce{Nb12O29}, all localised states are filled, but metallic conduction states are also partially filled (Fig.~\ref{fig:Nb2O5-x_Scheme}b). The result is metallic conductivity, which does not require thermal activation. Both R\"uscher et al.~\cite{ruscher1988} and Cava et al.~\cite{cava1991,cava1991a} have studied the conductivity of \ce{Nb2O_{5-$\delta$}} compounds and observed that all phases except \ce{Nb12O29} exhibit thermally activated conductivity. R\"uscher et al. also noted the effectively one-dimensional electron transport properties along the block columns~\cite{ruscher1992}, which are consistent with the calculated band dispersions. In addition, the experiments of Cava et al. show semiconducting electronic behaviour for \ce{Nb25O62} from 4--300 K; \ce{Nb47O116} and \ce{Nb22O54} exhibit semiconducting (thermally activated) conductivity from 0--250 K and from 0--100 K, respectively~\cite{cava1991a}. Above those temperatures, there is a metallic-like regime in which the conductivity decreases with increasing temperature. Given this change in the temperature dependence of the conductivity from semiconducting to metallic, the flat bands associated with localised electrons are likely shallow donor levels (Fig.~\ref{fig:Nb2O5-x_Scheme}). The complex interplay between electron localisation and delocalisation in the suboxides is more similar to phenomena occurring in semiconductors upon $n$-type doping, but distinctly different from metal-insulator transitions in transition metal oxides~\cite{imada1998}. Our results suggest that a similar phenomenon of crossover from localised to metallic conduction could occur upon lithium doping of H-\ce{Nb2O5}, which might be observed with electrochemical, spectroscopic, or magnetic measurements. Magnetic susceptibility measurements on the suboxides show that the number of localised moments increases with the degree of reduction~\cite{ruscher1991,cava1991a}.
However, under the assumption that $g=2$, the number of moments calculated from the measurements is smaller than the number of introduced electrons~\cite{ruscher1991,cava1991a}. For \ce{Nb12O29}, this is consistent with one delocalised and one localised electron, but for the remaining suboxides this apparent reduction in the number of local moments is unexpected. Our first-principles results are consistent with complete localisation of electrons until a threshold is exceeded, and therefore all moments should be seen. Deviations in the $g$ value of the electrons might explain the discrepancy. As the electrons in these suboxides are well-localised, electron paramagnetic resonance (EPR) spectroscopy could provide some insight into the nature of the electronic states and the $g$ values. For \ce{Nb22O54} in particular, the different shapes of the magnetic orbitals could be used to detect electrons occupying specific blocks. Another possibility is that localised electrons contributing magnetic moments coexist with magnetically inactive electrons in all suboxides, not just \ce{Nb12O29}. However, we see no evidence for this in our calculations, and the thermally activated conductivity of \ce{Nb22O54}, \ce{Nb47O116} and \ce{Nb25O62} seems inconsistent with the presence of magnetically inactive (Pauli-paramagnetic) metallic electrons. Long-range antiferromagnetic order is observed only in the monoclinic \ce{Nb12O29} phase below 12 K; all other niobium suboxides are paramagnetic~\cite{cava1991,cava1991a}. The Curie--Weiss constants of \ce{Nb2O_{5-$\delta$}} are in the range of \SIrange{0}{24}{\kelvin} (\SIrange{0}{2}{\milli\electronvolt}), and indicate antiferromagnetic interactions that become stronger with increasing degree of reduction~\cite{cava1991a}. In \ce{Nb25O62} and \ce{Nb47O116}, some of the localised states are empty (cf. Fig.~\ref{fig:Nb25O62_Nb47O116_Elec}), and the magnetic lattice is not fully filled.
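The quoted correspondence between Curie--Weiss temperatures and interaction energies is simply $E=k_B\theta$; a quick numerical check of the temperature scales mentioned above:

```python
KB_MEV_PER_K = 8.617333262e-2  # Boltzmann constant in meV/K

def kelvin_to_mev(T):
    """Convert a temperature scale (K) into an energy scale (meV)."""
    return KB_MEV_PER_K * T

print(round(kelvin_to_mev(24.0), 2))  # upper end of the Curie-Weiss range: 2.07 meV
print(round(kelvin_to_mev(12.0), 2))  # ordering temperature of m-Nb12O29: 1.03 meV
```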
Independent of the strength of interaction, if not all spins have neighbours to couple with, or there is some randomness in the distribution of the spins, long-range magnetic order is unlikely to emerge. With first-principles calculations it is very difficult to address the question of why only {\it m}-\ce{Nb12O29} orders while \ce{Nb22O54} and {\it o}-\ce{Nb12O29} do not, since the energy differences between different magnetic states are very small. However, we can discuss the possible spin-spin interactions simply based on the shape and orientation of the magnetic orbital within the crystal structure. We will focus in particular on monoclinic \ce{Nb12O29}, but similar considerations apply to the other suboxides. The magnetic orbital lies within the plane of the block. The two shortest distances between spins (two nearest neighbours) are along the block columns, with a separation of about 3.8 \si{\angstrom} ($J_b$, Fig.~\ref{fig:mNb12O29_Magnetism}a bottom). By symmetry, the interaction with spins in the four next-nearest neighbouring blocks along $c$, that are offset by 0.5 $b$, has to be the same ($J_c$, distance 10.6 \si{\angstrom}, Fig.~\ref{fig:mNb12O29_Magnetism}a bottom). In addition to that, each block is connected to two blocks on the same level in monoclinic \ce{Nb12O29} along the $a$ direction ($J_a$, distance 15.9 \si{\angstrom}, Fig.~\ref{fig:mNb12O29_Magnetism}a top), and four others offset by 0.5 $b$ along $a$ (distance 15.3 \si{\angstrom}, Fig.~\ref{fig:mNb12O29_Magnetism}a top). Different spin arrangements are easily obtained from DFT calculations (spin densities are shown in Fig.~\ref{fig:mNb12O29_Magnetism}b), but the energy differences between them are very small (a few meV), and change significantly with the level of theory (PBE or PBE+U). Energy differences of a few meV are consistent with the interaction strengths obtained experimentally.
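The competition between the orderings of Fig.~\ref{fig:mNb12O29_Magnetism}b can be sketched with a simple collinear Ising-like energy count over the neighbour shells listed above. The coupling values below are hypothetical (chosen so that antiferromagnetic order along $c$ is lowest), and the toy model ignores the additional offset neighbours along $a$ as well as any frustration:

```python
# Energy per spin: E = 1/2 * sum_d z_d * J_d * corr_d, where corr_d = +1 for
# parallel and -1 for antiparallel neighbours along direction d, and J_d > 0
# denotes an antiferromagnetic coupling. z_d taken from the neighbour counts above.
Z = {"a": 2, "b": 2, "c": 4}
J = {"a": 0.1, "b": 0.2, "c": 1.0}  # hypothetical couplings in meV

def energy_per_spin(corr):
    return 0.5 * sum(Z[d] * J[d] * corr[d] for d in Z)

orderings = {
    "FM":    {"a": +1, "b": +1, "c": +1},
    "AFM_a": {"a": -1, "b": +1, "c": +1},
    "AFM_b": {"a": +1, "b": -1, "c": +1},
    "AFM_c": {"a": +1, "b": +1, "c": -1},
}
energies = {name: energy_per_spin(c) for name, c in orderings.items()}
print(min(energies, key=energies.get))  # AFM_c
```

With couplings of this magnitude the four states are separated by only a few meV, consistent with the energy scale quoted above.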
The lowest energy magnetic ordering found in our calculations is antiferromagnetic along the $c$ direction (Fig.~\ref{fig:mNb12O29_Magnetism}b, top left). \section{Conclusion} We have shown that the electronic structure features common to $n$-doped crystallographic shear phases include (1)~effectively one-dimensional flat and dispersive bands corresponding to localised and delocalised electronic states, (2)~electron localisation in orbitals spanning the block planes, and (3)~the partition of localised and delocalised states between central and peripheral niobium sites. Structural block units are also present in \ce{WO3}-\ce{Nb2O5}~\cite{roth1965} and \ce{TiO2}-\ce{Nb2O5}~\cite{wadsley1961,wadsley1961a} phases, and many of these mixed-metal shear phases have been explored as lithium-ion battery electrodes~\cite{griffith2018,guo2014}. The principles laid out in this work are likely transferable to these compounds, and are important for the interpretation of spectroscopic and electrochemical data. The niobium suboxides show a transition from localised to delocalised electrons, but it is much smoother than commonly observed for metal-insulator transitions in transition metal oxides. In fact, our results portray the suboxides as closer to $n$-doped semiconductors, but with a limited capacity for localised electrons. Once a filling threshold is exceeded, delocalised metallic electrons are simply added to existing localised electrons. This process is likely to occur in heavily lithium-doped shear phases during battery operation. Similarly, the experimentally observed crossover from localised to delocalised electronic behaviour in \ce{WO_{3-x}}~\cite{salje1984} might have the same underlying mechanism, as \ce{WO_{3-x}} phases also exhibit some amount of crystallographic shear.
More broadly, the niobium suboxides are an elegant example of the interplay between crystal and electronic structure, and the balance between electron localisation and delocalisation in oxides of an early transition metal. \begin{acknowledgments} The authors would like to thank Bartomeu Monserrat and Ieuan Seymour for useful discussions. We acknowledge the use of Athena at HPC Midlands+, which was funded by the EPSRC on grant EP/P020232/1, in this research via the EPSRC RAP call of spring 2018. C.P.K. thanks the Winton Programme for the Physics of Sustainability and EPSRC for financial support. K.J.G. thanks the Winston Churchill Foundation of the United States and the Herchel Smith Foundation. K.J.G. and C.P.G. also thank the EPSRC for funding under a programme grant (EP/M009521/1). The authors declare that the data supporting the findings of this study are available within the paper and its Supplementary Material files. \end{acknowledgments}
\section{Introduction} For any uncountable cardinal number $\tau$, the Tychonov and the Cantor cubes (denoted by $I^\tau$ and $D^\tau$, respectively), belong to the class of main geometric objects in the topology of non-metrizable compact Hausdorff spaces. The spaces $I^\tau$ (we denote $I$ the segment $[0,1]$) and $D^\tau$ were first characterized by Shchepin \cite{S}. In particular, the Tychonov cubes are characterized as the homogeneous-by-character nonmetrizable compact Hausdorff absolute retracts \cite{S1}. This characterization was later applied to the study of topology of the {\em functor-powers}, i.e. spaces of the form $F(K^\tau)$, where $K$ is a compact metrizable space and $F$ is a covariant functor in the category of compact Hausdorff spaces. In particular, it was proved that, for uncountable $\tau$, the space $P(I^\tau)$, where $P$ denotes the probability measure functor, is homeomorphic to $I^\tau$ if and only if $\tau=\omega_1$. For the hyperspace functor $\exp$ it is known that $\exp(D^\tau)$ is homeomorphic to $D^\tau$ if and only if $\tau=\omega_1$ and $\exp(I^\tau)$ is not an absolute retract whenever $\tau>\omega$. In this paper we consider the hyperspaces $\mathop{\mathrm{cc}}(X)$ of nonempty compact convex subsets in $X$ for compact convex subsets $X$ in locally convex spaces. For metrizable $X$, this object was investigated by different authors (see, e. g. \cite{NQS}, \cite{M}). In particular, it was proved in \cite{NQS} that the hyperspace of convex compact subsets of the Hilbert cube $Q=I^\omega $ is homeomorphic to $I^\omega $. The aim of this paper is to consider the nonmetrizable compact convex subsets in locally convex spaces. One of our main results is Theorem \ref{t:1}, which characterizes the compact convex spaces $X$ with $\mathop{\mathrm{cc}}(X)$ being an absolute retract. We also show that the space $\mathop{\mathrm{cc}}(X)$ is homeomorphic to $I^{\omega_1}$ (resp. 
the cone over $I^{\omega_1}$) if and only if $X$ is homeomorphic to $I^{\omega_1}$ (resp. the cone over $I^{\omega_1}$). These results are in the spirit of the corresponding results concerning the functor-powers of compact metric spaces (see \cite{S}). The proofs are based on the spectral analysis of nonmetrizable compact Hausdorff spaces, in particular on the Shchepin Spectral Theorem \cite{S}, as well as on an analysis of the selection-type properties of the maps of the hyperspaces of compact convex subsets. The construction $\mathop{\mathrm{cc}}$ determines a functor acting on the category ${\mathbf{Conv}}$ of compact convex subsets of locally convex spaces. The results of this paper demonstrate that the functor $\mathop{\mathrm{cc}}$ is closer to the functor $P$ of probability measures than to the hyperspace functor $\exp$. \section{Preliminaries} All topological spaces are assumed to be Tychonov and all maps continuous. By $\bar A$ we denote the closure of a subset $A$ of a topological space. Let $X$ be any space. The {\it hyperspace\/} $\exp X$ of $X$ is the space of all nonempty compact subsets in $X$ endowed with the {\it Vietoris\/} topology. A base of this topology is formed by the sets of the form $$ \langle U_1,\dots,U_n\rangle=\bigl\{A\in\exp X\mid A\subset U_1\cup\dots\cup U_n\text{ and }A\cap U_i\neq\emptyset\text{ for every }i\bigr\}, $$ where $U_1,\dots,U_n$ run through the topology of $X$, $n\in\mathbb N$. For a metric space $(X,\rho)$ the Vietoris topology on $\exp(X)$ is induced by the Hausdorff metric $\rho_{\mathrm{H}}$: $$\rho_{\mathrm{H}}(A,B)=\inf\{\varepsilon>0 \mid A\subset O_{\varepsilon}(B),\ B\subset O_{\varepsilon}(A)\}.$$ The hyperspace construction determines a functor in the category ${\mathbf{Comp}}$ of compact Hausdorff spaces and continuous maps. Given a map $f\colon X\to Y$ in ${\mathbf{Comp}}$, we define $\exp(f)\colon \exp(X)\to\exp(Y)$ by $\exp(f)(A)=f(A)$, $A\in \exp(X)$.
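For finite subsets of a metric space the infimum in the definition of $\rho_{\mathrm{H}}$ is attained, and the Hausdorff distance reduces to a max-min computation. A minimal sketch for finite point sets in the plane (the sets themselves are illustrative):

```python
def dist(p, q):
    """Euclidean distance in the plane."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def hausdorff(A, B):
    """Hausdorff distance between nonempty finite subsets A, B of the plane."""
    d_ab = max(min(dist(a, b) for b in B) for a in A)  # how far A sticks out of B
    d_ba = max(min(dist(a, b) for a in A) for b in B)  # how far B sticks out of A
    return max(d_ab, d_ba)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (3.0, 0.0)]
print(hausdorff(A, B))  # 2.0: the point (3, 0) lies distance 2 from A
```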
Let ${\mathbf{Conv}}$ denote the category of compact convex subsets in locally convex spaces and affine continuous maps. If $X$ is an object of ${\mathbf{Conv}}$ we define $$\mathop{\mathrm{cc}}(X)=\{A\in\exp(X)\mid A\mbox{ is convex}\}\subset\exp(X).$$ If $f\colon X\to Y$ is a map in ${\mathbf{Conv}}$, then the map $\mathop{\mathrm{cc}}(f)\colon \mathop{\mathrm{cc}}(X)\to \mathop{\mathrm{cc}}(Y)$ is defined as the restriction of $\exp(f)$ on $\mathop{\mathrm{cc}}(X)$. In the sequel, for a nonempty compact subset $X$ in a locally convex space $Y$, we denote the~closed convex hull map by $h\colon \exp X\to\mathop{\mathrm{cc}}(Y)$. Let $X$ be a subset of a metrizable locally convex space $M$. In the sequel, we identify any point $x\in X$ with the singleton $\{x\}\in\mathop{\mathrm{cc}}(X)$. Recall that the {\it Minkowski operation\/} in $\mathop{\mathrm{cc}}(X)$ is defined as follows: $$\lambda_1A_1+\lambda_2A_2=\{\lambda_1x_1+\lambda_2x_2\mid x_1\in A_1,\ x_2\in A_2\},$$ $\lambda_1,\lambda_2\in\mathbb R$, $A_1,A_2\in\mathop{\mathrm{cc}}(X)$. \begin{lem} Let $X$ be a compact convex subset in a locally convex space. There exists an~embedding $\alpha$ of the space $\mathop{\mathrm{cc}}(X)$ into a~locally convex space $L$ satisfying the condition \begin{equation}\label{f:emb} \alpha(\lambda_1A_1+\lambda_2A_2)=\lambda_1\alpha(A_1)+\lambda_2\alpha(A_2) \end{equation} for every $\lambda_1,\lambda_2\in\mathbb R$, $A_1,A_2\in\mathop{\mathrm{cc}}(X)$. \end{lem} \begin{proof} Let $X$ be a compact convex subset in a metrizable locally convex space $M$. Following \cite{P}, consider the equivalence relation $\sim$ on $\mathop{\mathrm{cc}}(M)\times\mathop{\mathrm{cc}}(M)$ defined by the condition: $(A,B)\sim(C,D)$ if and only if $A+D=B+C$. Denote by $L$ the set of equivalence classes of $\sim$ (in the sequel, we denote by $[A,B]$ the equivalence class that contains $(A,B)$). It is well-known that $L$ is a linear space with respect to the naturally defined operations. 
Let $U$ be a convex neighborhood of zero in $M$ and define $$U^\ast=\{[A,B]\in L\mid A\subset B+U,\ B\subset A+U\}.$$ The sets $U^\ast$ form a base at zero in $L$. The map $\alpha\colon\mathop{\mathrm{cc}}(X)\to L$ defined by the formula $\alpha(A)=[A,\{0\}]$ is the required embedding. \end{proof} \section{Functor $\mathop{\mathrm{cc}}$ and soft maps} A map $f\colon X\to Y$ is {\it soft} (see \cite{S}) if for every commutative diagram $$\xymatrix{A \ar[r]^{\psi}\ar[d]_i & X\ar[d]^{f} \\ Z\ar[r]_{\varphi} & Y,}$$ where $i\colon A\to Z$ is a closed embedding into a paracompact space $Z$, there exists a map $\Phi\colon Z\to X$ such that $\Phi | A=\psi$ and $f\Phi=\varphi$. In other words, a map is soft if it satisfies the parameterized selection extension property. The following proposition is close to the Michael selection theorem for convex-valued maps \cite{Mi}. \begin{prop}\label{p:soft} Let $f\colon X\to Y$ be an~affine open map of compact convex metrizable subsets of locally convex spaces. Then the~map $\mathop{\mathrm{cc}}(f)\colon \mathop{\mathrm{cc}}(X)\to\mathop{\mathrm{cc}}(Y)$ is soft. \end{prop} \begin{proof} We first prove that the~map $\mathop{\mathrm{cc}}(f)$ is open. It is well-known that the~map $\exp(f)$ is open. Since the diagram $$\xymatrix{{(\exp(f))^{-1}(\mathop{\mathrm{cc}}(Y))}\ar[rr]^h \ar[dr]_{{\exp(f)|(\exp(f))^{-1}(\mathop{\mathrm{cc}}(Y))}\ \ \ \ \ \ }&&{\mathop{\mathrm{cc}}(X)}\ar[dl]^{\mathop{\mathrm{cc}}(f)}\\ &{\mathop{\mathrm{cc}}(Y)}&}$$ is commutative and the closed convex hull map $h$ is a retraction of $(\exp(f))^{-1}(\mathop{\mathrm{cc}}(Y))$ onto $\mathop{\mathrm{cc}}(X)$, we see that the~map $\mathop{\mathrm{cc}}(f)$ is also open. There exists an~embedding $\alpha\colon\mathop{\mathrm{cc}}(X)\to L$ satisfying condition (\ref{f:emb}). Choose a~countable family of~functionals $\{\varphi_1,\varphi_2,\dots\}\subset L^*$ such that this family separates the points and $\varphi_i(\alpha(\mathop{\mathrm{cc}}(X)))\subset[-1/i,1/i]$. 
Then the~map $\varphi=(\varphi_1,\varphi_2,\dots)$, defined on $\alpha(\mathop{\mathrm{cc}}(X))$, embeds $\alpha(\mathop{\mathrm{cc}}(X))$ into the~Hilbert space $\ell^2$. Denote by $$\xi\colon \varphi(\alpha(\mathop{\mathrm{cc}}(X)))\times\mathop{\mathrm{cc}}(\varphi(\alpha(\mathop{\mathrm{cc}}(X))))\to \varphi(\alpha(\mathop{\mathrm{cc}}(X)))$$ the~nearest point map: $y=\xi(x,A)$ if and only if $\|z-x\|>\|y-x\|$, for every $z\in A\setminus\{y\}$ (here $\|\cdot\|$ denotes the~norm in $\ell^2$). Suppose a~commutative diagram $$\xymatrix{A \ar[r]^{p}\ar@{^{(}->}[d] & {\mathop{\mathrm{cc}}(X)}\ar[d]^{\mathop{\mathrm{cc}}(f)} \\ Z\ar[r]_{q} & {\mathop{\mathrm{cc}}(Y)}}$$ is given, where $A$ is a closed subset of a~paracompact space $Z$. Since $\mathop{\mathrm{cc}}(X)$ is an~absolute retract, there exists a~map $r\colon Z\to \mathop{\mathrm{cc}}(X)$ such that $r|A=p$. Note that for every $B\in\mathop{\mathrm{cc}}(Y)$, the~set $\varphi(\alpha(\mathop{\mathrm{cc}}(f)^{-1}(B)))$ is a~convex closed subset of $\varphi(\alpha(\mathop{\mathrm{cc}}(X)))$, i.e. an~element of the~space $\mathop{\mathrm{cc}}(\varphi(\alpha(\mathop{\mathrm{cc}}(X))))$. Since the~map $\mathop{\mathrm{cc}}(f)$ is open, the~map $$\delta\colon\mathop{\mathrm{cc}}(Y)\to\mathop{\mathrm{cc}}(\varphi(\alpha(\mathop{\mathrm{cc}}(X)))),\ \delta(B)=\varphi(\alpha(\mathop{\mathrm{cc}}(f)^{-1}(B))),$$ is continuous. Define the~map $R\colon Z\to \mathop{\mathrm{cc}}(X)$ by the~formula $$R(z)=\alpha^{-1}(\varphi^{-1}(\xi(\varphi(\alpha (r(z))),\delta(q(z))))),\ z\in Z.$$ It is easy to see that $R$ is continuous, $R|A=p$, and $\mathop{\mathrm{cc}}(f)R=q$. \end{proof} A point $p$ of a set $X$ in a locally convex space $E$ is called an {\em exposed point} of $X$ if there exists a continuous linear functional $f$ on $E$ such that $f(x)> f(p)$, for each $x \in X\setminus\{p\}$. 
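For a finite set in the plane, the definition of an exposed point can be tested directly: $p$ is exposed by a linear functional $f$ precisely when $f$ attains its minimum on $X$ at $p$ alone. A minimal sketch (the square and the functionals are illustrative):

```python
def exposed_point(X, f):
    """Return the point of the finite set X at which the linear functional f is
    strictly minimised, or None if the minimum is attained more than once."""
    values = [f(x) for x in X]
    m = min(values)
    minimisers = [x for x, v in zip(X, values) if v == m]
    return minimisers[0] if len(minimisers) == 1 else None

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
f = lambda p: p[0] + 2 * p[1]  # exposes the vertex (0, 0)
g = lambda p: p[0]             # minimised along a whole edge: exposes nothing
print(exposed_point(square, f))  # (0, 0)
print(exposed_point(square, g))  # None
```

For a polytope presented by its vertex set this detects exactly the vertices exposed by a given functional; the second functional illustrates a face-wide minimum, where no single point is exposed.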
\begin{lem}\label{l:3} Let $f\colon X\to Y$ be an open affine continuous map of compact convex subsets in locally convex spaces such that $|f^{-1}(y)|>1$ for every $y\in Y$. Then $|\mathop{\mathrm{cc}}(f)^{-1}(B)|>1$, for every $B\in \mathop{\mathrm{cc}}(Y)$. \end{lem} \begin{proof} As in the proof of Proposition \ref{p:soft}, one may assume that $X$ is affinely embedded in the Hilbert space $\ell^2$. Let $B\in \mathop{\mathrm{cc}}(Y)$ and $A\in\mathop{\mathrm{cc}}(f)^{-1}(B)$. If $A\neq f^{-1}(B)$, then we define $A'$ as the closure of the convex hull of $A\cup\{x\}$, where $x\in f^{-1}(B)\setminus A$. Then $A'\neq A$ and $A'\in\mathop{\mathrm{cc}}(f)^{-1}(B)$. If $A= f^{-1}(B)$, then it is well known (see, e.g. \cite{BL}) that there exists an exposed point $x$ of $A$. Since $f$ is open, there exists a neighborhood $U$ of $x$ such that $f(A\setminus U)=B$. In this case we define $A'$ as the closure of the convex hull of $A\setminus U$. Note that $A'\in \mathop{\mathrm{cc}}(f)^{-1}(B)$. That $A\neq A'$ easily follows from the fact that $x$ is an exposed point. \end{proof} \begin{lem}\label{l:op} Suppose that $f\colon X\to Y$ is a continuous affine map of compact convex subsets of locally convex spaces. If the map $\mathop{\mathrm{cc}}(f)$ is open then so is the map $f$. \end{lem} \begin{proof} Suppose the contrary, that $f$ is not open. Then there exists $x\in X$ and a net $(y_\alpha)_{\alpha\in A}$ in $Y$ converging to $y=f(x)$, such that there is no net $(x_\alpha)_{\alpha\in A}$ in $X$ converging to $x$ with $x_\alpha\in f^{-1}(y_\alpha)$, for every $\alpha\in A$. Assuming that the map $\mathop{\mathrm{cc}}(f)$ is open, we obtain that there exists a net $(C_\alpha)_{\alpha\in A}$ in $\mathop{\mathrm{cc}}(X)$ converging to $\{x\}$ and such that $\mathop{\mathrm{cc}}(f)(C_\alpha)=\{y_\alpha\}$, for every $\alpha\in A$. Then, obviously, the net $(c_\alpha)_{\alpha\in A}$ converges to $x$, for every choice $c_\alpha\in C_\alpha$, $\alpha\in A$.
This gives a contradiction. \end{proof} A commutative diagram \begin{equation}\label{f:1} \mathcal{D}={\xymatrix{X\ar[r]^f\ar[d]_g& Y\ar[d]^u\\ Z\ar[r]_v& T}} \end{equation} is called {\it soft} if its {\it characteristic map} $$\chi_{\mathcal{D}}=(f,g)\colon X\to Y\times_TZ=\{(y,z)\in Y\times Z\mid u(y)=v(z) \}$$ is soft. \begin{lem}\label{l:soft} Suppose that a commutative diagram $\mathcal{D} $ (see formula (\ref{f:1})) in the category ${\mathbf{Conv}}$ consists of metrizable spaces. If the diagram $\mathop{\mathrm{cc}}(\mathcal{D}) $ is soft, then so is the diagram $\mathcal{D} $. \end{lem} \begin{proof} First we show that the diagram $\mathcal{D}$ is open whenever $\mathop{\mathrm{cc}}(\mathcal{D})$ is. Let $(y_i,z_i)_{i=1}^\infty$ be a sequence in $Y\times_TZ$ converging to a point $(y,z)$ and let $x\in X$ be such that $\chi_\mathcal{D}(x)=(y,z)$. Since $\mathop{\mathrm{cc}}(\mathcal{D})$ is soft (and therefore open), there exists a sequence $(A_i)_{i=1}^\infty$ in $\mathop{\mathrm{cc}}(X)$ such that $(f(A_i),g(A_i))=(\{y_i\},\{z_i\})$, for every $i$, and $(A_i)_{i=1}^\infty$ converges to $\{x\}$ in $\mathop{\mathrm{cc}}(X)$. Choose arbitrary $x_i\in A_i$; then $(f(x_i),g(x_i))=(y_i,z_i)$, for every $i$, and $(x_i)_{i=1}^\infty$ converges to $x$ in $X$. This shows that the map $\chi_{\mathcal{D}}$ is open. Now the map $\chi_{\mathcal{D}}$, being an open affine map of convex compact metrizable subspaces of locally convex spaces, is soft. This follows from the Michael Selection Theorem \cite{Mi} (see e.g. \cite{S}). \end{proof} \section{Hyperspaces $\mathop{\mathrm{cc}}(X)$ homeomorphic to Tychonov cubes} We are going to recall some definitions and results related to the Shchepin Spectral Theorem (see \cite{S} for details).
In what follows, an {\it inverse system} $\mathcal S= \{X_\alpha,p_{\alpha\beta};\mathcal A\}$ satisfies the~following conditions: \begin{enumerate} \item[1)] $X_\alpha$ are compact Hausdorff spaces; \item[2)] $p_{\alpha\beta}$ are surjective; \item[3)] the partially ordered set $\mathcal A$ (by $\le$) is directed, i.e., for every $\alpha,\beta\in\mathcal A$ there exists $\gamma\in\mathcal A$ with $\alpha\le\gamma$, $\beta\le\gamma$. \end{enumerate} An inverse system $\mathcal S= \{X_\alpha,p_{\alpha\beta};\mathcal A\}$ is called {\it open\/} if all the maps $p_{\alpha\beta}$ are open. An inverse system $\mathcal{S}=\{X_\alpha,p_{\alpha\beta};\mathcal A\}$ is called {\it continuous\/} if for every $\alpha\in\mathcal A$ we have $X_\alpha=\varprojlim\{X_{\alpha '},p_{\alpha '\beta '};\alpha ',\beta '<\alpha\}$. By $\mathrm{w}(X)$ we denote the {\it weight\/} of a~space $X$. An inverse system $\mathcal S=\{X_\alpha,p_{\alpha\beta};\mathcal A\}$ is called a~$\tau$-{\it system}, $\tau$ being a~car\-di\-nal number, if the~following holds: \begin{enumerate} \item[1)] the directed set $\mathcal A$ is $\tau$-complete, i. e. every chain of cardinality $\le\tau$ in $\mathcal A$ has a least upper bound; \item[2)] $\mathcal S$ is continuous; \item[3)] $\mathrm{w}(X_\alpha)\le\tau$, for every $\alpha\in\mathcal A$. \end{enumerate} If $\tau=\omega$, we use the terms $\sigma$-{\it complete\/} and $\sigma$-{\it system.\/} For every set $A$, we denote the~family of all countable subsets of $A$ ordered by inclusion by $\mathcal P_\omega(A)$. A standard way to represent a compact Hausdorff space $X$ as a limit of a $\sigma$-system is to embed it into a Tychonov cube $I^\tau$, for some $\tau$. For any countable $A\subset\tau$, let $X_A=p_A(X)$, where $p_A\colon I^\tau\to I^A$ denotes the projection.
In this way we obtain an inverse system $\mathcal S=\{X_A,p_{AB};\mathcal P_\omega(\tau)\}$, where, for $A\supset B$, $p_{AB}\colon X_A\to X_B$ denotes the (unique) map with the property $p_B|X=p_{AB}(p_A|X)$. The resulting inverse system $\mathcal S$ is a $\sigma$-system and $X=\varprojlim\mathcal S$. If $X$ is a compact convex subset of a locally convex space, we can affinely embed $X$ into $I^\tau$, for some $\tau$. The above construction gives us an inverse $\sigma$-system $\mathcal S$ in the category ${\mathbf{Conv}}$ such that $X=\varprojlim\mathcal S$. In the sequel, we will use the well-known fact that the functor $\mathop{\mathrm{cc}}$ is continuous in the sense that it commutes with the limits of inverse systems. A compact Hausdorff space $X$ is {\em openly generated} if $X$ is the limit of an inverse $\sigma$-system with open short projections. The {\em absolute retracts} (ARs) are considered in the class of compact Hausdorff spaces. \begin{thm}\label{t:1} Let $X$ be a convex compact subset of a locally convex space. Then the space $\mathop{\mathrm{cc}}(X)$ is an~absolute retract if and only if $X$ is openly generated and of weight $\le\omega_1$. \end{thm} \begin{proof} If $X$ is openly generated and of weight $\le\omega_1$, then $X$ is homeomorphic to $\varprojlim\mathcal S$, where $\mathcal S=\{X_\alpha,p_{\alpha\beta};\omega_1\}$ is an inverse system consisting of convex compact subsets of metrizable locally convex spaces and open maps. Then $\mathop{\mathrm{cc}}(X)$ is homeomorphic to $\varprojlim\mathop{\mathrm{cc}}(\mathcal S)$. Since the spaces $\mathop{\mathrm{cc}}(X_\alpha)$ are ARs and the maps $\mathop{\mathrm{cc}}(p_{\alpha\beta})$ are soft (see Proposition \ref{p:soft}), the space $\mathop{\mathrm{cc}}(X)$ is an AR. Suppose now that $\mathop{\mathrm{cc}}(X)$ is an AR of weight $\ge\omega_2$.
It easily follows from standard results of Shchepin's theory that there exists a compact convex space $\tilde X$ of weight $\omega_2$ such that $\mathop{\mathrm{cc}}(\tilde X)$ is an AR (see \cite{S} and also \cite{Ch}, where the case of locally convex spaces is considered). We may assume that $\mathop{\mathrm{cc}}(\tilde X)=\varprojlim\mathop{\mathrm{cc}}(\tilde{\mathcal S})$, where $\tilde{\mathcal S}=\{\tilde X_\alpha,\tilde p_{\alpha\beta};\omega_2\}$ is an inverse system such that for every $\alpha<\omega_2$ the space $\mathop{\mathrm{cc}}(\tilde X_\alpha)$ is an AR and for every $\alpha,\beta$, $\beta\le\alpha<\omega_2$, the map $\mathop{\mathrm{cc}}(\tilde p_{\alpha\beta})$ is soft. In its turn, every $\tilde X_\alpha$ can be represented as $\varprojlim\tilde{\mathcal S}_\alpha$, where $\tilde{\mathcal S}_\alpha=\{\tilde X_{\alpha\gamma},\tilde q_{\gamma\delta}^\alpha;\omega_1 \}$ is an inverse system in ${\mathbf{Conv}}$ and it follows from the results of Chigogidze \cite{Ch} that for every $\alpha,\beta$, where $\beta \le \alpha<\omega_2$, the map $\tilde p_{\alpha\beta}$ is the limit of a morphism $(\tilde p_{\alpha\beta}^\gamma)_{\gamma<\omega_1}\colon\tilde{\mathcal S}_\alpha\to\tilde{\mathcal S}_\beta$ such that the maps $\mathop{\mathrm{cc}}(\tilde p_{\alpha\beta}^\gamma)$ are soft and for every $\gamma\ge\delta$, $\gamma,\delta<\omega_1$, the diagram $$\xymatrix{{\mathop{\mathrm{cc}}(\tilde X_{\alpha\gamma})}\ar[r]^{\mathop{\mathrm{cc}}(\tilde p_{\alpha\beta}^\gamma)} \ar[d]_{\mathop{\mathrm{cc}}(q_{\gamma\delta}^\alpha)}& {\mathop{\mathrm{cc}}(\tilde X_{\beta\gamma})}\ar[d]^{\mathop{\mathrm{cc}}(q_{\gamma\delta}^\beta)}\\ {\mathop{\mathrm{cc}}(\tilde X_{\alpha\delta})}\ar[r]_{\mathop{\mathrm{cc}}(\tilde p_{\alpha\beta}^\delta)} & {\mathop{\mathrm{cc}}(\tilde X_{\beta\delta})}}$$ is soft. 
Since all the spaces in the above diagram are metrizable, by Lemma \ref{l:soft}, the diagram $$\xymatrix{{\tilde X_{\alpha\gamma}}\ar[r]^{\tilde p_{\alpha\beta}^\gamma} \ar[d]_{q_{\gamma\delta}^\alpha}& {\tilde X_{\beta\gamma}}\ar[d]^{q_{\gamma\delta}^\beta}\\ {\tilde X_{\alpha\delta}}\ar[r]_{\tilde p_{\alpha\beta}^\delta} & {\tilde X_{\beta\delta}}}$$ is also soft. As the limits of soft morphisms, the maps $\tilde p_{\alpha\beta}$ are soft and we conclude that the space $\tilde X$ is an absolute retract. Since the space $\tilde X$ is an AR, it contains a copy of the Tychonov cube $I^{\omega_2}$. It follows from the Shchepin Spectral Theorem that, without loss of generality, one may assume that every $\tilde X_\alpha$ contains the space $(I^{\omega_1})^\alpha$ and for every $\alpha,\beta$, where $\beta\le\alpha<\omega_2$, the map $\tilde p_{\alpha\beta}|(I^{\omega_1})^\alpha$ is the projection map of $(I^{\omega_1})^\alpha$ onto $(I^{\omega_1})^\beta$. Denote by $D$ the Aleksandrov supersequence of weight $\omega_1$, i.e. the one-point compactification of a discrete space of cardinality $\omega_1$. \smallskip {\bf Claim.} There exists $\alpha<\omega_2$ such that the subspace $(I^{\omega_1})^\alpha\subset \tilde X_\alpha$ contains an affinely independent copy of the space $D$. \smallskip {\it Proof of Claim.} Represent $D$ as $\{d_\gamma\mid \gamma\le\omega_1\}$, where $d_{\omega_1}$ denotes the unique non-isolated point of $D$. For $\gamma<\omega_1$, let $r_\gamma\colon D\to \{d_\delta\mid \delta\le\gamma\}\cup \{d_{\omega_1}\}$ denote the retraction that sends $\{d_\delta\mid \gamma<\delta<\omega_1\}$ into $d_{\omega_1}$. Define by transfinite induction maps $f_\gamma\colon D\to (I^{\omega_1})^{\alpha_\gamma}\subset \tilde X_{\alpha_\gamma}$, where $\gamma<\omega_1$ and $\alpha_\gamma<\omega_2$, so that $\alpha_{\gamma}\le\alpha_{\gamma'}$ and $\tilde p_{\alpha_{\gamma'}\alpha_\gamma}f_{\gamma'}=f_\gamma$ for every $\gamma\le\gamma'$. 
Let $f_0\colon D\to (I^{\omega_1})^{\alpha_0}\subset \tilde X_{\alpha_0}$ be an arbitrary constant map, for some $\alpha_0<\omega_2$. Suppose that, for some $\delta<\omega_1$, maps $f_\gamma$ are already defined for every $\gamma<\delta$ so that $f_\gamma=i_\gamma r_\gamma$ for some embedding $i_\gamma\colon r_\gamma(D)\to \tilde X_{\alpha_\gamma}$. If $\delta$ is a limit ordinal, let $\alpha_\delta=\sup\{\alpha_\gamma\mid\gamma<\delta\}$ and $f_\delta=\varprojlim \{f_\gamma\mid\gamma<\delta\}$. If $\delta=\delta'+1$, let $\alpha_\delta=\alpha_{\delta'}+1$ and find an embedding $i_\delta\colon r_\delta(D)\to (I^{\omega_1})^{\alpha_\delta}\subset \tilde X_{\alpha_\delta}$ such that $\tilde p_{\alpha_{\delta}\alpha_{\delta'}}i_\delta=i_{\delta'}$ and $\tilde p_{\alpha_{\delta}\alpha_{\delta'}}i_\delta(d_\delta)=i_{\delta'} (d_{\delta'})$. Put $f_\delta=i_\delta r_\delta$. Finally, let $\alpha=\sup\{\alpha_\gamma\mid\gamma<\omega_1\}$ and $f=\varprojlim \{f_\gamma\mid\gamma<\omega_1\}$. The Claim is thus proved. \smallskip We now return to the proof of the theorem. Without loss of generality, we assume that $D\subset (I^{\omega_1})^\alpha\subset \tilde X_\alpha$ and $D$ is affinely independent in $\tilde X_\alpha$. Recall that $h(D)$ denotes the closed convex hull of $D$ in $\tilde X_\alpha$. We are going to show that the space $(\mathop{\mathrm{cc}}(\tilde p_{\alpha+1,\alpha}))^{-1}(h(D))$ does not satisfy the Souslin condition. There exist two maps $s_1,s_2\colon D\to \tilde X_{\alpha+1}$ such that $\tilde p_{\alpha+1,\alpha}s_1=\tilde p_{\alpha+1,\alpha}s_2=1_{D}$ and $s_1(D)\cap s_2(D)=\emptyset$. Let $U_1,U_2$ be neighborhoods of $s_1(D)$ and $s_2(D)$ respectively such that $\bar U_1\cap\bar U_2=\emptyset$. For every isolated point $y\in D$ let $V_y$ be a neighborhood of $y$ in $\tilde X_\alpha$ such that $\bar V_y\cap h(D\setminus\{y\})=\emptyset$.
Let $$W_y=\langle \tilde X_{\alpha+1}\setminus (\bar U_2\cap\tilde p_{\alpha+1,\alpha}^{-1}(D\setminus\{y\})), U_2\cap \tilde p_{\alpha+1,\alpha}^{-1}(\bar V_y)\rangle.$$ We are going to show that $\mathop{\mathrm{cc}}(\tilde p_{\alpha+1,\alpha})^{-1}(h(D))\cap W_y\neq\emptyset$. To this end, consider the set $B=h(s_1(D\setminus\{y\})\cup\{s_2(y)\})$. Obviously, $B\in\mathop{\mathrm{cc}}(\tilde p_{\alpha+1,\alpha})^{-1}(h(D))$ and $s_2(y)\in B\cap U_2\cap \tilde p_{\alpha+1,\alpha}^{-1}(\bar V_y)$. In addition, for every $z\in D\setminus\{d_{\omega_1}\}$, $z\neq y$, we have $B\cap \tilde p_{\alpha+1,\alpha}^{-1}(z)=\{s_1(z)\}$, therefore $B\subset \tilde X_{\alpha+1}\setminus (\bar U_2\cap\tilde p_{\alpha+1,\alpha}^{-1}(D\setminus\{y\}))$. We conclude that $B\in W_y$. It remains to prove that for every $y,z\in D\setminus\{d_{\omega_1}\}$, $y\neq z$, we have $W_y\cap W_z\cap \mathop{\mathrm{cc}}(\tilde p_{\alpha+1,\alpha})^{-1}(h(D))=\emptyset$. Indeed, for any $A\in W_y\cap W_z\cap \mathop{\mathrm{cc}}(\tilde p_{\alpha+1,\alpha})^{-1}(h(D))$ we would have $A\cap \tilde p_{\alpha+1,\alpha}^{-1}(y)\subset \tilde p_{\alpha+1,\alpha}^{-1}(y)\setminus \bar U_2$ and, on the other hand, $A\cap \tilde p_{\alpha+1,\alpha}^{-1}(y)\subset U_2$, a contradiction. We therefore conclude that $$\{W_y\cap \mathop{\mathrm{cc}}(\tilde p_{\alpha+1,\alpha})^{-1}(h(D))\mid y\in D\setminus\{d_{\omega_1}\}\}$$ is a family of nonempty disjoint open subsets in $\mathop{\mathrm{cc}}(\tilde p_{\alpha+1,\alpha})^{-1}(h(D))$. Since the space $\mathop{\mathrm{cc}}(\tilde p_{\alpha+1,\alpha})^{-1}(h(D))$ does not satisfy the Souslin condition, we obtain that $\mathop{\mathrm{cc}}(\tilde p_{\alpha+1,\alpha})^{-1}(h(D))$ is not an AR and hence the map $\mathop{\mathrm{cc}}(\tilde p_{\alpha+1,\alpha})$ is not a soft map. This contradiction demonstrates that $\mathrm{w}(X)\le\omega_1$. We are going to show that $X$ is openly generated.
Since $\mathop{\mathrm{cc}}(X)$ is an AR of weight $\omega_1$, there exists an inverse system $\mathcal S=\{X_\alpha,p_{\alpha\beta};\omega_1\}$ consisting of compact metrizable convex spaces and affine maps such that $\mathop{\mathrm{cc}}(X)=\varprojlim \mathop{\mathrm{cc}}(\mathcal S)$. Applying Shchepin's Spectral Theorem, we may additionally assume that all the maps $\mathop{\mathrm{cc}}(p_{\alpha\beta})$, $\beta\le\alpha<\omega_1$, are soft. Since soft maps are open, Lemma \ref{l:op} implies that the maps $p_{\alpha\beta}$, $\beta\le\alpha<\omega_1$, are open. \end{proof} \begin{thm} Let $X$ be a convex compact subset of a locally convex space. Then the space $\mathop{\mathrm{cc}}(X)$ is homeomorphic to $I^{\omega_1}$ if and only if $X$ is homeomorphic to $I^{\omega_1}$. \end{thm} \begin{proof} We use the following characterization of the Tychonov cube $I^\tau$, $\tau>\omega$, due to Shchepin \cite{S}: a compact Hausdorff space $X$ of weight $\tau>\omega$ is homeomorphic to the Tychonov cube $I^\tau$ if and only if $X$ is a character homogeneous absolute retract. Recall that a space is called {\it character homogeneous\/} if the characters of all of its points are equal. If the weight of $X$ is $\omega_1$, then it easily follows from the Shchepin Spectral Theorem \cite{S} that $X$ can be represented as $\varprojlim\mathcal S$, where $\mathcal S=\{X_\alpha,p_{\alpha\beta};\omega_1\}$ is an inverse system consisting of convex compact metrizable subsets in locally convex spaces and affine continuous maps. Since the functor $\mathop{\mathrm{cc}}$ is continuous (see, e.g. \cite{N}), we obtain that $\mathop{\mathrm{cc}}(X)=\varprojlim\{\mathop{\mathrm{cc}}(X_\alpha),\mathop{\mathrm{cc}}(p_{\alpha\beta});\omega_1\}$.
Since $\mathop{\mathrm{cc}}(X_\alpha)$ is an absolute retract (see \cite{IZ}) and, by Proposition \ref{p:soft}, the map $\mathop{\mathrm{cc}}(p_{\alpha\beta})$ is soft for every $\alpha,\beta<\omega_1$, $\alpha\ge\beta$, we apply a result of Shchepin (see \cite{S}) to derive that $\mathop{\mathrm{cc}}(X)$ is an absolute retract. If $X$ is character homogeneous, then we can in addition assume that no projection $p_{\alpha\beta}$ possesses one-point preimages. By Lemma \ref{l:3}, the maps $\mathop{\mathrm{cc}}(p_{\alpha\beta})$ do not possess one-point preimages and therefore $\mathop{\mathrm{cc}}(X)$ is character homogeneous. By the mentioned result of Shchepin, $\mathop{\mathrm{cc}}(X)$ is homeomorphic to $I^{\omega_1}$. If $\mathop{\mathrm{cc}}(X)$ is homeomorphic to $I^{\omega_1}$, then there exists an inverse system $\mathcal S=\{X_\alpha,p_{\alpha\beta};\omega_1\}$ consisting of compact metrizable convex spaces and open affine maps such that $\mathop{\mathrm{cc}}(X)=\varprojlim \mathop{\mathrm{cc}}(\mathcal S)$. Applying Shchepin's Spectral Theorem, we may additionally assume that all the maps $\mathop{\mathrm{cc}}(p_{\alpha\beta})$, $\beta\le\alpha<\omega_1$, are soft and do not possess points with one-point preimage. It is then evident that the maps $p_{\alpha\beta}$, $\beta\le\alpha<\omega_1$, do not possess points with one-point preimage. Applying Lemma \ref{l:op} we conclude that the maps $p_{\alpha\beta}$, $\beta\le\alpha<\omega_1$, are open and therefore, by the Michael Selection Theorem, soft. Then $X$ is a character homogeneous AR of weight $\omega_1$. By the cited characterization theorem for $I^{\omega_1}$, the space $X$ is homeomorphic to $I^{\omega_1}$. \end{proof} \section{Cone over Tychonov cube} Define the {\it cone functor} $\mathop{\mathrm{cone}}$ in the category ${\mathbf{Conv}}$ as follows. Given an object $X$ in ${\mathbf{Conv}}$, i.e. 
a compact convex subset $X$ in a locally convex space $L$, let $\mathop{\mathrm{cone}}(X)$ be the convex hull of the set $X\times\{0\}\cup\{(0,1)\}$ in $L\times\mathbb R$. For a morphism $f\colon X\to Y$ in ${\mathbf{Conv}}$ define $\mathop{\mathrm{cone}}(f)\colon \mathop{\mathrm{cone}}(X)\to\mathop{\mathrm{cone}}(Y)$ as the only affine continuous map that extends $f\times\{0\}\colon X\times\{0\}\to Y\times\{0\}$ and sends $(0,1)\in \mathop{\mathrm{cone}}(X)$ to $(0,1)\in \mathop{\mathrm{cone}}(Y)$. We will need the following notion. A map $f\colon X\to Y$ is called a {\em trivial $Q$-bundle} if there exists a homeomorphism $g\colon X\to Y\times Q$ such that $f=\mathop{\mathrm{pr}}_1g$. The following statement is a characterization theorem for the space $\mathop{\mathrm{cone}}(I^{\omega_1})$ among the convex compact spaces. \begin{prop}\label{p:cone} A convex compactum $X$ is homeomorphic to the space $\mathop{\mathrm{cone}}(I^{\omega_1})$ if and only if $X$ satisfies the properties: \begin{enumerate} \item $X$ is an AR; \item $\mathrm{w}(X)=\omega_1$; and \item there exists a unique point $x\in X$ of countable character. \end{enumerate} \end{prop} \begin{proof} Obviously, if a convex compactum $X$ is homeomorphic to $\mathop{\mathrm{cone}}(I^{\omega_1})$, then $X$ satisfies properties 1)--3). Suppose now that $X$ satisfies 1)--3). Then $X$ is homeomorphic to the limit of a continuous inverse system $\mathcal S=\{X_\alpha,p_{\alpha\beta};\omega_1\}$ in ${\mathbf{Conv}}$ that satisfies the properties \begin{enumerate} \item[(i)] $X_\alpha$ is a convex metrizable compactum for every $\alpha$; \item[(ii)] $p_{\alpha\beta}$ is an open affine map for every $\alpha\ge\beta$; and \item[(iii)] $\{x_\beta\}=\{y\in X_\beta\mid\ |p_{\alpha\beta}^{-1}(y)|=1\}$. 
\end{enumerate} Passing, if necessary, to a subsystem of $\mathcal S$, one can assume that for every $\alpha$ and every compact subset $K$ of $X_\alpha\setminus\{x_\alpha\}$ the map $$p_{\alpha+1,\alpha}|p_{\alpha+1,\alpha}^{-1}(K)\colon p_{\alpha+1,\alpha}^{-1}(K)\to K$$ satisfies the condition of fibrewise disjoint approximation. The Toru\'nczyk-West characterization theorem \cite{TW} implies that, if $K$ is an AR, the map $p_{\alpha+1,\alpha}|p_{\alpha+1,\alpha}^{-1}(K)$ is a trivial $Q$-bundle and therefore the map $$p_{\alpha+1,\alpha}|p_{\alpha+1,\alpha}^{-1} (X_\alpha\setminus\{x_\alpha\})=p_{\alpha+1,\alpha}| (X_{\alpha+1}\setminus\{x_{\alpha+1}\}),$$ being a locally trivial $Q$-bundle, is a trivial $Q$-bundle (see \cite{C1}). Therefore, the map $p_{\alpha+1,\alpha}$ is homeomorphic to the projection map $\mathop{\mathrm{pr}}_{23}\colon Q\times Q\times [0,1)\to Q\times [0,1)$ (that $X_\alpha\setminus\{x_\alpha\}$ is homeomorphic to $Q\times [0,1)$ follows from the fact that the spaces $Q$ and $\mathop{\mathrm{cone}}(Q)$ are homeomorphic -- see \cite{C}). Passing to the one-point compactifications of these maps we obtain the commutative diagram $$\xymatrix{X_{\alpha+1}\ar[ddd]_{p_{\alpha+1,\alpha}}\ar[rrr]& & &\mathop{\mathrm{cone}}(Q\times Q)\ar[ddd]^{\mathop{\mathrm{cone}}(\mathop{\mathrm{pr}}_2)}\\ &X_{\alpha+1}\setminus\{x_{\alpha+1}\}\ar[r]\ar@{^{(}->}[lu] \ar[d]_{p_{\alpha+1,\alpha}|\dots}&Q\times Q\times[0,1)\ar[d]^{\mathop{\mathrm{pr}}_{23}}\ar@{^{(}->}[ur]& \\ &X_{\alpha}\setminus\{x_{\alpha}\}\ar@{^{(}->}[dl]\ar[r]&Q\times[0,1)\ar@{^{(}->}[dr]&\\ X_{\alpha}\ar[rrr]& & & \mathop{\mathrm{cone}}(Q)}$$ in which the horizontal arrows are homeomorphisms. Therefore $X$ and $\mathop{\mathrm{cone}}(I^{\omega_1})$ are homeomorphic. \end{proof} \begin{thm}\label{t:cone} Let $X$ be an object of the category ${\mathbf{Conv}}$. 
Then the space $\mathop{\mathrm{cc}}(X)$ is homeomorphic to the cone over the Tychonov cube, $\mathop{\mathrm{cone}}(I^{\omega_1})$, if and only if $X$ is homeomorphic to the space $\mathop{\mathrm{cone}}(I^{\omega_1})$. \end{thm} \begin{proof} Suppose that a convex compact space $X$ is an absolute retract of weight $\omega_1$ with exactly one point $x$ of countable character. It follows from the Shchepin Spectral Theorem (\cite{S}; see also \cite{Ch}) that $X$ can be represented as $\varprojlim\mathcal S$, where $\mathcal S=\{X_\alpha,p_{\alpha\beta};\omega_1\}$ is an inverse system in which every $X_\alpha$ is a metrizable convex compactum and every $p_{\alpha\beta}$, $\alpha\ge\beta$, is an affine map. Denote by $p_\alpha\colon X\to X_\alpha$ the limit projections and let $x_\alpha=p_\alpha(x)$. Passing, if necessary, to a subsystem of $\mathcal S$, one can additionally assume that for every $\alpha\ge\beta$ we have $\{x_\beta\}=\{y\in X_\beta\mid\ |p_{\alpha\beta}^{-1}(y)|=1\}$. Then for every $\alpha\ge\beta$, the map $\mathop{\mathrm{cc}}(p_{\alpha\beta})$ is a soft map and, as in the proof of Lemma \ref{l:3}, one can show that $$\{\{x_\beta\}\}=\{A\in \mathop{\mathrm{cc}}(X_\beta)\mid\ |\mathop{\mathrm{cc}}(p_{\alpha\beta})^{-1}(A)|=1\}.$$ We conclude that the space $\mathop{\mathrm{cc}}(X)=\varprojlim\mathop{\mathrm{cc}}(\mathcal S)$ satisfies the conditions of Proposition \ref{p:cone} and therefore is homeomorphic to the space $\mathop{\mathrm{cone}}(I^{\omega_1})$. Now, if $\mathop{\mathrm{cc}}(X)$ is homeomorphic to $\mathop{\mathrm{cone}}(I^{\omega_1})$, it follows from Theorem \ref{t:1} that $X$ is an AR of weight $\omega_1$. Note that for every point $x$ of countable character in $X$, the point $\{x\}$ is of countable character in $\mathop{\mathrm{cc}}(X)$; since $\mathop{\mathrm{cone}}(I^{\omega_1})$ contains exactly one point of countable character, we conclude that there is a unique point of countable character in $X$. By Proposition \ref{p:cone}, $X$ is homeomorphic to $\mathop{\mathrm{cone}}(I^{\omega_1})$. 
\end{proof} \section{Remarks and open problems} \begin{problem} Let $f\colon X\to Y$ be an affine continuous map of metrizable convex compacta in locally convex spaces such that $\dim f^{-1}(y)\ge2$ for every $y\in Y$. Is the map $\mathop{\mathrm{cc}}(f)\colon \mathop{\mathrm{cc}}(X)\to\mathop{\mathrm{cc}}(Y)$ homeomorphic to the projection map $\mathop{\mathrm{pr}}_1\colon Q\times Q\to Q$? \end{problem} Note that there is an open map $f\colon X\to Y$ of metrizable compacta with infinite fibers such that the map $P(f)\colon P(X)\to P(Y)$ is not homeomorphic to the projection map $\mathop{\mathrm{pr}}_1\colon Q\times Q\to Q$ (see \cite{D}). (Recall that $P$ denotes the probability measure functor.) \begin{problem} Does every compact convex AR of weight $\tau\ge\omega_1$ contain an affine copy of the Tychonov cube $I^\tau$? \end{problem} It is known that every compact Hausdorff AR of weight $\tau\ge\omega_1$ contains a topological copy of the Tychonov cube $I^\tau$ (see \cite{S1}). The theory of nonmetrizable noncompact absolute extensors, which is, in some sense, parallel to that of compact absolute extensors, was elaborated by Chigogidze \cite{Ch,Ch1}. One can also consider the hyperspaces of compact convex subsets of the spaces $\mathbb R^\tau$ and conjecture that, for uncountable $\tau$, the hyperspace $\mathop{\mathrm{cc}}(\mathbb R^\tau)$ is homeomorphic to $\mathbb R^\tau$ if and only if $\tau=\omega_1$.
\section{Introduction} Light-harvesting complexes consist of several chromophores mutually coupled by dipolar interactions and residing within a protein scaffold. Due to this mutual coupling, light-induced excitations on individual chromophores (sites) can be transferred from site to site. Excitation energy transfer (EET) has been an interesting subject for decades, not only for its phenomenal efficiency but also for its fundamental role in Nature \cite{foster48}. Recent experiments on exciton dynamics in photosynthetic bio-molecules (for example, purple bacteria and the Fenna-Matthews-Olson complex) have brought a long-standing question back into scientific focus: whether nontrivial quantum coherence effects exist in natural biological systems under physiological conditions \cite{lee07,adolphs06}. In fact, evidence of quantum coherence has been found, suggesting that nontrivial quantum effects may be at the heart of the remarkable excitation transport efficiency \cite{engel07}. Inspired by these experimental results, several studies have attempted to unravel the precise role of quantum coherence in the EET of light-harvesting complexes \cite{mohseni08,plenio08,caruso09,chin10,olaya08,ishizaki09,yang10,hoyer09,sarovar11}, and environmental decoherence and noise have been found to play a crucial role \cite{mohseni08,plenio08,caruso09,chin10,caruso10,shabani11}. In these studies, the system is assumed to be initialized with a single excitation in site 1. This may not be realized precisely under experimental or natural operating conditions. Considering that primitive photosynthetic cells appeared over three billion years before any more complex life-forms, it is not illogical to assume that, under the action of evolution, Nature has designed a photosynthetic mechanism that uses minimal resources to gain maximal energy. 
From this perspective, a model that allows the number of excitations in the complex to be controlled at any time should be taken into account. \begin{figure} \includegraphics*[width=0.6\columnwidth, height=0.7\columnwidth]{fmof1.eps} \caption{(A) The disordered structure of the Fenna-Matthews-Olson (FMO) complex. The FMO complex acts as an energy transfer channel in green sulphur bacteria, guiding excitations from the light-harvesting antenna at site 1 to the reaction center at site 8. This picture is reproduced from \cite{shabani11}. (B) The model used to describe the FMO complex. Arrows between the cavities represent cavity-cavity couplings. Only couplings above 15 $\mbox{cm}^{-1}$ are shown. The reaction center is modeled by the cavity, numbered 8, that is irreversibly coupled to site 3. It is worth bearing in mind that the coupling between cavity 1 and cavity 6 is weak with respect to the other couplings shown in the figure.} \label{fig1} \end{figure} This paper extends the theoretical formulation presented in a recent paper \cite{caruso09} to the multi-exciton scenario and further examines issues relevant for realistic light-harvesting complexes. To this end, we identify the FMO complex with coupled cavities and introduce two fundamental decoherence mechanisms (i.e., dephasing and dissipation) into the system. We calculate the excitation transfer efficiency both within a semi-classical approach and with the full quantum theory. The decoherence rates that are optimal for the ETE are found by numerical simulation of the equations of motion. With the optimal decoherence rates, the time evolution of the population on each site is presented. We find that the optimal decoherence rates depend only weakly on the initial excitation number on site 1. Non-local decoherence only slightly alters the ETE, indicating that local decoherence dominates the energy transfer in the FMO complex. The paper is organized as follows. In Sec. 
{\rm II} we introduce the theoretical model for the FMO dynamics and define the excitation transfer efficiency (ETE) used here. Then, we analyze the ETE and the dynamics in the FMO complex with both the semi-classical and the quantum theory in Sec. {\rm III}. Finally, we conclude in Sec. {\rm IV}. \section{Model description} In photosynthetic antennae, sunlight is absorbed by pigments and the excitation energy is transferred to the photosynthetic reaction center. The locations of these processes are physically and physiologically separated. Two-level systems model a pigment carrying a single exciton well, but are insufficient for pigments carrying many excitations. To describe the light-harvesting process with many excitations on each site, we model the FMO complex by a coupled 8-cavity system, see Fig. \ref{fig1}. The effective dynamics can be described by a Hamiltonian describing the coherent exchange of excitations between chromophores or sites, \begin{equation} H=\sum_{j=1}^7\omega_j a_j^{\dagger}a_j+\sum_{i,j=1}^7 g_{ij}(a_i^{\dagger} a_j+ a_ia_j^{\dagger}),\; i\neq j\; , \end{equation} where $a_j^{\dagger}$ ($a_j$) are the creation (annihilation) operators for site $j$, $\omega_j$ is the local site excitation energy, and $g_{ij}$ denotes the hopping rate of an excitation between sites $i$ and $j$. In the site basis, we follow \cite{adolphs061} and employ the Hamiltonian matrix elements (in units of $\mbox{cm}^{-1}$) \begin{widetext} \begin{equation} H \!=\!\! \left(\!\!\begin{array}{rrrrrrr} \mathbf{215} & \!\mathbf{-104.1} & 5.1 & -4.3 & 4.7 & \mathbf{-15.1} & -7.8 \\ \!\mathbf{-104.1} & \mathbf{220.0} &\mathbf{ 32.6} & 7.1 & 5.4 & 8.3 & 0.8 \\ 5.1 & \mathbf{ 32.6 }& 0.0 & \mathbf{-46.8} & 1.0 & -8.1 & 5.1 \\ -4.3 & 7.1 &\!\mathbf{-46.8} & \mathbf{125.0} &\! \mathbf{-70.7} &\! 
-14.7 & \mathbf{ -61.5}\\ 4.7 & 5.4 & 1.0 & \!\mathbf{-70.7} & \mathbf{450.0} & \mathbf{ 89.7} & -2.5 \\ \mathbf{-15.1} & 8.3 & -8.1 & -14.7 & \mathbf{89.7} & \mathbf{330.0} & \mathbf{ 32.7} \\ -7.8 & 0.8 & 5.1 & \mathbf{-61.5} & -2.5 & \mathbf{32.7} & \mathbf{280.0} \end{array}\!\! \right). \label{ha} \end{equation} \end{widetext} Here the zero energy has been shifted by 12230 $\mbox{cm}^{-1}$ for all sites, corresponding to a wavelength of $\sim 800 \mbox{nm}$. We note that in units of $\hbar=1$, 1 ps$^{-1}$=5.3 cm$^{-1}$. Then, by dividing $g_{ij}$ and $\omega_{j}$ by 5.3, all elements of the Hamiltonian are rescaled in units of ps$^{-1}$. We can see from the Hamiltonian $H$ that in the FMO complex there are two dominant EET pathways: $1\rightarrow 2\rightarrow 3$ and $6\rightarrow (5,7)\rightarrow 4 \rightarrow 3$ (see Fig. \ref{fig1}). Although the nearest-neighbor terms dominate the site-to-site coupling, significant hopping matrix elements exist between more distant sites. This indicates that coherent transport by itself may not explain why the excitation energy transfer is so efficient. To obtain a high energy transfer efficiency in the EET process, the forward and backward energy transfer rates, as well as the dissipation induced by the {\it hot and wet} surrounding environment, need to satisfy a detailed balance condition. In the weak dissipation regime, the Lindblad master equation that reliably describes the dissipative exciton dynamics reads \begin{eqnarray} \frac{d\rho}{dt} &=& -i[H,\rho] + {\cal L}_{deph}(\rho) + {\cal L}_{diss}(\rho)\nonumber\\ && + {\cal L}_{NLdeph}(\rho) + {\cal L}_{NLdiss}(\rho) + {\cal L}_{8}(\rho)\;, \label{masterE} \end{eqnarray} where ${\cal L}_{diss}(\rho)$ describes dissipation, ${\cal L}_{deph}(\rho)$ denotes dephasing, and ${\cal L}_{NLdeph}(\rho)$ and ${\cal L}_{NLdiss}(\rho)$ represent non-local dephasing and dissipation, respectively. 
Here, \begin{equation} {\cal L}_{diss}(\rho)=\sum_{j=1}^7 \Gamma_{j}(2a_j\rho a_j^{\dagger}-\rho a_j^{\dagger}a_j-a_j^{\dagger}a_j\rho)\;, \end{equation} \begin{equation} {\cal L}_{deph}(\rho)=\sum_{j=1}^7\gamma_j(2n_j{\rho}n_j-\rho{n_j}n_j-n_jn_j\rho), \end{equation} \begin{equation} {\cal L}_{NLdiss}(\rho)=\sum_{i,j=1}^7 \Gamma_{ij}(2a_i\rho a_j^{\dagger}-\rho a_j^{\dagger}a_i-a_j^{\dagger}a_i\rho),i\neq{j},\label{Ndiss} \end{equation} and \begin{equation} {\cal L}_{NLdeph}(\rho)=\sum_{i,j=1}^7\gamma_{ij}(2n_i{\rho}n_j -\rho{n_j}n_i-n_jn_i\rho),i\neq{j},\label{Ndeph} \end{equation} where $n_j=a_j^{\dagger}a_j$ denotes the exciton number operator for site $j$. The local decoherence may come from the couplings of the sites to individual environments, whereas the non-local decoherence ${\cal L}_{NLdeph}(\rho)$ and ${\cal L}_{NLdiss}(\rho)$ can be understood as resulting from the interaction of the sites with a common environment. From a quantum physical perspective, environmental radiation whose wavelength is larger than the length scale of the FMO molecule ($\sim 8$ nm) can reasonably be assumed to act as a common environment; the non-local decoherence considered here is therefore physically reasonable. Recent work \cite{adolphs06} suggests that it is site 3 that couples to the reaction center. The total transfer of excitations into the reaction center is measured by the population in the center, numbered 8, which is populated through an irreversible decay process from site 3 with rate $\Gamma_{8}$. We phenomenologically model this irreversible process by the Lindblad operator \begin{equation} {\cal L}_{8}(\rho)=\Gamma_{8}(2a_8^{\dagger} a_3\rho a_3^{\dagger}a_8-\rho a_3^{\dagger}a_3a_8a_8^{\dagger}- a_3^{\dagger}a_3a_8a_8^{\dagger}\rho)\;. \end{equation} To match experimental conditions, we assume there are $N_0$ excitations initially on site 1. The model is completed by introducing a quantity by which we measure the energy transfer efficiency. 
The rescaled population in the reaction center, \begin{equation} p_8=\frac{n_8(T)}{N_0}=\frac{\mbox{Tr}(a_8^{\dagger}a_8\rho(T))}{N_0}, \label{effi} \end{equation} evaluated at a specific time $T$, serves this purpose. \begin{figure} \begin{minipage}[t]{0.48\linewidth} \includegraphics*[width=1\columnwidth, height=0.7\columnwidth]{EffG8_1.eps} \end{minipage} \begin{minipage}[t]{0.48\linewidth} \includegraphics*[width=1\columnwidth, height=0.7\columnwidth]{EffG8_2.eps} \end{minipage} \caption{Energy transfer efficiency versus $\Gamma_8$ without decoherence ($\Gamma_j=\gamma_j=\Gamma_{ij}=\gamma_{ij}=0,$ $i,j$=1,2,...,7): (a) semi-classical approach; (b) quantum theory. $N_0=100$ is taken for this plot. }\label{fig2} \end{figure} \begin{figure} \begin{minipage}[t]{0.48\linewidth} \includegraphics*[width=1\columnwidth, height=0.8\columnwidth]{EffT1.eps} \end{minipage} \begin{minipage}[t]{0.48\linewidth} \includegraphics*[width=1\columnwidth, height=0.8\columnwidth]{EffT2.eps} \end{minipage} \caption{(color online) Efficiency as a function of time; only local decoherence is considered. (a) Results from the semi-classical approach: the black solid line is the null-decoherence case ($\Gamma_j=\gamma_j=0$), while blue circles and red triangles are for $\Gamma_j=0,$ $\gamma_j=0.01$ and $\Gamma_j=0,$ $\gamma_j=0.1,$ respectively, $j=1,2,...,7$. Other parameters are $\Gamma_8=1.94$ and $N_0=100$. (b) Efficiency from the quantum theory. The blue solid line shows the optimal efficiency with rates ($\gamma_1, \gamma_2, \gamma_3, \gamma_4, \gamma_5, \gamma_6, \gamma_7$) =(0.74, 24, 0, 5.2, 50.6, 0, 15), $\Gamma_8=0.32$, and the other $\Gamma_j=0.0005$; the efficiency reaches its maximum of 91.77$\%$, optimized at $T=5$ ps. Red circles and black triangles represent the null-decoherence case with $\Gamma_8=0.9$ (efficiency 67.06$\%$) and $\Gamma_8=0.32$ (efficiency 60.73$\%$), respectively. 
} \label{fig3} \end{figure} Note that the reaction center is directly connected only to site 3, so the decay rate $\Gamma_8$ plays an essential role in the excitation transfer. In the next section, we use the decoherence rates to optimize the transfer efficiency defined in Eq. (\ref{effi}). Several cases are considered; in each case $\Gamma_8$ cannot be zero, because a vanishing $\Gamma_8$ leads to zero transfer efficiency. \section{Energy transfer efficiency} Having established the model, we now explore the energy transport in the FMO complex. Two approaches are considered. In the semi-classical approach, we use the approximation $\langle a_j^{\dagger} a_j\rangle =|\alpha_j|^2$ with $\alpha_j=\mbox{Tr}(\rho a_j).$ In the quantum treatment, we approximate $\langle a_8^{\dagger} a_8 a_3^{\dagger} a_3\rangle$ as $\langle a_8^{\dagger} a_8\rangle \langle a_3^{\dagger} a_3\rangle.$ With these approximations, we can derive the equations of motion for the system and calculate the energy transfer efficiency. We first focus on the case where only local decoherence exists, namely $\Gamma_{ij}=0$ and $ \gamma_{ij}=0$, and then move to the case with non-local decoherence. The energy transfer time is taken to be $T$ = 5 ps, the relevant time scale in recent experiments \cite{adolphs06}. Our results suggest that it is the careful interplay of quantum mechanical features and the unavoidable couplings to the environment that leads to the optimal system performance. In particular, local decoherence dominates and actually helps the excitation transfer, as we show below. \subsection{Semi-classical approach} Defining $\alpha_j=\mbox{Tr}(\rho a_j)$ and $n_8=\mbox{Tr}(a_8^{\dagger}a_8\rho)$, the master equation Eq. 
(\ref{masterE}) yields \begin{widetext} \begin{eqnarray} \dot{\alpha}_{j}&=&-i({\omega_j}{\alpha_j}+2\sum_{k\neq{j}}{g_{jk}\alpha_k})- \Gamma_j{\alpha_j}-\gamma_j{\alpha_j}-\sum_{k\neq j}\Gamma_{jk}\alpha_k,j\neq3,8,\nonumber\\ \dot{\alpha}_{3}&=&-i({\omega_3}{\alpha_3}+2\sum_{k\neq{3}}{g_{3k}\alpha_k})- \Gamma_3{\alpha_3}-\gamma_3{\alpha_3} -\sum_{k\neq3}\Gamma_{3k}\alpha_k-\Gamma_8(n_8+1)\alpha_3,\nonumber\\ \dot{n}_8&=&2\Gamma_8|\alpha_3|^2(n_8+1), \label{eofm} \end{eqnarray} \end{widetext} where the dot represents the time derivative, and the time arguments have been omitted for brevity. To obtain Eq. (\ref{eofm}), the approximation $\langle a_j^{\dagger} a_j\rangle =|\alpha_j|^2$ has been made in the last equation of Eq. (\ref{eofm}). These equations constitute a closed set governing the dynamics of the FMO complex. As shown in \cite{caruso09}, completely coherent dynamics is often not ideal for the excitation transfer from the chromophores to the reaction center, and hence coherence alone cannot explain the very high exciton transfer efficiency observed in experiments. Our model confirms this result. In fact, with $\Gamma_j=\gamma_{j}=\Gamma_{ij}=\gamma_{ij}=0$ ($i,j=1,2,...,7$), the transfer efficiency reaches its maximum of $0.625$ at $\Gamma_8=1.94$ ps$^{-1}$ (see Figs. \ref{fig2}(a) and \ref{fig3}(a)). Further numerical simulations show that, under the semi-classical approximation, neither dissipation nor dephasing can improve the energy transfer efficiency (see Fig. \ref{fig3}(a)). This is because dephasing and dissipation play the same role in the semi-classical dynamics, see Eq. (\ref{eofm}). 
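As a concrete illustration, the following self-contained Python sketch (our own, not part of the original work) integrates Eq. (\ref{eofm}) for the purely coherent case $\Gamma_j=\gamma_j=\Gamma_{jk}=0$ with $\Gamma_8=1.94$ ps$^{-1}$ and $N_0=100$ excitations initially on site 1. The fourth-order Runge-Kutta integrator and the step size are our own choices. Note that in this case Eq. (\ref{eofm}) conserves $\sum_j|\alpha_j|^2+n_8$, which provides a useful numerical check.

```python
# Sketch (not the authors' code): integrate the semi-classical equations of
# motion for the coherent case Gamma_j = gamma_j = 0, Gamma_8 = 1.94 ps^-1.
H_CM = [  # site-basis Hamiltonian, units of cm^-1
    [215.0, -104.1, 5.1, -4.3, 4.7, -15.1, -7.8],
    [-104.1, 220.0, 32.6, 7.1, 5.4, 8.3, 0.8],
    [5.1, 32.6, 0.0, -46.8, 1.0, -8.1, 5.1],
    [-4.3, 7.1, -46.8, 125.0, -70.7, -14.7, -61.5],
    [4.7, 5.4, 1.0, -70.7, 450.0, 89.7, -2.5],
    [-15.1, 8.3, -8.1, -14.7, 89.7, 330.0, 32.7],
    [-7.8, 0.8, 5.1, -61.5, -2.5, 32.7, 280.0],
]
H = [[h / 5.3 for h in row] for row in H_CM]  # 1 ps^-1 = 5.3 cm^-1
G8 = 1.94                                     # decay into the reaction center (ps^-1)

def rhs(state):
    """state = [alpha_1, ..., alpha_7, n_8]; returns the time derivatives."""
    a, n8 = state[:7], state[7]
    da = []
    for j in range(7):
        d = -1j * (H[j][j] * a[j] + 2 * sum(H[j][k] * a[k] for k in range(7) if k != j))
        if j == 2:                            # site 3 feeds the reaction center
            d -= G8 * (n8 + 1) * a[2]
        da.append(d)
    return da + [2 * G8 * abs(a[2]) ** 2 * (n8 + 1)]

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = rhs([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = rhs([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (p + 2 * q + 2 * r + w)
            for s, p, q, r, w in zip(state, k1, k2, k3, k4)]

N0 = 100
state = [complex(N0) ** 0.5] + [0j] * 6 + [0.0]  # all excitations on site 1
dt, T = 5e-4, 5.0                                # integrate up to T = 5 ps
for _ in range(int(T / dt)):
    state = rk4_step(state, dt)
efficiency = state[7] / N0
print(f"semi-classical efficiency at T = 5 ps: {efficiency:.3f}")
```

The conservation of $\sum_j|\alpha_j|^2+n_8$ holds because the coherent part of Eq. (\ref{eofm}) is a Hermitian quadratic form, while the only loss term for $\alpha_3$ exactly feeds $n_8$.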
Recalling that local dissipation always spoils the exciton transfer from the sites to the reaction center, it is not difficult to understand why decoherence cannot help the exciton transfer within the semi-classical approach. \subsection{Quantum theory} We now examine whether the quantum theory can explain the high exciton transfer efficiency. For this purpose, we approximate $\langle a_8^{\dagger} a_8 a_3^{\dagger} a_3\rangle$ as $\langle a_8^{\dagger} a_8\rangle \langle a_3^{\dagger} a_3\rangle.$ Defining ${n}_{mm}=\mbox{Tr}(\rho a_{m}^{\dagger} a_{m})$ and ${n}_{mn}=\mbox{Tr}(\rho a_{m}^{\dagger} a_{n})$, we obtain from Eq. (\ref{masterE}) \begin{widetext} \begin{eqnarray} \dot{n}_{mm}&=&-i\sum_j{g_{mj}n_{mj}}+i\sum_j{g_{mj}n_{jm}} -2\Gamma_m{n_{mm}}-\sum_j\Gamma_{mj}(n_{jm}+n_{mj}),\nonumber\\ \dot{n}_{mn}&=&i(\omega_m-\omega_n)n_{mn}+i\sum_j{g_{jm}}{n_{jn}}-i\sum_j{g_{jn}}{n_{mj}} -(\Gamma_m+\Gamma_n+\gamma_m+\gamma_n-2\gamma_{mn})n_{mn}\nonumber\\ &&-\sum_j\Gamma_{mj}n_{jn}-\sum_j\Gamma_{nj}n_{mj}-a\Gamma_8{n_{mn}}(n_{88}+1), m\neq{n},\nonumber\\ \dot{n}_{88}&=&2\Gamma_8{n_{33}}(n_{88}+1), \label{fullQ} \end{eqnarray} \end{widetext} where $a=1$ for $m=3$ or $n=3$, $a=2$ for $m=n=3$, and $a=0$ otherwise. Eq. (\ref{fullQ}) is exact when $\langle a_8^{\dagger} a_8 a_3^{\dagger} a_3\rangle$ $=$ $\langle a_8^{\dagger} a_8\rangle \langle a_3^{\dagger} a_3\rangle.$ In fact, our numerical simulations show that this is exactly the case for small exciton numbers. For instance, with $N_0=1$ or $2$, Monte Carlo simulations show that $\langle a_8^{\dagger} a_8 a_3^{\dagger} a_3\rangle-\langle a_8^{\dagger} a_8\rangle \langle a_3^{\dagger} a_3\rangle=0.$ On the other hand, our model reduces to the two-level model for the FMO complex when $N_0=1$ \cite{caruso09}, and the numerical results given by Eq. (\ref{fullQ}) are in agreement with those in \cite{caruso09}. 
It is difficult to prove $\langle a_8^{\dagger} a_8 a_3^{\dagger} a_3\rangle-\langle a_8^{\dagger} a_8\rangle \langle a_3^{\dagger} a_3\rangle=0$ for arbitrary $N_0$. Fortunately, site 3 and the reaction center 8 are connected only through the irreversible process ${\cal L}_{8}(\rho)$, and this term is unlikely to create entanglement between sites 3 and 8. Therefore, if sites 3 and 8 are initially in a separable state, they will remain unentangled at all times. For sites 3 and 8 in a separable state, $\langle a_8^{\dagger} a_8 a_3^{\dagger} a_3\rangle=\langle a_8^{\dagger} a_8\rangle \langle a_3^{\dagger} a_3\rangle$ holds true. This is not a proof, so we prefer to treat $\langle a_8^{\dagger} a_8 a_3^{\dagger} a_3\rangle=\langle a_8^{\dagger} a_8\rangle \langle a_3^{\dagger} a_3\rangle$ as an approximation. \begin{figure} \begin{minipage}[t]{0.9\linewidth} \includegraphics*[width=1\columnwidth, height=0.6\columnwidth]{Probnn1.eps} \end{minipage} \begin{minipage}[t]{0.9\linewidth} \includegraphics*[width=1\columnwidth, height=0.6\columnwidth]{Probnn2.eps} \end{minipage} \caption{Population ratio of the $j$-th site, defined by $p_j=\frac{n_{jj}}{N_0}$, as a function of time (the numbers 1,2,...,7 label the sites): (a) semi-classical approach; (b) quantum theory. Dissipation and dephasing rates are those that optimize the efficiency for initial exciton number $N_0=100$.} \label{fig4} \end{figure} We first study the case where only local decoherence exists, namely, $\Gamma_{ij}=\gamma_{ij}=0.$ By numerically solving Eq. (\ref{fullQ}), we find that without decoherence the maximal energy transfer efficiency is 67.06\% (see Fig. \ref{fig2}(b)), slightly larger than that given by the semi-classical approach. However, with the assistance of local decoherence, an efficiency above ninety percent can be obtained (see Fig. \ref{fig3}(b)). 
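The moment equations (\ref{fullQ}) with local decoherence only can likewise be integrated directly. The sketch below (again our own illustration, not the authors' code) uses the dephasing rates quoted in the text; the Runge-Kutta integrator and the step size are our choices, and we include the loss term $-2\Gamma_8 n_{33}(n_{88}+1)$ in the equation for $n_{33}$, i.e.\ the $a=2$ case of Eq. (\ref{fullQ}), which balances the gain of $n_{88}$.

```python
# Sketch (not the authors' code): integrate the moment equations with local
# decoherence only, using the dephasing rates quoted in the text.
H_CM = [  # site-basis Hamiltonian, units of cm^-1
    [215.0, -104.1, 5.1, -4.3, 4.7, -15.1, -7.8],
    [-104.1, 220.0, 32.6, 7.1, 5.4, 8.3, 0.8],
    [5.1, 32.6, 0.0, -46.8, 1.0, -8.1, 5.1],
    [-4.3, 7.1, -46.8, 125.0, -70.7, -14.7, -61.5],
    [4.7, 5.4, 1.0, -70.7, 450.0, 89.7, -2.5],
    [-15.1, 8.3, -8.1, -14.7, 89.7, 330.0, 32.7],
    [-7.8, 0.8, 5.1, -61.5, -2.5, 32.7, 280.0],
]
H = [[h / 5.3 for h in row] for row in H_CM]   # 1 ps^-1 = 5.3 cm^-1
GAMMA = [0.0005] * 7                           # local dissipation rates (ps^-1)
DEPH = [0.74, 24, 0, 5.2, 50.6, 0, 15]         # optimal local dephasing rates
G8 = 0.32                                      # rate into the reaction center

def rhs(N, n88):
    """N[m][n] ~ <a_m^dag a_n>; returns (dN/dt, dn_88/dt)."""
    dN = [[0j] * 7 for _ in range(7)]
    for m in range(7):
        for n in range(7):
            d = 1j * (H[m][m] - H[n][n]) * N[m][n]
            d += 1j * sum(H[j][m] * N[j][n] for j in range(7) if j != m)
            d -= 1j * sum(H[j][n] * N[m][j] for j in range(7) if j != n)
            if m != n:
                d -= (GAMMA[m] + GAMMA[n] + DEPH[m] + DEPH[n]) * N[m][n]
            else:
                d -= 2 * GAMMA[m] * N[m][m]
            # a = 1 if exactly one index is site 3 (index 2), a = 2 if both:
            # site 3 loses population irreversibly to the reaction center
            a = (m == 2) + (n == 2)
            d -= a * G8 * (n88 + 1) * N[m][n]
            dN[m][n] = d
    return dN, 2 * G8 * N[2][2].real * (n88 + 1)

def shift(N, dN, c):
    return [[N[m][n] + c * dN[m][n] for n in range(7)] for m in range(7)]

def rk4(N, n88, dt):
    k1N, k1n = rhs(N, n88)
    k2N, k2n = rhs(shift(N, k1N, dt / 2), n88 + dt / 2 * k1n)
    k3N, k3n = rhs(shift(N, k2N, dt / 2), n88 + dt / 2 * k2n)
    k4N, k4n = rhs(shift(N, k3N, dt), n88 + dt * k3n)
    new_N = [[N[m][n] + dt / 6 * (k1N[m][n] + 2 * k2N[m][n] + 2 * k3N[m][n] + k4N[m][n])
              for n in range(7)] for m in range(7)]
    return new_N, n88 + dt / 6 * (k1n + 2 * k2n + 2 * k3n + k4n)

N0 = 100
N = [[0j] * 7 for _ in range(7)]
N[0][0] = complex(N0)                          # N_0 excitations initially on site 1
n88, dt, T = 0.0, 1e-3, 5.0
for _ in range(int(T / dt)):
    N, n88 = rk4(N, n88, dt)
efficiency = n88 / N0
print(f"quantum-theory efficiency at T = 5 ps: {efficiency:.3f}")
```

With vanishing $\Gamma_j$ the total $\sum_m n_{mm}+n_{88}$ would be exactly conserved; with $\Gamma_j=0.0005$ ps$^{-1}$ the leakage over 5 ps stays below one percent, which serves as a sanity check on the integration.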
To obtain such a high energy transfer efficiency, we have to optimize the 15 decoherence rates ($\Gamma_j$, $\gamma_j$, and $\Gamma_8$, $j=1,2,3,...,7$), which is a time-consuming task. To save computation time, we fix the dissipation rates to $\Gamma_j=0.0005$ for each site to match the exciton lifetime measured in experiments, which is on the order of nanoseconds \cite{adolphs06} for the light-harvesting system. The optimal dephasing rates for the highest efficiency are found numerically to be ($\gamma_1, \gamma_2, \gamma_3, \gamma_4, \gamma_5, \gamma_6, \gamma_7$) =(0.74, 24, 0, 5.2, 50.6, 0, 15) and $\Gamma_8=0.32$; the corresponding efficiency is 91.77\% for an initial $N_0=100$ on site 1 (see Fig. \ref{fig3}). Further numerical simulations show that the energy transfer efficiency can reach over ninety percent for almost arbitrary exciton numbers $N_0$ (from 1 to 10000) on site 1, though the optimal dephasing rates differ. With the optimal dephasing rates, the ratio of the exciton number on each site to the total exciton number (defined as $p_m=\frac{n_{mm}}{N_0}$) is plotted in Fig. \ref{fig4}(b). We find that after a rapid increase (except for site 1), the exciton number on each site decreases, indicating that most of the excitons are transferred to the reaction center at site 8 (see Fig. \ref{fig4}(b)). For comparison, the dynamics from the semi-classical approach is also plotted (see Fig. \ref{fig4}(a)); more oscillations can be observed, leading to less population transfer to the reaction center. It is interesting to note that the optimal decoherence rates obtained for different exciton numbers $N_0$ are approximately the same. To show this point, we have plotted in Fig. \ref{fig5} the efficiency as a function of the exciton number $N_0$; the decoherence rates used in this figure are those optimal for $N_0=100$. Clearly, the decoherence rates optimal for $N_0=100$ can also result in high transfer efficiency for a wide range of $N_0$. 
For instance, the efficiency remains above ninety percent at $N_0=200$ with the decoherence rates optimal for $N_0=100$. This observation suggests that the highly efficient energy transport found for a fixed excitation number remains valid under other experimental and natural operating conditions. Biologically, this means the light-harvesting system is robust against the number of photons captured at site 1. Meanwhile, we observe that the dependence on small $N_0$ (see Fig. \ref{fig5}) is stronger than that on large $N_0$, suggesting that the transfer efficiency is sensitive to variations of $N_0$ when $N_0$ is small. This feature can be understood by carefully examining Eq. (\ref{fullQ}). Clearly, the loss rate of population on site 3 depends on the population in the reaction center, $n_{88}$. As a back action, the gain rate of population in the reaction center depends on $n_{33}$. \begin{figure} \includegraphics*[width=0.75\columnwidth, height=0.5\columnwidth]{EffN0.eps} \caption{(color online) Efficiency versus excitation number $N_0$. Parameters are chosen such that the efficiency is optimized for $N_0=100$: ($\gamma_1, \gamma_2, \gamma_3, \gamma_4, \gamma_5, \gamma_6, \gamma_7$) =(0.74, 24, 0, 5.2, 50.6, 0, 15), $\Gamma_8=0.32$, and $\Gamma=0.0005$ (red triangle line). In contrast, we plot the results without decoherence as green squares and blue circles with $\Gamma_8=0.32$ (quantum) and $\Gamma_8=1.94$ (semi-classical), respectively. The latter two values of $\Gamma_8$ optimize the transfer efficiency in the null-decoherence case.}\label{fig5} \end{figure} \subsection{Effect of non-local decoherence} \begin{figure} \includegraphics*[width=0.8\columnwidth, height=0.4\columnwidth]{NLdiss.eps} \caption{(color online) Efficiency as a function of the local dissipation rate ($\Gamma_j=\Gamma, j=1,2,...,7$). This figure is plotted to show the effect of non-local decoherence on the transfer efficiency. 
The red solid line represents a case with non-local dissipation ($\Gamma_{ij}=\Gamma$), whereas the blue dashed line is the case without non-local dissipation ($\Gamma_{ij}=0$), $i,j=1,2,...,7$. Other parameters are $\Gamma_8=6$, $\gamma_j=0$, and $\gamma_{ij}=0$.}\label{Ndis} \end{figure} In this section, we examine the effect of non-local decoherence on the energy transfer efficiency and the dynamics of the FMO complex. When taking Eqs. (\ref{Ndiss}) and (\ref{Ndeph}) into account, we must guarantee that the non-local decoherence terms with rates $\Gamma_{ij}$ and $\gamma_{ij}$ preserve the positivity and the trace of the density matrix \cite{caruso09}. For this reason, we choose all $\Gamma_{ij}$ and $\gamma_{ij}$ positive when optimizing the transfer efficiency. We find that the energy transfer efficiency can be increased by properly chosen non-local dephasing rates. For example, with the optimized local decoherence rates (\{$\gamma_1, \gamma_2, \gamma_3, \gamma_4, \gamma_5, \gamma_6, \gamma_7$\} =\{0.74, 24, 0, 5.2, 50.6, 0, 15\}, $\Gamma_8=0.32$, and $\Gamma=0.0005$), the efficiency can be increased from 91.77$\%$ without non-local decoherence to 91.907$\%$ with the non-local dephasing rates $(\gamma_{17},\gamma_{71},\gamma_{25},\gamma_{52})$=(0.74,0.74,24,24) and $\gamma_{ij}=0, \{i,j\}\neq\{1,2,5,7\}$. Extensive numerical simulations show that the energy transfer efficiency cannot be increased by weak non-local dissipation $\Gamma_{ij}$ (of order ns$^{-1}$), but strong non-local dissipation can improve it, as Fig. \ref{Ndis} shows. \section{Conclusion} In summary, we have studied the dynamics in light-harvesting complexes beyond the single-exciton limit and optimized the energy transfer efficiency in the Fenna-Matthews-Olson (FMO) complex. To describe the multi-excitation scenario, we have proposed a new model for excitation transfer; the model consists of 7 coupled cavities and a reaction center. 
Four types of decoherence are considered: local dephasing, local dissipation, non-local dephasing, and non-local dissipation. To match the exciton lifetime observed in experiment, we fix the local dissipation rates at 0.0005 ps$^{-1}$. The local dephasing rates that optimize the transfer efficiency are obtained by numerical simulation. We find that in the multi-excitation case the energy transfer efficiency can be over 90\% under realistic conditions. Non-local decoherence can slightly increase the efficiency, but it does not appear to play an important role in the light-harvesting mechanism. Moreover, we find that the transfer efficiency is not sensitive to the initial excitation number at site 1. This suggests that the light-harvesting antenna may capture many photons at once, and that the experimental conditions for simulating light harvesting in the FMO complex are flexible. \ \ \\ We are indebted to C. P. Sun for suggestions and stimulating talks. Discussions with J. Cheng and S. Yi are acknowledged. This work is supported by the NSF of China under grant Nos. 61078011 and 10935010, as well as the National Research Foundation and Ministry of Education, Singapore under academic research grant No. WBS: R-710-000-008-271.
\section{The explicit form of the integrable spin-$S$ zigzag two-leg ladder} The integrable spin-$S$ two-leg zigzag ladder with a time-reversal breaking interaction is defined by the lattice Hamiltonian \cite{zvyaginapp}: \begin{eqnarray} && {\cal H} = \sum_n\Big[\theta^2({\cal H}_{n_1,n_1+1} + {\cal H}_{n_2,n_2+1} ) +2({\cal H}_{n_1,n_2} + {\cal H}_{n_1,n_2+1} )+ \nonumber\\ && \{({\cal H}_{n_1,n_1+1} + {\cal H}_{n_2,n_2+1} ),({\cal H}_{n_1,n_2} + {\cal H}_{n_1,n_2+1} )\} + \nonumber\\ && 2i\theta[({\cal H}_{n_1,n_1+1} + {\cal H}_{n_2,n_2+1} ),({\cal H}_{n_1,n_2} + {\cal H}_{n_1,n_2+1} )]\Big], \label{intmodapp} \end{eqnarray} where \begin{eqnarray} && {\cal H}_{n,n+1}(x) = \sum_{j=1}^{2S}\sum_{k=1}^j\frac{k}{k^2+\theta^2}\times \prod_{l=0, l \ne j}^{2S}\frac{2x - l(l+1) + 2S(S+1)}{j(j+1)-l(l+1)},\nonumber\\ && x = {\bf S}_{n} \cdot {\bf S}_{n+1}. \end{eqnarray} When $\theta=0$, the latter is the integrable spin-$S$ Heisenberg chain \cite{BTapp}, which belongs to the SU(2)$_{2S}$ universality class \cite{affleckapp}. An explicit form of model (\ref{intmodapp}) for $S=1$ can be found in Ref. \cite{tavaresapp}. The Bethe ansatz equations for model (\ref{intmodapp}) are \cite{zvyaginapp,frahmapp}: \begin{eqnarray} && \Big(\frac{\lambda_a - \theta + i n/2}{\lambda_a - \theta - i n/2}\Big)^N\Big(\frac{\lambda_a + \theta + i n/2}{\lambda_a + \theta - i n/2}\Big)^N = \prod_{b=1}^M\frac{\lambda_a - \lambda_b + i}{\lambda_a - \lambda_b - i}, \\ && E = \sum_{a=1}^{M}\sum_{\sigma = \pm 1} \frac{n}{(\lambda_a + \sigma\theta)^2 + n^2/4} . \end{eqnarray} \section{Nematic order parameters of the individual ladder model} We consider a single generalized zigzag two-leg spin ladder where the sign of the T-breaking term is such that, for instance, $g_L>0$ and $g_R <0$. This means that the $2R$ and $1L$ modes hybridize and are gapful. In stark contrast, the other chiral modes $2L$ and $1R$ remain gapless and correspond to the edge modes with quantum critical behavior.
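As a quick numerical cross-check of the lattice Hamiltonian density ${\cal H}_{n,n+1}(x)$ defined in the previous section, the short script below (purely illustrative, not part of the original analysis) evaluates the double sum and product directly. At $\theta=0$ one can verify by hand that it reduces, for $S=1/2$, to the Heisenberg density $x+3/4$, and, for $S=1$, to $(x+2)(3-x)/4$, which up to normalization and an additive constant is the bilinear-biquadratic form of the integrable Babujian-Takhtajan chain:

```python
def h_density(x, S, theta=0.0):
    """Hamiltonian density H_{n,n+1}(x) with x = S_n . S_{n+1},
    evaluated from its explicit sum/product representation."""
    total = 0.0
    for j in range(1, int(2 * S) + 1):
        # inner sum over k at fixed j
        coeff = sum(k / (k ** 2 + theta ** 2) for k in range(1, j + 1))
        # product over l = 0, ..., 2S with l != j
        prod = 1.0
        for l in range(0, int(2 * S) + 1):
            if l == j:
                continue
            prod *= (2 * x - l * (l + 1) + 2 * S * (S + 1)) / (j * (j + 1) - l * (l + 1))
        total += coeff * prod
    return total

# Consistency checks at the Heisenberg point theta = 0:
print(h_density(0.25, 0.5))   # equals x + 3/4 for S = 1/2
print(h_density(0.5, 1.0))    # equals (6 + x - x^2)/4 for S = 1
```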
Let us try to write the low-energy limit of various order parameters after averaging out the massive degrees of freedom, as is done in \cite{CO}. We first consider the scalar product of staggered magnetizations: \begin{eqnarray} {\bf n}_1 \cdot {\bf n}_2 &\sim& - \langle \mbox{Tr}[\vec\sigma G_1] \mbox{Tr}[\vec\sigma G_2]\rangle \nonumber \\ &\sim& \langle -2 \mbox{Tr} (G_1 G_2) + \mbox{Tr} (G_1) \mbox{Tr} (G_2) \rangle \nonumber \\ &\sim& -2 \langle G_{1L \alpha} G_{2 R \alpha} \rangle G_{2L \beta} G_{1 R \beta} + \langle G_{1L \alpha} G_{2 R \beta} \rangle G_{2L \beta} G_{1 R \alpha}, \label{nematicOP} \end{eqnarray} where $\langle A\rangle$ denotes the average of operator $A$ in the ground state of the massive theory of ${\cal H}_L$ and we have separated the chiral components of the SU(2)$_n$ WZNW matrices $G_{1,2}$ ($\alpha,\beta= \uparrow, \downarrow$). In the ground state of the current-current model ${\cal H}_L$, we have: $\langle G_{1L \alpha} G_{2 R \beta} \rangle = C \delta_{\alpha \beta}$, $C$ being a non-universal constant. By introducing the emerging SU(2)$_n$ WZNW matrix $U_{\alpha \beta} = G_{2L \alpha} G_{1 R \beta}$, we find the low-energy description from Eq. (\ref{nematicOP}): ${\bf n}_1 \cdot {\bf n}_2 \sim \mbox{Tr} \; U$. A similar approach can be performed for the vector chiral order parameter: ${\bf n}_1 \wedge {\bf n}_2 \sim i \mbox{Tr} ( \; \vec\sigma U)$. Finally, the staggered magnetization of each individual chain ${\bf n}_{1,2}$ has a zero vacuum expectation value in the ground state of model ${\cal H}_L$, which means that their two-point correlators decay exponentially at large distances and low energy. We now discuss the fate of the twist term ${\bf n}_1 \cdot \overset{\leftrightarrow}{\partial_x} {\bf n}_2 $ after averaging out the massive degrees of freedom of the integrable model ${\cal H}_L$. Following the same approach as in Eq.
(\ref{nematicOP}), we get: ${\bf n}_1 \cdot \overset{\leftrightarrow}{\partial_x} {\bf n}_2 \sim G_{2L\alpha } \overset{\leftrightarrow}{\partial_x} G_{1 R \alpha}$. Using $\partial_x = i (\partial - {\bar \partial})$ and $\partial_{\tau} = i (\partial + {\bar \partial})$, we find that the projection of the twist term in the low-energy limit reduces to a total derivative of the SU(2)$_n$ WZNW field: ${\bf n}_1 \cdot \overset{\leftrightarrow}{\partial_x} {\bf n}_2 \sim \partial_{\tau} \mbox{Tr} U$, and can thus be neglected. \section{The effective interactions} As stated in the main text, the exchange interactions between the local spins and the staggered magnetizations of the chains give rise to short-range interactions between the local spins. Below we provide explicit calculations to support this statement. The interaction is generated by integration over the massive modes encoded in the staggered magnetization of the ladders. At second order of perturbation theory in $\eta^a$ we get the following contribution to the action of the spins: \begin{eqnarray} \delta S = -\frac{1}{2}\eta_a^2 \sum_{j,k}\int \mbox{d}\tau_1\mbox{d}\tau_2 S^a_{j+1/2}(\tau_1)\langle\la (n_j -n_{j+1})^a(\tau_1)(n_k - n_{k+1})^a(\tau_2)\rangle\ra S^a_{k+1/2}(\tau_2) \end{eqnarray} We can calculate the correlation functions explicitly. Due to the SU(2) symmetry of the ladder Hamiltonian, it is sufficient to do so for $n^z \sim \sin(\sqrt{2\pi}\Phi) = \sin[\sqrt{2\pi}(\varphi + \bar\varphi)]$. We will take advantage of the fact that, for a given chain, $\varphi$ and $\bar\varphi$ do not couple to each other and one of these fields is gapless while the other is gapped.
Then we have \begin{eqnarray} && \langle\la \sin[\sqrt{2\pi}\Phi(1)] \sin[\sqrt{2\pi}\Phi(2)]\rangle\ra = \frac{1}{2}\langle\la \mbox{e}^{i\sqrt{2\pi}\varphi(1)}\mbox{e}^{-i\sqrt{2\pi}\varphi(2)}\rangle\ra \langle\la \mbox{e}^{i\sqrt{2\pi}\bar\varphi(1)}\mbox{e}^{-i\sqrt{2\pi}\bar\varphi(2)}\rangle\ra \sim \nonumber\\ && \frac{1}{(v\tau_{12}+ i x_{12})^{1/2}}D(1,2). \end{eqnarray} The correlation function $D(1,2)$ was calculated in \cite{Essler}: \begin{eqnarray} D(\tau,x) \equiv \langle\la \mbox{e}^{i\sqrt{2\pi}\bar\varphi(\tau,x)}\mbox{e}^{-i\sqrt{2\pi}\bar\varphi(0,0)}\rangle\ra \approx Z \Big(\frac{v\tau +i x}{v\tau - i x}\Big)^{1/4} r^{-1/2}\mbox{e}^{-\Delta_{ladder} r}, ~~ r^2 = \tau^2 + (x/v)^2, \end{eqnarray} where $Z\sim 1$, so that \begin{eqnarray} \langle\la n^a(1)n^a(2)\rangle\ra \sim r_{12}^{-1}\exp(-\Delta_{ladder} r_{12}). \end{eqnarray} Since $\eta^z \gg \eta^{x,y}$, the most important contribution is to the $S^zS^z$ interaction, as stated in the main text. The Fourier transform of the exchange integral is given by the $\omega =0$ transform of the correlation function: \begin{eqnarray} {\cal J}^{aa}(q) \sim -\frac{[\eta^a]^2q^2 }{\sqrt{(qv)^2 + \Delta^2_{ladder}}}\rightarrow \frac{[\eta^a]^2}{v}\partial_x^2K_0(\Delta_{ladder}x/v) \end{eqnarray} This interaction is antiferromagnetic and decays rapidly in real space, behaving as $1/x^2$ even at small distances. The integration over the massive modes will also introduce an interaction between localized spins of different chains. However, since the correlation function between $n_1$ and $n_2$ is zero, this interaction is generated only at fourth order in $\eta$: \begin{eqnarray} S_{j+1/2}(y)S_{k+1/2}(y+1) S_{l+1/2}(y)S_{m+1/2}(y+1) \langle\la \partial_x n(y,x_1)\partial_x n(y+1,x_2) \partial_x n(y,x_3)\partial_x n(y+1,x_4)\rangle\ra \end{eqnarray} Since $x_1\approx x_3$ and $x_2 \approx x_4$, this will generate an interaction between the energy densities of the Ising models.
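The short-distance $1/x^2$ behaviour and the long-distance exponential decay of ${\cal J}(x)\sim \partial_x^2 K_0(\Delta_{ladder}x/v)$ can be checked numerically. The sketch below is purely illustrative (it sets $\Delta_{ladder}=v=1$, a choice of units not made in the text) and evaluates $\partial_x^2 K_0(x)=\int_0^\infty \cosh^2\!t\, \mbox{e}^{-x\cosh t}\,\mbox{d}t$ by the trapezoidal rule:

```python
import math

def d2_K0(x, tmax=25.0, n=6000):
    """d^2/dx^2 K_0(x) = int_0^inf cosh(t)^2 exp(-x cosh t) dt, for x > 0,
    evaluated with the trapezoidal rule on [0, tmax]."""
    h = tmax / n
    def f(t):
        c = math.cosh(t)
        return c * c * math.exp(-x * c)
    s = 0.5 * (f(0.0) + f(tmax))
    for k in range(1, n):
        s += f(k * h)
    return h * s

# Short distances (x << 1): d2_K0(x) ~ 1/x^2, the 1/x^2 law quoted above
print(0.01 ** 2 * d2_K0(0.01))      # close to 1
# Large distances: exponential decay, so d2_K0(5)/d2_K0(10) ~ e^5 * sqrt(2)
print(d2_K0(5.0) / d2_K0(10.0))
```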
\section{Toulouse limit solution of the Kondo-lattice model with forward scattering} We extend the Toulouse limit solution of the Kondo-lattice model with forward scattering presented in the letter to the $n \ge 2$ case. The Hamiltonian is defined by: \begin{eqnarray} {\cal H} = \frac{2\pi v}{n+2} \int \mbox{d} x \Big[:{\bf J}_{R,y+1}^2: + :{\bf J}_{L,y}^2:\Big] + \sum_{l} \eta_a \Big[ J^a_{L,y}(x_l) + J^a_{R,y+1}(x_l)\Big]S^a_{y+1/2}(x_l), \label{Kondoapp} \end{eqnarray} where ${\bf J}_{R,y+1}$ and ${\bf J}_{L,y}$ are chiral SU(2)$_n$ Kac-Moody currents. The starting point of the solution is to express these currents in terms of Z$_n$ parafermion currents $\psi_L,\psi_R$ with conformal weights $(1-1/n,0)$ and $(0,1-1/n)$, and chiral bosonic fields $\Phi_{L,R}$ \cite{para}: \begin{eqnarray} J_{L,y}^z &=& \sqrt{\frac{n}{2\pi}}\partial_x \Phi_L, ~~J_{L,y}^{+}=\frac{\sqrt n}{2\pi a_0}\mbox{e}^{i\sqrt{8\pi/n}\Phi_L}\psi^{+}_L \nonumber \\ J_{R,y+1}^z &=& \sqrt{\frac{n}{2\pi}}\partial_x \Phi_R, ~~J_{R,y+1}^{+}=\frac{\sqrt n}{2\pi a_0}\mbox{e}^{-i\sqrt{8\pi/n}\Phi_R}\psi^{+}_R . \end{eqnarray} For the sake of simplicity, we assume that the chiral fields with different chiralities commute between themselves. The interacting part of model (\ref{Kondoapp}) then reads as follows: \begin{eqnarray} {\cal H}_{\rm int} = \sum_{l} \eta_z \sqrt{\frac{n}{2\pi}}\partial_x \Phi S^z + \frac{ \eta_{\perp} \sqrt n}{2\pi a_0} S^{+} \mbox{e}^{-i\sqrt{2\pi/n}\Theta} \left[ \mbox{e}^{-i\sqrt{2\pi/n}\Phi} \psi_L + \mbox{e}^{i\sqrt{2\pi/n}\Phi} \psi_R \right] + {\rm H.c.}, \label{Kondoint} \end{eqnarray} where $\Phi = \Phi_L + \Phi_R$ and $\Theta = \Phi_L - \Phi_R$ are respectively the total bosonic field and its dual field. Repeating the same procedure as for $n=1$, we now absorb the phase factor $\mbox{e}^{-i\sqrt{2\pi/n}\Theta}$ into the spin operator by a canonical transformation: $U^{+} S^{+}(x_l) U =S^{+} (x_l) \mbox{e}^{i\sqrt{2\pi/n}\Theta(x_l)}$.
The canonical transformation is then defined by: \begin{eqnarray} U = \mbox{e}^{-i \sqrt{\frac{2\pi}{n}} \sum_l \Theta(x_l) S^{z}(x_l)}. \end{eqnarray} Since $\left[ \Theta(x), \Phi(y) \right] = -i \theta( x-y)$, we get the non-trivial transformation for the total bosonic field: \begin{eqnarray} U^{+} \Phi (x) U &=& \Phi (x) + \sqrt{\frac{2\pi}{n}} \sum_l S^{z}(x_l)\theta( x_l -x) \label{toulousetrans} \\ \frac{1}{2} \int \mbox{d} x U^{+} (\partial_x \Phi)^2 U &=& \frac{1}{2} \int \mbox{d} x (\partial_x \Phi)^2 - \sqrt{\frac{2\pi}{n}} \sum_l S^{z}(x_l)\partial_x \Phi (x_l) . \end{eqnarray} The Toulouse limit solution is then defined when $\eta_{z}= 2\pi/n$: \begin{eqnarray} {\cal H}^{'}_{\rm int} = U^{+} {\cal H}_{\rm int} U = \frac{\eta_{\perp} \sqrt{n}}{2\pi a_0} \sum_l S^{+} (x_l) \left[ \mbox{e}^{-i\sqrt{2\pi/n}\Phi} \psi_L + \mbox{e}^{i\sqrt{2\pi/n}\Phi} \psi_R \right](x_l) + {\rm H.c.}, \end{eqnarray} where, in the parafermion fields, we have absorbed phase factors which stem from the non-trivial transformation of the $\Phi$ field (\ref{toulousetrans}) under the canonical transformation. The resulting interaction is strongly relevant and opens spectral gaps for the bulk modes.
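For completeness, the shift of the total bosonic field quoted in Eq. (\ref{toulousetrans}) follows from a single Baker-Campbell-Hausdorff step. Since $[\Theta(x_l),\Phi(x)] = -i\theta(x_l-x)$ is a c-number and $S^z$ commutes with the exponent of $U$, the expansion terminates after the first commutator:

```latex
U^{+}\Phi(x)U
= \Phi(x) + \Big[\,i\sqrt{\tfrac{2\pi}{n}}\sum_l \Theta(x_l)S^{z}(x_l)\,,\,\Phi(x)\Big]
= \Phi(x) + \sqrt{\frac{2\pi}{n}}\sum_l S^{z}(x_l)\,\theta(x_l-x),
```

in agreement with Eq. (\ref{toulousetrans}).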
\section*{ } \begin{center} {\large\bf Abstract} \end{center} A natural solution to the fermion mass hierarchy problem suggests the existence of a partially conserved chiral symmetry. We show that this can lead to a reasonably natural solution to the solar and atmospheric neutrino problems without fine-tuning or the addition of new low energy fermions. The atmospheric neutrino anomaly is explained by large mixing between $\nu_{\mu}$ and $\nu_{\tau}$, with $\Delta m^2_{atm} \sim 10^{-3} \, \mbox{eV}^2$, and the solar neutrino deficit is due to nearly maximal electron neutrino vacuum oscillations. We present an explicit model for the neutrino masses which is an anomaly free Abelian extension of the standard model that also yields a realistic charged fermion spectrum. \thispagestyle{empty} \newpage \section{Introduction} The observed hierarchy of charged fermion masses and quark mixing angles strongly suggests the existence of an approximate chiral flavour symmetry \cite{fn1} beyond the standard model (SM). In a previous paper \cite{natsol} we discussed the implications of such a symmetry for neutrino masses and mixings. We showed that the most natural scenario would correspond to nearly maximal mixing between $\nu_e$ and $\nu_{\mu}$ being responsible for both the solar and atmospheric neutrino problems. However, the recent data on the atmospheric neutrino zenith angle dependence from Super-Kamiokande \cite{SK} indicate that this solution no longer gives an acceptable fit to the atmospheric neutrino data. In this paper we show that approximately conserved chiral symmetries can still lead to a reasonably natural solution to the solar and atmospheric neutrino problems, if we relax the assumptions we made in \cite{natsol}. We shall also present an explicit model for the neutrino masses and mixings, in which the chiral flavour symmetry comes from an Abelian extension of the standard model gauge group.
Previously we made two assumptions for the models with approximately conserved chiral symmetries: \begin{enumerate} \renewcommand{\labelenumi}{\roman{enumi}.} \item The low energy fermion spectrum of the model is the same as in the standard model -- in particular we have only three left-handed neutrinos. \item The chiral symmetries lead to elements of the effective light neutrino mass matrix $M_{\nu}$ which are of different orders of magnitude, apart from those elements which are equal due to the symmetry $M_{\nu} = M^T_{\nu}$. \end{enumerate} As we discussed in our earlier paper \cite{natsol}, the only natural solution to the solar and atmospheric neutrino problems with these assumptions is if we have nearly maximal $\nu_e - \nu_{\mu}$ mixing, and small mixing with $\nu_{\tau}$. This no longer gives a good description of the atmospheric neutrino data. We cannot obtain any other types of solution as a direct consequence of the assumptions (i) and (ii). Assumption (i) implies that we must have a $3 \times 3$ symmetric effective Majorana-like neutrino mass matrix. With a hierarchy between the elements of a symmetric mass matrix there are essentially two different forms for the matrix depending on whether or not the diagonal elements dominate all of the eigenvalues. The first case leads to small mixing between all three neutrinos, and this is unsuitable for a solution to the atmospheric neutrino problem. The second case gives large mixing between two nearly degenerate neutrinos, and small mixing with the third (non-degenerate) neutrino. Since we have only three neutrinos we have two independent mass-squared differences ($\Delta m^2_{ij}$) for the neutrinos. 
The smaller of these $\Delta m^2$s determines the wavelength of oscillation for the two largely mixed neutrinos, which we must take to be $\nu_{\mu}$ and $\nu_{\tau}$ with $\Delta m^2_{23} \sim 10^{-3} \, \mbox{eV}^2$ (and consequently the other mass-squared differences $\Delta m^2_{12} \sim \Delta m^2_{13} > 10^{-3} \, \mbox{eV}^2$) if we wish to explain all of the data on the atmospheric neutrino problem. However, we cannot also explain the solar neutrino problem, since the electron neutrino is then only slightly mixed and the small angle MSW solution requires $\Delta m^2 \sim 10^{-5} \, \mbox{eV}^2$. Hence we see that it is necessary to relax our assumptions. The first assumption was made because of the desire for minimality in our theory. We do not wish to introduce extra low energy fermions unless it is absolutely necessary, and consequently we will retain assumption (i) in this paper. The second assumption is often satisfied in models with chiral (gauged) symmetry breaking; however, it is not uncommon to find two order of magnitude equal elements in the mass matrices. Indeed in the explicit model (based on the anti-grand unified model (AGUT), \cite{AGUT,AGUT1}) from our previous paper we found that the (1, 1) and (2, 2) elements of the neutrino mass matrix were approximately equal, although in that case this did not have any effect on the phenomenology. Hence, in this paper we shall relax the second assumption and consider the case where there are two order of magnitude equal elements in our mass matrix (other than those elements which are exactly equal due to the symmetry of the mass matrix). We do not expect these elements to be exactly equal, since that would generally require fine-tuning which we are careful to avoid. In the next section we discuss the structure of the neutrino mass matrix we would expect to have for natural models of this type, and the phenomenology of the neutrino oscillations. 
We will show that (with no fine-tuning) we would typically obtain nearly maximal $\nu_e$ vacuum oscillations (with a linear combination of $\nu_{\mu} -\nu_{\tau}$) for the solar neutrinos, and large $\nu_{\mu} - \nu_{\tau}$ oscillations for atmospheric neutrinos. We would expect to see nothing at LSND, much of the parameter space for which has already been ruled out by Karmen \cite{karmen}, and Bugey \cite{bugey}. Whilst there are numerous examples of models \cite{AGUT,AGUT1,Leurer,Ibanez,Ramond} which explain the fermion spectrum using global $U(1)$ symmetries, or which cancel gauged $U(1)$ anomalies using the supersymmetric Green-Schwarz mechanism, it seems to have become a common belief \cite{Bijnens,Binetruy} that it is not possible to construct an anomaly free gauged Abelian extension of the SM which yields a realistic fermion mass spectrum. We present here an explicit anomaly free model with gauge group $SMG \times U(1)^2$ (where $SMG$ is the SM gauge group), which (with a non-minimal Higgs field spectrum) fits the charged fermion mass spectrum and yields solutions to the solar and atmospheric neutrino problems. The charged fermion mass spectrum in this model is identical to that predicted by the AGUT model. However, the neutrino mass spectrum is considerably different from that given by the AGUT, and we show in section \ref{sec:neutrino} that it can yield neutrino masses of the form suggested in section \ref{sec:phenom}. In order to obtain the required neutrino spectrum, it is necessary to introduce an $SU(2)$ triplet Higgs field with a suitable vacuum expectation value. We also discuss some difficulty in naturally obtaining such a vacuum expectation value for this Higgs field from the scalar potential. 
\section{Neutrino Phenomenology} \label{sec:phenom} In this section we shall examine the possible structures of the effective $3 \times 3$ light neutrino mass matrix, which can arise in models with approximately conserved chiral symmetries in a reasonably natural way. In the following discussion we shall use the convention that \begin{eqnarray} \Delta m^2_{ij} & = & | m^2_{\nu_i} - m^2_{\nu_j} |,\\ \Delta m^2_{12} & < & \Delta m^2_{23}, \end{eqnarray} where $\nu_i$ is the $i$th neutrino mass eigenstate. We then require $\Delta m^2_{23} \sim 10^{-3} \, \mbox{eV}^2$ and large $\nu_{\mu} - \nu_{\tau}$ mixing for the atmospheric neutrinos. We can have several types of solution to the solar neutrino problem, such as the well known MSW and `just-so' solutions to the solar neutrino problem with \begin{equation} \Delta m^2_{solar} \sim 10^{-5}, \, 10^{-10} \, \mbox{eV}^2 \end{equation} respectively. There is also some variation in the solar neutrino fluxes predicted by different solar models and this theoretical uncertainty means that it is also possible to have an `energy-independent' vacuum oscillation solution to the solar neutrino problem \cite{conforto}. By `energy-independent' we mean that $\Delta m^2_{solar}$ is sufficiently large that many oscillation lengths lie between the sun and the earth, and what we observe is the averaged flux suppression which is the same for solar neutrinos of all energies. Hence we can have \begin{equation} 10^{-10} \lapprox \Delta m^2_{12} = \Delta m^2_{solar} \lapprox 10^{-4} \, \mbox{eV}^2, \end{equation} where the upper limit comes from the constraint that electron neutrino mixing does not make a large contribution to the atmospheric neutrinos. This type of solution does not agree well with the solar neutrino data if we take both the experimental and theoretical solar neutrino rates at face value. 
(The Bahcall-Pinsonneault (BP98) model \cite{BP98} rules out this possibility at $99 \% \,C.L.$) However, we note that there is still some freedom allowed in the choice of solar model. The analysis of \cite{conforto} examines the possibility of having an energy-independent solution if the true solar model lies somewhere within the range of currently allowed solar models. Taking the energy-independent flux suppression ($F$) as a free parameter they find \begin{equation} F = 0.50 \pm 0.06 \end{equation} with a minimum $\chi^2$ of 8. If $F=0.5$ is not a free parameter (as in our model below) then this corresponds to a confidence level of $5 \%$. Even if the BP98 solar model is correct, the requirement for an energy-dependent solution to the solar neutrino problem rests essentially on only one experiment (the Chlorine experiment). Given the possibility of unknown systematic errors we would prefer to avoid relying too strongly on the result of any single experiment. Hence, whilst the MSW and `just-so' solutions to the solar neutrino problem are empirically favoured, we still consider the simpler energy-independent solution (with maximal mixing between two neutrinos) to be a viable solution. The amount of mixing will be large for the vacuum oscillation solutions, and may be either large or small for the MSW solutions. As we saw in our previous paper, if we have a completely hierarchical mass matrix (with all independent elements of different orders of magnitude), the only solution to the solar and atmospheric neutrino problems is to have nearly maximal $\nu_e - \nu_{\mu}$ mixing responsible for both, which seems to be no longer compatible with the atmospheric neutrino data. Hence we shall now look at the possible mass matrices with order of magnitude degeneracies between the elements. One possibility would be to have an order of magnitude degeneracy in the charged lepton mass matrix, leading to large mixing coming from the charged sector.
It has been shown elsewhere in the literature \cite{Grossman,Ross,Pati} that this can yield an acceptable phenomenology, and we do not consider it further here. So we now consider order of magnitude equal elements in the neutrino mass matrix. There are essentially three types of matrix which could potentially yield an acceptable phenomenology with a small number of approximately equal elements, \begin{equation} \begin{array}{ccc} \mathrm{I} & \mathrm{II} & \mathrm{III} \\ \left(\begin{array}{ccc} A & \times & \times \\ \times & \times & A\\ \times & A & \times \\ \end{array}\right) & \, \left(\begin{array}{ccc} \times & \times & \times\\ \times & A & B\\ \times & B & C\\ \end{array}\right) & \, \left(\begin{array}{ccc} \times & A & B\\ A & \times & \times\\ B & \times & \times\\ \end{array}\right) \end{array} \end{equation} where $\times$ denotes small elements and in each case $A \sim B \sim C$. We shall call these textures I, II and III respectively. From the form of texture I we see that this texture would require the imposition of an exact flavour symmetry relating $(M_{\nu})_{11}$ to $(M_{\nu})_{23}$, for which we have no good reason. Hence we will not use texture I. In order to have a good phenomenology, type II would require $AC \sim B^2$, which is not unlikely to occur by chance. However, it also requires three order of magnitude equal elements in the neutrino mass matrix, which we do not consider likely in most models with approximately conserved chiral symmetries. Nevertheless, it has been obtained in a supersymmetric extension of the standard model with approximately conserved gauged chiral symmetries \cite{Ramond}. Type III has only two approximately equal elements and, as we shall see in section \ref{sec:neutrino}, can occur reasonably naturally in a specific model. 
In fact type III has previously been considered in the literature in \cite{Barbieri}, where the structure of the mass matrix is assumed to be due to a global $L_e - L_{\mu} - L_{\tau}$ symmetry. The fine-tuned case where $B = A$ corresponds to the popular `bi-maximal mixing' solution to the neutrino problems \cite{bi-max,bi-max2}. All of the textures (I, II, and III) examined here have previously been discussed in \cite{Barbieri2} by three of the authors of \cite{Barbieri}. However, they claim there that flavour symmetries which lead to textures II and III also yield large mixing from the charged lepton mass matrix. We do not find this to be the case here. The mass matrix texture of type III has the eigenvalues: \begin{equation} \pm \sqrt{A^2 + B^2}, 0 \end{equation} and can be diagonalised by the mixing matrix: \begin{eqnarray} U_{\nu} & \sim & \left( \begin{array}{ccc} 1 & 0 & 0\\ 0 & \cos \theta & - \sin \theta \\ 0 & \sin \theta & \cos \theta \\ \end{array} \right) \left( \begin{array}{ccc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0\\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0\\ 0 & 0 & 1\\ \end{array} \right)\\ & = & \left( \begin{array}{ccc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0\\ \frac{1}{\sqrt{2}} \cos \theta & \frac{1}{\sqrt{2}} \cos \theta & -\sin \theta\\ \frac{1}{\sqrt{2}} \sin \theta & \frac{1}{\sqrt{2}} \sin \theta & \cos \theta\\ \end{array} \right) \label{eq:Unu} \end{eqnarray} where \begin{equation} \tan \theta = \frac{B}{A}. \end{equation} From the first row of eq. \ref{eq:Unu} we can see that $\nu_e$ is maximally mixed between $\nu_1$ and $\nu_2$, so that its mixing does not contribute to the atmospheric neutrino anomaly, and there will be no effect observable at Chooz \cite{Chooz} since we take $\Delta m^2_{12} < 10^{-4} \, \mbox{eV}^2$. 
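The spectrum and mixing pattern just quoted are easy to verify numerically. The sketch below is illustrative only: the small entries (the crosses of texture III) are set to zero, and numpy is assumed to be available. It confirms the eigenvalues $\pm\sqrt{A^2+B^2}, 0$ and the maximal $\nu_e$ mixing between the two massive states:

```python
import math
import numpy as np

def texture_iii(A, B):
    # Type III neutrino mass texture: large (1,2) and (1,3) entries A, B;
    # the small entries (crosses in the text) are set to zero here
    return np.array([[0.0, A,   B],
                     [A,   0.0, 0.0],
                     [B,   0.0, 0.0]])

A, B = 1.0, 0.8
w, U = np.linalg.eigh(texture_iii(A, B))   # eigenvalues in ascending order
r = math.hypot(A, B)

print(w)                               # approximately [-r, 0, r]
print(np.abs(U[0]))                    # nu_e content of each mass eigenstate
print(math.degrees(math.atan2(B, A)))  # nu_mu - nu_tau angle, tan(theta) = B/A
```

With the small entries restored at their hierarchically suppressed values, the zero eigenvalue and the exact degeneracy are lifted, generating the small $\Delta m^2_{12}$ that drives the solar vacuum oscillations.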
The atmospheric neutrino anomaly will be entirely due to large $\nu_{\mu} - \nu_{\tau}$ mixing and, in order that the mixing be large enough, we need $\sin^2 2\theta \gapprox 0.7$ ($95 \% \,C.L.$) which requires \begin{equation} \label{eq:b/a} 0.56 \lapprox \frac{B}{A} \lapprox 1.8. \end{equation} So although $A$ and $B$ must be order of magnitude degenerate, it is not necessary to do any fine-tuning. The solar neutrino problem is explained by vacuum oscillations, although whether it is an `energy-independent' or a `just-so' solution will depend on the small elements which we have neglected. It is not entirely clear which of these types of solution will be more likely to occur in models with chiral symmetry breaking. We note however that the elements of $M_{\nu}$ which contribute to $\Delta m^2_{12}$ have to be about 8 orders of magnitude smaller than the large elements $A$ and $B$ for the `just-so' solution. The solar neutrino problem cannot be explained in this model by an MSW type solution, since the mixing of the electron neutrino is too large for this type of solution. \section{Constructing an anomaly free $SMG \times U(1)^2$ model} \label{sec:extension} We now introduce an anomaly free Abelian extension of the SM which we shall use in the next section to obtain a neutrino mass spectrum of the form we have just discussed. This extension has the gauge group \begin{equation} SMG \times U(1)_{f1} \times U(1)_{f2} \end{equation} and we have only the standard model fermion spectrum at low energies. We shall break $U(1)_{f1}$ and $U(1)_{f2}$ with a non-minimal set of three Higgs fields, which are required to give a realistic charged fermion spectrum and which leave the $SMG$ unbroken. The $SMG$ will be broken down to $SU(3) \times U(1)$ by the usual Weinberg-Salam Higgs field, although this will now also carry charges under $U(1)_{f1}$ and $U(1)_{f2}$.
We shall also introduce a further Higgs field to generate a realistic spectrum of neutrino masses in the next section. The fermions will each have different charges under the chiral symmetries $U(1)_{f1}$ and $U(1)_{f2}$, which will prevent most of them from acquiring masses by a direct Yukawa coupling with the Weinberg-Salam Higgs field. However, after the spontaneous breaking of $U(1)_{f1}$ and $U(1)_{f2}$ at some high mass scale $M_F$, the charged fermions will all acquire effective mass terms in the low-energy effective theory via diagrams such as figure \ref{fig:fdiag}. The intermediate states are taken to be vector-like fermions of mass $M = O(M_F)$, and we assume that the fundamental couplings are $O(1)$. Figure \ref{fig:fdiag} then gives an effective mass to the bottom quark, \begin{figure} \begin{picture}(40000,10000) \THICKLINES \drawline\fermion[\E\REG](5000,1500)[4000] \drawarrow[\E\ATBASE](\pmidx,\pmidy) \global\advance \pmidy by -2000 \put(\pmidx,\pmidy){$b_L$} \drawline\fermion[\E\REG](9000,1500)[4000] \drawarrow[\E\ATBASE](\pmidx,\pmidy) \global\advance \pmidy by -2000 \put(\pmidx,\pmidy){$M_F$} \drawline\fermion[\E\REG](13000,1500)[4000] \drawarrow[\E\ATBASE](\pmidx,\pmidy) \global\advance \pmidy by -2000 \put(\pmidx,\pmidy){$M_F$} \drawline\fermion[\E\REG](17000,1500)[4000] \drawarrow[\E\ATBASE](\pmidx,\pmidy) \global\advance \pmidy by -2000 \put(\pmidx,\pmidy){$M_F$} \drawline\fermion[\E\REG](21000,1500)[4000] \drawarrow[\E\ATBASE](\pmidx,\pmidy) \global\advance \pmidy by -2000 \put(\pmidx,\pmidy){$b_R$} \drawline\scalar[\N\REG](9000,1500)[4] \global\advance \pmidx by 500 \global\advance \pmidy by 1500 \put(\pmidx,\pmidy){$\phi_{WS}$} \global\advance \scalarbackx by -530 \global\advance \scalarbacky by -530 \drawline\fermion[\NE\REG](\scalarbackx,\scalarbacky)[1500] \global\advance \scalarbacky by 1060 \drawline\fermion[\SE\REG](\scalarbackx,\scalarbacky)[1500] \drawline\scalar[\N\REG](13000,1500)[4] \global\advance \pmidx by 500 \global\advance 
\pmidy by 1500 \put(\pmidx,\pmidy){$W$} \global\advance \scalarbackx by -530 \global\advance \scalarbacky by -530 \drawline\fermion[\NE\REG](\scalarbackx,\scalarbacky)[1500] \global\advance \scalarbacky by 1060 \drawline\fermion[\SE\REG](\scalarbackx,\scalarbacky)[1500] \drawline\scalar[\N\REG](17000,1500)[4] \global\advance \pmidx by 500 \global\advance \pmidy by 1500 \put(\pmidx,\pmidy){$\theta$} \global\advance \scalarbackx by -530 \global\advance \scalarbacky by -530 \drawline\fermion[\NE\REG](\scalarbackx,\scalarbacky)[1500] \global\advance \scalarbacky by 1060 \drawline\fermion[\SE\REG](\scalarbackx,\scalarbacky)[1500] \drawline\scalar[\N\REG](21000,1500)[4] \global\advance \pmidx by 500 \global\advance \pmidy by 1500 \put(\pmidx,\pmidy){$\theta$} \global\advance \scalarbackx by -530 \global\advance \scalarbacky by -530 \drawline\fermion[\NE\REG](\scalarbackx,\scalarbacky)[1500] \global\advance \scalarbacky by 1060 \drawline\fermion[\SE\REG](\scalarbackx,\scalarbacky)[1500] \end{picture} \vskip .3cm \caption{Feynman diagram for bottom quark mass in the full theory. The crosses indicate the couplings of the Higgs fields to the vacuum.} \label{fig:fdiag} \end{figure} \begin{equation} m_b \sim \vev{\phi_{WS}} \frac{\vev{W}}{M_F} \frac{\vev{\theta}^2}{M_F^2}, \end{equation} where $\vev{W}, \vev{\theta}$ are the vacuum expectation values of Higgs fields $W$ and $\theta$ used to spontaneously break the $SMG \times U(1)^2$ down to the standard model. The other charged fermions acquire their mass via similar diagrams. As we discussed earlier we do not wish to extend the low-energy fermion spectrum for reasons of minimality, so we have the usual SM fermion spectrum with their usual representations under $SMG$. The fermion charges under $U(1)_{f1}$ and $U(1)_{f2}$ are then severely constrained by the requirement that all the anomalies involving them cancel. 
If we denote the charges of the fermions under $U(1)_{f1}$ and $U(1)_{f2}$ by $Q_{fi}(u_L) = u_{Li}$ ($i = 1, 2$) etc., then the anomaly constraints are given by: \begin{eqnarray} \mbox{Tr} [SU(3)^2 U(1)_{fi}] & = & 2(u_{Li} + c_{Li} + t_{Li})\nonumber \\ & & - (u_{Ri} + d_{Ri} + s_{Ri} + c_{Ri} + t_{Ri} + b_{Ri}) = 0, \nonumber\\ \mbox{Tr} [SU(2)^2 U(1)_{fi}] & = & 3 (u_{Li} + c_{Li} + t_{Li}) + e_{Li} + \mu_{Li} + \tau_{Li} = 0,\nonumber\\ \mbox{Tr} [U(1)_Y^2 U(1)_{fi}] & = & u_{Li} + c_{Li} + t_{Li} - 8 (u_{Ri} + c_{Ri} + t_{Ri}) \nonumber\\ & & - 2 (d_{Ri} + s_{Ri} + b_{Ri}) + 3 (e_{Li} + \mu_{Li} + \tau_{Li}) \nonumber\\ & & - 6 (e_{Ri} + \mu_{Ri} + \tau_{Ri}) = 0, \nonumber\\ \mbox{Tr} [U(1)_Y U(1)_{fi}^2] & = & u_{Li}^2 + c_{Li}^2 + t_{Li}^2 - 2 (u_{Ri}^2 + c_{Ri}^2 + t_{Ri}^2) \nonumber\\ & & + d_{Ri}^2 + s_{Ri}^2 + b_{Ri}^2 -(e_{Li}^2 + \mu_{Li}^2 + \tau_{Li}^2) \nonumber\\ & & + e_{Ri}^2 + \mu_{Ri}^2 + \tau_{Ri}^2 = 0, \nonumber\\ \mbox{Tr}[U(1)_{fi} U(1)_{fj} U(1)_{fk}] & = & 6(u_{Li} u_{Lj} u_{Lk} + c_{Li} c_{Lj} c_{Lk} + t_{Li} t_{Lj} t_{Lk}) \nonumber\\ & & -3(d_{Ri} d_{Rj} d_{Rk} + s_{Ri} s_{Rj} s_{Rk} + b_{Ri} b_{Rj} b_{Rk}\nonumber\\ & & + u_{Ri} u_{Rj} u_{Rk} + c_{Ri} c_{Rj} c_{Rk} + t_{Ri} t_{Rj} t_{Rk}) \nonumber\\ & & +2(e_{Li} e_{Lj} e_{Lk} + \mu_{Li} \mu_{Lj} \mu_{Lk} + \tau_{Li} \tau_{Lj} \tau_{Lk})\nonumber\\ & & -(e_{Ri} e_{Rj} e_{Rk} + \mu_{Ri} \mu_{Rj} \mu_{Rk} + \tau_{Ri} \tau_{Rj} \tau_{Rk}) = 0, \nonumber\\ \mbox{Tr}[(\mbox{graviton})^2 U(1)_{fi}] & = & 6(u_{Li} + c_{Li} + t_{Li}) -3(u_{Ri} + d_{Ri} + s_{Ri} + c_{Ri} \nonumber\\ & & + t_{Ri} + b_{Ri}) + 2 (e_{Li} + \mu_{Li} + \tau_{Li}) \nonumber\\ & & - (e_{Ri} + \mu_{Ri} + \tau_{Ri}) = 0. \label{eq:anomaly} \end{eqnarray} \begin{table}[t] \begin{displaymath} \renewcommand{\arraystretch}{1.5} \begin{array}{|c||*{11}{c|}} \hline & \mbox{(1st. 
gen.)} & c_L & t_L & c_R & s_R & t_R & b_R & \mu_L & \tau_L & \mu_R & \tau_R \\ \hline \hline Q_{f1} & 0 & 0 & 1 & 4 & 0 & 0 & -2 & 0 & -3 & 0 & -6 \\ \hline Q_{f2} & 0 & -1 & 0 & -1 & 1 & -3 & 1 & 3 & 0 & 5 & 1\\ \hline \end{array} \end{displaymath} \caption{An anomaly free choice of Abelian charges for the fermion fields} \label{tab:fcharges} \end{table} A possible choice of charges (which is based on the AGUT Abelian charges) satisfying these constraints is given in table \ref{tab:fcharges} and, as we shall see, a realistic charged fermion mass spectrum can be obtained for these charges by making a suitable choice of Higgs fields. The set of charges in table \ref{tab:fcharges} is not the only one which is anomaly free. For example, the AGUT has four $U(1)$s, with linearly independent sets of charges which satisfy the anomaly constraints of eq. \ref{eq:anomaly}. In the AGUT (\cite{AGUT1}) one of these $U(1)$s is broken before the others at the Planck scale, leaving three unbroken $U(1)$ generators. In this paper we choose the fermion charges to be a linear combination of the charges under these unbroken generators. (Our choice of charges is given by $Q_Y = y_1 + y_2 + y_3, Q_{f1} = 3 y_3$ and $Q_{f2} = -3 y_2 + Q_f$ where $y_{1, 2, 3}$ and $Q_f$ are the AGUT fermion charges of reference \cite{AGUT1}). We could alternatively have chosen to use the charges under the broken $U(1)$ for $Q_{f1}$ or $Q_{f2}$; however, we are unaware of any choice of charges (with only two non-standard model $U(1)$s) involving this broken $U(1)$ which yields a realistic charged fermion spectrum. The Weinberg-Salam Higgs field, $\phi_{WS}$, charges are chosen so that the top quark obtains its mass directly from its Yukawa coupling with $\phi_{WS}$, and $M_t$ is thus unsuppressed. The other fermions cannot couple directly to $\phi_{WS}$ since such couplings are protected by the chiral symmetries. 
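Since all the charges are integers, the cancellations can be checked exactly. The following script (a cross-check of ours, not part of the original analysis) evaluates every trace in eq. \ref{eq:anomaly}, including both mixed cubic combinations, for the charges of table \ref{tab:fcharges}; fields absent from the table (the whole first generation together with $u_R$, $d_R$, $e_L$ and $e_R$) carry zero charge:

```python
# Numerical check that the charge assignment of table \ref{tab:fcharges}
# satisfies all the anomaly constraints of eq. \ref{eq:anomaly}.
fields = 'uL cL tL uR cR tR dR sR bR eL muL tauL eR muR tauR'.split()
Q1 = dict.fromkeys(fields, 0)
Q1.update(tL=1, cR=4, bR=-2, tauL=-3, tauR=-6)
Q2 = dict.fromkeys(fields, 0)
Q2.update(cL=-1, cR=-1, sR=1, tR=-3, bR=1, muL=3, muR=5, tauR=1)

qL = ('uL', 'cL', 'tL')                 # quark SU(2) doublets
uR, dR = ('uR', 'cR', 'tR'), ('dR', 'sR', 'bR')
lL = ('eL', 'muL', 'tauL')              # lepton SU(2) doublets
lR = ('eR', 'muR', 'tauR')

def traces(q):
    S = lambda fs, n=1: sum(q[f]**n for f in fs)
    return [2*S(qL) - S(uR) - S(dR),                                 # SU(3)^2 U(1)
            3*S(qL) + S(lL),                                         # SU(2)^2 U(1)
            S(qL) - 8*S(uR) - 2*S(dR) + 3*S(lL) - 6*S(lR),           # U(1)_Y^2 U(1)
            S(qL, 2) - 2*S(uR, 2) + S(dR, 2) - S(lL, 2) + S(lR, 2),  # U(1)_Y U(1)^2
            6*S(qL) - 3*(S(uR) + S(dR)) + 2*S(lL) - S(lR)]           # grav^2 U(1)

def cubic(a, b, c):
    P = lambda fs: sum(a[f]*b[f]*c[f] for f in fs)
    return 6*P(qL) - 3*(P(uR) + P(dR)) + 2*P(lL) - P(lR)

checks = (traces(Q1) + traces(Q2)
          + [cubic(*t) for t in ((Q1, Q1, Q1), (Q1, Q1, Q2),
                                 (Q1, Q2, Q2), (Q2, Q2, Q2))])
print(checks)   # every entry should vanish
```

All fourteen traces indeed vanish for this assignment.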
Hence we introduce three other Higgs fields $W$, $\xi$ and $\theta$ to break the $U(1)_{f1}$ and $U(1)_{f2}$ with charges and vacuum expectation values chosen to give a realistic fermion spectrum. The charges and vacuum expectation values of the Higgs fields are given in table \ref{tab:hcharges}. We take the Higgs fields at the fundamental scale to be singlets under the standard model symmetries. The charged fermion effective SM Yukawa matrices are then given by \begin{eqnarray} H_U & \sim & \left ( \begin{array}{ccc} \vev{W}\vev{\theta}^4\vev{\xi}^2 & \vev{W}^2 \vev{\theta}^2 \vev{\xi} & \vev{W}\vev{\theta}^4 \vev{\xi}\\ \vev{W}\vev{\theta}^4\vev{\xi}^3 & \vev{W}^2\vev{\theta}^2 & \vev{W}\vev{\theta}^4 \\ \vev{\xi}^3 & \vev{W}\vev{\theta}^2 & 1 \end{array} \right ), \label{H_U} \\ H_D & \sim & \left ( \begin{array}{ccc} \vev{W}\vev{\theta}^4\vev{\xi}^2 & \vev{W}\vev{\theta}^4\vev{\xi} & \vev{\theta}^6\vev{\xi} \\ \vev{W}\vev{\theta}^4\vev{\xi} & \vev{W}\vev{\theta}^4 & \vev{\theta}^6 \\ \vev{W}^2\vev{\theta}^8\vev{\xi} & \vev{W}^2\vev{\theta}^8 & \vev{W} \vev{\theta}^2 \end{array} \right ), \label{H_D} \\ H_E & \sim & \left ( \begin{array}{ccc} \vev{W}\vev{\theta}^4\vev{\xi}^2 & \vev{W}\vev{\theta}^4\vev{\xi}^3 & \vev{W}\vev{\theta}^8\vev{\xi} \\ \vev{W}\vev{\theta}^4\vev{\xi}^5 & \vev{W}\vev{\theta}^4 & \vev{W}\vev{\theta}^8\vev{\xi}^2 \\ \vev{W}\vev{\theta}^{10}\vev{\xi}^3 & \vev{W}^2\vev{\theta}^8 & \vev{W}\vev{\theta}^2 \end{array} \right ) \label{H_E}, \end{eqnarray} where the Higgs field vacuum expectation values $\vev{W}, \vev{\xi}$ and $\vev{\theta}$ are in units of the fundamental scale, $M_F$. These mass matrices yield exactly the same masses and mixings at the fundamental scale as we obtained in the AGUT model in previous papers \cite{AGUT1}, as can be seen by substituting the Higgs field combination $\theta^2$ in this paper by the Higgs field $T$ in the AGUT, and relabelling the $c_R$ and $t_R$ fields. 
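For orientation, the magnitudes of the elements of eqs. (\ref{H_U})--(\ref{H_E}) implied by the VEVs of table \ref{tab:hcharges} can be tabulated directly (an illustration of ours; every element carries an unknown factor of order one, so only the hierarchy of magnitudes is significant):

```python
# Magnitudes of the effective Yukawa matrices for <W> = 0.158,
# <theta> = 0.266, <xi> = 0.099 (in units of M_F); each element has an
# unknown O(1) coefficient, so only the hierarchy is meaningful.
W, T, X = 0.158, 0.266, 0.099

H_U = [[W*T**4*X**2, W**2*T**2*X, W*T**4*X],
       [W*T**4*X**3, W**2*T**2,   W*T**4  ],
       [X**3,        W*T**2,      1.0     ]]
H_D = [[W*T**4*X**2, W*T**4*X,    T**6*X  ],
       [W*T**4*X,    W*T**4,      T**6    ],
       [W**2*T**8*X, W**2*T**8,   W*T**2  ]]
H_E = [[W*T**4*X**2,  W*T**4*X**3, W*T**8*X   ],
       [W*T**4*X**5,  W*T**4,      W*T**8*X**2],
       [W*T**10*X**3, W**2*T**8,   W*T**2     ]]

for name, H in (('H_U', H_U), ('H_D', H_D), ('H_E', H_E)):
    print(name, '  '.join(f'{h:8.1e}' for row in H for h in row))
```

The diagonal entries reproduce the expected generation hierarchy in each charge sector, with the up-type hierarchy much steeper than the down-type one, as required by $m_c/m_t \ll m_s/m_b$.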
\begin{table}[t] \begin{displaymath} \renewcommand{\arraystretch}{1.5} \begin{array}{|c||c|c|c|c|}\hline & y/2 & Q_{f1} & Q_{f2} & \mbox{Vacuum expectation value}\\ \hline \hline \phi_{WS} & \frac{1}{2} & -1 & -3 & \\ \hline W & 0 & 3 & \frac{5}{3} & 0.158\\ \hline \theta & 0 & \frac{1}{2} & \frac{1}{6} & 0.266 \\ \hline \xi & 0 & 0 & 1 & 0.099\\ \hline \end{array} \end{displaymath} \caption{Higgs field charges which have been chosen to give a realistic charged fermion spectrum, and the vacuum expectation values for the chiral symmetry breaking Higgs fields in units of the fundamental scale $M_F$.} \label{tab:hcharges} \end{table} This is because (after this trivial relabelling of fermion fields) the charges on the fermion fields are the same as a linear combination of the remaining Abelian fermion charges in the AGUT after one of the AGUT $U(1)$'s is spontaneously broken. The choice of Higgs fields in the $SMG \times U(1)^2$ model is however different and, whilst this leads to the same charged fermion spectrum as in the AGUT (see table \ref{best-fit} for the best fit spectrum from \cite{AGUT1}), it does not yield the same neutrino spectrum. The AGUT cannot produce the same neutrino mass matrix structure (without increasing the number of Higgs fields), since it is not possible to choose a consistent set of non-Abelian representations for the Higgs fields. We shall see however that, within the $SMG \times U(1)^2$ model, we can obtain an acceptable neutrino spectrum. \begin{table} \caption{Best fit to conventional experimental data. 
All masses are running masses at 1 GeV except the top quark mass which is the pole mass.} \begin{displaymath} \begin{array}{ccc} \hline & \mathrm{Fitted} & \mathrm{Experimental} \\ \hline m_u & 3.6 \mathrm{\; MeV} & 4 \mathrm{\; MeV} \\ m_d & 7.0 \mathrm{\; MeV} & 9 \mathrm{\; MeV} \\ m_e & 0.87 \mathrm{\; MeV} & 0.5 \mathrm{\; MeV} \\ m_c & 1.02 \mathrm{\; GeV} & 1.4 \mathrm{\; GeV} \\ m_s & 400 \mathrm{\; MeV} & 200 \mathrm{\; MeV} \\ m_{\mu} & 88 \mathrm{\; MeV} & 105 \mathrm{\; MeV} \\ M_t & 192 \mathrm{\; GeV} & 180 \mathrm{\; GeV} \\ m_b & 8.3 \mathrm{\; GeV} & 6.3 \mathrm{\; GeV} \\ m_{\tau} & 1.27 \mathrm{\; GeV} & 1.78 \mathrm{\; GeV} \\ V_{us} & 0.18 & 0.22 \\ V_{cb} & 0.018 & 0.041 \\ V_{ub} & 0.0039 & 0.0035 \\ \hline \end{array} \end{displaymath} \label{best-fit} \end{table} \section{Neutrino masses and mixings from an explicit model} \label{sec:neutrino} Neutrino masses can be generated in this model by the Weinberg-Salam Higgs field, via a see-saw like mechanism, giving a dominant off-diagonal element in the neutrino mass matrix, \begin{equation} M_{\nu} \sim \frac{\phi^2_{WS}}{M_F}\left ( \begin{array}{ccc} \vev{W}^2\vev{\theta}^8\vev{\xi}^4 & \vev{W}^2\vev{\theta}^8\vev{\xi} & \vev{W}^2\vev{\theta}\vev{\xi}^3 \\ \vev{W}^2\vev{\theta}^8\vev{\xi} & \vev{W}\vev{\theta}^{10} & \vev{W}^2\vev{\theta} \\ \vev{W}^2\vev{\theta}\vev{\xi}^3 & \vev{W}^2\vev{\theta} & \vev{W}^2\vev{\theta}^2\vev{\xi}^2 \end{array} \right ). \label{hnu} \end{equation} This yields nearly maximal $\nu_{\mu} - \nu_{\tau}$ mixing between a nearly degenerate pair of neutrinos. As we discussed earlier this does not lead to an acceptable phenomenology, and hence we require a different mechanism to generate the dominant contribution to the neutrino masses and mixings. We do this here by introducing an $SU(2)$ triplet Higgs field $\Delta$. 
The charges on this Higgs field are then chosen so that the (1, 2) and (1, 3) elements of $M_{\nu}$ are suppressed by equal amounts, giving \begin{equation} \left(\frac{y}{2}, Q_{f1}, Q_{f2}\right) = \left(1, \frac{3}{2}, -\frac{3}{2} \right). \end{equation} The neutrino mass matrix, \begin{equation} M_{\nu} \sim \vev{\Delta^0} \vev{\theta}^3\left( \begin{array}{ccc} \vev{\xi}^2 & \vev{\xi} & \vev{\xi}\\ \vev{\xi} & \vev{\xi}^4 & \vev{\xi}^2\\ \vev{\xi} & \vev{\xi}^2 & \vev{\theta}^6 \end{array} \right), \end{equation} is then generated by diagrams such as figure \ref{fig:nmass}. We have ignored CP violating phases here, and there are unknown $O(1)$ factors in front of each of the mass matrix elements. This mass matrix gives \begin{equation} \frac{\Delta m^2_{12}}{\Delta m^2_{23}} \sim \vev{\xi} \end{equation} which is not small enough for the `just-so' or MSW solutions to the solar neutrino problem if we take \begin{equation} \Delta m^2_{23} \sim 10^{-3} \, \mbox{eV}^2 \end{equation} for the atmospheric neutrino problem. Hence we shall use the `energy-independent' vacuum oscillation solution to the solar neutrino problem. The mixing from this mass matrix is similar to that given by eq. \ref{eq:Unu}, although the elements of order $\vev{\Delta^0}\theta^3 \xi^2$ in the mass matrix can have some effect on the mixing leading to some small deviations from the form of eq. \ref{eq:Unu}. The electron neutrino mixing remains very close to maximal regardless of the $O(1)$ factors in the mass matrix, and makes almost no contribution to the atmospheric neutrino mixing. Depending on the $O(1)$ factors the muon and tau neutrino mixing can differ slightly from that given by eq. \ref{eq:Unu}, although if eq. \ref{eq:b/a} is satisfied then the mixing between them remains large enough to solve the atmospheric neutrino problem. 
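These structural claims can be checked by diagonalizing the dimensionless matrix numerically. The sketch below (ours, with all unknown $O(1)$ factors set to one, and a small hand-rolled Jacobi routine so that no external libraries are needed) exhibits the nearly degenerate heavy pair, the splitting ratio of order $\vev{\xi}$, and the near-maximal electron neutrino mixing:

```python
import math

def jacobi_eig(A, tol=1e-14, max_rot=100):
    """Eigenvalues/eigenvectors of a real symmetric matrix by Jacobi
    rotations; eigenvectors are the columns of the returned V."""
    n = len(A)
    A = [row[:] for row in A]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_rot):
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < tol:
            break
        phi = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(phi), math.sin(phi)
        for k in range(n):                     # A <- G^T A
            Apk, Aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * Apk - s * Aqk, s * Apk + c * Aqk
        for k in range(n):                     # A <- A G,  V <- V G
            Akp, Akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * Akp - s * Akq, s * Akp + c * Akq
            Vkp, Vkq = V[k][p], V[k][q]
            V[k][p], V[k][q] = c * Vkp - s * Vkq, s * Vkp + c * Vkq
    return [A[i][i] for i in range(n)], V

xi, th = 0.099, 0.266                          # <xi>, <theta>
M = [[xi**2, xi,     xi    ],                  # M_nu / (<Delta^0> theta^3)
     [xi,    xi**4,  xi**2 ],
     [xi,    xi**2,  th**6 ]]

evals, V = jacobi_eig(M)
idx = sorted(range(3), key=lambda i: abs(evals[i]))   # light -> heavy
m_light, m_mid, m_heavy = (abs(evals[i]) for i in idx)
ratio = (m_heavy**2 - m_mid**2) / (m_mid**2 - m_light**2)
# electron row of the mixing matrix for the two heavy (solar-pair) states
sin2_2th_sol = 4 * V[0][idx[1]]**2 * V[0][idx[2]]**2
print(f"masses/(<Delta^0> theta^3): {m_light:.4f}, {m_mid:.4f}, {m_heavy:.4f}")
print(f"dm2_solar/dm2_atm ~ {ratio:.2f}   sin^2(2 theta_e) ~ {sin2_2th_sol:.3f}")
```

With these inputs the two heavy masses come out within about $15\%$ of each other, the splitting ratio is $\approx 0.3$, i.e. of order $\vev{\xi}$ up to an $O(1)$ factor, and the electron neutrino mixing stays essentially maximal, as stated above.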
\begin{figure} \begin{picture}(40000,3000) \THICKLINES \drawline\fermion[\E\REG](8000,1000)[3000] \global\advance \pmidy by 1000 \put(\pmidx,\pmidy){$\nu_e$} \drawline\fermion[\E\REG](11000,1000)[3000] \global\advance \pmidy by -1500 \global\advance \pmidx by -500 \put(\pmidx,\pmidy){$M_F$} \drawline\fermion[\E\REG](14000,1000)[3000] \global\advance \pmidy by -1500 \global\advance \pmidx by -500 \put(\pmidx,\pmidy){$M_F$} \drawline\fermion[\E\REG](17000,1000)[3000] \global\advance \pmidy by -1500 \global\advance \pmidx by -500 \put(\pmidx,\pmidy){$M_F$} \drawline\fermion[\E\REG](20000,1000)[3000] \global\advance \pmidy by -1500 \global\advance \pmidx by -500 \put(\pmidx,\pmidy){$M_F$} \drawline\fermion[\E\REG](23000,1000)[3000] \global\advance \pmidy by 1000 \put(\pmidx,\pmidy){$\overline{\nu}_{\tau}$} \drawline\scalar[\N\REG](11000,1000)[3] \global\advance \pmidx by -2000 \global\advance \pmidy by 1000 \put(\pmidx,\pmidy){$\Delta^0$} \global\advance \scalarbackx by -530 \global\advance \scalarbacky by -530 \drawline\fermion[\NE\REG](\scalarbackx,\scalarbacky)[1500] \global\advance \scalarbacky by 1060 \drawline\fermion[\SE\REG](\scalarbackx,\scalarbacky)[1500] \drawline\scalar[\N\REG](14000,1000)[3] \global\advance \pmidx by 1000 \global\advance \pmidy by 1000 \put(\pmidx,\pmidy){$\theta$} \global\advance \scalarbackx by -530 \global\advance \scalarbacky by -530 \drawline\fermion[\NE\REG](\scalarbackx,\scalarbacky)[1500] \global\advance \scalarbacky by 1060 \drawline\fermion[\SE\REG](\scalarbackx,\scalarbacky)[1500] \drawline\scalar[\N\REG](17000,1000)[3] \global\advance \pmidx by 1000 \global\advance \pmidy by 1000 \put(\pmidx,\pmidy){$\theta$} \global\advance \scalarbackx by -530 \global\advance \scalarbacky by -530 \drawline\fermion[\NE\REG](\scalarbackx,\scalarbacky)[1500] \global\advance \scalarbacky by 1060 \drawline\fermion[\SE\REG](\scalarbackx,\scalarbacky)[1500] \drawline\scalar[\N\REG](20000,1000)[3] \global\advance \pmidx by 1000 \global\advance \pmidy 
by 1000 \put(\pmidx,\pmidy){$\theta$} \global\advance \scalarbackx by -530 \global\advance \scalarbacky by -530 \drawline\fermion[\NE\REG](\scalarbackx,\scalarbacky)[1500] \global\advance \scalarbacky by 1060 \drawline\fermion[\SE\REG](\scalarbackx,\scalarbacky)[1500] \drawline\scalar[\N\REG](23000,1000)[3] \global\advance \pmidx by 1000 \global\advance \pmidy by 1000 \put(\pmidx,\pmidy){$\xi$} \global\advance \scalarbackx by -530 \global\advance \scalarbacky by -530 \drawline\fermion[\NE\REG](\scalarbackx,\scalarbacky)[1500] \global\advance \scalarbacky by 1060 \drawline\fermion[\SE\REG](\scalarbackx,\scalarbacky)[1500] \end{picture} \vskip .3cm \caption{Example Feynman diagram for neutrino mass in the $SMG \times U(1)^2$ model.} \label{fig:nmass} \end{figure} Hence if we take $\vev{\Delta} \sim 12 \, \mbox{eV}$ to give suitable masses for the atmospheric neutrino problem then we have \begin{eqnarray} \Delta m^2_{12} & \sim & 10^{-4} \, \mbox{eV}^2, \, \sin^2 2\theta_{12} \sim 1 \\ \Delta m^2_{23} & \sim & 10^{-3} \, \mbox{eV}^2, \, \sin^2 2\theta_{23} = 0.7 - 1.0 \end{eqnarray} for the solar and atmospheric neutrinos respectively. This means we will have an electron neutrino flux suppression of $1/2$ for all of the solar neutrinos, and the atmospheric neutrino problem will be due to large $\nu_{\mu} - \nu_{\tau}$ mixing. The neutrino masses are too small to make a significant contribution to dark matter, or to the anomaly observed at LSND \cite{LSND}. Hence we predict that the LSND result will prove to be unfounded. The amplitude of neutrinoless double beta decay is proportional to $(M_{\nu})_{ee}$, which we predict to be $(M_{\nu})_{ee} \sim 2 \times 10^{-3} \, \mbox{eV}$, which is much less than the current limit of $(M_{\nu})_{ee} \le 0.45 \, \mbox{eV}$ \cite{beta} and the sensitivities of current or planned experiments. In obtaining the spectrum of neutrino masses we have simply chosen $\vev{\Delta^0}$ to have the required value for the atmospheric neutrinos. 
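The numbers quoted above can be reproduced at the order-of-magnitude level from $\vev{\Delta^0}$ and the symmetry-breaking VEVs alone (a sketch of ours, with all $O(1)$ factors set to one and the lightest state neglected):

```python
import math

# Order-of-magnitude spectrum for <Delta^0> = 12 eV, theta = 0.266,
# xi = 0.099; the nearly degenerate pair has leading eigenvalues
# +-sqrt(2)*xi of the dimensionless matrix, and (M_nu)_ee ~ scale * xi^2.
xi, theta = 0.099, 0.266
vev_delta = 12.0                       # eV, the value chosen in the text
scale = vev_delta * theta**3           # overall scale of M_nu, in eV

m_pair = math.sqrt(2) * xi * scale     # mass of the heavy (solar) pair, eV
dm2_atm = m_pair**2                    # lightest state neglected
M_ee = scale * xi**2                   # neutrinoless double beta element

print(f"dm2_atm ~ {dm2_atm:.1e} eV^2,  (M_nu)_ee ~ {M_ee:.1e} eV")
```

This reproduces $\Delta m^2_{23} \sim 10^{-3}\,\mbox{eV}^2$ and $(M_{\nu})_{ee} \sim 2\times10^{-3}\,\mbox{eV}$, well below the neutrinoless double beta decay limit, with $\vev{\Delta^0} = 12\,\mbox{eV}$ inserted by hand as described next.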
However, there is some unnaturalness in obtaining a suitable value for $\vev{\Delta^0}$ from the scalar potential. If we write down the low energy effective scalar potential we have \begin{eqnarray} V(\phi_{WS}, \Delta) & \sim & \lambda \{ (\phi_{WS}^{\dagger} \phi_{WS})^2 + \lambda^{\prime} (\Delta^{\dagger} \Delta)^2 + \lambda^{\prime \prime} M_F \phi_{WS}^{\dagger2} \Delta \vev{W} \vev{\xi}^2 \vev{\theta} \nonumber \\ & & - \eta M_F^2 \Delta^{\dagger} \Delta - \frac{\mu^2}{\lambda} \phi_{WS}^{\dagger} \phi_{WS}\} \end{eqnarray} where we would typically expect $\lambda^{\prime}, \lambda^{\prime \prime}, \eta = O(1).$ However, this leads to a vacuum expectation value for $\Delta$ of \begin{equation} \vev{\Delta^0} \sim \frac{\vev{\phi_{WS}^2}}{M_F} \vev{W} \vev{\xi}^2 \vev{\theta}. \end{equation} Whilst we can choose $M_F$ to give the required vacuum expectation value for $\Delta$ we then find that, since $\vev{\Delta}$ is much less than the see-saw scale $\frac{\vev{\phi_{WS}}^2}{M_F}$, the neutrino mass matrix is dominated by the see-saw type diagrams which as we noted earlier, do not yield an acceptable phenomenology. Hence, in order to avoid this problem, we would require a $\phi_{WS}^{\dagger2} \Delta$ coupling which is for some unknown reason much larger than expected. Of course the scalar potential is in any case not well understood, since the lightness of the Weinberg-Salam Higgs field is also something of a mystery. It should be noted that, whilst in this case we have some difficulty in obtaining a suitable vacuum expectation value for the triplet Higgs field, this will not necessarily be the case for other models which use this mechanism for generating the neutrino masses. If the see-saw neutrino masses are sufficiently suppressed by the symmetry breaking parameters, then the masses coming from the triplet Higgs field will dominate and there will be no problem. 
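The mismatch can be made quantitative (a numerical illustration of ours; the electroweak VEV $\vev{\phi_{WS}} = 174$ GeV is our assumed input, not a value quoted in the text):

```python
# Scale M_F implied by <Delta^0> ~ (<phi_WS>^2 / M_F) <W><xi>^2<theta>
# if the triplet VEV is to come out at 12 eV, and comparison of <Delta^0>
# with the see-saw scale <phi_WS>^2 / M_F (the naturalness problem).
vev_W, vev_theta, vev_xi = 0.158, 0.266, 0.099   # in units of M_F
vev_phi = 174.0                                  # GeV (our assumption)
vev_delta = 12.0e-9                              # GeV (= 12 eV)

suppression = vev_W * vev_xi**2 * vev_theta
M_F = vev_phi**2 * suppression / vev_delta       # GeV
seesaw_scale = vev_phi**2 / M_F                  # GeV

print(f"M_F ~ {M_F:.2e} GeV")
print(f"see-saw scale ~ {seesaw_scale:.2e} GeV vs <Delta^0> = {vev_delta:.1e} GeV")
```

The required fundamental scale comes out at $M_F \sim 10^9$ GeV, but the see-saw scale $\vev{\phi_{WS}}^2/M_F$ is then of order $10\,\mbox{keV}$, more than three orders of magnitude above $\vev{\Delta^0}$, which is precisely why the see-saw diagrams would dominate without an enhanced $\phi_{WS}^{\dagger2}\Delta$ coupling.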
\section{Conclusions} We have shown that models with only the 3 standard model neutrinos (in the low energy spectrum), and chiral symmetry breaking can explain the solar and atmospheric neutrino problems including the Super-Kamiokande zenith angle distribution. This can occur if the chiral symmetry does not lead to (independent) elements in $M_{\nu}$ which are all of different orders of magnitude (as we assumed in a previous paper). The atmospheric neutrino problem is explained by large $\nu_{\mu} - \nu_{\tau}$ mixing, and (for the mass matrix structure we examined) the solar neutrino deficit is due to nearly maximal electron neutrino vacuum oscillations, which can be either `just-so' or `energy-independent'. We presented an explicit model, which is an anomaly free Abelian extension of the SM, yielding this type of phenomenology, although there are unresolved problems in the scalar potential. This model is an extension of a model which gives a realistic 3 parameter fit to the charged fermion masses and mixings. It gives an `energy-independent' solar neutrino suppression of $1/2$, with $\Delta m^2_{solar} \sim 10^{-4} \, \mbox{eV}^2$. We also predict that the signal at LSND will not be confirmed by other experiments, and that the neutrinos will not make a significant contribution to hot dark matter. The prospects for examining this scenario are good. Experiments such as SNO \cite{SNO}, Borexino \cite{Borexino} and KamLand \cite{Kamland} should provide us with more information on the solar neutrino spectrum. Super-Kamiokande will also provide data on the day-night asymmetry and seasonal variations which will be important in determining the type of solution to the solar neutrino problem. 
Long baseline experiments such as K2K \cite{K2K} and MINOS \cite{MINOS} should enable us to confirm the nature of the atmospheric neutrino oscillations with a better understood neutrino source, and should tell us whether the $\nu_{\mu}$ oscillations are to $\nu_{\tau}$ or a sterile neutrino. The LSND result will also be further tested by Karmen at $95\%$ C.L., and definitively by MiniBoone; in neither of these experiments would we expect evidence of oscillations to be found. In conclusion, we predict the atmospheric neutrino problem to be due to large $\nu_{\mu} - \nu_{\tau}$ oscillations with $\Delta m^2 \sim 10^{-3} \, \mbox{eV}^2$, and the solar neutrino deficit to be due to electron neutrino vacuum oscillations of either the `just-so' or `energy-independent' type. This scenario should be confirmed or ruled out by a number of experiments in the near future. \section*{Acknowledgements} H.B.N. and C.F. acknowledge funding from INTAS 93-3316-ext, and the EU grant HMC 94-0621. M.G. is grateful for a PPARC studentship. We would also like to thank M. Jezabek for useful discussions.
\section{Introduction} \label{sec:intro} It has been noted since the seminal work in Refs.\ \cite{barrois,BL} that the quark-quark interaction in the color antitriplet channel is attractive and drives a Cooper pairing instability in quark matter in the limit of high density in which the Fermi energy of the quarks dominates over the one-gluon exchange interaction energy. Because of the relativistic nature of high density quark matter, the color magnetic (transverse) force plays a dominant role in Cooper pairing. This interaction is nonlocal in time just like the electron-phonon interaction in ordinary superconductors \cite{nambu,eliashberg}, but in contrast is long-ranged in the absence of static screening. This long-range nature gives rise to a different dependence of the pairing gap on the QCD coupling constant $g$ from the BCS result \cite{son}. At zero temperature, up to leading order in $g$, the logarithm of the gap arises from the dynamically screened magnetic force that involves Landau damped virtual gluons in a normal medium. Corrections to the gluon self-energy (polarization function) by the gap do not affect the logarithm of the gap up to subleading order in $g$ at zero temperature \cite{rischke}. This is because the gluon self-energy is modified by the gap significantly only at gluon energies of $\lesssim T_c$. The influence of such corrections on the gap equation near the transition temperature $T_c$, however, has yet to be examined. In this paper we investigate the structure of the gap equation near $T_c$ by including the polarization effects of the color superconducting medium on exchanged gluons. We then estimate corrections thereby induced to the parameters characterizing the fourth order terms in the Ginzburg-Landau free energy of a weakly coupled uniform superfluid of massless three-flavor quarks. 
These corrections, divided by the weak coupling value, are of order $g$, in contrast with the case of a short-range pairing interaction in which the corrections generally contain a factor proportional to the ratio of the transition temperature to the Fermi energy. We find that the polarization corrections keep the color-flavor locked phase the most stable just below $T_c$. Throughout this paper, we consider a system of three-flavor ($uds$) and three-color ($RGB$) massless quarks at temperature $T$ and baryon chemical potential $\mu$, and use units $\hbar=c=1$. We assume that the Fermi momentum is common to all colors and flavors. \section{Gap equation} \label{sec:gap} In this section we address the question of how the gap equation relevant in the weak coupling regime, in which the pairing interaction is induced by one-gluon exchange, is modified by the polarization effects of the color superconducting medium. For this purpose we first consider a $J^P=0^+$ pairing state that is $ud$-isoscalar and $RG$-color antitriplet, since this is one of the simplest states that belong to a color and flavor antisymmetric channel with $J^P=0^+$. This channel has a common transition temperature in the limit of weak coupling \cite{BLR}: \begin{equation} T_c= \frac{2e^\gamma}{\pi} e^{-(\pi^2+4)/8}\ ~\frac{b\mu}{3} ~{\rm exp} \left(-\frac{3\pi^2}{\sqrt{2} g}\right), \label{tc} \end{equation} where ${e^\gamma}/{\pi} = 0.5669\ldots$ and $b=256 \pi^4 (2/3 g^2)^{5/2}$. For the isoscalar pairing state of interest here, a nonzero excitation gap $d({\bf k})$ is open below $T_c$ for quasiparticles having either flavor $u$ or $d$ and either color $R$ or $G$. Here ${\bf k}$ is the momentum associated with the relative coordinate of a quark Cooper pair. 
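To get a feel for the scales involved, eq.\ (\ref{tc}) can be evaluated numerically (a sketch of ours; the inputs $g = 3.5$ and $\mu = 1200$ MeV are illustrative sample values, not parameters fixed in this paper):

```python
import math

# Weak-coupling transition temperature, eq. (tc), with
# b = 256 pi^4 (2/(3 g^2))^{5/2}; mu and the result are in MeV.
GAMMA_E = 0.5772156649015329

def T_c(g, mu):
    b = 256 * math.pi**4 * (2.0 / (3.0 * g**2))**2.5
    return (2 * math.exp(GAMMA_E) / math.pi
            * math.exp(-(math.pi**2 + 4) / 8)
            * b * mu / 3
            * math.exp(-3 * math.pi**2 / (math.sqrt(2) * g)))

print(f"T_c(g=3.5, mu=1200 MeV) = {T_c(3.5, 1200.0):.2f} MeV")
```

$T_c$ scales linearly with $\mu$ and, despite the $1/g^5$ contained in $b$, grows with $g$ in this range because of the $\exp(-3\pi^2/\sqrt{2}g)$ factor; for these sample inputs $T_c/\mu \ll 1$, consistent with the weak coupling assumption.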
In the case in which only the modification of the one-gluon exchange force by a normal medium is included in the random-phase approximation (RPA) \cite{weldon,BMPR} and the normal state Hartree-Fock correction to the quark propagator is ignored, the gap equation reads \cite{PR,I} \begin{eqnarray} d({\bf k}) &=&\frac{g^{2}}{48\pi^{3}} \int d^3 q [D_T(E({\bf q})-E({\bf k}),{\bf q}-{\bf k}) +D_T(E({\bf q})+E({\bf k}),{\bf q}-{\bf k}) \nonumber \\ & & +D_L(E({\bf q})-E({\bf k}),{\bf q}-{\bf k}) +D_L(E({\bf q})+E({\bf k}),{\bf q}-{\bf k})] \nonumber \\ & & \times d({\bf q}) E^{-1}({\bf q})\tanh\left(\frac{E({\bf q})}{2T}\right)\ , \label{deq} \end{eqnarray} where \begin{equation} E({\bf q})= \left[\left(|{\bf q}|-\frac{\mu}{3}\right)^{2}+ d^2({\bf q})\right]^{1/2} \label{Eofq} \end{equation} is the excitation energy, \begin{equation} D_T(p)\simeq {\rm Re}\left[ \frac{1} {|{\bf p}|^{2}-i\pi m_{D}^{2}p_{0}\theta(\sqrt{\pi}m_{D}/2-|{\bf p}|) /4|{\bf p}|}\right] \label{gluePRT} \end{equation} and \begin{equation} D_L(p)\simeq \frac{1}{|{\bf p}|^{2}+m_{D}^{2}}, \label{gluePRL} \end{equation} with the Debye screening mass \begin{equation} m_{D}=\left[\frac{3g^{2}}{2\pi^{2}}\left(\frac{\mu}{3}\right)^2 +\frac{3g^{2}T^{2}}{2}\right]^{1/2}, \label{debye} \end{equation} characterize the transverse and longitudinal parts of the gluon propagator $D(p)$ in the Landau gauge as \begin{equation} D^{\alpha\beta}_{\mu\nu}(p)=-\delta_{\alpha\beta} [P^T_{\mu\nu}D_T(p)+P^L_{\mu\nu}D_L(p)], \end{equation} with the transverse and longitudinal projection operators \begin{equation} P^{T}_{ij}=\delta_{ij}-\frac{p_{i}p_{j}} {|{\bf p}|^{2}}, ~~ P^{T}_{00}=P^{T}_{0i}=P^{T}_{i0}=0, \end{equation} \begin{equation} P^{L}_{\mu\nu}=\frac{p_{\mu}p_{\nu}}{p^{2}}-g_{\mu\nu}-P^{T}_{\mu\nu}. 
\end{equation} Expressions (\ref{gluePRT}) and (\ref{gluePRL}) are approximate in the sense that they are available in the regime $p_0\ll|{\bf p}|\ll\mu/3$, but duly allow for the Landau damping of transverse virtual gluons and the Debye screening of the longitudinal force in a way sufficient to describe the exact form of the logarithm of the gap magnitude up to subleading order in $g$ \cite{PR}. Note that the Landau damping provides an effective infrared cutoff in the transverse sector, $\sim(\pi m_D^2 |p_0|/4)^{1/3}$, which in turn plays a dominant role in determining the pairing gap. At $T=0$, the solution to the gap equation (\ref{deq}) is known as \cite{PR} \begin{equation} d({\bf k})=\frac23 b\mu e^{-\pi/2{\bar g}} \sin({\bar g} x), \label{prgap} \end{equation} where ${\bar g}\equiv g/3\sqrt2 \pi$, and \begin{equation} x\equiv\ln\left[\frac{2b\mu/3}{||{\bf k}|-\mu/3|+E({\bf k})}\right]. \label{x} \end{equation} The factor $\sin({\bar g}x)$ ensures that the gap is appreciable only for momenta ${\bf k}$ close to the Fermi surface. The exponential term and the sinusoidal $x$ dependence arise from nearly static, Landau-damped magnetic gluons that mediate the long-range part of the magnetic interactions, while both the higher frequency magnetic gluons and Debye-screened electric gluons play a dominant role in determining the pre-exponential factor. For later comparison with the case near $T_c$, we write down the equation for the magnitude of the gap on the Fermi surface, $d_F\equiv d(|{\bf k}|=\mu/3)$, which can be derived from Eq.\ (\ref{deq}) as \cite{PR} \begin{equation} d_F=\frac{2g^2}{(3\pi^2)^2} \left[\ln^2\left(\frac{2\delta}{d_F}\right) +b'\ln\left(\frac{2\delta}{d_F}\right)\right]d_F. \label{gapeq0} \end{equation} Here the cutoff $\delta$, obeying $d_F\ll\delta \ll m_D$, is chosen so that $d(|{\bf k}|>\delta)$ is vanishingly small, and $b'=2\ln(b\mu/3\delta)$. 
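Dividing by $d_F$, eq.\ (\ref{gapeq0}) is a quadratic equation for $L \equiv \ln(2\delta/d_F)$ and can be solved in closed form (a small illustration of ours; $b'$ is treated as a free input and the value of $g$ below is a sample choice):

```python
import math

# Closed-form solution of eq. (gapeq0):
# 1 = (2 g^2 / (3 pi^2)^2) [L^2 + b' L],  with L = ln(2 delta / d_F).
def log_gap(g, b_prime):
    c = (3 * math.pi**2)**2 / (2 * g**2)
    return 0.5 * (-b_prime + math.sqrt(b_prime**2 + 4 * c))

g = 3.5
L_leading = 3 * math.pi**2 / (math.sqrt(2) * g)   # soft-magnetic-gluon result
print(f"L(b'=0) = {log_gap(g, 0.0):.4f} vs 3 pi^2 / sqrt(2) g = {L_leading:.4f}")
print(f"L(b'=2) = {log_gap(g, 2.0):.4f}")
```

For $b'=0$ one recovers $L = 3\pi^2/\sqrt{2}g$, i.e. the exponent in eq.\ (\ref{prgap}) generated by the soft Landau-damped magnetic gluons alone; since $d_F = 2\delta\,e^{-L}$, a positive $b'$ lowers $L$ and hence enhances the gap, which is how the nonstatic magnetic gluons and Debye-screened electric gluons feed into the pre-exponential factor.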
The term associated with $\ln^2(2\delta/d_F)$ comes from soft Landau-damped magnetic gluons, while the term associated with $\ln(2\delta/d_F)$ comes from nonstatic magnetic gluons and Debye-screened electric gluons. The overall coefficient of the $\mu/g^5$ in the pre-exponential factor in Eq.\ (\ref{prgap}) is correct up to a factor of order unity since the quasiparticle wave function renormalization ignored here results in a factor $\exp[-(\pi^2+4)/8]$ \cite{wang}, which appears also in the weak coupling expression for $T_c$, Eq.\ (\ref{tc}). (This renormalization affects the sinusoidal $x$ dependence only through a factor of order $g^2 x$.) On the other hand, polarization by the color superconducting medium, which gives rise to the gap dependence of the pairing interaction, provides even higher order corrections to the zero-temperature gap \cite{rischke}. Near $T_c$, the momentum dependence of the gap can be set equal to that at $T=0$, as in the usual BCS case \cite{PR}. Consequently, \begin{equation} d({\bf q},T)=d_F(T)\sin({\bar g}y), \end{equation} where \begin{equation} y\equiv\ln\left[\frac{2b\mu/3}{||{\bf q}|-\mu/3|+E({\bf q},T=0)}\right]. \label{y} \end{equation} Then, the gap magnitude $d_F$ can be determined by expanding the gap equation (\ref{deq}) up to ${\cal O}(d^3)$ as \begin{eqnarray} d_F&=&\frac{g^{2}}{18\pi^{2}} \int_{0}^{\delta} d(|{\bf q}|-\mu/3) \ln\left(\frac{b\mu/3}{||{\bf q}|-\mu/3|}\right) \nonumber \\ && \times \left\{ d({\bf q}) \frac{\tanh\left(||{\bf q}|-\mu/3|/2T\right)}{||{\bf q}|-\mu/3|} +d^3({\bf q}) \frac{1}{2||{\bf q}|-\mu/3|}\frac{d}{d||{\bf q}|-\mu/3|} \left[\frac{\tanh\left(||{\bf q}|-\mu/3|/2T\right)}{||{\bf q}|-\mu/3|} \right]+\cdots \right\}, \end{eqnarray} where we have noted that the momentum region, ${\bf k}\approx{\bf q}$, contributes dominantly to the integral in Eq.\ (\ref{deq}). 
We thus obtain \begin{equation} d_F = \left(1-\frac{\pi{\bar g}}{2}\frac{T-T_c}{T_c}\right)d_F -\frac{7\zeta(3){\bar g}}{16 \pi T_c^2} d_F^3 + {\cal O}(d_F^5), \label{geqnct} \end{equation} with the zeta function $\zeta(3)=1.2020\ldots$. Here the coefficients affixed to $d_F$ and $d_F^3$ include the leading contributions with respect to $g$, and we have noted that at $T=T_c$, \begin{eqnarray} 1&=&\frac{g^{2}}{18\pi^{2}} \int_{0}^{\delta} d(|{\bf q}|-\mu/3) \ln\left(\frac{b\mu/3}{||{\bf q}|-\mu/3|}\right) \sin({\bar g}y) \frac{\tanh\left(||{\bf q}|-\mu/3|/2T_c\right)}{||{\bf q}|-\mu/3|} \label{thouless} \end{eqnarray} is satisfied. [In the absence of the quasiparticle wave function renormalization, the solution to Eq.\ (\ref{thouless}) reproduces expression (\ref{tc}) except for a factor $\exp[-(\pi^2+4)/8]$.] The solution to Eq.\ (\ref{geqnct}) reads \begin{equation} d_F=\left[\frac{8\pi^2 T_c^2}{7\zeta(3)}\frac{T_c-T}{T_c}\right]^{1/2}. \label{sol0} \end{equation} As it should, Eq.\ (\ref{geqnct}) is the same as the known result obtained from the Ginzburg-Landau theory (see the next section). We now introduce the effect of the color superconducting medium on the gluon propagator within the RPA. The normal gluon propagator characterized by Eqs.\ (\ref{gluePRT}) and (\ref{gluePRL}) is modified by the pairing gap, in a way dependent on $\alpha$, as \cite{dirk} \begin{equation} D_T^{\alpha}(p)\simeq {\rm Re}\left[ \frac{1} {|{\bf p}|^{2}+(m_M^\alpha)^2 f({\bf p}) -i\pi m_{D}^{2}p_{0}\theta(\sqrt{\pi}m_{D}/2-|{\bf p}|) /4|{\bf p}|}\right] \label{gluePRT2} \end{equation} and \begin{equation} D_L^{\alpha}(p)\simeq \frac{1}{|{\bf p}|^{2}+m_{D}^{2}-3(m_M^\alpha)^2 h({\bf p})}.
\label{gluePRL2} \end{equation} Here only leading corrections by the pairing gap have been retained, and \begin{equation} (m_M^\alpha)^2 = \left\{ \begin{array}{ll} 0, & \quad \mbox{$\alpha=1,2,3,$} \\ g^2 K_T d_F^2, & \quad \mbox{$\alpha=4,5,6,7,$} \\ (4/3)g^2 K_T d_F^2, & \quad \mbox{$\alpha=8,$} \end{array} \right. \label{mei2SC} \end{equation} are the Meissner screening masses \cite{II} with the stiffness parameter in the weak coupling limit: \begin{equation} K_T=\frac{7\zeta(3)}{24(\pi T_c)^2}N\left(\frac{\mu}{3}\right), \label{KT} \end{equation} where \begin{equation} N\left(\frac{\mu}{3}\right)=\frac{1}{2\pi^2}\left(\frac{\mu}{3}\right)^2 \label{DOS} \end{equation} is the ideal gas density of states at the Fermi surface. $f({\bf p})$ and $h({\bf p})$ are the dimensionless positive definite functions that characterize the ${\cal O}(d^2)$ corrections to the transverse and longitudinal parts of the irreducible particle-hole bubble \cite{dirk}. These functions reduce to unity in the limit of ${\bf p}\to0$, while decreasing to zero with increasing $|{\bf p}|$. We note that the Landau damping term, corresponding to the energy-dependent term in Eq.\ (\ref{gluePRT2}), does undergo corrections by a factor of $1+{\cal O}(d^2/|{\bf p}|^2)$, but they lead to higher order corrections to the gap equation in $T_c/\mu$ as compared with those coming from the Meissner term $(m_M^{\alpha})^2 f({\bf p})$. Expressions (\ref{gluePRT2}) and (\ref{gluePRL2}) are the straightforward extension of the normal medium forms (\ref{gluePRT}) and (\ref{gluePRL}) to the case of the color superconducting medium. These expressions for $D_T$ and $D_L$ retain consistency with the transverse and longitudinal sum rules obeyed by the static, long-wavelength gluon propagator in normal quark matter \cite{PR} and in color superconducting quark matter \cite{II}. It is important to note that the leading feedback effect of the pairing gap lies in the magnetic sector. 
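The size of this leading feedback can be made explicit by combining eqs.\ (\ref{mei2SC}), (\ref{KT}) and (\ref{DOS}) with the Ginzburg-Landau gap $d_F^2 = [8\pi^2 T_c^2/7\zeta(3)](1-T/T_c)$ (a numerical sketch of ours; the $T^2$ piece of the Debye mass (\ref{debye}) is dropped since $T_c \ll \mu$, and the sample values of $g$ and $\mu$ are illustrative only):

```python
import math

# Ratio (m_M^{alpha=8} / m_D)^2 just below T_c.
ZETA3 = 1.2020569031595943

def meissner_over_debye_sq(g, mu, T_over_Tc, Tc=1.0):
    N = (mu / 3)**2 / (2 * math.pi**2)                  # density of states
    K_T = 7 * ZETA3 / (24 * (math.pi * Tc)**2) * N      # stiffness parameter
    dF2 = 8 * math.pi**2 * Tc**2 / (7 * ZETA3) * (1 - T_over_Tc)
    mM2 = (4.0 / 3.0) * g**2 * K_T * dF2                # alpha = 8 Meissner mass
    mD2 = 3 * g**2 / (2 * math.pi**2) * (mu / 3)**2     # Debye mass for T << mu
    return mM2 / mD2

print(meissner_over_debye_sq(3.5, 1200.0, 0.9))
```

All dependence on $g$, $\mu$ and $T_c$ cancels, leaving $(m_M^{\alpha=8}/m_D)^2 = (4/27)(1-T/T_c)$: the Meissner correction is not suppressed by extra powers of $g$ but vanishes linearly at $T_c$, so just below $T_c$ the feedback of the pairing on the exchanged gluons is concentrated in the magnetic sector.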
This is partly because the corrections to the gluon propagator are roughly of order $m_M^2/(m_D^2 |p_0|)^{2/3}$ in the magnetic sector, while of order $m_M^2/m_D^2$ in the electric sector, and partly because the gluon energy range $|p_0|\ll m_D$ plays a dominant role in the gap equation. In deriving such corrections in the magnetic sector, we first expand $D_T$, Eq.\ (\ref{gluePRT2}), with respect to $m_M^2$, and then substitute into $f({\bf p})$ the form relevant near $T_c$: \begin{equation} f({\bf p})=\frac{6}{7\zeta(3)}\sum_{s=0}^{\infty} \int_0^1 dx \frac{1-x^2}{(s+1/2)[4(s+1/2)^2+(|{\bf p}|x/2\pi T_c)^2]}. \label{f} \end{equation} This form is identical to that encountered in the usual BCS case \cite{LL} since in both cases a pairing gap is open for quasiparticle momenta so close to the Fermi surface that quasiparticles and quasiholes having momenta ${\bf k}_1$ and ${\bf k}_2$ with ${\bf k}_1+{\bf k}_2={\bf p}$ and $|{\bf k}_1|\simeq|{\bf k}_2|\simeq \mu/3$ dominate the ${\cal O}(d^2)$ corrections to the particle-hole bubble. In the London limit ($|{\bf p}|\to0$), one can set $f\simeq1$, while in the Pippard limit ($|{\bf p}|\to\infty$) $f$ behaves as $\propto 1/|{\bf p}|$. As we shall see, the Pippard regime ($|{\bf p}|>2\pi T_c$) is as important to the gap equation as the London regime ($|{\bf p}|<2\pi T_c$). The gap equation modified by the leading feedback effect can be obtained by replacing $D_T$ by \begin{equation} D_T^{\alpha}(p)\simeq {\rm Re}\left[ \frac{1} {|{\bf p}|^{2}-i\pi m_{D}^{2}p_{0}\theta(\sqrt{\pi}m_{D}/2-|{\bf p}|) /4|{\bf p}|}\right] -(m_M^\alpha)^2 f({\bf p}) {\rm Re}\left\{ \frac{1} {[|{\bf p}|^{2}-i\pi m_{D}^{2}p_{0}\theta(\sqrt{\pi}m_{D}/2-|{\bf p}|) /4|{\bf p}|]^2}\right\} \label{gluePRT3} \end{equation} in Eq.\ (\ref{deq}). The modification associated with $m_M^\alpha$ works only for $\alpha=8$. 
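The stated normalization of $f$ can be checked directly from Eq.\ (\ref{f}). A minimal numerical sketch (the truncation smax and the midpoint-rule resolution nx are arbitrary convergence parameters):

```python
import math

ZETA3 = 1.2020569031595943  # zeta(3)

def f_transverse(y, smax=400, nx=200):
    """Eq. (f) with y = |p|/(2 pi T_c):
    f = 6/(7 zeta(3)) sum_s int_0^1 dx (1-x^2)/{(s+1/2)[4(s+1/2)^2+(y x)^2]}."""
    total = 0.0
    for s in range(smax):
        c = s + 0.5
        acc = 0.0
        for i in range(nx):
            x = (i + 0.5) / nx  # midpoint rule on [0, 1]
            acc += (1.0 - x * x) / (c * (4.0 * c * c + (y * x) ** 2))
        total += acc / nx
    return 6.0 / (7.0 * ZETA3) * total
```

One finds $f\to1$ as $y\to0$ (using $\sum_s (s+1/2)^{-3}=7\zeta(3)$) and a monotonic decrease with increasing momentum, as described above.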
This is partly because the Meissner masses vanish for $\alpha=1$--3 [see Eq.\ (\ref{mei2SC})] and partly because the contributions of gluons of $\alpha=4$--7 to the gap equation vanish due to the color structure of the gap \cite{PR}. Near $T_c$, the gap equation at $|{\bf k}|=\mu/3$ thus reads \begin{eqnarray} d_F&=&\left(1-\frac{\pi{\bar g}}{2}\frac{T-T_c}{T_c}\right)d_F -\frac{7\zeta(3){\bar g}}{16 \pi T_c^2} d_F^3 \nonumber \\ && +\frac{g^{2}}{8\pi^{2}} \int_{0}^{\delta} d(|{\bf q}|-\mu/3) d({\bf q}) \frac{\tanh\left(||{\bf q}|-\mu/3|/2T_c\right)}{||{\bf q}|-\mu/3|} \left(\frac{1}{12}\right) (m_M^{\alpha=8})^2 \frac{1}{2|{\bf q}|^2}F({\bf q}) \nonumber \\ && + {\cal O}(d_F^5), \label{geqnctmod} \end{eqnarray} where \begin{equation} F({\bf q})=\frac13 \int_{-1}^{1}d(\cos\theta) \frac{1-\cos\theta}{(1-\cos\theta)^3 + (\pi m_D^2 ||{\bf q}|-\mu/3|/8\sqrt2 |{\bf q}|^3)^2 \theta(\sqrt{\pi}m_{D}/2-\sqrt2|{\bf q}|\sqrt{1-\cos\theta})} \left[f(y)+y\frac{df(y)}{dy}\right] \label{fq} \end{equation} with $y=|{\bf q}|\sqrt{1-\cos\theta}/\sqrt2 \pi T_c$. Here, $\theta$ is the angle between ${\bf k}$ and ${\bf q}$, we have replaced $|{\bf k}|$ by $\mu/3$ in the gluon propagator, and the factor $1/12$ comes from the color vertex part of $\alpha=8$. Note that the leading feedback term in Eq.\ (\ref{geqnctmod}) is of third order in $d_F$ and hence does not affect $T_c$. We proceed to examine the leading corrections to the gap equation due to the color superconducting medium. For this purpose, it is useful to divide the gluon momentum $|{\bf p}|\simeq\sqrt2 |{\bf q}|\sqrt{1-\cos\theta}$ and energy $p_0\simeq||{\bf q}|-\mu/3|$ into several regimes shown in Fig.\ 1. The boundary $y=1$ corresponds to the gluon momentum $|{\bf p}|=2\pi T_c$.
For $y<1$ (London regime) and $y>1$ (Pippard regime), $f(y)$ has the following forms: \begin{equation} f(y)=\sum_{n=0}^{\infty} a_n y^{2n}, ~~a_n=\frac{24}{7\zeta(3)} \frac{(-1)^n \zeta(2n+3)}{(2n+1)(2n+3)}\left[1-2^{-(2n+3)}\right], \label{fy1} \end{equation} and \begin{equation} f(y)=\frac{6}{7\zeta(3)y}\sum_{s=0}^{\infty}\left\{ \left[\frac{1}{2(s+1/2)^2}+\frac{2}{y^2}\right]\tan^{-1}\frac{y}{2(s+1/2)} -\frac{1}{(s+1/2)y}\right\}. \label{fy2} \end{equation} At $y=1$, as can be seen from Eq.\ (\ref{fq}), the effective infrared cutoff due to the Landau damping of transverse gluons is nonnegligible when $p_0$ is larger than $32\pi^2 T_c^3/m_D^2$. Other important scales are $p_0=\pi T_c$, above which $\tanh(p_0/2T_c)$ in Eq.\ (\ref{geqnctmod}) approaches unity exponentially, and $|{\bf p}|=\sqrt\pi m_D/2$, above which transverse gluons no longer undergo Landau damping. We note that the gap can be regarded as flat in regions a) and c). We also note that the timelike regime ($p_0>|{\bf p}|$), in which the gluon propagator is not described well by the form (\ref{gluePRT3}), can be safely ignored since in this regime no feedback corrections to the gap equation occur up to leading order in $T_c/\mu$. In fact, the gap corrections to the gluon propagator vanish like $d^2/p_0^2$ with increasing $p_0$, as in the $T=0$ case \cite{rischke}. \begin{figure}[t] \begin{center} \includegraphics[width=10cm]{fig1.eps} \end{center} \vspace{-0.5cm} \caption{\label{fig1} The energy-momentum regimes of exchanged magnetic gluons. } \end{figure} At $2\pi T_c < |{\bf p}|$, we can use expression (\ref{fy2}) for $f(y)$. From this expression, we obtain \begin{equation} f(y)+y\frac{df(y)}{dy} =\frac{12}{7\zeta(3)y^2} {\cal F}(y), \label{fydfy0} \end{equation} with \begin{equation} {\cal F}(y)= \sum_{s=0}^{\infty}\left[-\frac{2}{y}\tan^{-1}\frac{y}{2(s+1/2)} +\frac{1}{s+1/2}\right].
\end{equation} The large $y$ asymptotic behavior of $f(y)+ydf(y)/dy$ is then \begin{equation} f(y)+y\frac{df(y)}{dy}\approx\frac{12}{7\zeta(3)y^2}\ln y. \label{fydfy} \end{equation} This behavior is different from the $\propto y^{-1}$ falloff exhibited by $f(y)$ itself. As a result, transverse gluons of momenta near $T_c$ rather than near the Pippard limit are essential to calculations of the feedback effect in the magnetic sector. This contrasts with the case of the weak coupling limit, in which transverse gluons of momenta large compared with $T_c$ dominate the pairing interaction since for such momenta, the factor $m_D^2 p_0/|{\bf p}|$ characterizing the Landau damping in the propagator (\ref{gluePRT}) is sufficiently small that the pairing interaction remains essentially long ranged. Using Eqs.\ (\ref{fq}) and (\ref{fy2}), we calculate the contribution to the gap equation (\ref{geqnctmod}) from regions c) and d). The momentum range covering these regions corresponds to the range of $\theta$ satisfying $-1 < \cos\theta < 1-2\pi^2 T_c^2/|{\bf q}|^2$. Up to leading order in $g$, the result from region c) reads \begin{equation} \frac{\pi[3\pi^3-28\zeta(3)f(1)]{\bar g}^2 (m_M^{\alpha=8})^2 d_F} {112\zeta(3)m_D^2}, \label{regionc} \end{equation} while the contribution from region d) is of higher order in $T_c/\mu$ and thus can be ignored. At $|{\bf p}|<2\pi T_c$, where expression (\ref{fy1}) is available for $f(y)$, $f(y)$ and hence $f(y)+ydf(y)/dy$ are almost flat. In $F({\bf q})$, this momentum range corresponds to the range of $\theta$ satisfying $1-2\pi^2 T_c^2/|{\bf q}|^2 < \cos\theta < 1$. The contribution to the gap equation (\ref{geqnctmod}) from region a) becomes, to leading order in $g$, \begin{equation} \frac{\pi{\bar g}^2 (m_M^{\alpha=8})^2 d_F}{4m_D^2}\sum_{n=0}^\infty a_n, \label{regiona} \end{equation} while that from region b) is of higher order in $T_c/\mu$. It is remarkable that the term (\ref{regiona}) is comparable to the term (\ref{regionc}).
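The logarithmic behavior in Eq.\ (\ref{fydfy}) can be verified numerically from the sum representation (\ref{fy2}): the combination $[f(y)+yf'(y)]\,7\zeta(3)y^2/12$ should grow like $\ln y$ at large $y$. A sketch (the sum cutoff and the finite-difference step are arbitrary numerical choices):

```python
import math

ZETA3 = 1.2020569031595943

def f_pippard(y, smax=4000):
    """f(y) from Eq. (fy2); the summand decays like 1/(s+1/2)^3."""
    total = 0.0
    for s in range(smax):
        c = s + 0.5
        total += (1.0 / (2.0 * c * c) + 2.0 / (y * y)) * math.atan(y / (2.0 * c)) \
                 - 1.0 / (c * y)
    return 6.0 / (7.0 * ZETA3 * y) * total

def f_plus_ydf(y, rel_step=1e-3):
    """f(y) + y f'(y) = d[y f(y)]/dy, by a central finite difference."""
    h = rel_step * y
    return ((y + h) * f_pippard(y + h) - (y - h) * f_pippard(y - h)) / (2.0 * h)
```

Evaluating the combination at $y=100$ and $y=400$ gives a difference close to $\ln 4$, consistent with Eq.\ (\ref{fydfy}).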
This suggests that transverse gluons of momenta below and above $2\pi T_c$ are equally important to the feedback effect. We also note that these terms are of order $g^2 d_F^3/T_c^2$ and thus suppressed by only one power of $g$ with respect to the term proportional to $d_F^3$ in the gap equation (\ref{geqnct}) in the weak coupling limit. This contrasts with the case of a short-range pairing force, in which the leading correction to the third order term due to the superfluid medium is suppressed by one power of $T_c/\mu$ \cite{He3}. We remark that the scale of the gluon energy dominant in the gap equation is of order $32\pi^2T_c^3/m_D^2$. Since this is much smaller than the typical momentum scale $\sim 2\pi T_c$, we can safely take the static limit of the modification due to the color superconducting medium. We finally rewrite the gap equation (\ref{geqnctmod}) in such a way that the feedback term has a coefficient valid up to leading order in $g$, i.e., by combining the contributions (\ref{regionc}) and (\ref{regiona}) from regions c) and a). The result is \begin{equation} d_F=\left(1-\frac{\pi{\bar g}}{2}\frac{T-T_c}{T_c}\right)d_F -\frac{7\zeta(3){\bar g}}{16 \pi T_c^2} d_F^3 +\frac{7\zeta(3)C{\bar g}^2}{32\pi T_c^2} d_F^3 + {\cal O}(d_F^5), \label{geqnctmod2} \end{equation} where $C=\pi^3/63\zeta(3)=0.409434\ldots$. We thus find that the leading feedback effect near $T_c$ acts to increase the gap squared of the isoscalar pairing state by a factor of $(1-C{\bar g}/2)^{-1}$. This is due to the fact that the feedback effect manifests itself as Meissner screening of the color magnetic force of color index $\alpha=8$; this force is repulsive in contrast to the attractive case of $\alpha=1$--3 dominating the pairing interaction. We remark that in the $T=0$ case, in which expansion of the gap equation with respect to the gap magnitude is not valid, the terms associated with the logarithm of the gap in the gap equation (\ref{gapeq0}) mainly determine the gap magnitude.
In this case, the feedback effect provides corrections beyond these logarithmic terms \cite{rischke}. For the purpose of calculating corrections to the Ginzburg-Landau parameters in the next section, it is instructive to repeat the above calculations for a color-flavor locked (CFL) state, one of the $J^P=0^+$, color and flavor antisymmetric pairing states. In the CFL state, all three flavors and colors are equally gapped in such a way that the pairing gap between a quark of color $a$ and flavor $i$ and a quark of color $b$ and flavor $j$ is characterized by $\kappa(\delta_{ai}\delta_{bj}-\delta_{aj}\delta_{bi})$. For the on-shell gap on the Fermi surface, $\kappa_F$, the gap equation near $T_c$ can be written in a form similar to Eq.\ (\ref{geqnctmod2}) as \begin{equation} \kappa_F=\left(1-\frac{\pi{\bar g}}{2}\frac{T-T_c}{T_c}\right)\kappa_F -\frac{7\zeta(3){\bar g}}{8 \pi T_c^2} \kappa_F^3 -\frac{21\zeta(3)C{\bar g}^2}{8\pi T_c^2} \kappa_F^3 + {\cal O}(\kappa_F^5). \label{geqcfl} \end{equation} Here we have used the Meissner masses in the CFL state \cite{II}, i.e., $(m_M^\alpha)^2=2g^2 K_T \kappa_F^2$ for $\alpha=1$--8. In the limit of $m_M^\alpha\to0$, Eq.\ (\ref{geqcfl}) is equivalent to Eq.\ (104) in Ref.\ \cite{I}. We thus find that the leading feedback effect acts to reduce the gap squared by a factor of $(1+3C{\bar g})^{-1}$. This reduction stems from the fact that Meissner screening of the color magnetic force takes effect equally for $\alpha=1$--8. Note the contrast with the isoscalar pairing state, for which the feedback effect acts to increase the gap magnitude. \section{Ginzburg-Landau free energy} \label{sec:GL} We proceed to derive the Ginzburg-Landau free energy of a weakly coupled uniform superfluid of massless three-flavor quarks from the gap equation near $T_c$ as examined in the previous section.
Instead of focusing on the isoscalar and CFL pairing states, it is convenient to construct the Ginzburg-Landau free energy for a more general color and flavor antisymmetric channel with $J^P=0^+$ as in Ref.\ \cite{I}. This is because all states belonging to this channel have a common value of $T_c$, which reduces to Eq.\ (\ref{tc}) in the weak coupling limit. This channel is characterized by a complex $3\times3$ gap matrix, $({\bf d}_a)_i$, in color-flavor space \cite{I}, where $a$ ($i$) is the color (flavor) other than the two colors (flavors) involved in Cooper pairing. This gap is defined on the mass shell of the quark quasiparticle of momenta on the Fermi surface, and thus reduces to $\delta_{aB}\delta_{is}d_F$ in the isoscalar state and to $\delta_{ai}\kappa_F$ in the CFL state. For $({\bf d}_a)_i$, one can write down the thermodynamic potential density difference $\Delta\Omega=\Omega_s-\Omega_n$ between the superfluid and normal phases near $T_c$ as \cite{I} \begin{equation} \Delta\Omega= \bar{\alpha} \sum_{a}|{\mathbf{d}}_a|^2 +\beta_1(\sum_{a}|{\mathbf{d}}_a|^2)^2 +\beta_2 \sum_{ab}|{\mathbf{d}}_a^{\ast}\cdot {\mathbf{d}}_b|^2. \label{gl} \end{equation} Here each term is invariant with respect to $U(1)$ global gauge transformations and color and flavor rotations. In evaluating the coefficients in Eq.\ (\ref{gl}) by including the leading feedback effect, it is convenient to integrate the gap equations (\ref{geqnctmod2}) and (\ref{geqcfl}) and then map the results onto Eq.\ (\ref{gl}). We thus obtain \begin{eqnarray} {\bar\alpha}&=&4N(\mu/3)\ln\left(\frac{T}{T_{c}}\right), \\ \beta_{1}&=&\frac{7\zeta(3)}{8(\pi T_c)^{2}} \left(1+\frac{13}{2}C{\bar g}\right) N(\mu/3), \label{beta1n} \\ \beta_2&=&\frac{7\zeta(3)}{8(\pi T_c)^{2}} \left(1-\frac{15}{2}C{\bar g}\right) N(\mu/3), \label{beta2n} \end{eqnarray} which reproduce the known relation $\beta_1=\beta_2$ in the weak coupling limit \cite{I}.
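The constant $C$ and the structure of Eqs.\ (\ref{beta1n}) and (\ref{beta2n}) can be checked numerically. With the quartic combinations that follow from minimizing Eq.\ (\ref{gl}) over the two orderings (assumed here: $\beta_1+\beta_2$ for the isoscalar/2SC-type order parameter and $3\beta_1+\beta_2$ for the CFL one), the corrected coefficients reproduce exactly the gap-squared factors $(1-C\bar g/2)^{-1}$ and $(1+3C\bar g)^{-1}$ found in the previous section. A minimal sketch in arbitrary units ($T_c=N=1$):

```python
import math

ZETA3 = 1.2020569031595943                 # zeta(3)
C = math.pi ** 3 / (63.0 * ZETA3)          # constant in Eq. (geqnctmod2), ~0.409434

def beta1(gbar, Tc=1.0, N=1.0):
    """Eq. (beta1n): beta_1 including the O(g) feedback correction."""
    return 7.0 * ZETA3 / (8.0 * (math.pi * Tc) ** 2) * (1.0 + 6.5 * C * gbar) * N

def beta2(gbar, Tc=1.0, N=1.0):
    """Eq. (beta2n): beta_2 including the O(g) feedback correction."""
    return 7.0 * ZETA3 / (8.0 * (math.pi * Tc) ** 2) * (1.0 - 7.5 * C * gbar) * N
```

For $\bar g=0.1$ this gives $\beta_1/\beta_2\simeq1.8$; the weak coupling limit $\bar g\to0$ restores $\beta_1=\beta_2$.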
We find that up to leading order in $g$, the polarization effects of the color superconducting medium give rise to only ${\cal O}(g)$ corrections to the coefficients $\beta_1$ and $\beta_2$. It is nonetheless important to note that these corrections to $\beta_1$ and $\beta_2$ work in opposite directions, in such a way as to decrease and increase the gap magnitude in the CFL and isoscalar states, respectively. We can thus conclude that whether or not the leading feedback effect acts to reduce the gap magnitude depends on the color structure of the gap. \section{Phase diagram} \label{sec:phase} We turn to the construction of the phase diagram in the space of the parameters characterizing the fourth order terms in the Ginzburg-Landau free energy derived in the previous section. This phase diagram can be obtained by minimizing the thermodynamic potential difference $\Delta\Omega$ with respect to $({\bf d}_a)_i$ for various values of $\beta_1$ and $\beta_2$ \cite{I}. The result, exhibited in Fig.\ 2, is the same as Fig.\ 1 in Ref.\ \cite{I} except that the present values of $\beta_1$ and $\beta_2$ given by Eqs.\ (\ref{beta1n}) and (\ref{beta2n}) include the ${\cal O}(g)$ feedback corrections. This figure shows that even for such values of $\beta_1$ and $\beta_2$, as long as ${\bar g}$ is sufficiently smaller than unity, the CFL phase, which generally satisfies \begin{equation} {\bf d}_{R}^{*}\cdot{\bf d}_{G}={\bf d}_{G}^{*}\cdot{\bf d}_{B} ={\bf d}_{B}^{*}\cdot{\bf d}_{R}=0, ~~ |{\bf d}_{R}|^{2}=|{\bf d}_{G}|^{2}=|{\bf d}_{B}|^{2}, \label{opcfl} \end{equation} is still favored over the two-flavor color superconducting (2SC) state fulfilling \begin{equation} {\bf d}_{R}\parallel{\bf d}_{G}\parallel{\bf d}_{B}. \label{opis} \end{equation} Note that the 2SC state contains the isoscalar state analyzed in Sec.\ \ref{sec:gap}.
\begin{figure}[t] \begin{center} \includegraphics[width=15cm]{fig2.eps} \end{center} \vspace{-0.5cm} \caption{\label{fig2} Phase diagram near $T_c$, exhibiting regions where the 2SC and CFL phases are reached by a second order transition as well as where superfluid states are reached by a first order transition since the overall fourth order term in the Ginzburg-Landau free energy (\ref{gl}) can be negative. The parameters $\beta_1$ and $\beta_2$ are the fourth order coefficients in the Ginzburg-Landau free energy. The cross denotes the weak coupling limit, and the circle denotes the result including the polarization effects of the color superconducting medium with ${\bar g}=0.1$. } \end{figure} \section{Conclusions} \label{sec:concl} We have examined the role played by the gap dependence of the pairing interaction in the gap equation for a weakly coupled uniform superfluid of three-flavor massless quarks near the transition temperature $T_c$. The corrections induced thereby in the parameters characterizing the fourth order terms in the Ginzburg-Landau free energy result in an increase of $\beta_1$ by a factor of $1+13C{\bar g}/2$ and a decrease of $\beta_2$ by a factor of $1-15C{\bar g}/2$. The magnitude of these changes comes from the Meissner screening of the color magnetic force that dominates the interaction between quarks in the weak coupling regime as an essentially long-range force. We thus see a contrast with the case of a short-range pairing interaction, in which the corrections are suppressed by one power of $T_c/\mu$. We also note that the changes in $\beta_1$ and $\beta_2$ are in the direction of increasing the gap magnitude of the 2SC state and decreasing that of the CFL state. This direction reflects the fact that not only the Meissner screening of the color magnetic force but also the color indices of magnetic gluons dominating the pairing interaction depend on the color structure of the pairing gap.
We have finally found that the feedback corrections, as long as ${\bar g}\ll 1$, keep the color-flavor locked phase the most stable just below $T_c$. The present result for the parameters $\beta_1$ and $\beta_2$, mainly through its effect on the gap magnitude near $T_c$, provides a way of studying strong coupling modifications to the previous weak coupling calculations based on the Ginzburg-Landau theory with $\beta_1=\beta_2$. Those calculations include the phase diagram \cite{I} as discussed in Sec.\ \ref{sec:phase} and its extension to nonzero quark masses \cite{IMTH}, responses to rotation and magnetic fields \cite{III,GR,cflmv}, and the fluctuation-induced first order transition \cite{IV}. Qualitatively, however, no significant changes are expected. Nonetheless, the tendency that $\beta_1>\beta_2$, if it persists at low densities, could be significant for the normal-super interfacial energy and the interaction between widely separated magnetic vortices for CFL quark matter. This is because both quantities, which are sensitive to the ratio between $\beta_1$ and $\beta_2$ \cite{GR,cflmv}, control the criterion of whether or not the CFL state can allow magnetic vortices to form. \acknowledgments We are grateful to Gordon Baym, Tetsuo Hatsuda, Taeko Matsuura, and Motoi Tachibana for helpful discussions. We acknowledge the hospitality of the Institute for Nuclear Theory at the University of Washington, where this work was initiated.
\section{Introduction} In atomic physics the term universality refers to phenomena which are a result of a two-body scattering length $a$ much larger than the range $R$ of the underlying potential and do not depend on any further parameters describing the two-body interaction. The non-relativistic three-body system also exhibits universal properties if $a\gg R$, but an additional three-body parameter is needed for the theoretical description of observables. Therefore, one three-body observable (e.g. the position $a_{*0}$ of the minimum of the three-body recombination rate) can be used to predict all other low-energy observables of such systems. A particularly interesting signature of universality in the three-body system is a tower of infinitely many bound states ({\it Efimov states}) in the limit $a = \pm \infty$ with an accumulation point at the scattering threshold and a geometric spectrum: \begin{eqnarray} E^{(n)}_T = (e^{-2\pi/s_0})^{n-n_*} \hbar^2 \kappa^2_* /m, \label{kappa-star} \end{eqnarray} where $\kappa_*$ is the binding wavenumber of the branch of Efimov states labeled by $n_*$. The three-body system therefore displays a discrete scaling symmetry in the universal limit with a scaling factor $e^{\pi/s_0}$. In the case of identical bosons, $s_0 \approx 1.00624$ and the discrete scaling factor is $e^{\pi/s_0} \approx 22.7$. These results were first derived in the 1970's by Vitaly Efimov \cite{Efimov70,Efimov71} and were rederived in the last decade in the framework of effective field theories (EFT) \cite{Bedaque:1998kg,Braaten:2004rn}. Recently, experimental evidence for Efimov physics was found by the Innsbruck group \cite{Grimm06}.
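Before turning to the experiment, the numbers quoted above are easy to check from Eq.\ (\ref{kappa-star}); a minimal sketch:

```python
import math

S0 = 1.00624                           # universal constant for identical bosons
SCALE = math.exp(math.pi / S0)         # discrete scaling factor, ~22.7

def efimov_binding(n, n_star=0, kappa_star=1.0, hbar2_over_m=1.0):
    """Eq. (kappa-star): E_T^(n) = [exp(-2 pi/s0)]^(n - n_star) hbar^2 kappa_star^2 / m."""
    return math.exp(-2.0 * math.pi / S0) ** (n - n_star) * hbar2_over_m * kappa_star ** 2
```

Successive Efimov levels are thus spaced by a factor $e^{2\pi/s_0}\approx515$ in binding energy and $e^{\pi/s_0}\approx22.7$ in length.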
Using a magnetic field to control the scattering length {\it via} a Feshbach resonance, they measured the recombination rate of cold $^{133}$Cs atoms and observed a resonant enhancement in the three-body recombination rate at $a\approx -850 a_0$, which occurs because an Efimov state is close to the 3-atom threshold for that value of $a$. The three-body recombination rate for atoms with large scattering length at non-zero temperature has been calculated with a number of different models or based on the universality of atoms with large scattering lengths \cite{DSE-04,LKJ07,Jonsell06,YFT06,MS07}. However, a striking way to demonstrate universality is to describe observables of one system with information which has been extracted from a completely different system. In \cite{Braaten:2006qx}, the authors considered Efimov's radial laws, which parameterize the three-atom S-matrix in terms of six real universal functions depending only on a dimensionless scaling variable, $x=(m a^2 E/\hbar^2)^{1/2}$, and phase factors which only contain the three-body parameter. In that work, simplifying assumptions justified over a restricted range of $x$ were made to reduce the six universal functions required to parameterize the three-body recombination rate to just a single function. This function was then extracted from microscopic calculations of the recombination rates for $^4$He atoms by Suno {\it et al.}\cite{Suno:2002}. In a recent paper, Shepard \cite{Shepard:2007gj} calculated the recombination rates from atom-dimer elastic scattering phase shifts for four different $^4$He potentials (the so-called HDFB, TTY, LM2M2 and HFDB3FCII potentials) and was able to obtain two universal functions. Here, we relax all but one of the simplifying assumptions made in \cite{Braaten:2006qx} and extract a set of three independent universal functions capable of parameterizing the three-body recombination rate over a wide range of energies.
We test the performance of these universal functions using ``data'' generated from phase shift calculations\cite{Shepard:2007gj} employing artificial short-range $^4$He potentials. Finally, we use the new universal functions to calculate the scattering length and temperature dependent recombination rate for $^{133}$Cs atoms as measured by the Innsbruck group\cite{Grimm06} and comment on our results. \section{Three-Body Recombination} Three-body recombination is a process in which three atoms collide to form a diatomic molecule (dimer). If the scattering length is positive and large compared to the range of the interaction, we have to differentiate between deep and shallow dimers. Shallow dimers have an approximate binding energy of $E_{\rm shallow}\simeq\hbar^2/(m a^2)\ll\hbar^2/(m R^2)$. The binding energy of deep dimers cannot be expressed in terms of the effective range parameters and $E_{\rm deep}\gtrsim\hbar^2/(m R^2)$. If the underlying interaction supports deep bound states, recombination processes can occur for either sign of $a$. In a cold thermal gas of atoms, recombination processes lead to a change in the number density of atoms $n_A$ \begin{equation} \frac{\hbox{d}}{\hbox{d}t}n_A=-L_3\,n_A^3~, \end{equation} where $L_3$ denotes the loss rate constant. The recombination coefficient, to which $L_3$ is proportional, can be decomposed into \begin{equation} K_3(E)=K_{\rm shallow}(E)+K_{\rm deep}(E)~, \end{equation} and the recombination rate into the shallow dimer can be further decomposed into contributions from the channels in which the total orbital angular momentum of the three atoms has a definite quantum number $J$ according to \begin{equation} K_{\rm shallow}(E)=\sum_{J=0}^{\infty}K^{(J)}(E)~. \end{equation} For now, let us consider recombination {\it via} the shallow dimer only.
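As an aside, the loss equation above integrates in closed form to $n_A(t)=n_A(0)/\sqrt{1+2L_3\,n_A(0)^2\,t}$ for constant $L_3$; a short sketch cross-checking this against a direct Euler integration (arbitrary units):

```python
import math

def n_closed(t, n0, L3):
    """Closed-form solution of dn/dt = -L3 n^3 with n(0) = n0."""
    return n0 / math.sqrt(1.0 + 2.0 * L3 * n0 ** 2 * t)

def n_euler(t, n0, L3, steps=200000):
    """Explicit Euler integration of the same loss equation, for cross-checking."""
    n, dt = n0, t / steps
    for _ in range(steps):
        n -= L3 * n ** 3 * dt
    return n
```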
If the collision energy $E$ is small compared to the natural energy scale $\hbar^2/(m R^2)$, the recombination rate $K_{\rm shallow}(E)$ is a universal function of the collision energy $E$, scattering length $a$, and three-body parameter $a_{*0}$. The universal function depends on the dimensionless scaling variable defined as \begin{equation} x=(m a^2E/\hbar^2)^{1/2}~. \end{equation} For $J>0$ the recombination rate does not depend on the three-body parameter $a_{*0}$ and the implications of universality are therefore particularly simple, namely \begin{equation} \label{eq:KJ} K^{(J)}=f_J(x) \hbar a^4/m~. \end{equation} However, $K^{(0)}$ depends log-periodically on $a_{*0}$ ({\it this} is the signature of Efimov physics!) and is related to the S-matrix for elastic atom-dimer scattering through \begin{equation} K^{(0)}(E)=\frac{k}{x^4}(1-|S_{AD,AD}|^2)~. \end{equation} Efimov's radial law then expresses this S-matrix element in terms of complex {\it universal} functions and the three-body parameter $a_{*0}$, which defines the scattering length for which the recombination rate has a minimum: \begin{equation} \label{eq:smatrix} S_{AD,AD}=s_{22}(x)+\frac{s_{12}^2(x)\,e^{2is_0\ln(a/a_{*0})}} {1-s_{11}(x)e^{2is_0\ln(a/a_{*0})}}~. \end{equation} The functions $s_{11}$ and $s_{12}$ are known at threshold \begin{eqnarray} \label{eq:threshold} \nonumber s_{11}(0)&=&-e^{-2\pi s_0}~,\\ \nonumber s_{12}(0)&=&\sqrt{1-e^{-4\pi s_0}}e^{i\delta_\infty}~,\\ s_{22}(0)&=&e^{2i\delta_\infty}e^{-2\pi s_0}~, \end{eqnarray} with $\delta_\infty=1.737$. It follows that $|s_{11}(0)|\simeq 0.002$. The first simplifying assumption made in \cite{Braaten:2006qx} was that this function remains small ({\it i.e.}, $\ll 1$) for all $x$ and can be ignored.
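This neglect is consistent with the threshold values (\ref{eq:threshold}), which can be checked to satisfy two-channel unitarity exactly; a quick numerical verification:

```python
import math
import cmath

S0 = 1.00624
DELTA_INF = 1.737

# Threshold values of the universal functions, Eq. (eq:threshold)
s11 = -math.exp(-2.0 * math.pi * S0)
s12 = math.sqrt(1.0 - math.exp(-4.0 * math.pi * S0)) * cmath.exp(1j * DELTA_INF)
s22 = cmath.exp(2j * DELTA_INF) * math.exp(-2.0 * math.pi * S0)
```

Row normalization $|s_{11}|^2+|s_{12}|^2=1$ and orthogonality $s_{11}\bar{s}_{12}+s_{12}\bar{s}_{22}=0$ hold, and $|s_{11}(0)|\simeq0.0018$, justifying the neglect of $s_{11}$ near threshold.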
Then the energy dependent recombination rate can be written as \begin{eqnarray} \label{eq:recrate1} \nonumber K^{(0)}(E)&=&\frac{144 \sqrt{3}\pi^2}{x^4}\Bigl[1-\bigl(r_{22}^2-r_{12}^4 +2 r_{22}r_{12}^2 \cos[\Phi +2 s_0 \log(a/a_{*0})]\bigr)\Bigr]\frac{\hbar a^4}{m}~,\\ \eea where we have set $s_{ij}=r_{ij}\exp(i\phi_{ij})$ and $\Phi=\phi_{22}-2\phi_{12}$. Under the assumption that $s_{11}$ can be neglected, the recombination rate therefore depends on the three real-valued functions $r_{12}(x)$, $r_{22}(x)$ and $\Phi(x)$. It is worth noting that the expression in Eq.~(\ref{eq:recrate1}) is symmetric under exchange of $r_{12}^2$ and $r_{22}$. However, the threshold conditions in Eq.~(\ref{eq:threshold}) can be used to attribute the correct fit solutions to the universal functions. As also discussed in Ref.~\cite{Braaten:2006qx}, the effects of deep dimers can easily be incorporated through one additional parameter $\eta_*$ by making the substitution \begin{equation} \ln a_{*0} \rightarrow \ln a_{*0} - i\eta_*/s_0~ \end{equation} in, {\it e.g.}, Eq.~(\ref{eq:recrate1}). Employing unitarity, the resulting effect on the recombination into shallow dimers can be written as \cite{Braaten:2008kx} \begin{eqnarray} \nonumber K_{\rm shallow}^{(0)}(E) &=& \frac{144 \sqrt{3} \pi^2}{x^4} \biggl( 1 - \left| s_{22}(x) + s_{12}(x)^2 e^{2i \theta_{*0} -2 \eta_*}\right|^2 - (1 - e^{-4 \eta_*}) |s_{12}(x)|^2 \biggr) \frac{\hbar a^4}{m} \,.\\ \label{Kshallow-s} \eea Note that in deriving this expression we assumed again that $s_{11}\approx 0$. In the same manner one can derive an expression for the recombination rate into deep dimers \begin{eqnarray} K_{\rm deep}(E) &=& \frac{144 \sqrt{3}\pi^2}{ x^4}(1 - e^{-4 \eta_*}) \big( 1 - |s_{12}(x)|^2 \big) \frac{\hbar a^4}{m} \,.
\label{Kdeep-s} \end{eqnarray} \section{Alternative Parameterizations} Starting with the S-matrix element for 3-atom to dimer-atom scattering, it was shown in \cite{Braaten:2006qx} that under the assumption $s_{11}=0$ the recombination rate can be written as \begin{eqnarray} \nonumber \label{eq:reco-h} K^{(0)}(E)&=&C_{\rm{max}} \biggl|\biggl(\sin[s_0 \ln(\frac{a}{a_{*0}})]\bigl(1+h_1(x)+i h_3(x)\bigr)\\ &&\hspace{50mm}+\cos[s_0\ln(\frac{a}{a_{*0}})] \bigl(h_2(x)+i h_4(x)\bigr)\biggr)\biggr|^2\frac{\hbar a^4}{m}~, \end{eqnarray} where $C_{\rm{max}}\approx 67.1$ and the $h_i$ are real-valued functions of $x$. Additionally, it was assumed that the imaginary part of the above amplitude can be neglected \begin{eqnarray} K^{(0)}(E) &=& C_{\rm max} \big| \sin[s_0 \ln(a/a_{*0})] (1 + h_1(x)) + \cos[s_0 \ln(a/a_{*0})] h_2(x) \big|^2 \hbar a^4/m\,. \label{K3-app} \end{eqnarray} This is well justified by direct calculations of the $J=0$ recombination rates for $^4$He atoms which display pronounced minima at approximately $E_{\rm breakup}\simeq20$~mK \cite{Suno:2002} and which can be explained by this assumption. Then the functions $h_3$ and $h_4$ can be set to 0 in Eq.~(\ref{eq:reco-h}). The resulting expressions were employed in \cite{Shepard:2007gj} to extract $h_1$ and $h_2$ for $x<1.1$. Although $h_1$ and $h_2$ were determined by fitting to values of $K^{(0)}(E)$ calculated using just two of the four atom-atom potentials considered, they were found to accurately account for the results for all four potentials as expected from universality. We have recalculated the $h$-functions using the results for the three-body recombination obtained using the LM2M2 and HFDB3FCII potentials and have fitted a polynomial to our results over the energy range $0<x<1.2$ \begin{eqnarray} \label{eq:hfit} \nonumber h_1(x)&=&-0.0234437 x + 0.0550298 x^2 - 1.03776 x^3 + 1.18985 x^4 - 0.471592 x^5~,\\ h_2(x)&=&0.0338266 x - 0.233836 x^2 + 0.182564 x^3 - 0.0895055 x^4 + 0.0461793 x^5~.
\end{eqnarray} The functions are displayed in Fig.~\ref{fig:hfunctions}. \begin{figure} \centering \includegraphics[width=10cm,angle=0,clip=true]{hfunctions.eps} \caption{The scaling functions $h_1$ (solid line) and $h_2$ as functions of $x$.} \label{fig:hfunctions} \end{figure} The effect of deep dimers on the recombination rate into the shallow dimer can easily be incorporated by making the substitution $\ln a_{*0} \rightarrow \ln a_{*0} - i\eta_*/s_0$ in Eq.~(\ref{eq:reco-h}) \begin{eqnarray} K^{(0)}(E) &=& C_{\rm max} \Big[ \cosh^2 \eta_* \big( \sin[s_0 \ln(a/a_{*0})] (1 + h_1(x)) + \cos[s_0 \ln(a/a_{*0})] h_2(x) \big)^2 \nonumber \\ && \hspace{0.25cm} + \sinh^2 \eta_* \big( \cos[s_0 \ln(a/a_{*0})] (1 + h_1(x)) - \sin[s_0 \ln(a/a_{*0})] h_2(x) \big)^2 \Big] \frac{\hbar a^4}{m} \,. \label{K3-app:deep} \end{eqnarray} To take the effects of the recombination rate into deep dimers into account, it was assumed in \cite{Braaten:2006qx} that $K_{\rm deep}(E)$ is a function varying slowly with energy and that it can therefore be approximated with \begin{equation} \label{eq:kdeep-h} K_{\rm deep}=\frac{C_{\rm max}}{4}(1-e^{-4\eta_*})\frac{\hbar a^4}{m}~. \end{equation} \section{Extraction of the Universal Functions} By fitting Eq.~(\ref{eq:recrate1}) to the recombination rates of all four $^4$He potentials, we were able to determine the functions $r_{12}(x)$, $r_{22}(x)$ and $\Phi(x)$. Our results are smooth functions for $x>0.2$ and the radial functions approach the known threshold values from Eq.~(\ref{eq:threshold}) for decreasing $x$. For $x<0.2$, we are not able to find a reliable fit, which is indicated by the rapid variation of the function $\Phi$ in Fig.~\ref{fig:h1h2} in this region.
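For reference, the fitted $h$-functions (\ref{eq:hfit}) can be inserted directly into Eq.~(\ref{K3-app}); a sketch in units $\hbar=m=1$ (function and variable names are ours, chosen for illustration):

```python
import math

S0 = 1.00624
C_MAX = 67.1

def h1(x):
    """Polynomial fit of Eq. (eq:hfit), valid for 0 < x < 1.2."""
    return (-0.0234437 * x + 0.0550298 * x ** 2 - 1.03776 * x ** 3
            + 1.18985 * x ** 4 - 0.471592 * x ** 5)

def h2(x):
    """Polynomial fit of Eq. (eq:hfit), valid for 0 < x < 1.2."""
    return (0.0338266 * x - 0.233836 * x ** 2 + 0.182564 * x ** 3
            - 0.0895055 * x ** 4 + 0.0461793 * x ** 5)

def K0_shallow(x, a, a_star0):
    """J = 0 recombination rate of Eq. (K3-app), in units hbar = m = 1."""
    phi = S0 * math.log(a / a_star0)
    amp = math.sin(phi) * (1.0 + h1(x)) + math.cos(phi) * h2(x)
    return C_MAX * amp ** 2 * a ** 4
```

At threshold ($x=0$) the rate reduces to $C_{\rm max}\sin^2[s_0\ln(a/a_{*0})]\,a^4$, vanishing at $a=a_{*0}$ as it should.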
To display the quality of our fit, we compare the exact recombination rates obtained with the TTY and HDFB potentials to the rates calculated with the newly obtained universal functions. \begin{figure} \centering \includegraphics[width=10cm,angle=0,clip=true]{TTYandHFDBcomp.eps} \caption{The exact recombination rates and the corresponding results obtained with the scaling functions (solid lines) and universal functions (dashed lines) for the HFDB (circles) and TTY (triangles) potentials.} \label{fig:reco-original} \end{figure} These results are displayed in Fig.~\ref{fig:reco-original}. This figure also contains the recombination rate obtained with the $h$-functions. While the new set of universal functions seems to provide slightly better results for the HFDB potential at larger energies, the $h$-functions perform equally well for these potentials at lower energies. \begin{figure}[t] \centerline{\includegraphics[width=10cm,angle=0,clip=true]{r12-approximated.eps}} \centerline{\includegraphics[width=10cm,angle=0,clip=true]{r22-approximated.eps}} \centerline{\includegraphics[width=10cm,angle=0,clip=true]{phi-approximated.eps}} \caption{The universal functions $r_{12}, r_{22}$ and $\Phi$ as functions of $x$.} \label{fig:h1h2} \end{figure} \begin{figure}[t] \centerline{\includegraphics[width=10cm,angle=0,clip=true]{FAUXcomparisonV3.eps}} \caption{The exact recombination rates and the corresponding results obtained with the $h$-functions (solid lines) and the universal functions (dashed lines) for the potentials I (circles), II (squares) and III (triangles).} \label{fig:faux} \end{figure} To test our new parameterizations, we have generated three artificial potentials (which we call I, II and III) characterized by different three-body parameters $a_{*0}$ (with $a/a_{*0}=1.384, 1.188$ and $1.780$, respectively) but having approximately the same ratio of $R/a$ as the real $^4$He potentials used in this work.
We have calculated the recombination rates for these potentials and used these results to benchmark our universal functions. Our results are displayed in Fig.~\ref{fig:faux}. We find that the new set of functions is capable of describing the recombination rates of these potentials over a relatively large range of $x$. Again, we also benchmark the rates obtained with $h_1$ and $h_2$ and find that this set of scaling functions describes the exact results better at $x<1$ than the scaling functions $r_{12}(x)$, $r_{22}(x)$ and $\Phi(x)$. This is surprising at first sight, since one certainly expects to obtain a better description of the recombination rate with three instead of two functions. We speculate that the functional form in Eq.~(\ref{eq:recrate1}) results in stronger constraints on the universal functions than Eq.~(\ref{eq:reco-h}) does on the $h$ functions. All the potentials, however, contain finite range effects which are not accounted for in Eq.~(\ref{eq:smatrix}). It is therefore very likely that better fits -- using the same approximation -- can be obtained from recombination rates calculated in the exact zero-range limit. \section{Results for Cesium} In the previous section we found that we can obtain a very good overall description of the recombination rate of systems with a large scattering length if we employ the functions $h_1$ and $h_2$ for energies smaller than $E_{\rm shallow}$ and the universal functions $r_{12}(x)$, $r_{22}(x)$ and $\Phi(x)$ for energies larger than $E_{\rm shallow}$. Using these functions at energies close to the minimum in the recombination rate guarantees a more appropriate treatment of the effects of deep dimers on the recombination rate, which are expected to be largest in this region.
The form of the functions $f_J(x)$ in Eq.~(\ref{eq:KJ}), and therefore the contribution to the recombination from channels with higher total angular momentum $J$, has been previously analyzed in \cite{Braaten:2006qx,Shepard:2007gj}; we thus take these channels into account by using appropriate parameterizations for the functions $f_J(x)$. $^{133}$Cs atoms can recombine into deep and shallow dimers. As mentioned above, a deep dimer is so strongly bound that it cannot be described within the EFT for short-range interactions, as its binding energy is larger than $\hbar^2/(m R^2)$. We account for such processes by letting $\ln a_{*0} \rightarrow \ln a_{*0} - i\eta_*/s_0$, as also discussed above. We then obtain the temperature-dependent recombination rate from \begin{eqnarray} \alpha(T) = \frac{\int_0^\infty dE \, E^2 \, e^{-E/(k_B T)} \, K_3(E)} {6 \int_0^\infty dE \, E^2 \, e^{-E/(k_B T)}} \,. \label{alpha-T} \end{eqnarray} The weight factor $E^2$ comes from using hyperspherical variables for the Jacobi momenta. In Fig.~\ref{fig:rho_cesium} we display our results for the recombination length $\rho_3 = \left({\textstyle \frac{m K_3}{\sqrt{3} \hbar}} \right)^{1/4}$ of $^{133}$Cs atoms. It can be seen that the results agree very well with the experimental results obtained by the Innsbruck group at $T=200$ nK. \begin{figure}[t] \centerline{\includegraphics*[width=10cm,angle=0,clip=true]{results.eps}} \caption{The 3-body recombination length $\rho_3$ for $^{133}$Cs for $a_{*0}=210 a_0$ and three different values of the parameter $\eta_*$: 0 (solid line), 0.01 (dashed line), and 0.06 (dotted line), plotted together with the experimental results of the Innsbruck experiment (triangles) \cite{Grimm06}.} \label{fig:rho_cesium} \end{figure} \section{Summary} In this paper we have used the results from different $^4$He atom-atom potentials to extract and to test the predictive power of universal functions.
In doing so, we have relaxed all but one of the simplifying assumptions made in previous work \cite{Shepard:2007gj,Braaten:2006qx}. We have determined a third universal scaling function, which allows for a description of the three-body recombination rate of systems with large scattering length over a greater range of breakup energies. We have tested the quality of our parameterizations with artificial finite range potentials which are appreciably different from the original Helium potentials but which display universal effects in the three-body sector. We have found that our three real universal functions can describe the recombination of these artificial potentials reasonably well, which gives further evidence that the assumptions made in \cite{Braaten:2006qx} were well justified. We also found, however, that the previously calculated scaling functions $h_1$ and $h_2$ give an overall better description of the recombination rate for energies $E < E_{\rm shallow}$. The scaling functions $h_1$ and $h_2$, which can be represented analytically with the simple polynomial fit given in Eq.~(\ref{eq:hfit}), are therefore a useful tool to test recombination rate calculations for systems with large scattering length. Finally, we have used both sets of universal functions together to compute the recombination length for $^{133}$Cs atoms for different values of the parameter $\eta_*$, which approximately accounts for the effect of deep dimer states, and have compared our results with experimental data obtained by the Innsbruck group~\cite{Grimm06}. Although our results show very good agreement with the data, the sensitivity to $\eta_*$ is insufficient to permit a precise determination of this parameter. Overall, we consider our results to be an excellent example of how few-body systems with large scattering length exhibit universal features.
The low-energy properties of $^4$He atoms allow us to compute accurately the low-energy properties of a gas of a completely different element, $^{133}$Cs, which at first glance has little in common with $^4$He. Nevertheless, we point out that our results cannot be thought of as a complete treatment of the problem at hand. For example, not only did we make the assumption that $s_{11}$ does not contribute significantly to the recombination coefficients, but we also extracted the functions from data sets obtained with finite range potentials. Although the impact of range corrections is known to be small for realistic Helium atom-atom potentials, as $R/a\sim 0.1$, it needs to be pointed out that range corrections are expected to be sizable for large enough energies. To obtain all universal functions $s_{ij}$ relevant to the recombination rate, a calculation in the limit $R\rightarrow 0$ therefore seems to be necessary\footnote{This has been done in Ref.~\cite{Braaten:2008kx}, which appeared after the first submission of this paper.}. Furthermore, it is already understood how to include range corrections systematically in the framework of effective field theory \cite{Bedaque:2002yg,Hammer:2001gh,Platter:2006ev}. Indeed, this approach has already been used to calculate range corrections to the recombination rate into a shallow dimer \cite{Hammer:2006zs,Platter:2008cx}. Thus, further effort should be devoted to including these effects in the calculation of the energy-dependent recombination rate. \begin{acknowledgments} We are thankful to Eric Braaten and Daniel Phillips for useful discussions and comments on the manuscript. This work was supported in part by the Department of Energy under grant DE-FG02-93ER40756 and by the National Science Foundation under Grant No.~PHY--0354916. \end{acknowledgments}
\section{Introduction} Direct numerical simulations are crucial to the investigation of a vast array of multiphase flows, such as fuel injection \cite{lefebvre_atomization_2017} and cavitation \cite{boyd_numerical_2019,boyd_numerical_2018, boyd_beamed_2020}, as they are able to provide high-level details that are difficult to measure in experiments \cite{tryggvason_direct_2011}. Accurately resolving the sharp interface separating the gas and liquid phases is critical to interfacial multiphase flow simulations. The methods for numerically resolving the interface are loosely categorized into diffuse-interface and sharp-interface methods. The diffuse-interface methods involve the advection of a volume or mass fraction, which is allowed to diffuse over a few cells, and such methods are more popular for compressible flows \cite{saurel_multiphase_1999,johnsen_implementation_2006, boyd_diffuse-interface_2021, boyd_numerical_2021}. For incompressible flows, it is possible to achieve a genuine sharp interface and to constrain the interface thickness to one cell. Significant progress has been made in the development of interface-capturing methods, including the volume-of-fluid (VOF), front-tracking, and level-set methods, in the past decades \cite{scardovelli_direct_1999, unverdi_front-tracking_1992, sussman_level_1994}. Now it is viable to accurately simulate multiphase flows with complex interface deformation and topology change, assuming the mesh resolution is sufficiently high. Multiphase flows with phase change are essential to a wide range of industrial applications, such as the injection of volatile liquid fuels \cite{sher_flash-boiling_2008} and boiling flows \cite{villegas_direct_2017}. The inclusion of phase change introduces additional complexities in resolving interfacial multiphase flows. 
While a transition from liquid to vapor can happen at temperatures below the saturation temperature ($T_{sat}$), more rapid vaporization will occur if the phase change happens at the boiling point, $T_{sat}$. In the present study, we only consider phase change due to vaporization as opposed to condensation \cite{son_temperature_2021, samkhaniani_numerical_2016}. More importantly, we will focus on vaporization that is driven by heat transfer, instead of the slower process of evaporation due to vapor concentration gradients \cite{sazhin_advanced_2006, shao_computational_2018}. Nevertheless, with minor modifications, the present methods can be used to simulate multiphase flows with condensation and concentration-gradient-driven evaporation. High-resolution detailed numerical simulations have been shown to be an essential tool to investigate liquid-gas multiphase flows with phase change, and various computational methods have been developed to track the sharp interface with interfacial vaporization. Recent numerical approaches in the literature include front tracking \cite{irfan_front_2017}, Lattice Boltzmann \cite{safari_extended_2013, haghani-hassan-abadi_phase-change_2021}, level-set/ghost fluid \cite{gibou_level_2007,lee_direct_2017, shao_computational_2018}, and Volume-of-Fluid (VOF) methods \cite{welch_volume_2000, hardt_evaporation_2008,ma_numerical_2013, sato_sharp-interface_2013, perez-raya_modeling_2016, datta_modeling_2017, ding_volume_2017, wilson_phase-change_2019, wang_vaporization_2019, palmore_volume_2019, scapin_volume--fluid_2020, bures_direct_2021, malan_geometric_2021}. We refer the readers to Refs.~\cite{kharangate_review_2017, tryggvason_direct_2005} for a more comprehensive review of the numerical approaches to account for phase change. In the present study, we focus on developing a numerical framework to simulate interfacial multiphase flows with phase change based on the VOF method.
The VOF method is based on the one-fluid approach, for which only one velocity field is used for both phases. Without phase change, the velocities for the liquid and gas on the two sides of an interface are identical, and thus the velocities in the cells in the vicinity of the interface are ready to be used to advect the interface. However, when phase change like vaporization occurs, the velocity is discontinuous across the interface, as vaporization creates the Stefan flow in the gas phase. The interface velocity is closer to the liquid velocity than to the gas velocity. In such a case, the ``one-fluid'' velocity field cannot be directly used to advect the volume-fraction function. In the recent works of Malan \textit{et al.~} \citep{malan_geometric_2021} and Bure\v{s} \textit{et al.~} \citep{bures_direct_2021}, an additional pressure projection step has been used to compute a separate velocity field, \textit{i.e.}, the divergence-free liquid velocity across the interface, which is then used for the VOF advection. It is computationally costly to solve the additional pressure Poisson equation for this additional velocity in the whole domain. As a result, they have included two immersed boundaries on the gas and liquid sides of the interface and only solve the Poisson equation in a reduced region near the interface. Nevertheless, this inclusion of additional immersed boundaries is also undesirable, especially for parallel codes that use a more complicated mesh such as an adaptive octree mesh. To accurately compute the vaporization rate, it is important to solve the thermal diffusion and the temperature field near the interface. The recent works by Gao \textit{et al.~} \cite{gao_effect_2022} and Zhao \textit{et al.~} \cite{zhao_boiling_2022} have used the embedded boundary method \cite{johansen_cartesian_1998,schwartz_cartesian_2006} for the thermal flux calculation in each phase.
The embedded boundary method typically requires the embedded boundary/surface to be continuous at the cell surfaces. However, this is not guaranteed in the Piece-wise Linear Interface Calculation (PLIC) VOF reconstruction. The different intersecting positions between the cell surface and the reconstructed interfaces from the two adjacent cells create additional complexity in calculating the thermal diffusion flux. Additionally, the temperature gradient near the interface is highly dependent on the normal, which may exhibit small cell-to-cell fluctuations due to the PLIC VOF reconstruction. These two factors are likely the cause of the artificial velocity currents present in \cite{gao_effect_2022}. In the method of Zhao \textit{et al.~} \cite{zhao_boiling_2022}, the VOF advection is handled using a two-velocity approach, similar to Malan \textit{et al.~} \citep{malan_geometric_2021}, whereas, in the method of Gao \textit{et al.~} \cite{gao_effect_2022}, they distributed the vaporization-induced source term to nearby pure gas and liquid cells but not the interfacial cells, following the method by Hardt and Wondra \cite{hardt_evaporation_2008}. As a result, the velocity at the interfacial cells is not influenced and can be used for the VOF advection. Gao \textit{et al.~} \cite{gao_effect_2022} diffused the source over wide regions on both the gas and liquid sides of the interface for numerical stability. Yet, velocity fluctuations were still observed near the interfaces. The goal of the present paper is to develop a simple and robust VOF method to resolve interfacial multiphase flows with vaporization. The long-term goal is to use the newly developed method to fully characterize the breakup dynamics of vaporizing bulk liquids, such as the aerobreakup of drops in a high-temperature, high-speed flow, through high-fidelity simulation.
Extensive experimental and numerical studies have been conducted to understand drop breakup dynamics when phase change is absent \cite{hsiang_drop_1995, theofanous_physics_2008, meng_numerical_2018, jain_secondary_2019, jackiw_aerodynamic_2021}; however, the interaction between drop deformation/breakup and vaporization is not fully understood. In practical spray applications, due to the large geometric scale and the huge number of drops involved, it is infeasible to resolve the interface of each individual drop. For those macro-scale simulations, the Euler-Lagrange point-particle simulations are typically used \cite{duke-walker_evaporation_2021, dahal_numerical_2017, gallot-lavallee_large_2021, salman_lagrangian_2004}. Since the drop-scale interfacial and flow physics are not resolved, the mass and energy transfer between the drop and the surrounding gas must be represented by sub-grid models, similar to the drag model for momentum transfer \cite{maxey_equation_1983, balachandar_turbulent_2010}. The conventional models to predict drop vaporization are empirical correlations for spherical drops \cite{renksizbulut_experimental_1983}, and they become invalid for drops with a Weber number close to or larger than the critical Weber number \cite{hsiang_drop_1995}. Models that can accurately predict the vaporization rate of drops undergoing significant deformation and even breakup remain to be established. High-fidelity interface-resolved simulations enabled by the present study will play an essential role in the development of such models. The present method is implemented in the open-source solver \emph{Basilisk} \citep{popinet_gerris_2003,popinet_accurate_2009, popinet_quadtree-adaptive_2015}. Validation tests and simulations are then performed using the modified \emph{Basilisk} code.
The key advantage of \emph{Basilisk} is that it provides an infrastructure for octree/quadtree meshes for spatial discretization, which allows the important capability of adapting the mesh in user-defined regions. This adaptive mesh refinement (AMR) feature is crucial for the efficient simulation of interfacial multiphase flows with vaporization. A VOF model with phase change on a quad-tree mesh has been implemented by Wang and Yang \cite{wang_vaporization_2019} in the \emph{Gerris} code, the predecessor to \emph{Basilisk}. One key difference between the two codes lies in the parallelization of the adaptive mesh \cite{popinet_gerris_2003,popinet_quadtree-adaptive_2015}. While \emph{Gerris} uses a domain decomposition, \emph{Basilisk} decomposes the tree directly and thus achieves improved parallel performance for large-scale simulations using a large number of processors and refinement levels. Due to the limitation of the \emph{Gerris} code, the VOF model of Wang and Yang \cite{wang_vaporization_2019} was only tested with 2D simulations of drop vaporization. The two very recent works mentioned above by Gao \textit{et al.~} \cite{gao_effect_2022} and Zhao \textit{et al.~} \cite{zhao_boiling_2022} have also built phase-change models on the \emph{Basilisk} solver. The present method distinguishes itself from the previous studies by the following essential features. First, the present method uses a consistent approach to advect VOF (mass), momentum, and energy across the interface. This consistency between mass and momentum advection has been shown to be important for interfacial multiphase flows with large density contrast \cite{zhang_modeling_2020}. The consistency among mass, momentum, and energy is also shown to be important to obtain accurate pressure and velocity across the interface for compressible flows \cite{zhang_direct_2021}.
Second, we have proposed a novel treatment to handle the vaporization-induced volumetric source, by which the VOF-represented interfaces can be advected accurately and the Stefan flow in the gas phase can be rigorously captured. This treatment does not require an additional velocity field or the solution of an additional Poisson equation, as needed in the previous methods \cite{malan_geometric_2021, bures_direct_2021, zhao_boiling_2022}, and thus is more efficient and easier to implement. The present method is similar in spirit to the method of Hardt and Wondra (HW) \cite{hardt_evaporation_2008}, which was adopted by Gao \textit{et al.~} \cite{gao_effect_2022} and others \cite{wang_vaporization_2019, georgoulas_enhanced_2017}. We also distribute the source to cells near the interface but not at the interfacial cells to preserve the correct velocity at the interfacial cells for VOF advection. In the HW method, a diffusion equation is required and the source is distributed to both gas and liquid cells. In contrast, we account only for the contribution of the volumetric source that is induced by density differences between the vapor and liquid, which is responsible for the introduction of the Stefan flow, and only distribute the source to a compact layer of pure gas cells near the interface in a mass-conservative way. The rest of the paper is organized as follows. The physical model will be presented in Section \ref{section:governing_eqs}. The numerical methods are then introduced in Section \ref{section:numerical_methods}. The overall simulation approaches are validated through a series of tests in Section \ref{section:validation}. We start with the classic 1D Stefan and sucking problems (Sections \ref{section:stefan} and \ref{section:sucking}), and then move on to the 3D bubble growth in superheated liquid (Section \ref{section:bubble}). For these cases, analytical solutions are available to validate the simulation results.
Furthermore, we have simulated more realistic cases such as film boiling (Section \ref{section:film_boil}) and vaporization of a moving drop in a high-temperature environment (Section \ref{section:droplet}). The simulation results are compared against previous numerical and experimental studies. Finally, a large-scale simulation of the aerobreakup of a vaporizing drop has been performed to demonstrate the capability of the present method in resolving complex interfacial multiphase flows with vaporization (Section \ref{3D_droplet}). \section{Multiphase models and governing equations} \label{section:governing_eqs} In the present study, a simulation framework for liquid-gas two-phase flows with phase change is developed. The two phases are considered as ``one fluid'' and share the same velocity field. The momentum conservation equation for both phases is \begin{align} & \rho \left( \frac{\partial \ub}{\partial t}+ \ub \cdot \nabla \ub \right)= - \nabla p + \nabla \cdot (2\mu \Db) + \rho \mathbf{g}+ \sigma \kappa \delta_{\gamma} \nb_{\gamma} \label{eq:momentum} \end{align} where $\ub$, $p$, $\mu$, $\rho$, $\sigma$, and $\kappa$ are the velocity, pressure, dynamic viscosity, density, surface tension coefficient, and interfacial curvature, respectively. The interface normal is denoted by $\nb_{\gamma}$, where the subscript $\gamma$ indicates properties associated with the interface. The surface tension is a singular force localized on the sharp interface using the Dirac distribution function $\delta_{\gamma}$. The deformation tensor is defined as $\Db = (\nabla \ub + \nabla \ub^T)/2$. Gravity, which can be easily included, is neglected in the cases considered in the present paper.
The two different phases are distinguished by the color function $c$, which follows the advection equation, \begin{align} & \frac{\partial c}{\partial t}+ \ub \cdot \nabla c = -\frac{s_{\gamma}}{\rho_l} \, . \label{eq:vof_advection} \end{align} The color function takes the values $c=1$ in the liquid phase and $c=0$ in the gas phase. Therefore, the volume-averaged color function in a control volume is equivalent to the liquid volume fraction. The density and viscosity are calculated based on the liquid volume fraction using the arithmetic mean: \begin{align} \rho & =c\rho_l+(1-c)\rho_g\, ,\\ \mu & =c\mu_l+(1-c)\mu_g\, . \end{align} The volumetric source term ($s_{\gamma}$) on the right-hand side is associated with phase change. The phase change will also modify the continuity equation as \begin{align} & \nabla \cdot \ub= s_{\gamma} \left( \frac{1}{\rho_g}-\frac{1}{\rho_l} \right) \,, \label{eq:divergence1} \end{align} where the subscripts $l$ and $g$ denote the liquid and gas properties, and the velocity field is divergence-free only in the region away from the interface. As will be discussed in a later section, the projection method is employed, and as a result the pressure Poisson equation is solved to guarantee that the projected velocity satisfies the continuity equation. Furthermore, we will distribute the volumetric source at the interface $s_\gamma$ to adjacent pure gas cells, and the discretized distributed volumetric source is denoted by $\hat{s}$. The volumetric source term ($s_{\gamma}$) depends on the rate of vaporization ($j_{\gamma}$) and the interfacial area density ($\phi_{\gamma}$), \begin{align} & s_{\gamma} = j_{\gamma} \phi_{\gamma}\, .
\label{eq:sm} \end{align} The rate of phase change ($j_{\gamma}$) is determined based on the heat fluxes from both sides of the interface, \begin{align} & j_{\gamma} =\frac{1}{h_{l,g}}\left(k_l (\nabla T_l)_{\gamma} \cdot \nb_{\gamma} - k_g (\nabla T_g)_{\gamma} \cdot \nb_{\gamma}\right) \label{eq:j_gamma} \end{align} where $T$, $k$, and $h_{l,g}$ are the temperature, thermal conductivity, and latent heat, respectively. Note that the rate of phase change here is governed by the temperature gradient, instead of the vapor concentration gradient \cite{sazhin_advanced_2006, shao_computational_2018}. For the expression given here, $j_\gamma >0 $ for vaporization and $j_\gamma <0 $ for condensation. The energy (temperature) equations in the gas and liquid regions are \cite{sato_sharp-interface_2013, bures_direct_2021, malan_geometric_2021}, \begin{align} & \rho_g C_{p,g} \left( \pd{T_g}{t} + \ub \cdot \nabla T_g \right) = \nabla \cdot (k_g \nabla T_g) \, , \label{eq:temp_gas}\,\\ & \rho_l C_{p,l}\left(\pd{T_l}{t} + \ub \cdot \nabla T_l \right) = \nabla \cdot (k_l \nabla T_l) \, \label{eq:temp_liq} \end{align} with the Dirichlet boundary condition at the vaporizing interface $(T_g)_\gamma=(T_l)_\gamma=T_{sat}$. The isobaric specific heat for gas and liquid are denoted by $C_{p,g}$ and $C_{p,l}$, respectively. Keeping the two temperature fields makes it easier to apply the temperature boundary condition at the interface. It is worth noting that there are no additional source terms in the energy equations due to phase change since it has been implicitly accounted for by the boundary condition at the interface. \section{Numerical methods} \label{section:numerical_methods} The governing equations are solved using a finite volume approach based on the projection method. The advection of the color function is solved using a geometric VOF method \cite{weymouth_conservative_2010}. 
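Before turning to the discretization, the interfacial source terms above can be illustrated numerically. The sketch below evaluates the vaporization rate of Eq.~(\ref{eq:j_gamma}) from one-sided temperature gradients and the resulting divergence source of Eqs.~(\ref{eq:sm}) and (\ref{eq:divergence1}); all numerical values are illustrative assumptions, and the sketch is not the discretization used in the solver.

```python
def vaporization_rate(k_l, dTl_dn, k_g, dTg_dn, h_lg):
    """Eq. (eq:j_gamma): interfacial mass flux from the jump in conductive
    heat flux across the interface; j > 0 for vaporization."""
    return (k_l * dTl_dn - k_g * dTg_dn) / h_lg

def volumetric_sources(j_gamma, phi_gamma, rho_g, rho_l):
    """Eqs. (eq:sm) and (eq:divergence1): volumetric source s_gamma and the
    induced velocity divergence that drives the Stefan flow."""
    s_gamma = j_gamma * phi_gamma
    return s_gamma, s_gamma * (1.0 / rho_g - 1.0 / rho_l)

# Superheated water-like liquid next to saturated vapor (illustrative SI
# values): only the liquid-side temperature gradient is nonzero.
j = vaporization_rate(k_l=0.68, dTl_dn=2.0e4, k_g=0.025, dTg_dn=0.0,
                      h_lg=2.26e6)
s_gamma, div_u = volumetric_sources(j, phi_gamma=100.0, rho_g=0.6,
                                    rho_l=958.0)
```

Because $\rho_g \ll \rho_l$, even a modest mass flux $j_\gamma$ produces a sizable $\nabla\cdot\ub$ on the gas side, which is the Stefan flow captured by the pressure projection.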
The advection of momentum near the interface is conducted in a manner consistent with the VOF advection \cite{fuster_all-mach_2018, zhang_modeling_2020}. The surface tension term in the momentum equation is discretized using the balanced-force continuum-surface-force method \cite{francois_balanced-force_2006}. The height-function method is used for curvature calculation \cite{popinet_accurate_2009}. A staggered-in-time discretization of the volume-fraction/density and pressure leads to a formally second-order accurate time discretization. The quadtree/octree mesh is used to discretize the 2D/3D spatial domains, providing important flexibility to dynamically refine the mesh in user-defined regions. The adaptation criterion is based on the wavelet estimate of the discretization errors of the user-defined variables \cite{van_hooft_towards_2018}. Since the present study focuses on the new developments for the inclusion of phase change, the above methods are only briefly reviewed below (sections \ref{section:vof} and \ref{sec:mom_adv}). The detailed implementation of the methods without phase change can be found in previous studies \cite{popinet_accurate_2009, zhang_modeling_2020}. To include vaporization in the numerical model, we need to solve the energy equations for both phases to obtain the temperature near the interface (section \ref{section:energy}). Then, based on the temperature gradients on both sides of the interface, the vaporization rate is estimated (section \ref{section:dT}). Furthermore, the interface recession due to the reduction in liquid volume (referred to as interface shifting) needs to be accounted for by modifying the VOF field (section \ref{section:infc_shifting}).
Finally, the additional volumetric source due to phase change will be added to the pressure equation to account for the non-zero divergence of the velocity near the interface and the resulting Stefan flow due to the expansion of the dense liquid into a gaseous state (section \ref{section:source}). A new treatment is proposed to handle the volumetric source to guarantee that the velocity at the interface is correctly represented and can be directly used in VOF advection. These procedures will be described below in sequence. \subsection{VOF method} \label{section:vof} The advection equation for the color function, Eq.~\eqref{eq:vof_advection}, is solved using a geometric VOF method. The interface in each computational cell is reconstructed as a planar surface based on the cell-average color function (liquid volume fraction). The interface normal $\nb_\gamma$ is computed based on the Mixed Youngs-Centered (MYC) method \cite{aulisa_interface_2007}. The detailed implementation of the method on an octree mesh was given by Popinet \cite{popinet_accurate_2009}. Equation \eqref{eq:vof_advection} can be rewritten in conservative form as \begin{align} \frac{\partial c}{\partial t} = - \nabla \cdot \left( c \ub\right) + c_c \nabla \cdot \ub - \frac{s_{\gamma}}{\rho_l} \label{eq:vof_advection_conserve} \end{align} where the first two terms on the right-hand side are for the regular VOF method without phase change, while the last term represents the additional shift of the interface due to phase change, the discussion of which will be given later in section \ref{section:infc_shifting}.
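A one-dimensional sketch of the conservative update in Eq.~(\ref{eq:vof_advection_conserve}), without the phase-change term, is given below. As a simplifying assumption, a donor-cell upwind flux stands in for the geometric PLIC flux, so the sketch only illustrates the bookkeeping of the compression term $c_c\,\nabla\cdot\ub$ and the resulting mass conservation, not the sharpness of the actual scheme.

```python
import numpy as np

def vof_sweep(c, u_face, dt, dx):
    """One direction-split sweep of Eq. (eq:vof_advection_conserve) on a
    periodic 1D grid.  u_face[i] is the face velocity at i+1/2; a donor-cell
    flux replaces the geometric flux c_a * u_f * S (assumed simplification)."""
    n = c.size
    donor = np.where(u_face >= 0.0, np.arange(n), (np.arange(n) + 1) % n)
    flux = c[donor] * u_face                     # upwind volume-fraction flux
    c_c = (c > 0.5).astype(float)                # Weymouth-Yue cell value c_c
    div_u = (u_face - np.roll(u_face, 1)) / dx   # per-cell velocity divergence
    return c - dt / dx * (flux - np.roll(flux, 1)) + dt * c_c * div_u

# Advect a liquid slab with a uniform velocity (div u = 0): the liquid
# volume sum(c)*dx is conserved to round-off.
dx = 1.0 / 64
c0 = np.zeros(64); c0[20:40] = 1.0
c = c0.copy()
for _ in range(100):
    c = vof_sweep(c, u_face=np.full(64, 1.0), dt=0.005, dx=dx)
```

For a divergence-free velocity the compression term vanishes and the update is strictly conservative; keeping $c_c$ fixed for all sweep directions is what makes the multi-dimensional splitting exactly mass-conserving \cite{weymouth_conservative_2010}.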
The VOF advection is conducted in a direction-split manner, taking a 2D example, \begin{align} \frac{c_{i,j}^{*}- c_{i,j}^{n}}{\Delta t} & = - \frac{ F_{c,i+1/2,j}-F_{c,i-1/2,j}}{\Delta V}+\left(c_c \pd{u}{x}\right)_{i,j}\, , \label{eq:adv-x_color_func}\\ \frac{c_{i,j}^{n+1}- c_{i,j}^{*}}{\Delta t} & = - \frac{ G_{c,i,j+1/2}-G_{c,i,j-1/2}}{\Delta V}+\left(c_c \pd{v}{y}\right)_{i,j}\, , \label{eq:adv-y_color_func} \end{align} where $\Delta V$ is the cell volume, and $c^{*}$ denotes the auxiliary color function. The value of the color function at the cell center is denoted by $c_c$, which is taken to be $c_c=1$ if ${c}>0.5$ and 0 if ${c}<0.5$. It was proved by Weymouth and Yue \cite{weymouth_conservative_2010} that the value of $c_c$ must be kept constant for all sweep directions to achieve exact mass conservation. The VOF fluxes in the $x$ and $y$ directions are denoted as $F_c$ and $G_c$, respectively. The flux on the right surface in the $x$ direction, $F_{c,i+1/2,j}$, is calculated as \begin{align} F_{c,i+1/2,j} = c_{a} u_{f,i+1/2,j} S \, , \label{eq:vof_flux} \end{align} where $u_{f,i+1/2,j}$ is the $u$-velocity at the cell surface and $S$ is the surface area. The volume fraction of liquid that is advected across the cell surface over $\Delta t$ is $c_a$, which is calculated based on the reconstruction of the interface. The VOF fluxes in the other directions are calculated similarly. \subsection{VOF-consistent momentum advection} \label{sec:mom_adv} It is important to advect the momentum across the interface consistently with the VOF (mass) advection, as shown in previous studies \cite{vaudor_consistent_2017, arrufat_momentum-conserving_2020, zhang_modeling_2020}.
To make the advection of momentum and mass consistent, the momenta of the liquid and gas phases are handled separately: \begin{align} \frac{(c \rho_l \ub)^a - (c\rho_l \ub)^n}{\Delta t} & = - \nabla \cdot (c\rho_l \ub \ub ) + (c\rho_l \ub)_c \nabla \cdot \ub\,, \\ \frac{((1-c)\rho_g \ub)^a - ((1-c)\rho_g \ub)^n}{\Delta t} & = - \nabla \cdot ((1-c)\rho_g \ub \ub )+ ((1-c)\rho_g \ub)_c \nabla \cdot \ub\,, \end{align} where the superscript $^a$ denotes the auxiliary variables accounting only for the advection term. The momentum is advected non-diffusively as a tracer associated with the VOF advection \cite{lopez-herrera_electrokinetic_2015}. The momentum flux for each phase is computed as the product of the VOF flux for the corresponding phase and the momentum per unit volume to be advected \cite{zhang_modeling_2020}. Taking the $x$-momentum for the liquid phase as an example, \begin{align} \frac{(c \rho_l u)_{i,j}^{*}- (c \rho_l u)_{i,j}^{n}}{\Delta t} &= - \frac{ F_{u,i+1/2,j}-F_{u,i-1/2,j}}{\Delta V}+\left((c\rho_l u)_c \pd{u}{x}\right)_{i,j}\, , \label{eq:adv-x_xmom}\\ \frac{(c \rho_l u)_{i,j}^{a}- (c \rho_l u)_{i,j}^{*}}{\Delta t} & = - \frac{ G_{u,i,j+1/2}-G_{u,i,j-1/2}}{\Delta V}+\left((c\rho_l u)_c \pd{v}{y}\right)_{i,j}\, , \label{eq:adv-y_xmom} \end{align} where $F_u$ and $G_u$ denote the $x$-momentum fluxes in the $x$ and $y$ directions. The $x$-momentum flux on the right surface of the cell is calculated as \begin{align} F_{u,i+1/2,j} = (\rho_l u)_a F_{c,i+1/2,j} \, , \label{eq:mom_flux} \end{align} where $(\rho_l u)_a$ is the momentum per unit volume to be advected. The Bell-Colella-Glaz (BCG) second-order upwind scheme \cite{bell_second-order_1989} is used for the reconstruction of $(\rho_l u)_a$ in the upwind cell of the surface where the flux is to be evaluated. The generalized minmod slope limiter is employed to compute the gradient.
The liquid momentum $(c\rho_l u)_c$ at the cell center is the analog of $c_c$, with $(c\rho_l u)_c=c \rho_l u$ if ${c}>0.5$ and 0 if ${c}<0.5$. After the advection of momentum in all directions, the unified velocity is obtained by \begin{align} \ub^a_{i,j} = \frac{(c \rho_l \ub)_{i,j}^{a} + ((1-c)\rho_g \ub)_{i,j}^{a}}{(c \rho_l)_{i,j}^{a} + ((1-c)\rho_g)_{i,j}^{a}}\, . \end{align} \subsection{Energy equations and consistent energy advection} \label{section:energy} The energy equations for both phases, \textit{i.e.}, Eqs.~\eqref{eq:temp_gas}-\eqref{eq:temp_liq}, are solved with the Dirichlet boundary condition at the vaporizing interface $(T_g)_\gamma=(T_l)_\gamma=T_{sat}$. The boundary conditions at the interface are imposed by setting $T_g=T_{sat}$ in cells with $c>0$ and $T_l=T_{sat}$ in cells with $c<1$ \cite{lalanne_numerical_2021}. Note that more sophisticated immersed Dirichlet boundary conditions have been proposed recently \cite{gao_effect_2022, zhao_boiling_2022}. Nevertheless, the simple treatment here seems to be sufficient to yield accurate results. The energy fluxes for the advection terms, \textit{e.g.}, $ \ub \cdot \nabla (\rho_l C_{p,l} T_l)$, are computed similarly to the momentum fluxes discussed previously; namely, the energy is advected as a tracer associated with the VOF advection. In this way, numerical diffusion across the interface is avoided when energy is advected across the interface. Furthermore, the advection methods for mass, momentum, and energy across the interface are consistent. The consistent advection approach is similar to the method of Zhang \textit{et al.~} \cite{zhang_direct_2021} for compressible interfacial multiphase flows and will thus ease the future extension of the present methods toward compressible flows.
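A minimal 1D sketch of the Dirichlet treatment above: a backward-Euler diffusion step in which the temperature is simply pinned in flagged (interfacial) cells. A dense solve is used for brevity; the actual solver uses multigrid:

```python
import numpy as np

def diffuse_implicit(T, pinned, alpha, dt, h):
    """One backward-Euler step of dT/dt = alpha * d2T/dx2 on a uniform
    1D grid.  Cells where `pinned` is True (interfacial cells), plus the
    two domain-boundary cells, keep their current value, mimicking the
    Dirichlet T = T_sat treatment of the paper."""
    n = T.size
    r = alpha * dt / h ** 2
    A = np.zeros((n, n))
    for i in range(n):
        if pinned[i] or i in (0, n - 1):
            A[i, i] = 1.0                 # pinned: value unchanged
        else:
            A[i, i] = 1.0 + 2.0 * r
            A[i, i - 1] = A[i, i + 1] = -r
    return np.linalg.solve(A, T)
```

Iterating to steady state between a wall at $T_w$ and a pinned cell at $T_{sat}$ recovers the linear profile seen later in the Stefan test, and the implicit step is unconditionally stable even for $\alpha\,\Delta t/h^2 \gg 1$.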
The energy to be advected in the upwind cell is computed based on the linear reconstruction of the variable using the Bell–Colella–Glaz scheme and the minmod slope limiter \citep{bell_second-order_1989, popinet_gerris_2003}. The time integration of the diffusion terms is treated fully implicitly. {When the vapor and liquid temperatures are both below $T_{sat}$, there is no vaporization, since we neglect evaporation driven by the gradient of vapor concentration, and no adjustment of the interfacial temperature is needed. Nevertheless, when the temperature on either side of the interface rises above the saturation temperature, which is the scenario of interest in the present study, vaporization occurs. In this case, the vapor is assumed to be saturated at the interface, and the interfacial temperature is fixed at the saturation temperature, $T_{sat}$. The heat sink due to the latent heat of vaporization is implicitly included. The interfacial temperature is then used as the boundary condition for solving the diffusion term in each phase. In all the tests considered in the present study, the interfaces are always saturated. } A potential improvement for the spatial discretization of the diffusion term is to consider a sub-grid embedded boundary condition for the interface temperature \cite{malan_geometric_2021, bures_direct_2021, gao_effect_2022, zhao_boiling_2022}. Nevertheless, the present methods already yield very accurate predictions, as shown later in the validation studies (section \ref{section:validation}). {Additionally, previous studies using embedded boundary models for the interface have shown artificial velocity fluctuations near the interface \cite{gao_effect_2022}.
This is likely due to the use of the VOF interface normal to compute the interfacial temperature gradient: even in simple interface advection cases, the direction of the interface normal in a cell fluctuates between time steps. These fluctuations in the interface normal result in fluctuations in the computed interfacial temperature gradient and, eventually, in the rate of vaporization. } Therefore, if one wants to take advantage of the sub-grid interfacial location, this challenge must be carefully tackled, and such an extension is left for future work. \subsection{Calculation of vaporization rate} \label{section:dT} The vaporization rate $j_{\gamma}$ is computed in every interfacial cell ($1>c>0$). The determination of $j_{\gamma}$ (Eq.~\eqref{eq:j_gamma}) requires the temperature gradients on both the liquid and gas sides of the interface, $(\nabla T_l)_{\gamma}$ and $(\nabla T_g)_{\gamma}$. The calculations for the gas and liquid sides are similar; here we take the gas temperature gradient as an example to demonstrate the procedure. First, it is assumed that the temperature gradient is aligned with the interface normal near the interface; therefore, the projection of the temperature gradient onto the interface normal is approximated by its magnitude, \textit{i.e.}, $(\nabla T_g)_{\gamma} \cdot \nb_{\gamma} \approx ||(\nabla T_g)_{\gamma}|| $, where $||\cdot ||$ denotes the magnitude of a vector. The magnitude of the gas temperature gradient at the interface is then obtained by extrapolation from the neighboring pure gas cells (${c}=0$).
Consider the interfacial cell $(\hat{i}, \hat{j})$ at the center of a 2D $5\times 5$ stencil. The temperature gradient in this cell is approximated as \begin{align} ||(\nabla T_g)_\gamma|| = ||(\nabla T_g)_{\hat{i},\hat{j}}|| =\sum_{i=\hat{i}-2,j=\hat{j}-2}^{i=\hat{i}+2,j=\hat{j}+2} \hat{w}_{i,j} ||\nabla T_{g}||_{i,j},\, \label{eq:temp_grad} \end{align} where the normalized weight $\hat{w}_{i,j}$ for a cell $i,j$ in the stencil is computed as \begin{align} \hat{w}_{i,j} = \frac{w_{i,j}}{\sum _{i=\hat{i}-2,j=\hat{j}-2}^{i=\hat{i}+2,j=\hat{j}+2}w_{i,j}}\,, \label{eq:weight_tg} \end{align} where \begin{align} \,w_{i,j}= \begin{cases} \xi_{i,j} ||\mathbf{d}_{i,j}||^2, & \text{if}\ c=0\\ 0, & \text{if}\ c>0\, . \end{cases} \label{eq:weight_tg_1} \end{align} In interfacial and liquid cells, $w_{i,j}=0$; in pure gas cells, $w_{i,j}$ is built from the distance vector from the center of the pure gas cell to the center of the interfacial cell, \textit{i.e.}, $\mathbf{d}_{i,j} = \mathbf{x}_{\hat{i},\hat{j}}-\mathbf{x}_{i,j}$, and its projection onto the interface normal direction, \textit{i.e.}, $\xi_{i,j}=|\nb_{\gamma} \cdot \mathbf{d}_{i,j}|$, see Fig.~\ref{fig:fig_cell}(c). The central-difference approximation is used to calculate the temperature gradient; only pure gas cells are used, to avoid computing the temperature gradient across the interface. The large stencil ensures that at least one pure gas cell can be found. As a result, the primary contributions to the temperature gradient at the interfacial cell come from the pure gas cells where the central difference can be evaluated without the temperature of the interfacial cells, see Fig.~\ref{fig:fig_cell}(c). Note that the same procedure applies to the temperature gradient on the liquid side, $||(\nabla T_l)_{\gamma}||$. With the gas and liquid temperature gradients in the interfacial cell, the rate of vaporization for the cell can be computed according to Eq.~\eqref{eq:j_gamma}.
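A sketch of this weighted extrapolation (Eqs.~\eqref{eq:temp_grad}-\eqref{eq:weight_tg_1}) is given below. For brevity, the central difference is applied in every pure gas cell of the stencil, without the additional screening of neighbors described above:

```python
import numpy as np

def interface_gradient(T, c, normal, ic, jc, h):
    """Magnitude of the gas-side temperature gradient at interfacial
    cell (ic, jc), extrapolated from the pure gas cells (c == 0) of its
    5x5 stencil with the weights xi * ||d||^2 of Eq. (weight_tg_1)."""
    num = den = 0.0
    for i in range(ic - 2, ic + 3):
        for j in range(jc - 2, jc + 3):
            if c[i, j] > 0:            # interfacial/liquid cells: w = 0
                continue
            d = np.array([(ic - i) * h, (jc - j) * h])   # gas -> interface
            w = abs(normal @ d) * (d @ d)                # xi * ||d||^2
            # central-difference gradient magnitude in the gas cell
            g = np.hypot((T[i + 1, j] - T[i - 1, j]) / (2 * h),
                         (T[i, j + 1] - T[i, j - 1]) / (2 * h))
            num += w * g
            den += w
    return num / den
```

For a linear temperature field the weighted average reproduces the exact gradient magnitude, since every pure gas cell contributes the same value.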
The volumetric source in the interfacial cell due to phase change, $s_\gamma$, is then computed using Eq.~\eqref{eq:sm}. Note that the interfacial area density in an interfacial cell is evaluated as $\phi_{\gamma}=A_{\gamma}/\Delta V$, where $A_{\gamma}$ is the area of the VOF-reconstructed interface in the cell. \subsection{Vaporization-induced interface shifting} \label{section:infc_shifting} Vaporization consumes liquid and thus induces an additional shift of the interface toward the liquid side. The vaporization-induced interfacial velocity $\ub_{\gamma}$ is normal to the interface, namely $\ub_\gamma = u_\gamma \nb_\gamma$, see Fig.~\ref{fig:fig_cell}(b) \cite{malan_geometric_2021}, where $u_\gamma$ is expressed as \begin{align} u_{\gamma} = -\frac{j_{\gamma}}{\rho_l}\,, \label{eq:u_shift} \end{align} where the negative sign indicates that the motion is opposite to the interface normal. The interface shifting is handled explicitly by displacing the reconstructed VOF interface over one time step by the distance \begin{align} \Delta d_{\gamma} = -\frac{j_{\gamma}}{\rho_l} \Delta t \,. \label{eq:shift_dist} \end{align} The volume fraction $c$ of the interfacial cell is then updated based on the shifted interface. Occasionally, the interface may leave the current cell if the liquid volume fraction was very small in the previous time step. In such a case, the interface moves into a neighboring pure liquid cell; that liquid cell then becomes an interfacial cell, with the liquid volume fraction computed from the interface location. Since phase change introduces a new velocity scale $u_\gamma$, it is accounted for in the CFL condition for the time-step calculation. \begin{figure}[tbp] \begin{center} \includegraphics [width=1\columnwidth]{HD_Fig_01_cells} \end{center} \caption{Key steps of the VOF advection with phase change.
(a) the VOF advection based on the liquid velocity, (b) shifting the interface by $\Delta d_{\gamma}$, (c) computing the vaporization source term ($s_{\gamma}$), and (d) distributing the $s_{\gamma}$ to $\hat{s}$ in the nearby pure gas cells.} \label{fig:fig_cell} \end{figure} \subsection{Pressure equations and volumetric-source distribution} \label{section:source} When the higher-density liquid turns into lower-density vapor, a Stefan flow is induced in the gas phase, with velocity \begin{align} u_\text{st} = j_{\gamma} \left( \frac{1}{\rho_g} - \frac{1}{\rho_l}\right) \,. \label{eq:u_stef} \end{align} This production of volume due to the density difference is represented by the divergence source term in the continuity Eq.\,\eqref{eq:divergence1}. Since in the projection method the continuity equation is incorporated by solving the pressure Poisson equation, an additional source term appears in the Poisson equation. The velocity jump across the interface creates a challenge in advecting the color function using the VOF method \cite{malan_geometric_2021, zhao_boiling_2022, gao_effect_2022}. When phase change is absent, the gas and liquid velocities are the same in the interfacial cell, and the velocity of the interfacial cell can be used directly to compute the VOF fluxes and to advect the interface. However, when phase change occurs, the gas and liquid velocities jump across the interface. As a result, the velocity in the interfacial cell, which is generally biased toward the gas/vapor velocity, is \emph{not} the correct velocity of the interface. One way to overcome this issue is to solve an additional Poisson equation to extrapolate the liquid velocity across the interface \cite{malan_geometric_2021, bures_direct_2021, gao_effect_2022}. This treatment yields a divergence-free liquid velocity at the interface for the VOF advection and accurate results for the vaporizing interface.
However, this approach requires an additional Poisson equation, which increases the computational expense. In the method of Malan \textit{et al.~} \cite{malan_geometric_2021}, boundary conditions on two artificial embedded boundaries on either side of the interface are specified to solve the Poisson equation. For moving or deforming interfaces, these embedded boundary conditions need to be imposed every time step, which further increases the computational cost and algorithmic complexity, in particular for adaptive octree meshes. In the present study, we propose a simple yet accurate method to resolve this numerical challenge of obtaining the correct interface advection velocity when phase change is present. The volumetric source due to vaporization, $s_{\gamma}$, is first calculated according to the steps described in Section \ref{section:dT}, based on the temperature gradients for gas and liquid in the interfacial cells. Then, instead of applying the volumetric source $s_\gamma$ right at the interfacial cell, we distribute it to the nearest pure gas cells in a $5^3$ stencil in 3D ($5^2$ in 2D). The distribution of the volumetric source from the interfacial cell $(\hat{i}, \hat{j})$ to a pure gas cell $(i, j)$ in the stencil ($\hat{i}-2\le i\le\hat{i}+2$ and $\hat{j}-2\le j\le\hat{j}+2$) is \begin{equation} (\hat{s}_{i,j})_{\hat{i}, \hat{j}} = \hat{m}_{i,j} (s_\gamma)_{\hat{i}, \hat{j}}\,, \end{equation} where $\hat{m}_{i,j}$ is the normalized weight, defined similarly to Eq.~\eqref{eq:weight_tg} as \begin{align} \hat{m}_{i,j} = \frac{m_{i,j}}{\sum _{i=\hat{i}-2,j=\hat{j}-2}^{i=\hat{i}+2,j=\hat{j}+2}m_{i,j}}\,, \label{eq:weight_vs} \end{align} where \begin{align} m_{i,j}= \begin{cases} {\xi_{i,j}}/{||\mathbf{d}_{i,j}||}, & \text{if}\ c=0\\ 0, & \text{if}\ c>0\, . \end{cases} \label{eq:weight_vs_1} \end{align} As a result, only pure gas cells in the stencil receive a distribution.
The weight for a gas cell is the ratio between the projection of the distance vector $\mathbf{d}_{i,j} = \mathbf{x}_{\hat{i},\hat{j}}-\mathbf{x}_{i,j}$ onto the interface normal direction, \textit{i.e.}, $\xi_{i,j}=|\nb_{\gamma} \cdot \mathbf{d}_{i,j}|$, and the norm of $\mathbf{d}_{i,j}$. A gas cell in the stencil that is better aligned with the interface normal thus receives a larger share of the distribution. Note that the distributed volumetric source in a pure gas cell, $\hat{s}$, is the sum of the contributions from all nearby interfacial cells. The integral of the distributed source $\hat{s}$ over all pure gas cells equals the integral of the volumetric source $s_\gamma$ over all interfacial cells, \begin{align} \sum_{1>c_{i,j}>0} (s_{\gamma})_{i,j} = \sum_{c_{i,j}=0} \hat{s}_{i,j}\,, \end{align} ensuring conservation of the vapor volume generated by the whole interface. Note that the 2D case has been used here to explain the algorithm; the 3D algorithm is very similar. Finally, the distributed source is added to the pressure Poisson equation as \begin{align} \nabla \cdot \left( \frac{\Delta t}{\rho} \nabla p \right) = \nabla \cdot \ub^{**} - \hat{s} \left( \frac{1}{\rho_g}-\frac{1}{\rho_l} \right) \,, \label{eq:poisson0} \end{align} where $\ub^{**}$ is the auxiliary velocity that accounts for all the terms in the momentum equation except the pressure term. The present treatment of the vaporization-induced volumetric source has several important advantages. First of all, by relocating the volumetric source from the interfacial cell to the neighboring gas cells, the velocity in the interfacial cell is not ``contaminated" by the Stefan flow and remains the liquid velocity by which the interface moves (before the additional shifting discussed in section \ref{section:infc_shifting} is imposed).
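The distribution step can be sketched as follows for a single interfacial cell (Eqs.~\eqref{eq:weight_vs}-\eqref{eq:weight_vs_1}); conservation of the redistributed source follows directly from the weight normalization:

```python
import numpy as np

def distribute_source(s_gamma, c, normal, ic, jc, h):
    """Distribute the volumetric source s_gamma of interfacial cell
    (ic, jc) among the pure gas cells (c == 0) of its 5x5 stencil,
    using the alignment/distance weights xi / ||d|| of the paper.
    Returns a dict {(i, j): share} whose values sum to s_gamma."""
    w = {}
    for i in range(ic - 2, ic + 3):
        for j in range(jc - 2, jc + 3):
            if c[i, j] > 0:        # interfacial and liquid cells get nothing
                continue
            d = np.array([(ic - i) * h, (jc - j) * h])
            w[(i, j)] = abs(normal @ d) / np.linalg.norm(d)
    total = sum(w.values())
    return {cell: s_gamma * wij / total for cell, wij in w.items()}
```

In the full algorithm, $\hat{s}$ in a given gas cell accumulates the shares from all nearby interfacial cells, so summing the returned shares over the whole interface reproduces the conservation statement above.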
Furthermore, the distributed volumetric source in the pure gas cells induces the Stefan flow in the gas region near the interface. As a numerical approximation, we have moved the volumetric source away from the interface, which slightly modifies the Stefan flow right next to the interface. However, since the relocation distance for $s_\gamma$ is typically less than two grid cells, its influence is very small, as will be shown in the test results (Section \ref{section:validation}). The present method also distinguishes itself from the source-term distribution method of Hardt and Wondra (HW) \cite{hardt_evaporation_2008, gao_effect_2022} in several important aspects. First, the present method does not need to solve an additional diffusion equation as required in the HW method. The direct distribution of the source in a compact stencil keeps the effective thickness of the volumetric source small. In general, we have used a $5^3$ stencil centered at the interfacial cell; the effective thickness is therefore about two cells. It is also possible to use a more compact $3^3$ stencil; however, this is less robust when the interface is strongly distorted and no pure gas cells are present in the stencil. It is also worth noting that the present method assumes the vapor region is reasonably well resolved, so that at least one pure gas cell exists in the stencil. Second, while in the HW method the source is distributed in both pure gas and liquid cells as $s_\gamma/\rho_g$ and $s_\gamma/\rho_l$, respectively, the present method distributes only the volumetric source due to the density difference, $s_\gamma (1/\rho_g-1/\rho_l)$, in the pure gas cells; the disappearance of mass on the liquid side is accounted for by shifting the VOF-represented interface explicitly (see section \ref{section:infc_shifting}).
While both methods capture the Stefan flow and the interface motion, the present method only needs to modify the velocity field on the gas side and thus has a more compact source-distribution region. It is worth noting that vaporization only modifies the velocity and pressure fields, and only induces a Stefan flow, when the densities of the two phases are different (see Eqs.~\eqref{eq:divergence1} and \eqref{eq:poisson0}). The Stefan flow appears only in the lighter gas phase ($\rho_g\ll \rho_l$); therefore, distributing the volumetric source only in the gas phase is more consistent with the vaporizing-flow physics. \subsection{Staggered-in-time temporal discretization} The spatially discrete equations above are discretized in time using the second-order staggered-in-time method of the \emph{Basilisk} solver \cite{popinet_accurate_2009}. The temporally discrete equations are given below; combined with the spatial discretization discussed above, they yield the fully discrete equations.
\begin{itemize} \item Advection equation \begin{align} & \frac{c^{n+\half}-c^{n-\half}}{\Delta t} = - \nabla \cdot \left( c^n \mathbf{u^n}\right ) + c_c^n \nabla \cdot \mathbf{u^n} + \left(\frac{-s_{\gamma}}{\rho_l} \right)^{n-\half} \label{eq:vof_advect} \end{align} \item Momentum equation - prediction step \begin{align} \frac{\big(\rho_l c \ub\big)^{*} - \big(\rho_l c \ub\big)^{n}}{\Delta t} & = - \ub^n \cdot \nabla (\rho_l \ub)^n \, , \label{eq:mom_adv1}\\ \frac{\big(\rho_g (1-c) \ub\big)^{*} - \big(\rho_g (1-c) \ub\big)^{n}}{\Delta t} & =- \ub^n \cdot \nabla (\rho_g \ub)^n\, , \label{eq:mom_adv2}\\ \ub^* & = \frac{\big(\rho_l c \ub\big)^{*}+\big(\rho_g (1-c) \ub \big)^{*}}{\rho_l c^{n+1/2} + \rho_g (1-c^{n+1/2})}\label{eq:mom_adv3}\\ \rho^{n+\half} \frac{\ub^{**} - \ub^* }{\Delta t} - \nabla \cdot \left( \mu^{n+\half} \Db^{**}\right) & = \nabla \cdot \left( \mu^{n+\half} \Db^*\right) + (\sigma \kappa \delta_{\gamma} \nb_{\gamma})^{n+\half} \label{eq:u_star} \end{align} \item Pressure Poisson equation \begin{align} & \nabla \cdot \left( \frac{\Delta t}{\rho^{n+\half}} \nabla p^{n+\half} \right) = \nabla \cdot \ub^{**} - \left( \hat{s} \left( \frac{1}{\rho_g}-\frac{1}{\rho_l} \right) \right)^{n+\half} \label{eq:poisson} \end{align} \item Momentum equation - projection step \begin{align} \ub^{n+1}=\ub^{**} - \frac{\Delta t}{\rho^{n+\half}} \nabla p^{n+\half} \label{eq:u_update} \end{align} \item Energy equation \begin{align} \frac{(\rho_l C_{p,l} T_l)^{n+\half} - (\rho_l C_{p,l} T_l) ^{n-\half}}{\Delta t} & = - \ub^n \cdot \nabla (\rho_l C_{p,l} T_l )^{n} + \nabla \cdot (k_l \nabla T_l)^{n+\half} \label{eq:energy_liq}\, \end{align} \begin{align} \frac{(\rho_g C_{p,g} T_g)^{n+\half} - (\rho_g C_{p,g} T_g)^{n-\half}}{\Delta t} & = - \ub^n \cdot \nabla (\rho_g C_{p,g}T_g )^{n} + \nabla \cdot (k_g \nabla T_g)^{n+\half} \label{eq:energy_gas}\, \end{align} \end{itemize} Finally, the overall solution steps are summarized in Algorithm \ref{tab:algor}.
\begin{algorithm}[H] \caption{Full algorithm summary}\label{alg:cap} \begin{algorithmic} \State Initialization of $c$, $\ub$, $T_l$, $T_g$ \While{$t<t_{end}$} \State Calculate $\Delta t$ based on CFL constraint \State VOF reconstruction; \State VOF advection neglecting the source term due to phase change (Eqs.\ \eqref{eq:adv-x_color_func}, \eqref{eq:adv-y_color_func}, \eqref{eq:vof_advect}); \State VOF-consistent advection of momentum and energy (Eqs.~\eqref{eq:mom_adv1}-\eqref{eq:mom_adv3} and Eqs.~\eqref{eq:energy_liq}-\eqref{eq:energy_gas}) \State Solve diffusion terms in energy equations to obtain $T_{l}^{n+\half}$ and $T_{g}^{n+\half}$ (Eqs.~\eqref{eq:energy_liq}-\eqref{eq:energy_gas}) \State Compute the volumetric source term $s_{\gamma}$ (Eqs.~\eqref{eq:sm}-\eqref{eq:j_gamma} and \eqref{eq:temp_grad}-\eqref{eq:weight_tg}) \State Shift reconstructed interface by $\Delta d_{\gamma}$ (Eq.~\eqref{eq:shift_dist}) to account for the phase-change term in the advection equation (Eq.~\eqref{eq:vof_advect}) \State Distribute $s_{\gamma}$ to neighboring pure gas cells and compute $\hat{s}$ \State Solve diffusion term in momentum equation to obtain $\ub^{**}$ (Eq. \eqref{eq:u_star}) \State Compute $p^{n+\half}$ by solving the Poisson Eq. \eqref{eq:poisson} \State Correct $\ub^{n+1}$ by projection (Eq. \eqref{eq:u_update}) \EndWhile \end{algorithmic} \label{tab:algor} \end{algorithm} \subsection{Quadtree/Octree Mesh} The physical models and numerical methods described above have been implemented in the \emph{Basilisk} code using adaptive octree/quadtree meshes. The maximum level of refinement $L$ corresponds to a uniform grid resolution of $2^L$ cells in each coordinate direction; \textit{i.e.}, $L9$ corresponds to $2^9=512$ cells in the $x$ direction, or $512^3$ cells in 3D. Note that the finite-volume cells have equal dimensions (square in 2D, cubic in 3D), \textit{i.e.}, $\Delta x_{min}=\Delta y_{min}$.
The mesh adaptation algorithm is based on a wavelet-estimated discretization error \cite{popinet_quadtree-adaptive_2015, van_hooft_towards_2018}, with refinement criteria based on temperature ($T$), volume fraction ($c$), and velocity ($\ub$). The advantage of an adaptive mesh is that high grid resolution is used only in user-defined regions, \textit{i.e.}, near the interface, so that the total number of computational cells can be significantly reduced. To maintain the simplicity of the interface temperature-gradient calculation (section \ref{section:dT}) and the $s_{\gamma}$ distribution (section \ref{section:source}), the mesh near the interface is always refined to the maximum level (5 cells on either side of the interface). This is achieved by refining the mesh based on a level-set function. The additional computational cost of this treatment is minor, because the temperature gradient is typically large near the interface and requires the maximum level of grid refinement anyway. The \emph{Basilisk} code has different parallelization options. In this study, we have used parallelization based on tree decomposition and MPI \cite{popinet_quadtree-adaptive_2015}. As a result, the computational domain is split into blocks with irregular shapes, instead of the simple boxes of conventional domain decomposition. This parallelization technique guarantees good performance for large-scale simulations using large numbers of processors and refinement levels. When an interfacial cell is located at a block boundary, the procedures for the distribution of the volumetric source (section \ref{section:source}) and the shifting of the interface (section \ref{section:infc_shifting}) involve the modification of variables in the ghost cells of an MPI block. An additional MPI communication is thus required for block boundaries containing interfacial cells. Nevertheless, this cost is small compared to that of the regular communications between blocks.
\section{Validation} \label{section:validation} To validate the present methods and their implementation in the \emph{Basilisk} code, a series of test cases were performed, see Table~\ref{tab:test_cases}. The purpose of each test is also listed. The CFL number is set to 0.2 for all cases. The two new velocities induced by vaporization, \textit{i.e.}, the Stefan flow velocity (Eq.~\eqref{eq:u_stef}) and the vaporization-induced interface shifting velocity $u_\gamma$ (Eq.~\eqref{eq:u_shift}) are also accounted for in the calculation of the time step. \begin{table*}[tbp] \centering \begin{tabular}{c c c } \hline Case & Section & Purpose \\ \hline Stefan Problem & \ref{section:stefan} & Vaporization due to heated gas \\ Sucking problem & \ref{section:sucking} & Vaporization due to heated liquid\\ Bubble growth & \ref{section:bubble} & 3D interface motion due to vaporization\\ Film boiling & \ref{section:film_boil} & Complex deformation of vaporizing interface\\ \hline \end{tabular} \caption{Summary of the validation cases considered in the present paper.} \label{tab:test_cases} \end{table*} \subsection{1D Stefan Problem} \label{section:stefan} The 1D Stefan problem is a common test case \cite{welch_volume_2000, shao_computational_2018, malan_geometric_2021, sato_sharp-interface_2013, hardt_evaporation_2008, bures_direct_2021, gao_effect_2022}. The domain consists of a vapor region next to a heated wall and a liquid region, see Fig.~\ref{fig:stef_suck_schematic}. The wall temperature, $T_w> T_{sat}$, is fixed. Initially, the liquid is at saturation temperature ($T_\text{sat}$). The liquid is vaporized at the interface due to the heat flux from the vapor. The vapor generated at the interface pushes the liquid to the right. The liquid will move out from the right edge of the domain, on which the pressure outlet boundary condition (BC) is imposed. 
The analytical solution for the temporal evolution of the interface position $x_\gamma$ is given as \begin{align} x_{\gamma}(t) = 2 \beta \sqrt{\alpha_g t}, \, \label{eq:stef_interface} \end{align} where $\alpha_g=k_g/(\rho_g C_{p,g})$ is the gas thermal diffusivity and $\beta$ is computed from the transcendental equation \begin{align} \beta \exp{(\beta^2)} \erf{(\beta)} = \frac{C_{p,g}(T_w - T_\text{sat})}{h_{lg} \sqrt{\pi}}\,, \label{eq:stef_beta} \end{align} where $\erf()$ is the Gauss error function. The temperature in the vapor region, as a function of space and time, is given as \begin{align} T_g(x,t) = T_w + \left(\frac{T_\text{sat} - T_w}{\erf(\beta)}\right)\erf{\left(\frac{x}{2 \sqrt{\alpha_g t}}\right)}.\, \label{eq:stef_temperature} \end{align} \begin{figure}[tbp] \begin{center} \includegraphics [width=0.7\columnwidth]{HD_Fig_02_stef_plus_suck} \end{center} \caption{Schematics for the 1D Stefan and sucking tests: (a) computational domain and (b) temperature distribution. Note that $x_{\gamma}$ is the interface location.} \label{fig:stef_suck_schematic} \end{figure} \begin{table*}[tbp] \centering \begin{tabular}{l l l l l l l} \hline Property & \multicolumn{2}{c}{Water} & \multicolumn{2}{c}{Fluid A} & \multicolumn{2}{c}{Fluid B} \\ & Liquid & Vapor & Liquid & Vapor & Liquid & Vapor \\ \hline $\rho$ $[kg/m^3]$ & $958.4$ & $0.597$ & $2.5$ & $0.25$ & $200$ & $5$ \\ $k$ $[W \, m^{-1} \, K^{-1}]$ & $0.679$ & $0.025$ & $0.07$ & $0.007$ & $1$ & $1$\\ $C_{p,g}$ $[J \, kg^{-1} \, K^{-1}]$ & $4216$ & $2030$ & $2.5$ & $1$ & $200$ & $200$ \\ $\mu$ $[Pa \, s]$ & $2.8\text{e-}{4}$ & $1.26\text{e-}{5}$& $7\text{e-}{3}$ & $7\text{e-}{4}$ & $0.1$ & $0.005$\\ $h_{lg}$ $[J \, kg^{-1}]$ & $2.26\text{e}{6}$ &- & $100$ & - & $1\text{e}{4}$ &-\\ $T_\text{sat}$ $[K]$ & $373.15$ &- & $1$ & - & $1$ &- \\ $\sigma$ $[N \, m^{-1}]$ & $0.0728$ &- & $0.001$ & - & $0.1$ &- \\ \hline \end{tabular} \caption{Properties of the saturated water (section \ref{section:stefan}), Fluid A 
(sections \ref{section:sucking}-\ref{section:bubble}), and Fluid B (\ref{section:film_boil}). } \label{tab:properties} \end{table*} \begin{figure}[tbp] \begin{center} \includegraphics [width=1\columnwidth]{HD_Fig_03_stef_results} \end{center} \caption{Numerical results for the 1D Stefan problem for three levels of grid refinement: $L5$, $L6$, and $L7$, compared with the exact solution. (a) Temporal evolution of the interface position $x_\gamma$. (b) The overall temperature $T$ ($T=T_g$ in vapor, $T=T_l$ in liquid, and $T=T_{sat}$ at the interface) and (c) the $u$-velocity distributions at the final time $t=0.12\,$s.} \label{fig:stefan_results} \end{figure} Though the flow is 1D, simulations were performed on a 2D square domain with edge length $L_0 = 1\, $mm, see Fig.~\ref{fig:stef_suck_schematic}. The exact solution at $t=0.027$ s was used as the initial condition and the corresponding initial width of the vapor region is $H_0=0.1\,$mm. The fluid is water and the properties are listed in Table~\ref{tab:properties}. The wall temperature is $T_w=383.15\,$K. The simulation results are presented in Fig.~\ref{fig:stefan_results} for three levels of grid refinement: $L5$, $L6$, and $L7$. It can be observed that the temporal evolution of the interface location for all three meshes matches very well with the analytical solution (see Fig.~\ref{fig:stefan_results}(a)). The simulations have been run to $t=1.2\,$s. The spatial distribution of temperature at the final time is shown in Fig.~\ref{fig:stefan_results}(b). Though we have solved $T_g$ and $T_l$ separately, here we plot the overall temperature $T$, which is defined as $T=T_g$ in vapor, $T=T_l$ in liquid, and $T=T_{sat}$ at the interface. It is clearly seen that the numerical results converge to the exact solution as the mesh is refined. It is also observed that the spatial variation of temperature in the vapor region is approximately linear. That is why even a coarse mesh is sufficient to yield a good approximation. 
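The analytical reference used in Fig.~\ref{fig:stefan_results} (Eqs.~\eqref{eq:stef_interface}-\eqref{eq:stef_beta}) is straightforward to reproduce; a bisection sketch is given below, using the water properties of Table~\ref{tab:properties}, and checked against the stated initial condition $H_0=0.1\,$mm at $t=0.027\,$s:

```python
import math

def stefan_beta(cp_g, T_w, T_sat, h_lg):
    """Solve beta * exp(beta^2) * erf(beta) = cp_g*(T_w - T_sat)/(h_lg*sqrt(pi))
    (Eq. stef_beta) by bisection.  The left-hand side is monotonically
    increasing, and beta is small for modest superheats."""
    rhs = cp_g * (T_w - T_sat) / (h_lg * math.sqrt(math.pi))
    f = lambda b: b * math.exp(b * b) * math.erf(b) - rhs
    lo, hi = 1e-12, 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def interface_position(t, beta, k_g, rho_g, cp_g):
    """x_gamma(t) = 2 * beta * sqrt(alpha_g * t), Eq. (stef_interface)."""
    alpha_g = k_g / (rho_g * cp_g)
    return 2.0 * beta * math.sqrt(alpha_g * t)
```

With $C_{p,g}=2030$, $T_w=383.15\,$K, $T_{sat}=373.15\,$K, and $h_{lg}=2.26\times10^6$, the solver returns $\beta\approx0.067$, which places the interface at about $0.1\,$mm at $t=0.027\,$s, consistent with the initial vapor-layer width quoted above.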
Finally, the spatial variation of the $u$-velocity is shown in Fig.~\ref{fig:stefan_results}(c), compared with the exact solution. Ideally, the velocity in the vapor is zero and that in the liquid is constant in space, $u_l(t) = \dot{x}_{\gamma}$. Due to the distribution of the volumetric source to the pure gas cells, the numerical results show a non-zero vapor velocity in a narrow region to the left of the interface. When the mesh is refined, the thickness of this non-zero-velocity region decreases. It is worth noting that the liquid velocity remains constant to the right of the interface and converges to the exact value as the cell size decreases. If the source were distributed to both the gas and liquid cells as in the HW method \cite{gao_effect_2022}, numerical smearing would also appear in the liquid velocity. \subsection{1D sucking problem} \label{section:sucking} Another common validation case is the 1D sucking problem, also known as the boiling interface problem \citep{welch_volume_2000, shao_computational_2018, sato_sharp-interface_2013, bures_direct_2021, zhao_boiling_2022}. The temperature distribution is what differentiates this problem from the Stefan problem, see Fig.~\ref{fig:stef_suck_schematic}(b). Both the wall and the vapor are at the saturation temperature, \textit{i.e.}, $T_w=T_g=T_\text{sat}$, while the liquid is superheated, and the liquid temperature at the right boundary of the domain, far from the interface, is fixed at $T_{\infty}$. The heat flux from the liquid to the interface results in vaporization. As before, the generated vapor pushes the liquid to the right.
The analytical solution for the interface position takes the same form as for the Stefan problem, \textit{i.e.}, Eq.~\eqref{eq:stef_interface}, though the parameter $\beta$ is computed from a different transcendental equation \begin{align} \exp{(\beta^2)} \erf{(\beta)}\left[\beta - \frac{(T_{\infty} - T_\text{sat}) C_{p,g} k_l \sqrt{\alpha_g} \exp{\left(-\beta^2 \frac{\rho_g^2 \alpha_g}{\rho_l^2 \alpha_l} \right)}}{h_{lg} k_g \sqrt{\pi \alpha_l} \erfc{ \left( \beta \frac{\rho_g \sqrt{\alpha_g}}{\rho_l \sqrt{\alpha_l}}\right)}} \right] = 0\, . \label{eq:suck_beta} \end{align} The exact solution of the liquid temperature is given as \begin{align} T_l(x,t) = T_{\infty} - \left( \frac{ T_{\infty}-T_\text{sat} }{ \erfc{ \beta \frac{\rho_g \sqrt{\alpha_g}}{\rho_l \sqrt{\alpha_l}}}} \right) \erfc {\left(\frac{x}{2 \sqrt{\alpha_l t}} + \beta \frac{ \rho_g-\rho_l}{\rho_l} \sqrt{\frac{\alpha_g}{\alpha_l}} \right)}.\, \label{eq:suck_T} \end{align} \begin{figure}[tbp] \begin{center} \includegraphics [width=1\columnwidth]{HD_Fig_04_suck_results} \end{center} \caption{Numerical results for the 1D sucking problem for three levels of grid refinement: $L6$, $L7$, and $L8$, compared with the exact solution. (a) Temporal evolution of the interface position $x_\gamma$. (b)-(c) The overall temperature $T$ and (d) the $u$-velocity distributions at the final time $t=2.1\,$s. Note that (c) is a close-up of the temperature distribution near the interface - see annotation in (b).
} \label{fig:sucking_results} \end{figure} \begin{table*}[tbp] \centering \begin{tabular}{c c c c} \hline Maximum grid level & Final interface location & Relative error (\%) & $\mathcal{O}$\\ \hline $L6$ & 0.086 & 4.2 & -\\ $L7$ & 0.088 & 1.5 & 1.46\\ $L8$ & 0.089 & 0.6 & 1.42\\ \hline \end{tabular} \caption{The relative error of the interface position at the final time for the 1D sucking problem and the order of convergence $\mathcal{O}$.} \label{tab:cases} \end{table*} In this test case, we have set $L_0 = 1\,$m, $H_0=0.05\,$m, and $T_{\infty}/T_\text{sat}=3$. The fluid properties (Fluid A) are provided in Table~\ref{tab:properties}. The exact solution at $t=0.1\,$s was used as the initial condition, and then the simulation was run to $t=2.1\,$s. The different meshes $L6$, $L7$, and $L8$ were used and the results are presented in Fig.~\ref{fig:sucking_results}. It is clearly shown that the numerical results converge toward the exact solution when the mesh is refined. The trajectory of the interface and the final temperature profile for the mesh $L8$ match the exact solution very well. Different from the Stefan problem, the liquid temperature near the interface rises rapidly and non-linearly in $x$ in the sucking problem. The interface positions at the final time ($t=2.1\,$s) for different meshes are provided in Table~\ref{tab:cases}, from which it can be seen that the relative error decreases with the cell size. The order of convergence is approximated using the error in the final interface position ($\epsilon$) as \begin{align} \mathcal{O} = \frac{\ln(\epsilon_{\Delta x}/\epsilon_{\Delta x/2})}{\ln(2)}\,. \end{align} Similar to the 1D Stefan problem, the distribution of $\hat{s}$ to the pure gas cells results in a non-zero vapor velocity in a narrow region on the left of the interface and as the mesh is refined, the thickness of the non-zero velocity region reduces (Fig.~\ref{fig:sucking_results}(d)).
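The order of convergence defined above can be evaluated directly from the tabulated relative errors; a minimal sketch follows (the small differences from the tabulated $\mathcal{O}$ values stem from the rounding of the listed errors):

```python
import math

def convergence_order(err_coarse, err_fine):
    """Observed order of convergence for a mesh refined by a factor of 2:
    O = ln(eps_dx / eps_dx/2) / ln(2)."""
    return math.log(err_coarse / err_fine) / math.log(2)

# Relative errors (%) of the final interface position from the table above
print(round(convergence_order(4.2, 1.5), 2))  # L6 -> L7: 1.49
print(round(convergence_order(1.5, 0.6), 2))  # L7 -> L8: 1.32
```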
More importantly, it is clearly shown that the liquid velocity on the right of the interface converges to the exact solution. \subsection{Spherical bubble growth in superheated liquid} \label{section:bubble} In this test, we simulate the growth of a spherical vapor bubble surrounded by superheated liquid \cite{malan_geometric_2021,bures_direct_2021, sato_sharp-interface_2013, zhao_boiling_2022,gao_effect_2022}. The vapor is at the saturation temperature $T_\text{sat}$ and the liquid temperature in the far-field is fixed at $T_{\infty}>T_\text{sat}$. The heat flux from the liquid side drives vaporization and bubble growth. The gravity effect is ignored, so the bubble remains spherical as it grows. This problem is the spherically symmetric analog of the 1D sucking problem. The analytical solution for the temporal evolution of the bubble radius ($R$) is \begin{align} R(t) = 2 \beta \sqrt{\alpha_l t}, \, \label{eq:bubble_radius} \end{align} where $\beta$ is obtained by solving the following equation, \begin{align} 2 \beta^2 \int_{0}^{1} \exp\left( -\beta^2 \left( (1-\zeta)^{-2} - 2\left(1-\frac{\rho_g}{\rho_l}\right) \zeta -1 \right) \right) \,d\zeta \notag\\ = \frac{\rho_l C_{p,l}(T_{\infty}-T_\text{sat})}{\rho_g(h_{lg} + (C_{p,l} - C_{p,g})(T_{\infty}-T_\text{sat}))}\, . \label{eq:bubble_beta} \end{align} The liquid temperature is expressed as a function of $r$ and $t$ as \begin{align} T_l(r,t) = T_{\infty} - 2 \beta^2 \left( \frac{\rho_g(h_{lg} + (C_{p,l} - C_{p,g})(T_{\infty}-T_\text{sat}))}{\rho_l C_{p,l}} \right) \notag \\ \int_{1-R(t)/r}^{1} \exp\left( -\beta^2 \left( (1-\zeta)^{-2} - 2\left(1-\frac{\rho_g}{\rho_l}\right) \zeta -1 \right) \right) \,d\zeta\, . \label{eq:bubble_T} \end{align} Figure~\ref{fig:3D_bubble} shows the initial bubble ($R_0=1.2$ m) in the cubic computational domain with edge length $L_0=5R_0$.
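The transcendental equation \eqref{eq:bubble_beta} involves an integral whose integrand vanishes as $\zeta\to1$, so it can be evaluated with simple quadrature. A minimal sketch using midpoint quadrature and bisection is given below; the density ratio and right-hand-side value used in the example are illustrative placeholders, not the Fluid A properties of Table~\ref{tab:properties}:

```python
import math

def lhs(beta, rho_ratio, n=2000):
    """Midpoint-rule evaluation of the left-hand side of Eq. (bubble_beta):
    2*beta^2 * int_0^1 exp(-beta^2*((1-z)**-2 - 2*(1-rho_g/rho_l)*z - 1)) dz.
    The exponent is non-negative on (0,1), so the integrand decays to zero
    as z -> 1 and the apparent singularity in (1-z)**-2 is harmless."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        s += math.exp(-beta**2 * ((1.0 - z)**-2 - 2.0 * (1.0 - rho_ratio) * z - 1.0))
    return 2.0 * beta**2 * s * h

def solve_beta(rhs, rho_ratio, lo=1e-6, hi=50.0, iters=100):
    """Bisection for beta, assuming the left-hand side grows with beta
    over the bracket (the case for the placeholder values used here)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lhs(mid, rho_ratio) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Placeholder values: density ratio rho_g/rho_l = 0.005, right-hand side 0.5
beta = solve_beta(0.5, 0.005)
```

Once $\beta$ is known, Eq.~\eqref{eq:bubble_radius} gives $R(t)$ directly.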
The octree mesh is plotted on the $x$-$y$ plane, and it can be seen that the mesh is refined to the maximum level near the interface to guarantee the interface and temperature gradient will be well captured. The fluid properties (Fluid A) are provided in Table~\ref{tab:properties} \cite{malan_geometric_2021}. The exact solution for $t=0.5\,$s is used as the initial condition, and the simulation is run to $t=0.21\,$s. \begin{figure}[tbp] \begin{center} \includegraphics [width=0.7\columnwidth]{HD_Fig_05_3D_setup_clean_annotated} \end{center} \caption{The 3D computational domain for a spherical bubble in a superheated liquid.} \label{fig:3D_bubble} \end{figure} The temperature distribution on the central $x$-$y$ cross-section is shown in Fig.~\ref{fig:bubble_cross_section}(a). It can be seen that the bubble surface is spherical and smooth, as expected. The temperature is fixed at $T_\text{sat}$ inside the bubble and increases radially outward in the liquid from the interface location. A close-up of the interface is shown in Fig.~\ref{fig:bubble_cross_section}(b), where the velocity field (vectors) and the magnitude of the temperature gradient (color) are plotted. The velocity jump across the interface can be clearly seen. More importantly, the velocity field obtained by the present method does not show artificial fluctuations of velocity magnitude or direction inside the bubble and at the interface, as observed in the previous studies (see Figs.~9-10 in Ref.~\cite{gao_effect_2022} and Fig.~8 in Ref.~\cite{malan_geometric_2021}). The rate of vaporization ($j_\gamma$), computed based on the temperature gradient, is plotted in Fig.~\ref{fig:bubble_cross_section}(c), which is non-zero only in the interfacial cells. The volumetric source $s_\gamma$ is computed from $j_{\gamma}$ using Eq.~\eqref{eq:sm} and is then distributed to the neighboring pure gas cells, and the distributed source $\hat{s}$ is plotted in Fig.~\ref{fig:bubble_cross_section}(d).
It can be seen that $\hat{s}$ is smoothly distributed in the nearby pure gas cells and the gas cells close to the interface receive a bigger share. \begin{figure}[tbp] \begin{center} \includegraphics [width=1.\columnwidth]{HD_Fig_06_bubble_cross_section} \end{center} \caption{Bubble $x$-$y$ plane cross-section at $t=0.21\,$s showing (a) the temperature distribution and the velocity quiver plot. Three close-up depictions of cells near the interface showing (b) the magnitude of the temperature gradient, (c) the rate of phase change at the interface ($j_{\gamma}$), and (d) the distributed phase-change source term ($\hat{s}$).} \label{fig:bubble_cross_section} \end{figure} The temporal evolution of the bubble radius and the radial distribution of temperature at the final time are presented in Fig.~\ref{fig:bubble_growth_rate}. Both the bubble radius and the temperature results converge toward the analytical solution as the mesh is refined from $L7$ to $L9$. The results for $L9$ agree with the exact solution very well. The values of the bubble radius and the relative errors at the final time for various levels of refinement are provided in Table~\ref{tab:bubble_results}. It is clear that the error is reduced when the mesh is refined. \begin{table*}[tbp] \centering \begin{tabular}{c c c c} \hline Maximum grid level & Final radius & Relative error (\%) & $\mathcal{O}$\\ \hline $L7$ & 0.231 & 3.7 & -\\ $L8$ & 0.236 & 1.44 & 1.36\\ $L9$ & 0.239 & 0.26 & 2.47\\ \hline \end{tabular} \caption{The relative error of the bubble radius at the final time for the spherical bubble problem and the order of convergence $\mathcal{O}$.} \label{tab:bubble_results} \end{table*} \begin{figure}[tbp] \begin{center} \includegraphics [width=1\columnwidth]{HD_Fig_07_bubble_growth_results} \end{center} \caption{Numerical results for the spherical bubble growth problem for three levels of grid refinement: $L7$, $L8$, and $L9$, compared with the exact solution.
(a) Temporal evolution of the radial interface position $r_\gamma$. (b) Temperature distribution at the final time $t=0.21\,$s.} \label{fig:bubble_growth_rate} \end{figure} \subsection{Film boiling} \label{section:film_boil} The last validation test case to be presented is the 2D film boiling problem \citep{berenson_film-boiling_1961, klimenko_film_1981, sun_modeling_2014, esmaeeli_computations_2004, tomar_numerical_2005, hardt_evaporation_2008}. The buoyancy effect is included, which triggers the Rayleigh-Taylor instability (RTI) at the interface. Compared to the previous tests, which involve only 1D flows, this test exhibits 2D flows and more complex interface deformation. The most unstable wavelength for inviscid RTI with surface tension is given as \begin{align} \lambda_d = 2 \pi \sqrt{\frac{3 \sigma}{(\rho_l-\rho_g)g}}\,. \label{eq:film_wavelength} \end{align} Based on this, we set the computational domain to $x\in[-\lambda_d/2,\lambda_d/2]$ and $y\in[0,\lambda_d]$, so that the width of the domain covers one most-unstable wavelength. The initial interface is perturbed with a single mode and the interfacial position is expressed as \begin{align} y=\frac{\lambda_d}{128}\left[ 4+\cos\left( \frac{2 \pi x}{\lambda_d}\right)\right]. \label{eq:film_init} \end{align} \begin{figure}[tbp] \begin{center} \includegraphics [width=0.7\columnwidth]{HD_Fig_08_film_domain} \end{center} \caption{The schematic of the computational domain for the film boiling problem.} \label{fig:film_domain} \end{figure} The fluid properties are provided in Table~\ref{tab:properties} (Fluid B) and the gravitational acceleration is $g=9.81\,$m/s$^2$. The bottom wall is a no-slip boundary at a constant elevated temperature $T_\text{wall}=T_\text{sat}+5\,$K. Symmetry conditions are applied on the left and right boundaries, and an outlet condition on the top boundary. Initially, the fluids are stationary and the temperature varies linearly in the vapor between the wall and the interface.
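The most-unstable wavelength and the initial perturbation above are straightforward to evaluate; a short sketch follows (the fluid values used in the example are illustrative only, not the Fluid B properties of Table~\ref{tab:properties}):

```python
import math

def most_unstable_wavelength(sigma, rho_l, rho_g, g=9.81):
    """Eq. (film_wavelength): lambda_d = 2*pi*sqrt(3*sigma/((rho_l-rho_g)*g))
    for inviscid RTI with surface tension."""
    return 2.0 * math.pi * math.sqrt(3.0 * sigma / ((rho_l - rho_g) * g))

def initial_interface(x, lam):
    """Eq. (film_init): single-mode perturbation y = lam/128*(4 + cos(2*pi*x/lam))."""
    return lam / 128.0 * (4.0 + math.cos(2.0 * math.pi * x / lam))

# Illustrative values: sigma = 0.1 N/m, rho_l = 200 kg/m^3, rho_g = 5 kg/m^3
lam = most_unstable_wavelength(0.1, 200.0, 5.0)
```

Note that the initial film thickness varies between $3\lambda_d/128$ at the domain sides and $5\lambda_d/128$ at the center, and that $\lambda_d$ scales as $\sqrt{\sigma}$.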
\begin{figure}[tbp] \begin{center} \includegraphics [width=1.0\columnwidth]{HD_Fig_09_shape_converge_SUN} \end{center} \caption{(a) The temporal evolution of the interface from $t=0.237\,$s to $t=0.453\,$s at increments of $0.024\,$s, and (b) the interface shape of the film boiling problem at three levels of refinement ($L9$, $L10$, and $L11$) compared to the results from Sun \textit{et al.}~\citep{sun_modeling_2014} at $t\approx 0.24\,$s.} \label{fig:film_converge} \end{figure} Figure~\ref{fig:film_converge}(a) shows the temporal development of the interface from $t=0.237$ to $0.453\,$s at a constant increment of $0.024\,$s. The development of the RTI bubble can be clearly seen. In Fig.~\ref{fig:film_converge}(b), the present results at $t\approx 0.24\,$s for different mesh refinement levels are compared to the numerical results of Sun \textit{et al.}~\citep{sun_modeling_2014}. Again, the present numerical results converge as the mesh is refined, and the results for $L11$ are in good agreement with the previous numerical results using a uniform mesh. The solution appears to have converged at $L10$ as the difference between the results for $L10$ and $L11$ is almost invisible (Fig.~\ref{fig:film_converge}). \section{Deformation and breakup of a vaporizing drop} The results for the above validation cases and the good agreement with exact solutions and previous numerical studies have affirmed that the present methods and the implementation in the \emph{Basilisk} code are capable of accurately resolving interfacial liquid-gas two-phase flows with vaporization. In this section, the code will be applied to simulate the aerodynamic breakup of a vaporizing drop in a uniform hot gas stream. It is considered here that a spherical drop at saturated temperature ($T=T_\text{sat}$) is stationary at $t=0$ and is suddenly exposed to an unbounded uniform hot vapor stream ($T=T_\infty$) at $t=0^+$.
When vaporization is absent ($T_\infty= T_\text{sat}$), the drop dynamics and topology evolution are fully determined by the densities and viscosities of the drop liquid and the gas, $\rho_l,\ \mu_l,\ \rho_g,\ \mu_g$, the surface tension $\sigma$, the initial drop diameter $D_0$, and the uniform gas stream velocity $U_\infty$. The subscript $0$ is used to represent the initial state. These parameters lead to four independent dimensionless parameters: the Weber number, $\text{We}=\rho_g U_\infty^2 D_0/\sigma$, the Reynolds number, $\text{Re}=\rho_g U_\infty D_0/\mu_g$, the Ohnesorge number, $\text{Oh}=\mu_l/\sqrt{\rho_l D_0 \sigma}$, and the gas-to-liquid density ratio, $\eta=\rho_g/\rho_l$ \citep{pilch_use_1987, hsiang_near-limit_1992, joseph_breakup_1999, guildenbecher_secondary_2009}. Alternative dimensionless parameters can be defined based on the above four parameters \citep{guildenbecher_secondary_2009}. For drops with low Oh, such as millimeter drops of low-viscosity liquids like water, the viscous stress is small compared to surface tension and thus the latter is the dominant stabilization mechanism against drop deformation and breakup. In such cases, $\text{We}$ is the most important parameter and is typically used to characterize the breakup modes. For low-Oh drops, the critical Weber number is $\text{We}_{cr}=11\pm2$ \cite{guildenbecher_secondary_2009, theofanous_physics_2012}, and the drop will break only when $\text{We}> \text{We}_{cr}$. Recent detailed numerical simulations of drop aerobreakup indicated that 2D axisymmetric simulations yield a good approximation only for non-breaking drops at low We and Re \cite{mahmood_effects_2021}. For drops with high We and Re, which will break in a bag or multi-bag mode, fully 3D simulations are required to accurately capture the drop dynamics. Here we consider two cases of different We. In the first case, we considered a water drop at $\text{We}=1.5$ and $20<\text{Re}<200$.
Since $\text{We}$ is significantly lower than $\text{We}_{cr}$, the drop will not break and the deformation is mild, so 2D axisymmetric simulations were performed. In the second case, we have considered an acetone drop at $\text{We}=62.7$. The drop will break and thus a fully 3D simulation was conducted. When $T_\infty> T_\text{sat}$, vaporization will occur and the rate is controlled by the Stefan number (also referred to as the Spalding and Jakob numbers), $\text{St}= C_{p,g}(T_{\infty}-T_\text{sat})/h_{lg}$. In both cases considered here, $\text{St}$ is about 0.1. When the drop deformation is not negligible, the increase in the interfacial area will result in an increase in the drop vaporization rate. In such cases, the heat and mass transfer between the drop and the surrounding vapor will deviate from the models for spherical drops \cite{renksizbulut_experimental_1983}, which are strictly valid only in the limit of zero We. Sub-grid drop vaporization models are important for the accurate simulation of sprays in practical applications. For sprays consisting of a large number of drops, it is infeasible to resolve the interface for each individual drop. Instead, a drop is modeled as a point mass (or a group of drops as a parcel), which is known as the Lagrangian point-particle (LPP) approach \cite{balachandar_scaling_2009}. Since the flow and temperature fields in the drop scale are not resolved, physical models are required to account for the momentum, heat, and mass transfer between the drop and the surrounding gas and to predict the children droplets generated after breakup \cite{wert_rationally-based_1995, dai_temporal_2001, kuo_maximum_2022}. For drops with finite We, the drop deformation/breakup and vaporization are closely coupled. LPP models that can accurately capture this coupling effect remain to be established.
High-fidelity detailed numerical simulations, enabled by the present method, can provide crucial insights into the currently unclear physics and, thus, are important to the development of such models in the future. \subsection{Axisymmetric 2D simulations for a vaporizing drop at low We} \label{section:droplet} We first consider a water drop at a low Weber number. The water properties are provided in Table~\ref{tab:properties}, where the Prandtl number of the surrounding steam is $\text{Pr}\approx1$. The drop is initially spherical with a radius $R_0$. A parametric study has been performed by varying the Reynolds number, \textit{i.e.}, $22<\text{Re}<200$, and in total 25 cases were simulated. For each case, we have adjusted the free-stream velocity $U_\infty$ and the surface tension $\sigma$ simultaneously to keep the Weber number fixed at $\text{We}=1.5$. Due to the low We and Re, the drop deformation is mild. Therefore, 2D axisymmetric simulations are sufficient to capture the drop dynamics and vaporization. For all cases, the free-stream temperature is set to $T_{\infty}\approx600\,$K, which yields $\text{St}=0.1$. \begin{figure}[tbp] \begin{center} \includegraphics [width=0.9\columnwidth]{HD_Fig_10_moving_drop} \end{center} \caption{The computational domain for 2D-axisymmetric simulation of the vaporization of a freely moving drop in a hot vapor stream.} \label{fig:moving_droplet_diagram} \end{figure} The computational domain for the 2D axisymmetric simulation is shown in Fig.~\ref{fig:moving_droplet_diagram}. The domain is a square with the edge length $L_0=8D_0$. The drop is initially located at $x=1.5D_0$. For all cases, we have run the simulations with $L12$ (equivalent to $\Delta x = D_0/512$) to $t^*=0.16$, unless stated otherwise. Note that $t^*$ is the dimensionless time defined as \cite{ranger_aerodynamic_1969} \begin{align} t^* = t\frac{U_{\infty}}{D_0\sqrt{\rho_l/\rho_g}}.
\label{eq:t_star} \end{align} Within this short time duration, the change in the drop velocity is very small due to the large density contrast between the liquid and the vapor. As a result, the relative velocity can be approximated as unchanged, so Re and We remain at their initial values. After a short transient during which the velocity and thermal boundary layers develop from the initial condition, the drop vaporization rate reaches an approximate quasi-steady state, and its value is then measured. To shorten the transition to the quasi-steady state, the initial condition for the temperature is specified as \begin{align} T(r)= \begin{cases} T_\text{sat}\,,& \text{if } r\leq R_0\\ T_{\infty}\,,& \text{if } r\geq R_{\infty}\\ \frac{r-R_0}{R_{\infty}-R_0}(T_{\infty}-T_\text{sat})+ T_\text{sat}\,,& \text{otherwise} \end{cases} \label{eq:init_temp_drop} \end{align} where we set $R_{\infty}=1.25R_0$, as shown in Fig.~\ref{fig:moving_droplet_diagram}. As long as the artificial thermal boundary layer is thin, $R_{\infty}-R_0\ll R_0$, the specific value of $R_{\infty}$ is immaterial and does not influence the reported results. For all the cases simulated, a quasi-steady state was reached before $t^*=0.06$. Extensive experimental studies have been conducted for the vaporization of a spherical drop at a quasi-steady state, from which empirical relations for the Nusselt (Nu) number have been extracted \citep{renksizbulut_experimental_1983, sazhin_advanced_2006, haywood_detailed_1989, chiang_numerical_1992, yuen_heat-transfer_1978}.
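The piecewise initial temperature above, a saturated drop, the far-field value, and a linear ramp across the artificial boundary layer, can be sketched as follows; the numerical values in the example are illustrative only:

```python
def initial_temperature(r, R0, Rinf, T_sat, T_inf):
    """Initial condition of Eq. (init_temp_drop): T_sat inside the drop,
    T_inf beyond R_inf, and a linear ramp in between so that the profile
    is continuous at both r = R0 and r = Rinf."""
    if r <= R0:
        return T_sat
    if r >= Rinf:
        return T_inf
    return T_sat + (r - R0) / (Rinf - R0) * (T_inf - T_sat)

# Illustrative check with R0 = 1, Rinf = 1.25*R0, T_sat = 373 K, T_inf = 600 K
T_mid = initial_temperature(1.125, 1.0, 1.25, 373.0, 600.0)
```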
A commonly used empirical model is the one developed by Renksizbulut and Yuen \citep{renksizbulut_experimental_1983}, which can be expressed as \begin{align} Z_f \equiv [\text{Nu}(1 + \text{St})^{0.7} - 2]\text{Pr}_f^{-{1}/{3}} = 0.57 \text{Re}_f^{{1}/{2}} \, \label{eq:empirical} \end{align} where $\text{Re}_f=2R\rho_{g,\infty} |\ub_{\infty}-\ub_d|/\mu_{f}$ and $\text{Pr}_f=C_{p,f} \mu_{g,f}/ k_f$ are the Reynolds and Prandtl numbers based on the film properties. The subscript $f$ indicates parameters evaluated at the film temperature of the gas surrounding the drop, $T_f=(T_{\infty}+T_\text{sat})/2$. The variation in gas properties due to the temperature variation in the film is ignored, so $\text{Re}_f=\text{Re}_{\infty}$. The drop vaporization experiments were conducted for a range of Reynolds, Stefan, and Prandtl numbers: $25<\text{Re}<2000$, $0.07<\text{St}<2.79$, and $0.7<\text{Pr}<1$ \citep{renksizbulut_experimental_1983}. Equation~\eqref{eq:empirical} is therefore valid within these parameter ranges. Furthermore, it is assumed that the drop is initially at $T_\text{sat}$ and that heat transfer by radiation is negligible. Based on the convective heat transfer characterized by Nu, the rate of change of the drop volume can then be computed as \begin{align} \dot{V_l} = \frac{- \dot{q} A_s }{\rho_l h_{lg}}\,, \label{eq:volume_evap_rate} \end{align} where the drop surface area is $A_s=4\pi R_0^2$, the convective heat transfer coefficient is $h=\text{Nu}\,k_g/(2R_0)$, and the rate of heat transfer is $\dot{q}=h(T_{\infty}-T_\text{sat})$. Figure~\ref{fig:moving_drop_temperature} shows the temperature and velocity fields at the end of the simulation ($t^*=0.16$) for $\text{Re}=200$. As expected, the temperature gradient is higher on the windward surface of the drop (Fig.~\ref{fig:moving_drop_temperature}); as a result, the majority of the vaporization occurs near the front stagnation point \cite{renksizbulut_numerical_1983}.
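Equations \eqref{eq:empirical} and \eqref{eq:volume_evap_rate} combine into a small routine; a sketch is given below. The values $\text{Re}_f=126$, $\text{Pr}_f=1$, and $\text{St}=0.1$ used in the example correspond to the water case, but the gas properties passed to the volume-loss function are illustrative placeholders, so the outputs here are indicative only:

```python
import math

def nusselt_ry(re_f, pr_f, st):
    """Renksizbulut-Yuen correlation, Eq. (empirical), solved for Nu:
    Nu = (2 + 0.57*Re_f**0.5 * Pr_f**(1/3)) / (1 + St)**0.7."""
    return (2.0 + 0.57 * math.sqrt(re_f) * pr_f ** (1.0 / 3.0)) / (1.0 + st) ** 0.7

def volume_loss_rate(nu, k_g, r0, t_inf, t_sat, rho_l, h_lg):
    """Eq. (volume_evap_rate): dV_l/dt = -q_dot*A_s/(rho_l*h_lg), with
    h = Nu*k_g/(2*R0), q_dot = h*(T_inf - T_sat), A_s = 4*pi*R0**2."""
    h = nu * k_g / (2.0 * r0)
    q_dot = h * (t_inf - t_sat)
    a_s = 4.0 * math.pi * r0 ** 2
    return -q_dot * a_s / (rho_l * h_lg)

nu_with_stefan = nusselt_ry(126.0, 1.0, 0.1)  # ~7.86
nu_no_stefan = nusselt_ry(126.0, 1.0, 0.0)    # ~8.40
```

Note that setting $\text{St}=0$ raises the predicted Nu, \textit{i.e.}, neglecting the Stefan flow overestimates the heat transfer to the drop.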
On the leeward side of the drop, the temperature gradient is much lower since the gas temperature is low in the wake, where the gas is cooled by the low-temperature drop. \begin{figure}[tbp] \begin{center} \includegraphics [width=1.\columnwidth]{HD_Fig_11_droplet_temperature} \end{center} \caption{Temperature (top), and the temperature gradient magnitude and velocity field (bottom) for a moving water drop in a high-temperature vapor stream at $t^*=0.16$ for $\text{We}=1.5$ and $\text{Re}=200$. The black contour line represents the drop surface. For better visualization, the velocity vectors are only plotted at mesh refinement level 6. Note that $r^*=r/R_0$ and $z^*=z/R_0$.} \label{fig:moving_drop_temperature} \end{figure} The simulation results for the temporal evolution of the drop volume are shown in Fig.~\ref{fig:moving_drop_result}, compared with the empirical model (Eq.~\eqref{eq:volume_evap_rate}). The simulation results \emph{without} the Stefan flow are also shown to demonstrate the effect of the Stefan flow on the rate of vaporization. The Stefan flow is turned off by manually setting $\hat{s}=0$ in Eq.~\eqref{eq:poisson}. Note that vaporization is still considered when the Stefan flow is turned off, \textit{i.e.}, $\hat{s}=0$ and $s_{\gamma}\neq0$, resulting in a reduction in the droplet volume (Fig.~\ref{fig:moving_drop_result}). The empirical model without the Stefan flow is obtained by setting $\text{St}=0$ in Eq.~\eqref{eq:empirical} \citep{sazhin_advanced_2006}. For both cases, with and without the Stefan flow, the simulation results converge as the refinement level increases from $L10$ to $L12$. The simulation results for $L12$ and the empirical models agree very well. The Nusselt number measured from the simulations is compared with the predictions of the empirical model in Table~\ref{tab:nu_droplet} and excellent agreement is observed.
The Stefan flow of lower-temperature vapor (at $T_\text{sat}$) reduces the temperature gradient at the interface and thus decreases the rate of heat and mass transfer. Ignoring the Stefan flow therefore leads to an overestimation of the drop vaporization rate, so it is important to capture the Stefan flow accurately in a simulation. \begin{figure}[tbp] \begin{center} \includegraphics [width=1.\columnwidth]{HD_Fig_12_vol_NU_converge_stef} \end{center} \caption{Simulation results for the decrease of the drop volume due to vaporization, with and without the Stefan flow, for $\text{Re}=126$. The corresponding empirical correlations are also shown for comparison. To visualize the rate of liquid volume loss, the droplet volume is non-dimensionalized using $V_l^{**}=V_l/V_l(t^*=0.06)$, where the quasi-steady state is achieved before $t^*=0.06$.} \label{fig:moving_drop_result} \end{figure} \begin{table*}[tbp] \centering \begin{tabular}{c c c } \hline Case & Nu & Nu (without Stefan flow) \\ \hline L10 & 7.12 & 7.84 \\ L11 & 7.38 & 8.22 \\ L12 & 7.53 & 8.43 \\ Empirical & 7.42 & 8.44 \\ \hline \end{tabular} \caption{The Nusselt numbers with and without Stefan flow for various levels of grid refinement ($L10$, $L11$, $L12$) compared to the empirically predicted Nu.} \label{tab:nu_droplet} \end{table*} Finally, the simulation results for different Re (with mesh $L11$) are shown in Fig.~\ref{fig:empir_vs_sim}, compared with the empirical correlation (Eq.~\eqref{eq:empirical}). The parameter $Z_f$ in Eq.~\eqref{eq:empirical} scales with $\text{Re}^{1/2}$, and the simulation results agree well with this scaling relation. The computed values of $Z_f$ match the empirical model well over the whole range of Re considered. The small discrepancy may be due to the small drop deformation in the simulations, since the drop was not constrained to remain perfectly spherical as in the experiment.
The good agreement observed here further validates the present methods in simulating the vaporization of a slightly deforming drop. \begin{figure}[tbp] \begin{center} \includegraphics [width=0.8\textwidth]{HD_Fig_13_water_steam_NU} \end{center} \caption{Simulation results for different $\text{Re}$ between 20 and 200, compared with the empirical correlation ($Z_f$ is defined in Eq.~\eqref{eq:empirical}).} \label{fig:empir_vs_sim} \end{figure} \subsection{Fully 3D simulation for the breakup of a vaporizing drop} \label{3D_droplet} In the second case, the value of We is increased; as a result, the vaporizing drop deforms significantly and eventually breaks. The drop fluid is acetone and the initial diameter is $D_0=4.1\,\mu$m. The free-stream gas is acetone vapor. The liquid and vapor properties are provided in Table~\ref{tab:properties_acetone}. The free-stream gas velocity, pressure, and temperature are $U_\infty=214.4\,$m/s, $p_\infty=285\,$kPa, and $T_\infty=402\,$K, respectively. These parameters are chosen to be similar to the post-shock conditions for a planar shock wave with Mach number 1.6, inspired by the shock-tube experiment of Duke-Walker \textit{et al.}~\cite{duke-walker_evaporation_2021}, though the present simulation neglects the effects of shock-drop interaction and compressibility. A 3D simulation is performed in this case and Fig.~\ref{fig:3D_moving_droplet_diagram} shows the cubic computational domain, the edge length of which is $L_0=16D_0$. The origin is at the center of the left surface of the domain. The initial position of the drop is $(x,y,z)=(1.5D_0,0,0)$. The simulation has been run to $t^*=2$.
\begin{table*}[tbp] \centering \begin{tabular}{l l l} \hline Property & \multicolumn{2}{c}{Acetone} \\ & Liquid & Vapor \\ \hline $\rho$ $[kg \, m^{-3}]$ & $710$ & $5.11$ \\ $k$ $[W \, m^{-1} \, K^{-1}]$ & $0.156$ & $0.0166$\\ $C_{p}$ $[J \, kg^{-1} \, K^{-1}]$ & $2420$ & $1460$ \\ $\mu$ $[Pa \, s]$ & $1.85 \times 10^{-4}$ & $9.59\times 10^{-6}$\\ $h_{lg}$ $[J \, kg^{-1}]$ & $4.88 \times 10^5$ & - \\ $T_\text{sat}$ $[K]$ & $359$ & - \\ $\sigma$ $[N \, m^{-1}]$ & $0.0153$ & - \\ \hline \end{tabular} \caption{Acetone fluid properties.} \label{tab:properties_acetone} \end{table*} \begin{figure}[tbp] \begin{center} \includegraphics [width=0.7\columnwidth]{HD_Fig_14_POV_3D} \end{center} \caption{Schematic of the 3D computational domain for an acetone drop exposed to a uniform high-temperature vapor stream.} \label{fig:3D_moving_droplet_diagram} \end{figure} To simplify the analysis, the drop is assumed to be initially at $T_\text{sat}$ and the variation in the gas properties due to temperature is ignored. The key non-dimensional parameters are $\text{We}=62.7$, $\text{Re}=468$, $\text{Oh}=0.0277$, $\eta=0.0072$, $\text{Pr}=0.844$, and $\text{St}=0.127$. The time step is determined based on $\text{CFL}=0.5$. The maximum level of refinement is $L=12$, resulting in a minimum cell dimension $\Delta x=D_0/256$. The total number of octree cells is approximately 11 million (equivalent to 69 billion uniform Cartesian cells). The simulation has been run on the Anvil machine for 3 days using 256 cores (AMD EPYC Milan processors). The deformation and breakup of the vaporizing acetone drop are shown in Fig.~\ref{fig:acetone_drop}. The time snapshots are from $t^*=0$ to 2.0 with an increment of $0.1$. To better illustrate the interaction between the drop and the surrounding vapor, Fig.~\ref{fig:acetone_drop_u_T} shows the drop surface along with the contours of the temperature $(T)$ and velocity magnitude $(\| \mathbf{u} \|)$ on the central $x$-$z$ and $x$-$y$ planes.
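The quoted non-dimensional parameters can be reproduced from the free-stream conditions given in the text and the properties in Table~\ref{tab:properties_acetone}; a short check (small differences from the quoted values are rounding):

```python
import math

# Acetone properties (Table) and free-stream conditions from the text
rho_l, rho_g = 710.0, 5.11        # liquid / vapor density [kg/m^3]
mu_l, mu_g = 1.85e-4, 9.59e-6     # dynamic viscosity [Pa s]
k_g, cp_g = 0.0166, 1460.0        # vapor conductivity [W/m/K], heat capacity [J/kg/K]
h_lg, sigma = 4.88e5, 0.0153      # latent heat [J/kg], surface tension [N/m]
T_inf, T_sat = 402.0, 359.0       # free-stream / saturation temperature [K]
U_inf, D0 = 214.4, 4.1e-6         # free-stream velocity [m/s], drop diameter [m]

We = rho_g * U_inf**2 * D0 / sigma         # ~62.9 (quoted: 62.7)
Re = rho_g * U_inf * D0 / mu_g             # ~468
Oh = mu_l / math.sqrt(rho_l * D0 * sigma)  # ~0.0277
eta = rho_g / rho_l                        # ~0.0072
Pr = cp_g * mu_g / k_g                     # ~0.844
St = cp_g * (T_inf - T_sat) / h_lg         # ~0.129 (quoted: 0.127)
```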
In a very short time after the initially stationary drop is exposed to the gas stream, the viscous and capillary effects are negligible and the flow field is dominated by inviscid mechanisms. Two stagnation points can be seen, on the windward and leeward poles of the drop. As a result, the drop is compressed in the streamwise direction, and the shape of the drop is approximately symmetric with respect to the central $y$-$z$ plane normal to the streamwise direction, see $t^*=0$ to 0.2. As time evolves, the boundary layer separates and the wake starts to form. The leeward side of the drop becomes flat ($t^*=0.3$ to 0.5). The streamwise compression of the drop results in lateral expansion, and an ``edge'' of high curvature is formed at the periphery. The high gas velocity at the periphery results in a strong shear, and the Kelvin-Helmholtz (KH) like shear instability drives the edge to roll over the back of the drop, turning the drop into a bowl with the opening facing downstream (sometimes also referred to as a backward bag) ($t^*=0.6$ to 1.3). As the drop continues to expand in the lateral direction, the Rayleigh-Taylor (RT) instability develops on the windward surface near the periphery, turning the drop from a bowl into a shape similar to a sombrero hat ($t^*=1.4$ to 1.7). The drop at $t^*=1.7$ consists of both a backward bag near the central axis and a forward ring bag. This complex shape is the outcome of both the KH and RT instabilities, and this drop morphology has been observed for moderate We when $\eta$ is not too small \cite{marcotte_density_2019, jain_secondary_2019}. The high-speed gas blows into the forward ring bag and causes it to inflate rapidly, resulting in a fast decrease in the sheet thickness. At around $t^*=1.8$, holes appear in the forward ring bag. The expansion and merging of multiple holes disintegrate the ring bag, forming a large number of small children drops, an unbroken backward bag, and a circular rim ($t^*=1.8$ to 2.0).
To simulate the subsequent breakup of the remaining backward bag and rim, a longer simulation in a larger domain is required. Such a simulation is beyond the scope of this paper, since the purpose of the present test is to demonstrate the capability of the present methods. \begin{figure}[htp] \begin{center} \includegraphics [width=0.8\columnwidth]{HD_Fig_15_droplet_annotated} \end{center} \caption{Morphological evolution of the vaporizing drop from $t^*=0$ to $2$ at time increments of $\Delta t^*=0.1$.} \label{fig:acetone_drop} \end{figure} \begin{figure}[htp] \begin{center} \includegraphics [width=0.99\columnwidth]{HD_Fig_16_u_T_droplet_annotated} \end{center} \caption{Snapshots of the drop surface, the temperature ($T$) on the central $x$-$y$ plane, and the velocity magnitude ($\| \mathbf{u} \|$) on the central $y$-$z$ plane from $t^*=0$ to $2$ at time increments of $\Delta t^*=0.4$. } \label{fig:acetone_drop_u_T} \end{figure} The vaporization here is driven by the superheated free stream, and the rate of vaporization depends on the vapor temperature gradient near the interface. It can be observed from Fig.~\ref{fig:acetone_drop_u_T} that the temperature gradient magnitude is higher on the windward side of the drop, where stronger vaporization occurs. The vapor generated is at a lower temperature ($T_\text{sat}$) compared to the free-stream temperature ($T_\infty$). The low-temperature vapor is entrained in the wake and mixed with the free-stream hot vapor. As a result, the temperature gradient on the leeward side of the drop is lower and less vapor is produced, see Fig.~\ref{fig:acetone_drop_u_T}(c)-(d). As the drop deforms, the drop surface area increases. Correspondingly, the drop vaporization rate (the rate of loss of drop volume) also increases. The time evolution of the drop volume is shown in Fig.~\ref{fig:fig_vol}(a). The empirical relation for a non-deformable spherical drop (Eq.~\eqref{eq:empirical}) is also plotted for comparison.
When the drop is perfectly spherical, the volume decreases at a constant rate. For $t^*=0$ to $0.4$, the drop remains approximately spherical, see Fig.~\ref{fig:acetone_drop}. As a result, the drop volume decreases linearly in time, and the simulation results agree well with the empirical correlation, similar to the low-We drop considered previously in section~\ref{section:droplet}. As time evolves and the drop deformation becomes more significant, the vaporization rate increases and the drop volume decreases significantly faster, due to the increase of drop surface area, in particular the area of the windward surface. To better illustrate the relation between the drop vaporization rate and the surface area, Fig.~\ref{fig:fig_vol}(b) shows the temporal evolution of the normalized vaporization rate $(dV_l/dt)^*=(dV_l/dt)/(dV_l/dt)_0$ and the normalized projected frontal area of the drop, $A_p^*=A_p/A_{p,0}$. The results clearly show that the increase in frontal area is closely related to the increasing rate of volume loss. The inflation of the forward ring bag starts at approximately $t^*=1.6$, after which the increase of surface area seems to contribute little to the enhancement of vaporization, and $(dV_l/dt)^*$ reaches a plateau at about 15. This is probably because the strong convection of the gas flow in the forward ring bag reduces the temperature gradient magnitude near the interface, see Fig.~\ref{fig:acetone_drop_u_T}(e). After the forward ring bag breaks, the rapid vaporization of the small drops generated dominates the loss of the total liquid volume over time. Yet it should be noted that, even though a high mesh resolution has been used in the present simulation, some of the tiny children droplets generated are still not well resolved. The temperature gradient and vaporization rate for those small drops may be underestimated.
Since the Weber number for these tiny drops is very small, they typically exhibit a spherical shape with very mild deformation. Therefore, it would be more computationally efficient to model the vaporization of these tiny drops through the Lagrangian point-particle approach, instead of resolving them with the VOF method and a finer mesh, see for example the multiscale VOF-LPP models in previous studies of drop dynamics \cite{herrmann_parallel_2010, ling_multiscale_2015, zuzio_improved_2018}. Nevertheless, such a model that can incorporate heat transfer and vaporization remains to be established. \begin{figure}[htp] \begin{center} \includegraphics [width=1\columnwidth]{HD_Fig_17_3d_drop_vol} \end{center} \caption{Temporal evolution of (a) the drop volume ($V_l^*=V_l/V_{l,0} $) and (b) the rate of vaporization normalized by the empirical vaporization rate ($[dV_l/dt]^*$) and the projected area of the droplet ($A_p^*=A_p/A_{p,0}$). The empirical correlation (Eq. \eqref{eq:empirical}) is also shown in (a) for comparison. The vertical dotted line indicates the onset of breakup of the forward ring bag ($t^*\approx1.7$). } \label{fig:fig_vol} \end{figure} \section{Conclusions} \label{conclusions} A consistent volume-of-fluid method has been developed in the present study for the direct numerical simulation of interfacial multiphase flows with phase change. The geometric VOF method is based on Piecewise Linear Interface Calculation (PLIC) reconstruction, and the advection of momentum and energy is consistent with the VOF advection. A novel method is proposed to distribute the volumetric source, induced by vaporization at the interfacial cells, to the neighboring pure gas cells. With this simple but elegant treatment, the velocity at the interfacial cells is not influenced by the Stefan vapor flow and can be used to advect the interface directly.
The additional shifting of the interface toward the liquid side due to vaporization is handled geometrically, using the interface normal obtained in interface reconstruction. The present method does not require an additional pressure equation and projection as needed in previous studies. Furthermore, the Stefan flows near the interface do not exhibit the artificial numerical oscillations observed in previous studies, owing to the way the vaporization rate is estimated. The present methods have been implemented in the open-source solver \emph{Basilisk} using the octree/quad-tree mesh. The developed simulation framework has been tested against various benchmark cases. For all cases considered, the simulation results converge as the mesh is refined. The simulation results agree very well with the exact solutions for the 1D Stefan and sucking problems and the growth of a 3D spherical bubble in superheated liquid. The code has also been used to simulate a 2D film boiling problem, and the simulation results agree well with previous numerical results obtained using a uniform mesh. To further test the capability of the present methods in resolving a freely moving vaporizing drop, 2D axisymmetric simulations have been conducted to study the vaporization of a water drop in a high-temperature superheated vapor stream. The Weber number is 1.5 and thus the drop deformation is mild. A parametric study is carried out by varying the Reynolds number (Re) from 20 to 200. The simulation results agree very well with the empirical correlation for a spherical drop for all Re considered. The test results also confirm that the Stefan flow is important to the heat transfer between the drop and the surrounding vapor, and thus to the vaporization rate. The drop volume loss due to vaporization will be overestimated if the Stefan flow is ignored. Finally, a fully 3D simulation was performed for the aerodynamic breakup of an acetone drop in a high-speed and high-temperature vapor stream.
The free-stream properties are similar to the post-shock conditions for a planar shock with Mach number 1.6, though the compressibility effect is not considered. With the help of adaptive mesh refinement, we were able to accurately resolve the drop surface and the temperature field near the interface. The minimum cell size in the octree mesh is equivalent to 256 cells across the initial drop diameter. Due to the high Weber number ($\text{We}=62.7$), the drop breakup is in the multi-mode regime, where the drop deforms into a Sombrero-hat shape, consisting of a backward bag at the center and a forward ring bag near the periphery. Eventually, the forward ring bag breaks, forming numerous small children droplets. The vaporization rate of the drop initially agrees with the empirical correlation, which is expected, since the drop is still approximately spherical. As the drop deformation becomes significant, the increase of frontal surface area results in a significantly increased rate of vaporization. The decrease of drop volume in time becomes nonlinear and is much faster than the empirical correlation for a spherical drop. The present results indicate that the effects of We and drop deformation are important to drop vaporization when the drop Weber number is finite; a systematic parametric numerical investigation using the present simulation framework to fully characterize the effects of important parameters such as We and Re is left to future work.
\section{Introduction} \label{sec:intro} \vspace{-5pt} Millimeter-wave (mmWave) communications are widely adopted in fifth-generation multi-input-multi-output (MIMO) systems due to the massive available bandwidth and tiny wavelength \cite{Rappaport}. While the path loss they incur is a core challenge, it can be mitigated by beamforming \cite{Hur}. Due to the resulting limited number of paths, mmWave MIMO signaling is of interest for localization \cite{Hua,Saloranta,Shahmansoori}. In \cite{Saloranta,Shahmansoori}, localization based on classic compressed sensing is pursued by exploiting multipath sparsity. However, the performance is limited by quantization error and grid resolution. In \cite{Shahmansoori}, a space-alternating generalized expectation maximization (SAGE) algorithm is proposed to refine the channel estimates for localization, initialized with the channel parameters that are coarsely estimated via a modified distributed compressed sensing simultaneous orthogonal matching pursuit (DCS-SOMP) scheme \cite{Duarte}. However, this method suffers from local minima when the signal-to-noise ratio (SNR) is low or when the initialization is not sufficiently accurate. On the other hand, atomic norm minimization \cite{chi2020harnessing,candes2014towards,TangG} (ANM, \emph{a.k.a.} total variation minimization) has emerged as a convex optimization framework for estimating continuous-valued parameters without relying on discretization. ANM is robust to noise \cite{bhaskar2013atomic,da2020stable}. It has previously been employed for the purpose of localization \cite{Wu1,Wu2,TangW}, but without multipath considerations. In \cite{Tsai}, ANM is used for channel estimation, but the result cannot be used for localization, as the time-of-arrival is not considered in the model. In this paper, we design an ANM based approach for high-accuracy localization and orientation estimation using mmWave MIMO orthogonal frequency-division multiplexing (OFDM) signaling.
The main contributions of this paper are:\vspace{-6pt} \begin{enumerate}[1)] \item A novel {\it virtual channel matrix} is designed for mmWave MIMO OFDM multi-path channels; \vspace{-6pt} \item A multi-dimensional atomic norm based channel estimator is proposed for positioning purposes, where the structure of the proposed virtual channel matrix is explicitly exploited to simultaneously estimate the TOAs, angles of arrival (AOAs), and angles of departure (AODs) with super-resolution; \vspace{-6pt} \item To accurately recover the location and orientation, a weighted least squares scheme is proposed based on the extended invariance principle (EXIP) \cite{Stoica}, where the designed weight matrix is compatible with the ANM channel estimator;\vspace{-6pt} \item Numerical comparisons to the DCS-SOMP based method \cite{Shahmansoori} show that the proposed scheme offers more than a $7$~dB gain with respect to the root-mean-square error (RMSE) of estimation when a small number of antennas is employed. Furthermore, the proposed method nearly achieves the Cram\'{e}r-Rao lower bound (CRLB) \cite{Shahmansoori} in many cases. \end{enumerate} \vspace{-20pt} \section{Signal Model}\label{sec:signal} \vspace{-7pt} We adopt the narrowband channel model of \cite{Shahmansoori}, where a single base station (BS) is equipped with $N_t$ antennas and a target has $N_r$ antennas. The locations of the BS and the target are denoted by $\boldsymbol{q}=\left[q_{x}, q_{y}\right]^{\mathrm{T}} \in \mathbb{R}^{2}$ and $\boldsymbol{p}=\left[p_{x}, p_{y}\right]^{\mathrm{T}} \in \mathbb{R}^{2}$, respectively, where $\boldsymbol{q}$ is known while $\boldsymbol{p}$ is to be estimated. In addition, there is an unknown orientation of the target's antenna array, denoted by $\theta_o$. Assume that one line-of-sight (LOS) path and $K$ non-line-of-sight (NLOS) paths exist in the mmWave MIMO OFDM channel and that the total number of paths is known.
The $k$-th NLOS path is produced by a scatterer at an unknown location $\boldsymbol{s}_k=\left[s_{k,x}, s_{k,y}\right]^{\mathrm{T}} \in \mathbb{R}^{2}$. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{systemmodel} \vspace{-10pt} \caption{~{System model.}\vspace{-15pt}} \label{systemmodel} \end{figure} Denoting by $N$ the number of sub-carriers, we transmit $G$ OFDM pilot signals with carrier frequency $f_c$ and bandwidth $B\ll f_c$. Given the $g$-th pilot signal over the $n$-th sub-carrier $\boldsymbol{s}^{(g,n)}$ \footnote{The pilot signals are assumed to be known at the receiver and $\boldsymbol{s}^{(g,n)}\in\mathbb{C}^{N_t}$ is a general expression that permits the incorporation of the beamforming matrix, the design of which is beyond the scope of this paper.}, the $g$-th received signal over the $n$-th sub-carrier is given by \vspace{-5pt} \begin{equation} \boldsymbol{y}^{(g,n)}=\boldsymbol{H}^{(n)} \boldsymbol{s}^{(g,n)}+\boldsymbol{w}^{(g,n)},\label{rsignal}\vspace{-5pt} \end{equation} where $\boldsymbol{w}^{(g,n)} \sim \mathcal{CN}(\bm{0},\sigma^2 \bm{I}_{N_r})$ is an independent, zero-mean, complex Gaussian vector with covariance $\sigma^2\bm{I}_{N_r}$. We denote by $\bm{a}_N(f) \in \mathbb{C}^N$ the Fourier vector \( \bm{a}_N(f) \triangleq \frac{1}{\sqrt{N}} \left[1, e^{-j 2\pi f}, \dots, e^{-j 2\pi (N-1)f}\right]^{\mathrm{T}}.
\) The $n$-th sub-carrier channel matrix $\boldsymbol{H}^{(n)}$ with $0\leq n\leq N-1$ is then given by\vspace{-4pt} \begin{equation} \boldsymbol{H}^{(n)}\triangleq\sum_{k=0}^{K}\gamma_k e^{\frac{-j 2\pi n\tau_k}{N T_{s}}}\boldsymbol{\alpha}(\theta_{\mathrm{Rx},k})\boldsymbol{ \beta}\left(\theta_{\mathrm{Tx},k}\right)^{\mathrm{H}}, \label{channelmatrix_subcarrier}\vspace{-4pt} \end{equation} where $T_s\triangleq\frac{1}{B}$ is the sampling period, $\gamma_k\triangleq\sqrt{N_tN_r}\frac{h_{k}}{\sqrt{\rho_{k}}}$ is the channel coefficient of the $k$-th path, while $\rho_k$ and $h_k$ represent the path loss and the complex channel gain, respectively. The operator $[\cdot]^{\mathrm{H}}$ is the Hermitian transpose, and the steering vectors of the system, {\it i.e.,} $\boldsymbol{\alpha}(\theta_{\mathrm{Rx}})$ and $\boldsymbol{\beta}(\theta_{\mathrm{Tx}})$, are defined as $\boldsymbol{\alpha}(\theta_{\mathrm{Rx}})\triangleq \bm{a}_{N_r}\left(\frac{d \sin(\theta_{\mathrm{Rx}})}{\lambda_c} \right)$, $\boldsymbol{\beta}(\theta_{\mathrm{Tx}}) \triangleq \bm{a}_{N_t}\left(\frac{d \sin(\theta_{\mathrm{Tx}})}{\lambda_c} \right)$, where $d$ is the distance between antennas and $\lambda_c\triangleq \frac{c}{f_c}$ is the wavelength with $c$ being the speed of light. From the geometry shown in Fig.
\ref{systemmodel}, the TOA, AOA, and AOD of each path, {\it i.e.,} $\tau_{k}$, $\theta_{\mathrm{Rx}, k}$, and $\theta_{\mathrm{Tx}, k}$, with $0\leq k\leq K$, are\vspace{-5pt} \begin{subequations}\label{eq:geometricMapping} \begin{align} \tau_{0} &=\frac{\|\boldsymbol{p}-\boldsymbol{q}\|_{2}} { c},\label{tau0}\vspace{-4pt} \\ \tau_{k} &=\frac{\left\|\boldsymbol{q}-\boldsymbol{s}_{k}\right\|_{2} +\left\|\boldsymbol{p}-\boldsymbol{s}_{k}\right\|_{2}} {c}, \quad k>0, \vspace{-4pt}\\ \theta_{\mathrm{Tx}, 0} &=\arctan \left(\frac{p_{y}-q_{y} }{p_{x}-q_{x}}\right), \vspace{-3pt}\\ \theta_{\mathrm{Tx}, k} &=\arctan \left(\frac{s_{k,y}-q_{y}} {s_{k,x}-q_{x}}\right), \quad k>0,\vspace{-3pt}\\ \theta_{\mathrm{Rx}, 0} &=\pi+\arctan \left(\frac{p_{y}-q_{y}} {p_{x}-q_{x}}\right)-\theta_o,\vspace{-3pt}\\ \theta_{\mathrm{Rx}, k} &=\pi+\arctan \left(\frac{p_{y}-s_{k,y}}{p_{x}-s_{k,x}}\right)-\theta_o, \quad k>0, \label{thetarxk}\vspace{-3pt} \end{align} \end{subequations} where $k=0$ corresponds to the LOS path. By stacking the received signals given in (\ref{rsignal}), we have\vspace{-3pt} \begin{equation} \boldsymbol{Y} = \boldsymbol{H}\boldsymbol{S} + \boldsymbol{W},\vspace{-3pt}\label{signalmodelstacked} \vspace{-3pt} \end{equation} where $\boldsymbol{Y} \triangleq\left[\left(\boldsymbol{Y}^{(0)}\right)^{\mathrm{T}}, \left(\boldsymbol{Y}^{(1)}\right)^{\mathrm{T}}, \ldots, \left(\boldsymbol{Y}^{(N-1)}\right)^{\mathrm{T}}\right]^{\mathrm{T}}$, $\boldsymbol{H}\triangleq\operatorname{diag}\left\{\boldsymbol{H}^{(n)}\right\}$, $\boldsymbol{S} \triangleq\left[\left(\boldsymbol{S}^{(0)}\right)^{\mathrm{T}}, \left(\boldsymbol{S}^{(1)}\right)^{\mathrm{T}}, \ldots, \left(\boldsymbol{S}^{(N-1)}\right)^{\mathrm{T}}\right]^{\mathrm{T}}$, and $\boldsymbol{W} \triangleq\left[\left(\boldsymbol{W}^{(0)}\right)^{\mathrm{T}}, \left(\boldsymbol{W}^{(1)}\right)^{\mathrm{T}}, \ldots, \left(\boldsymbol{W}^{(N-1)}\right)^{\mathrm{T}}\right]^{\mathrm{T}}$.
Here, $\boldsymbol{Y}^{(n)} \triangleq \left[\boldsymbol{y}^{(1,n)}, \boldsymbol{y}^{(2,n)}, \ldots, \boldsymbol{y}^{(G,n)}\right]$, $\boldsymbol{S}^{(n)} \triangleq\left[ \boldsymbol{s}^{(1,n)}, \boldsymbol{s}^{(2,n)}, \right.$ $\left.\ldots, \boldsymbol{s}^{(G,n)}\right]$, and $\boldsymbol{W}^{(n)} \triangleq\left[ \boldsymbol{w}^{(1,n)}, \boldsymbol{w}^{(2,n)}, \ldots, \boldsymbol{w}^{(G,n)}\right]$. Furthermore, it is assumed that the receiver knows the transmitted symbols $\boldsymbol{S}$ and aims to estimate its own orientation $\theta_o$ and position $\bm{p}$. \vspace{-10pt} \section{Structure of mmWave MIMO OFDM narrowband channels}\label{sec:structure} In this section, we present the structure of mmWave MIMO OFDM narrowband channel matrices. Without loss of generality, $N$ is assumed to be an odd integer, and we formulate a novel virtual channel matrix $\bm{H}_v$ that jointly exploits the received signals from all sub-carriers for localization as \vspace{-3pt} \begin{equation} \begin{aligned} &\boldsymbol{H}_v\triangleq\sum_{k=0}^{K}l_k \left({\boldsymbol{\xi}}(\tau_k)\otimes\boldsymbol{\alpha}(\theta_{\mathrm{Rx},k})\right)\left({\boldsymbol{\xi}}(-\tau_k)\otimes\boldsymbol{ \beta}\left(\theta_{\mathrm{Tx},k}\right)\right)^{\mathrm{H}}, \end{aligned}\label{virtualchannel}\vspace{-3pt} \end{equation} where the operator $\otimes$ represents the Kronecker product, \mbox{$l_k \triangleq \frac{(N+1)\sqrt{N_tN_r}h_{k}}{2\sqrt{\rho_{k}}}$} and ${\boldsymbol{\xi}}(\tau)\triangleq \sqrt{\frac{2}{N+1}} \bm{a}_{\frac{N+1}{2}}(\frac{\tau}{N T_s})$.
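As a quick numerical illustration of the definition in \eqref{virtualchannel}, the following Python sketch (with small, arbitrary dimensions and randomly drawn path parameters, chosen purely for demonstration) assembles $\bm{H}_v$ and checks that its $N_r\times N_t$ blocks depend only on the index sum $i+j$ and that its rank equals $K+1$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Nr, Nt, K = 5, 4, 4, 1      # illustrative sizes only: N odd, K + 1 = 2 paths
Ts = 1.0                       # normalized sampling period
M = (N + 1) // 2               # number of blocks per side

def a(n, f):
    """Fourier vector a_n(f)."""
    return np.exp(-2j * np.pi * f * np.arange(n)) / np.sqrt(n)

def xi(tau):
    """xi(tau) as defined above."""
    return np.sqrt(2 / (N + 1)) * a(M, tau / (N * Ts))

# Random path parameters: spatial frequencies d*sin(theta)/lambda_c and delays
f_rx = rng.uniform(-0.5, 0.5, K + 1)
f_tx = rng.uniform(-0.5, 0.5, K + 1)
tau  = rng.uniform(0.0, N * Ts, K + 1)
l    = rng.standard_normal(K + 1) + 1j * rng.standard_normal(K + 1)

# H_v as a sum of K + 1 rank-one, Kronecker-structured terms
Hv = sum(l[k] * np.outer(np.kron(xi(tau[k]),  a(Nr, f_rx[k])),
                         np.kron(xi(-tau[k]), a(Nt, f_tx[k])).conj())
         for k in range(K + 1))

def block(i, j):
    """(i, j)-th Nr x Nt block of H_v (1-indexed)."""
    return Hv[(i - 1) * Nr:i * Nr, (j - 1) * Nt:j * Nt]

# Blocks depend only on i + j (block Hankel), and rank(H_v) = K + 1
print(np.allclose(block(1, 2), block(2, 1)),
      np.allclose(block(1, 3), block(3, 1)),
      np.linalg.matrix_rank(Hv))    # -> True True 2
```

The block structure arises because the delay factor contributed by the $(i,j)$-th block is $e^{-j2\pi(i+j-2)\tau_k/(NT_s)}$, a function of $i+j$ only.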
We make some key observations regarding $\boldsymbol{H}_v$ in \eqref{virtualchannel}:\vspace{-3pt} \begin{itemize} \item[O1)] ${\boldsymbol{H}}_v$ is a low-rank matrix provided that $K+1\ll \min(N_r, N_t)< \min\left(\frac{(N+1)N_r}{2}, \frac{(N+1)N_t}{2}\right)$; \vspace{-6pt} \item[O2)] ${\boldsymbol{H}}_v$ has the same rank as $\bm{H}^{(n)}$, for any $n$; \vspace{-6pt} \item[O3)] ${\boldsymbol{H}}_v$ is a block Hankel matrix, {\it i.e.}, the ($i,j$)-th $N_r \times N_t$ block matrix ${\boldsymbol{H}}_v^{(i,j)}$ of ${\boldsymbol{H}}_v$ verifies ${\boldsymbol{H}}_v^{(i,j)}={\boldsymbol{H}}_v^{(k,z)}$ if $i+j=k+z$ for any $1\leq i,j,k,z\leq \frac{N+1}{2}$;\vspace{-6pt} \item[O4)] ${\boldsymbol{H}}_v^{(i,j)}={\boldsymbol{H}}^{(i+j-2)}$ holds for any $1\leq i,j\leq \frac{N+1}{2}$, defining a bijective map $g$ between ${\boldsymbol{H}}_v$ and $\boldsymbol{H}$ with $g(\boldsymbol{H}_v)=\boldsymbol{H}$.\vspace{-3pt} \end{itemize} \vspace{-10pt} \section{Atomic norm based localization with orientation estimation}\label{sec:method} Based on the structural properties of the mmWave MIMO OFDM narrowband channel (O1-O4) discussed in Section \ref{sec:structure}, the low-rank property of each sub-carrier channel matrix can be ensured by exploiting the structure of the introduced virtual channel matrix. In this section, we first propose a tractable optimization problem to estimate the virtual channel matrix as well as the individual channel parameters based on ANM.
Then, given the mappings (\ref{tau0})-(\ref{thetarxk}) between $\boldsymbol{\eta}\triangleq\{\tau_k,\theta_{\mathrm{Tx},k},\theta_{\mathrm{Rx},k}\}_{k \in \{0,1,\ldots, K\}}$ and $\tilde{\boldsymbol{\eta}}\triangleq\{\boldsymbol p,\theta_o, \{\boldsymbol s_k\}_{k \in \{1,\ldots, K\}}\}$, we estimate the orientation and location of the target via a weighted least squares problem based on the EXIP \cite{Stoica}, which is compatible with our proposed ANM based channel estimator.\vspace{-3pt} \subsection{Channel estimation}\label{sec:channelestimation} From O1, $\boldsymbol{H}_v$ is a low-rank matrix. We harness the low-rank structure of $\boldsymbol{H}_v$ by defining the atomic set $\bm{\mathcal{A}}$ as \vspace{-5pt} \begin{multline}\label{eq:atomicSet} \boldsymbol{\mathcal{A}}\triangleq \big\{ \boldsymbol{A}\left(\tau,\theta_{\mathrm{Rx}}, \theta_{\mathrm{Tx}}\right) \triangleq\boldsymbol{\chi}(\tau,\theta_{\mathrm{Rx}})\boldsymbol{\zeta}(\tau,\theta_{\mathrm{Tx}})^{\mathrm{H}}|\\ \frac{d \sin \left(\theta_{\mathrm{Rx}}\right)}{\lambda_{c}}, {\frac{d \sin \left(\theta_{\mathrm{Tx}}\right)}{\lambda_{c}} }\in(-\frac{1}{2},\frac{1}{2}],\ \frac{\tau}{NT_s} \in(0,1]\big\},\vspace{-3pt} \end{multline} where ${{\boldsymbol{\chi}}}(\tau,\theta)\triangleq{\boldsymbol{\xi}}(\tau)\otimes\boldsymbol{ \alpha}\left(\theta\right)$ and ${{\boldsymbol{\zeta}}}(\tau,\theta)\triangleq{\boldsymbol{\xi}}(-\tau)\otimes\boldsymbol{ \beta}\left(\theta\right).$ Proposition \ref{prop:equivalenceOfTheAtomicNorm} states the conditions under which the atomic norm $\left\Vert \cdot \right\Vert_{\mathcal{A}}$ induced by the atomic set \eqref{eq:atomicSet}, defined as \vspace{-3pt}$${\|\boldsymbol{H}_v \|}_{\mathcal{A}}\triangleq\inf\left\{\sum_{k}\left|{\tilde{l}_{k}}\right| \mid \boldsymbol{H}_v=\sum_{k} \tilde{l}_{k} \boldsymbol{A}\left(\tau_k,\theta_{\mathrm{Rx},k}, \theta_{\mathrm{Tx},k}\right)\right\},\vspace{-3pt}$$ can be calculated by solving
a semidefinite program (SDP). \begin{proposition}\label{prop:equivalenceOfTheAtomicNorm} Given the two conditions\vspace{-5pt} \begin{itemize} \item [C1)] $N_r, N_t\geq257$ and $N\geq513$\footnote{The condition C1 is a technical requirement and is not necessary in practice \cite{Yang1,Tsai}.}, \vspace{-6pt} \item [C2)] $\Delta_{\min}\left(\frac{d \sin \left(\theta_{\mathrm{Rx}}\right)}{\lambda_c}\right)\geq\frac{1}{\lfloor\frac{N_r-1}{4}\rfloor}$, $\Delta_{\min}\left(\frac{d \sin \left(\theta_{\mathrm{Tx}}\right)}{\lambda_c}\right)\geq\frac{1}{\lfloor\frac{N_t-1}{4}\rfloor}$, and $\Delta_{\min}(\frac{\tau}{NT_s})\geq\frac{1}{\lfloor\frac{N-1}{8}\rfloor}$, where $\Delta_{\min}(\kappa)\triangleq\min_{i\neq j}\min(|\kappa_i-\kappa_j|,1-|\kappa_i-\kappa_j|)$\footnote{The condition C2 indicates that the bandwidth and the numbers of transmit and receive antennas need to be large enough to ensure that the TOAs, AODs, and AOAs of the $K+1$ paths, respectively, are sufficiently separated. Such conditions are necessary for the tightness of the ANM \cite{da2018tight}.},\vspace{-3pt} \end{itemize} the atomic norm ${\|\boldsymbol{H}_v \|}_{\mathcal{A}}$ is equivalently given by\vspace{-3pt} \begin{equation} \begin{aligned} &\inf_{\boldsymbol{V},\boldsymbol{U}}\quad \frac{1}{2}\operatorname{Tr}\left(\boldsymbol{J}\right) \\ &\quad\text { s.t. }\boldsymbol{J}\triangleq\left[\begin{array}{cc} \mathcal{T}_2({{\boldsymbol{U}}}) & {\boldsymbol{H}_v} \\ \boldsymbol{H}_v^{\mathrm{H}} & \mathcal{T}_2({{\boldsymbol{V}}}) \end{array}\right] \succeq \mathbf{0},\\ &\quad\quad\ \ \boldsymbol{H}_v^{(i,j)}=\boldsymbol{H}_v^{(k,z)},\textrm{ if } i+j=k+z, \forall i, j, k, z, \end{aligned}\label{atomicnorm}\vspace{-3pt} \end{equation} where $\operatorname{Tr}(\cdot)$ denotes the trace of a matrix; $\mathcal{T}_2(\cdot)$ is a 2-level Toeplitz matrix constructed based on a matrix and its definition can be found in \cite{Yang2}.
\label{equivalence} \end{proposition} \begin{proof} Let $\|{\boldsymbol{H}_v}\|$ represent the optimal objective value in (\ref{atomicnorm}) and define $\operatorname{SDP}(\boldsymbol{H}_v)$ according to \cite[Eq. (35)]{Tsai} as\vspace{-5pt} \begin{equation} \begin{aligned} &\operatorname{SDP}(\boldsymbol{H}_v)\triangleq\inf_{\boldsymbol{V},\boldsymbol{U}}\quad \frac{1}{2}\operatorname{Tr}\left(\boldsymbol{J}\right) \\ &\quad\quad\quad\quad\quad\quad\ \text { s.t. }\boldsymbol{J}\triangleq\left[\begin{array}{cc} \mathcal{T}_2({{\boldsymbol{U}}}) & {\boldsymbol{H}_v} \\ \boldsymbol{H}_v^{\mathrm{H}} & \mathcal{T}_2({{\boldsymbol{V}}}) \end{array}\right] \succeq \mathbf{0}. \end{aligned}\label{SDP}\vspace{-3pt} \end{equation} The inequality $\|\boldsymbol{H}_v\| \geq \operatorname{SDP}(\boldsymbol{H}_v)$ holds directly from the definitions, since (\ref{atomicnorm}) imposes the additional block Hankel constraint. It can be shown from \cite[Lemma 1]{Tsai} that $\|\boldsymbol{H}_v\|\leq {\|\boldsymbol{H}_v\|}_{\mathcal{A}}$. Furthermore, from \cite[Lemma 2]{Tsai}, the equality $\operatorname{SDP}(\boldsymbol{H}_v)={\|\boldsymbol{H}_v\|}_{\mathcal{A}}$ holds when conditions C1 and C2 are satisfied. Therefore, we conclude that $\|\boldsymbol{H}_v\|={\|\boldsymbol{H}_v\|}_{\mathcal{A}}$ given conditions C1 and C2. \end{proof} \begin{remark} A Toeplitz-Hankel formulation is proposed in \cite{Cho} for the recovery of one-dimensional signals, which is proved to be equivalent to the atomic norm when the Hankel matrix therein admits a Vandermonde decomposition. Though the formulation in \cite{Cho} might be extended to the multi-dimensional case, the extended formulation would still not fit our signal model, since $\boldsymbol{H}_v$ is not a Hankel matrix.
\end{remark} From the observations (O1-O4) and Proposition \ref{equivalence}, the atomic norm denoiser of the virtual matrix $\bm{H}_v$ is formulated as \vspace{-3pt} \begin{align} (\boldsymbol{\hat{{{U}}}}, \boldsymbol{\hat{{{ V}}}},\boldsymbol{\hat{{H}}}_v)={}&\mathop{\arg\min}_{\boldsymbol{V},\boldsymbol{U},\boldsymbol{H}_v}\quad \frac{\epsilon}{2}\operatorname{Tr}\left(\boldsymbol{J}\right)+\frac{1}{2}{\|\boldsymbol{Y}-g(\boldsymbol{H}_v)\boldsymbol{S}\|}^2_\mathrm{F} \nonumber \\ \text { s.t. }& \boldsymbol{J}\triangleq\left[\begin{array}{cc} \mathcal{T}_2({{\boldsymbol{U}}}) & {\boldsymbol{H}_v} \\ \boldsymbol{H}_v^{\mathrm{H}} & \mathcal{T}_2({{\boldsymbol{V}}}) \end{array}\right] \succeq \mathbf{0}, \nonumber\\ {}& \boldsymbol{H}_v^{(i,j)}=\boldsymbol{H}_v^{(k,z)},\text{ if } i+j=k+z, \label{optprob}\vspace{-3pt} \end{align} where $\epsilon\varpropto\sigma\sqrt{(\frac{N+1}{2})^2N_rN_t\log((\frac{N+1}{2})^2N_rN_t)}$ (see \cite{Tsai}) is a regularization parameter and ${\|\cdot\|}_\mathrm{F}$ is the Frobenius norm. Furthermore, it is possible to estimate the TOAs, AODs, and AOAs ($\hat\tau_k$, $\hat\theta_{\mathrm{Tx},k}$, and $\hat\theta_{\mathrm{Rx},k}$) from the Vandermonde decomposition of the solution $(\boldsymbol{\hat{{{U}}}}, \boldsymbol{\hat{{{ V}}}},\boldsymbol{\hat{{H}}}_v)$ of (\ref{optprob}). The estimated channel parameters corresponding to the same path are paired via the matrix pencil and pairing algorithm~\cite{Yang2}. Note that, distinct from \cite{Beygi,Elnakeeb,Li}, a {\em multi-dimensional} ANM based estimator is proposed in this section and all the location-relevant parameters can be simultaneously recovered with super-resolution by harnessing the structure of $\bm{H}_v$. \vspace{-10pt} \subsection{Localization and orientation estimation} \vspace{-5pt} We assume that the LOS path is that with the smallest TOA.
Though the estimated location and orientation can be directly computed based on the geometry of the LOS path, more accurate estimates can be achieved by exploiting the geometry of the NLOS paths \cite{Shahmansoori}. Once the parameter vector $\boldsymbol{\eta}$, which parametrizes (\ref{optprob}) given the channel coefficients $\{l_k\}_{k \in \{0,1,\ldots, K\}}$, is estimated through the procedure presented in Section \ref{sec:channelestimation}, the final step consists of recovering the location and orientation from \eqref{eq:geometricMapping}. Since we make no assumptions on the path loss model in the signal model, knowledge of the channel coefficients does not improve the accuracy of the localization and orientation estimation. In addition, $\{\boldsymbol p,\theta_o, \{\boldsymbol{s}_k\}_{k \in \{1,\ldots, K\}}, \{l_k\}_{k \in \{0,1,\ldots, K\}}\}$ can be used to re-parametrize the optimization problem in (\ref{optprob}). Therefore, we fix the estimated channel coefficients\footnote{We can substitute the estimated $\boldsymbol{\eta}$ into (\ref{signalmodelstacked}) to obtain a system of linear equations from which the channel coefficients are estimated \cite{Li}.} and propose a weighted least squares problem to achieve accurate localization and orientation estimation, with the estimates of all the paths, {\it i.e.,} $\hat{\boldsymbol\eta}$, exploited,\vspace{-5pt} \begin{equation} {\hat{\tilde{\boldsymbol\eta}}} = \arg\min_{\tilde{\boldsymbol\eta}} \left(\hat{\boldsymbol\eta}-f({\tilde{\boldsymbol\eta}})\right)^{\mathrm{T}}\boldsymbol{\mathcal{D}}\left(\hat{\boldsymbol\eta}-f({\tilde{\boldsymbol\eta}})\right), \label{exip_etatilde}\vspace{-5pt} \end{equation} where the mapping $f(\cdot)$ is defined according to the geometry, as described in (\ref{tau0})-(\ref{thetarxk}), with $f({\tilde{\boldsymbol\eta}})={\boldsymbol\eta}$.
Inspired by the EXIP \cite{Stoica,Shahmansoori}, we denote by $L(\boldsymbol{\eta})$ the objective function in (\ref{optprob}) and use its Hessian matrix as the weight matrix in (\ref{exip_etatilde}), {\it i.e.,} \vspace{-6pt} \begin{equation} \begin{aligned} \boldsymbol{\mathcal{D}} \triangleq &\left[\begin{array}{llll} \frac{\partial^2{L}(\hat{\boldsymbol\eta})}{\partial{\tau_0}\partial{\tau_0}} &\frac{\partial^2{L}(\hat{\boldsymbol\eta})}{\partial{\tau_0}\partial{\theta_{\mathrm{Tx},0}}} & \cdots&\frac{\partial^2{L}(\hat{\boldsymbol\eta})}{\partial{\tau_0}\partial{\theta_{\mathrm{Rx},K}}}\\ \frac{\partial^2{L}(\hat{\boldsymbol\eta})}{\partial{\theta_{\mathrm{Tx},0}}\partial{\tau_0}}& \frac{\partial^2{L}(\hat{\boldsymbol\eta})}{\partial{\theta_{\mathrm{Tx},0}}\partial{\theta_{\mathrm{Tx},0}}}& \cdots & \frac{\partial^2{L}(\hat{\boldsymbol\eta})}{\partial{\theta_{\mathrm{Tx},0}}\partial{\theta_{\mathrm{Rx},K}}}\\ & & \vdots & \\ \frac{\partial^2{L}(\hat{\boldsymbol\eta})}{\partial{\theta_{\mathrm{Rx},K}}\partial{\tau_0}}& \frac{\partial^2{L}(\hat{\boldsymbol\eta})}{\partial{\theta_{\mathrm{Rx},K}}\partial{\theta_{\mathrm{Tx},0}}}& \cdots & \frac{\partial^2{L}(\hat{\boldsymbol\eta})}{\partial{\theta_{\mathrm{Rx},K}}\partial{\theta_{\mathrm{Rx},K}}}\\ \end{array}\right], \end{aligned}\vspace{-3pt} \end{equation} which depends on the channel parameters estimated via the proposed ANM based method of Section \ref{sec:channelestimation}. The non-linear least squares problem in (\ref{exip_etatilde}) can be solved via the Levenberg-Marquardt-Fletcher algorithm \cite{Fletcher}.
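A minimal sketch of the weighted least squares step \eqref{exip_etatilde}, restricted to the LOS path and solved with SciPy's Levenberg-Marquardt routine; the diagonal weight below is a stand-in for the Hessian $\boldsymbol{\mathcal{D}}$, and all numerical values are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

c = 3e8                            # speed of light [m/s]
q = np.array([0.0, 0.0])           # known BS position (hypothetical)

def f(eta_t):
    """Geometric map for the LOS path only; eta_t = (p_x, p_y, theta_o)."""
    px, py, to = eta_t
    tau0 = np.hypot(px - q[0], py - q[1]) / c
    ttx0 = np.arctan2(py - q[1], px - q[0])
    trx0 = np.pi + ttx0 - to
    return np.array([tau0, ttx0, trx0])

true    = np.array([20.0, 5.0, 0.2])                 # (p_x, p_y, theta_o)
eta_hat = f(true) + np.array([1e-12, 1e-4, -1e-4])   # perturbed "estimates"

# Square root of a diagonal stand-in for the weight matrix D
W_half = np.diag([1e9, 1e2, 1e2])
res = least_squares(lambda z: W_half @ (eta_hat - f(z)),
                    x0=np.array([18.0, 4.0, 0.0]), method='lm')
print(np.round(res.x, 3))   # approximately (20, 5, 0.2)
```

In the full scheme, the residual stacks all $K+1$ paths and $\tilde{\boldsymbol\eta}$ also contains the scatterer locations.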
The parameters in $\tilde{\boldsymbol{\eta}}$ are initialized with the values $\boldsymbol{\hat p}_{\text{LOS}}$, $\hat{\theta}_{o,\text{LOS}}$, $\{\hat s_{k,y,\text{LOS}}\}_{k \in \{1,\ldots, K\}}$, and $\{\hat s_{k,x,\text{LOS}}\}_{k \in \{1,\ldots, K\}}$, which are derived from the following set of equations, \vspace{-7pt} \begin{subequations}\label{eq:locEstimator} \begin{align} \boldsymbol{\hat p}_{\text{LOS}} &= \boldsymbol{q} + c\hat\tau_0[\cos({\hat\theta}_{\mathrm{Tx},0}),\sin(\hat\theta_{\mathrm{Tx},0})]^{\mathrm{T}},\label{hatp}\vspace{-6pt}\\ \hat{\theta}_{o,\text{LOS}} &= \pi + \hat\theta_{\mathrm{Tx},0} - \hat\theta_{\mathrm{Rx},0},\\ \hat s_{k,y,\text{LOS}} &= \tan(\hat\theta_{\mathrm{Tx},k})(\hat s_{k,x,\text{LOS}}-q_{x})+q_{y}, \label{hatsky}\vspace{-6pt} \end{align} \vspace{-15pt} \begin{equation} \begin{aligned} &\hat s_{k,x,\text{LOS}}= \\ & \frac{\tan(\hat\theta_{\mathrm{Tx},k})q_{x}-\tan(\hat\theta_{\mathrm{Rx},k}+\hat{\theta}_{o,\text{LOS}})\hat p_{\text{LOS},x}+\hat p_{\text{LOS},y}-q_{y}}{\tan(\hat\theta_{\mathrm{Tx},k})-\tan(\hat\theta_{\mathrm{Rx},k}+\hat{\theta}_{o,\text{LOS}})}. \label{hatskx} \end{aligned} \end{equation} \end{subequations} \vspace{-5pt} \section{Numerical Results}\label{sec:sim} \vspace{-5pt} In this section, we evaluate the performance of our proposed scheme. In all of the numerical results, we set $f_c$, $B$, $c$, $N$, $N_r$, $N_t$, $G$, $K$, and $d$ to $60$ GHz, $100$ MHz, $300$ m/$\mu$s, $15$, $16$, $16$, $16$, $2$, and $\frac{\lambda_c}{2}$, respectively. The BS is located at $[0 \text{ m},0 \text{ m}]^{\mathrm{T}}$ while the target is at $[20 \text{ m},5 \text{ m}]^{\mathrm{T}}$ with an orientation $\theta_o=0.2$ rad. The scatterers corresponding to the two NLOS paths are placed at $[7.45 \text{ m}, 8.54 \text{ m}]^{\mathrm{T}}$ and $[19.89\text{ m}, -6.05 \text{ m}]^{\mathrm{T}}$, respectively.
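As a noise-free sanity check of the geometric mapping \eqref{eq:geometricMapping} and the LOS-based initialization \eqref{eq:locEstimator} for this setup (values as above; first scatterer only):

```python
import numpy as np

c  = 3e8                          # 300 m/us
q  = np.array([0.0, 0.0])         # BS
p  = np.array([20.0, 5.0])        # target
s1 = np.array([7.45, 8.54])       # first scatterer
theta_o = 0.2                     # orientation [rad]

# Forward geometric map: noise-free channel parameters
tau0 = np.linalg.norm(p - q) / c
ttx0 = np.arctan2(p[1] - q[1], p[0] - q[0])
trx0 = np.pi + ttx0 - theta_o
ttx1 = np.arctan2(s1[1] - q[1], s1[0] - q[0])
trx1 = np.pi + np.arctan2(p[1] - s1[1], p[0] - s1[0]) - theta_o

# LOS-based initialization
p_hat = q + c * tau0 * np.array([np.cos(ttx0), np.sin(ttx0)])
o_hat = np.pi + ttx0 - trx0
tT, tR = np.tan(ttx1), np.tan(trx1 + o_hat)
s1x = (tT * q[0] - tR * p_hat[0] + p_hat[1] - q[1]) / (tT - tR)
s1y = tT * (s1x - q[0]) + q[1]
print(p_hat, o_hat, s1x, s1y)   # recovers p, theta_o, and s1 exactly
```

With noise-free parameters the inversion is exact; in the simulations below, the estimated channel parameters are noisy, which is why the subsequent weighted least squares refinement improves accuracy.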
The channel coefficients are generated based on the free-space path loss model \cite{Goldsmith} in the simulation, and the pilot signals are set as random complex values uniformly distributed on the unit circle. Note that condition C1 of Proposition \ref{equivalence} is generally not satisfied for the given $N$, $N_r$, and $N_t$ in our experiments; however, we still achieve strong performance. \begin{figure}[htbp] \centering\vspace{-4pt} \includegraphics[scale=0.535]{updatedRMSE}\vspace{-12pt} \caption{{(a) The RMSE of TOA estimation; (b) The RMSE of AOA and AOD estimation; (c) The RMSE of orientation estimation; (d) The RMSE of localization.}\vspace{-10pt}}% \label{numericalresults} \end{figure} The RMSEs of channel parameter estimation using our scheme are shown in Figs. \ref{numericalresults} (a) and (b), where the performance of DCS-SOMP \cite{Duarte,Shahmansoori} and the CRLB \cite{Shahmansoori} are shown for comparison. As observed in Figs. \ref{numericalresults} (a) and (b), our proposed scheme outperforms DCS-SOMP because the grids of the AOAs and AODs are not dense enough for DCS-SOMP ({\it i.e.,} $N_t=N_r=16$); in contrast, the estimation accuracy of our scheme does not rely on grid resolution. Furthermore, the RMSEs of TOA, AOA, and AOD estimation using our scheme are close to or coincide with the corresponding CRLB curves according to Figs. \ref{numericalresults} (a) and (b). Due to the quality of our super-resolution channel estimation, lower RMSEs for localization and orientation estimation are achieved, as seen in Figs. \ref{numericalresults} (c) and (d), versus the DCS-SOMP based method \cite{Shahmansoori}\footnote{To make a fair comparison, the refinement of the estimates of the channel parameters in \cite{Shahmansoori} is not implemented for either scheme. Note that, compared to DCS-SOMP, our scheme could provide more accurate estimates for the initialization of the refinement stage to avoid local optima.}.
In addition, there is only an approximately $2$ dB gap between the RMSE for localization or orientation estimation using our proposed scheme and the corresponding CRLB curves, verifying the efficacy of our design. \vspace{-5pt} \section{Conclusions}\label{sec:con}\vspace{-5pt} In this paper, a multi-dimensional atomic norm based method is proposed for high-accuracy localization and orientation estimation in mmWave MIMO OFDM systems. To effectively estimate all of the location-relevant channel parameters with super-resolution, a novel virtual channel matrix is designed and its structure is fully exploited. Using the estimates of all the paths, a weighted least squares problem is formulated based on the extended invariance principle to accurately recover the location and orientation. The new method offers strong improvements in the RMSE of estimation over prior work \cite{Shahmansoori} (more than a $7$ dB gain). Furthermore, with the proposed method, the RMSEs of channel estimation, localization, and orientation estimation are close to, or coincide with, the corresponding CRLBs. \vfill \pagebreak \renewcommand*{\bibfont}{\small} \printbibliography \end{document}
\section{Changing culture of the practice of science: before the 20th century.} Science before the turn of the 20th century was practiced by individual researchers, mostly motivated by the urge to find some ``transcendental truths'' in natural phenomena. Some of the leading discoveries made in the 16th, 17th, 18th, and 19th centuries are: the overthrow of the geocentric model of the universe and the introduction of the heliocentric model by Nicolaus Copernicus (1543); the law of inertia (1613), observational astronomy, and the scientific method of experimentation by Galileo Galilei; Kepler's three laws of planetary motion (1619); many pioneering discoveries in almost all fields of natural science and mathematics by Isaac Newton (Principia, 1687); the atomic theory in chemistry by John Dalton (1805); and the magnetic effect of electric current by Oersted (1820). Sadi Carnot (1824), Lord Kelvin (1848), Rudolf Clausius (1850), and others laid the foundations of classical thermodynamics; statistical mechanics was developed by Maxwell, Boltzmann, and Gibbs; the investigations of Faraday, Maxwell, and many others led to the theory of electromagnetism (1862); X-rays were discovered by Wilhelm Roentgen (1895); radioactivity by Henri Becquerel (1896); and the electron by J. J. Thomson (1897). However, data show that only a few thousand papers were published in this period\cite{gian}. The majority of the papers published had some impact, and the phenomenon of collaborative research was very rare (most of the time, published papers were single-authored). The majority of professors/investigators used to publish a couple of dozen papers in their entire scientific career. Consider the example of Josiah Gibbs (1839--1903), who published fewer than two dozen papers in his entire career. Gibbs's case is typical of a scientist of that era (in terms of the number of publications). \section{Physics in the 20th century} From 1900 to 1950, some of the main discoveries made are: the energy quantum by M.
Planck (1900); the nuclear atom by Rutherford (1911); superconductivity by Onnes (1911); special and general relativity by Einstein (1905 and 1915, respectively); crystal structures and X-ray diffraction (William and Lawrence Bragg, 1915); quantum mechanics (M. Planck, N. Bohr, A. Einstein, W. Heisenberg, E. Schroedinger, M. Born, P. Dirac, and others); the Raman effect (C. V. Raman); Dirac's theory of electrons and positrons; nuclear fission and fusion; quantum electrodynamics; etc. The latter half of the 19th century and the first half of the 20th century can be called ``the golden age'' of physics. From 1950 to current times, some of the main discoveries made are: nuclear and particle physics; solid state physics; semiconductors and transistors; the electroweak theory and the emergence of the standard model; the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity; parity violation; the quantum Hall effects; renormalization; QCD; BEC realized; the Higgs boson observed; gravitational waves observed; etc. After WWII, there was an outpouring of scientific activity. This was due to technological developments, new instruments of investigation, and the vast applicability of quantum mechanics to atomic, solid state, and nuclear physics. This outpouring is evident from how the scientific journals evolved from the pre-WWII era to the post-WWII era. Take, for example, the Phys. Rev. The Phys. Rev. started in 1893, as a single volume. In 1958, PRL was introduced; and in 1970, PRA, PRB, PRC, and PRD were introduced. Currently, APS is publishing an array of journals: Phys. Rev. A, B, C, D; Phys. Rev. E; Physics; Phys. Rev. X; Phys. Rev. Applied; Phys. Rev. Fluids; Phys. Rev. Materials; Phys. Rev. Research; PRX Quantum; etc. In the year 1910, Phys. Rev. published articles with 1484 printed pages in total, whereas in 2019, the Phys. Rev. B alone (one journal out of the whole array) published 98,398 articles in just six months (from Jan 2019 to June 2019).
If each article contains roughly 10 pages, then this amounts to {\it one million} pages! \section{What has gone wrong in current times? (Are we going from facts to pathology?)} Some thousands of papers were published before 1900. By 1960, two million scientific papers had been published! In current times, over two million scientific papers are being published every year! What was produced between 1650 and 1960 (over three hundred years) is now being produced every year (in terms of volume, not number of discoveries)\cite{gian}. But the number of impactful discoveries made from the last half of the 20th century to the current time is much smaller than the number of papers published in this time period (in comparison to that in the 19th and the first half of the 20th century). Concrete data\cite{fort} show that there is a linear growth of ideas but an exponential growth of the number of papers published (figure 1). \begin{figure}[h!] \begin{center} \begin{tabular}{cc} \includegraphics[width =5.5cm]{fig1.pdf}& \includegraphics[width =5.5cm]{fig2.pdf}\\ (a)&(b)\\ \end{tabular} \end{center} \caption{(a) Exponential growth of paper production. (b) Only linear growth of ideas. For actual data and graphs, refer to\cite{fort}.} \label{f1} \end{figure} This strange trend in the data raises serious questions: Are we going from facts to pathology? Is it due to scientific developments alone (new fields, new subfields)? Or is it due to the ever-growing number of active scientists? What is happening? \section{Reasons (for floods of papers).} The simplest reason one can give is that there are more active scientists working in very competitive environments (as compared to the situation in the 19th and early 20th centuries). Data show that in the 1930s there was one scientist per 10,000 people in the whole world. By 2013, this number had swelled to one scientist per 700 people\cite{gian}. This is a 14-fold increase! There were three lakh (300,000) scientists in 1935.
And this number grew to 10 million in 2016. Even if the ``production rate'' per scientist (that is, the average number of papers published per scientist) remained the same, this sheer rise in the number of active scientists could explain the larger production of published papers.\footnote{In the old culture, the majority of professors/investigators used to publish a couple of dozen papers in their entire career. Currently, the majority of professors/investigators publish over 100 papers in their entire scientific career. So the average number of papers published {\it per scientist} has also increased, at least in the physical sciences. Collaborations increase the production rate further.} But the question raised in the previous section remains: Why is the number of impactful discoveries made much smaller than the number of papers published? It could be due to the following reasons: \begin{enumerate} \item There are distortions in the practice of science. Old trend: theoretical understanding of the experimental facts (Max Planck and black-body radiation; Einstein and the photoelectric effect; etc.), and a passion to find some ``transcendental truths''. New trend: more theoretical ideas and more ``proposed'' experiments. So more papers! More generalizations and more abstractions. Imagine a hypothetical situation, do a calculation, produce a paper! This is due to the pressure to publish. It drastically reduces the ratio of the number of impactful discoveries to the total number of papers published. \item The importance of the number of papers published in getting jobs, postdoc positions, funds, etc. This puts emphasis on numbers along with quality. As we discuss below, the metrics that measure quality are not foolproof. \item A scientific culture that promotes the numbers business (refer to footnote 1). \item The emergence of competitive environments. This leads to a constant pressure to build huge groups/collaborations, etc. This results in overproduction.
There is a trend to follow hot/fashionable topics and hypes\cite{eva}, and to invent strategies to sell and over-sell (intense fights with editors and referees). \item In old times, papers had mostly a single author or a couple of authors; now the majority of papers come out of collaborations (collaborations are sometimes required). Collaboration has the potential to increase volume, but does it lead to an output of great-quality publications? Surely a point to ponder over!\cite{henri} \item The pressure to publish in reputed journals, rejections, and then the phenomenon of predatory journals: you pay and publish. Easy and low-quality publications. Loss of impactful science. And, most importantly, the misuse of the concept of ``open access''. \item Core open scientific problems are limited in number. There are problems posed by ``Nature'' and then there are ``man-made'' problems. The main scientific problems in theoretical physics can be counted: beyond-standard-model physics; what is the small CP violation trying to tell us?; GUT: the ``wedding'' of QCD with the electroweak theory; the neutrino mass problem; cuprate high-temperature superconductivity; the physics of heavy-fermion metals; the electroweak theory is renormalizable, but there is no quantum theory of gravity which is renormalizable; can there be a theory using which one can compute the mass of an electron?; the problem of dark matter; etc. \begin{figure}[!h] \begin{center} \includegraphics[height=5cm]{fig3.pdf} \caption{Sets of peripheral problems.} \end{center} \label{f2} \end{figure} To meet the high demands of competition and job issues, one is forced to invent easy and peripheral problems (refer to figure (2)). Working on peripheral problems leads to publications. It can be helpful to a student in his/her career. It can be helpful in promotions/jobs. It may be helpful in getting awards, fellowships of academies, etc. But it leads to a loss of impactful science.
It takes years to solve a hard problem, but an easy problem can be solved in less time. The current academic culture supports the latter, unfortunately! \item There are experimentally under-determined problems. When the available input experimental parameters required to validate a theory are few in number, there is a large parameter space to play with. Many theoretical models/scenarios are possible (as the problem remains observationally/experimentally under-determined). This may burden the literature. Care must be exercised when publishing in such areas. \item Addiction to publishing: I know a person who gets some sort of irritation (mental upset) if he does not get a paper out in a month's time. So every month he must get a paper (low quality, high quality, through students, through collaborators; it does not matter!). Will that large volume of papers (containing solutions of easy and man-made problems) be useful? Will it not burden referees, editors, and students who are just entering the field? \item A vicious cycle: The greater production of papers leads to more funding, further leading to bigger collaborations/groups. This in turn leads to a greater production of papers. Thus, a ``vicious cycle'' starts! (refer to figure 3). \begin{figure}[!h] \begin{center} \includegraphics[height=5cm]{fig4.pdf} \caption{A ``vicious cycle''.} \end{center} \label{f3} \end{figure} But there is a difference between industrial production and the scientific solution of a riddle posed by Nature. It is the duty of the funding agencies not to put emphasis on quantity, and, while measuring the quality of scientific output through the standard measures, they must realize that these measures (metrics, impact factors of journals, etc.) are themselves distorting the good culture (as discussed below). Actual reading of the scientific papers by an expert panel can help save the situation.
\end{enumerate} \section{Consequences (on individuals; on sociology; and on science itself).} Emphasis on the number of publications (or a culture of producing a large number of publications) leads to \begin{enumerate} \item a shift of focus from central, difficult problems to more peripheral and easy problems. It is easy to solve an easy problem and more difficult to solve a difficult problem. \item a loss of cohesiveness of a given field and unnecessary fragmentation within its subfields. This leads to difficulties for a beginner trying to understand and penetrate a given field (one has to deal with a huge volume of literature, much of which may not be relevant). The human capacity for information processing is limited; there are physiological limitations. One can read a couple of the papers published per day in one's special area (refer to the arXiv). How can one even read the abstracts if hundreds of papers are being published in one's research topic? It becomes difficult to keep oneself updated, and unnecessary fragmentation (within sub-fields) occurs. Surely the cohesiveness of the field is in danger! \item the phenomenon of ``re-discovering''. A literature survey takes time (lots of time when there are lots of papers to study). The easy way out: cut it! One remains ignorant about what has been published. Then the phenomenon of ``rediscovering it'' happens (most of the time unintentionally, but sometimes intentionally too, just to give a flavor of originality to one's own work)\cite{gian}. \item fashionable/hot topics sometimes displacing important and core scientific problems. If some of the leading persons move to a hot topic, then the majority of the community ``jumps'' to that hot topic, to ``catch up'' in the race! Is it good? Will the spirit of science gain from it? Many fundamental topics have been displaced by fashion! Dirac said: ``Quantum mechanics is a provisional theory.'' And many giants believed so. The problem of quantum mechanics has been displaced!
Working on it is not in line with the market-and-fashion approach! The foundations of quantum mechanics is a difficult problem; working on it will not lead to quick and numerous publications. There is another aspect to this issue. Consider a person very motivated and deeply interested in the foundations of quantum mechanics. Will the current ``market-driven approach'' support him? The answer is: no! The person will not get publications, and will be filtered out of the system. Otherwise he/she has to do two types of research in parallel: computational research, which will lead to quicker publications and will support him/her in the system, and, in the rest of the time, work on the foundations of quantum mechanics. But this requires lots of self-discipline and a very wise division of time and energy! Who finally loses? The foundations of quantum mechanics may be the loser! \item a loss of diversity, of free exploration, of independent minds, and of passion too!\cite{eva} \item unbearable pressure on editors and referees. Most researchers think that their work is excellent, and the majority of them make all kinds of efforts and fight with the editorial board members to publish their results. This leads to a loss of interest among editors and referees. \item the introduction of ``selection staff'' (or professional editors) into the editorial panels of reputed journals. As expert editors cannot do justice to hundreds of papers submitted per day, a new layer has been introduced into the editorial boards of many journals. This selection staff's duty is to reject! (sometimes 80--90 percent of papers are rejected, on the grounds of scope and the type of readership associated with the journal\cite{gian}). Are the decisions of the selection staff always unbiased? No! Who is responsible? Overproduction! \item plagiarism and clever plagiarism. There are uncountable cases of plagiarism. Overproduction and the rush to publish have led to more and more cases of plagiarism.
But thanks to recent powerful software that can detect plagiarism, it is hopefully getting under control. However, ``clever plagiarism'' is more elusive. Stealing an already published idea and rewording it in a different form is a case of clever plagiarism\cite{gian}. Unfortunately, no software can catch it! Only moral values, and respect for your fellow researchers in the field, can save your science from this evil. The demand for a large number of publications is one of the reasons for these evil practices in science. \end{enumerate} \section{Phenomenon of predatory journals.} Why are there so many predatory journals in recent times? Because so many papers are produced that the standard time-tested journals are not able to cope with such floods of papers. They reject. They have to! This prompts paid and predatory journals, and this leads to the degradation of the quality and standards of scientific research. \begin{figure}[!h] \begin{center} \includegraphics[height=6cm]{fig5.pdf} \caption{Predatory journals and misuse of the open access.} \end{center} \end{figure} Figure (4) depicts how the business of predatory journals grows. Overproduction is the root cause. \section{Some Indices} Surely the indices (the impact factor of a journal and the h-index of a scientist) were created to improve the quality of scientific work and to improve scientific culture. But people always find loopholes, and then these indices can be exploited and misused. Let us briefly go into these issues. \subsection{Impact factor of a journal} The impact factor (IF) of a journal is defined as \begin{equation} \mathrm{IF} = \frac{\text{total number of citations for all the papers published by a journal in a given year}}{\text{total number of papers published by that journal in that year}}. \end{equation} It is a good parameter for a journal, but some commercial journals (whose main aim is profit) find ways to manipulate it.
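For concreteness, the ratio above can be evaluated as in the following toy sketch; the numbers are purely hypothetical and stand for no particular journal.

```python
# Toy illustration of the impact-factor ratio (hypothetical numbers).
citations_in_year = 1500   # citations for all the papers published by the journal
papers_in_year = 300       # papers the journal published in that year

impact_factor = citations_in_year / papers_in_year
print(impact_factor)  # -> 5.0
```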
As Gianfranco\cite{gian} argues, the editors of such journals invite review articles from respected/famous scientists. This leads to citations, and a higher IF. In addition, the editors tend to select papers that follow fashion and have the potential to attract an audience! This distorts the good scientific culture, and it sometimes badly affects journals with more respect for moral values and good scientific culture. Those suffer. It is like one TV channel (which showcases entertainment and hot news) outperforming another TV channel (which is more culturally oriented, showing classical material). So the only solution is that the scientific community should not be too obsessed with the impact-factor business of scientific journals. \subsection{h-index} Can a single number characterize the research work of a scientist? The answer is no! But in typical situations it is a good indicator. The problem comes when it is made the basis of promotions, jobs, funding, etc. Only a careful study of one's scientific work by an expert can evaluate that person's scientific output honestly. But that takes a lot of time! So panels consisting of experts resort to the easier counting method. However, it has obvious flaws. Just for example: the h-index of Albert Einstein is 44, and there are so many researchers whose h-index is greater than 44! Can we compare? Peter Higgs has an h-index of 11, and there are so many scientists whose h-index is greater than 11! Again, can we compare? No, we cannot compare this way. People who support the h-index can argue that these are special cases, not the general situation. Yes, they are right in that. But the time-tested good indicators (passion; the novelty and depth of the work done; diversity; curiosity; etc.) cannot be quantified, and the h-index misses them. These are the good indicators, and they are impossible to capture in numbers!\cite{henri} \section{What is good in current times?} The concept of the arXiv is a novel one!
One gets immediate publicity and ownership of ideas (the problem of the ``theft of ideas'' of the pre-arXiv era is thus solved to some extent)\cite{henri}. Information technology and communications have made access to scientific publications very easy. The old culture of going to the library, finding the hard copy of the required journal, finding the required article in it, and then getting it photocopied is all gone. With online access, one can read the soft copy on one's laptop or desktop, or one can print it. In fact, if you have good Internet connectivity, you can sit anywhere in the world and do theoretical research (you need some paper, a pencil, and the Internet, that's all!). The case of experimental research is different. The author would like to make a request to the community: if you find a good/appropriate arXiv paper (whether it is published or not), please don't hesitate to cite it in your work. \section{What should be done? Collective efforts?} As mentioned before, data show that the number of active scientists is growing fast (in the 1930s: 1 scientist per 10,000 people; in 2013: 1 scientist per 700 people). But the number of fundamental problems remains the same! The current scientific sociology resolves this by inventing and working on peripheral problems. This will not help science advance further. There should be some mechanism for knowing how many scientific positions/postdoc positions, etc., are available (averaged, say, over a couple of years) and how many PhD students are joining academic research (again, averaged over a couple of years). This kind of data can balance the job-opportunity equations and can lessen the pressure to compete, and thus the pressure to publish. But it will be great if new riddles are discovered. Real progress will happen when more scientists are interested in experimental and observational science!
For it is experimental and observational science that can discover new scientific problems/riddles worth working on! The scientific community has to think seriously about this! Currently, the division seems very skewed. Also, there has to be a very tight correlation between experimental work and theoretical work. The current academic culture of pressure and competition is not making students into true scholars. The current system is producing skilled manpower: skilled at producing papers! If there were less pressure to publish, students could go to libraries and spend hours reading there, deepening and widening their knowledge of the topics of their interest. A more stimulating and motivating environment is required. How can the excessive load on some international journals be reduced? And how can the phenomenon of predatory journals be controlled? I think due respect must be given to one's national journals. Develop a culture where publishing in a national journal is a matter of prestige. If one of your national journals has got a very bad reputation (for publishing articles which have been rejected by almost all relevant journals), then the time is ripe to launch a new journal! And forget the old one. National science academies must look into these matters; they should launch new journals (run by academic staff), set high standards of quality, and encourage researchers to send their manuscripts for publication. In hiring, publications in national journals should be given due weightage. If ``we love and respect our country,'' then why don't ``we love and respect our national journals?'' This will reduce the excessive load on some international journals! There should be more conferences organized by academic staff (not by commercial organizers), with plenty of discussion time. Be aware of predatory conferences (their sole purpose is financial profit)\cite{gian}.
\section{Conclusion} I would like to end with a quote from Gianfranco Pacchioni\cite{gian}: ``The pressure towards achieving new results is daily, and leaves little room, if any at all, to ponder the meaning of what one does.'' Therefore, it is time to publish fewer, novel, and complete works. Only collective and worldwide efforts can solve these grave problems of the practice of science, not you and me alone! \section*{Acknowledgments} This article is inspired by the ``Round table on science in h-index era'', 22nd--24th July, ICAM Global Summit, 2020.
And the phenomenon of the collaborative research was very rare (most of the time, papers published were single-authored). Majority of professors/investigators use of publish a couple of dozens of papers in their entire scientific career. Consider the example of Josiah Gibbs (1839-1903) who published less than two dozen papers in his entire career. Gibbs case is a typical case of a scientist of that era (in terms of the number of publications). \section{Physics in 20th century} From 1900 to 1950, some of the main discoveries made are: Energy quantum by M. Planck (1900); nuclear atom by Rutherford (1911); superconductivity by Onnes (1911); special and general relativity by Einstein (1905, and 1915 respectively); crystal structures and X-ray diffraction (William and Lawrence Bragg, 1915); Quantum mechanics (M. Planck, N. Bohr, A. Einstein, W. Heisenberg, E. Schroedinger, M Born, P. Dirac and others); Raman effect (by C. V. Raman); Dirac's theory of electrons and positrons; nuclear fission and fusion; Quantum electrodynamics; etc. The later half of the 19th century and the first half of the 20th century can be said "the golden age" of physics. From 1950 to Current times, some of the main discoveries made are : Nuclear and particle physics; solid state physics; semiconductors and transistors; electroweak theory and the emergence of standard model; Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity; parity violation; quantum Hall effects; renormalization; QCD; BEC realized; Higgs boson observed; gravitational waves observed, etc. After WWII, there is an outpouring of the scientific activity. This is due to technological developments, new instruments of investigations, and due to the vast applicability of quantum mechanics into atomic, solid state, and nuclear physics. This outpouring is evident from how the scientific journals evolved from pre-WWII era to post-WWII era. Take for example, the Phys. Rev. The Phys Rev started on 1893, as a single volume. 
In 1958, PRL was introduced; and in 1970, PRA, PRB, PRC, and PRD were introduced. Currently APS is publishing an array of journals: Phys. Rev. A, B, C, D. Phys. Rev. E; Physics; Phys. Rev. X; Phys. Rev. Applied; Phys. Rev. Fluids; Phys. Rev. Materials; Phys. Rev. Research; PRX Quantum etc. In the year 1910 Phys. Rev. published articles with 1484 printed pages in total, whereas in 2019, only the Phys. Rev. B (one journal out of the whole array) published 98,398 articles in just six months (from Jan 2011 to June 2019). If each article contains roughly 10 pages, then this amounts to {\it one million} pages! \section{What has gone wrong in current times? (Are we going from facts to pathology?)} There were some thousands of papers published before 1900. By 1960, two million scientific papers have been published! Current times, over two million scientific papers are being published every year! What was produced between 1650 to 1960 (over three hundred years) is now being produced every year (in terms of volume, not number of discoveries)\cite{gian}. But the number of impactful discoveries made in the last half of the 20th century to the current time is much less than the number of papers published in this time period (in comparison to that in the 19th and the first half of the 20th century). Concrete data\cite{fort} shows that there is a linear growth of ideas but there is an exponential growth of the number of papers published (figure 1). \begin{figure}[h!] \begin{center} \begin{tabular}{cc} \includegraphics[width =5.5cm]{fig1.pdf}& \includegraphics[width =5.5cm]{fig2.pdf}\\ (a)&(b)\\ \end{tabular} \end{center} \caption{(a) Exponential growth of paper production. (b) Only linear growth of ideas. For actual data and graphs, refer to\cite{fort}.} \label{f1} \end{figure} This strange trend in the data raises serious questions: Are we going from facts to pathology? Is it due to scientific developments only (new fields, new subfields)? 
Or is it due to the ever growing number of active scientists? What is happening? \section{Reasons (for floods of papers).} Simplest reason one can give is that there are more number of active scientists working in very competitive environments (as compared to the situation in the 19th and early 20th century). Data shows that in 1930s there was one scientist per 10,000 people in the whole world. This number in 2013 swelled to one scientist per 700 people\cite{gian}. This is 14 fold increase! There were 3 lakh scientists in 1935. And this number grows to 10 million in 2016. Even if the "production rate" per scientist (that is the average number of papers published per scientist) remains the same, this sheer rise in the number of active scientists could explain the larger production of the number of published papers\footnote{In old culture, majority of professors/investigators use of publish a couple of dozens of papers in their entire career. Currently, majority of professors/investigators publish over 100 papers in their entire scientific career. So average number of papers published {\it per scientist} has also increased, at least in the physical sciences. Collaborations increase the production rate further.} But the question raised in the previous section: Why are the number of impactful discoveries made much less than the number of papers published? It could be due to the following reasons: \begin{enumerate} \item There are distortions in the practice of science: Old trend: theoretical understanding of the experimental facts (Max Planck and black body radiation; Einstein and photoelectric effect etc). Passion to find some ``transcendental truths''. New tread: More theoretical ideas and more ``proposed'' experiments. So more papers! More generalizations and more abstractions Imagine a hypothetical situation, do a calculation, produce a paper! This is due to the pressure to publish. 
This drastically reduces the ratio of the number of impactful discoveries to the total number of papers published. \item The importance of the number of papers published in getting jobs, postdoc positions, funds, etc. This puts emphasis on numbers along with quality. As we discuss below, the metrics that measure quality are not foolproof. \item The sheer scientific culture that promotes the numbers business (refer to footnote 1). \item The emergence of competitive environments. This further leads to a constant pressure to build huge groups/collaborations, etc., which results in overproduction. There is a trend to follow hot topics/fashionable topics/hypes\cite{eva}, and to invent strategies to sell and over-sell (intense fights with editors and referees). \item Old times: mostly a single author or a couple of authors. Now the majority of papers come out of collaborations (collaborations are sometimes required). Collaborations have the potential to increase volume, but whether they lead to an output of great-quality publications is surely a point to ponder over!\cite{henri} \item The pressure to publish in reputed journals, rejections, and then the phenomenon of predatory journals: you pay and publish. Easy and low-quality publications. Loss of impactful science. And, most importantly, the misuse of the concept of ``open access''. \item Core open scientific problems are limited in number. There are problems posed by ``Nature'' and then there are ``man-made'' problems. The main scientific problems in theoretical physics can be counted: beyond-standard-model physics; What is the small CP violation trying to tell us? GUT: the ``wedding'' of QCD with the electroweak theory; the neutrino mass problem; cuprate high-temperature superconductivity; the physics of heavy-fermion metals, etc. The electroweak theory is renormalizable, but there is no renormalizable quantum theory of gravity; Can there be a theory with which one can compute the mass of the electron? The problem of dark matter, etc.
\begin{figure}[!h] \begin{center} \includegraphics[height=5cm]{fig3.pdf} \caption{Sets of peripheral problems.} \end{center} \label{f2} \end{figure} To meet the high demands of competition and job issues, one is forced to invent easy and peripheral problems (refer to figure (2)). Working on peripheral problems leads to publications. It can be helpful to a student in his/her career. It can be helpful in promotions/jobs. It may be helpful in getting some awards, fellowships of academies, etc. But it leads to a loss of impactful science. It takes years to solve a hard problem, but an easy problem can be solved in less time. The current academic culture supports the latter, unfortunately! \item There are experimentally under-determined problems. When the available input experimental parameters required to validate a theory are few in number, there is a large parameter space to play with. Many theoretical models/scenarios are possible (as the problem remains observationally/experimentally under-determined). This may burden the literature, so care must be exercised when publishing in such areas. \item Addiction to publishing: I know a person who gets some sort of irritation (mental unrest) if he does not get a paper out within a month. So every month he must get a paper (low quality, high quality, through students, through collaborators, it does not matter!). Will that large volume of papers (containing solutions of easy and man-made problems) be useful? Will it not burden referees, editors, and students who are just entering the field? \item Vicious cycle: A greater production of papers leads to more funding, which in turn leads to bigger collaborations/groups. This again leads to a greater production of papers. Thus, a ``vicious cycle'' starts! (refer to figure 3).
\begin{figure}[!h] \begin{center} \includegraphics[height=5cm]{fig4.pdf} \caption{A ``vicious cycle''.} \end{center} \label{f3} \end{figure} But there is a difference between industrial production and the scientific solution of a riddle posed by Nature. It is the duty of the funding agencies not to put emphasis on quantity and, while measuring the quality of the scientific output through the standard measures, to realize that these measures (metrics, impact factors of journals, etc.) are themselves distorting the good culture (as discussed below). Actual reading of the scientific papers by an expert panel can help save the situation. \end{enumerate} \section{Consequences (on individuals; on sociology; and on science itself).} An emphasis on the number of publications (or a culture of producing a large number of publications) leads to \begin{enumerate} \item a shift of focus from central difficult problems to more peripheral and easy problems. It is easy to solve an easy problem and difficult to solve a difficult problem. \item a loss of cohesiveness of a given field and unnecessary fragmentation within its subfields. This makes it difficult for a beginner to understand and penetrate a given field (one has to deal with a huge volume of literature, much of which may not be relevant). The human capacity for information processing is limited; there are physiological limitations. One can read a couple of papers published per day in one's special area (refer to the arXiv). How can one even read the abstracts if hundreds of papers are being published in one's research topic? It becomes difficult to keep oneself updated, and unnecessary fragmentation (within subfields) occurs. Surely the cohesiveness of the field is in danger! \item the phenomenon of ``re-discovering''. A literature survey takes time (lots of time when there are lots of papers to study). The easy way out: cut it! One remains ignorant of what has been published.
Then the phenomenon of ``rediscovering it'' happens (most of the time unintentionally, but sometimes intentionally too, just to give a flavor of originality to one's own work)\cite{gian}. \item the displacement of important and core scientific problems by fashionable/hot topics. If one of the leading figures moves to a hot topic, then the majority of the community ``jumps'' to that hot topic, to ``catch up'' in the race! Is this good? Will the spirit of science gain from it? Many fundamental topics have been displaced by fashion! Dirac said: ``Quantum mechanics is a provisional theory.'' And many giants believed so. The problem of quantum mechanics has been displaced! Working on it is not in line with the market-and-fashion approach! The foundations of quantum mechanics is a difficult problem, and working on it will not lead to quick and numerous publications. There is another aspect to this issue. Consider a person very motivated and very deeply interested in the foundations of quantum mechanics. Will the current ``market-driven approach'' support him? The answer is ``No''! The person will not get publications, and will be filtered out of the system. Otherwise, he/she has to do two types of research in parallel: computational research, which leads to quicker publications and keeps him/her in the system, while in the remaining time he/she works on the foundations of quantum mechanics. But this requires lots of self-discipline and a very wise division of time and energy! Who finally loses? The foundations of quantum mechanics may be the loser! \item a loss of diversity; a loss of free exploration; a loss of independent minds; and a loss of passion too!\cite{eva} \item unbearable pressure on editors and referees. Most researchers think that their work is excellent, and the majority of them make all kinds of efforts and fight with the editorial board members to publish their results. This leads to a loss of interest among editors and referees.
\item the introduction of ``selection staff'' (or professional editors) into the editorial panels of reputed journals. As expert editors cannot do justice to hundreds of papers submitted per day, a new layer has been introduced into the editorial boards of many journals. This selection staff's duty is to reject! (Sometimes 80--90 percent of papers are rejected, on the grounds of scope and the type of readership associated with the journal\cite{gian}.) Are the decisions of the selection staff always unbiased? No! Who is responsible? Overproduction! \item plagiarism and clever plagiarism. There are countless cases of plagiarism. Overproduction and the rush to publish have led to more and more cases of plagiarism. But thanks to recent powerful plagiarism-detection software, it is hopefully getting under control. However, ``clever plagiarism'' is more elusive. Stealing an already published idea and rewording it in a different form is a case of clever plagiarism\cite{gian}. Unfortunately, no software can catch it! Only moral values and respect for your fellow researchers in the field can save your science from this evil. The demand for a large number of publications is one of the reasons for these evil practices in science. \end{enumerate} \section{Phenomenon of predatory journals.} Why are there so many predatory journals in recent times? Because so many papers are produced that the standard time-tested journals cannot cope with such floods of papers. They reject. They have to! This prompts paid and predatory journals, and it leads to the degradation of the quality and standards of scientific research. \begin{figure}[!h] \begin{center} \includegraphics[height=6cm]{fig5.pdf} \caption{Predatory journals and misuse of the open access.} \end{center} \end{figure} Figure (4) depicts how the business of predatory journals grows. Overproduction is the root cause.
\section{Some Indices} Surely the indices (the impact factor of a journal and the h-index of a scientist) were created to improve the quality of scientific work and to improve the scientific culture. But people always find loopholes, and then these indices can be exploited and misused. Let us briefly go into these issues. \subsection{Impact factor of a journal} The impact factor (IF) of a journal is defined as \begin{equation} {\rm IF}=\frac{\mbox{total number of citations in a given year for all the papers published by a journal}}{\mbox{total number of papers published by that journal in that year}}. \end{equation} It is a good parameter for a journal, but some commercial journals (whose main aim is profit) find ways to manipulate it. As Gianfranco Pacchioni\cite{gian} argues, the editors of such journals invite review articles from respected/famous scientists. This leads to citations, and a higher IF. In addition, editors tend to select papers that follow fashion and have the potential to attract an audience! This distorts the good scientific culture, and it sometimes badly affects journals having more respect for moral values and good scientific culture. Those suffer. It is like one TV channel (which showcases entertainment and hot news) outperforming another TV channel (which is more culturally oriented, say towards classical material). So the only solution is that the scientific community should not be too obsessed with the impact-factor business of scientific journals. \subsection{h-index} Can a single number define the research work of a scientist? The answer is no! But in typical situations it is a good indicator. The problem comes when it is made the basis of promotions, jobs, funding, etc. Only a careful study of one's scientific work by an expert can evaluate that person's scientific output honestly. But that takes a lot of time! So panels consisting of experts resort to the easier counting method. However, it has obvious flaws.
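For the reader's convenience, let us recall the definition explicitly (a standard formulation, added here for concreteness): the h-index of a scientist is
\[
h=\max\left\{ n\geq0\ :\ \mbox{the scientist has at least }n\mbox{ papers with at least }n\mbox{ citations each}\right\} .
\]
For example, a scientist whose five papers have received 10, 8, 5, 4, and 3 citations has $h=4$: four papers have at least 4 citations each, but there are not five papers with at least 5 citations each.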
Just for example: the h-index of Albert Einstein is 44, yet there are many researchers whose h-index is greater than 44! Can we compare? Peter Higgs has an h-index of 11, and there are many scientists whose h-index is greater than 11! Again, can we compare? No, we cannot compare this way. People who support the h-index can argue that these are special cases, not general ones. Yes, they are right in that. But the time-tested good indicators (passion; novelty and depth of the work done; diversity; curiosity, etc.) cannot be quantified, and the h-index misses them. These good indicators are impossible to formulate in numbers!\cite{henri} \section{What is good in current times?} The concept of the arXiv is a novel one! One gets immediate publicity and ownership of ideas (the problem of the ``theft of ideas'' in the pre-arXiv era is thus solved to some extent)\cite{henri}. Information technology and communications have made access to scientific publications very easy. The old culture of going to the library, finding the hard copy of the required journal, finding the required article in it, and then getting it photocopied is all gone. With online access, one can read the soft copy on one's laptop or desktop, or get it printed on a printer. In fact, if you have good Internet connectivity, you can sit anywhere in the world and do theoretical research (you need some paper, a pencil, and the Internet, that's all!). The case of experimental research is different. The author would like to request the community: if you find a good/appropriate arXiv paper (whether it is published or not), please don't hesitate to cite it in your work. \section{What should be done? Collective efforts?} As mentioned before, the data show that the number of active scientists is growing fast (in the 1930s: 1 scientist per 10,000 people; in 2013: 1 scientist per 700 people). But the number of fundamental problems remains the same! The current scientific sociology resolves this tension by inventing and working on peripheral problems.
This will not help science advance further. There should be some mechanism for knowing how many scientific positions/postdoc positions are available (averaged, say, over a couple of years) and how many PhD students are joining academic research (again, averaged over a couple of years). This kind of data can balance the job-opportunity equations and can lessen the pressure to compete, and thus the pressure to publish. But it will be great if new riddles are discovered. Real progress will happen when more scientists are interested in experimental and observational science! For it is experimental and observational science that can discover new scientific problems/riddles worth working on! The scientific community has to think seriously about this! Currently, the division seems very skewed. Also, there has to be a very tight correlation between the experimental work and the theoretical work. The current academic culture of pressure and competition is not making students true scholars. The current system is producing skilled manpower! Skilled to produce papers! If there were less pressure to publish, students could go to libraries and spend hours reading there, deepening and widening their knowledge of the topics of their interest. A more stimulating and motivating environment is required. How can the excessive load on some international journals be reduced? And how can the phenomenon of predatory journals be controlled? I think due respect must be given to one's national journals. Develop a culture where publishing in a national journal is a matter of prestige. If one of your national journals has acquired a very bad reputation (for publishing articles that have been rejected by almost all relevant journals), then the time is ripe to launch a new journal! And forget the old one.
National science academies must look into these matters: they should launch new journals (run by academic staff), set high standards of quality, and encourage researchers to send their manuscripts for publication. In hiring, publications in national journals should be given due weightage. It is like this: if ``we love and respect our country'', then why don't ``we love and respect our national journals''? This will reduce the excessive load on some international journals! There should be more conferences organized by academic staff (not by commercial organizers), with plenty of discussion time. Beware of predatory conferences (their sole purpose is financial profit)\cite{gian}. \section{Conclusion} I would like to end with a quote from Gianfranco Pacchioni\cite{gian}: ``The pressure towards achieving new results is daily, and leaves little room, if any at all, to ponder the meaning of what one does.'' Therefore, it is time to publish less, and to publish novel and complete works. Only collective and worldwide efforts can solve these grave problems of the practice of science. Not you and me alone! \section*{Acknowledgments} This article was inspired by the ``Round table on science in the h-index era'', 22nd--24th July, ICAM Global Summit, 2020.
\section{Introduction} In \cite{GKZ}, Gangl, Kaneko and Zagier gave $\mathbb{Q}$-linear relations among double zeta values of odd indices $\zeta({\rm odd},{\rm odd})$ by using even period polynomials of modular forms for ${\rm SL}(2,\mathbb{Z})$. The purpose of this paper is to generalize their result to colored multiple zeta values and even period polynomials of modular forms of level $N$. More precisely, Gangl, Kaneko and Zagier's theorem and our main theorem are formulated as identities in the formal double zeta space, which is a lift of the space of double zeta values satisfying the double shuffle relations. Let us recall the theory of period polynomials for congruence subgroups in \cite{Pasol_Popa_periodpolynomial}. Fix a nonnegative integer $w$. We denote by $V_{w}$ the space of degree $w$ homogeneous polynomials in $X$ and $Y$. We define the right action of ${\rm GL}(2,\mathbb{Z})$ on $V_{w}$ by \[ \left.P(X,Y)\right|_{\gamma}=P(aX+bY,cX+dY)\quad\text{for }\gamma=\left(\begin{array}{cc} a & b\\ c & d \end{array}\right)\in{\rm GL}(2,\mathbb{Z}). \] We use the following notations for some elements of ${\rm GL}(2,\mathbb{Z})$: \[ \epsilon\coloneqq\left(\begin{array}{cc} -1 & 0\\ 0 & 1 \end{array}\right),\ J\coloneqq\left(\begin{array}{cc} -1 & 0\\ 0 & -1 \end{array}\right),\ S\coloneqq\left(\begin{array}{cc} 0 & -1\\ 1 & 0 \end{array}\right),\ U\coloneqq\left(\begin{array}{cc} 1 & -1\\ 1 & 0 \end{array}\right),\ T\coloneqq US^{-1}=\left(\begin{array}{cc} 1 & 1\\ 0 & 1 \end{array}\right). \] Let $\Gamma$ be a congruence subgroup of ${\rm SL}(2,\mathbb{Z})$ such that $\epsilon\Gamma\epsilon=\Gamma$. We denote by $\tilde{V}_{w}^{\Gamma}$ the space of maps $P:\Gamma\backslash{\rm SL}(2,\mathbb{Z})\to V_{w}$. 
We define the right action of ${\rm GL}(2,\mathbb{Z})$ on $\tilde{V}_{w}^{\Gamma}$ by $\left.P\right|_{\gamma}(C)=\left.P(C\gamma^{-1})\right|_{\gamma}$ for $\gamma\in{\rm SL}(2,\mathbb{Z})$ and $\left.P\right|_{\epsilon}(C)=\left.P(\epsilon C\epsilon)\right|_{\epsilon}$. We define the subspaces $\tilde{V}_{w}^{\Gamma}\supset V_{w}^{\Gamma}\supset W_{w}^{\Gamma}\supset C_{w}^{\Gamma}$ by \[ V_{w}^{\Gamma}\coloneqq\{P\in\tilde{V}_{w}^{\Gamma}:\left.P\right|_{J}=P\}, \] \[ W_{w}^{\Gamma}\coloneqq\{P\in V_{w}^{\Gamma}:\left.P\right|_{1+S}=\left.P\right|_{1+U+U^{2}}=0\}, \] and \[ C_{w}^{\Gamma}\coloneqq\{\left.P\right|_{1-S}\,:\,P\in V_{w}^{\Gamma},\ \left.P\right|_{T}=P\}. \] Since the action by $\epsilon$ preserves $W_{w}^{\Gamma}$ and $C_{w}^{\Gamma}$, they are decomposed into $\pm1$-eigenspaces, denoted by $W_{w}^{\Gamma,\pm}$ and $C_{w}^{\Gamma,\pm}$, respectively. For $f\in S_{w+2}(\Gamma)$ and $C\in\Gamma\backslash{\rm SL}(2,\mathbb{Z})$, define $\left.f\right|_{C}\in S_{w+2}(C^{-1}\Gamma C)$ by $\left.f\right|_{C}(z)=(cz+d)^{-w-2}f(\frac{az+b}{cz+d})$ where $\left(\begin{smallmatrix}a & b\\ c & d \end{smallmatrix}\right)\in C$. Furthermore, for $f\in S_{w+2}(\Gamma)$, define $\rho_{f}\in V_{w}^{\Gamma}\otimes\mathbb{C}$ by \[ \rho_{f}(C)=\int_{0}^{i\infty}\left.f\right|_{C}(z)\cdot(zY-X)^{w}dz. \] Then we can show $\rho_{f}\in W_{w}^{\Gamma}\otimes\mathbb{C}$ for $f\in S_{w+2}(\Gamma)$. We put $\rho_{f}^{\pm}=\frac{1}{2}(\rho_{f}\pm\left.\rho_{f}\right|_{\epsilon})$. Then, Pasol--Popa's restatement \cite[Theorem 2.1]{Pasol_Popa_periodpolynomial} of the Eichler--Shimura isomorphism says that the maps $S_{w+2}(\Gamma)\to W_{w}^{\Gamma,\pm}\otimes\mathbb{C}$, $f\mapsto\rho_{f}^{\pm}$ give rise to isomorphisms \[ S_{w+2}(\Gamma)\simeq(W_{w}^{\Gamma,\pm}/C_{w}^{\Gamma,\pm})\otimes\mathbb{C}. \] Furthermore, we put \[ \bar{W}_{w}^{\Gamma,\pm}\coloneqq\begin{cases} W_{w}^{\Gamma,\pm} & w:\,{\rm even}\\ W_{w}^{\Gamma,\mp} & w:\,{\rm odd}.
\end{cases} \] Note that $W_{w}^{{\rm SL}(2,\mathbb{Z}),\pm}$ is canonically isomorphic to \[ W_{w}^{\pm}\coloneqq\{P\in V_{w}:\left.P\right|_{1+S}=\left.P\right|_{1+U+U^{2}}=0,\,\left.P\right|_{\epsilon}=\pm P\} \] treated in \cite{GKZ}. Let us return to the double zeta values. The colored multiple zeta values of level $N$ are generalizations of the multiple zeta values, defined by \[ \zeta{k_{1},\dots,k_{d} \choose z_{1},\dots,z_{d}}=\sum_{0<m_{1}<\cdots<m_{d}}\frac{z_{1}^{m_{1}}\cdots z_{d}^{m_{d}}}{m_{1}^{k_{1}}\cdots m_{d}^{k_{d}}}\in\mathbb{C} \] where $k_{1},\dots,k_{d}$ are positive integers and $z_{1},\dots,z_{d}$ are $N$-th roots of unity with $(k_{d},z_{d})\neq(1,1)$. Furthermore, we define the shuffle (resp. harmonic) regularized colored multiple zeta values $\zeta^{\shuffle}{k_{1},\dots,k_{d} \choose z_{1},\dots,z_{d}}\in\mathbb{C}[T]$ (resp. $\zeta^{*}{k_{1},\dots,k_{d} \choose z_{1},\dots,z_{d}}\in\mathbb{C}[T]$) by shuffle (resp. harmonic) regularization. They equal $\zeta{k_{1},\dots,k_{d} \choose z_{1},\dots,z_{d}}$ if $(k_{d},z_{d})\neq(1,1)$ and satisfy the regularized double shuffle relations (\cite{Rac02}, \cite{AK04}, \cite[Chapter 13]{ZhaoBook}): \begin{align} \zeta^{\shuffle}\binom{r}{\zeta_{N}^{a}}\zeta^{\shuffle}\binom{s}{\zeta_{N}^{b}} & =\sum_{j=0}^{r-1}\binom{s-1+j}{j}\zeta^{\shuffle}\binom{r-j,s+j}{\zeta_{N}^{a-b},\zeta_{N}^{b}}+\sum_{j=0}^{s-1}{r-1+j \choose j}\zeta^{\shuffle}{s-j,r+j \choose \zeta_{N}^{b-a},\zeta_{N}^{a}}\nonumber \\ & =\zeta^{*}\binom{r,s}{\zeta_{N}^{a},\zeta_{N}^{b}}+\zeta^{*}\binom{s,r}{\zeta_{N}^{b},\zeta_{N}^{a}}+\zeta\binom{r+s}{\zeta_{N}^{a+b}}\label{eq:dsh_intro} \end{align} where $\zeta_{N}\coloneqq\exp(2\pi i/N)$. Furthermore, it is known that $\zeta^{*}\binom{r,s}{\zeta_{N}^{a},\zeta_{N}^{b}}=\zeta^{\shuffle}\binom{r,s}{\zeta_{N}^{a},\zeta_{N}^{b}}$ except for the case $r=s=1$, $a=b=0$. Let $k\geq2$ and $N\geq1$ with $(k,N)\neq(2,1)$.
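As a simple illustration of (\ref{eq:dsh_intro}), added here as a sanity check, take $N=1$ and $r=s=2$, and abbreviate $\zeta\binom{r,s}{1,1}$ by $\zeta(r,s)$; no regularization is needed in this case. The first expression gives
\[
\zeta(2)^{2}=2\zeta(2,2)+4\zeta(1,3),
\]
while the second gives
\[
\zeta(2)^{2}=2\zeta(2,2)+\zeta(4).
\]
Comparing the two recovers Euler's evaluation $\zeta(1,3)=\frac{1}{4}\zeta(4)$.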
Based on the regularized double shuffle relations, we define the \emph{formal double zeta space} of level $N$ and weight $k$ as the $\mathbb{Q}$-vector space $\mathcal{D}_{k,N}$ with generators \[ Z_{a,b}^{r,s},P_{a,b}^{r,s},Z_{c}^{k}\quad(r+s=k,\,r,s\geq1,\,a,b,c\in\mathbb{Z}/N\mathbb{Z},\,{\rm gcd}(a,b,N)=1) \] and relations \[ P_{a,b}^{r,s}=\sum_{j=0}^{r-1}\binom{s-1+j}{j}Z_{a-b,b}^{r-j,s+j}+\sum_{j=0}^{s-1}{r-1+j \choose j}Z_{b-a,a}^{s-j,r+j}=Z_{a,b}^{r,s}+Z_{b,a}^{s,r}+Z_{a+b}^{r+s}. \] Furthermore, based on the property \begin{equation} \zeta\binom{k}{\zeta_{N}^{a}}+(-1)^{k}\zeta\binom{k}{\zeta_{N}^{-a}}=\sum_{m\neq0}\frac{\zeta_{N}^{am}}{m^{k}}=-\frac{B_{k}(a/N)}{k!}(2\pi i)^{k}\in\mathbb{Q}\cdot(2\pi i)^{k},\label{eq:euler_intro} \end{equation} we define $\mathcal{P}_{k,N}^{{\rm ev}}\subset\mathcal{D}_{k,N}$ as the vector subspace spanned by \[ \left\{ P_{a,b}^{r,s}+(-1)^{r}P_{-a,b}^{r,s}+(-1)^{s}P_{a,-b}^{r,s}+(-1)^{r+s}P_{-a,-b}^{r,s}:\substack{r+s=k,r,s\geq1\\ a,b\in\mathbb{Z}/N\mathbb{Z},\,\gcd(a,b,N)=1 } \right\} \cup\left\{ Z_{a}^{k}+(-1)^{k}Z_{-a}^{k}:a\in\mathbb{Z}/N\mathbb{Z}\right\} . \] This is a generalization of the formal double zeta space $\mathcal{D}_{k}$ and its subspace $\mathcal{P}_{k}^{{\rm ev}}$ introduced in \cite{GKZ}. Note that by (\ref{eq:dsh_intro}) and (\ref{eq:euler_intro}) the $\mathbb{Q}$-linear map \[ \Phi_{k,N}:\mathcal{D}_{k,N}\to\mathbb{C}\ ;\ Z_{a,b}^{r,s}\mapsto\zeta^{\shuffle}\binom{r,s}{\zeta_{N}^{a},\zeta_{N}^{b}},\,P_{a,b}^{r,s}\mapsto\zeta^{\shuffle}\binom{r}{\zeta_{N}^{a}}\zeta^{\shuffle}\binom{s}{\zeta_{N}^{b}},\,Z_{c}^{k}\mapsto\zeta^{\shuffle}\binom{k}{\zeta_{N}^{c}} \] is well-defined and \[ \Phi_{k,N}(\mathcal{P}_{k,N}^{{\rm ev}})\subset\mathbb{Q}\cdot(2\pi i)^{k}
\] Note that the $\mathbb{Q}$-linear combination $\sum_{r,s,a,b}c_{a,b}^{r,s}Z_{a,b}^{r,s}$ is in $\mathcal{P}_{k,N}^{{\rm ev}}$ if the coefficients have the properties \[ c_{a,b}^{r,s}=c_{b,a}^{s,r}=(-1)^{r}c_{-a,b}^{r,s}=(-1)^{s}c_{a,-b}^{r,s} \] for all $r,s,a,b$. Furthermore, as an analogue of $\zeta^{\mathfrak{m}}({\rm odd},{\rm odd})$, define the set of (odd,odd)-type formal colored double zetas of level $N$ as \[ \left\{ \frac{1}{4}\sum_{\rho,\sigma\in\{\pm1\}}\rho^{r+1}\sigma^{s+1}Z_{\rho a,\sigma b}^{r,s}\,:\,\begin{aligned}0\leq a,b\leq N/2,\\ \text{\ensuremath{2a\notin N\mathbb{Z}} or \ensuremath{r} is odd},\\ \text{\ensuremath{2b\notin N\mathbb{Z}} or \ensuremath{s} is odd} \end{aligned} \right\} . \] Note that $\mathbb{Q}$-linear combinations of (odd,odd)-type formal colored double zetas are equivalent to the sums of the form \[ \sum_{a,b,r,s}c_{a,b}^{r,s}Z_{a,b}^{r,s} \] with \[ c_{a,b}^{r,s}=(-1)^{r+1}c_{-a,b}^{r,s}=(-1)^{s+1}c_{a,-b}^{r,s}. \] Under this setting, Gangl--Kaneko--Zagier's result can be stated as follows. \begin{thm}[\cite{GKZ}] \label{thm:GKZ}Fix an even positive integer $k$ and put $w=k-2$. Let $P(X,Y)\in W_{w}^{+}$ and define $q^{r,s}\in\mathbb{Q}$ by \[ P(X-Y,X)=\sum_{\substack{r+s=w\\ r,s\geq0 } }\frac{1}{r!s!}q^{r+1,s+1}X^{r}Y^{s}. \] Put $Z^{r,s}\coloneqq Z_{0,0}^{r,s}\in\mathcal{D}_{r+s,1}$ and $Z^{k}\coloneqq Z_{0}^{k}\in\mathcal{D}_{k,1}$. Then $q^{r,s}=q^{s,r}$ for $r,s$ even and \[ 3\sum_{\substack{r+s=k\\ r,s:{\rm odd} } }q^{r,s}Z^{r,s}=-\sum_{\substack{r+s=k\\ r,s:{\rm even} } }q^{r,s}Z^{r,s}-(\sum_{r+s=k}q^{r,s})Z^{k}\quad\in\mathcal{P}_{k,1}^{{\rm ev}}. \] Conversely, all the linear combinations of $Z^{r,s}$ with $r,s\geq1$ odd belonging to $\mathcal{P}_{k,1}^{{\rm ev}}$ are obtained in this way. \end{thm} \begin{cor}[\cite{GKZ}] \label{cor:GKZ}Let $P(X,Y)\in W_{w}^{+}$ and $q^{r,s}\in\mathbb{Q}$ be as in Theorem \ref{thm:GKZ}.
Then \[ \sum_{\substack{r+s=k\\ r,s:{\rm odd} } }q^{r,s}\zeta(r,s)\in\mathbb{Q}\cdot(2\pi i)^{k}. \] \end{cor} Let $A(N)$ be the set of pairs $(c,d)\in(\mathbb{Z}/N\mathbb{Z})^{2}$ satisfying ${\rm gcd}(c,d,N)=1$. Then there is a bijection \[ \Gamma_{1}(N)\backslash{\rm SL}(2,\mathbb{Z})\simeq A(N)\quad;\quad\Gamma_{1}(N)\left(\begin{array}{cc} a & b\\ c & d \end{array}\right)\mapsto(c,d). \] For $(a,b)\in A(N)$, we denote the corresponding element in $\Gamma_{1}(N)\backslash{\rm SL}(2,\mathbb{Z})$ by $C_{a,b}$. The following is the main theorem of this paper, which gives a generalization of Theorem \ref{thm:GKZ}. \begin{thm} \label{thm:main_intro}Fix $N\geq1$ and $w\geq0$, and put $k=w+2$. Let $P\in\bar{W}_{w}^{\Gamma_{1}(N),+}$ and define $q_{a,b}^{r,s}\in\mathbb{Q}$ for $r,s,a,b$ with $r,s\geq1$, $r+s=k$, and $(a,b)\in A(N)$ by \[ P(C_{a,-a+b})(X-Y,X)=\sum_{r,s}\frac{1}{r!s!}q_{a,b}^{r+1,s+1}X^{r}Y^{s}, \] and put \[ q_{a,b}^{r,s,{\rm ev}}=\frac{1}{2}\left(q_{a,b}^{r,s}+(-1)^{r}q_{-a,b}^{r,s}\right),\quad q_{a,b}^{r,s,{\rm od}}=\frac{1}{2}\left(q_{a,b}^{r,s}-(-1)^{r}q_{-a,b}^{r,s}\right). \] Then \[ q_{a,b}^{r,s,{\rm od}}=(-1)^{r+1}q_{-a,b}^{r,s,{\rm od}}=(-1)^{s+1}q_{a,-b}^{r,s,{\rm od}},\quad q_{a,b}^{r,s,{\rm ev}}=(-1)^{r}q_{-a,b}^{r,s,{\rm ev}}=(-1)^{s}q_{a,-b}^{r,s,{\rm ev}}=q_{b,a}^{s,r,{\rm ev}} \] and \begin{align*} 3\sum_{r,s,a,b}q_{a,b}^{r,s,{\rm od}}Z_{a,b}^{r,s} & =-\sum_{r,s,a,b}q_{a,b}^{r,s,{\rm ev}}Z_{a,b}^{r,s}-\sum_{r,s,a,b}q_{a,b}^{r,s}Z_{a+b}^{r+s}\quad\in\mathcal{P}_{k,N}^{{\rm ev}}. \end{align*} Conversely, a linear combination \[ \sum_{r,s,a,b}c_{a,b}^{r,s}Z_{a,b}^{r,s}\quad(c_{a,b}^{r,s}=(-1)^{r+1}c_{-a,b}^{r,s}=(-1)^{s+1}c_{a,-b}^{r,s}) \] belongs to $\mathcal{P}_{k,N}^{{\rm ev}}$ if and only if the coefficients $c_{a,b}^{r,s}$ arise as $q_{a,b}^{r,s,{\rm od}}$ in this way. \end{thm} As a corollary, we have the following generalization of Corollary \ref{cor:GKZ}: \begin{cor} Fix $N\geq1$ and $w\geq1$, and put $k=w+2$.
Let $P\in\bar{W}_{w}^{\Gamma_{1}(N),+}$ and $q_{a,b}^{r,s,{\rm od}}\in\mathbb{Q}$ be as in Theorem \ref{thm:main_intro}. Then \[ \sum_{r,s,a,b}q_{a,b}^{r,s,{\rm od}}\zeta^{\shuffle}{r,s \choose \zeta_{N}^{a},\zeta_{N}^{b}}\in\mathbb{Q}\cdot(2\pi i)^{k}. \] \end{cor} \section{Proof of main theorem} In this section, we give a proof of Theorem \ref{thm:main_intro}. \subsection{The dual vector space of $V_{w}^{\Gamma}$} Fix an integer $w\ge0$. We denote by $\tilde{\bar{V}}_{w}^{\Gamma}$ (resp. $\bar{V}_{w}^{\Gamma}$) a ${\rm GL}(2,\mathbb{Z})$-module which is just a copy of $\tilde{V}_{w}^{\Gamma}$ (resp. $V_{w}^{\Gamma}$) as an ${\rm SL}(2,\mathbb{Z})$-module, but the action of $\epsilon$ is defined by \[ \left.P\right|_{\epsilon}(C)=\left.P(J\epsilon C\epsilon)\right|_{\epsilon} \] for $C\in\Gamma\backslash{\rm SL}(2,\mathbb{Z})$. Note that $\bar{W}_{w}^{\Gamma,\pm}$ is naturally identified with \[ \{P\in\bar{V}_{w}^{\Gamma}:\left.P\right|_{1+S}=\left.P\right|_{1+U+U^{2}}=0,\,\left.P\right|_{\epsilon}=\pm P\}. \] Following \cite{Pasol_Popa_periodpolynomial}, define pairings on $V_{w}\times V_{w}$ and $V_{w}^{\Gamma}\times\bar{V}_{w}^{\Gamma}$ by \[ \langle\sum_{r=0}^{w}a_{r}X^{r}Y^{w-r},\sum_{r=0}^{w}b_{r}X^{r}Y^{w-r}\rangle=\sum_{r=0}^{w}(-1)^{w-r}{w \choose r}^{-1}a_{r}b_{w-r} \] and \[ \langle\langle P,Q\rangle\rangle=\frac{1}{[{\rm SL}(2,\mathbb{Z}):\Gamma]}\sum_{C\in\Gamma\backslash{\rm SL}(2,\mathbb{Z})}\langle P(C),Q(C)\rangle\qquad(P\in V_{w}^{\Gamma},Q\in\bar{V}_{w}^{\Gamma}). \] By this pairing, we regard $\bar{V}_{w}^{\Gamma}$ as the dual vector space of $V_{w}^{\Gamma}$. Then for $P,Q\in V_{w}$ and $g\in{\rm GL}(2,\mathbb{Z})$, we have $\langle\left.P\right|_{g},Q\rangle=\langle P,\left.Q\right|_{g^{\vee}}\rangle$, where $g^{\vee}=\det(g)g^{-1}\in{\rm GL}(2,\mathbb{Z})$ (see \cite[Section 3]{Pasol_Popa_periodpolynomial}).
Thus, we have \[ \langle\langle\left.P\right|_{g},Q\rangle\rangle=\langle\langle P,\left.Q\right|_{g^{-1}}\rangle\rangle \] for $P\in V_{w}^{\Gamma}$, $Q\in\bar{V}_{w}^{\Gamma}$, and $g\in{\rm GL}(2,\mathbb{Z})$. \subsection{Properties of finite dimensional vector spaces with $\mathrm{PGL}(2,\mathbb{Z})$-action} Let $A$ be a finite dimensional $\mathbb{Q}$-vector space with a right $\mathrm{PGL}(2,\mathbb{Z})$-action. For such $A$, we define the subspaces $\mathcal{W}(A)$, $A^{\pm}$ and $\mathcal{W}^{\pm}(A)$ of $A$ by \[ \mathcal{W}(A)=\{P\in A:\left.P\right|_{1+S}=\left.P\right|_{(1+U+U^{2})(1-S)}=0\}, \] \[ A^{\pm}=\{P\in A:\left.P\right|_{\epsilon}=\pm P\}, \] \[ \mathcal{W}^{\pm}(A)=\mathcal{W}(A)\cap A^{\pm}. \] Note that $\mathcal{W}(A)=\mathcal{W}^{+}(A)\oplus\mathcal{W}^{-}(A)$ since $\epsilon$ preserves $\mathcal{W}(A)$. Furthermore, we denote by $A^{\vee}$ the dual vector space of $A$. We also regard $A^{\vee}$ as a right $\mathrm{PGL}(2,\mathbb{Z})$-module by the action \begin{equation} \langle\left.P\right|_{g},Q\rangle=\langle P,\left.Q\right|_{g^{-1}}\rangle\qquad(P\in A^{\vee},Q\in A,g\in\mathrm{PGL}(2,\mathbb{Z})).\label{eq:dual_action} \end{equation} When $A=V_{w}^{\Gamma}$ or $\bar{V}_{w}^{\Gamma}$, $\mathcal{W}^{\pm}(A)$ coincides with the set of period polynomials. \begin{prop} \label{prop:W_general}We have \[ \mathcal{W}^{\pm}(V_{w}^{\Gamma})=W_{w}^{\Gamma,\pm}, \] and \[ \mathcal{W}^{\pm}(\bar{V}_{w}^{\Gamma})=\bar{W}_{w}^{\Gamma,\pm}. \] \end{prop} \begin{proof} Note that it is enough to show that $\mathcal{W}(V_{w}^{\Gamma})=W_{w}^{\Gamma}$. Recall that the definitions of $\mathcal{W}(V_{w}^{\Gamma})$ and $W_{w}^{\Gamma}$ are given by \begin{align*} \mathcal{W}(V_{w}^{\Gamma}) & =\{P\in V_{w}^{\Gamma}:\left.P\right|_{1+S}=0,\,\left.P\right|_{(1+U+U^{2})(1-S)}=0\},\\ W_{w}^{\Gamma} & =\{P\in V_{w}^{\Gamma}:\left.P\right|_{1+S}=0,\,\left.P\right|_{1+U+U^{2}}=0\}. 
\end{align*} Thus the proposition is equivalent to showing that $\left.P\right|_{1+U+U^{2}}=0$ for $P\in\mathcal{W}(V_{w}^{\Gamma})$. Let $P$ be any element of $\mathcal{W}(V_{w}^{\Gamma})$ and put $Q=\left.P\right|_{1+U+U^{2}}$. Then \[ \left.Q\right|_{S}=\left.Q\right|_{U}=Q. \] Since ${\rm SL}(2,\mathbb{Z})$ is generated by $S$ and $U$, we have $Q=\left.Q\right|_{g}$ for $g\in{\rm SL}(2,\mathbb{Z})$. Since $\Gamma$ is a finite index subgroup of ${\rm SL}(2,\mathbb{Z})$, there exists a positive integer $n$ such that $CT^{n}=C$ for any $C\in\Gamma\backslash{\rm SL}(2,\mathbb{Z})$. Then, for $C\in\Gamma\backslash{\rm SL}(2,\mathbb{Z})$, we have \[ Q(C)=\left.Q\right|_{T^{n}}(C)=\left.Q(CT^{-n})\right|_{T^{n}}=\left.Q(C)\right|_{T^{n}}, \] which implies \[ Q(C)\in\mathbb{Q}Y^{w}, \] since a polynomial invariant under $T^{n}$ is a polynomial in $Y$ alone. Furthermore, we also have \[ Q(C)=\left.Q\right|_{S}(C)=\left.Q(CS^{-1})\right|_{S}\in\mathbb{Q}X^{w}. \] Thus $Q=0$ if $w>0$. Hence the case $w>0$ is proved. Assume that $w=0$. Then $Q(C)=Q(C')$ for any $C,C'\in\Gamma\backslash{\rm SL}(2,\mathbb{Z})$ since \[ Q(C)=\left.Q\right|_{g}(C)=\left.Q(Cg^{-1})\right|_{g}=Q(Cg^{-1})=Q(C'), \] where $g$ is an element of ${\rm SL}(2,\mathbb{Z})$ satisfying $Cg^{-1}=C'$ and the action on $V_{0}$ is trivial. Let $\alpha\coloneqq Q(C)$, which does not depend on the choice of $C$. Then \begin{align*} \alpha & =\frac{1}{[{\rm SL}(2,\mathbb{Z}):\Gamma]}\sum_{C\in\Gamma\backslash{\rm SL}(2,\mathbb{Z})}Q(C)\\ & =\frac{1}{[{\rm SL}(2,\mathbb{Z}):\Gamma]}\sum_{C\in\Gamma\backslash{\rm SL}(2,\mathbb{Z})}\left.P\right|_{1+U+U^{2}}(C)\\ & =\frac{3}{[{\rm SL}(2,\mathbb{Z}):\Gamma]}\sum_{C\in\Gamma\backslash{\rm SL}(2,\mathbb{Z})}P(C)\\ & =\frac{3}{2[{\rm SL}(2,\mathbb{Z}):\Gamma]}\sum_{C\in\Gamma\backslash{\rm SL}(2,\mathbb{Z})}\left.P\right|_{1+S}(C)\\ & =0. \end{align*} Thus $Q=0$, which completes the proof.
\end{proof} \begin{prop} \label{prop:fA}Let $f_{A}:A^{+}\to A/A^{-}$ be the map defined by \[ f_{A}(P)=(\left.P\right|_{1+U-U^{2}S}\bmod A^{-}), \] let $B=A^{\vee}$ be the dual vector space of $A$, and let $f_{A}^{*}:B^{+}\to B/B^{-}$ be the dual homomorphism of $f_{A}$. Then \[ \ker f_{A}^{*}=\mathcal{W}^{+}(B). \] Furthermore, there is a bijection \[ \iota:\mathcal{W}^{+}(A)\simeq\ker(f_{A}) \] given by $\iota(P)=\left.P\right|_{U(1+\epsilon)}$, and thus $\dim_{\mathbb{Q}}\mathcal{W}^{+}(A)=\dim_{\mathbb{Q}}\mathcal{W}^{+}(B)$. \end{prop} \begin{proof} Note that $P$ is in $\ker f_{A}^{*}$ if and only if $\left.P\right|_{(1+U^{2}-SU)(1+\epsilon)}=0$. For $P\in\mathcal{W}^{+}(B)$, we have \[ \left.P\right|_{(1+U^{2}-SU)(1+\epsilon)}=\left.P\right|_{(1+U^{2}+U)(1+\epsilon)}=0, \] and thus $\mathcal{W}^{+}(B)\subset\ker f_{A}^{*}$. On the other hand, if $P\in\ker f_{A}^{*}$, then \begin{equation} 0=\left.P\right|_{(1+U^{2}-SU)(1+\epsilon)}=\left.P\right|_{2+U^{2}-SU+SUS-U^{2}S}.\label{eq:eA1} \end{equation} Acting by $S$ on (\ref{eq:eA1}), we also have \begin{equation} 0=\left.P\right|_{2S+U^{2}S-SUS+SU-U^{2}}.\label{eq:eA2} \end{equation} Adding (\ref{eq:eA1}) and (\ref{eq:eA2}), we get \begin{equation} \left.P\right|_{1+S}=0.\label{eq:eA3} \end{equation} By (\ref{eq:eA1}) and (\ref{eq:eA3}), we have \begin{equation} 0=\left.P\right|_{(1+U^{2}+U)(1-S)}.\label{eq:eA4} \end{equation} By (\ref{eq:eA3}) and (\ref{eq:eA4}), we have $P\in\mathcal{W}^{+}(B)$, which completes the proof of \begin{equation} \ker f_{A}^{*}=\mathcal{W}^{+}(B).\label{eq:ker_fAstar=00003DWplusB} \end{equation} The map $\iota$ is well-defined since \begin{align*} \left.P\right|_{U(1+\epsilon)(1+U-U^{2}S)(1+\epsilon)} & =\left.P\right|_{U(1+\epsilon)(U-U^{2}S)(1+\epsilon)}+2\left.P\right|_{U(1+\epsilon)}\\ & =\left.P\right|_{U(1+\epsilon)(1-\epsilon S)(U-U^{2}S)}+2\left.P\right|_{U(1+\epsilon)}\\ & =\left.P\right|_{(U-U\epsilon S)(1+\epsilon)(U-U^{2}S)}+2\left.P\right|_{U(1+\epsilon)}\\ & 
=-2\left.P\right|_{(U-U^{2}S)}+2\left.P\right|_{U(1+\epsilon)}\\ & =0 \end{align*} for $P\in\mathcal{W}^{+}(A)$. If $P\in\mathcal{W}^{+}(A)$ and $\iota(P)=0$, then \[ 0=\left.\iota(P)\right|_{1-S}=\left.P\right|_{U(1+\epsilon)(1-S)}=\left.P\right|_{(U-U^{2}S)(1-S)}=\left.P\right|_{(1+U+U^{2})(1-S)-(1-S)}=-2P. \] Thus $\iota$ is injective and \begin{equation} \dim_{\mathbb{Q}}\mathcal{W}^{+}(A)\leq\dim_{\mathbb{Q}}\ker(f_{A}).\label{eq:dim_ineq1} \end{equation} Since $\dim_{\mathbb{Q}}A^{+}=\dim_{\mathbb{Q}}(A/A^{-})$, we have $\dim_{\mathbb{Q}}\ker(f_{A})=\dim_{\mathbb{Q}}\ker f_{A}^{*}$. Thus, by (\ref{eq:ker_fAstar=00003DWplusB}), we have \begin{equation} \dim_{\mathbb{Q}}\ker(f_{A})=\dim_{\mathbb{Q}}\mathcal{W}^{+}(B).\label{eq:dim_eq1} \end{equation} By interchanging the roles of $A$ and $B$ in (\ref{eq:dim_ineq1}) and (\ref{eq:dim_eq1}), we also have \begin{equation} \dim_{\mathbb{Q}}\mathcal{W}^{+}(B)\leq\dim_{\mathbb{Q}}\ker(f_{B})\label{eq:dim_ineq2} \end{equation} and \begin{equation} \dim_{\mathbb{Q}}\ker(f_{B})=\dim_{\mathbb{Q}}\mathcal{W}^{+}(A).\label{eq:dim_eq2} \end{equation} By (\ref{eq:dim_ineq1}), (\ref{eq:dim_eq1}), (\ref{eq:dim_ineq2}), and (\ref{eq:dim_eq2}), we have \[ \dim_{\mathbb{Q}}\mathcal{W}^{+}(A)=\dim_{\mathbb{Q}}\ker(f_{A})=\dim_{\mathbb{Q}}\mathcal{W}^{+}(B)=\dim_{\mathbb{Q}}\ker(f_{B}). \] Thus $\iota$ is a bijection, and the proposition is proved. \end{proof} \subsection{Proof of main theorem} For $C=C_{a,b}\in\Gamma_{1}(N)\backslash{\rm SL}(2,\mathbb{Z})$, define $\mathbb{Q}$-linear maps $\lambda_{C}$, $\lambda_{C}^{\mathcal{S}}$ and $\lambda_{C}^{\mathcal{P}}$ from $V_{w}$ to $\mathcal{D}_{w+2,N}$ by \begin{align*} \lambda_{C}(X^{r}Y^{s}) & \coloneqq r!s!Z_{a,b}^{r+1,s+1},\\ \lambda_{C}^{\mathcal{S}}(X^{r}Y^{s}) & \coloneqq r!s!Z_{a+b}^{r+s+2},\\ \lambda_{C}^{\mathcal{P}}(X^{r}Y^{s}) & \coloneqq r!s!P_{a,b}^{r+1,s+1}.
\end{align*} Furthermore, for $\bullet\in\{\emptyset,\mathcal{S},\mathcal{P}\}$, define $\lambda^{\bullet}:\tilde{\bar{V}}_{w}^{\Gamma_{1}(N)}\to\mathcal{D}_{w+2,N}$ by \[ \lambda^{\bullet}(P)=\sum_{C\in\Gamma_{1}(N)\backslash{\rm SL}(2,\mathbb{Z})}\lambda_{C}^{\bullet}(P(CS)). \] Let us rewrite the defining relations of $\mathcal{D}_{w+2,N}$, \begin{equation} P_{a,b}^{r+1,s+1}=Z_{a,b}^{r+1,s+1}+Z_{b,a}^{s+1,r+1}+Z_{a+b}^{r+s+2}=\sum_{j=0}^{r}\binom{s+j}{j}Z_{a-b,b}^{r-j+1,s+j+1}+\sum_{j=0}^{s}\binom{r+j}{j}Z_{b-a,a}^{s-j+1,r+j+1},\label{eq:dsh1} \end{equation} in terms of $\lambda$, $\lambda^{\mathcal{S}}$ and $\lambda^{\mathcal{P}}$. Put \[ A(N,w)=\{(r,s,a,b):r,s\geq0,\,r+s=w,\,(a,b)\in A(N)\}. \] For $(r,s,a,b)\in A(N,w)$, define $Q_{a,b}^{r,s}\in\tilde{\bar{V}}_{w}^{\Gamma}$ by $Q_{a,b}^{r,s}(C)=\delta_{C_{a,b}S,C}\frac{X^{r}Y^{s}}{r!s!}$. Note that for $m\in\{0,1\}$ and $\gamma\in\epsilon^{m}{\rm SL}(2,\mathbb{Z})$, we have \begin{align*} \lambda^{\bullet}(\left.Q_{a,b}^{r,s}\right|_{\gamma}) & =\sum_{C}\lambda_{C}^{\bullet}(\left.Q_{a,b}^{r,s}\right|_{\gamma}(CS))=\sum_{C}\lambda_{C}^{\bullet}(\left.Q_{a,b}^{r,s}(J^{m}\epsilon^{m}CS\gamma^{-1})\right|_{\gamma})=\frac{1}{r!s!}\sum_{C}\delta_{C_{a,b}S,J^{m}\epsilon^{m}CS\gamma^{-1}}\lambda_{C}^{\bullet}(\left.X^{r}Y^{s}\right|_{\gamma})\\ & =\frac{1}{r!s!}\lambda_{J^{m}\epsilon^{m}C_{a,b}S\gamma S^{-1}}^{\bullet}(\left.X^{r}Y^{s}\right|_{\gamma})=\frac{1}{r!s!}\lambda_{\epsilon^{m}C_{a,b}(\gamma^{t})^{-1}}^{\bullet}(\left.X^{r}Y^{s}\right|_{\gamma}) \end{align*} where $\gamma^{t}$ is the transposed matrix of $\gamma$.
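The matrix and $\epsilon$-conjugation manipulations used throughout this section are finite checks inside $\mathrm{PGL}(2,\mathbb{Z})$ and can be verified mechanically. The sketch below does so, assuming the standard generator matrices $S=\left(\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\right)$, $T=\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$, $U=TS$, and $\epsilon=\left(\begin{smallmatrix}-1&0\\0&1\end{smallmatrix}\right)$; these explicit matrices are not fixed in this section, so they are an assumption here.

```python
# Sanity checks in PGL(2,Z): matrices are taken up to an overall sign.
S = ((0, -1), (1, 0))
T = ((1, 1), (0, 1))
EPS = ((-1, 0), (0, 1))   # assumed explicit form of epsilon

def mul(g, h):
    (a, b), (c, d) = g
    (p, q), (r, s) = h
    return ((a * p + b * r, a * q + b * s), (c * p + d * r, c * q + d * s))

def canon(g):
    # identify g with -g: make the first nonzero entry positive
    for x in (g[0][0], g[0][1], g[1][0], g[1][1]):
        if x:
            break
    if x < 0:
        g = ((-g[0][0], -g[0][1]), (-g[1][0], -g[1][1]))
    return g

I = ((1, 0), (0, 1))
U = canon(mul(T, S))
U2 = canon(mul(U, U))
SU, US = canon(mul(S, U)), canon(mul(U, S))
SUS, U2S = canon(mul(S, US)), canon(mul(U2, S))
conj = lambda g: canon(mul(EPS, mul(g, EPS)))

# relations S^2 = U^3 = 1 in PGL(2,Z)
assert canon(mul(S, S)) == I and canon(mul(U2, U)) == I
# epsilon-conjugation identities behind the expansions of |_{...(1+eps)}
assert conj(SUS) == U2 and conj(U2S) == SU

# group-algebra check: (2 + SUS - U^2 S + U^2 - SU) plus its right
# S-translate collapses to 2(1 + S), the step yielding P|_{1+S} = 0
def rmul(elt, g):
    out = {}
    for w, c in elt.items():
        k = canon(mul(w, g))
        out[k] = out.get(k, 0) + c
    return out

def add(x, y):
    out = dict(x)
    for w, c in y.items():
        out[w] = out.get(w, 0) + c
    return {w: c for w, c in out.items() if c}

e1 = {I: 2, SUS: 1, U2S: -1, U2: 1, SU: -1}
assert add(e1, rmul(e1, S)) == {I: 2, canon(S): 2}
print("all PGL(2,Z) identities verified")
```

The last assertion reproduces, at the level of the group algebra, the step "adding an identity to its right $S$-translate yields $2\left.P\right|_{1+S}$" used in the proofs of this section.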
Then the terms in (\ref{eq:dsh1}) can be described as follows: \begin{align*} P_{a,b}^{r+1,s+1} & =\frac{1}{r!s!}\lambda_{C_{a,b}}^{\mathcal{P}}(X^{r}Y^{s})=\lambda^{\mathcal{P}}(Q_{a,b}^{r,s}),\\ Z_{a,b}^{r+1,s+1} & =\frac{1}{r!s!}\lambda_{C_{a,b}}(X^{r}Y^{s})=\lambda(Q_{a,b}^{r,s}),\\ Z_{b,a}^{s+1,r+1} & =\frac{1}{r!s!}\lambda_{\epsilon C_{a,b}\epsilon S}(\left.X^{r}Y^{s}\right|_{\epsilon S})=\lambda(\left.Q_{a,b}^{r,s}\right|_{\epsilon S}),\\ Z_{a+b}^{r+s+2} & =\frac{1}{r!s!}\lambda_{C_{a,b}}^{\mathcal{S}}(X^{r}Y^{s})=\lambda^{\mathcal{S}}(Q_{a,b}^{r,s}),\\ \sum_{j=0}^{r}\binom{s+j}{j}Z_{a-b,b}^{r-j+1,s+j+1} & =\frac{1}{r!s!}\lambda_{C_{a-b,b}}(\left.X^{r}Y^{s}\right|_{T})=\lambda(\left.Q_{a,b}^{r,s}\right|_{T}),\\ \sum_{j=0}^{s}\binom{r+j}{j}Z_{b-a,a}^{s-j+1,r+j+1} & =\frac{1}{r!s!}\lambda_{C_{b-a,a}}(\left.X^{r}Y^{s}\right|_{\epsilon ST})=\lambda(\left.Q_{a,b}^{r,s}\right|_{\epsilon ST}). \end{align*} Thus (\ref{eq:dsh1}) is equivalent to \begin{equation} \lambda^{\mathcal{P}}(P)=\lambda(\left.P\right|_{(1+\epsilon S)})+\lambda^{\mathcal{S}}(P)=\lambda(\left.P\right|_{(1+\epsilon S)T})\label{eq:dsh2} \end{equation} for $P=Q_{a,b}^{r,s}\in\tilde{\bar{V}}_{w}^{\Gamma}$. Let $\bar{V}_{w}^{\Gamma,\pm}$ be the $\pm$-eigenspaces of $\bar{V}_{w}^{\Gamma}$ for the action of $\epsilon$. Let $\delta:\bar{V}_{w}^{\Gamma}\to\bar{V}_{w}^{\Gamma}/\bar{V}_{w}^{\Gamma,-}$ be the map defined by \[ \delta(P)=(\left.P\right|_{1+SU^{2}S-SU}\bmod\bar{V}_{w}^{\Gamma,-}). \] Then the induced dual map $\delta^{*}:V_{w}^{\Gamma,+}\to V_{w}^{\Gamma}$ is expressed as \[ \delta^{*}(P)=\left.P\right|_{1+(SU^{2}S)^{-1}-(SU)^{-1}}=\left.P\right|_{1+SUS-U^{2}S}. \] \begin{prop} \label{prop:kernel_delta_star}For an element $P$ of $V_{w}^{\Gamma,+}$, the following conditions are equivalent. \begin{enumerate} \item $\delta^{*}(P)\in V_{w}^{\Gamma,-}$, \item $\delta^{*}(P)=0$, \item $P\in W_{w}^{\Gamma,+}$. \end{enumerate} \end{prop} \begin{proof} The implication (3)$\Rightarrow$(2) follows by a direct calculation, and the implication (2)$\Rightarrow$(1) is trivial.
It remains to prove (1)$\Rightarrow$(3). Assume that $\delta^{*}(P)\in V_{w}^{\Gamma,-}$ and $P\in V_{w}^{\Gamma,+}$. Then \begin{align} 0 & =\left.\delta^{*}(P)\right|_{1+\epsilon}=\left.P\right|_{(1+SUS-U^{2}S)(1+\epsilon)}=\left.P\right|_{2+SUS-U^{2}S+U^{2}-SU}.\label{eq:e1} \end{align} Acting by $S$ on (\ref{eq:e1}), we also have \begin{equation} 0=\left.P\right|_{2S+SU-U^{2}+U^{2}S-SUS}.\label{eq:e2} \end{equation} Adding (\ref{eq:e1}) and (\ref{eq:e2}), we obtain \begin{equation} \left.P\right|_{1+S}=0.\label{eq:e3} \end{equation} By (\ref{eq:e2}) and (\ref{eq:e3}), we have \begin{equation} \left.P\right|_{(1+U+U^{2})(S-1)}=0.\label{eq:e4} \end{equation} By (\ref{eq:e3}), (\ref{eq:e4}) and Proposition \ref{prop:W_general}, we have $P\in W_{w}^{\Gamma,+}$. Hence (1)$\Rightarrow$(3) is also proved. \end{proof} \begin{lem} \label{lem:dsh_in_kerdelta}We have \[ \lambda^{-1}(\mathcal{P}_{k,N}^{{\rm ev}})\subset\ker\delta. \] \end{lem} \begin{proof} By definition, $\lambda^{-1}(\mathcal{P}_{k,N}^{{\rm ev}})$ is spanned by \[ \{\left.P\right|_{1+\epsilon S}:P\in\bar{V}_{w}^{\Gamma_{1}(N),-}\}\cup\{\left.P\right|_{(1+\epsilon S)(1-T)}:P\in\bar{V}_{w}^{\Gamma_{1}(N)}\}. \] Note that $Q\in\ker\delta$ if and only if $\left.Q\right|_{(1+SU^{2}S-SU)(1+\epsilon)}=0$. For $P\in\bar{V}_{w}^{\Gamma_{1}(N)}$, we have \[ \left.P\right|_{(1+\epsilon S)(1+SU^{2}S-SU)(1+\epsilon)}=\left.P\right|_{(1+\epsilon)(1+S)}, \] where the last expression vanishes if $P\in\bar{V}_{w}^{\Gamma_{1}(N),-}$. Thus \[ \{\left.P\right|_{1+\epsilon S}:P\in\bar{V}_{w}^{\Gamma_{1}(N),-}\}\subset\ker\delta. \] Furthermore, for $P\in\bar{V}_{w}^{\Gamma_{1}(N)}$, we have \[ \left.P\right|_{(1+\epsilon S)(1-T)(1+SU^{2}S-SU)(1+\epsilon)}=\left.P\right|_{(1+\epsilon S)(1-\epsilon S)(1+SU^{2}S-SU-US-S+U^{2})}=0. \] Thus, we also have \[ \{\left.P\right|_{(1+\epsilon S)(1-T)}:P\in\bar{V}_{w}^{\Gamma_{1}(N)}\}\subset\ker\delta.
\] Hence the lemma is proved. \end{proof} \begin{prop} \label{prop:P_R_injectivity}Let $P\in W_{w}^{\Gamma,+}$. If $\left.P\right|_{U(1+\epsilon)}=0$, then $P=0$. \end{prop} \begin{proof} This is a special case of the injectivity of $\iota$ in Proposition \ref{prop:fA}. \end{proof} Note that \[ \mathcal{P}_{k,N}^{{\rm ev}}=\lambda(V_{w}^{\Gamma_{1}(N),-,{\rm sym}})+\lambda^{\mathcal{S}}(V_{w}^{\Gamma_{1}(N)}), \] where \[ V_{w}^{\Gamma_{1}(N),-,{\rm sym}}\coloneqq\{u\in V_{w}^{\Gamma_{1}(N),-}:\left.u\right|_{\epsilon S}=u\}. \] The following is a refined version of Theorem \ref{thm:main_intro}. \begin{thm} \label{thm:main}Let $P\in\bar{W}_{w}^{\Gamma,+}$. Put $Q\coloneqq\left.P\right|_{U}$ and $Q^{\pm}=\frac{1}{2}\left.Q\right|_{1\pm\epsilon}\in V_{w}^{\Gamma,\pm}$. Then \[ Q^{-}\in V_{w}^{\Gamma_{1}(N),-,{\rm sym}} \] and \[ 3\lambda(Q^{+})=-\lambda(Q^{-})-\lambda^{\mathcal{S}}(Q)\in\mathcal{P}_{k,N}^{{\rm ev}}. \] Conversely, $\lambda(R)$ with $R\in\bar{V}_{w}^{\Gamma,+}$ belongs to $\mathcal{P}_{k,N}^{{\rm ev}}$ if and only if $R=Q^{+}$ arises in this way. \end{thm} \begin{proof} Note that \[ \left.Q\right|_{\epsilon S-1}=\left.P\right|_{U(\epsilon S-1)}=P, \] and thus \[ \left.Q^{-}\right|_{\epsilon S-1}=\frac{1}{2}\left.Q\right|_{(\epsilon S-1)(1-\epsilon)}=\frac{1}{2}\left.P\right|_{1-\epsilon}=0, \] which implies \[ Q^{-}\in V_{w}^{\Gamma_{1}(N),-,{\rm sym}}. \] By (\ref{eq:dsh2}), \begin{align*} -\lambda^{\mathcal{S}}(Q) & =\lambda(\left.Q\right|_{(1+\epsilon S)(1-T)})\\ & =\lambda(\left.P\right|_{U(1+\epsilon S)(1-T)})\\ & =\lambda(\left.P\right|_{U(2+\epsilon)})\\ & =\frac{1}{2}\lambda(\left.P\right|_{U(2+\epsilon)(1+\epsilon)})+\frac{1}{2}\lambda(\left.P\right|_{U(2+\epsilon)(1-\epsilon)}).
\end{align*} Here \[ \frac{1}{2}\lambda(\left.P\right|_{U(2+\epsilon)(1+\epsilon)})=\frac{3}{2}\lambda(\left.P\right|_{U(1+\epsilon)})=3\lambda(Q^{+}) \] and \begin{align*} \frac{1}{2}\lambda(\left.P\right|_{U(2+\epsilon)(1-\epsilon)}) & =\frac{1}{2}\lambda(\left.P\right|_{U(1-\epsilon)})=\lambda(Q^{-}). \end{align*} Thus \[ 3\lambda(Q^{+})=-\lambda(Q^{-})-\lambda^{\mathcal{S}}(Q), \] which completes the proof of the theorem except for the converse part. By Lemma \ref{lem:dsh_in_kerdelta}, \begin{equation} \dim_{\mathbb{Q}}\lambda^{-1}(\mathcal{P}_{k,N}^{{\rm ev}})\leq\dim_{\mathbb{Q}}(\ker\delta).\label{eq:ineq1} \end{equation} By Proposition \ref{prop:kernel_delta_star}, \begin{equation} \dim_{\mathbb{Q}}\ker(\delta)=\dim_{\mathbb{Q}}W_{w}^{\Gamma,+}.\label{eq:eq2} \end{equation} By Proposition \ref{prop:fA}, we have \begin{equation} \dim_{\mathbb{Q}}W_{w}^{\Gamma,+}=\dim_{\mathbb{Q}}\bar{W}_{w}^{\Gamma,+}.\label{eq:eq3} \end{equation} By (\ref{eq:ineq1}), (\ref{eq:eq2}), and (\ref{eq:eq3}), we have \begin{equation} \dim_{\mathbb{Q}}\lambda^{-1}(\mathcal{P}_{k,N}^{{\rm ev}})\leq\dim_{\mathbb{Q}}\bar{W}_{w}^{\Gamma,+}.\label{eq:tar_ineq_1} \end{equation} Therefore, the converse part follows from (\ref{eq:tar_ineq_1}) since the $\mathbb{Q}$-linear map $P\mapsto Q^{+}$ embeds $\bar{W}_{w}^{\Gamma,+}$ into $\lambda^{-1}(\mathcal{P}_{k,N}^{{\rm ev}})$ by Proposition \ref{prop:P_R_injectivity}. \end{proof} Writing Theorem \ref{thm:main} out explicitly in terms of the coefficients of the period polynomials, we obtain Theorem \ref{thm:main_intro} as follows. \begin{proof}[Proof of Theorem \ref{thm:main_intro}] By definition, \[ P(C_{a,-a+b})(X+Y,-X)=\left.P\right|_{U}(C_{a,b}S), \] and thus \[ \left.P\right|_{U}(C_{a,b}S)=\sum_{r,s}\frac{1}{r!s!}q_{a,b}^{r+1,s+1}X^{r}Y^{s}. \] Therefore \[ \left.P\right|_{U}=\sum_{(r,s,a,b)\in A(N,w)}q_{a,b}^{r+1,s+1}Q_{a,b}^{r,s}. \] Let $Q=\left.P\right|_{U}$ and $Q^{\pm}=\frac{1}{2}\left.Q\right|_{1\pm\epsilon}$ as in Theorem \ref{thm:main}.
Then \begin{align*} Q & =\sum_{(r,s,a,b)\in A(N,w)}q_{a,b}^{r+1,s+1}Q_{a,b}^{r,s},\\ Q^{+} & =\sum_{(r,s,a,b)\in A(N,w)}q_{a,b}^{r+1,s+1,{\rm od}}Q_{a,b}^{r,s},\\ Q^{-} & =\sum_{(r,s,a,b)\in A(N,w)}q_{a,b}^{r+1,s+1,{\rm ev}}Q_{a,b}^{r,s}, \end{align*} and thus the conditions \[ q_{b,a}^{s,r,{\rm od}}=(-1)^{r+1}q_{-a,b}^{r,s,{\rm od}}=(-1)^{s+1}q_{a,-b}^{r,s,{\rm od}},\quad q_{a,b}^{r,s,{\rm ev}}=(-1)^{r}q_{-a,b}^{r,s,{\rm ev}}=(-1)^{s}q_{a,-b}^{r,s,{\rm ev}}=q_{b,a}^{s,r,{\rm ev}} \] follow from $Q^{\pm}\in\tilde{\bar{V}}_{w}^{\Gamma,\pm}$ and $Q^{-}\in V_{w}^{\Gamma_{1}(N),-,{\rm sym}}$. Furthermore, by \[ 3\lambda(Q^{+})=-\lambda(Q^{-})-\lambda^{\mathcal{S}}(Q), \] we have \[ 3\sum_{r,s,a,b}q_{a,b}^{r,s,{\rm od}}Z_{a,b}^{r,s}=-\sum_{r,s,a,b}q_{a,b}^{r,s,{\rm ev}}Z_{a,b}^{r,s}-\sum_{r,s,a,b}q_{a,b}^{r,s}Z_{a+b}^{r+s} \] since $\lambda(Q_{a,b}^{r,s})=Z_{a,b}^{r+1,s+1}$ and $\lambda^{\mathcal{S}}(Q_{a,b}^{r,s})=Z_{a+b}^{r+s+2}$. Finally, the converse part follows from the converse part of Theorem \ref{thm:main} since \[ \sum_{r,s,a,b}c_{a,b}^{r,s}Z_{a,b}^{r,s}=\lambda(\sum_{r,s,a,b}c_{a,b}^{r,s}Q_{a,b}^{r-1,s-1}) \] and \[ \sum_{r,s,a,b}c_{a,b}^{r,s}Q_{a,b}^{r-1,s-1}\in\bar{V}_{w}^{\Gamma,+}.\qedhere \] \end{proof} \subsection*{Acknowledgements} This work was supported by JSPS KAKENHI Grant Numbers JP18K13392 and JP22K03244. \bibliographystyle{plain}
\section{Introduction} The discovery of topologically nontrivial quantum states in condensed matter systems, such as in topological insulators\cite{bernevig2006TI, zhang2009TI, xia2009TI, fu2007TI, fu2007TI2, Teo2008TI, hsieh2008TI, chen2009BiTe}, Dirac semimetals\cite{Young2012dirac,Liu2014Na3Bi, Liu2014Cd3As2, Xu2015dirac} and Weyl semimetals\cite{Wan2011weyl, Burkov2011weyl, Weng2015weyl, Lu2015weyl, Xu2015TaAs, Lv2015TaAs, Yang2015weyl, Huang2015weyl, Xu2015weyl}, provides a platform for investigating particles with Dirac-like linear dispersions. This allows testing of particle physics predictions in crystalline solids, while the topologically protected nature of the states may prove useful for spintronics. Many of these materials also exhibit extremely high magnetoresistance (XMR), with potential applications in reading out magnetically-stored data. Recently, the lanthanum monopnictides La$X$ ($X$ = P, As, Sb and Bi) were predicted to be topological insulators\cite{Fuliang2015}. This inspired an explosion of mainly transport work on LaSb\cite{Kasuya1993,Tafti2016,Zeng2016}, LaBi\cite{Kasuya1993,Stepanov2015,Sun2016,Wu2016,Kumar2016}, and YSb\cite{Ghimire2016,Yu2016,Pavlosiuk2016}, all of which crystallize in the well-known NaCl structure. All show a large unsaturated magnetoresistance, which is often attributed to the quantum limit of the Dirac fermions \cite{Abrikosov1998}, but may also arise from complicated factors such as electron-hole compensation \cite{Yang1999, Mun2012}. A recent angle-resolved photoemission spectroscopy (ARPES) study on WTe$_2$ suggested that its anomalously large magnetoresistance may be attributable to spin and orbital angular momentum textures which would suppress backscattering of the quasiparticles\cite{Jiang2015}. Similar to the case of Cd$_3$As$_2$\cite{liang2015CdAs}, such a mechanism would be progressively invalidated under a magnetic field, thus causing large magnetoresistance. 
First-principles calculations of LaBi and LaSb based on the two-band model, however, claimed that these materials are topologically trivial and their extremely high magnetoresistance could be attributed to perfect electron-hole compensation, without recourse to topologically nontrivial states\cite{Guo2016}. Recent ARPES experiments indicate that LaSb is topologically trivial and its properties are well explained by electron-hole compensation\cite{Zeng2016}. On the other hand, ARPES experiments on LaBi show linear band dispersion\cite{Wu2016} and indicate that LaBi may host an odd number of surface Dirac cones\cite{Nayak2016}, making it topologically nontrivial. One Dirac cone is located at the surface Brillouin zone (SBZ) center while the other two are found at the SBZ corner\cite{Nayak2016}. In YSb, meanwhile, the negative Hall coefficient indicates that electrons are the dominant carriers, calling into question the perfect compensation of electrons and holes\cite{Yu2016}. Unfortunately, the Dirac cones in LaBi were not very clearly resolved\cite{Nayak2016}, necessitating a closer investigation of its band structure and suggesting that a re-examination of LaSb would also be desirable. In this paper, we report the surface and bulk electronic structures of LaBi and LaSb by ARPES. We identify a clear band anti-crossing along the $\bar{\varGamma}$--$\bar{X}$ direction of LaBi, which perfectly matches the calculated results\cite{Nayak2016}. We show clear evidence that LaBi hosts one Dirac cone at the SBZ center and two Dirac-cone-like surface bands at the zone corner. The results on LaSb differ from the previous ARPES data\cite{Zeng2016} --- we do find evidence of Dirac-cone-like surface bands. Moreover, both surface and near-surface bulk bands exhibit circular dichroism (CD). Our results unveil the exotic surface states in LaBi and LaSb, which might be topologically nontrivial in nature.
In addition to electron-hole compensation, forbidden backscattering in surface and near-surface bulk bands likely contributes to the low zero-field resistivity, which would give rise to anomalous XMR when applying magnetic field in LaBi and LaSb. \begin{figure*}[htb] \includegraphics[width=\textwidth]{Fig1.pdf} \caption{\label{transport}Transport results on LaSb and LaBi. Resistivity of (a) LaSb and (b) LaBi under various fields parallel to (001). These samples had residual resistivity ratios (RRRs) of 334 and 100, respectively. The insets compare the field-dependence of selected samples at 1.8\,K, and demonstrate clear Shubnikov-de~Haas oscillations. (c) and (d) show the temperature derivatives of the data in panels a and b, respectively. (e) Quantum oscillations were clearly visible in LaSb down to low fields at 1.8\,K, and implied at least two distinct frequencies as shown in panel f [colors as for the inset to panel a].} \end{figure*} \section{Experimental} Black, blocky crystals of typical dimension $5\times 5\times 5$\,mm$^3$ were grown from metal fluxes, closely following established techniques for this family of materials\cite{Canfield1991}. To grow LaSb, La metal powder (Alfa Aesar, 99.9\%), Sb spheres (Alfa Aesar, 99.999\%), and Sn pieces (Aladdin, 99.999\%) were sealed under vacuum in a quartz tube, with atomic ratio 1.5:1:20. The small La excess was used to compensate for loss by reaction to the quartz. To grow LaBi, La powder, Bi powder (Alfa, 99.5\%), and In pieces (Alfa, 99.99\%) were sealed under vacuum in a quartz tube, with atomic ratio 1:1:20. The mixtures were heated to 1050$^\circ$C, held for 2 hours, cooled over the course of 5--7~days to 750$^\circ$C, then cooled freely to room temperature. A small temperature gradient was applied, with the bottom end of the quartz tube cooler, to encourage crystallization. The resulting solidified ingot was then remelted at 350$^\circ$C and the tin or indium flux was centrifuged off, revealing the crystals. 
In both cases, the first crystals obtained were large and of excellent quality, so no further attempts to optimize the growth process were made. Resistivity measurements were performed between 1.8 and 300\,K in fields up to 9\,T in a Quantum Design PPMS by a standard four-probe technique, with a drive current of 8\,mA; for each compound, several pieces of the same crystal were measured. Quantum oscillations were isolated by subtracting a quartic polynomial fit from the field-dependent resistivity data, before performing a Fourier transform. Since both Sn and In superconduct within the measurement window, the resistivity also provides a test for remnant metal flux. High-resolution ARPES measurements were performed at beamline 5-4 of the Stanford Synchrotron Radiation Lightsource (SSRL), using Scienta R4000 electron analyzers. The experimental geometry is depicted in Fig.~\ref{FS}(c). The overall energy resolution was 10\,meV, and the angular resolution was 0.3$^\circ$. All samples were cleaved \textit{in situ} under ultrahigh vacuum conditions and measured at temperatures less than 15\,K. During measurements, the spectral quality was carefully monitored to avoid degradation of the data due to sample aging. \section{Results} \begin{figure*}[htb] \includegraphics[width=\textwidth]{Fig2.pdf} \caption{\label{FS} (a) Crystal structure of LaBi and LaSb. (b) Brillouin zone of bulk LaSb/LaBi and the (001)-projected surface Brillouin zone. (c) The ARPES experimental setup. The analyzer slit is perpendicular to the mirror plane. The emission plane is defined by the analyzer slit and the sample surface normal. The angle between the direction of the beam and the emission plane is $5.83^{\circ}$. In the emission plane, the angle between the projection of the beam direction and the sample normal direction is $55^{\circ}$.
(d), (f) False-color plots of the photoemission intensity at the Fermi energy ($E_F$) of LaBi and LaSb, respectively, integrated over the energy window ($E_F-15$\,meV, $E_F + 15$\,meV). These Fermi surface maps have been fourfold-symmetrized. The intensity around $\bar{X}$ of LaSb has been enhanced to compensate for the weak signal. (e), (g) Schematic representations of the Fermi surfaces in panels d and f, respectively; pocket sizes have been exaggerated for clarity. Data in panels d and f were taken with 30 eV linearly-polarized and 24 eV circularly-polarized photons, respectively.} \end{figure*} \begin{figure*}[htb] \includegraphics[width=\textwidth]{Fig3.pdf} \caption{\label{bands}Band structures of LaBi and LaSb. (a-c) Photoemission intensity $I(k, \omega)$ of LaBi and its corresponding 2D curvature intensity plot along the $\bar{\varGamma}$--$\bar{X}$ direction [cuts \#1 -- 3, respectively, in Fig.~\ref{FS}(e)]. (d-f) Photoemission intensity $I(k, \omega)$ of LaSb and its corresponding 2D curvature intensity plot along cuts \#1 -- 3, respectively, in Fig.~\ref{FS}(g). The Greek letters indicate bulk bands, while $s_1$ through $s_3$ label surface bands. Data in panels a-c and d-f were taken with 22 eV and 26 eV photons, respectively.} \end{figure*} The resistivity $\rho$ of both LaSb and LaBi, shown in Fig.~\ref{transport}, was measured to verify that our samples behaved as in other reports. The residual resistivity ratio RRR = $\rho$(300\,K)/$\rho(T\rightarrow 0$\,K), which often serves as a measure of sample quality, varied significantly between different pieces of the same crystal, but in all cases was high. At 1.8\,K, Shubnikov-de~Haas oscillations were clearly visible at least as low as 3\,T, indicative of the remarkably high crystal quality readily obtained in these materials.
The highest-RRR crystal of each material showed a possible drop in resistivity at low temperatures, suggestive of remnant flux; however, the quantum oscillations were strongest in these samples, indicating that the RRR reflects the intrinsic sample quality. The suspect low-temperature points were excluded from all analyses. The lowest quantum oscillation frequencies in LaSb of 210.4 and 430.8\,T correspond to extremal areas perpendicular to the cubic (001) axis of 2.008 and 4.112\,nm$^{-2}$, respectively. Quantum oscillations were not investigated in detail, since comprehensive angle-dependent studies are already available on both LaBi\cite{Yoshida2001,Kitazawa1983,Hasegawa1985} and LaSb\cite{Kitazawa1983,Hasegawa1985,Settai1993,Yoshida2000}, and our frequencies are fully consistent with previous reports. Higher magnetoresistance was generally obtained for samples with smaller cross-sections, as would be expected if the surface and bulk provided separate transport channels, but the number of samples measured does not allow for a reliable conclusion in this regard. In such a scenario, the contribution from topologically-protected surface states would make the residual resistivity ratio less useful for characterizing sample quality. As can be seen in Figs.~\ref{transport}(c) and (d), the highest slope in the low-temperature upturn remains at constant temperature. The transport results presented here are consistent with those of other groups\cite{Kitazawa1983,Hasegawa1985,Settai1993,Kasuya1993,Stepanov2015,Tafti2016,Zeng2016,Sun2016,Wu2016,Kumar2016}, and serve to demonstrate that the crystals on which we report ARPES behave exactly as expected. Having verified that our crystals exhibit the same magnetoresistance upturn that recently piqued interest in these materials, we now turn to ARPES.
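For reference, the extremal areas quoted above follow from the Shubnikov-de~Haas frequencies via the Onsager relation $F=(\hbar/2\pi e)A_k$, and the carrier densities tabulated in the Discussion follow from the Fermi-pocket volumes via $n=2V_k/(2\pi)^3$ (the factor of 2 being spin degeneracy). The following is a minimal sketch of both conversions; the formulas are the standard ones and are stated here as an assumption, since the text does not spell them out.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s (CODATA)
E = 1.602176634e-19      # elementary charge, C (CODATA)

def onsager_area(freq_tesla):
    """Extremal k-space cross-section (nm^-2) from a quantum-oscillation
    frequency F (tesla), via the Onsager relation F = (hbar / 2 pi e) A_k."""
    area_m2 = 2 * math.pi * E * freq_tesla / HBAR
    return area_m2 * 1e-18   # convert m^-2 -> nm^-2

def carrier_density(volume_nm3):
    """Carrier density (cm^-3) from a Fermi-pocket volume V_k (nm^-3),
    using n = 2 V_k / (2 pi)^3 with a spin-degeneracy factor of 2."""
    n_m3 = 2 * (volume_nm3 * 1e27) / (2 * math.pi) ** 3
    return n_m3 * 1e-6       # convert m^-3 -> cm^-3

# reproduces the LaSb values quoted above:
# 210.4 T -> ~2.008 nm^-2 and 430.8 T -> ~4.112 nm^-2
print(onsager_area(210.4), onsager_area(430.8))
```

The same conversion applied to, e.g., a 22.2\,nm$^{-3}$ pocket volume gives $1.79\times10^{20}$\,cm$^{-3}$, matching the carrier-density table in the Discussion.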
We note that ARPES probes the electronic structure near the surface, where the atoms have different coordination than in the bulk, and that electron density in surface states must come at the expense of bulk bands --- we use ``bulk bands'' to refer to the near-surface bulk-like bands. At ambient pressure, LaBi and LaSb crystallize in the simple rock salt structure as illustrated in Fig.~\ref{FS}(a). The corresponding Brillouin zone (BZ) and the (001)-projected SBZ are depicted in Fig.~\ref{FS}(b). The Fermi surfaces of LaBi and LaSb in the $k_x$-$k_y$ plane are shown in Figs.~\ref{FS}(d) and (f), respectively, and their corresponding schematic representations are depicted in panels (e) and (g), respectively. The Fermi surface topologies are basically consistent with calculations\cite{Hasegawa1985,Guo2016}, including one electron pocket at the BZ corner (the $\delta$ band) and two hole pockets at the BZ center (the $\alpha$ and $\beta$ bands). The perpendicular $\delta$ and $\varepsilon$ pockets in Figs.~\ref{FS}(d-g) are from two inequivalent $X$ points ($X_2$ and $X_1$, in blue), and appear together due to the poor $k_z$ resolution of our experiments using vacuum ultraviolet photons (VUV-ARPES) --- the bands from a specific $k_z$ plane may have a projection over a wide range of $k_z$\cite{Song2016,Xu2016}. Meanwhile, the $X_3$ point (beige) is projected to the $\bar{\varGamma}$ point for the same reason. \begin{figure*}[htb] \includegraphics[width=\textwidth]{Fig4.pdf} \caption{\label{kz} The Fermi surface and photoemission intensity below $E_F$ plotted as a function of $k_z$ for LaBi and LaSb. Photoemission intensity maps and corresponding Momentum Distribution Curves (MDCs) around the surface Brillouin zone (a) center and (b) corner in the $k_y$-$k_z$ plane of LaBi are shown at $E_F$ and at the Dirac points' binding energies. (c) Fermi crossings of the $s_1$ and $s_2$ bands of LaBi as a function of photon energy.
(d), (e) $k_y$-$k_z$-plane maps and corresponding MDCs of LaSb. (f) Fermi crossings of the $s_1$ and $s_2$ bands of LaSb as a function of photon energy. The dashed lines show the bands and Dirac points, where identifiable. Different $k_z$s were accessed by varying the photon energy.} \end{figure*} \begin{figure*}[htb] \includegraphics[width=\textwidth]{Fig5.pdf} \caption {\label{cd} Circular dichroism of band structures at $\bar{\varGamma}$ and $\bar{X}$. (a) The difference intensity plot of cut \#1 in Fig.~\ref{FS}(e) of LaBi measured under RCP and LCP light, with several illustrative MDCs at binding energies and with polarizations as marked. (b, c) Similar data on LaSb on cuts \#1 and \#2 in Fig.~\ref{FS}(g), respectively. Data in panels a, b and c were taken with 22, 22 and 26~eV photons, respectively.} \end{figure*} \begin{figure}[htb] \includegraphics[width=\columnwidth]{Fig6.pdf} \caption{\label{DP} Illustrations of the Dirac cones in the surface Brillouin zones of LaBi and LaSb. LaBi has one Dirac cone at $\bar{\varGamma}$ and two Dirac cones at $\bar{X}$. LaSb has two Dirac cones at $\bar{X}$ and one or two Dirac cones at $\bar{\varGamma}$. The light brown Dirac cone may be a projection from the $X_3$ point. } \end{figure} \begin{figure}[htb] \includegraphics[width=\columnwidth]{Fig7.pdf} \caption{\label{EDC} (a,b) and (c,d) The EDCs for the photoemission intensity in Figs.~\ref{bands}(a,b) and (d,e), respectively. The orange curves identify the EDCs at the $\bar{\varGamma}$ and $\bar{X}$ points.} \end{figure} The band dispersions of LaBi and LaSb are shown in Fig.~\ref{bands}. Besides the calculated bulk bands\cite{Guo2016, Kumigashira1998}, we observe several surface bands. Figs.~\ref{bands}(a) and (b), respectively, show the photoemission intensity of LaBi along cuts \#1 and 2 together with the corresponding 2D curvature intensity plots. 
Around the $\bar{\varGamma}$ point in Fig.~\ref{bands}(a), we are able to resolve two hole-like bands ($\alpha$ and $\beta$), one electron-like band ($\gamma$), and the surface $s_1$ band, which has a linear dispersion and an apparent Dirac point at a binding energy of $-$0.24\,eV. The $\alpha$ and $\beta$ bands are consistent with band structure calculations\cite{Guo2016}, while the parabolic-like $\gamma$ band is the projection of the electron pocket at the $X_3$ point. The bottom of the $\gamma$ band and the Dirac point (DP1) of the surface $s_1$ band can be clearly distinguished. In Fig.~\ref{bands}(b), there is a clear anti-crossing of the $\beta$ and $\varepsilon$ bands along the $\bar{\varGamma}$-$\bar{X}$ direction, which is predicted by band structure calculations\cite{Fuliang2015} but was not observed in the previous ARPES data\cite{Kumigashira1998,Wu2016,Nayak2016}. Interestingly, there are two apparent Dirac points at the $\bar{X}$ point --- DP2 and DP3 occur at binding energies of $-$0.16 and $-$0.35\,eV, respectively. The bulk band $\delta$ is too weak to be observed at $\bar{X}$, possibly due to the strong signal from the surface $s_2$ band. At cut \#3 near the $\bar{X}$ point [Fig.~\ref{bands}(c)], we can clearly see both the parabolic $\delta$ band and the surface $s_2$ band. For LaSb, around $\bar{\varGamma}$ in Fig.~\ref{bands}(d), there are two hole-like bands ($\alpha$ and $\beta$) and a linear surface band $s_1$ with its Dirac point DP1 located at $-$0.27\,eV. The black dashed line below $-$0.4\,eV indicates the parabolic bulk $\omega$ band. Above the $\omega$ band, there is a second apparent Dirac cone DP3', with its apex at $-$0.37\,eV. The $\gamma$ band is not clearly resolved in cut \#1. The bands near $E_F$ around $\bar{X}$ are much weaker in Fig.~\ref{bands}(e), preventing observation of the band anti-crossing along the $\bar{\varGamma}$-$\bar{X}$ direction.
However, similar to the surface states in LaBi, the linearly-dispersing $s_2$ and $s_3$ surface states and the two Dirac cones (DP2 at $-$0.29\,eV and DP3 at $-$0.37\,eV) still exist at $\bar{X}$ in LaSb. Note that only half of each Dirac-cone-like band is visible in cuts~\#1 and \#2. Fig.~\ref{bands}(f) shows the band structure along cut~\#3 in the $\bar{\varGamma}$-$\bar{M}$ direction. Here we can see the parabolic-like bulk $\gamma$ band more clearly than in Fig.~\ref{bands}(d), demonstrating the close similarity to LaBi. An additional hole-like band $\zeta$ is observed, which is also found in the bulk band calculations\cite{Kumigashira1998}. To verify the two-dimensional nature of the surface states, we performed photon-energy-dependent measurements from 10 to 40\,eV to scan $k_z$. Figs.~\ref{kz}(a) and (b) show the $k_z$-$k_y$-plane maps of LaBi at $E_F$ and selected higher binding energies around the BZ center and corner, respectively. The $s_1$ and $s_2$ bands show two-dimensional character. The $k_z$-$k_y$-plane maps of DP1, DP2 and DP3 demonstrate that these states do not disperse along the $k_z$ direction, identifying them as surface states. Some of the intensity in the $k_z$-$k_y$-plane maps derives from the bulk bands. The $k_z$ dispersions of the Fermi crossings of $s_1$ and $s_2$ in Fig.~\ref{kz}(c) demonstrate that these are two totally distinct, $k_z$-nondispersive surface bands, while the three Dirac points occur at different energies. Thus the $s_1$ band cannot be a projection from the $X_3$ point. Figs.~\ref{kz}(d) and (e) show similar $k_z$-$k_y$-plane maps for LaSb. The $s_1$ and $s_2$ bands and the Dirac points show two-dimensional behavior as in LaBi. The $k_z$ dispersions of the Fermi crossings of $s_1$ and $s_2$ in Fig.~\ref{kz}(f) also demonstrate that the $s_1$ band cannot be a projection of the $s_2$ band from the $X_3$ point.
However, a similar comparison is difficult for the $s_3'$ and $s_3$ bands in LaSb because of their proximity to bulk bands. Dirac points DP3' and DP3 occur at the same energy ($-$0.37\,eV), suggesting that the $s_3'$ band is the projection of the $s_3$ band from the $X_3$ point, but this remains to be confirmed. If the $s_3'$ band is indeed the projection of the $s_3$ band, the total number of Dirac points below $E_F$ in both compounds would be three, which would suggest that they are both topologically nontrivial. The differential coupling of right-circularly (RCP) versus left-circularly (LCP) polarized light can reveal the orbital angular momentum (OAM) of an electronic state\cite{Wang2013CD}. The difference between transition matrix elements of photoelectron final states results in circular dichroism (CD) proportional to the inner product of the OAM direction and the incoming photon direction\cite{Park2012CD}. In strongly spin-orbit-coupled materials, both the spin and orbital angular momenta of a state would exhibit conjugate textures around the Fermi surface. Therefore, to further explore the spin texture of the surface states, we performed CD-ARPES experiments. Fig.~\ref{cd} shows the CD of the ARPES band structures and momentum distribution curves (MDCs) at illustrative binding energies under RCP and LCP light. The photoemission intensity exhibits a strong inversion between the RCP and LCP data. Interestingly, from the detailed MDCs, we can see clear CD of both the surface and bulk bands. Circular dichroism is natural and expected for topological surface states, and supports the ``forbidden backscattering'' mechanism for magnetoresistance of the surface state. However, bulk La$X$ materials are believed to be both inversion- and time-reversal-symmetric.
In such a case, one can rigorously prove that the spin states must be degenerate\cite{Dresselhaus1955,fu2007TI2}, regardless of SOC effects; therefore, the ``forbidden backscattering'' mechanism should not apply to the bulk bands. However, ARPES probes the near-surface electronic structure, where inversion is explicitly broken by the presence of a surface. The near-surface bulk band structure detected at the surface can differ from the actual bulk band structure. Thus the CD observed for the near-surface bulk bands may suggest that these bands are strongly influenced by the surface, and forbidden backscattering may play a role in the magnetoresistance for all states near the surface, not just the Dirac cones. This may be similar to the case of WTe$_2$\cite{Jiang2015} and Cd$_3$As$_2$\cite{liang2015CdAs}. \section{Discussion} \begin{table}[htb] \caption{\label{carrier density} ARPES Fermi surface volumes and carrier densities, compared with those from de~Haas-van~Alphen (dHvA). Labelling of Fermi surface sheets refers to Fig.~\ref{FS}(e); dHvA values were extracted from Ref.~\onlinecite{Hasegawa1985}, based on data in Ref.~\onlinecite{Kitazawa1983}.} \begin{tabular}{lclcccc}\hline \multicolumn{2}{c}{Sheet} & Type & \multicolumn{2}{c}{Volume (nm$^{-3}$)} & \multicolumn{2}{c}{Carrier density (cm$^{-3}$)} \\ & & & ARPES & dHvA & ARPES & dHvA \\ \hline\hline &$\alpha$ & hole, $\varGamma$ & 9.6 & 11.1 & 7.75$\times$10$^{19}$ & 8.98$\times$10$^{19}$\\ LaBi &$\beta$ &hole, $\varGamma$ & 22.2 & 38.2 & 1.79$\times$10$^{20}$ & 3.09$\times$10$^{20}$\\ & $\delta/\varepsilon$ & electron, $X$ & 30.3 & 46.9 & 2.45$\times$10$^{20}$ & 3.79$\times$10$^{20}$\\ \hline & $\alpha$ & hole, $\varGamma$ & 3.4 & 7.6 & 2.72$\times$10$^{19}$ & 6.01$\times$10$^{19}$\\ LaSb & $\beta$ & hole, $\varGamma$ & 14.8 & 20.8 & 1.19$\times$10$^{20}$ & 1.68$\times$10$^{20}$\\ & $\delta/\varepsilon$ & electron, $X$ & 15.0 & 26.4 & 1.21$\times$10$^{20}$ & 2.14$\times$10$^{20}$\\ \hline \end{tabular} \end{table} Although the
near-surface bulk bands observed by ARPES may differ from the actual bulk bands, it is worth comparing the ARPES-derived Fermi surfaces with those found by quantum oscillations\cite{Kitazawa1983,Settai1993}. The two-dimensional projected Fermi surfaces in Fig.~\ref{FS}(d, f) allow us to estimate the Fermi pocket volumes by also considering the calculated Fermi pockets' topology and noting the cubic symmetry. We regard the two-dimensional Fermi surfaces as the maximum section of the three-dimensional Fermi pockets. The electron and hole Fermi pocket volumes from ARPES are roughly half those from dHvA. Since our quantum oscillation data on the same crystals are consistent with dHvA, the discrepancy is presumably due to the influence of the surface on the ARPES spectra. Our bulk-like Fermi surfaces and band structures do contain the essential features found in the calculations, even if the bands are shifted. In Tab.~\ref{carrier density}, we also list the carrier densities calculated from our ARPES data. The ratio of the hole and electron populations is about 1.05 and 1.21 for LaBi and LaSb, respectively, i.e.\ electrons and holes are compensated within our experimental error bar. Although the surface carriers constitute only a small fraction of the bulk, the suppression of backscattering can make their conductivity much higher than that of the bulk, so their contribution to the MR should not be neglected. Thus our findings suggest that the XMR may result from the carrier compensation in addition to the ``forbidden backscattering'' mechanism of surface and near-surface bulk states. The surface band structure in LaBi includes three clear, fully-resolved Dirac points, although DP2 and DP3 are set off by only a few tens of meV from the bulk bands between them. LaSb, however, shows three or possibly four Dirac cones that appear to terminate at Dirac points; the ambiguity likely arises from broadening due to proximity to the bulk bands.
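As a consistency check (assuming each pocket is spin degenerate), the carrier densities in Tab.~\ref{carrier density} follow from the pocket volumes via the standard phase-space count \[n=\frac{2V_{\mathrm{FS}}}{(2\pi)^3}=\frac{V_{\mathrm{FS}}}{4\pi^3};\] for the $\alpha$ pocket of LaBi, for example, $V_{\mathrm{FS}}=9.6\,\mathrm{nm}^{-3}=9.6\times10^{21}\,\mathrm{cm}^{-3}$ gives $n\approx7.7\times10^{19}\,\mathrm{cm}^{-3}$, in agreement with the tabulated value.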
This situation is summarized schematically in Fig.~\ref{DP}, with further detail provided in Figs.~\ref{EDC}(a-d). Band structure calculations with spin-orbit coupling indicate band inversion at the $X$ point between the La $d$ states and the pnictogen $p$ states, with an anti-crossing along the $\varGamma$-$X$ direction\cite{Zeng2016}. The gap we observe at the anti-crossing in LaBi is 20\,meV [see Fig.~\ref{bands}(b)], which is smaller than the calculated 35\,meV\cite{Fuliang2015}. The reduced spin-orbit coupling in lighter Sb is most likely responsible for the reduced separation of the Dirac states from the bulk bands compared with the Bi material. The bulk band structure observed in LaBi fits well with the calculations, while the intervening bulk band is not observed in LaSb. However, the authors are not aware of a slab calculation relevant to (001) surfaces in these materials. To fully interpret the data, further band calculations are required beyond those in Refs.~\onlinecite{Fuliang2015,Kumar2016}. It is worth comparing the present results with recent ARPES measurements on LaBi and LaSb. In the case of LaBi, Ref.~\onlinecite{Wu2016} reports both surface and bulk bands at $\bar{\varGamma}$; however, the $s_3'$ band was not resolved and data at $\bar{X}$ are not reported. Ref.~\onlinecite{Nayak2016} presents linear surface bands at both $\bar{\varGamma}$ and $\bar{X}$, similar to the present work, but the two distinct Dirac cones at the $\bar{X}$ point are not clearly resolved. Furthermore, the band anti-crossing shown in Fig.~\ref{bands}(b) is not visible in Refs.~\onlinecite{Wu2016, Nayak2016}. A consistent picture emerges for this material, with higher-resolution data and circular dichroism providing crucial additional detail. Our data on LaSb are similar to those in Ref.~\onlinecite{Zeng2016}, the primary difference being that Zeng \textit{et al.}\ did not identify the surface bands.
Hints of the Dirac cones are indeed visible in their $\bar{X}$-point data, but the proximity to faint bulk bands makes these features more difficult to discern. While circular dichroism makes a strong case for the existence of spin-polarized surface states, and it would be unlikely that the reduction in spin-orbit coupling from Bi to Sb would eliminate these features entirely, the different interpretations possible based on these data would be best resolved through detailed slab calculations. Finally, it is worth commenting further on the quantum oscillations: Shubnikov-de~Haas oscillations were readily observed down to remarkably low fields at 1.8\,K, without the use of specialized apparatus, in all samples measured, and without any optimization of the crystal growth beyond its initial success. This implies that these systems are not just crystallographically simple, but also extremely clean and easy to prepare, and suggests that they may be an excellent system for future in-depth study. \section{Conclusion} In summary, we have observed the surface bands and near-surface bulk electronic states on the (001) surface of LaBi and LaSb using ARPES, and identified an obvious band anti-crossing along the $\bar{\varGamma}$-$\bar{X}$ direction in LaBi. An odd number of Dirac cones are clearly present below the Fermi level in LaBi. Dirac surface states are also observed in LaSb, but the Dirac points are much closer to the bulk bands, and we cannot be certain whether an odd or even number lie below $E_F$. Furthermore, strong circular dichroism indicates spin-orbit coupling-induced spin and orbital angular momentum textures in both the surface and near-surface bulk bands, which likely contribute to the large magnetoresistance in addition to the electron-hole compensation. Our high-resolution electronic structures of LaBi and LaSb lay the foundation for further investigations.
Since preparing excellent crystals of both materials is relatively straightforward and the crystal structure is extremely simple, the lanthanum pnictide family may provide a uniquely accessible experimental platform for investigating topological surface states and their evolution as spin-orbit coupling increases down the periodic table. \section*{Acknowledgments} Some preliminary data (not shown here) were taken at beamline 13U of the National Synchrotron Radiation Laboratory (NSRL) and beamline 09U of the Shanghai Synchrotron Radiation Facility (SSRF). We gratefully acknowledge helpful discussions with Professor X. G. Wan of Nanjing University and experimental support by Dr.\ D.\ H.\ Lu and Dr.\ H.\ Makoto at SSRL. This work is supported by the National Key R$\&$D Program of the MOST of China (Grant No.~2016YFA0300203) and the Science Challenge Program of China. Use of the Stanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, is supported by the U.S.\ Department of Energy, Office of Science, Office of Basic Energy Sciences under Contract No.\ DE-AC02-76SF00515.
\section*{Acknowledgements} First and foremost, a very sincere thank you goes to my supervisor Dr. Spiro Karigiannis. I am extremely grateful for his patience and the tremendous amount of time he spent teaching and helping me this summer. I would also like to thank Dr. Shengda Hu, my second reader, for the very useful comments and corrections that he provided. I need to acknowledge two of my good friends: Janis Lazovksis and Cameron Williams. Janis is a LaTeX machine, and I am very appreciative of the time he spent helping me this summer. Cam was a great support this year, always there to help me work through any problem. I also would like to thank my family for their constant love and interest; it means very much to me. Last, but not least (I'd say least goes to Cameron Williams), a thank you goes to my cousin Matt Rappoport, for without our discussions some of the contents in this paper would not exist. } \tableofcontents \pagenumbering{arabic} \setcounter{page}{2} \section{Introduction} Recall that for a symplectic manifold $(M,\omega)$, a Lie algebra $\g$ is said to act symplectically if $\L_{V_\xi}\omega=0$, for all $\xi\in\g$, where $V_\xi$ is its infinitesimal generator. A symplectic group action is called Hamiltonian if one can find a moment map, that is, a map $f:\g\to C^\infty(M)$ satisfying \[df(\xi)=V_\xi\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega,\] for all $\xi\in\g$. \\ In multisymplectic geometry, $\omega$ is replaced by a closed, non-degenerate $(n+1)$-form, where $n\geq 1$. A Lie algebra action is called multisymplectic if $\L_{V_\xi}\omega=0$ for each $\xi\in\g$. A generalization of moment maps from symplectic to multisymplectic geometry is given by a (homotopy) moment map. These maps are discussed in detail in \cite{questions}.
A homotopy moment map is a collection of maps, $f_k:\Lambda^k\g\to \Omega^{n-k}(M)$, with $1\leq k \leq n+1$, satisfying \begin{equation}\label{equation1}df_k(p)=-f_{k-1}(\partial_k(p))+(-1)^{\frac{k(k+1)}{2}}V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega,\end{equation}for all $p\in\Lambda^k\g$, where $V_p$ is its infinitesimal generator (see Definition \ref{inf gener}). A weak (homotopy) moment map is a collection of maps $f_k:\Rho_{\g,k}\to\Omega^{n-k}(M)$ satisfying \begin{equation}\label{equation 2}df_k(p)=(-1)^{\frac{k(k+1)}{2}}V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega,\end{equation} for $p\in\Rho_{\g,k}$. Here $\Rho_{\g,k}$ is the Lie kernel, which is the kernel of the $k$-th Lie algebra homology differential $\partial_k:\Lambda^k\g\to\Lambda^{k-1}\g$, defined by \[\partial_k:\Lambda^k\g\to\Lambda^{k-1}\g \ \ \ \ \ \ \ \ \xi_1\wedge\cdots\wedge\xi_k\mapsto\sum_{1\leq i<j\leq k}(-1)^{i+j}[\xi_i,\xi_j]\wedge\xi_1\wedge\cdots\wedge\widehat\xi_i\wedge\cdots\wedge\widehat\xi_j\wedge\cdots\wedge\xi_k,\] for $k\geq 1$ and $\xi_1,\cdots,\xi_k\in \g$. Since $f_{k-1}$ is linear and $\partial_k(p)=0$ for $p\in\Rho_{\g,k}$, any collection of functions satisfying equation (\ref{equation1}) must also satisfy (\ref{equation 2}). That is, any homotopy moment map induces a weak homotopy moment map. Weak moment maps generalize the moment maps of Madsen and Swann in \cite{ms} and \cite{MS}, and were also used to give a multisymplectic version of Noether's theorem in \cite{me}. In this paper, we study the existence and uniqueness of weak homotopy moment maps and show that the theory is a generalization from symplectic geometry. We also show that the equivariance of a weak moment map can be characterized in terms of $\g$-module morphisms, analogous to symplectic geometry. Recall that in symplectic geometry we have the following well-known results on the existence and uniqueness of moment maps.
\begin{proposition} Consider the symplectic action of a connected Lie group $G$ acting on a symplectic manifold $(M,\omega)$. \begin{itemize} \item If the first Lie algebra cohomology vanishes, i.e. $H^1(\g)=0$, then a not necessarily equivariant moment map exists. \item If the second Lie algebra cohomology vanishes, i.e. $H^2(\g)=0$, then any non-equivariant moment map can be made equivariant. \item If the first Lie algebra cohomology vanishes, i.e. $H^1(\g)=0$, then equivariant moment maps are unique, \end{itemize} and combining these results, \begin{itemize} \item If both the first and second Lie algebra cohomology vanish, i.e. $H^1(\g)=0$ and $H^2(\g)=0$, then there exists a unique equivariant moment map. \end{itemize} \end{proposition} We generalize these results with the following theorems. Letting $\Omega^{n-k}_\mathrm{cl}$ denote the set of closed $(n-k)$-forms on $M$, we get the above propositions, in their respective order, by taking $n=k=1$. \begin{theorem} If $H^0(\g,\Rho_{\g,k}^\ast)=0$, then there exists a not necessarily equivariant weak homotopy $k$-moment map. The same result holds if $H^0(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})=0$ and $H^0(\g,\Omega^{n-k}_\mathrm{cl})\not=0$. \end{theorem} \begin{theorem} If $H^1(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\text{cl}})=0$, then any non-equivariant weak homotopy $k$-moment map can be made equivariant. \end{theorem} \begin{theorem} If $H^0(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\text{cl}})=0$ then an equivariant weak homotopy $k$-moment map is unique, \end{theorem} and combining these results, \begin{theorem} If $H^0(\g,\Rho_{\g,k}^\ast)=0$, and $H^1(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\text{cl}})=0$, then there exists a unique equivariant weak $k$-moment map $f_k:\Rho_{\g,k}\to\Omega^{n-k}$. Moreover, if $H^0(\g,\Rho_{\g,k}^\ast)=0$, and $H^1(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\text{cl}})=0$ for all $1\leq k\leq n$, then a full equivariant weak moment map exists and is unique. 
\end{theorem} We also show that the morphism properties of moment maps from symplectic geometry are preserved in multisymplectic geometry. More specifically, recall that in symplectic geometry the equivariance of a moment map $f:\g\to C^\infty(M)$ is characterized by whether or not $f$ is a Lie algebra morphism. That is, $f$ is equivariant if and only if \[f([\xi,\eta])=\{f(\xi),f(\eta)\},\] for all $\xi,\eta\in\g$. However, as shown in Theorem 4.2.8 of \cite{Marsden}, it is always true that $f$ induces a Lie algebra morphism between $\g$ and $C^\infty(M)/\text{constant}$, because $df([\xi,\eta])=d\{f(\xi),f(\eta)\}$. We generalize these results to multisymplectic geometry by showing that: \begin{theorem} For any $1\leq k \leq n$, a weak $k$-moment map is always a $\g$-module morphism from $\Rho_{\g,k}\to\Omega^{n-k}_{\mathrm{Ham}}(M)/\text{closed}$. A weak $k$-moment map is equivariant if and only if it is a $\g$-module morphism from $\Rho_{\g,k}\to\Omega^{n-k}_{\mathrm{Ham}}(M)$. \end{theorem} Here $\Omega^{n-k}_{\mathrm{Ham}}(M)$ denotes the space of multi-Hamiltonian forms, which are differential forms $\alpha\in\Omega^{n-k}(M)$ satisfying $d\alpha=X_\alpha\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$ for some $X_\alpha\in\Gamma(\Lambda^k(TM))$ (see Definition \ref{ham k form}). These forms were introduced in \cite{me}, and give a notion of a multi-symmetry, which occurs when there is a given Hamiltonian $(n-1)$-form $H\in\Omega^{n-1}_{\mathrm{Ham}}(M)$ (see Definition \ref{ham 1 form}).
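For orientation, consider the case $n=k=1$, where $\omega$ is an ordinary symplectic form. Then $\Rho_{\g,1}=\ker\partial_1=\g$, the multi-Hamiltonian $0$-forms are exactly the smooth functions admitting Hamiltonian vector fields, and the closed $0$-forms are the (locally) constant functions. The theorem thus reduces to the classical statements recalled above: a moment map always induces a Lie algebra morphism into $C^\infty(M)/\text{constant}$, and is a morphism into $C^\infty(M)$ itself precisely when it is equivariant.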
\section{Cohomology} We briefly recall some basic notions from group and Lie algebra cohomology. \subsection{Group Cohomology} Let $G$ be a group and $S$ a $G$-module. For $g\in G$ and $s\in S$, let $g\cdot s$ denote the action of $G$ on $S$. Let $C^k(G,S)$ denote the space of smooth functions from $G^k$ to $S$ and consider the differential $\partial_k:C^k(G,S)\to C^{k+1}(G,S)$ defined as follows. For $\sigma\in C^k(G,S)$ and $g_1,\cdots, g_{k+1}\in G$ define \begin{equation}\label{group differential}\begin{aligned}\partial&_k\sigma(g_1,\cdots,g_{k+1}):=\\ &g_1\cdot\sigma(g_2,\cdots,g_{k+1})+\sum_{i=1}^k(-1)^i\sigma(g_1,\cdots, g_{i-1},g_ig_{i+1},g_{i+2},\cdots,g_{k+1})-(-1)^k\sigma(g_1,\cdots,g_k).\end{aligned}\end{equation}A computation shows that $\partial_k^2=0$, so that $C^0(G,S)\to C^1(G,S)\to \cdots$ is a cochain complex. This cohomology is known as the differentiable cohomology of $G$ with coefficients in $S$. We let $H^k(G,S)$ denote the $k$-th cohomology group and will call an equivalence class representative a $k$-cocycle. \subsection{Lie Algebra Cohomology} Let $\g$ be a Lie algebra and $R$ a $\g$-module. Given $\xi\in\g$ and $r\in R$, let $\xi\cdot r$ denote the action of $\g$ on $R$. We let $C^k(\g,R)$ denote the space of multilinear alternating functions from $\g^k$ to $R$ and consider the differential $\delta_k:C^k(\g,R)\to C^{k+1}(\g,R)$ defined as follows. For $f\in C^k(\g,R)$ and $\xi_1,\cdots,\xi_{k+1}\in \g$ define \begin{equation}\label{group differential 2}\begin{aligned}\delta_k& f(\xi_1,\cdots,\xi_{k+1}):=\\ &\sum_i(-1)^{i+1}\xi_i\cdot f(\xi_1,\cdots,\widehat\xi_i,\cdots,\xi_{k+1})+\sum_{i<j}(-1)^{i+j}f([\xi_i,\xi_j],\xi_1,\cdots,\widehat\xi_i,\cdots,\widehat\xi_j,\cdots,\xi_{k+1}). \end{aligned}\end{equation} A computation shows that $\delta_k^2=0$. We let $H^k(\g,R)$ denote the $k$-th cohomology group and call an equivalence class representative a (Lie algebra) $k$-cocycle.
Note that for $k=0$ the map $\delta_0:R\to C^1(\g,R)$ is given by $(\delta_0r)(\xi)=\xi\cdot r$, where $r\in R$ and $\xi\in \g$. For $k=1$ the map $\delta_1:C^1(\g,R)\to C^2(\g,R)$ is given by $\delta_1(f)(\xi_1,\xi_2)=\xi_1\cdot f(\xi_2)-\xi_2\cdot f(\xi_1)-f([\xi_1,\xi_2])$, where $f\in C^1(\g,R)$ and $\xi_1$ and $\xi_2$ are in $\g$.\\ The standard example of Lie algebra cohomology is given when $R=\R$: \begin{example}\textbf{(Exterior algebra of $\g^\ast$)} Consider the trivial $\g$-action on $\R$. Then $C^k(\g,\R)=\Lambda^k\g^\ast$, and the Lie algebra cohomology differential $\delta_k:\Lambda^k\g^\ast\to\Lambda^{k+1}\g^\ast$ is given by \begin{equation}\label{differential g dual}\delta_k\alpha(\xi_1\wedge\cdots\wedge\xi_{k+1}):= \alpha\left(\sum_{1\leq i<j\leq k+1}(-1)^{i+j}[\xi_i,\xi_j]\wedge\xi_1\wedge\cdots\wedge\widehat\xi_i\wedge\cdots\wedge\widehat\xi_j\wedge\cdots\wedge\xi_{k+1}\right)\end{equation}where $\alpha\in\Lambda^k\g^\ast$, $\xi_1\wedge\cdots\wedge\xi_{k+1}$ is a decomposable element of $\Lambda^{k+1}\g$, and the formula is extended by linearity to non-decomposables. It is easy to check that $\delta^2=0$. We will also make frequent reference to the corresponding Lie algebra homology differential, which is given by \begin{equation}\label{differential g}\partial_k:\Lambda^k\g\to\Lambda^{k-1}\g \ \ \ \ \ \ \ \ \xi_1\wedge\cdots\wedge\xi_k\mapsto\sum_{1\leq i<j\leq k}(-1)^{i+j}[\xi_i,\xi_j]\wedge\xi_1\wedge\cdots\wedge\widehat\xi_i\wedge\cdots\wedge\widehat\xi_j\wedge\cdots\wedge\xi_k,\end{equation}for $k\geq 1$. We define $\Lambda^{-1}\g=\{0\}$ and $\partial_0$ to be the zero map. \end{example} For the rest of this section we only consider the exterior algebra homology complex. \begin{definition}\label{cohomology differential} We follow the terminology and notation of \cite{ms} and call $\Rho_{\g,k}=\ker \partial_k$ the $k$-th Lie kernel, which is a vector subspace of $\Lambda^k\g$. Notice that if $\g$ is abelian then $\Rho_{\g,k}=\Lambda^k\g$.
We will let $\Rho_\g$ denote the direct sum of all the Lie kernels; \[\Rho_\g=\oplus_{k=0}^{\dim(\g)}\Rho_{\g,k},\] and denote $H^k(\g,\R)$ simply by $H^k(\g)$. \end{definition} We now recall the Schouten bracket. \begin{definition} On decomposable multivectors $X=X_1\wedge\cdots \wedge X_k \in\Lambda^k \g$ and $Y=Y_1\wedge\cdots \wedge Y_l\in\Lambda^l\g$, the Schouten bracket $[\cdot,\cdot]$ is given by \[ [X,Y]:=\sum_{i=1}^k\sum_{j=1}^l(-1)^{i+j}[X_i,Y_j]\wedge X_1\wedge\cdots\wedge \widehat X_i\wedge\cdots\wedge X_k\wedge Y_1\wedge\cdots\wedge\widehat Y_j\wedge\cdots\wedge Y_l,\] and extended by linearity to all multivectors. \end{definition} The next proposition shows that the Schouten bracket and the Lie algebra homology differential agree, when restricted to elements of a certain form. \begin{proposition} \label{wedge is Schouten} For $p\in\Rho_{\g,k}$ and $\xi\in\g$ we have that \[\partial(p\wedge\xi)=[p,\xi].\] \end{proposition} \begin{proof} A computation using the definition of $\partial$ shows that \begin{align*} \partial(p\wedge \xi)&=\partial(p)\wedge\xi +p\wedge\partial(\xi)+[p,\xi]\\ &=[p,\xi]&\text{since $p\in\Rho_{\g,k}$ and $\partial(\xi)=0$. } \end{align*} \end{proof} Let $\g$ be a Lie algebra acting on a manifold $M$. For $\xi\in\g$, we let $V_\xi\in\Gamma(TM)$ denote its infinitesimal generator. \begin{definition}\label{inf gener} For a decomposable element $p=\xi_1\wedge\cdots\wedge\xi_k$ of $\Lambda^k\g$, its infinitesimal generator, denoted $V_p$, is the multivector field $V_{\xi_1}\wedge\cdots\wedge V_{\xi_k}$. \end{definition} \begin{lemma}(\textbf{Extended Cartan Lemma}) \label{extended Cartan} For a decomposable element $p=\xi_1\wedge\cdots\wedge\xi_k$ in $\Lambda^k\g$ and differential form $\tau$ we have that \begin{align*} (-1)^kd(V_p\mathbin{\!
\hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\tau)&=V_{\partial{p}}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\tau +\sum_{i=1}^k(-1)^i(V_{\xi_1}\wedge\cdots\wedge\widehat V_{\xi_i}\wedge\cdots\wedge V_{\xi_k})\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\L_{V_{\xi_i}}\tau +V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} d\tau. \end{align*} \end{lemma} \begin{proof} This is Lemma 3.4 of \cite{ms} or Lemma 2.18 of \cite{cq}. \end{proof} Let $\Phi:G\times M\to M$ be a Lie group action on $M$. \begin{definition} For $A\in\Gamma(TM)$ we let $\Phi_{g}^\ast A$ denote the vector field given by the push-forward of $A$ by $\Phi_{g}^{-1}$. That is, \[(\Phi_{g}^\ast(A))_x:=(\Phi_{g^{-1}})_{\ast,\Phi_{g}(x)}(A_{\Phi_{g(x)}}),\] where $x\in M$. For a decomposable multivector field $Y=Y_1\wedge\cdots \wedge Y_k$ in $\Gamma(\Lambda^k(TM))$ we will let $\mathrm{Ad}_gY$ denote the extended adjoint action \[\mathrm{Ad}_gY=\mathrm{Ad}_gY_1\wedge\cdots\wedge \mathrm{Ad}_gY_k\] and we will let $\Phi_g^\ast Y$ denote the multivector field \[\Phi_g^\ast Y=\Phi_g^\ast Y_1\wedge\cdots\wedge \Phi_g^\ast Y_k.\]We also extend $\mathrm{ad}$ to a map $\mathrm{ad}:\g\times \Lambda^k\g\to\Lambda^k\g$ by \begin{equation}\label{ad equation}\mathrm{ad}_\xi(Y_1\wedge\cdots\wedge Y_k)=\sum_{i=1}^kY_1\wedge\cdots\wedge \mathrm{ad}_{\xi}(Y_i)\wedge\cdots\wedge Y_k.\end{equation} \end{definition} \begin{corollary} \label{ad} For $\xi\in\g$ we have that $\mathrm{ad}_\xi$ preserves the Lie kernel. That is, if $p$ is in $\Rho_{\g,k}$ then $\mathrm{ad}_{\xi}(p)$ is in $\Rho_{\g,k}$. \end{corollary} \begin{proof} A computation shows that $\mathrm{ad}_\xi(p)=[\xi,p]$. Hence, by Proposition \ref{wedge is Schouten}, we see $[\xi,p]$ is exact. Thus it is closed. 
\end{proof} The next proposition shows that the infinitesimal generator of the extended adjoint action agrees with the pullback action. \begin{proposition} \label{adjoint over wedge} Let $\Phi:G\times M\to M$ be a group action. For every $g\in G$ and $p\in\Lambda^k\g$ we have that \[V_{\mathrm{Ad}_gp}=\Phi_{g^{-1}}^\ast V_p.\] Equivalently, the map $\Lambda^k\g\to\Gamma(\Lambda^k(TM))$ given by $\xi_1\wedge\cdots\wedge\xi_k\mapsto V_{\xi_1}\wedge\cdots\wedge V_{\xi_k}$ is equivariant with respect to the extended adjoint and pullback actions. \end{proposition} \begin{proof} Fix $g\in G$. First suppose that $\xi\in\g$. Then by Proposition 4.1.26 of \cite{Marsden} we have that \[V_{\mathrm{Ad}_g\xi}=\Phi_{g^{-1}}^\ast V_\xi.\] The claim now follows since for $p=\xi_1\wedge\cdots\wedge \xi_k$ in $\Lambda^k\g$, \begin{align*} V_{\mathrm{Ad}_gp}&:=V_{\mathrm{Ad}_g{\xi_1}}\wedge\cdots\wedge V_{\mathrm{Ad}_g{\xi_k}}\\ &=\Phi_{g^{-1}}^\ast V_{\xi_1}\wedge\cdots\wedge\Phi_{g^{-1}}^\ast V_{\xi_k}\\ &=\Phi_{g^{-1}}^\ast V_p&\text{by definition.} \end{align*} \end{proof} \section{Multisymplectic Geometry} Here we recall some concepts and tools used in multisymplectic geometry. \subsection{Multisymplectic Manifolds} \begin{definition} A manifold $M$ equipped with a closed $(n+1)$-form $\omega$ is called a pre-multisymplectic manifold. If in addition the map $T_pM\to\Lambda^n T_p^\ast M,\ V\mapsto V\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$ is injective, then $(M,\omega)$ is called a multisymplectic, or $n$-plectic, manifold. \end{definition} \begin{definition}\label{ham 1 form} An $(n-1)$-form $\alpha\in\Omega^{n-1}(M)$ is called Hamiltonian if there exists a vector field $V_\alpha\in\Gamma(TM)$ such that $d\alpha=-V_\alpha\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$. Note that the non-degeneracy of $\omega$ ensures uniqueness of the corresponding Hamiltonian vector field. 
We let $\Omega^{n-1}_{\mathrm{Ham}}(M)$ denote the space of Hamiltonian $(n-1)$-forms, which is a subspace of $\Omega^{n-1}(M)$. \end{definition} As in symplectic geometry, we are interested in Lie group actions which preserve the $n$-plectic form. \begin{definition} A Lie group action $\Phi:G\times M\to M$ is called multisymplectic if $\Phi_g^\ast\omega=\omega$ for all $g\in G$. A Lie algebra action $\g\times \Gamma(TM)\to \Gamma(TM)$ is called multisymplectic if $\L_{V_\xi}\omega=0$ for all $\xi\in\g$. We remark that a multisymplectic Lie group action induces a multisymplectic Lie algebra action. Conversely, a multisymplectic Lie algebra action induces a multisymplectic group action if the Lie group is connected. \end{definition} In \cite{rogers} it was shown that to any multisymplectic manifold one can associate the following $L_\infty$-algebra. \begin{definition}\label{Lie n observables} The Lie $n$-algebra of observables, $L_\infty(M,\omega)$, is the following $L_\infty$-algebra. Let $L=\oplus_{i=0}^{n-1}L_i$ where $L_0=\Omega^{n-1}_{\text{Ham}}(M)$ and $L_i=\Omega^{n-1-i}(M)$ for $1\leq i\leq n-1$. The maps $l_k:L^{\otimes k}\to L$ of degree $k-2$ are defined as follows: For $k=1$, \[l_1(\alpha) =\left\{\begin{array}{ll}d\alpha & \text{if deg $\alpha>0$},\\ 0 & \text{if deg $\alpha=0$.}\end{array}\right.\] For $k>1$, \[l_k(\alpha_1,\ldots,\alpha_k) =\left\{\begin{array}{ll}\zeta(k)X_{\alpha_k}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\cdots X_{\alpha_1}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega & \text{if deg $\alpha_1\otimes\cdots\otimes\alpha_k=0$},\\ 0 & \text{if deg $\alpha_1\otimes\cdots\otimes\alpha_k>0$.}\end{array}\right.\] Here $\zeta(k)$ is defined to equal $-(-1)^{\frac{k(k+1)}{2}}$; concretely, $\zeta(1)=\zeta(2)=1$, $\zeta(3)=\zeta(4)=-1$, and $\zeta$ repeats with period four. We introduce this notation as this sign comes up frequently. 
\end{definition} \subsection{Hamiltonian forms} Let $(M,\omega)$ be an $n$-plectic manifold. The following definition generalizes the concept of a Hamiltonian $1$-form from symplectic geometry. \begin{definition}\label{ham k form} A differential form $\alpha\in\Omega^{n-k}(M)$ is called Hamiltonian if there exists a multivector field $X_\alpha\in\Gamma(\Lambda^k(TM))$ such that $d\alpha=-X_\alpha\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$. \end{definition} Note that the Hamiltonian multivector field corresponding to a Hamiltonian form is not unique; however, the difference of any two Hamiltonian multivector fields is in the kernel of $\omega$. The next proposition shows that the Hamiltonian forms are an $L_\infty$-subalgebra of the Lie $n$-algebra of observables. \begin{proposition} Let $\widehat L_i=\Omega^{n-1-i}_{\mathrm{Ham}}(M)$ and $\widehat L=\oplus_{i=0}^{n-1}\widehat L_i$. Let $\widehat L_\infty(M,\omega)$ denote the space $\widehat L$ together with the mappings $l_k$ defined above in the definition of the Lie $n$-algebra of observables. Then $\widehat L_\infty(M,\omega)$ is an $L_\infty$-subalgebra of $L_\infty(M,\omega)$. \end{proposition} \begin{proof} This is Theorem 4.15 of \cite{me}. \end{proof} \subsection{Weak Homotopy Moment Maps} For a group acting on a symplectic manifold $M$, a moment map is a Lie algebra morphism between $(\g,[\cdot,\cdot])$ and $(C^\infty(M),\{\cdot,\cdot\})$, where $\{\cdot,\cdot\}$ is the Poisson bracket. In multisymplectic geometry, a moment map is an $L_\infty$-morphism from the exterior algebra of $\g$ to the Lie $n$-algebra of observables. We direct the reader to \cite{questions} for more information on $L_\infty$-algebras and morphisms. \begin{definition}\label{hmm} A (homotopy) moment map is an $L_\infty$-morphism $(f)$ between $\g$ and the Lie $n$-algebra of observables. 
This means that $(f)$ is a collection of maps $f_1:\Lambda^1\g\to\Omega^{n-1}_{\text{Ham}}(M)$ and $f_k:\Lambda^k\g\to\Omega^{n-k}(M)$ for $k\geq 2$ satisfying, for $p\in\Lambda^k\g$, \begin{equation}\label{hcmm}-f_{k-1}(\partial p)=df_k(p)+\zeta(k)V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega.\end{equation} \end{definition} It follows immediately from equation (\ref{hcmm}) that if $p$ is in $\Rho_{\g,k}$ then $f_k(p)$ is a Hamiltonian form. That is, if the domain of a homotopy moment map $(f)$ is restricted to the Lie kernel, then the image of $(f)$ is completely contained in the space of Hamiltonian forms. This motivates the definition of a weak homotopy moment map: \begin{definition} A weak (homotopy) moment map is a collection of maps $(f)$ with $f_k:\Rho_{\g,k}\to\Omega_{\mathrm{Ham}}^{n-k}(M)$ satisfying \begin{equation}\label{hcmm kernel}df_k(p)=-\zeta(k)V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega.\end{equation} We refer to the component $f_k$ as a weak $k$-moment map. \end{definition} \begin{remark} Notice that any moment map gives a weak moment map. Indeed, if $(f)$ satisfies equation (\ref{hcmm}), then its restriction to the Lie kernel satisfies equation (\ref{hcmm kernel}), since $\partial p=0$ there. \end{remark} \begin{remark} Notice that a weak homotopy moment map coincides with the moment map from symplectic geometry in the case $n=1$. 
Indeed, setting $n=1$ in equations (\ref{hcmm}) and (\ref{hcmm kernel}) yields $f:\g\to C^\infty(M)$ such that $df(\xi)=-V_\xi\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$. Also notice that the $n$-th component of a weak moment map is precisely the moment map introduced by Madsen and Swann in \cite{ms} and \cite{MS}. \end{remark} The next proposition says that a weak moment map is still an $L_\infty$-morphism. \begin{proposition} A weak moment map is an $L_\infty$-morphism from $\g$ to $\widehat L_\infty(M,\omega)$. \end{proposition} \begin{proof} This is Proposition 5.9 of \cite{me}. \end{proof} \begin{definition} A homotopy moment map $(f)$ is equivariant if each component $f_k:\Lambda^k\g\to \Omega^{n-k}(M)$ is equivariant with respect to the adjoint and pullback actions respectively. That is, for all $g\in G$, $p\in\Lambda^k\g$ and $1\leq k\leq n$, \begin{equation}\label{equivariant equation}f_k(\mathrm{Ad}_{g^{-1}} p)=\Phi_g^\ast f_k(p).\end{equation} Similarly, a weak moment map is equivariant if equation (\ref{equivariant equation}) holds for all $p\in\Rho_{\g,k}$ and $1\leq k\leq n$. \end{definition} We study the equivariance of moment maps further in the following section. \section{Equivariance of Weak Moment Maps} In this section we show how the theory of equivariance of moment maps in symplectic geometry generalizes to multisymplectic geometry. \subsection{Equivariance in Multisymplectic Geometry} We first recall the theory from symplectic geometry without proof and then generalize to the multisymplectic setting. The results from symplectic geometry can all be found, for example, in Chapter 4.2 of \cite{Marsden}. 
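Before recalling the general constructions, the following elementary example, standard in the symplectic literature, may help fix ideas. It uses the sign convention $df(\xi)=-V_\xi\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$ for a symplectic ($n=1$) moment map, matching the $n=1$ specialization of equation (\ref{hcmm kernel}) above.

```latex
\begin{example}
Let $M=\R^2$ with $\omega=dx\wedge dy$, and let $G=\R^2$ act on $M$ by
translations, so that the standard basis $e_1,e_2$ of $\g=\R^2$ has
infinitesimal generators $V_{e_1}=\partial_x$ and $V_{e_2}=\partial_y$.
A moment map is given by $f(e_1)=-y$ and $f(e_2)=x$. Since $G$ is
abelian, the adjoint action is trivial, so equivariance would require
$f(\xi)\circ\Phi_g=f(\xi)$ for all $g\in G$. However, for $g=(a,b)$ we
have $f(e_1)\circ\Phi_g=-(y+b)=f(e_1)-b$, so $f$ is not equivariant.
Note that the discrepancy $f(e_1)\circ\Phi_g-f(e_1)=-b$ is a constant
function on $M$.
\end{example}
```

The maps $\psi_{g,\xi}$ and $\sigma$ recalled below make this discrepancy precise.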
Let $(M,\omega)$ be a symplectic manifold, and $\Phi:G\times M\to M$ a symplectic Lie group action by a connected Lie group $G$. We consider the induced symplectic Lie algebra action $\g\times\Gamma(TM)\to\Gamma(TM)$. Suppose that a moment map $f:\g\to C^\infty(M)$ exists. That is, $df(\xi)=-V_\xi\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$ for all $\xi\in\g$. By definition, $f$ is equivariant if \[f(\mathrm{Ad}_{g^{-1}}\xi)=\Phi_g^\ast f(\xi).\] Following Chapter 4.2 of \cite{Marsden}, for $g\in G$ and $\xi\in\g$ define $\psi_{g,\xi}\in C^\infty(M)$ by \begin{equation}\label{psi}\psi_{g,\xi}(x):= f(\xi)(\Phi_g(x))-f(\mathrm{Ad}_{g^{-1}}\xi)(x).\end{equation} \begin{proposition}\label{psi constant} For each $g\in G$ and $\xi\in\g$, the function $\psi_{g,\xi}\in C^\infty(M)$ is constant. \end{proposition} Since $\psi_{g,\xi}$ is constant, we may define the map $\sigma:G\to\g^\ast$ by \[\sigma(g)(\xi):=\psi_{g,\xi},\] where the right hand side is the constant value of $\psi_{g,\xi}$. \begin{proposition}\label{cocycle} The map $\sigma:G\to\g^\ast$ is a cocycle in the chain complex \[\g^\ast\to C^1(G,\g^\ast)\to C^2(G,\g^\ast)\to\cdots.\] That is, $\sigma(gh)=\sigma(g)+\mathrm{Ad}_{g^{-1}}^\ast\sigma(h)$ for all $g,h\in G$. \end{proposition} The map $\sigma$ is called the cocycle corresponding to $f$. The following proposition shows that for any symplectic group action admitting a moment map, the cocycle gives a well-defined cohomology class. \begin{proposition}\label{cohomology class} For any symplectic action of $G$ on $M$ admitting a moment map, there is a well-defined cohomology class $[\sigma]\in H^1(G,\g^\ast)$. 
More specifically, if $f_1$ and $f_2$ are two moment maps, then their corresponding cocycles $\sigma_1$ and $\sigma_2$ are in the same cohomology class, i.e. $[\sigma_1]=[\sigma_2]$. \end{proposition} By definition, $\sigma$ measures the equivariance of $f$: we have $\sigma=0$ if and only if $f$ is equivariant. Moreover, if the cocycle corresponding to a moment map vanishes in cohomology, the next proposition shows that we can modify the original moment map to make it equivariant. \begin{proposition}\label{sigma class zero} Suppose that $f$ is a moment map with corresponding cocycle $\sigma$. If $[\sigma]=0$ then $\sigma=\partial\theta$ for some $\theta\in\g^\ast$, and $f+\theta$ is an equivariant moment map. \end{proposition} We now show how this theory generalizes to multisymplectic geometry. For the rest of this section we let $(M,\omega)$ denote an $n$-plectic manifold and $\Phi:G\times M\to M$ a multisymplectic action of a connected Lie group $G$. We consider the induced multisymplectic Lie algebra action $\g\times\Gamma(TM)\to\Gamma(TM)$. Assume that we have a weak homotopy moment map $(f)$, i.e. a collection of maps $f_k:\Rho_{\g,k}\to \Omega^{n-k}_{\mathrm{Ham}}(M)$ satisfying equation (\ref{hcmm kernel}). To extend equation (\ref{psi}) to multisymplectic geometry, for $g\in G$ and $p\in\Rho_{\g,k}$, we define the following $(n-k)$-form: \begin{equation}\label{ms equiv}\psi^k_{g,p}:= f_k(p)-\Phi_{g^{-1}}^\ast f_k(\mathrm{Ad}_{g^{-1}}(p)).\end{equation} The following proposition generalizes Proposition \ref{psi constant}. \begin{proposition}\label{general closed} The $(n-k)$-form $\psi^k_{g,p}$ is closed. \end{proposition} \begin{proof} Since $\Phi^\ast_g$ is injective and commutes with the differential, our claim is equivalent to showing that $\Phi^\ast_g(\psi^k_{g,p})$ is closed. Indeed, we have \begin{align*} d(\Phi_g^\ast(\psi^k_{g,p}))&=d(\Phi_g^\ast f_k(p)-f_k(\mathrm{Ad}_{g^{-1}}p))\\ &=\Phi_g^\ast(df_k(p))-d(f_k(\mathrm{Ad}_{g^{-1}}(p)))\\ &=-\zeta(k)\Phi_g^\ast(V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega)+\zeta(k)V_{\mathrm{Ad}_{g^{-1}}p}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega&\text{since $(f)$ is a moment map}\\ &=-\zeta(k)\Phi_g^\ast(V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega)+\zeta(k)(\Phi_g^\ast V_p)\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega&\text{by Proposition \ref{adjoint over wedge}}\\ &=-\zeta(k)\Phi_g^\ast(V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega)+\zeta(k)\Phi_g^\ast(V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega)&\text{since $G$ preserves $\omega$}\\ &=0. \end{align*} \end{proof} In analogy to symplectic geometry, we now show that each component of a weak moment map gives a cocycle. \begin{definition} We call the map $\sigma_k:G\to\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\text{cl}}$ defined by \[\sigma_k(g)(p):=\psi^k_{g,p}\] the cocycle corresponding to $f_k$. 
\end{definition} As a generalization of Proposition \ref{cocycle} we obtain: \begin{proposition} \label{multi cocycle} The map $\sigma_k$ is a $1$-cocycle in the chain complex \[\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}}\to C^1(G,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}}) \to C^2(G,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}}) \to \cdots,\]where the action of $G$ on $\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}}$ is given by the tensor product of the co-adjoint and pullback actions. The induced infinitesimal action of $\g$ on $\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}}$ is defined as follows: for $f\in\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}}$, $p\in\Rho_{\g,k}$ and $\xi\in \g$, \begin{equation}(\xi\cdot f)(p):=f(\mathrm{ad}_\xi(p))+\L_{V_\xi}f(p).\end{equation} \end{proposition} \begin{proof} By equation (\ref{group differential}) we know that $(\partial(\sigma)(g,h))(p):=\sigma(gh)(p)-\sigma(g)(p)-g\cdot\sigma(h)(p)$. For arbitrary $p\in\Rho_{\g,k}$ we have \begin{align*} \sigma_k(gh)(p)&=f_k(p)-\Phi^\ast_{(gh)^{-1}}(f_k(\mathrm{Ad}_{(gh)^{-1}}p))\\ &=f_k(p)-\Phi^\ast_{g^{-1}}\Phi^\ast_{h^{-1}}(f_k(\mathrm{Ad}_{h^{-1}}\mathrm{Ad}_{g^{-1}}p))\\ &=f_k(p)-\Phi^\ast_{g^{-1}}(f_k(\mathrm{Ad}_{g^{-1}}p))+\Phi^\ast_{g^{-1}}(f_k(\mathrm{Ad}_{g^{-1}}p))-\Phi^\ast_{g^{-1}}(\Phi^\ast_{h^{-1}}(f_k(\mathrm{Ad}_{h^{-1}}\mathrm{Ad}_{g^{-1}}p)))\\ &=\sigma_k(g)(p)+\Phi_{g^{-1}}^\ast(\sigma_k(h)(\mathrm{Ad}_{g^{-1}}p))\\ &=\sigma_k(g)(p)+g\cdot \sigma_k(h)(p). \end{align*} That is, $\partial\sigma_k=0$. \end{proof} \begin{definition} Let \[\mathfrak{C}=\bigoplus_{k=1}^n \Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\text{cl}},\] and let $\sigma=\sigma_1+\sigma_2+\cdots+\sigma_n$. We call the map $\sigma$ the cocycle corresponding to $(f)$.
\end{definition} Since the components of a weak moment map do not interact, as a corollary to Proposition \ref{multi cocycle} we obtain \begin{proposition} The map $\sigma$ is a cocycle in the complex \[\mathfrak{C}\to C^1(G,\mathfrak{C})\to C^2(G,\mathfrak{C})\to\cdots.\] \end{proposition} The next theorem shows that multisymplectic Lie algebra actions admitting weak moment maps give a well defined cohomology class, generalizing Proposition \ref{cohomology class}. \begin{theorem}\label{general class} Let $G$ act multisymplectically on $(M,\omega)$. To any weak moment map, there is a well defined cohomology class $[\sigma]$ in $H^1(G,\mathfrak{C})$. More precisely, if $(f)$ and $(g)$ are two weak moment maps with cocycles $\sigma$ and $\tau$, then $\sigma-\tau$ is a coboundary. \end{theorem} \begin{proof} We need to show that $\sigma_k-\tau_k$ is a coboundary for each $k$. We have that \[\sigma_k(g)(p)-\tau_k(g)(p)=f_k(p)-g_k(p)-\Phi_{g^{-1}}^\ast(f_k(\mathrm{Ad}_{g^{-1}}p)-g_k(\mathrm{Ad}_{g^{-1}}p)).\] However, $(f)$ and $(g)$ are both moment maps and so $d(f_k(p)-g_k(p))=0$. Thus $f_k-g_k$ is in $\mathfrak{C}$. Moreover, by equation (\ref{group differential 2}), we see that $\sigma_k-\tau_k=\partial(f_k-g_k)$. \end{proof} If $(f)$ is not equivariant but its cocycle vanishes, then we can define a new equivariant moment map from $(f)$, in analogy to Proposition \ref{sigma class zero}. \begin{proposition}\label{general make} Let $(f)$ be a weak moment map with cocycle satisfying $[\sigma]=0$. This means that $\sigma=\partial\theta$ for some $\theta\in\mathfrak{C}$. The map $(f)+\theta$ is a weak moment map that is equivariant. \end{proposition} \begin{proof} We have that $(f)+\theta$ is a moment map since $\theta(p)$ is closed for all $p\in\Rho_{\g,k}$. Let $\widetilde\sigma$ denote the corresponding cocycle. Note that by equation (\ref{group differential 2}) we have $(\partial(\theta)(g))(p)=\theta(\mathrm{Ad}_{g^{-1}}p)-\Phi_g^\ast\theta(p)$.
By the injectivity of $\Phi_g^\ast$, to show that $\widetilde\sigma=0$, it is sufficient to show that $\Phi_g^\ast(\widetilde\sigma_k(g)(p))=0$ for all $g\in G$ and $p\in\Rho_{\g,k}$. Indeed, \begin{align*} \Phi_g^\ast(\widetilde\sigma_k(g)(p))&=\Phi_g^\ast f_k(p)+\Phi_g^\ast\theta(p)-f_k(\mathrm{Ad}_{g^{-1}}p)-\theta(\mathrm{Ad}_{g^{-1}}p)\\ &=\sigma_k(g)(p)-\partial\theta(g)(p)\\ &=\sigma_k(g)(p)-\sigma_k(g)(p)&\text{since $\partial\theta =\sigma$}\\ &=0. \end{align*} \end{proof} If $(f)$ is not equivariant with respect to the $G$-action, then we can define a new action for which $(f)$ is equivariant. \begin{proposition} \label{new action} For $g\in G$ define $\Upsilon_g:\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{Ham}}\to\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{Ham}}$ by \[\Upsilon_g(\theta)(p):= \Phi_{g^{-1}}^\ast \theta(\mathrm{Ad}_{g^{-1}}p)+\sigma(g)(p)\]where $\theta$ is in $\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{Ham}}$ and $p$ is in $\Rho_{\g,k}$. Then $\Upsilon_g$ is a group action and $(f)$ is $\Upsilon_g$-equivariant. \end{proposition} \begin{proof} The proof is a direct extension of the proof of Proposition 4.2.7 in \cite{Marsden}. We first show that $\Upsilon_g$ is a group action. Indeed, $\sigma(e)=0$ and $\mathrm{Ad}_{e}$ is the identity, showing that $\Upsilon_e(\theta)=\theta$. For the multiplicative property of the group action we have \begin{align*} \Upsilon_{gh}(\theta)(p)&=\Phi^\ast_{(gh)^{-1}}\theta(\mathrm{Ad}_{(gh)^{-1}}p)+\sigma(gh)(p)\\ &=\Phi^\ast_{g^{-1}}(\Phi^\ast_{h^{-1}}(\theta(\mathrm{Ad}_{h^{-1}}\mathrm{Ad}_{g^{-1}}p)))+\sigma(g)(p)+\Phi^\ast_{g^{-1}}(\sigma(h)(\mathrm{Ad}_{g^{-1}}p))&\text{by Proposition \ref{multi cocycle}}\\ &=\Phi^\ast_{g^{-1}}(\Upsilon_h(\theta)(\mathrm{Ad}_{g^{-1}}p))+\sigma(g)(p)\\ &=\Upsilon_g(\Upsilon_h(\theta))(p).
\end{align*} The moment map $f_k$ is equivariant with respect to this action because \begin{align*} \Upsilon_g(f_k)(p)&=\Phi^\ast_{g^{-1}}(f_k(\mathrm{Ad}_{g^{-1}}p))+\sigma(g)(p)\\ &=\Phi^\ast_{g^{-1}}f_k(\mathrm{Ad}_{g^{-1}}p)+ f_k(p)-\Phi^\ast_{g^{-1}}f_k(\mathrm{Ad}_{g^{-1}}p)\\ &=f_k(p). \end{align*} \end{proof} \subsection{Infinitesimal Equivariance in Multisymplectic Geometry} Next we recall the infinitesimal version of equivariance in symplectic geometry. That is, we differentiate equation (\ref{psi}) to obtain the map $\Sigma:\g\times\g\to C^\infty(M)$ defined by $\Sigma(\xi,\eta):=\left.\frac{d}{dt}\right|_{t=0}\psi_{\exp(t\eta),\xi}$. A straightforward computation, which we generalize in Proposition \ref{comp of Sigma}, gives that \[\Sigma(\xi,\eta)=f([\xi,\eta])-\{f(\xi),f(\eta)\}.\] Another quick computation shows that $df([\xi,\eta])=d\{f(\xi),f(\eta)\}$, showing that $\Sigma(\xi,\eta)$ is a constant function for every $\xi,\eta\in\g$. That is, $\Sigma$ is a function from $\g\times\g$ to $\R$. \begin{proposition}\label{inf cocycle}The map $\Sigma:\g\times\g\to\R$ is a Lie algebra $2$-cocycle in the chain complex \[\R\to C^1(\g, \R)\to C^2(\g,\R)\to\cdots.\] \end{proposition} \begin{definition}\label{inf equiv moment} A moment map $f:\g\to C^\infty(M)$ is infinitesimally equivariant if $\Sigma=0$, i.e. if \begin{equation}\label{equivariant equation 2}f([\xi,\eta])=\{f(\xi),f(\eta)\}\end{equation} for all $\xi,\eta\in\g$. \end{definition} \begin{proposition} For a connected Lie group, infinitesimal equivariance and equivariance are equivalent. \end{proposition} \begin{proof} This is clear since $\Sigma$ is just the derivative of $\sigma$. \end{proof} Since we will always be working with connected Lie groups, we will abuse terminology and call a moment map equivariant if it satisfies equation (\ref{equivariant equation}) or (\ref{equivariant equation 2}).
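For the reader's convenience, we sketch the quick computation showing that $df([\xi,\eta])=d\{f(\xi),f(\eta)\}$; each step below holds up to an overall sign that depends on one's conventions for Hamiltonian vector fields and the Poisson bracket. Using Proposition \ref{bracket gives} below, \begin{align*} d\{f(\xi),f(\eta)\}&=d(V_\xi\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} V_\eta\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega)\\ &=[V_\xi,V_\eta]\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega&\text{by Proposition \ref{bracket gives}}, \end{align*} and $[V_\xi,V_\eta]$ agrees, up to sign, with the fundamental vector field $V_{[\xi,\eta]}$, whose contraction with $\omega$ is $df([\xi,\eta])$ up to sign by the moment map condition. Hence $d\Sigma(\xi,\eta)=0$, so that $\Sigma(\xi,\eta)$ is indeed constant.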
Now we turn our attention towards the multisymplectic setting. As in symplectic geometry, the infinitesimal equivariance of a weak moment map comes from differentiating $\psi^k_{\exp(t\xi),p}$ for fixed $\xi\in\g$ and $p\in\Rho_{\g,k}$. \begin{proposition}\label{comp of Sigma} Let $\Sigma_k$ denote $\left.\frac{d}{dt}\right|_{t=0}\psi^k_{\exp(t\xi),p}$.
Then we have that $\Sigma_k$ is a map from $\g$ to $\Rho_{\g,k}^\ast\otimes \Omega^{n-k}_{\mathrm{cl}}$ and is given by \[\Sigma_k(\xi,p)=f_k([\xi,p])+\L_{V_\xi}f_k(p).\] \end{proposition} \begin{proof}We have that \begin{align*} \left.\frac{d}{dt}\right|_{t=0}\psi^k_{\exp(t\xi),p}&=\left.\frac{d}{dt}\right|_{t=0} f_k(p)-\left.\frac{d}{dt}\right|_{t=0}\Phi_{\exp(-t\xi)}^\ast(f_k(\mathrm{Ad}_{\exp(-t\xi)}(p)))\\ &=-\left.\frac{d}{dt}\right|_{t=0}\Phi^\ast_{\exp(-t\xi)}(f_k(\mathrm{Ad}_{\exp(-t\xi)}p))\\ &=-f_k(\left.\frac{d}{dt}\right|_{t=0}\mathrm{Ad}_{\exp(-t\xi)}p)-(\left.\frac{d}{dt}\right|_{t=0}\Phi^\ast_{\exp(-t\xi)})(f_k(p))\\ &=-f_k(-[\xi,p])+\L_{V_\xi}f_k(p)&\text{by Corollary \ref{ad}}\\ &=f_k([\xi,p])+\L_{V_\xi}f_k(p). \end{align*} \end{proof} Let $R_k=\Rho_{\g,k}^\ast\otimes \Omega^{n-k}_\text{cl}$. Then $R_k$ is a $\g$-module under the induced action from the tensor product of the adjoint and Lie derivative actions. Concretely, for $\alpha\in R_k$, $\xi\in\g$ and $p\in\Rho_{\g,k}$, \[(\xi\cdot\alpha)(p)=\alpha([\xi,p])+\L_{V_\xi}(\alpha(p)).\] Consider the cohomology complex \[R_k\to C^1(\g,R_k)\to C^2(\g,R_k)\to\cdots,\]where the differential is the usual one from equation (\ref{group differential 2}). The following is a generalization of Proposition \ref{inf cocycle}. \begin{proposition}\label{general inf} The map $\Sigma_k$ is in the kernel of $\partial_k$. That is, $\Sigma_k$ is a cocycle. \end{proposition} \begin{proof}We need to show that $\partial\Sigma_k=0$.
Indeed, for $\xi,\eta\in\g$ and $p\in\Rho_{\g,k}$, we have that \begin{align*} \partial\Sigma_k(\xi,\eta)(p)&=\xi\cdot(\Sigma_k(\eta)(p))-\eta\cdot(\Sigma_k(\xi)(p))+\Sigma_k([\xi,\eta])(p)&\text{by equation (\ref{group differential 2})}\\ &= \Sigma_k(\eta)(\mathrm{ad}_\xi(p))+\L_{V_\xi}(\Sigma_k(\eta)(p))-\Sigma_k(\xi)(\mathrm{ad}_\eta(p))\\ &\quad{}-\L_{V_\eta}(\Sigma_k(\xi)(p))+\Sigma_k([\xi,\eta])(p)\\ &=\Sigma_k(\eta)([\xi,p])+\L_{V_\xi}(\Sigma_k(\eta)(p))-\Sigma_k(\xi)([\eta,p])\\ &\quad{}-\L_{V_\eta}(\Sigma_k(\xi)(p))+\Sigma_k([\xi,\eta])(p)&\text{by definition of ad}\\ &=f_k([\eta,[\xi,p]])+\L_{V_\eta}f_k([\xi,p])+\L_{V_\xi}f_k([\eta,p])+\L_{V_\xi}\L_{V_\eta}f_k(p)\\ &\quad{}-f_k([\xi,[\eta,p]])-\L_{V_\xi}f_k([\eta,p])-\L_{V_\eta}f_k([\xi,p])-\L_{V_\eta}\L_{V_\xi}f_k(p)\\ &\quad{}+f_k([[\xi,\eta],p])+\L_{V_{[\xi,\eta]}}f_k(p)\\ &=f_k([\eta,[\xi,p]])-f_k([\xi,[\eta,p]])+f_k([[\xi,\eta],p])\\ &\quad{}+\L_{V_\xi}\L_{V_\eta}f_k(p)-\L_{V_\eta}\L_{V_\xi}f_k(p)+\L_{V_{[\xi,\eta]}}f_k(p)\\ &=\L_{V_\xi}\L_{V_\eta}f_k(p)-\L_{V_\eta}\L_{V_\xi}f_k(p)+\L_{V_{[\xi,\eta]}}f_k(p)&\text{by the Jacobi identity}\\ &=0&\text{since $\L_{V_\xi}\L_{V_\eta}-\L_{V_\eta}\L_{V_\xi}=\L_{[V_\xi,V_\eta]}$ and $[V_\xi,V_\eta]=-V_{[\xi,\eta]}$}. \end{align*} \end{proof} As in symplectic geometry, we have that for a connected Lie group, a weak homotopy moment map is equivariant if and only if it is infinitesimally equivariant. That is, the $k$-th component of a weak homotopy moment map is equivariant if and only if $\sigma_k=0$, or equivalently $\Sigma_k=0$, and a weak homotopy moment map is equivariant if this holds for all $1\leq k\leq n$. Now that we have generalized the notions of equivariance from symplectic to multisymplectic geometry, we move on to study the existence and uniqueness of these weak homotopy moment maps.
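Before doing so, we record a brief consistency check, again up to the sign conventions for Hamiltonian vector fields: for $n=k=1$, Proposition \ref{comp of Sigma} gives \[\Sigma_1(\xi,\eta)=f_1([\xi,\eta])+\L_{V_\xi}f_1(\eta)=f_1([\xi,\eta])+V_\xi\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} df_1(\eta),\] and, up to sign, $V_\xi\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} df_1(\eta)=-\{f_1(\xi),f_1(\eta)\}$, so that $\Sigma_1$ recovers the classical cocycle $\Sigma$ of symplectic geometry.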
\section{Existence of Not Necessarily Equivariant Weak Moment Maps} In this section we show how the results on the existence of not necessarily equivariant moment maps in symplectic geometry generalize to multisymplectic geometry. For a connected Lie group $G$ acting symplectically on a symplectic manifold $(M,\omega)$, recall the following standard results from symplectic geometry. \begin{proposition}\label{bracket gives} For any $\xi,\eta\in\g$ we have \[[V_\xi,V_\eta]\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega=d(V_\xi\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} V_\eta\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega).\] \end{proposition} \begin{proposition}\label{H2} We have that $H^1(\g)=0$ if and only if $\g=[\g,\g]$. \end{proposition} Combining these two propositions gives: \begin{proposition}\label{H4} If $H^1(\g)=0$, then any symplectic action admits a moment map, which is not necessarily equivariant. \end{proposition} We now show how these results generalize to multisymplectic geometry. Let a connected Lie group act multisymplectically on an $n$-plectic manifold $(M,\omega)$. \begin{proposition}\label{existence 1} For arbitrary $q$ in $\Rho_{\g,k}$ and $\xi\in\g$ we have that \[[V_q,V_\xi]\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega= -(-1)^kd(V_q\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} V_\xi\mathbin{\!
\hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega).\] \end{proposition} \begin{proof} By linearity it suffices to consider decomposable $q=\eta_1\wedge\cdots\wedge\eta_k$. A quick computation shows that $[V_q,V_\xi]\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega=-V_{[q,\xi]}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$. It follows that \begin{align*} V_{[q,\xi]}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega&=V_{\partial({q\wedge\xi})}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega&\text{by Proposition \ref{wedge is Schouten}}\\ &=(-1)^kd(V_{q\wedge\xi}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega)-\sum_{i=1}^k(-1)^iV_{\eta_1\wedge\cdots\wedge\widehat \eta_i\wedge\cdots\wedge \eta_k\wedge\xi}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\L_{V_{\eta_i}}\omega -V_{q\wedge\xi}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} d\omega&\text{by Lemma \ref{extended Cartan}}\\ &=(-1)^kd(V_q\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} V_\xi\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega)&\text{since $G$ preserves $\omega$ and $d\omega=0$.} \end{align*} The claim now follows. \end{proof} The next proposition is a generalization of Proposition \ref{H2}. \begin{proposition}\label{kernel equals bracket} If $H^0(\g,\Rho_{\g,k}^\ast)=0$ then $\Rho_{\g,k}=[\Rho_{\g,k},\g]$. \end{proposition} \begin{proof} By equation (\ref{group differential 2}), an element $c\in H^0(\g,\Rho_{\g,k}^\ast)$ satisfies $c([\xi,p])=0$ for all $\xi\in\g$ and $p\in\Rho_{\g,k}$.
That is, \[H^0(\g,\Rho_{\g,k}^\ast)=[\Rho_{\g,k},\g]^0,\] where $[\Rho_{\g,k},\g]^0$ is the annihilator of $[\Rho_{\g,k},\g]$. \end{proof} We now arrive at our main theorem on the existence of not necessarily equivariant weak moment maps. The following is a generalization of Proposition \ref{H4}. \begin{theorem}\label{theorem existence} Let $G$ act multisymplectically on $(M,\omega)$. If $H^0(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})=0$, and $H^0(\g,\Omega^{n-k}_{\mathrm{cl}})\not=0$, then the $k$-th component of a not necessarily equivariant moment map exists. \end{theorem} \begin{proof} If $H^0(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})=0$ and $H^0(\g,\Omega^{n-k}_{\mathrm{cl}})\not=0$, then $H^0(\g,\Rho_{\g,k}^\ast)=0$ by the K\"unneth formula (see for example Theorem 3.6.3 of \cite{kunneth}). The claim now follows from Proposition \ref{kernel equals bracket} and Proposition \ref{existence 1}. Indeed, Proposition \ref{existence 1} says we may define a weak moment map on elements of the form $[p,\xi]$ by $(-1)^kV_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} V_\xi\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$, where $p\in\Rho_{\g,k}$ and $\xi\in\g$, and Proposition \ref{kernel equals bracket} says every element in $\Rho_{\g,k}$ is a sum of elements of this form. \end{proof} \begin{remark} Notice that for the case $n=k$, it is always true that $H^0(\g,\Omega^{n-k}_{\mathrm{cl}})\not=0$ since any non-zero constant function is closed. Hence Theorem \ref{theorem existence} gives a generalization of Theorems 3.5 and 3.14 of \cite{ms} and \cite{MS} respectively. Moreover, by taking $n=k=1$, we see that we obtain a generalization of the corresponding result from symplectic geometry. \end{remark} \begin{example}\label{example 1} Consider the multisymplectic manifold $(\R^4,\omega)$ where $\omega=\mathrm{vol}$ is the standard volume form.
That is, we are working in the case $n=3$. Let $x^1,\ldots,x^4$ denote the standard coordinates. Let $G=U(2)$ act on $\R^4$ by rotations. The corresponding Lie algebra action generates the vector fields \[E_0=-x^2\frac{\pd}{\pd x^1}+x^1\frac{\pd}{\pd x^2}+x^4\frac{\pd}{\pd x^3}-x^3\frac{\pd}{\pd x^4},\] \[E_1=x^3\frac{\pd}{\pd x^1}+x^4\frac{\pd}{\pd x^2}-x^1\frac{\pd}{\pd x^3}-x^2\frac{\pd}{\pd x^4},\] \[E_2=-x^2\frac{\pd}{\pd x^1}+x^1\frac{\pd}{\pd x^2}-x^4\frac{\pd}{\pd x^3}+x^3\frac{\pd}{\pd x^4},\] and \[E_3=-x^4\frac{\pd}{\pd x^1}+x^3\frac{\pd}{\pd x^2}-x^2\frac{\pd}{\pd x^3}+x^1\frac{\pd}{\pd x^4}.\] For the case $k=2$, consider the distance function $r=\sqrt{(x^1)^2+(x^2)^2+(x^3)^2+(x^4)^2}$. It is clear that the distance function is invariant under rotations and hence $\L_{E_i}dr=0$ for $i=0,1,2,3$. Since $dr$ is a closed $1$-form, it follows that $dr$ is a non-zero element of $H^0(\g,\Omega^{1}_{\mathrm{cl}}(M))$. That is, $H^0(\g,\Omega^{1}_{\mathrm{cl}}(M))\not=0$. For the case $k=1$, consider $\alpha:=dx^1\wedge dx^2+dx^3\wedge dx^4$. A quick calculation shows that $\L_{E_i}\alpha=0$ for $i=0,1,2,3$, so that $\alpha$ is invariant under the $\mathfrak{u}(2)$-action. Since $d\alpha=0$, it follows that $H^0(\g,\Omega^{2}_{\mathrm{cl}}(M))\not=0$ as well. Hence, by Theorem \ref{theorem existence}, it follows that a weak moment map exists. \end{example} The next example gives a scenario for which Theorem \ref{theorem existence} can only be applied to specific components of a weak moment map. \begin{example}Take the setup of Example \ref{example 1} but instead consider the action of $SO(4)$. As in Example \ref{example 1}, $dr$ is a non-zero closed $1$-form which is invariant under the action. That is, $H^0(\g,\Omega^{1}_{\mathrm{cl}}(M))\not=0$. However, in this setup, $H^0(\g,\Omega^{2}_{\mathrm{cl}}(M))=0$.
Indeed, the infinitesimal generators of $\mathfrak{so}(4)$ are of the form $x^i\frac{\pd}{\pd x^j}-x^j\frac{\pd}{\pd x^i}$ where $1\leq i<j\leq 4$. An arbitrary $2$-form may be written as $\beta=\sum_{i,j}a_{ij}dx^i\wedge dx^j$. A computation shows that the condition $\L_{V_\xi}\beta=0$ for all $\xi\in\mathfrak{so}(4)$ forces $\beta=0$. Hence $H^0(\g,\Omega^{2}_{\mathrm{cl}}(M))=0$. It follows that, in this case, Theorem \ref{theorem existence} guarantees the existence of the $2$nd component of a weak moment map, but does not guarantee the existence of the $1$st. \end{example} Another generalization of Proposition \ref{H4} to multisymplectic geometry is given by: \begin{proposition}\label{dont know} If $H^k(\g)=0$, then the $k$-th component of a not necessarily equivariant weak moment map exists. \end{proposition} \begin{proof} If $H^k(\g)=0$ then $\Rho_{\g,k}=\mathrm{Im}(\partial_{k+1})$, since $\Rho_{\g,k}=\mathrm{ker}(\partial_k)$. But for $p\in\mathrm{Im}(\partial_{k+1})$ we have that $p=\partial q$ for some $q\in\Lambda^{k+1}\g$. Then by Lemma \ref{extended Cartan} we have \[V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega=(-1)^kd(V_q\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega).\] Hence we may define $f_k(p)$ to be $(-1)^kV_q\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$. \end{proof} \begin{remark} Proposition \ref{dont know} gives another generalization of the results of Madsen and Swann. Indeed, by taking $n=k$ we again arrive at Theorems 3.5 and 3.14 of \cite{ms} and \cite{MS} respectively. \end{remark} Summarizing Theorem \ref{theorem existence} and Proposition \ref{dont know} we obtain: \begin{proposition}\label{theorem open question 1} If $H^1(\g)=\cdots=H^n(\g)=0$ then a not necessarily equivariant weak moment map $(f)$ exists.
\end{proposition} \begin{theorem} If, for all $1\leq k\leq n$, $H^0(\g,\Rho_{\g,k}^\ast)=0$, or equivalently $H^0(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})=0$ and $H^0(\g,\Omega^{n-k}_{\mathrm{cl}})\not=0$, then a not necessarily equivariant weak moment map $(f)$ exists. \end{theorem} In the next section we study when a non-equivariant weak moment map can be made equivariant. \section{Obtaining an Equivariant Moment Map from a Non-Equivariant Moment Map} In this section we show that the theory involved in obtaining an equivariant moment map from a non-equivariant moment map extends from symplectic to multisymplectic geometry.
We first recall the standard results from symplectic geometry. Proposition \ref{inf cocycle} shows that the map $\Sigma$ corresponding to a moment map $f$ is a Lie algebra $2$-cocycle. The next proposition says that if the cocycle is exact then $f$ can be made equivariant. \begin{proposition}\label{if exact} Let $f$ be a moment map and $\Sigma$ its corresponding cocycle. If $\Sigma=\partial(l)$ for some $l$, then $f+l$ is equivariant. \end{proposition} It follows from this that \begin{proposition}\label{obtain theorem} If $H^2(\g)=0$ then one can obtain an equivariant moment map from a non-equivariant moment map. \end{proposition} Now let $G$ be a connected Lie group acting on an $n$-plectic manifold $(M,\omega)$. The following proposition generalizes Proposition \ref{if exact} to multisymplectic geometry. \begin{proposition}\label{exact Sigma} Let $f_k$ be a weak homotopy $k$-moment map, and let $\Sigma_k$ denote its corresponding cocycle. If $\Sigma_k=\partial(l_k)$ for some $l_k\in\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}}$, then $f_k+l_k$ is equivariant. \end{proposition} \begin{proof} Fix $p\in\Rho_{\g,k}$ and $\xi\in\g$. Then \begin{align*} (f_k+l_k)([\xi,p])&=f_k([\xi,p])+l_k([\xi,p])\\ &=f_k([\xi,p])-((\partial l_k)(\xi))(p)+\L_{V_\xi}l_k(p)&\text{by equation (\ref{group differential 2})}\\ &=f_k([\xi,p])-\Sigma_k(\xi,p)+\L_{V_\xi}l_k(p)\\ &=\L_{V_\xi}f_k(p)+\L_{V_\xi}(l_k(p))&\text{by definition of $\Sigma_k$}\\ &=\L_{V_\xi}((f_k+l_k)(p)). \end{align*} \end{proof} We now arrive at our generalization of Proposition \ref{obtain theorem}: \begin{theorem} If $H^1(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})=0$ then any weak $k$-moment map can be made equivariant. In particular, if $H^1(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})=0$ for all $1\leq k \leq n$, then any weak moment map $(f)$ can be made equivariant. \end{theorem} \begin{proof} Let $f_k:\Rho_{\g,k}\to\Omega^{n-k}_\mathrm{Ham}$ be a weak $k$-moment map.
If $H^1(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})=0$ then the corresponding cocycle $\Sigma_k$ is exact, i.e. $\Sigma_k=\partial(l_k)$ for some $l_k\in\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}}$. It follows from Proposition \ref{exact Sigma} that $f_k+l_k$ is equivariant. \end{proof} \section{Uniqueness of Weak Moment Maps} We first recall the results from symplectic geometry without explicit proof. A proof can be found by setting $n=1$ (i.e. the symplectic case) in our more general Theorem \ref{dunno2}. Let $\g$ be a Lie algebra acting on a symplectic manifold $(M,\omega)$. \begin{proposition}\label{H5} If $f$ and $g$ are two equivariant moment maps, then $f-g$ is in $H^1(\g)$. \end{proposition} \begin{proof} For $\xi,\eta\in\g$ we have that \[(f-g)([\xi,\eta])=\{f(\xi),f(\eta)\}-\{g(\xi),g(\eta)\}=\{(f-g)(\xi),f(\eta)\}+\{g(\xi),(f-g)(\eta)\}\] since $f$ and $g$ are equivariant. However, $(f-g)(\xi)$ and $(f-g)(\eta)$ are constant functions since both $f$ and $g$ are moment maps. The claim now follows since the Poisson bracket with a constant function vanishes, so that $f-g$ annihilates $[\g,\g]$. \end{proof} From Proposition \ref{H5} it immediately follows that \begin{proposition}\label{H6} If $H^1(\g)=0$ then equivariant moment maps are unique. \end{proposition} The following is a generalization of Proposition \ref{H5}. \begin{proposition}\label{uniqueness in H^0} If $f_k$ and $g_k$ are $k$-th components of two equivariant weak moment maps, then $f_k-g_k$ is in $H^0(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})$. \end{proposition} \begin{proof} If $f_k$ and $g_k$ are equivariant then $(f_k-g_k)([\xi,p])=\L_{V_\xi}((f_k-g_k)(p))$. Moreover, $(f_k-g_k)(p)$ is closed since both $f_k$ and $g_k$ are moment maps. \end{proof} We now arrive at our generalization of Proposition \ref{H6}. Let $\g$ be a Lie algebra acting on an $n$-plectic manifold $(M,\omega)$. \begin{theorem}\label{dunno2} If $H^0(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})=0$, then equivariant weak $k$-moment maps are unique. In particular, if $H^0(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})=0$ for all $1\leq k \leq n$ then equivariant weak moment maps are unique.
\end{theorem} \begin{proof} If $f_k$ and $g_k$ are two equivariant weak $k$-moment maps, then Proposition \ref{uniqueness in H^0} shows that $f_k-g_k$ is in $H^0(\g,\Rho_{\g,k}^\ast\otimes\Omega^{n-k}_{\mathrm{cl}})$. \end{proof} \begin{remark} This theorem gives a generalization of the results of Madsen and Swann. Indeed, by taking $n=k$ we again arrive at Theorems 3.5 and 3.14 of \cite{ms} and \cite{MS} respectively. \end{remark} \section{Weak Moment Maps as Morphisms} Consider a connected Lie group $G$ acting symplectically on a symplectic manifold $(M,\omega)$. Let $f:\g\to C^\infty(M)$ be a moment map. By Definition \ref{inf equiv moment}, $f$ is equivariant if and only if $f$ is a Lie algebra morphism from $(\g,[\cdot,\cdot])$ to $(C^\infty(M),\{\cdot,\cdot\})$.
That is, if and only if \[f([\xi,\eta])=\{f(\xi),f(\eta)\}.\] Taking $d$ of both sides of this equation yields: \begin{proposition}\label{morph 1} A moment map $f$ induces a morphism into the quotient of $C^\infty(M)$ by the constant functions. That is, regardless of equivariance, a moment map induces a Lie algebra morphism from $(\g,[\cdot,\cdot])$ to $(C^\infty(M)/\mathrm{constant},\{\cdot,\cdot\})$. If, moreover, the moment map $f$ is equivariant, then $f$ is a morphism from $(\g,[\cdot,\cdot])$ to $(C^\infty(M),\{\cdot,\cdot\})$. \end{proposition} We now restate Proposition \ref{morph 1} in an equivalent form that allows for a direct generalization to multisymplectic geometry. Notice that $\g$ is a $\g$-module under the Lie bracket action and $C^\infty(M)$ is a $\g$-module under the action $\xi\cdot g=\L_{V_{\xi}}g$, where $\xi\in\g$ and $g\in C^\infty(M)$. Proposition \ref{morph 1} is equivalent to: \begin{proposition}\label{Morph 1} A moment map $f$ always induces a $\g$-module morphism from $\g$ to $C^\infty(M)/\mathrm{constant}$. Moreover, if the moment map $f$ is equivariant, then it is a $\g$-module morphism from $\g$ to $C^\infty(M)$. \end{proposition} Now let a connected Lie group $G$ act multisymplectically on an $n$-plectic manifold $(M,\omega)$. \begin{proposition} For any $1\leq k\leq n$, we have that $\Rho_{\g,k}$ is a $\g$-module under the action $\xi\cdot p = [p,\xi]$, where $p\in\Rho_{\g,k}$, $\xi\in\g$, and $[\cdot,\cdot]$ is the Schouten bracket. \end{proposition} \begin{proof} This follows since Proposition \ref{wedge is Schouten} shows that $[p,\xi]$ is in the Lie kernel. \end{proof} \begin{proposition} For any $1\leq k \leq n$, we have that $\Omega^{n-k}_{\mathrm{Ham}}(M)$ is a $\g$-module under the action $\xi\cdot\alpha=\L_{V_\xi}\alpha$, where $\alpha\in\Omega^{n-k}_{\mathrm{Ham}}(M)$ and $\xi\in\g$. \end{proposition} \begin{proof} Suppose that $\alpha\in\Omega^{n-k}_{\mathrm{Ham}}(M)$ is a Hamiltonian $(n-k)$-form.
Then $d\alpha=-X_\alpha\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega$ for some $X_\alpha\in\Gamma(\Lambda^k(TM))$. Hence, for $\xi\in\g$, \begin{align*} d\L_{V_\xi}\alpha&=-\L_{V_\xi}(X_\alpha\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega)\\ &=-\L_{V_\xi}(X_\alpha\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega)+X_\alpha\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\L_{V_\xi}\omega &\text{since $\L_{V_\xi}\omega=0$}\\ &=[V_\xi,X_\alpha]\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega &\text{by the product rule} \end{align*} Hence $\L_{V_\xi}\alpha$ is in $\Omega^{n-k}_{\mathrm{Ham}}(M)$. \end{proof} Our generalization of Proposition \ref{Morph 1} to multisymplectic geometry is: \begin{theorem}\label{morphism 1} For any $1\leq k \leq n$, the $k$-th component $f_k$ of a weak moment map is a $\g$-module morphism from $\Rho_{\g,k}$ to $\Omega^{n-k}_{\mathrm{Ham}}(M)/\mathrm{closed}$. Moreover, a weak $k$-moment map $f_k$ is equivariant if and only if it is a $\g$-module morphism from $\Rho_{\g,k}$ to $\Omega^{n-k}_{\mathrm{Ham}}(M)$. \end{theorem} \begin{proof} Suppose that $(f)$ is a weak moment map. Then, by definition \begin{align*} df_k([\xi,p])&=-\zeta(k)V_{[\xi,p]}\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega\\ &=-\zeta(k)[V_\xi,V_p]\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega\\ &=-\zeta(k)\L_{V_\xi}(V_p\mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}}\omega)\\ &=\zeta(k)\zeta(k)d\L_{V_\xi}f_k(p)\\ &=d\L_{V_\xi}f_k(p). \end{align*} This proves the first statement of the theorem. Now suppose $f_k$ is equivariant.
It follows that $\Sigma_k=0$. Thus, by Proposition \ref{comp of Sigma} we have $f_k([\xi,p])=\L_{V_\xi}f_k(p)$. Conversely, if $f_k$ is a $\g$-module morphism, then $f_k([\xi,p])=\L_{V_\xi}f_k(p)$ for every $\xi\in\g$ and $p\in\Rho_{\g,k}$. That is, $\Sigma_k=0$. \end{proof} \section{Open Questions} We end by noting some open questions naturally posed by the results in this paper. \begin{enumerate} \item Consider Theorem \ref{morphism 1}. In symplectic geometry, Proposition \ref{inf cocycle} shows that a moment map $f:\g\to C^\infty(M)$ induces a Lie algebra morphism from $(\g,[\cdot,\cdot])$ to the quotient space $(C^\infty(M)/\mathrm{constant},\{\cdot,\cdot\})$, and if $f$ is equivariant then it is a Lie algebra morphism from $(\g,[\cdot,\cdot])$ to $(C^\infty(M)/\mathrm{exact},\{\cdot,\cdot\})$.
Moreover, in \cite{me}, Proposition 4.10 showed that both $\Omega^\bullet_{\mathrm{Ham}}(M)/\mathrm{closed}$ and $\Omega^\bullet_{\mathrm{Ham}}(M)/\mathrm{exact}$ are graded Lie algebras, while Proposition 5.9 of \cite{me} showed that a weak homotopy moment map is always a graded Lie algebra morphism from $\Rho_\g$ to $\Omega^\bullet_{\mathrm{Ham}}(M)/\mathrm{closed}$. \hspace{0.2cm} Hence, a natural question is: if $(f)$ is an equivariant weak moment map, does it induce a graded Lie algebra morphism from $(\Rho_\g,[\cdot,\cdot])$ to $(\Omega^\bullet_{\mathrm{Ham}}(M)/\mathrm{exact},\{\cdot,\cdot\})$? And does the converse hold? \item In our work, we provided a couple of examples of $n$-plectic group actions to which our theory of the existence and uniqueness of moment maps could be applied. There are many other interesting $n$-plectic geometries; see for example \cite{questions}, \cite{me} and \cite{cq}. What does the work done in our paper say about the existence and uniqueness of moment maps in these setups? \item Given a weak moment map $(f)$ with $f_k:\Rho_{\g,k}\to\Omega^{n-k}_{\mathrm{Ham}}(M)$, does there exist a full homotopy moment map $(h)$ whose restriction to the Lie kernel is $(f)$? In particular, what is the relationship between the results on the existence and uniqueness of homotopy moment maps given in \cite{existence 1} and \cite{existence 2} and the results in this paper? \end{enumerate}
\section{Introduction} The surprising existence of short-period ($\sim$4~day) Jupiter-mass extra-solar planets (termed ``hot Jupiters''), confirmed by radial velocity measurements in the last decade, has shown us that planetary systems exist in patterns unlike those of our own Solar System. The class of hot Jupiter planets ($P<10$~d and $M\sin{i}<10\,M_{J}$) makes up $\sim$35\% of the planets discovered to date, and $\sim$0.7\% of the stars observed by the {\it Kepler} space mission host such a companion (Dong \& Zhu 2013). Given a fortuitous geometric alignment, an extra-solar planet may be observed to transit the host star as viewed from the Earth. Such a planetary transit is characterized by a small decrease in the observed brightness of the host star that repeats at the orbital period of the extra-solar planet. The probability that a typical hot Jupiter transits its host star is $\sim$10\% (Horne 2003), and hence, being conservative, $\sim$1 in 1400 stars will host a transiting extra-solar hot Jupiter. Five years after the discovery of the first extra-solar planet around a sun-like star (Mayor \& Queloz 1995), the extra-solar planet in orbit around HD~209458 was found to transit the stellar disk (Charbonneau et al. 2000, Henry et al. 2000). This hot Jupiter was already known to have $M_{p}\sin{i}=0.69\pm0.05\,M_{J}$\footnote{$M_{J}$ is Jupiter's mass, $M_{p}$ is the planet's mass and $i$ is the orbital inclination.} from the radial velocity measurements (Mazeh et al. 2000). Also, the spectral type, and hence the mass and radius, of the host star were already known. Consequently, the modelling of the two observed transit events allowed the measurement of the orbital inclination, which in turn allowed the true mass of HD~209458b to be calculated. Charbonneau et al. (2000) measured $i=87.1\pm0.2$ degrees, implying that $M_{p}=0.69\pm0.05\,M_{J}$. They also measured $R_{p}=1.27\pm0.03R_{J}$ from the transit fit (where $R_{p}$ is the planet radius).
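The conversion from the radial-velocity minimum mass to the true mass is a one-line calculation once the transit fit supplies the inclination. The following sketch (our own illustration, not part of any published analysis; the function name is ours) checks it against the HD~209458b values quoted above:

```python
import math

def true_mass(m_sin_i, inclination_deg):
    """Recover the true planetary mass from the radial-velocity minimum
    mass M*sin(i) once the transit fit supplies the inclination i."""
    return m_sin_i / math.sin(math.radians(inclination_deg))

# HD 209458b: M*sin(i) = 0.69 M_J and i = 87.1 deg give sin(i) ~ 0.9987,
# so the true mass is ~0.69 M_J, indistinguishable from the minimum mass
print(round(true_mass(0.69, 87.1), 3))  # 0.691
```

For near-edge-on orbits like this one, $\sin{i}$ is so close to unity that the correction is smaller than the quoted mass uncertainty.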
The importance of this result lies in the fact that for the first time the mass and radius of an extra-solar planet had been measured, not just a lower limit on the mass. Before this discovery, the radii of the extra-solar planets were unknown and hence their average densities were unknown. The average density derived for HD~209458b was $\sim0.38$ g/cm$^{3}$, significantly less than the average density of Saturn ($0.7$ g/cm$^{3}$), the least dense of the Solar System gas giants. This was proof that HD~209458b must be a gas giant rather than a rocky (terrestrial) planet, lending weight to the term hot Jupiter. Average density was not the only important quantity that could be calculated for an extra-solar planet for the first time. Other such quantities included surface gravity and effective temperature. Since the discovery of the transiting nature of HD~209458b, some 424 transiting planets have been confirmed with periods from 0.18 to 904 days\footnote{See exoplanet.eu}. This class of extra-solar planet now makes up a significant fraction of the 1047 known extra-solar planets to date. The statistics provided from these systems are helping to pin down the mass-radius relationship for these planets (Figure~\ref{fig1} reproduced from Chabrier et al. 2009). They are also providing a challenge to theories of planetary structure and evolution in order to explain the observed radii (Liu et al. 2008), which depend on stellar type, orbital distance, planet mass and age. The results from the modelling of planetary structure have implications for the planetary formation theories, especially in discriminating between the core-accretion model (Ida \& Lin 2004) and the gravitational instability scenario (Perryman 2000). The importance of improving extra-solar planet statistics is paramount for advancing these theories, and for helping to determine the exact definition of what constitutes a planet in the transition regime between brown dwarfs and planets. 
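The quoted bulk density follows directly from the measured mass and radius. The sketch below (our own illustration; the adopted Jovian constants are assumptions, and the exact result depends on them, giving $\sim$0.4 g/cm$^3$ rather than exactly the 0.38 g/cm$^3$ quoted above) shows the arithmetic:

```python
import math

M_JUP_KG = 1.898e27  # Jupiter's mass (kg); adopted constant
R_JUP_M = 7.1492e7   # Jupiter's equatorial radius (m); adopted constant

def bulk_density_cgs(mass_mj, radius_rj):
    """Mean density in g/cm^3 of a planet given in Jovian units."""
    mass_g = mass_mj * M_JUP_KG * 1.0e3
    radius_cm = radius_rj * R_JUP_M * 1.0e2
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return mass_g / volume_cm3

# HD 209458b (0.69 M_J, 1.27 R_J): ~0.4 g/cm^3, well below Saturn's 0.7
print(round(bulk_density_cgs(0.69, 1.27), 2))  # 0.42
```

Whatever constants are adopted, the density falls well below Saturn's 0.7 g/cm$^3$, which is the point of the comparison.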
\begin{figure}[htb] \includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{fig1.eps} \FigCap{Mass-radius relationship from the stellar to the planetary regime. The (black) solid and short-dashed lines correspond to models with solar composition, for two isochrones. The long-dashed line corresponds to an object with a $Z = 10$\% mass fraction of heavy elements. The observationally-determined values of HAT-P-2b and Corot-3b are indicated. Reproduced from Chabrier et al. (2009).} \label{fig1} \end{figure} The first major jump in the discoveries of transiting extra-solar planets arrived with the advent of ground-based wide-field surveys of bright stars. Many projects searching for transiting exoplanets have preceded QES. Among those that have met with success are SuperWASP, HAT, XO, TrES and KELT. These pioneering surveys pave the way for new projects such as QES, since they have tackled and solved many hardware and software issues. A prototypical ground-based exoplanet search project is the Wide-Angle Search for Planets (SuperWASP) survey (Pollacco et al. 2006). The SuperWASP survey employs eight camera units attached to a single robotic mount, where each camera uses an E2V $2048\times2048$ pixel professional CCD backing a 200mm f/1.8 Canon lens. Each unit has a field of view (FoV) of 60 square degrees at a scale of 14.2 arcsec per pixel, giving a field coverage of $\sim$500 square degrees, which is imaged once per minute. SuperWASP targets stars in the brightness regime from 8-13$^{th}$ magnitude, leading to the discovery of some 80 transiting exoplanets to date. Currently there is a gap in the magnitude range of stars being surveyed for transiting planets, and this is the range from 12-15$^{th}$~mag where SuperWASP targets become too faint for their instrumentation, and stars are too bright for the deeper surveys, like OGLE (Udalski et al. 2002), which start at 15$^{th}$ mag. 
Exploration of this range is important, because it increases the sampling volume for intrinsically faint K and M-dwarf stars, whose smaller radii facilitate the detection of transits by small planets. QES is designed to fill this gap and detect transiting planets in the range 10-14$^{th}$ magnitude by constructing survey equipment targeted at this magnitude range. We are planning to deploy a network of wide-field cameras at three sites around the globe to monitor stars for the presence of transit signals. The first site in New Mexico has been constructed and has been taking data since November 2009. The cameras are wide-field in order to concurrently monitor as many stars as possible. Since a typical hot Jupiter planet has only a $\sim$10\% probability of transiting the host star from geometrical considerations, and since it has come to light that $\sim$0.7\% of stars host a hot Jupiter, we expect that $\sim$1 in 1400 dwarf stars will show a $\sim$4 day periodic transit signal. In our 5.3$^\circ$ FoV, we will be observing anywhere from 10000 to 50000 stars simultaneously, the exact number depending on how close we point towards the Galactic plane. Three QES sites have been chosen strategically to provide better temporal coverage of northern and equatorial stars when combined with the SuperWASP data in future cooperation on chosen fields. The ``New Mexico Skies'' observing station, located in southern New Mexico at latitude~+32$^\circ$54'14", longitude~105$^\circ$31'44" and elevation 7200 feet, was chosen to complement the SuperWASP-North telescope on La Palma. Our aim is to deploy similar or improved facilities at two additional northern sites at complementary longitudes in order to establish ephemerides for transiting exoplanet candidates more rapidly.
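The expected candidate yield per field follows from the two rates quoted above. A minimal sketch of this arithmetic (our own illustration; the function name and the round-number star counts are assumptions for the example):

```python
def expected_transiting_hot_jupiters(n_stars, occurrence=0.007, p_transit=0.10):
    """Expected number of stars showing a hot-Jupiter transit signal for a
    given occurrence rate and geometric transit probability."""
    return n_stars * occurrence * p_transit

# 0.007 x 0.10 = 1/1429, i.e. the ~1-in-1400 figure quoted in the text
for n_stars in (10000, 50000):
    print(n_stars, expected_transiting_hot_jupiters(n_stars))
```

For the quoted field star counts this gives of order 7 to 35 transiting hot-Jupiter hosts per field, before any detection-efficiency losses.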
The QES project has developed a customised data pipeline using the {\tt DanDIA} image subtraction software\footnote{{\tt DanDIA} is built from the DanIDL library of {\tt IDL} routines available at http://www.danidl.co.uk} to process the data and an archive compatible with that of SuperWASP to handle the imaging and light curve datasets necessary for a project with a data rate that is similar to SuperWASP, both outlined in the later sections. We expect $\sim$50~GB of data per clear night from each site. The data are currently partially processed on site before being fully reduced by the pipeline software system and archived. The pipeline processing has so far been performed at the University of St~Andrews while the archive has been hosted at the Universities of Leicester and Warwick. Currently we are moving all processing and archive operations to the Qatar Environment and Energy Research Institute. \section{The First-Generation QES Wide-Field Camera System} \label{hardware} The first QES site, in New Mexico, hosts one wide-field camera system described here and summarised in Table~\ref{table1}. The camera system consists of four 400mm f/2.8 Canon lenses and one 200mm f/2.0 Canon lens, each with a FLI ProLine PL16801 camera with a 4K$\times$4K-pixel KAF-16801E CCD chip. Each CCD is chilled to a temperature of $-40^\circ$C to minimize the dark current inherent in such devices. All five cameras are mounted on a Mathis equatorial mount MI-750 (Figure~\ref{fig2}). The 400mm lenses each have a $5.3^\circ\times5.3^\circ$ FoV, a pixel scale of 4.64 arcsec/pixel and cover the magnitude range from 11-15$^{th}$ magnitude by employing an exposure time of 100~s. The 200mm f/2 lens has an $11^\circ\times11^\circ$ FoV, a pixel scale of 9.26 arcsec/pixel and covers the magnitude range from 8-12$^{th}$ mag by employing an exposure time of 60~s.
The FoV of the 200mm lens encapsulates the combined fields of the four 400mm lenses, providing photometry of all stars in the field in the range from 8-15$^{th}$ magnitude. \begin{figure}[htb] \includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{fig2.eps} \FigCap{The first QES observing station, in New Mexico, is fully functioning. It consists of four 400mm lenses and one smaller 200mm lens.} \label{fig2} \end{figure} \MakeTable{||l|r||}{12.5cm}{First-Generation QES Wide-Field Camera System.} {\hline CCD & 5$\times$ KAF-16801E-2, 4k$\times$4k \\ \hline Lenses & 4$\times$ 400mm f/2.8, 1$\times$ 200mm f/2 \\ \hline Mount & Equatorial fork-type mount \\ \hline FoV & $5.24^\circ\times5.24^\circ$ per CCD for 400mm lens \\ \hline Pixel scale & 4.64 arcsec/pix for 400mm lens \\ \hline Gain & 1.7 e$^-$/ADU \\ \hline Peak Q.E. & 65\% \\ \hline Zero point & 1 ADU/s at 23 mag \\ \hline } \label{table1} The focus of the camera lenses is very important for data reduction by difference image analysis (DIA). This is because all DIA algorithms have problems constructing a kernel solution from images with under-sampled PSFs. To investigate the focus effects, different hardware focus setups were used during the first four months of operations. Data taken from 2009~November to 2010~January with the lenses focused provided images with an average FWHM of 1.7 pixels. During 2010 February and March data were taken with the focus set to blur the images to a FWHM of $\sim$3.5 pixels. Comparing the data from the two campaigns showed that the out-of-focus data gave better results than the in-focus data when the out-of-focus data had a FWHM close to the target value. However, it was found that by defocusing the lenses the FWHM became very variable and sometimes produced donut-shaped PSFs. As this resulted in poor photometry, we decided to focus the lenses until hardware upgrades could be made to automatically stabilise the out-of-focus data.
The data acquisition system (DAS), for automatic scheduling and image acquisition, consists of locally networked Windows PCs with one PC assigned to each CCD camera. A master PC provides control of the mount and synchronises the remaining slave PCs, coordinating slews, focusing, and the acquisition of calibration and science frames. For example, once a slew is completed as instructed by the master, the master begins its exposure and instructs the slaves to begin theirs. CCDAutoPilot version 4 is a proprietary, closed-source commercial product that is used extensively by the amateur astronomy community. It was modified by the author (Smith) to provide continuous operation, master/slave control of the multiple systems, coordinated data and calibration frames including sky flats, simultaneous focusing and appropriate file and folder naming for compatibility with SuperWASP. It acts as an executive program controlling other programs for mount slewing, camera operation and focusing via the Windows ActiveX interface. Other software used is TheSky6 by Software Bisque for telescope control, MaxIm DL by Diffraction Ltd for camera control, FocusMax by Larry Weber and Steve Brady for focusing, and PinPoint by DC-3 Dreams for plate solving and WCS insertion. An instance of CCDAutoPilot runs on each PC and has been modified to run continuously, night after night, without operator intervention as well as for compatibility with the pipeline-processing program. The software handles weather interrupts by idling the system until the weather clears. If the dome is closed due to adverse weather, the system continues to idle so that it can continue data acquisition once the weather clears and the dome is opened. For certain session phases, the slaves are autonomous. For example, when sky flats are initiated, all systems determine their own exposures to achieve the targeted signal level via an automatic exposure routine.
In a similar manner, the desired number of dark and bias frames is acquired. When it is desired to update the focus, each system is instructed to run an automatic focus routine. After an activity is instructed to begin, all systems report back when they have completed that activity. When all have reported an idle condition, the systems are instructed to begin the next activity. The target list is specified by a simple text file and is defined for the year. Another simple text file defines the base exposure time and cooling temperature for each camera. These text files can be accessed by non-Windows PCs, thus not requiring direct access to the DAS. Each evening the software determines the target to be used, based on its elevation and proximity to the moon. When that target sinks to a specified elevation in the west, another target is chosen by the same selection process. WCS coordinates are inserted into the FITS headers of the science frames at the end of the evening's session and the data are subsequently presented for transfer by the pipeline.
\begin{figure}[htb] \includegraphics[width=0.5\textwidth,height=0.5\textheight,keepaspectratio]{fig3a.eps} \includegraphics[width=0.5\textwidth,height=0.5\textheight,keepaspectratio]{fig3b.eps} \FigCap{The 5.3$^{\circ}\times$5.3$^{\circ}$ fields of view of the 400mm lenses are delineated by the smaller boxes (dotted squares) labelled 1 through 4. The 11$^{\circ}\times$11$^{\circ}$ field of view of the 200mm lens is delineated by the larger enclosing box (solid square). The whole system moves from pointing A to pointing D continuously all night to cover in total $\sim$400 square degrees.} \label{fig3} \end{figure} During readout of the CCD images from the 400mm cameras, the mount slews from pointing A to B (Figure~\ref{fig3}). Once there, the 400mm cameras again take 100~s exposures while the 200mm camera takes a 60~s exposure. The slew and exposure sequence then repeats, moving from pointing B to C, and then to pointing D. This full cycle of 4 pointings takes approximately 8 minutes to complete, including the 20~s CCD readout times. Faint stars in the 12-15$^{th}$ mag range have a photometric measurement from the 400mm camera system, brighter 8-10$^{th}$ mag stars have a photometric measurement from the 200mm camera, and stars in the intermediate range 11-12$^{th}$ mag are recorded by both the 200mm and 400mm cameras. Thus every 8 minutes a sky area of $\sim$400 square degrees is recorded by both the 200mm and 400mm cameras. The cycle of measurements is repeated at an observing site throughout the night so long as the chosen field remains higher than 30 degrees above the horizon, below which the airmass is too high for reliable wide-field photometry. A field setting at one site can be picked up by the next site to the west, providing a capability for continuous coverage of the field apart from occasional breaks due to bad weather. 
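The quoted $\sim$8-minute cadence follows from the exposure and readout times above. A minimal sketch of this timing budget (our own illustration; setting the slew time to zero encodes the assumption, stated above, that the slew is hidden under the CCD readout):

```python
def cycle_time_s(n_pointings=4, exposure_s=100.0, readout_s=20.0, slew_s=0.0):
    """Duration of one full A-B-C-D mosaic cycle.  The mount slews during
    CCD readout, so the per-pointing overhead is whichever of the two is
    longer; slew_s = 0 assumes the slew always hides under the readout."""
    overhead = max(readout_s, slew_s)
    return n_pointings * (exposure_s + overhead)

# 4 pointings x (100 s exposure + 20 s readout) = 480 s, i.e. the
# ~8-minute cadence quoted for the ~400 square-degree mosaic
print(cycle_time_s() / 60.0)  # 8.0
```

If a slew ever took longer than the readout, the cycle time would stretch accordingly, which is why the overhead is modelled as the maximum of the two.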
As we are aiming to capture the transits of planets with periods from $\sim$0.5-10 days, we observe each field for a period of 2 months before moving to the next field. In one year, we can cover up to 10 fields. \section{QES Data Reduction} We employ difference image analysis based on the {\tt DanDIA} software to achieve optimal extraction of light curves from the QES images. The pipeline has a control program to make it fully automatic. The difference image analysis procedure begins by selecting the sharpest (best seeing) image as the reference image. Constraints are imposed on sky background and sky transparency to prevent selecting a reference image with a sharp focus but a poor signal-to-noise ratio due to thin cloud and/or a bright moon-lit sky background. For each detected star, the flux measured on the reference image (referred to as the reference flux, $f_0$) is found by optimal scaling of the star's point-spread function (PSF) to fit the reference image data in a pixel box centered on the star's position. The PSF varies considerably over the wide field of the QES images, and we account for this by using a spatially-variable empirical PSF model fitted to all of the detected PSF-like images. Deblending of very close objects is attempted. Stars are matched between each image in the sequence and the reference image, and a linear coordinate transformation is derived and used to resample the images to register them with the star positions on the reference image. As mentioned in Section~\ref{hardware}, in-focus QES images have an under-sampled PSF and yield poor difference image results. We find that {\tt DanDIA} gives better photometry on these under-sampled images if we first convolve the reference and registered images with a Gaussian function that blurs the star images into adjacent pixels.
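The pre-blurring step just described can be sketched as follows. This is our own minimal illustration, not the {\tt DanDIA} implementation; it assumes a circular Gaussian kernel parametrized by its FWHM, truncated at three standard deviations:

```python
import numpy as np

def fwhm_to_sigma(fwhm):
    # FWHM = 2 sqrt(2 ln 2) * sigma ~ 2.3548 * sigma for a Gaussian
    return fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def gaussian_blur(image, fwhm_pix):
    """Blur an image with a circular Gaussian of the given FWHM (pixels),
    applied as two 1-D convolutions since the kernel is separable."""
    sigma = fwhm_to_sigma(fwhm_pix)
    radius = int(np.ceil(3.0 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, out)

# a point source blurred by FWHM = 2.5 pixels keeps its total flux
img = np.zeros((15, 15))
img[7, 7] = 1.0
print(round(float(gaussian_blur(img, 2.5).sum()), 6))  # 1.0
```

Because the normalized kernel conserves flux, the blur spreads an under-sampled stellar profile over adjacent pixels without changing the photometry it carries.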
Figure~\ref{fig4} shows how the Gaussian function's full-width at half-maximum (FWHM) affects the light curve RMS for stars in different magnitude bins. From the figure we see that the best results arise after blurring the images with a Gaussian of FWHM between 1.5 and 2.5 pixels. Based on these results, we keep the QES cameras in focus, but blur the reference and registered images by a Gaussian FWHM of 2.5 pixels before performing image subtraction. \begin{figure}[htb] \includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{fig4.eps} \FigCap{The effect of different convolving Gaussians used to blur in-focus images, with FWHM of $\sim$1.7~pixels, on the light curve RMS scatter once the data has been processed by difference image analysis. The figure shows the effect of the different Gaussian blurs for 6 different instrumental magnitude bins corresponding roughly to V mag 11, 12, 13, 14, 15 and 16 respectively.} \label{fig4} \end{figure} Image subtraction is performed using the methods from Bramich (2008). We sub-divide the images into a $14\times14$ grid of cells and determine for each cell a kernel function, modelled as a pixel array, derived by matching the PSF in each cell of the reference image with the PSF in the corresponding cell of the registered image. The kernel function for each image pixel is obtained by bi-linear interpolation in the grid of kernels. The reference image, convolved with the appropriate kernel function, is then subtracted from each registered image to produce a sequence of difference images. The differential flux, $\Delta f$, for each star detected in the reference image is measured on each difference image as follows. The empirical PSF at the measured position of the star on the reference image is determined by shifting the empirical PSF model corresponding to the nearest pixel by the appropriate sub-pixel shift using image resampling.
The empirical PSF model is then convolved with the kernel model corresponding to the star position on the difference image. Finally, the PSF is optimally scaled to fit the difference image at the star position using pixel variances following a standard CCD noise model. As the reference image is convolved and scaled to match the registered image, the flux change $\Delta f$ needs to be scaled by the photometric scale factor $p$ derived from the image subtraction kernels as described in Bramich (2008). To correct for partial cloud cover and transparency variations across the large field of view of the lenses, the photometric scale factor is allowed to be spatially variable. This improves the light curve quality by better modelling the poorer-quality data (Bramich et al. 2013). The photometric scale factor is therefore solved for in each grid cell, and interpolated in the same way as the kernel and the differential background. Light curves for each star are constructed by calculating the total flux $f(t)=f_0+\Delta f(t)/p(t)$ at time $t$ as the sum of the reference flux $f_0$ and the time-dependent difference flux $\Delta f(t)$, corrected by the time-dependent (and spatially variable) photometric scale factor $p(t)$. Fluxes are then converted to instrumental magnitudes via the standard formula $m=25-2.5\log_{10}(f)$, where $m$ is magnitude and $f$ is flux (ADU/s). To reduce the number of faint stars with a signal-to-noise ratio too low for exoplanet detection, the stars detected by the pipeline are matched with the UCAC3 catalogue (Zacharias et al. 2010) and any objects that lack a matching UCAC3 star are not passed to the archive. A magnitude zero-point correction is also performed between the reference magnitudes and the UCAC3 magnitudes for all the stars in a given field using a global SVD fit. The resulting absolute photometric correction has a mean RMS scatter of approximately 0.1 magnitudes.
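The light curve assembly described above reduces to two formulas. A minimal sketch (our own illustration, not pipeline code; we read the logarithm in the magnitude formula as $\log_{10}$, the usual astronomical convention):

```python
import numpy as np

def lightcurve_mags(f0, delta_f, p):
    """Total flux f(t) = f0 + delta_f(t) / p(t), converted to instrumental
    magnitudes m = 25 - 2.5 log10(f), with f in ADU/s."""
    f = f0 + np.asarray(delta_f, dtype=float) / np.asarray(p, dtype=float)
    return 25.0 - 2.5 * np.log10(f)

# a star of reference flux 100 ADU/s with no flux change sits at m = 20;
# a 1% transit dip raises the magnitude by ~0.011 mag
print(lightcurve_mags(100.0, [0.0, -1.0], [1.0, 1.0]))
```

The division by $p(t)$ is what removes the transparency variations, so that a genuine 1\% transit produces the same $\sim$0.011 mag signal whether or not thin cloud scaled the raw counts.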
For the first week of observations of any field, the images are simply calibrated with the standard bias, dark and flat-field corrections. At the end of the first seven nights of clear weather, the best-seeing image with an acceptable sky background is automatically chosen by the software as the reference image. The best-seeing image is required since this image will be convolution matched to every other image. The pipeline then produces differential photometry via difference image analysis as described above for all images to date, and on a nightly basis for all subsequent observations. The results of the reductions are automatically uploaded for ingestion into the database archive. \section{QES Analysis Strategy and Archive} The reduced photometric data are stored in a data archive system based on that developed for SuperWASP and described in detail by Pollacco et al. (2006). The data from each of the survey fields are treated to remove trends due to instrumental systematic errors using the SYSREM algorithm (Tamuz et al. 2005, Collier Cameron et al. 2006) and the Trend Filtering Algorithm (Kov\'acs et al. 2005). To search for transit signatures we use an evolution of the box least-squares (BLS) detection scheme described by Collier Cameron et al. (2006). The BLS search covers a period range from 0.35 to 10.0 days. At present we exclude periods in the ranges 0.53--0.57 and 0.95--1.05 days, as detections in these period ranges are almost invariably spurious and due to residual instrumental systematic effects. The BLS code is tuned to search for box-like signatures with durations in the range 1.5--3.75~h. We investigate the performance of the latest methods (BLS, AoVtr, etc., see Tingley 2003 for a review) on our data sets, and apply more than one method to identify the most convincing transit candidates.
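The period cuts quoted above are simple interval tests; a minimal sketch (our own function name, with the search range and excluded bands taken from the text) is:

```python
def is_searchable_period(p_days):
    """Accept a trial BLS period only if it lies inside the searched range
    0.35-10.0 d and outside the excluded bands 0.53-0.57 d and 0.95-1.05 d,
    where detections are almost invariably residual instrumental systematics."""
    if not 0.35 <= p_days <= 10.0:
        return False
    excluded = [(0.53, 0.57), (0.95, 1.05)]
    return not any(lo <= p_days <= hi for lo, hi in excluded)
```

For example, a 3.58-day candidate passes, while a 1.00-day candidate is rejected as a likely diurnal systematic.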
Once a candidate transit signature is detected by the BLS code, its parameters are further refined using a Markov chain Monte Carlo (MCMC) algorithm as described by Collier Cameron et al. (2007). The results from the MCMC analyses of the candidate transits are then imported into a database, and subjected to manual filtering, eye-balling and prioritization before being fed into the follow-up programme. \section{QES Follow-Up Strategy} The first stage in the follow-up of convincing transit candidates is to estimate the stellar density and the planetary radius by fitting the transit profiles from the survey data themselves. We use the pre-filtering methodology developed by the WASP Project (Collier Cameron et al. 2007) to identify candidate planetary systems. The stellar effective temperature is estimated from the 2MASS $J-H$ colour index. This yields an estimate of the stellar mass under the assumption that the star is on the main sequence, and a set of non-linear limb-darkening coefficients as tabulated by Claret (2000). We use the small-planet model of Mandel \& Agol (2002) to fit the transit light curve as a function of the epoch $T_{0}$ of mid-transit, the orbital period $P$, the total duration $t_{T}$ from first to fourth contact, the ratio $R_p/R_\star$ of the planetary to the stellar radius, and the impact parameter $b$ of the planet's trajectory across the stellar disc. A Markov chain Monte Carlo (MCMC) algorithm is used to determine the posterior probability distributions for each of the fitting parameters. The posterior probability distribution for the planetary radius yields the probability that the planet has a radius less than 1.5 times that of Jupiter. We also determine the departure of the fitted stellar radius from the main-sequence value expected for a star of the catalogued $J-H$ colour. As Sozzetti et al.
(2007) have noted, the stellar density is related in a fundamental and model-independent way to the ratio of the transit duration $t_{T}$ to the orbital period $P$. The location of the system in a plot of $R_\star/M_\star^{1/3}$ versus $T_{\rm eff}$ gives a direct assessment of the star's proximity to the main sequence. Many astrophysical false-positive configurations in which an eclipsing stellar binary is blended with a brighter star can be detected and eliminated because the stellar density derived from the transit duration is inconsistent with the effective temperature derived from the $J-H$ colour. At this stage we also fit a cosine curve with half the orbital period to determine the amplitude and significance of any ellipsoidal variation out of transit. Any significant tidal distortion of the primary indicates that the orbiting companion must be of stellar mass, eliminating the system as a planet candidate (Sirko \& Paczy\'nski 2003). For candidates whose transit parameters indicate an object of planet-like radius orbiting a star that appears to be on the main sequence, the next step is to obtain high-accuracy high-cadence light curves covering the suspected transit event, and covering the light curve phase where a secondary eclipse might occur if the system is an eclipsing binary. For the brightness range of our survey target stars, this can be achieved by a 1m-class telescope. Transit candidates are rejected upon the detection of secondary eclipses, ellipsoidal variations, and/or heating effects, all indicative of an eclipsing binary rather than a bona fide transiting planet. The higher angular resolution of the 1m-class telescope helps us to resolve cases where the transits originate in a nearby, faint stellar binary located close to a brighter star. Further MCMC analysis of the follow-up transit light curve and host star properties (colour, spectral type etc.) 
yields a minimum radius for the transiting body, which can be used to reject a planetary transit candidate if its value is too large ($> 2 R_{J}$). For the purpose of follow-up observations, the SuperWASP Alsubai Follow-up Telescope (SAFT) is being constructed as a 1-m robotic telescope facility on La Palma in the Canary Islands. The telescope is intended solely for transit candidate follow-up, shared with the SuperWASP Project. The QES Project will access $\sim$35\% of the observing time in return for the investment in the construction and running costs. The project has been granted permission to place the telescope at the observatory site, and construction is in progress, with completion expected in early 2014. Candidates that pass the photometric follow-up stage are then placed on candidate lists for radial-velocity measurements. The existing collaborations between the WASP Consortium and the Geneva CORALIE team and the French SOPHIE consortium have developed a highly efficient and successful strategy modelled on that described by Pont et al. (2005) for Doppler follow-up of OGLE transit candidates. An initial reconnaissance observation is used to screen for extreme rotation (which precludes determination of a planetary orbit) or obvious double-lined spectroscopic binaries. Surviving candidates are re-observed a day or so later, to eliminate single-lined binaries with unseen companions of stellar mass. Subsequent radial velocity observations target the quadrature phases of the orbit, to determine the total radial-velocity amplitude and hence the planetary mass. Further observations are then obtained to determine the radial acceleration near both conjunctions, to estimate the orbital eccentricity. For candidates brighter than $V=13$~mag or so, we combine our candidate lists with the WASP programmes on CORALIE in the southern hemisphere and SOPHIE in the north.
For fainter candidates we will submit our own observing proposals to carry out radial-velocity follow-up using 4-10m-class telescopes. \section{Initial Results from QES} Figures \ref{fig5} and \ref{fig6} show the RMS of the magnitude residuals for two fields taken with the 200mm and the 400mm cameras, respectively. These plots show the residuals before and after being detrended with the SYSREM algorithm. It can be seen from the two figures that, as planned, the 400mm camera samples a much fainter magnitude range with good RMS precision. The 200mm camera is imaging the bright stars and providing an overlap in detection for stars in the $V$ magnitude range 10 to 12. Many QES candidates are discovered independently by both the 200mm and 400mm cameras in the magnitude range where the two systems overlap. This dual detection capability adds confidence that the detected transits are real and not due to systematic errors. The QES data are also producing transiting candidates around much fainter stars, with some $V\sim15$~mag stars being flagged as candidates. \begin{figure}[htb] \includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{fig5.eps} \FigCap{RMS plot showing the star magnitude residuals for the 200mm camera for a whole field's worth of data. The RMS scatter is shown before (grey) and after (red) being detrended with the SYSREM algorithm.} \label{fig5} \end{figure} \begin{figure}[htb] \includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{fig6.eps} \FigCap{RMS plot of the star magnitude residuals for one of the 400mm cameras for a whole field's worth of data.
The RMS scatter is shown before (grey) and after (red) being detrended with the SYSREM algorithm.} \label{fig6} \end{figure} \begin{figure}[htb] \includegraphics[width=0.5\textwidth,height=0.5\textheight,keepaspectratio]{fig8.eps} \includegraphics[width=0.5\textwidth,height=0.5\textheight,keepaspectratio]{fig8a.eps} \FigCap{QES light curve and periodogram of WASP-37b, an exoplanet detected by SuperWASP (Simpson et al. 2011) and independently flagged in the QES database.} \label{fig8} \end{figure} \begin{figure}[htb] \includegraphics[width=0.5\textwidth,height=0.5\textheight,keepaspectratio]{fig9.eps} \includegraphics[width=0.5\textwidth,height=0.5\textheight,keepaspectratio]{fig9a.eps} \FigCap{QES light curve and periodogram of WASP-36b, an exoplanet detected by SuperWASP (Smith et al. 2012) and independently flagged in the QES database.} \label{fig9} \end{figure} QES is also independently identifying a number of transiting exoplanets found first by other surveys such as SuperWASP. Several of these planets were independently flagged as A-list candidates in the QES data before later checks with the SuperWASP database showed them to already be confirmed planets. Figures~\ref{fig8} and \ref{fig9} show two such examples. Figure~\ref{fig8} shows the QES light curve and corresponding periodogram of WASP-37 obtained with one of the 400mm cameras. The QES data reveal the transit signature, and the QES periodogram identifies the correct period independently of the SuperWASP data that first found the transits. WASP-37b is a $1.8~M_J$, $1.2~R_J$ planet in a 3.58~day orbit around a $V=12.7$~mag metal-poor G2 dwarf (Simpson et al. 2011). Figure~\ref{fig9} shows the QES light curve and periodogram for WASP-36. This $V=12.7$~mag metal-poor G2 dwarf hosts a $2.3~M_J$, $1.3~R_J$ planet in a 1.54~day orbit (Smith et al. 2012). The transit is clearly detected in the QES light curve from one of the 400mm cameras, and the QES periodogram identifies the correct period.
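Recovering a transit signature like those in Figs.~\ref{fig8} and \ref{fig9} amounts to folding the light curve on the periodogram period; a minimal sketch of the folding step (our own code, not the archive's pipeline) is:

```python
import numpy as np

def phase_fold(t, period, t0=0.0):
    """Orbital phase in [0, 1) for observation times t (days), given the
    detected period and a mid-transit epoch t0; plotting magnitude against
    this phase stacks all observed transits on top of each other."""
    return np.mod((np.asarray(t, dtype=float) - t0) / period, 1.0)

# fold toy timestamps on WASP-37b's 3.58-day period quoted in the text
phases = phase_fold([0.0, 1.79, 3.58, 5.37], 3.58)
```

Times separated by an integer number of periods collapse onto the same phase, which is what makes the shallow transit visible above the single-epoch noise.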
The QES project has been collecting data since mid November 2009. The first 34 fields that were processed and ingested into the archive yielded 1,863,375,935 photometric data points on a total of 951,417 stars. QES has identified hundreds of promising candidates, dozens of which have been promoted to the A-list for photometric and radial velocity follow-up. The first two QES planets, Qatar~1b (Alsubai et al. 2011) and Qatar~2b (Bryan et al. 2012), have already been confirmed. The next stage in the development of QES is a planned deployment of similar camera systems at two complementary longitudes in the northern hemisphere. The more nearly continuous temporal coverage afforded by a 3-site survey should greatly reduce the time needed to identify candidates and establish reliable transit ephemerides prior to photometric and radial velocity follow-up observations. Given the quality of the light curves that the QES project is producing and the effective validation of candidate filtering methods, we anticipate that QES will find many more transiting exoplanets, among which will be hot Saturns and hot Neptunes orbiting stars bright enough for follow-up investigations. \Acknow{KA, KH, ACC and DMB acknowledge the Qatar Foundation for support from QNRF grant NPRP-09-476-1-078. KH is supported by a Royal Society Leverhulme Trust Senior Research Fellowship.}
\section{Approach} \label{sec:approach} \vspace{-5mm} Our robot is equipped with a stereo camera that generates a dense map in general environments. However, as pointed out before, there are few feature points on the water surface, and stereo matching does not work on the dynamic surface due to water ripples caused by rotor downwash. Therefore, in addition to using the stereo camera, we also use monocular cues to detect the water's surface. This section is divided into two parts on the basis of the two approaches followed. \subsection{Monocular Approach} \vspace{-5mm} Similar to prior monocular work \cite{bosch2006autonomous, cheng2010real}, we use planar homography to determine the suitability of an area for safe landing. However, instead of using sparse feature matching, we use dense optical flow to compute the homography. For every two successive images, we densely sample pixels (every 20 pixels in our case) on the image, denoted as $P$, then compute the dense optical flow $Q$ shown in Fig \ref{fig:mono flow} using the Lucas-Kanade algorithm \cite{baker2004lucas}. Finally, the planar homography can be found using all sampled pixels, and the homography error $e$ indicates whether the area is planar and safe to land on. A low-pass filter is applied to the homography error to remove noise. As can be seen from the bottom of Fig \ref{fig:mono flow}, there is a large homography error when the robot approaches the water surface, while on rigid ground the error does not increase much. \begin{equation} e = \min_H \|P-H(P+Q)\|_2 \end{equation} \begin{figure}[ht] \centering \includegraphics[scale=0.43]{images/sample_flow.PNG} \includegraphics[scale=0.27]{images/flow22hist.pdf} \includegraphics[scale=0.27]{images/flow33hist.pdf} \caption{ Dense optical flow pattern and planar homography error in different environments.
From the bottom left, we can see that the planar error increases greatly when the robot approaches the water surface and downwash from its rotors starts generating ripples. Flying over a rigid surface, the flow error does not change much. } \label{fig:mono flow} \end{figure} \vspace{-7mm} \subsection{Stereo Approach} \vspace{-5mm} This part utilizes the stereo images to compute surface depth information. The resulting point cloud is split into a grid, and each cell within the grid is evaluated on roughness and slope metrics. (1) A fast block matching method is used to generate point clouds, which are subsequently aligned to the gravity vector using IMU information fused through a Madgwick filter \cite{madgwick2010efficient}. (2) The point cloud is divided into a grid wherein each cell represents a 0.5 m x 0.5 m area. (3) A least squares plane fit is computed for each cell in the grid. The normal and residual are used to compute the slope and roughness of the surface within the cell. (4) A 1 sq. m area lying directly below the drone is considered for safe landing. Based on the roughness and slope thresholds, the algorithm sends a Boolean command to the landing controller for the drone to either proceed with landing or to stop. The output from one run is shown in Fig \ref{fig:tables_demo}. The 1 sq. m area under the drone is colored yellow for dangerous and green for safe. Outside the 1 sq. m area, red indicates danger and blue indicates safety. If any part of the area under the drone is unsafe, the algorithm asks the landing controller to abort the descent. \begin{figure}[ht] \centering \includegraphics[width=1\textwidth]{images/tables_demo_combined.png} \caption{ Output of the stereo geometric evaluation pipeline. The drone flies over a set of tables. The middle picture shows the left camera output and the rightmost picture shows the output of the algorithm.
} \label{fig:tables_demo} \end{figure} \section{Introduction} \label{sec:intro} \vspace{-5mm} Safety is an important factor in drone autonomy. How does a drone guarantee safety in the different scenarios that come up in autonomous flight? Landing is one such scenario, requiring the drone to successfully evaluate potential zones that are safe to land on. This places great importance on the perception system of the drone to provide accurate and robust feedback about the landing surface. There has been extensive work on the evaluation of unprepared landing sites for autonomous aerial vehicles. Most vision-based approaches rely on a monocular/stereo camera and accurate motion estimation to compute a dense surface reconstruction. Micro unmanned aerial vehicles (UAVs) often operate in GPS-denied environments without accurate global state estimation \cite{fang2017robust}; therefore, they are usually equipped with a camera and an inertial measurement unit (IMU). Our algorithm relies only on stereo and IMU information and is designed to run fast and reliably. It is designed for the last few metres of the landing maneuver, which makes it reactive to scene changes on the surface below, such as people walking underneath. The system consists of two parts. One part analyses the surface below the drone using a custom stereo camera and inertial sensor combination. Slope and roughness are the two metrics used to evaluate the terrain and decide whether it is safe to land on. The second part is a fast, monocular image based approach that is able to detect the presence of non-rigid surfaces such as water or grass, which would normally be difficult for the stereo based approach to detect. We also built our own dataset consisting of 10 outdoor scenes with dense disparity ground truth to help us select an appropriate stereo matching algorithm to use in our work.
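The per-cell slope and roughness evaluation mentioned above reduces to a least-squares plane fit over each cell's gravity-aligned points; the sketch below (our own function names, and illustrative thresholds rather than the values used in our system) shows the idea:

```python
import numpy as np

def cell_slope_roughness(points):
    """Fit a plane z = a*x + b*y + c to a cell's gravity-aligned point cloud
    (N x 3 array). Returns the slope angle in degrees of the fitted plane
    relative to horizontal, and the RMS residual as a roughness measure."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeff, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    a, b, _ = coeff
    slope_deg = float(np.degrees(np.arctan(np.hypot(a, b))))  # tilt vs. horizontal
    residuals = pts[:, 2] - A @ coeff
    roughness = float(np.sqrt(np.mean(residuals ** 2)))
    return slope_deg, roughness

def cell_is_safe(points, max_slope_deg=10.0, max_roughness=0.05):
    """Per-cell Boolean safety decision; thresholds here are illustrative."""
    slope, rough = cell_slope_roughness(points)
    return slope <= max_slope_deg and rough <= max_roughness
```

A flat horizontal cell yields near-zero slope and roughness and is accepted, while a steeply tilted cell (e.g. the side of a table) exceeds the slope threshold and is rejected.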
\begin{figure}[ht] \centering \includegraphics[scale=0.6]{images/intro_scenarios.PNG} \caption{Various landing scenarios to evaluate, including water, trees and rigid surfaces. Monocular and stereo cues need to be combined in order to classify them.} \label{fig:intro_scenarios} \end{figure} \vspace{-7mm} \subsection{Related Work} \vspace{-5mm} There has been prior work on vision-based autonomous landing in unknown environments. Monocular camera based approaches can broadly be divided into two types. One type uses a homography matrix to find feature points in a planar area \cite{bosch2006autonomous, cheng2010real}. The other builds a dense point cloud using motion stereo triangulation \cite{desaraju2015vision, forster2015continuous}. These approaches can work well in static environments such as rigid ground or roofs, but cannot work in dynamic scenarios where epipolar geometry fails, such as flowing water, water ripples, or leaves blowing on trees. To the best of the authors' knowledge, there is no prior work that demonstrates a landing zone evaluation system that can detect non-rigid surfaces such as water. There is also work using depth sensors such as stereo cameras \cite{theodore2006flight, meingast2004vision} and LIDAR scanners \cite{ scherer2012autonomous, chamberlain2011self} to directly obtain a dense map for evaluating the terrain. \cite{meingast2004vision} uses a multi-frame planar parallax algorithm to produce a digital elevation map (DEM) of the terrain. The work in \cite{scherer2012autonomous} incorporates terrain/skid interaction and the aircraft geometry, in addition to terrain modeling, to assess the suitability of a good landing site. \cite{maturana20153d} extends this work to an approach that uses a 3D Convolutional Neural Network to assess the safety of landing zones covered in low vegetation. \section{Experiments} \label{sec:experiments} \vspace{-5mm} This section shows the experimental results of the two proposed approaches.
We also show the comparison of different stereo matching algorithms using our own stereo disparity dataset. The robot platform we have used is an Autel X-Star Quadrotor UAV. It has a custom built stereo camera setup with an IMU attached to it. The cameras and IMU are triggered together using a hardware GPIO input provided by a microcontroller. \iffalse \begin{table}[ht] \centering \begin{tabular}{||c | c||} \hline Description & Specification \\ [0.5ex] \hline\hline Camera Module & UEye CMOS Monochrome Global Shutter 1/1.8" sensor \\ \hline Lens & Lensation 5.5mm D:76deg FOV \\ \hline Inertial Sensor & Epson G364 Six Axis IMU \\ \hline Compute Unit & NVIDIA TX2 with Ubuntu 16.04 \\ [1ex] \hline \end{tabular} \caption{Sensor and Compute Stack Specifications} \label{tab:stack} \end{table} \fi \vspace{-7mm} \subsection{Monocular Landing Evaluation} \vspace{-5mm} We evaluated the monocular approach on various water surfaces, including lakes, ponds, streams and puddles, and also on other general environments such as rigid ground, grass, and trees. The drone flies over a target region and starts to gradually descend. It stops when the algorithm detects danger. As can be seen from Table \ref{tab:flow_results}, the algorithm is robust to variations in water surfaces and rigid ground surfaces. In scenarios with fallen leaves, where we think it is safe to land, the algorithm detects a large planar error due to leaves being blown away by the rotor downwash. For trees, the optical flow is scattered due to motion of the leaves. However, these are scenarios that can be handled more effectively by the stereo based approach. \begin{table}[ht] \centering \begin{tabular}{||c | c | c | c||} \hline Surface & No.
of Trials & Accuracy & Robustness \\ [0.5ex] \hline\hline Water & 15 & 100\% & Robust \\ \hline Solid Ground/Roof & 17 & 100\% & Robust \\ \hline Grass & 4 & 100\% & Robust \\ \hline Fallen Leaves & 3 & 67\% & Not Robust \\ \hline Trees & 7 & 70\% & Not Robust \\ [1ex] \hline \end{tabular} \caption{Summary of Monocular Evaluation Tests} \label{tab:flow_results} \end{table} \vspace{-7mm} \subsection{Stereo Disparity Dataset} \vspace{-5mm} There are existing benchmarks such as KITTI \cite{Menze2015CVPR} and Middlebury \cite{scharstein2014high} that provide an extensive set of scenes for the evaluation of stereo algorithms. However, they do not cover short-baseline stereo or downward-looking scenes. This motivated us to build our own dataset consisting of 10 scenes taken in different locations around the CMU campus. The dense ground truth disparity map is computed from data acquired with an accurate Faro 120 laser scanner. Stereo images were captured at 2208 x 1242 resolution using the ZED Stereo Camera. We compare 5 algorithms: Block Matching (BM) \cite{konolige1998small}, BM + WLS Filter + LR Check, Semi-Global Matching (SGBM) \cite{hirschmuller2005accurate}, SGBM + WLS Filter + LR Check, and LIBELAS \cite{Geiger2010ACCV}; the results are shown in Table \ref{tab:bad_4}. The results are mixed in terms of accuracy: SGBM + WLS + LRC is a close match for LIBELAS, but both methods are quite slow to run on the NVIDIA TX2 platform. We finally chose Block Matching because it is the fastest of all 5 and ensures that the entire pipeline can be run in real-time.
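To illustrate why plain block matching is cheap enough for real-time use, here is a toy one-row sum-of-absolute-differences (SAD) matcher in the spirit of BM; this is our own minimal sketch of the core idea, not OpenCV's StereoBM and not our pipeline code:

```python
import numpy as np

def block_match_disparity(left, right, block=3, max_disp=8):
    """For each pixel of a 1-D 'left' scanline, slide a small window over a
    limited disparity range in 'right' and keep the shift with the lowest
    sum of absolute differences. Real implementations add uniqueness and
    texture checks on top of this loop."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    half = block // 2
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(half, n - half):
        ref = left[x - half:x + half + 1]
        best, best_d = np.inf, 0
        for d in range(max_disp + 1):
            if x - d - half < 0:
                break  # candidate window would fall off the image
            cand = right[x - d - half:x - d + half + 1]
            sad = np.abs(ref - cand).sum()
            if sad < best:
                best, best_d = sad, d
        disp[x] = best_d
    return disp
```

The cost per pixel is a fixed number of window comparisons, which is what makes BM far cheaper than the semi-global optimisation in SGBM.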
\begin{table}[ht] \centering \begin{tabular}{||c | c | c | c | c | c||} \hline Scene & BM & BM + WLS + LRC & SGBM & SGBM + WLS + LRC & LIBELAS \\ [0.5ex] \hline\hline Scene 1 & 33.3\% & 30.3\% & 18\% & 16.2\% & 9.8\% \\ \hline Scene 2 & 37.6\% & 36.5\% & 21.8\% & 20.9\% & 20.2\% \\ \hline Scene 3 & 37.4\% & 36.2\% & 18.8\% & 17.6\% & 12.6\% \\ \hline Scene 4 & 39.1\% & 39.5\% & 17.6\% & 16.6\% & 9.1\% \\ \hline Scene 5 & 42.5\% & 37.2\% & 17.8\% & 16.0\% & 23.7\% \\ \hline Scene 6 & 67.0\% & 53.2\% & 33.1\% & 31.5\% & 40.4\% \\ \hline Scene 7 & 70.0\% & 58.3\% & 30.2\% & 28.8\% & 40.5\% \\ \hline Scene 8 & 49.2\% & 45.5\% & 35.3\% & 34.4\% & 32.4\% \\ \hline Scene 9 & 54.0\% & 50.5\% & 35.8\% & 35.3\% & 29.4\% \\ \hline Scene 10 & 57.3\% & 48.7\% & 38.7\% & 36.0\% & 31.7\% \\ [1ex] \hline \end{tabular} \caption{Percentage of bad pixels (disparity error $> 4$ pixels)} \label{tab:bad_4} \end{table} \vspace{-7mm} \subsection{Stereo Landing Evaluation} \vspace{-5mm} Similar to the monocular landing evaluation, we perform landing tests in different scenes using the stereo based approach. The robot stops descending when the stereo module detects danger. Results are shown in Table \ref{tab:geom_results}. Our approach works well in most environments except in areas with textureless surfaces, where stereo matching is not able to find enough matches for a good reconstruction. Tall, sharp, thin grass is also difficult, as it occupies only small regions in the image, leaving too few correspondences for stereo matching. \begin{table}[ht] \centering \begin{tabular}{||c | c | c | c||} \hline Surface/Obstacle & No.
of Trials & Success Rate & Description \\ [0.5ex] \hline\hline Short Grass & 5 & 100\% & Grass blades not longer than 10 cm \\ \hline Tarmac & 5 & 100\% & Metalled road surface \\ \hline Chair & 5 & 100\% & Outdoor lounge chairs placed in a grassy area \\ \hline Box & 5 & 100\% & Plastic storage box (dimensions 0.3x0.4x0.3m) \\ \hline Tables & 5 & 100\% & Outdoor concrete tables with benches \\ \hline Concrete & 5 & 100\% & Concrete flat ground \\ \hline Steps & 5 & 100\% & Outdoor concrete staircase \\ \hline Stones & 5 & 100\% & Large stones (elliptical shape, major axis 0.3 m) \\ \hline Tall Grass & 5 & 0\% & Tall grass blades longer than 15 cm \\ \hline Textureless Surfaces & 5 & 0\% & Uniformly colored volleyball ground \\ [1ex] \hline \end{tabular} \caption{Summary of Geometric Evaluation Tests} \label{tab:geom_results} \end{table} \section{Conclusions} \label{sec:insights} \vspace{-5mm} In this paper, we propose a vision-based landing evaluation method using monocular and stereo cues. Stereo enables a dense surface reconstruction in most environments to evaluate slope and roughness; however, it does not work well in challenging low-texture scenes or on non-rigid surfaces. A monocular approach that exploits optical flow in the scene is proposed to handle such cases. Results from the experiments show that the two approaches are fairly robust across different scenes. Their respective failure scenarios also indicate that the two approaches can complement each other. So far we have tested the two approaches separately; in future work we will combine them into a single module to assess the landing zone automatically and robustly. \section{Abstract} \vspace{-5mm} Autonomous and safe landing is important for unmanned aerial vehicles. We present a monocular and stereo image based method for fast and accurate landing zone evaluation for UAVs in various scenarios. Many existing methods rely on Lidar or depth sensors to provide accurate and dense surface reconstruction.
We utilize stereo images to evaluate the slope and monocular images to compute the homography error. By combining them, our approach works for both rigid surfaces and non-rigid dynamic surfaces. Experiments on many outdoor scenes, such as water, grass and roofs, demonstrate the robustness and effectiveness of our approach. \vspace{-5mm} \input{inputs/1_intro.tex} \vspace{-5mm} \input{inputs/2_approach.tex} \vspace{-5mm} \input{inputs/3_experiments.tex} \vspace{-5mm} \input{inputs/4_insights_future.tex} \vspace{-5mm} \bibliographystyle{unsrt} \small{
\begin{eqnarray} a \times b = c \nonumber\\ \vec{a} \cdot \vec{b}=\vec{c} \label{eq:01} \end{eqnarray} \subsection{Subsection Heading} \label{subsec:2} Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the \LaTeX\ automatism for all your cross-references\index{cross-references} and citations\index{citations} as has already been described in Sect.~\ref{sec:2}. \begin{quotation} Please do not use quotation marks when quoting texts! Simply use the \verb|quotation| environment -- it will automatically render Springer's preferred layout. \end{quotation} \subsubsection{Subsubsection Heading} Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the \LaTeX\ automatism for all your cross-references and citations as has already been described in Sect.~\ref{subsec:2}, see also Fig.~\ref{fig:1}\footnote{If you copy text passages, figures, or tables from other works, you must obtain \textit{permission} from the copyright holder (usually the original publisher). Please enclose the signed permission with the manuscript. The sources\index{permission to print} must be acknowledged either in the captions, as footnotes or in a separate section of the book.} Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are. \begin{figure}[b] \sidecaption \includegraphics[scale=.65]{figure} \caption{If the width of the figure is less than 7.8 cm use the \texttt{sidecapion} command to flush the caption on the left side of the page. 
If the figure is positioned at the top of the page, align the sidecaption with the top of the figure -- to achieve this you simply need to use the optional argument \texttt{[t]} with the \texttt{sidecaption} command} \label{fig:1} \end{figure} \paragraph{Paragraph Heading} % Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the \LaTeX\ automatism for all your cross-references and citations as has already been described in Sect.~\ref{sec:2}. Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are. For typesetting numbered lists we recommend to use the \verb|enumerate| environment -- it will automatically render Springer's preferred layout. \begin{enumerate} \item{Livelihood and survival mobility are oftentimes coutcomes of uneven socioeconomic development.} \begin{enumerate} \item{Livelihood and survival mobility are oftentimes coutcomes of uneven socioeconomic development.} \item{Livelihood and survival mobility are oftentimes coutcomes of uneven socioeconomic development.} \end{enumerate} \item{Livelihood and survival mobility are oftentimes coutcomes of uneven socioeconomic development.} \end{enumerate} \subparagraph{Subparagraph Heading} In order to avoid simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Use the \LaTeX\ automatism for all your cross-references and citations as has already been described in Sect.~\ref{sec:2}, see also Fig.~\ref{fig:2}. For unnumbered list we recommend to use the \verb|itemize| environment -- it will automatically render Springer's preferred layout. \begin{itemize} \item{Livelihood and survival mobility are oftentimes coutcomes of uneven socioeconomic development, cf. 
Table~\ref{tab:1}.} \begin{itemize} \item{Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development.} \item{Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development.} \end{itemize} \item{Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development.} \end{itemize} \begin{figure}[t] \sidecaption[t] \includegraphics[scale=.65]{figure} \caption{If the width of the figure is less than 7.8 cm, use the \texttt{sidecaption} command to flush the caption on the left side of the page. If the figure is positioned at the top of the page, align the sidecaption with the top of the figure -- to achieve this you simply need to use the optional argument \texttt{[t]} with the \texttt{sidecaption} command} \label{fig:2} \end{figure} \runinhead{Run-in Heading Boldface Version} Use the \LaTeX\ automation for all your cross-references and citations, as already described in Sect.~\ref{sec:2}. \subruninhead{Run-in Heading Italic Version} Use the \LaTeX\ automation for all your cross-references and citations, as already described in Sect.~\ref{sec:2}\index{paragraph}. \begin{table} \caption{Please write your table caption here} \label{tab:1} \begin{tabular}{p{2cm}p{2.4cm}p{2cm}p{4.9cm}} \hline\noalign{\smallskip} Classes & Subclass & Length & Action Mechanism \\ \noalign{\smallskip}\svhline\noalign{\smallskip} Translation & mRNA$^a$ & 22 (19--25) & Translation repression, mRNA cleavage\\ Translation & mRNA cleavage & 21 & mRNA cleavage\\ Translation & mRNA & 21--22 & mRNA cleavage\\ Translation & mRNA & 24--26 & Histone and DNA Modification\\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} $^a$ Table foot note (with superscript) \end{table} \section{Section Heading} \label{sec:3} Instead of simply listing headings of different levels we recommend that every heading be followed by at least a short passage of text.
Furthermore, please use the \LaTeX\ automation for all your cross-references and citations, as already described in Sect.~\ref{sec:2}. Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are. If you want to list definitions or the like we recommend using the Springer-enhanced \verb|description| environment -- it will automatically render Springer's preferred layout. \begin{description}[Type 1] \item[Type 1]{That addresses central themes pertaining to migration, health, and disease. In Sect.~\ref{sec:1}, Wilson discusses the role of human migration in infectious disease distributions and patterns.} \item[Type 2]{That addresses central themes pertaining to migration, health, and disease. In Sect.~\ref{subsec:2}, Wilson discusses the role of human migration in infectious disease distributions and patterns.} \end{description} \subsection{Subsection Heading} % In order to avoid simply listing headings of different levels we recommend that every heading be followed by at least a short passage of text. Use the \LaTeX\ automation for all your cross-references and citations, as already described in Sect.~\ref{sec:2}. Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are. \begin{svgraybox} If you want to emphasize complete paragraphs of text we recommend using the newly defined Springer class option \verb|graybox| and the newly defined environment \verb|svgraybox|. This will produce a 15 percent screened box 'behind' your text.
\end{svgraybox} \subsubsection{Subsubsection Heading} Instead of simply listing headings of different levels we recommend that every heading be followed by at least a short passage of text. Furthermore, please use the \LaTeX\ automation for all your cross-references and citations, as already described in Sect.~\ref{sec:2}. Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are. \begin{theorem} Theorem text goes here. \end{theorem} \begin{definition} Definition text goes here. \end{definition} \begin{proof} Proof text goes here. \qed \end{proof} \paragraph{Paragraph Heading} % Instead of simply listing headings of different levels we recommend that every heading be followed by at least a short passage of text. Furthermore, please use the \LaTeX\ automation for all your cross-references and citations, as already described in Sect.~\ref{sec:2}. Note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are. \begin{theorem} Theorem text goes here. \end{theorem} \begin{definition} Definition text goes here. \end{definition} \begin{proof} \smartqed Proof text goes here. \qed \end{proof} \begin{acknowledgement} If you want to include acknowledgments of assistance and the like at the end of an individual chapter please use the \verb|acknowledgement| environment -- it will automatically render Springer's preferred layout. \end{acknowledgement} \section*{Appendix} \addcontentsline{toc}{section}{Appendix} When placed at the end of a chapter or contribution (as opposed to at the end of the book), the numbering of tables, figures, and equations in the appendix section continues on from that in the main text. Hence please \textit{do not} use the \verb|appendix| command when writing an appendix at the end of your chapter or contribution.
If there is only one appendix, it is designated ``Appendix''; if there is more than one, they are numbered ``Appendix 1'', ``Appendix 2'', etc. \begin{equation} a \times b = c \end{equation} \input{referenc} \end{document}
\section{Lepton Flavor Violation} In the Standard Model (SM), the lepton number of each flavor is separately conserved. However, it is well known from the neutrino oscillation experiments that lepton flavor conservation is violated \cite{superk}. In the framework of R-parity violating (RPV) supersymmetry, the requirement of neutrino masses and mixings is easily fulfilled. Moreover, the framework has the advantage of a richer phenomenology. The decay of the Higgs boson to $\mu\bar{\tau} + \tau\bar{\mu}$, which is forbidden in the SM, is our example at hand. These kinds of lepton flavor violating (LFV) processes have received much attention and may give us hints of physics beyond the SM. In this talk, we present the first results of a comprehensive study on a generic supersymmetric standard model (without R parity), highlighting the cases of most interest. More details about the topic can be found in Refs.~\cite{short,long}. \section{The generic supersymmetric standard model (without R parity) and Higgs mass matrix} With the superfield content of the minimal supersymmetric standard model (MSSM), the most general renormalizable superpotential can be written as \begin{equation} W=\epsilon_{ab} \left[ \mu_\alpha \hat{H}^a_u \hat{L}^b_\alpha + h^u_{ik} \hat{Q}^a_i \hat{H}^b_u \hat{U}^C_k + \lambda^{'}_{\alpha jk} \hat{L}^a_\alpha \hat{Q}^b_j \hat{D}^C_k + \frac{1}{2}\lambda_{\alpha\beta k}\hat{L}^a_\alpha \hat{L}^b_\beta \hat{E}^C_k \right] + \frac{1}{2}\lambda^{''}_{ijk}\hat{U}^C_i \hat{D}^C_j \hat{D}^C_k \end{equation} where $(a,b)$ are SU(2) indices with $\epsilon_{12}=- \epsilon_{21}=1$, $(i,j,k)$ are the usual family (flavor) indices, and $(\alpha ,\beta)$ are extended flavor indices running from 0 to 3. We have four leptonic superfields $\hat{L}$, which contain the components of the fermion doublet, $l^0$ and $l^-$, and their scalar partners, $\tilde{l}^0$ and $\tilde{l}^-$.
For convenience, we choose a flavor basis such that only $\hat{L}_0$ bears a nonzero vacuum expectation value and can thus be identified with $\hat{H}_d$ of the MSSM. Details of the model formulation can be found in \cite{Otto}. With all the RPV terms, the physical scalar states are now a mixture of Higgses and sleptons. The RPV terms provide new contributions to the scalar mass matrices and hence to the Higgs masses. In addition, third generation quarks and squarks could play an important role in radiative corrections to the Higgs sector, and hence should be included. Accordingly, we implement the complete one-loop corrections of Ref.~\cite{Ellis} to the matrix elements directly related to the Higgs bosons (including CP-even, CP-odd and charged Higgs bosons) in our computation. Moreover, we also include an estimate \cite{2loop} of the key two-loop corrections to the elements related to the light Higgs \footnote{Higgs bosons mix with sleptons via the RPV terms, but we can still distinguish the Higgses from the other sleptons due to the foreseeable smallness of the RPV parameters.}. Note that radiative RPV corrections are typically too small to be taken into account, so we study tree-level RPV effects only. \section{Calculations and Numerical results} Among all the sources that can constrain our RPV parameter setting, the indirect evidence on the neutrino mass scale (i.e., $\sum_i m_{\nu_i}\lesssim\text{1 eV}$ \cite{neutrino}) is the most crucial. However, since neutrino masses larger than 1 eV have not been completely ruled out, we also comment, for reference, on the branching ratios obtained when the neutrino masses are constrained only by the solid direct bounds (i.e., $m_{\nu_e}<\text{3 eV}$, $m_{\nu_\mu}<\text{190 keV}$ and $m_{\nu_\tau}<\text{18.2 MeV}$ \cite{neutrinoex}). In our numerical computation, we deal directly with mass eigenstates and put all the tree-level mass matrices into the program.
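Putting the mass matrices into the program means, in practice, numerical diagonalization. A minimal sketch of that step (our own illustration, not the authors' program; it assumes a real symmetric mass-squared matrix stored as a NumPy array) could look like:

```python
import numpy as np

def physical_masses(mass_sq):
    """Diagonalize a real symmetric mass-squared matrix: returns the
    physical masses (ascending) and the orthogonal mixing matrix whose
    columns are the mass eigenstates."""
    eigvals, mixing = np.linalg.eigh(mass_sq)
    masses = np.sqrt(np.clip(eigvals, 0.0, None))  # guard tiny negative eigenvalues
    return masses, mixing
```

For the scalar sector discussed here, the same step applied to the full Higgs-slepton mass-squared matrix also yields the mixing between the Higgses and the sleptons generated by the RPV entries.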
The masses of the Higgs bosons (and other sparticles) needed in our analysis are obtained by numerically diagonalizing the corresponding mass matrices. The necessary amplitudes of the tree and one-loop Feynman diagrams \footnote{During the numerical computation of the Feynman diagrams, the \textit{LoopTools} package is used for the evaluation of loop integrals \cite{looptools}.} and the relevant effective couplings in the model are derived analytically by hand. By encoding the derived analytical formulas of the decay amplitudes into the numerical program, the value of the total amplitude, and hence the decay rate, can be obtained. In the computation of the total decay width of the light Higgs, we include all significant decay channels as well as the RPV decay $h^0\rightarrow \mu\bar{\tau}+\tau\bar{\mu}$. From the RPV partial decay width of this channel and the total decay width, the branching ratio is obtained. While the details of our parameter setting can be found in \cite{short,long}, we highlight the combinations of RPV parameters which give the most significant contributions to the decay in Table 1. \begin{center} \begin{tabular}{c@{\hspace{20pt}}c@{\hspace{20pt}}c} \multicolumn{3}{l}{Table 1.
Interesting contributions to the branching ratio of $h^0\rightarrow\mu\bar{\tau} + \tau\bar{\mu}$} \\ \hline\hline RPV Parameter & With Neutrino Mass& With Relaxed\\[-10pt] Combinations & $\lesssim$1eV Constraint & Neutrino Mass Constraint\\ \hline $B_2 \,\mu_3\hspace{8pt} $ & $1\times10^{-15}$ & $9\times10^{-6\hspace{4pt}}$ \\ $B_3 \,\mu_2\hspace{8pt} $ & $1\times10^{-13}$ & $7\times10^{-4\hspace{4pt}}$ \\ $B_1 \,\lambda_{123}$ & $1\times10^{-5\hspace{4pt}}$ & $4\times10^{-5\hspace{4pt}}$ \\ $B_1 \,\lambda_{132}$ & $3\times10^{-5\hspace{4pt}}$ & $7\times10^{-5\hspace{4pt}}$ \\ $B_2 \,\lambda_{232}$ & $3\times10^{-5\hspace{4pt}}$ & $6\times10^{-2\hspace{4pt}}$ \\ $B_3 \,\lambda_{233}$ & $3\times10^{-5\hspace{4pt}}$ & $3\times10^{-2\hspace{4pt}}$ \\ $B_2 \,A^\lambda_{232}$ & $5\times10^{-11}$ & $7\times10^{-7\hspace{4pt}}$ \\ $B_3 \,A^\lambda_{233}$ & $5\times10^{-11}$ & $1\times10^{-7\hspace{4pt}}$ \\[2pt] \hline \end{tabular}\\ \end{center} \medskip\medskip Among the $B_i \mu_j$ combinations, $B_3 \mu_2$ is particularly enhanced by the tau Yukawa coupling $y_{e_3}$ and thus becomes the largest of all the $B_i \mu_j$'s. On the other hand, the values of $B_i$ and $B_i \mu_j$ are separately highly constrained by their loop contributions to the neutrino mass matrix \cite{Rakshit}; a non-zero $\mu_j$ induces a tree-level neutrino mass and is hence constrained as well. Meanwhile, leptonic radiative decays such as $\mu \rightarrow e\gamma$ also give upper bounds on $B_i \mu_j$ \cite{ottoleptonic}. Among all the $B_i \lambda$'s, $B_1 \lambda_{123}$, $B_1 \lambda_{132}$, $B_2 \lambda_{232}$ and $B_3 \lambda_{233}$ are the most important because they contribute to the amplitude at tree level. The value of $\lambda$ is constrained by charged-current experiments \cite{Barbier}. In addition, leptonic radiative decays also give upper bounds on $B_i \lambda$ \cite{ottoleptonic}.
We can see from Table 1 that the $B_i \lambda$ type combinations give the most important contributions among all RPV parameter combinations. As for the $B_i A^\lambda$ type combinations, the $A^\lambda$'s are not subject to experimental constraints and can consequently take any value. In our setting ($A^\lambda$ = 2500 GeV), branching ratios from $B_i A^\lambda$ can at most reach the order of $10^{-7}$. Since the decay rate is proportional to the amplitude squared, and hence to $A^\lambda$ squared, it is easy to see how the branching ratio scales as $A^\lambda$ increases. From the results above, we can see that even with RPV parameters as the only source of lepton flavor violation, notable contributions to $h^0\to \mu\bar{\tau} + \tau\bar{\mu}$ with branching ratios beyond $10^{-5}$ are possible. This would lead to several raw events with almost no SM background at the LHC at 8 TeV, amplified further at the 14 TeV energy of future LHC runs. Hence, such RPV contributions to lepton flavor violation should not be overlooked in future collider experiments.
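To make the quadratic scaling above concrete, here is a small sketch (ours, not taken from the references): in the small-branching-ratio limit, where the total width is dominated by the R-parity-conserving channels, the RPV partial width, and hence the branching ratio, simply scales as $(A^\lambda)^2$.

```python
def rescale_branching_ratio(br_ref, a_ref, a_new):
    """Small-BR limit: the RPV decay rate scales as the amplitude squared,
    hence as (A^lambda)^2, while the total width stays essentially constant."""
    return br_ref * (a_new / a_ref) ** 2

# e.g. doubling A^lambda from 2500 GeV to 5000 GeV quadruples the branching ratio
```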
\section{Introduction} Depending on the surface-layer stratification, rolls approximately aligned with the mean flow can develop in the Marine Atmospheric Boundary Layer (MABL) into Organised Large Eddies (OLE). Although this secondary circulation plays a key role in turbulent heat and momentum fluxes~\cite{Zhang2008}, MABL flux parameterization and modelling are still an active field of research, both for numerical models~\cite{Sandu2013} and for Large Eddy Simulations~\cite{Saggiorato2020}. One of the reasons is certainly that collocated wind, temperature and humidity profile observations documenting OLE characteristics over the open ocean are rare. As summarized by~\cite{Etling1993}, the main known characteristics are the following: vertical extent: 1-2 km, wavelength: 2-20 km, aspect ratio: 2-15, downstream extent: 10-1000 km, orientation of the roll axis to the mean wind direction: -20° to +30°, and lifetime: 1-72 h. Yet, more advanced information, such as their frequency of occurrence, strength or formation, remains to be documented. For a more complete review of OLE from the observational, theoretical and numerical points of view, the reader can refer to the review papers by~\cite{Etling1993} and~\cite{Young2002}. C-band high-resolution radar (SAR, for synthetic aperture radar) is the only spaceborne instrument able to probe the ocean surface at very high spatial resolution, day and night, regardless of cloud coverage, with a pixel resolution of a few meters over swaths of several hundred kilometers. At C-band (wavelength of about 5~cm), the backscattered signal from the ocean is primarily sensitive to capillary and small gravity surface waves, which respond strongly to the local surface stress, allowing SAR to trace the updrafts and downdrafts associated with the horizontal wind field convergence and divergence areas in between the rolls.
As a consequence, several studies have been conducted to report on the roll signature in SAR images~\cite{Alpers1994} and to document the spatial evolution of convection for particular events, such as a strong cold-air outbreak interacting with the Gulf Stream north wall~\cite{Babin2003}. Most of them focused on coastal areas, where large images of several hundred kilometers were acquired with the previous SAR missions Envisat and Radarsat, but very few take advantage of the acquisition mode used over the open ocean: the so-called wave mode. In fact, to our knowledge, the only studies based on the wave mode have been published by~\cite{Wang2020} for OLE detection and~\cite{Wang2019} for OLE documentation (orientation and wind speed modulation of $\pm$ 7 m/s). In particular, these two recent studies rely on the new capabilities of the European Sentinel-1 SAR mission, which provides images of 20 by 20 km over the open ocean, whereas the former European SAR (Envisat) could only provide scenes of 8 by 10 km, much less adapted to the OLE wavelength. Moreover, Wave Mode acquisitions have the highest resolution: about 5~m. Until now, previous works used only second-order statistics to characterize the size and direction of convective rolls, either directly from radar imagery of the atmosphere~\cite{Lohou1998} or from SAR imagery of the ocean~\cite{Wang2020}. However, higher-order statistics have already been used to characterize turbulence in one-dimensional signals of wind velocity in the MABL~\cite{Atta1970}. Thus, we propose here to use higher-order statistics to characterize the shape of convective rolls in finer detail. To do so, we compute the two-dimensional skewness $\mathcal{S}$ and flatness $\mathcal{F}$ of spatial increments and examine their evolution across length-scales~\cite{Frisch1995}, which allows us to describe the asymmetry and intermittency of convective rolls. To the best of our knowledge, it is the first time that such an analysis is performed on SAR images.
\section{Theoretical relations} \subsection{Two-dimensional direction-dependent structure functions} The $n^{th}$-order structure function $S_{n}$ of a two-dimensional field $F(x,y)$ can be defined as: \begin{equation} S^{l_x,l_y}_{n}(F) = \left\langle \left( F(r_x+l_x,r_y+l_y) - F(r_x,r_y) \right)^{n} \right\rangle \end{equation} \noindent where $(r_x,r_y)$ denotes a spatial position, and $l_x$ and $l_y$ are the separation distances along each dimension of the field. We then denote by $\delta_{l_x,l_y}F = F(r_x+l_x,r_y+l_y) - F(r_x,r_y)$ the spatial increment of the two-dimensional field. In this paper we focus on $S_{2}^{l_x,l_y}$, $S_{3}^{l_x,l_y}$ and $S_{4}^{l_x,l_y}$, which respectively characterize the variance, asymmetry and tail prominence of the statistical distribution of the increments of the field at scales $(l_x,l_y)$. To avoid the impact of $S_{2}^{l_x,l_y}$ variations on the characterization of $S_{3}^{l_x,l_y}$ and $S_{4}^{l_x,l_y}$, the Skewness $\mathcal{S}$ and Flatness $\mathcal{F}$ factors across scales are defined. They correspond to $S_{3}^{l_x,l_y}$ and $S_{4}^{l_x,l_y}$ computed on centered and standardized increments, i.e., $\mathcal{S}^{l_x,l_y}=S_{3}^{l_x,l_y}/\bigl(S_{2}^{l_x,l_y}\bigr)^{3/2}$ and $\mathcal{F}^{l_x,l_y}=S_{4}^{l_x,l_y}/\bigl(S_{2}^{l_x,l_y}\bigr)^{2}$ for centered increments. \noindent If $\mathcal{S}=0$ then the distribution is symmetrical, while $\mathcal{S}<0$ implies a left-tailed and $\mathcal{S}>0$ a right-tailed distribution. On the other hand, the flatness of a Gaussian distribution is $\mathcal{F}=3$, while $\mathcal{F}<3$ means less prominent tails and $\mathcal{F}>3$ more prominent ones. From a physical point of view, $S_{2}^{l_x,l_y}$ characterizes the distribution of energy across scales, while $\mathcal{S}^{l_x,l_y}$ and $\mathcal{F}^{l_x,l_y}$ allow us to describe intermittency, which translates into a deformation of the shape of the distribution across scales~\cite{Frisch1995}. Thus, for example, turbulence is characterized by a Gaussian distribution at large scales and an increase of skewness and flatness as the scale decreases~\cite{Kolmogorov1962, Obukhov1962, Frisch1995}.
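As a minimal illustration of these definitions (our own sketch, not the authors' code; it assumes the field is stored as a 2-D NumPy array), the increment statistics at a cartesian lag $(l_x,l_y)$ can be computed as:

```python
import numpy as np

def structure_stats(F, lx, ly):
    """Increment statistics of a 2-D field at lag (lx, ly) >= 0, not both zero.
    Returns (S2, skewness, flatness) of the increments delta_{lx,ly} F."""
    nx, ny = F.shape
    d = F[lx:, ly:] - F[:nx - lx, :ny - ly]  # spatial increments
    s2 = np.mean(d ** 2)                      # second-order structure function
    c = d - d.mean()                          # centre for the standardized moments
    var = np.mean(c ** 2)
    skew = np.mean(c ** 3) / var ** 1.5       # 0 for a symmetric distribution
    flat = np.mean(c ** 4) / var ** 2         # 3 for a Gaussian distribution
    return s2, skew, flat
```

Scanning $(l_x, l_y)$ over a grid of lags then yields the cartesian maps $S_{2}^{l_x,l_y}$, $\mathcal{S}^{l_x,l_y}$ and $\mathcal{F}^{l_x,l_y}$ analysed below.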
Finally, a change of coordinates from cartesian to polar allows for an analysis with respect to direction: $S_{2}^{l_x,l_y} \rightarrow S_{2}^{r,\theta}$, $\mathcal{S}^{l_x,l_y}\rightarrow\mathcal{S}^{r,\theta}$ and $\mathcal{F}^{l_x,l_y}\rightarrow\mathcal{F}^{r,\theta}$, with $r$ the radius and $\theta$ the angle dimension. \subsection{Interpretation of structure functions analysis of OLE} Second-order statistics, such as the power spectrum, the correlation function or $S_{2}^{l_x,l_y}$, describe the size and direction of convective rolls~\cite{Lohou1998,Wang2020} without providing finer information, either on the shape of the rolls or on the evolution of this shape across scales. As shown in~\cite{Lohou1998}, the correlation function decreases the fastest along the direction perpendicular to the rolls, $\theta_{\perp}$. Moreover, along this direction the correlation length, measured as the distance to the first minimum, indicates the size of the rolls. On the other hand, in~\cite{Wang2020}, peaks of energy appear in the two-dimensional power spectrum, and the most energetic peaks indicate the size and direction of the rolls. Since the power spectrum and the correlation (equivalently $S_{2}^{l_x,l_y}$) are simply related by the Fourier transform, both methodologies are equivalent. These second-order statistics are frequently used in the characterization of the ocean surface from remote sensing images~\cite{Tandeo2014}. Contrary to second-order statistics, higher-order ones such as $\mathcal{S}^{l_x,l_y}$ and $\mathcal{F}^{l_x,l_y}$ characterize the shape of the rolls and its finer changes across scales. More precisely, non-zero values of $\mathcal{S}^{l_x,l_y}$ reveal an asymmetry of the shape of the rolls along the direction pointed out by the couple $(l_x, l_y)$.
Indeed, $\mathcal{S}^{l_x,l_y}$ characterizes the asymmetry of the distribution of the increments of size $(l_x, l_y)$, and so characterizes the differences between rises and falls of the field along this direction and at this scale, see figure~\ref{figschema}. We interpret low-intensity areas of the SAR fields as areas occupied by the rolls, while high-intensity areas are interpreted as areas between rolls. Thus, $\mathcal{S}^{l_x,l_y}\neq0$ along $\theta_{\perp}$ implies that we enter and leave rolls in different ways (more or less smoothly or sharply), pointing out an asymmetry, see figure~\ref{figschema}. \begin{figure*}[htbp] \centerline{\includegraphics[width=0.8\linewidth]{Schema.pdf}} \caption{a) Outline of asymmetrical convective rolls and their SAR imprint. The direction of analysis corresponding to $\theta_{\perp}$ is indicated by the \textit{direction of analysis} arrow. b) Schematic distribution of the increments of the SAR roughness at a given scale $r$ smaller than the roll wavelength and along the direction indicated by the \textit{direction of analysis} arrow in a). The positively skewed distribution of the increments points out the asymmetry of the rolls observed in a). In this example, the roughness increase when leaving a roll is sharper than the roughness decrease when entering it, which leads to larger values of the positive increments than of the negative ones, and so to a positively skewed distribution of the increments.} \label{figschema} \end{figure*} \section{Sentinel-1 SAR Wave Mode dataset}\label{sec:data} Nowadays, the largest database of C-band Synthetic Aperture Radar (SAR) images of the ocean is provided by the Sentinel-1 constellation mission. The Sentinel-1A and Sentinel-1B twin SARs can operate in four different and mutually exclusive acquisition modes.
They differ in incidence angle range, swath width (spatial coverage), resolution and polarization. In this work we focus on the Wave (WV) acquisition mode, which provides images of $20 \times 20$ km (the smallest) with a spatial resolution of $5$ m (the highest). This mode thus furnishes images large enough to study convective rolls (typical sizes of 2-20 km), while covering quite homogeneous areas at very high spatial resolution. We work on previously preprocessed Sentinel-1 SAR WV images from https:\/\/xwaves.ifremer.fr\/\#\/~\cite{Wang2020}. These images are corrected for the mean decrease of the backscattered signal measured by SAR (due to the change of local incidence angle across the image) for a scene assumed homogeneous, and have a final spatial resolution after preprocessing of $50$ m, which allows us to study convective rolls but also smaller-scale features such as swell (typically 200~m, up to 800~m). To illustrate the potential impact of swell on the OLE signature in SAR images, we study here two different images, both obtained by Sentinel-1 A: one acquired between 11:30:04 and 11:30:07 on 2017\/03\/15 over the Great Lakes, on the border between the U.S.A. and Canada, and another between 20:25:22 and 20:25:24 on 2016\/12\/29 over the open North Pacific Ocean. While the latter presents a significant swell event, the former is free of this phenomenon, see Fig.~\ref{figImages}. \begin{figure}[htbp] \centerline{\includegraphics[width=\linewidth]{Images2.pdf}} \caption{Sentinel-1 SAR images of the ocean: left Great Lakes and right North Pacific. Green dashed lines correspond to the parallel ($\theta_{||}$) and blue and red dashed lines to the perpendicular ($\theta_{\perp}$) directions of the convective rolls as obtained from $S_{2}^{l_x,l_y}$. Red indicates the direction with positive skewness and blue the one with negative skewness.
Yellow dashed line indicates the wind direction ($\theta_{w}$) obtained from the European Centre for Medium-Range Weather Forecasts (ECMWF) global model.} \label{figImages} \end{figure} \section{Results and discussion} \subsection{Second-order structure functions} Fig.~\ref{fig2dS2} shows $S_{2}^{l_x,l_y}$ and $S_{2}^{r,\theta}$ for the Great Lakes and the North Pacific Ocean SAR images presented in section~\ref{sec:data}. For both SAR images, we observe an ellipsoidal structure of a given width, about 1 km (vertical green dashed line in polar coordinates), with its long axis along a given direction $\theta_{||}$ (horizontal green dashed line in polar coordinates). Thus, the second-order structure function applied to SAR images characterizes the size and direction of the convective rolls, see table~\ref{table:sizedirection}. The same results were obtained with the power spectrum~\cite{Wang2020} and correlation~\cite{Lohou1998} methodologies. Moreover, the swell, if present, can be characterized in the same way, but looking at smaller scales (see Fig.~\ref{fig2dS2} b). Its presence is revealed by ellipsoidal structures of $S_{2}^{l_x,l_y}$ at small scales (inside the black square in Fig.~\ref{fig2dS2} b). The direction and size of the swell are also obtained (and represented as black dashed lines in Fig.~\ref{fig2dS2} d). We obtain a swell wavelength of 120 m, in agreement with the WaveWatch3 model (130 m) and Fourier analysis (118 m). However, the resolution of our measurements is the pixel size, in this case 50 m, and so swell scales are too small for our method to be well adapted. \begin{figure}[htbp] \centerline{\includegraphics[width=\linewidth]{S2_v2.pdf}} \caption{Two-dimensional second-order structure function of two Sentinel-1 SAR images of the ocean: left Great Lakes and right North Pacific. Top in cartesian coordinates $S_{2}^{l_x,l_y}$, bottom in polar coordinates $S_{2}^{r,\theta}$. In a) and b) the center $l_x=l_y=0$ is indicated with a red dot.
In figure b) a black square has been added to point out the signature of the swell. In figures c) and d) horizontal and vertical green dashed lines indicate respectively the direction and size of the convective rolls. In figure d) black dashed lines indicate the direction and size of the swell.} \label{fig2dS2} \end{figure} \begin{table}[htbp] \caption{Size and direction of convective rolls from the Great Lakes and Open Ocean SAR images as obtained from $S_{2}^{r,\theta}$.} \begin{center} \begin{tabular}{|c|c|c|} \hline & \textbf{Great Lakes} & \textbf{Open Ocean} \\ \hline $\theta_{||}$ (rads) & $0.95$ ($0.95-\pi$) & $2.06$ ($2.06-\pi$) \\ \hline $\theta_{\perp}$ (rads) & $0.95\pm\pi/2$ & $2.06\pm\pi/2$ \\ \hline Size (m) & $1800$ & $2600$ \\ \hline \end{tabular} \label{table:sizedirection} \end{center} \end{table} \subsection{Third-order structure functions} Fig.~\ref{fig2dS3} shows $\mathcal{S}^{l_x,l_y}$ and $\mathcal{S}^{r,\theta}$ for the two SAR images. Whereas the line parallel to the direction of the rolls $\theta_{||}$ passing through the center was a symmetry line for $S_{2}^{l_x,l_y}$, it is now an anti-symmetry line for the skewness $\mathcal{S}^{l_x,l_y}$, because $\mathcal{S}^{l_x,l_y}$ is an odd function of $(l_x,l_y)$: $\mathcal{S}^{l_x,l_y}=-\mathcal{S}^{-l_x,-l_y}$. For both SAR images, Fig.~\ref{fig2dS3} shows that along $\theta_{||}$ there is no asymmetry, while along the direction perpendicular to the rolls, $\theta_{\perp}$, $\mathcal{S}^{l_x,l_y}\neq0$ for scales smaller than the roll scale, and so the rolls are asymmetrical along this direction, \textit{i.e.} rises and falls of the field are not symmetrical. Furthermore, along $\theta_{\perp}$, for scales larger than the roll scale, $\mathcal{S}^{l_x,l_y}\approx0$. Finally, for both images the wind direction presents a negative skewness (see yellow lines in Fig.~\ref{fig2dS3} and Fig.\ref{figImages}), pointing out a possible influence of the wind direction on the roll asymmetry.
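The polar maps $\mathcal{S}^{r,\theta}$ (and likewise $S_{2}^{r,\theta}$ and $\mathcal{F}^{r,\theta}$) shown in the figures come from resampling the cartesian lag maps onto $(r,\theta)$. A minimal nearest-neighbour version of that change of coordinates (our own sketch, not the authors' code; `stat_map` is a hypothetical array of one statistic over lags, with zero lag at the central pixel) might read:

```python
import numpy as np

def polar_sample(stat_map, r, theta):
    """Nearest-neighbour sampling of a cartesian lag map at polar lag (r, theta).
    stat_map[ly, lx] is assumed centred: lag (0, 0) sits at the middle pixel."""
    cy, cx = stat_map.shape[0] // 2, stat_map.shape[1] // 2
    lx = int(round(r * np.cos(theta)))
    ly = int(round(r * np.sin(theta)))
    return stat_map[cy + ly, cx + lx]
```

Sweeping $\theta$ at fixed $r$, or $r$ at fixed $\theta$, produces respectively the angular maps and the transects analysed later.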
In Fig.~\ref{fig2dS3} a), strong negative values of $\mathcal{S}^{l_x,l_y}$ are observed along $\theta_{\perp}=5.66$ rad. This direction corresponds to the blue dashed line in figure~\ref{figImages} a). Thus $\mathcal{S}^{l_x,l_y}<0$ at scales smaller than the roll scale, going to $0$ once this scale is reached. This implies an asymmetry of the roll shape along the perpendicular direction. More precisely, along $\theta_{\perp}=5.66$ rad, falls of the SAR intensity are sharper than rises, and so we leave rolls more smoothly than we enter them. This asymmetry is also observed in $\mathcal{S}^{r,\theta}$, Fig.~\ref{fig2dS3} c). In Fig.~\ref{fig2dS3} b) and d), the negative skewness along $\theta_{\perp}=3.63$ rad also indicates an asymmetry of the roll shape along this direction, which corresponds to the blue dashed line in figure~\ref{figImages} b). Along this direction we enter rolls more sharply than we leave them. Moreover, the presence of swell in the open-ocean SAR image adds a small-scale effect to the analysis of high-order statistics, since swell also presents asymmetries that are captured by the skewness. \begin{figure}[htbp] \centerline{\includegraphics[width=\linewidth]{S3.pdf}} \caption{Two-dimensional Skewness across scales of two Sentinel-1 SAR images of the ocean: left Great Lakes and right North Pacific. Top in cartesian coordinates $\mathcal{S}^{l_x,l_y}$, bottom in polar coordinates $\mathcal{S}^{r,\theta}$. In a) and b) the center $l_x=l_y=0$ is indicated with a black dot. In figures c) and d) horizontal and vertical green dashed lines indicate respectively the direction and size of the convective rolls and the horizontal yellow line indicates the wind direction.} \label{fig2dS3} \end{figure} \subsection{Fourth-order structure functions} Fig.~\ref{fig2dS4} shows $\mathcal{F}^{l_x,l_y}/3$ and $\mathcal{F}^{r,\theta}/3$ across scales and directions.
We observe ellipsoidal structures from which the direction and shape of the rolls can be obtained, especially for the Great Lakes case without swell, while for the open-ocean case the picture is blurrier. The evolution of $\mathcal{F}^{r,\theta_{\perp}}/3$ across scales indicates the intermittent nature of the roll evolution along this direction. On the contrary, $\mathcal{F}^{r,\theta_{||}}/3$ seems to remain constant. Finally, the obtained values of the flatness are slightly larger than that of a Gaussian distribution, implying increment distributions with heavier tails than Gaussian. \begin{figure}[htbp] \centerline{\includegraphics[width=\linewidth]{S4.pdf}} \caption{Two-dimensional Flatness across scales of two Sentinel-1 SAR images of the ocean: left Great Lakes and right North Pacific. Top in cartesian coordinates $\mathcal{F}^{l_x,l_y}/3$, bottom in polar coordinates $\mathcal{F}^{r,\theta}/3$. In a) and b) the center $l_x=l_y=0$ is indicated with a red dot. In figures c) and d) horizontal and vertical green dashed lines indicate respectively the direction and size of the convective rolls.} \label{fig2dS4} \end{figure} \subsection{Transects} Fig.~\ref{fig1dS234} shows $\theta$-transects of $S_{2}^{r,\theta}$, $\mathcal{S}^{r,\theta}$ and $\mathcal{F}^{r,\theta}/3$ as a function of $r$ for both studied SAR images. Black lines correspond to all the studied directions $\theta$, while the direction parallel to the rolls is highlighted with green circles and the directions perpendicular to the rolls are highlighted with red and blue asterisks. We then focus on $\theta_{\perp}$ and $\theta_{||}$, since they show the most extreme behaviors, with all the other directions in between. In Fig.~\ref{fig1dS234} a) and b), $S_{2}^{r,\theta}$ increases from small values at small scales to a higher-value plateau at large scales. This plateau is reached faster along the perpendicular direction than along the parallel direction.
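Following the convention used for these transects (the roll size is taken at the lag where the perpendicular transect of $S_{2}^{r,\theta}$ is maximal, cf. the vertical dashed line in the transect figure), a one-line estimator could be written as follows (our own sketch, not the authors' code):

```python
import numpy as np

def roll_size(r, s2_perp):
    """Roll-size estimate: the lag r at which the second-order structure
    function along the perpendicular direction reaches its maximum."""
    return r[int(np.argmax(s2_perp))]
```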
Fig.~\ref{fig1dS234} c) and d) show $\mathcal{S}^{r,\theta}$, which evolves very differently depending on $\theta$. Along $\theta_{||}$, $\mathcal{S}^{r,\theta_{||}}$ remains constant and equal to zero. On the contrary, along $\theta_{\perp}$ it differs from zero at scales smaller than the roll scale and goes back to zero at scales larger than the roll scale. Each perpendicular direction corresponds to a different sign of the skewness, due to the opposite asymmetries explained above. In the Great Lakes case, where swell is not present, $\mathcal{S}^{r,\theta_{\perp}}$ shows the maximum/minimum values. In the open-ocean image this is less clear, due to the noise induced by the swell. Finally, Fig.~\ref{fig1dS234} e) and f) show the evolution of $\mathcal{F}^{r,\theta}/3$ for both studied images. On the one hand, for the Great Lakes $\mathcal{F}^{r,\theta_{||}}/3$ is almost constant at a value close to $1.05$, indicating a distribution with slightly heavier tails than a Gaussian. On the contrary, $\mathcal{F}^{r,\theta_{\perp}}/3$ decreases and approaches $1$ at scales smaller than the roll scale; then, at large scales, it rises back to the $\mathcal{F}^{r,\theta_{||}}/3$ value. This evolution of $\mathcal{F}^{r,\theta_{\perp}}/3$ across scales is a signature of intermittency. On the other hand, for the open ocean $\mathcal{F}^{r,\theta}/3$ seems to be constant for all directions, making it impossible to discriminate between $\theta_{||}$ and $\theta_{\perp}$. This is again likely due to the swell, which hampers the analysis. \begin{figure}[htbp] \centerline{\includegraphics[width=\linewidth]{S2_S3_S4_Pol1D.pdf}} \caption{One-dimensional $\theta$-transects of $S_{2}^{r,\theta}$ (a and b), $\mathcal{S}^{r,\theta}$ (c and d) and $\mathcal{F}^{r,\theta}/3$ (e and f) as a function of $r$ for two Sentinel-1 SAR images of the ocean: left Great Lakes and right open ocean.
Direction $\theta_{||}$ is highlighted in green dots, and directions $\theta_{\perp}$ and $\theta_{\perp}+\pi$ are highlighted in blue and red asterisks respectively. The vertical black dashed line corresponds to the maximum value of $S_{2}^{r,\theta}$, which is used to define the size of the rolls.} \label{fig1dS234} \end{figure} \section{Conclusions} We proposed second- and higher-order structure functions to analyse the morphology of convective rolls through the study of Sentinel-1 SAR images of the ocean. We showed that $S_{2}^{l_x,l_y}$ characterizes the direction and size of convective rolls, just as the correlation~\cite{Lohou1998} or the power spectrum~\cite{Wang2020} do. Moreover, and contrary to the correlation and the power spectrum, structure functions can easily be generalized ($\mathcal{S}^{l_x,l_y}$ and $\mathcal{F}^{l_x,l_y}/3$) to grasp high-order statistics and thus provide finer information on the shape of convective rolls. While $\mathcal{S}^{l_x,l_y}$ characterized the asymmetry of convective rolls along the direction perpendicular to the rolls, $\theta_{\perp}$, $\mathcal{F}^{l_x,l_y}/3$ pointed out the intermittent nature of convective rolls, also along $\theta_{\perp}$. To our knowledge, this is the first time that both the asymmetry and the intermittency of rolls, and consequently their turbulent nature, are described from the analysis of SAR images of the ocean. In the open ocean, and on SAR images at 50m resolution, swell can hamper the analysis of high-order statistics due to the small-scale fluctuations it produces in the measurements. However, preliminary analyses show that low-pass filtering (or, equivalently, working on pre-processed SAR images at 200m resolution) could help filter out the swell signature. Such a filtering may require specific analysis to be adjusted depending on wave properties (e.g. swell orientation with respect to wind direction, swell wavelength ...)
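The statistics used throughout can be computed in a few lines. The sketch below evaluates, for a single lag vector $(l_x,l_y)$, the second-order structure function together with the skewness and flatness of the increments; it assumes the standard moment-ratio definitions ($\mathcal{S}=S_3/S_2^{3/2}$, $\mathcal{F}=S_4/S_2^{2}$) and is an illustration only, not the released code of this article (the function names and the synthetic roll pattern are ours).

```python
import numpy as np

def increment_field(img, lx, ly):
    """Increment field img(x + l) - img(x) for an integer lag l = (lx, ly),
    computed without wrap-around."""
    nx, ny = img.shape
    x0, x1 = max(0, -lx), nx - max(0, lx)
    y0, y1 = max(0, -ly), ny - max(0, ly)
    return img[x0 + lx:x1 + lx, y0 + ly:y1 + ly] - img[x0:x1, y0:y1]

def s2_skew_flat(img, lx, ly):
    """Second-order structure function S2, skewness S3/S2^(3/2) and
    flatness S4/S2^2 of the increments for one lag (lx, ly)."""
    d = increment_field(img, lx, ly)
    s2 = np.mean(d ** 2)
    return s2, np.mean(d ** 3) / s2 ** 1.5, np.mean(d ** 4) / s2 ** 2

# Synthetic rolls: a sinusoid along x (period 16 px) plus weak noise.
rng = np.random.default_rng(0)
rows = np.arange(128)
img = np.sin(2 * np.pi * rows[:, None] / 16) \
      + 0.01 * rng.standard_normal((128, 128))
s2_perp, _, _ = s2_skew_flat(img, 8, 0)   # lag perpendicular to the rolls
s2_par, _, _ = s2_skew_flat(img, 0, 8)    # lag parallel to the rolls
```

On this synthetic roll-like pattern, $S_2$ comes out much larger along the lag perpendicular to the rolls than along the parallel one, while for a pure Gaussian noise field the flatness of the increments is close to $3$.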
Future research will address the relationship between roll asymmetry and atmospheric conditions such as wind direction or shear intensity. Finally, the codes used in this article to estimate two-dimensional direction-dependent structure functions are provided in open access: \href{https://github.com/cgranerob/2D-Structure-Functions}{https://github.com/cgranerob/2D-Structure-Functions}. \bibliographystyle{IEEEtran}
\section{Introduction} \label{Introduction} Recurrent Neural Networks (RNN) with fixed weights are known to be able to solve problems of adaptive classification, recognition, and control \shortcite{Conf:IJCNN-02,Conf:ICNN-96:Feldkamp,Conf:ICNN-97:Feldkamp,IEEE_TNN:1999:Younger,Conf:IJCNN-2001:Lo}. When the objects to be classified are static, e.g. still images or vectors in $\mathbb{R}^n$, the way the fixed-weight RNN solves problems is usually characterized in terms of convergence of the RNN state to an attractor \shortcite{Hopfield82,Fuchs88}. Each attractor corresponds to a specific class of objects and its basin determines which objects belong to the class. Conditions specifying convergence to an attractor are widely available in this case, \shortcite{IEEE_TSMC:Cohen:1983,IEEE_TCS:Michel:1990,IEEE_TNN:Yang:1994,IEEE_TNN:Amari:2001,NeuralComp:Chen:2003} to name a few. When the objects to be classified are dynamic, for instance nonlinearly parameterized functions of time of which the parameters are unknown a-priori, no adequate theory exists that explains why the fixed-weight RNN approach is successful. At present, theoretical results are available to demonstrate that a single fixed-weight RNN of a certain type can {\it approximate} the solutions of multiple dynamical systems \shortcite{Neural_Computation:Chen:2002}. Hence in principle, a fixed-weight RNN can behave adaptively with respect to changes of its input signals. These theoretical results, however, are restricted to the class of parameter replacement networks \shortcite{IEEE_TNN:Chen:1995}. The structure of these networks differs from that of the more commonly used recurrent multilayered perceptrons. Whether adaptive behavior is inherent to other types of RNN, therefore, remains an unresolved theoretical issue. In spite of plausibility arguments given by several authors \shortcite{Conf:ICNN-97:Feldkamp,Conf:IJCNN-02}, no formal proof has been made available, to the best of our knowledge. 
In this paper we consider adaptive behavior in fixed-weight RNNs from the standpoint of their ability to {\it classify temporal signals} adaptively. We provide a formal proof that {\it continuous-time} recurrent neural networks with fixed weights can successfully classify and recognize nonlinearly parameterized functions of time whose parameters are unknown a-priori. The main idea behind our results consists of presenting a prototype dynamical system which solves the recognition problem. This is followed by a proof that a RNN with fixed weights can realize this system. We construct such a system using the concepts of relaxation times and weakly attracting sets \shortcite{CommMathPhys:Milnor:1985,Gorban:2004} as well as the tests for convergence to such sets obtained in our earlier work \shortcite{ArXive:Non-uniform:2006}. To show that our system can indeed be realized by a RNN with fixed weights we employ classical results on function approximation by feed-forward networks \shortcite{Cybenko}. The paper is organized as follows. Section \ref{Notation} describes notational agreements. In Section \ref{Problem Formulation} we provide a mathematical statement of the problem, Section \ref{Main_Results} contains the main results, and Section \ref{Conclusion} concludes the paper. \section{Notational Preliminaries}\label{Notation} \hspace{5mm} $\bullet$ Symbol $\mathbb{R}$ denotes the field of real numbers, and symbol $\mathbb{R}_{\geq c}$, $c\in\mathbb{R}$ stands for the following set $\mathbb{R}_{\geq c}=\{x\in\mathbb{R}|x\geq c\}$, and $\mathbb{R}_{> c}=\{x\in\mathbb{R}|x> c\}$. $\bullet$ Symbol $\mathbb{R}^n$ stands for an $n$-dimensional linear space over the field of reals. $\bullet$ $\mathcal{C}^k$ denotes the space of functions that are at least $k$ times differentiable.
$\bullet$ Symbol $\mathcal{K}$ denotes the class of all strictly increasing functions $\kappa: \mathbb{R}_{\geq 0}\rightarrow \mathbb{R}_{\geq 0}$ such that $\kappa(0)=0$; symbol $\mathcal{K}_\infty$ denotes the class of all functions $\kappa\in\mathcal{K}$ such that $\lim_{s\rightarrow\infty}\kappa(s)=\infty$. $\bullet$ Symbol $\oplus$ denotes concatenation of two vectors. $\bullet$ The solution of a system of differential equations $\dot{\mathbf{x}}=\mathbf{f}(t,\mathbf{x},\thetavec,\mathbf{u}(t))$, $\mathbf{f}:\mathbb{R}\times\mathbb{R}^n\times\mathbb{R}^d\times\mathbb{R}^m\rightarrow\mathbb{R}^n$, $\mathbf{f}\in\mathcal{C}^0$, $\mathbf{u}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^m$, $\thetavec\in\mathbb{R}^d$ passing through point $\mathbf{x}_0$ at $t=t_0$ will be denoted for $t\geq t_0$ as $\mathbf{x}(t,\mathbf{x}_0,t_0,\thetavec,\mathbf{u})$, or simply as $\mathbf{x}(t,\mathbf{x}_0)$ or $\mathbf{x}(t)$ if it is clear from the context what the values of $\mathbf{x}_0,\thetavec$ are and how the function $\mathbf{u}(t)$ is defined. $\bullet$ By ${L}^n_\infty[t_0,T]$, $t_0\geq 0$, $T\geq t_0$ we denote the space of all functions $\mathbf{f}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^n$ such that $\|\mathbf{f}\|_{\infty,[t_0,T]}=\ess \sup\{\|\mathbf{f}(t)\|,t \in [t_0,T]\}<\infty$; $\|\mathbf{f}\|_{\infty,[t_0,T]}$ stands for the ${L}^n_\infty[t_0,T]$ norm of $\mathbf{f}(t)$. $\bullet$ Let $\mathcal{A}$ be a set in ${\mathbb{R}^n}$ and $\|\cdot\|$ be the usual Euclidean norm in $\mathbb{R}^n$. By the symbol $\norms{\cdot}$ we denote the following induced norm: \[ \norms{\mathbf{x}}=\inf_{\mathbf{q}\in\mathcal{A}}\{\|\mathbf{x}-\mathbf{q}\|\} \] In case $x$ is a scalar and $\Delta\in\mathbb{R}_{>0}$, notation $\|x\|_\Delta$ stands for the following \[ \|x\|_{\Delta}=\left\{ \begin{array}{ll} |x|-\Delta, & |x|> \Delta\\ 0, & |x|\leq \Delta \end{array} \right. 
\] \section{Problem Formulation}\label{Problem Formulation} Consider the following set of signals \begin{equation}\label{eq:signals} \begin{split} \mathcal{F}&=\{f_i(\xi(t),\theta_i)\}, \ i\in\{1,\dots,N_f\}, \\ & f_i: \mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}, \ f_i(\cdot,\cdot)\in\mathcal{C}^0,\\ & \xi:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}, \ \xi(\cdot)\in\mathcal{C}^1\cap L_{\infty}[0,\infty] \end{split} \end{equation} where $\theta_i\in\Omega_\theta\subset\mathbb{R}$ are parameters of which the values are unknown a-priori, $\Omega_\theta=[\theta_{\min},\theta_{\max}]$ is a bounded interval, and $\xi(t)$ is a known and bounded function. Signals $f_i(\xi(t),\theta_i)$ represent relevant physical variables of an object. For the given functions $f_i(\xi(t),\theta_i)$ and $\xi(t)$ we say that $\theta_i$ is {\it equivalent} to $\theta_i'$ iff \begin{equation}\label{eq:equivalece_notion} f_i(\xi(t),\theta_i)=f_i(\xi(t),\theta_i') \ \forall \ t\in\mathbb{R}_{\geq 0}. \end{equation} Hence an equivalence class for $\theta_i\in\Omega_\theta$ can be defined as \begin{equation}\label{eq:equivalent_class} E_i(\theta_i)=\{\theta_i'\in\mathbb{R}| \ f_i(\xi(t),\theta_i)=f_i(\xi(t),\theta_i') \ \forall \ t\in\mathbb{R}_{\geq 0}\} \end{equation} Equivalence classes (\ref{eq:equivalent_class}) determine sets of indistinguishable parameterizations of the $i$-th signal. It is natural, therefore, to restrict ourselves to the problem of recognizing signals (\ref{eq:signals}) up to their equivalence classes. With respect to the equivalence classes $E_i(\theta_i)$, we further assume that there is at least one point $\theta_{0}\in\mathbb{R}$ such that \begin{equation}\label{eq:non-degeneracy_recognition} \|\theta_{0}\|_{E_i(\theta_i)}\geq \Delta_\theta\in\mathbb{R}_{>0} \ \forall \ \theta_i\in\Omega_\theta. \end{equation} Requirement (\ref{eq:non-degeneracy_recognition}) is a technical assumption. 
It holds, however, for a wide range of practically relevant situations in which the union of $E_i(\theta_i)$ for all $i$ and $\theta_i$ belongs to an interval of $\mathbb{R}$. Furthermore, it allows us to exclude from consideration pathological cases in which almost all points in $\Omega_\theta$ are indistinguishable in the sense of condition (\ref{eq:equivalece_notion}). In many systems, artificial or natural, measured physical quantities, represented here by signals $f_i(\xi(t),\theta_i)$, are often unavailable. This is because a measurement device is involved in measuring $f_i(\xi(t),\theta_i)$. Given that signals $f_i(\xi(t),\theta_i)$ are functions of time, inherent dynamical properties of a measurement device would distort the measured values. Our present study takes this possibility into account. To do so we consider the case where signals $f_i(\xi(t),\theta_i)$ are affected by additive bounded noise and pass through nonlinear filters with uncertain dynamics. In particular, we assume that instead of functions $f_i(\xi(t),\theta_i)$ we access variables $s_i(t,s_{i,0},\theta_i,\eta_i(t))$, which are solutions to the following ordinary differential equation: \begin{equation}\label{eq:filtered_signals} \begin{split} \dot{s}_i&=-\varphi_i(s_i)+f_i(\xi(t),\theta_i) + \eta_i(t), \\ &s_i(t_0)=s_{i,0}, \ s_{i,0}\in\Omega_s\subset\mathbb{R} \end{split} \end{equation} In (\ref{eq:filtered_signals}) the function $\eta_i:\mathbb{R}_{>0}\rightarrow\mathbb{R}$: \begin{equation}\label{eq:additive_noise} \eta_i(t)\in L_{\infty}[0,\infty], \ \|\eta_i(t)\|_{\infty,[0,\infty]}\leq \Delta_{\eta}\in\mathbb{R}_{\geq 0} \end{equation} corresponds to measurement noise. The value of $\Delta_{\eta}$ in (\ref{eq:filtered_signals}) is supposed to be known, while the values of initial conditions $s_i(t_0)$ and functions $\varphi_i:\mathbb{R}\rightarrow\mathbb{R}, \ \varphi(\cdot)\in\mathcal{C}^1$ in (\ref{eq:filtered_signals}) are assumed to be uncertain. 
We do, however, require that $\Omega_s=[s_{\min},s_{\max}]$ is an interval and that the functions $\varphi_i(s_i)$ satisfy the following constraint: \begin{equation}\label{eq:filter_stability} \varphi_{\min}\leq \frac{{\partial} \varphi_i(s_i)}{{\partial} s_i}\leq \varphi_{\max} \ \ \forall \ s_i\in\mathbb{R}, \ \ \varphi_{\min},\varphi_{\max}\in\mathbb{R}_{>0}. \end{equation} Condition (\ref{eq:filter_stability}) ensures that filters (\ref{eq:filtered_signals}) are convergent \shortcite{Pavlov:2004}, i.e. in the absence of noise the dynamics of each variable $s_i(t,s_{i,0},\theta_i,\eta_i(t))$ as $t\rightarrow\infty$ is uniquely determined by $f_i(\xi(t),\theta_i)$, and the effects of the initial conditions $s_{i,0}$ vanish asymptotically with time. A recurrent neural network is defined by the following set of differential equations: \begin{eqnarray}\label{eq:RNN_recurrent} & & \dot{x}_j= \sum_{m=1}^{N} c_{j,m}\sigma(\mathbf{w}^{T}_{j,m}(\xi(t)\oplus s(t) \oplus\mathbf{x})+b_{j,m}), \ j\in\{1,\dots,N_x\},\\ & & \mathbf{x}=\mathrm{col}(x_1,\dots,x_{N_x}), \ \mathbf{x}(t_0)=\mathbf{x}_0,\nonumber \end{eqnarray} where the functions $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ are sigmoidal. Vectors $\mathbf{c}_j=\mathrm{col}(c_{j,1},\dots,c_{j,N})$, $\mathbf{b}_j=\mathrm{col}(b_{j,1},\dots,b_{j,N})$ and matrices $\boldsymbol{W}_j=(\mathbf{w}_{j,1},\dots,\mathbf{w}_{j,N})$ are the RNN parameters. Functions $\xi(t)$, $s(t):\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}$, $\xi(t), s(t)\in\mathcal{C}^0$ are inputs; $\mathbf{x}$ is the state vector, and $\mathbf{x}_0$ is a vector of initial conditions. According to notation (\ref{eq:RNN_recurrent}) the network maps two functions of time $\xi(t)$, $s(t)$ into the functions $x_1(t,\mathbf{x}_0),\dots, x_{N_x}(t,\mathbf{x}_0)$, which are the solutions of (\ref{eq:RNN_recurrent}). In what follows we will consider variables $\xi(t)$, $s(t)$ as inputs to the network.
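As a quick numerical illustration of this convergence property, the sketch below integrates (\ref{eq:filtered_signals}) with the explicit Euler method, using the illustrative choices $\varphi_i(s)=s$ (so that (\ref{eq:filter_stability}) holds with $\varphi_{\min}=\varphi_{\max}=1$), $\xi(t)=\sin t$, $f_i(\xi,\theta)=\sin(\theta\xi)$ and no noise; none of these choices is prescribed by the text.

```python
import numpy as np

def simulate_filter(s0, theta, T=20.0, dt=1e-3):
    """Euler integration of  ds/dt = -phi(s) + f(xi(t), theta)  with the
    illustrative choices phi(s) = s, xi(t) = sin(t), f(xi, th) = sin(th*xi).
    Returns the sampled trajectory."""
    n = int(T / dt)
    out = np.empty(n)
    s = s0
    for k in range(n):
        t = k * dt
        s += dt * (-s + np.sin(theta * np.sin(t)))
        out[k] = s
    return out

# Two different initial conditions, same parameter: trajectories merge.
a = simulate_filter(-2.0, theta=1.3)
b = simulate_filter(3.0, theta=1.3)
```

Trajectories started from different initial conditions collapse onto the same steady response, which is what makes the filter output usable for recognition, while sufficiently different values of $\theta$ produce clearly distinguishable outputs.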
While the variable $\xi(t)$ is known a-priori, variable $s(t)$ is allowed to vary within the set of functions $s_i(t,s_{i,0},\theta_i,\eta_i(t))$, which are the solutions of (\ref{eq:filtered_signals}). In particular, we assume that the following condition is satisfied: \begin{assume}[Existence]\label{assume:signal} There exist $i\in N_f$, $\theta_i\in\Omega_\theta$, $s_{i,0}\in\Omega_s$ and $\eta_i(t)$ specified by (\ref{eq:additive_noise}) such that \begin{equation}\label{eq:class_definition} s(t)=s_i(t,s_{i,0},\theta_i,\eta_i(t)) \ \forall \ t\geq 0. \end{equation} \end{assume} We aim to determine if there is a network of type (\ref{eq:RNN_recurrent}) which is able to recover uncertain parameters $i$ and $\theta_i$ from the input $s(t)$\footnote{Because filters (\ref{eq:filtered_signals}) are convergent, the effect of uncertainty in parameter $s_{i,0}$ vanishes with time exponentially. Hence the only effective uncertainties are $i$ and $\theta_i$.}, $t\geq t_0\in\mathbb{R}_{\geq0}$ within a finite interval of time for all $t_0\in\mathbb{R}_{\geq 0}$ . Informally, this means that there exist two sets of functions of network state $\mathbf{x}$ and input $s(t)$: \begin{equation}\label{eq:RNN_output} \begin{split} &\{h_{f,j}(\mathbf{x}(t),s(t))\}, \ \{h_{\theta,j}(\mathbf{x}(t),s(t))\}, \\ & h_{f,j}:\mathbb{R}^{N_x}\times\mathbb{R}\rightarrow\mathbb{R}, \ h_{\theta,j}:\mathbb{R}^{N_x}\times\mathbb{R}\rightarrow\mathbb{R}, \ j\in\{1,\dots,N_f\}, \end{split} \end{equation} such that the values of $i$ and $\theta_i$ can be inferred from $\{h_{f,j}(\mathbf{x}(t),s(t))\}$, $\{h_{\theta,j}(\mathbf{x}(t),s(t))\}$ respectively within a given finite interval of time. Formally we can state this as follows: \begin{problem}\label{problem:RNN_existence} Consider class $\mathcal{F}$ of signals (\ref{eq:signals}), where the function $\xi(t)$ is known, and the values of parameters $\theta_i$ are unknown a-priori. 
Determine a recurrent neural network (\ref{eq:RNN_recurrent}) such that the following properties hold: 1) there is a set of initial conditions $\Omega_x$ such that $\mathbf{x}(t,\mathbf{x}_0)$ is bounded for all $\mathbf{x}_0\in\Omega_x$ and $t\geq t_0\in\mathbb{R}_{\geq 0}$; the volume of $\Omega_x$ is nonzero; 2) there exists a set of output functions (\ref{eq:RNN_output}) such that, for all $\theta_i\in\Omega_{\theta}$, $s_{i,0}\in\Omega_s$, $t_0\in\mathbb{R}_{\geq 0}$, $\mathbf{x}_0\in\Omega_x$, and functions $\eta_i(t)$ given by (\ref{eq:additive_noise}), condition (\ref{eq:class_definition}) implies existence of a constant $\mathcal{T}\in\mathbb{R}_{>0}$, time instant $t'\in(t_0,t_0+\mathcal{T})$, (arbitrarily large) $T^\ast\in\mathbb{R}_{>0}$, and (arbitrarily small) $\varepsilon\in\mathbb{R}_{>0}$ and $\mathcal{D}\in\mathcal{K}_\infty$ such that \begin{equation}\label{eq:class_recognition} \begin{split} & \| h_{f,i}(\mathbf{x}(t),s(t)) \|_{\infty,[t',t'+T^\ast]}<\varepsilon + \mathcal{D}(\Delta_\eta), \\ & \inf_{\theta_i'\in E(\theta_i)} \| h_{\theta,i}(\mathbf{x}(t),s(t))-\theta_i'\|_{\infty,[t',t'+T^\ast]}<\varepsilon + \mathcal{D}(\Delta_\eta). \end{split}\nonumber \end{equation} \end{problem} In general, this problem has no solutions for all possible $\xi(t)\in\mathcal{C}^1$ and $f_i(\cdot,\cdot)\in\mathcal{C}^0$. Consider, for instance, the case when $f_i(\xi(t),\theta_i)=\sin(\xi(t)\theta_i)$ and \[ \xi(t)=\left\{\begin{array}{ll} \sin^2(\ln(t-t_0+1)), & \sin(\ln(t-t_0+1))\geq 0\\ 0, & \sin(\ln(t-t_0+1))< 0 \end{array} \right. \ \forall \ t\geq t_0. \] Time intervals when $\xi(t)=0$ are growing unboundedly with time. Hence for any fixed $\mathcal{T}$, $T^\ast$ there will always exist an instant $t_0'$ such that for all $t\geq t_0'$ the lengths of intervals when $\xi(t)=0$ exceed $\mathcal{T}+T^\ast$. For all such intervals, solutions $s_i(t,s_{i,0}, \theta_i,\eta_i(t))$ do not depend on $\theta_i$. 
Hence recovery of the actual values of $\theta_i$ from signal $s(t)$ cannot be achieved within a fixed time interval $[t_0, t_0+\mathcal{T}+T^\ast]$ for all $t_0\geq t_0'$. In order to enable a solution of the classification/recognition problem above we must introduce an additional constraint on the functions $f_i(\xi(t),\theta_i)$. This should ensure that variation in parameter $\theta_i$ can be detected from the values of $f_i(\xi(t),\theta_i)$ within a finite time interval. We therefore require that the following property holds: \begin{assume}[Non-degeneracy]\label{assume:NLPE} For the set of functions $f_i(\xi(t),\theta_i)$ specified by (\ref{eq:signals}) and all $t\geq t_0$, $\theta_i, \ \theta_i'$ there exist a constant $T\in\mathbb{R}_{>0}$ and a strictly increasing function $\rho:\mathbb{R}_{\geq 0}\rightarrow \mathbb{R}_{\geq 0}$, $\rho\in\mathcal{K}_\infty$ such that the following condition holds: \begin{equation}\label{eq:nonlinear_persistency} \forall \ t\geq t_0 \ \exists \ t'\in [t,t+T]: \ \ |f_i(\xi(t'),\theta_i)-f_i(\xi(t'),\theta_i')|\geq \rho\left(\normstwo{\theta_i}{E_i(\theta_i')}\right). \end{equation} \end{assume} In case the equivalence classes $E_i(\theta_i')$ consist of single elements, e.g. when there is a unique value of $\theta_i'=\theta_i$ satisfying (\ref{eq:equivalece_notion}), condition (\ref{eq:nonlinear_persistency}) will have a more transparent form: \begin{equation}\label{eq:nonlinear_persistency_unique} \forall \ t\geq t_0 \ \exists \ t'\in [t,t+T]: \ \ |f_i(\xi(t'),\theta_i)-f_i(\xi(t'),\theta_i')|\geq \rho(|\theta_i-\theta_i'|). \end{equation} These conditions simply state that within a fixed time interval the values of $\normstwo{\theta_i}{E_i(\theta_i')}$ or $|\theta_i-\theta_i'|$ can be inferred from the differences $f_i(\xi(t),\theta_i)-f_i(\xi(t),\theta_i')$ for all $t\in\mathbb{R}_{\geq 0}$. 
In the next section we show that the solution to Problem \ref{problem:RNN_existence} can be obtained for the class $\mathcal{F}$ of functions $f_i(\xi(t),\theta_i)$ that are Lipschitz in $\theta_i$. We present these results in the form of sufficient conditions formulated in Theorem \ref{theorem:RNN_existence}. \section{Main Results}\label{Main_Results} As was suggested in our previous work \shortcite{Conf:IJCNN-02}, as well as in \shortcite{IEEE_TNN:1999:Younger} the reason why RNNs with fixed parameters (weights) demonstrate adaptive behavior could be found in their dynamics; supposedly, it is already sufficiently rich to have an adequate adaptation mechanism embedded into it. Finding a system which satisfies requirements 1), 2) in Problem \ref{problem:RNN_existence} and which is, at the same time, realizable by a RNN, therefore, automatically constitutes an existence proof. This intuition, we will show, is correct. The result is provided in Theorem \ref{theorem:RNN_existence} below. \begin{thm}[Existence]\label{theorem:RNN_existence} Let functions $\xi(t)$, $f_i(\xi(t),\theta_i)$ be given and defined as in (\ref{eq:signals}), and Assumptions \ref{assume:signal}, \ref{assume:NLPE} hold. Furthermore, suppose that $f_i(\xi(t),\theta_i)$ are (locally) Lipschitz\footnote{Property (\ref{eq:Lipschitz}) can be understood as a generalized Lipschitz condition. 
When equivalence sets $E_i(\theta_i')$ consist of single elements the property transforms into: $|f_i(\xi(t),\theta_i)-f_i(\xi(t),\theta_i')|\leq D_{\theta} |\theta_i-\theta_i'|$.}: \begin{eqnarray} &\exists \ D_{\theta}\in\mathbb{R}_{>0}: \ |f_i(\xi(t),\theta_i)-f_i(\xi(t),\theta_i')|\leq D_{\theta} \normstwo{\theta_i}{E_i(\theta_i')}& \ \ \forall \ t>0, \ \theta_i,\theta_i' \label{eq:Lipschitz} \\ &\exists \ D_{\xi}\in\mathbb{R}_{>0}: \ |f_i(\xi,\theta_i)-f_i(\xi',\theta_i)|\leq D_{\xi} |\xi - \xi'| & \ \ \ \ \ \forall \ \theta_i, \xi, \xi' \label{eq:Lipschitz_xi} \end{eqnarray} and the time-derivative of $\xi(t)$ is bounded: \begin{equation}\label{eq:xi_derivative} \left|\frac{d}{dt} \xi(t)\right|\leq {\partial} \xi_\infty \ \forall \ t\geq 0. \end{equation} Then for any $T^\ast\in\mathbb{R}_{>0}$, $\varepsilon\in\mathbb{R}_{>0}$ there is a recurrent neural network (\ref{eq:RNN_recurrent}) satisfying the requirements of Problem \ref{problem:RNN_existence}, provided that the upper bound $\Delta_\eta$ for the $L_\infty[0,\infty]$-norms of the disturbance terms, $\eta_i(t)$, is sufficiently small. \end{thm} {\it Proof of Theorem \ref{theorem:RNN_existence}.} We prove the theorem in four steps. First, we present a dynamical system which will be referred to as the {\it convergence prototype}. 
We select this system in the following class of differential-algebraic equations: \begin{eqnarray} & &\begin{aligned} \dot{\hat{s}}_i&=-\varphi_i(\hat{s}_i)+f_i(\xi(t),\hat{\theta}_i) \end{aligned}\label{eq:convergence_prototype}\\ & & \begin{aligned} \hat{\theta}_i&= a+\frac{b-a}{2}(x_i+1) \end{aligned}\label{eq:convergence_prototype:1}\\ & & \begin{aligned} \dot{x}_i&=\gamma \|\hat{s}_i-s\|_{\varepsilon}\left(x_i-y_i-x_i(x_i^2+y_i^2)\right)\\ \dot{y}_i&=\gamma \|\hat{s}_i-s\|_{\varepsilon}\left(x_i+y_i-y_i(x_i^2+y_i^2)\right), \end{aligned}\label{eq:convergence_prototype:2} \end{eqnarray} where \begin{equation}\label{eq:theorem:prototype_parameters} \gamma\in\mathbb{R}_{>0}, \ a,b\in \mathbb{R}, \ a<\theta_{\min}, \ b>\theta_{\max}, \ \theta_0\in [a,b], \ i=1,\dots,N_f, \ \varepsilon\in\mathbb{R}_{>0}. \end{equation} System (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) has a locally Lipschitz right-hand side and its solutions are bounded for all initial conditions $\hat{s}_i(t_0)$, $x_i(t_0)$, $y_i(t_0)\in\mathbb{R}$. We show that there exist (domains of) $\gamma>0$, $\varepsilon>0$ and a point $\hat{s}_i(t_0)=s_0'$, $x_i(t_0)=x_0'$, $y_i(t_0)=y_0'$, such that the trajectories passing through this point converge to the following target set \begin{equation}\label{eq:target_set_RNN} \|\hat{s}_i-s_i\|_{\varepsilon}=0, \ \ \normstwo{\hat{\theta}_i}{E_i(\theta_i)}\leq \varepsilon_{\theta}(\varepsilon). \end{equation} Second, we prove that there is a point $x_i(t_0)=x_0'$, $y_i(t_0)=y_0'$ such that convergence is locally uniform with respect to the values of uncertain $\theta_i$ and $s_{i,0}$. 
In other words, for all $t_0\geq 0$, $s_{i,0}\in\Omega_s$, and $\theta_i\in\Omega_{\theta}$ there exists $\tau>0$ such that solutions of (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) with initial conditions $x_i(t_0)=x_0'$, $y_i(t_0)=y_0'$ will be in an arbitrarily small neighborhood of (\ref{eq:target_set_RNN}) for all $t\geq t_0+\tau$. System (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}), however, is not structurally stable. That is, small perturbations of its right-hand side might change asymptotic properties of the system drastically. Hence, due to the inevitable approximation errors, the chances that an RNN realization of (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) would solve Problem \ref{problem:RNN_existence} are slim. To continue our argument we need to modify (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) such that the resulting system becomes structurally stable. For this reason we, third, consider the perturbed version of system (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) \begin{eqnarray} & & \begin{aligned} \dot{\hat{s}}_i&=-\varphi_i(\hat{s}_i)+f_i(\xi(t),\hat{\theta}_i)\\ \hat{\theta}_i&= a+\frac{b-a}{2}(x_i+1) \end{aligned}\label{eq:convergence_prototype_struct_stable}\\ & & \begin{aligned} \dot{x}_i&=\gamma (\|\hat{s}_i-s\|_{\varepsilon}+\delta)\left(x_i-y_i-x_i(x_i^2+y_i^2)\right)\\ \dot{y}_i&=\gamma (\|\hat{s}_i-s\|_{\varepsilon}+\delta)\left(x_i+y_i-y_i(x_i^2+y_i^2)\right), \ \delta\in\mathbb{R}_{>0} \end{aligned}\label{eq:convergence_prototype_struct_stable:2} \end{eqnarray} aiming at achieving structural stability of an otherwise structurally unstable system. 
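A minimal numerical sketch of the perturbed prototype (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) is given below, again with the illustrative (not prescribed) choices $\varphi_i(s)=s$, $\xi(t)=\sin t$, $f_i(\xi,\theta)=\sin(\theta\xi)$. The pair $(x_i,y_i)$ stays on the unit circle, so the estimate $\hat{\theta}_i$ remains inside $[a,b]$, and the $\delta$-term keeps the search phase moving even when the tracking error sits inside the deadzone.

```python
import numpy as np

def deadzone(e, eps):
    """The deadzone norm ||e||_eps: distance from e to [-eps, eps]."""
    return max(abs(e) - eps, 0.0)

def prototype(theta_true, T=60.0, dt=1e-3,
              a=-2.0, b=2.0, gamma=0.05, eps=0.05, delta=1e-3):
    """Euler simulation of the perturbed prototype: a searching oscillator
    (x, y) on the unit circle drives the estimate th_hat, and its phase
    speed is gated by the deadzone tracking error plus the perturbation
    delta.  phi(s) = s, xi(t) = sin t and f(xi, th) = sin(th * xi) are
    illustrative choices only."""
    s = 0.0            # 'measured' filter state, driven by theta_true
    sh = 0.0           # internal model state
    x, y = 1.0, 0.0    # oscillator state, started on the unit circle
    for k in range(int(T / dt)):
        xi = np.sin(k * dt)
        th_hat = a + 0.5 * (b - a) * (x + 1.0)
        w = gamma * (deadzone(sh - s, eps) + delta)  # phase speed
        r2 = x * x + y * y
        x, y = (x + dt * w * (x - y - x * r2),
                y + dt * w * (x + y - y * r2))
        s += dt * (-s + np.sin(theta_true * xi))
        sh += dt * (-sh + np.sin(th_hat * xi))
    return th_hat, x, y

th_hat, x, y = prototype(theta_true=1.0)
```

Because the phase speed is gated by $\gamma(\|\hat{s}_i-s\|_{\varepsilon}+\delta)$, the sweep of $\hat{\theta}_i$ over $[a,b]$ slows down precisely where the internal model reproduces the measured signal, which is the locking mechanism exploited in the proof.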
We show that trajectories of system (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) periodically visit a small vicinity of (\ref{eq:target_set_RNN}) and stay there for an arbitrarily long time, depending on the value of $\delta$. Fourth, given that system (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) is structurally stable, we apply the results from \shortcite{Cybenko} to demonstrate that solutions of (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) can be approximated in forward time over the semi-infinite interval $[0,\infty]$ by the state of a recurrent neural network specified by equations (\ref{eq:RNN_recurrent}). {\it 1. Convergence prototype.} According to Assumption \ref{assume:signal} there exist $i\in\{1,\dots,N_f\}$, $s_{i,0}$, $\theta_i$ such that $s(t)=s_i(t,s_{i,0},\theta_i,\eta_i(t))$ for all $t\geq 0$. Consider the $i$-th subsystem of (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) and analyze the dynamics of the difference $s_i(t)-\hat{s}_i(t)$.
Denoting \begin{equation}\label{eq:theorem:1} \begin{split} e_i(t)&=s(t)-\hat{s}_i(t)=s_i(t)-\hat{s}_i(t), \\ \alpha_i(t)&=\int_{0}^1\frac{{\partial} \varphi_i(s_i(t) r + (1-r)\hat{s}_i(t))}{{\partial} (s_i(t) r + (1-r)\hat{s}_i(t))} d r,\\ \Delta f_i(t)&=f_i(\xi(t),\theta_i)-f_i(\xi(t),\hat{\theta}_i(x_i(t))) \end{split} \end{equation} and using Hadamard's lemma we can derive the following estimate: \begin{equation}\label{eq:theorem:2} |e_i(t)|\leq e^{-\int_{0}^t \alpha_i(\tau)d\tau} |e_i(0)| + \frac{1}{\varphi_{\min}} \left(1 - e^{-\int_{0}^t \alpha_i(\tau)d\tau}\right)(\|\Delta f_i(\tau)\|_{\infty,[0,t]}+\|\eta_i(\tau)\|_{\infty,[0,\infty]}) \end{equation} Given that $\|\eta_i(\tau)\|_{\infty,[0,\infty]}\leq \Delta_\eta$ for all $t\geq 0$, inequality (\ref{eq:theorem:2}) implies that \[ \left(|e_i(t)|- \frac{\Delta_{\eta}}{\varphi_{\min}}\right)\leq e^{-\varphi_{\min} t} \left(|e_i(0)|- \frac{\Delta_{\eta}}{\varphi_{\min}}\right) + \frac{1}{\varphi_{\min}} \|\Delta f_i(\tau)\|_{\infty,[0,t]} \] Hence the following estimate holds along the trajectories of (\ref{eq:convergence_prototype}): \begin{equation}\label{eq:theorem:3} \|e_i(t)\|_{\varepsilon}\leq e^{-\varphi_{\min} t} \|e_i(0)\|_{\varepsilon} + \frac{1}{\varphi_{\min}} \|\Delta f_i(\tau)\|_{\infty,[0,t]}, \ \ \varepsilon=\frac{\Delta_{\eta}}{\varphi_{\min}} \end{equation} Taking (\ref{eq:Lipschitz}), (\ref{eq:theorem:3}) into account, plus the fact that $\normstwo{\hat{\theta}_i}{E_i(\theta_i)}=\inf_{\bar{\theta}_i\in E_i(\theta_i)}|\hat{\theta}_i-\bar{\theta}_i|$, we can conclude that the following inequality holds: \begin{equation}\label{eq:theorem:4} \|e_i(t)\|_{\varepsilon}\leq e^{-\varphi_{\min} t} \|e_i(0)\|_{\varepsilon} + \frac{D_\theta}{\varphi_{\min}} \|\bar{\theta}_i-\hat{\theta}_i(\tau)\|_{\infty,[0,t]}, \ \bar{\theta}_i\in E_i(\theta_i)\cap [a,b]. \end{equation} Let us now consider equations (\ref{eq:convergence_prototype:1}), (\ref{eq:convergence_prototype:2}).
We pick a point $x'$, $y'$ which satisfies the following condition: \begin{equation}\label{eq:theorem:4.5} {x'}^2+{y'}^2=1. \end{equation} Solutions of (\ref{eq:convergence_prototype:2}) passing through this point can be written as follows: \begin{equation}\label{eq:theorem:5} \begin{split} x_i(t,x',y')&=\cos\left(\int_0^t \gamma \|\hat{s}_i(\tau)-s(\tau)\|_{\varepsilon} d\tau + \nu_x\right), \ x'=\cos(\nu_x), \ \nu_x\in[0,2\pi]\\ y_i(t,x',y')&=\sin\left(\int_0^t \gamma \|\hat{s}_i(\tau)-s(\tau)\|_{\varepsilon} d\tau + \nu_y\right), \ y'=\sin(\nu_y), \ \nu_y\in[0,2\pi] \end{split} \end{equation} This can easily be verified by writing (\ref{eq:convergence_prototype:2}) in polar coordinates $x_i=r \cos (\nu)$, $y_i=r \sin(\nu)$ \shortcite{Guckenheimer:2002}: \begin{equation}\label{eq:theorem:5.5} \begin{split} \dot{r}&=\gamma \|\hat{s}_i-s\|_{\varepsilon} \cdot r(1-r^2)\\ \dot{\nu}&=\gamma \|\hat{s}_i-s\|_{\varepsilon} \end{split} \end{equation} Given that $\bar{\theta}_i$ belongs to the interval $[a,b]$, there is a number $\bar{h}(\bar{\theta}_i)\in[0,\pi]$ such that for all $k\in\mathbb{Z}$ the following equality holds \begin{equation}\label{eq:theorem:5.75} \bar{\theta}_i= a+ \frac{b-a}{2} \left(\cos(\bar{h}(\bar{\theta}_i)+2\pi k)+1\right).
\end{equation} Hence according to (\ref{eq:convergence_prototype:1}), (\ref{eq:theorem:5}) the norm $\|\bar{\theta}_i-\hat{\theta}_i(\tau)\|_{\infty,[0,t]}$ can be estimated from above as follows: \begin{equation}\label{eq:theorem:6} \|\bar{\theta}_i-\hat{\theta}_i(\tau)\|_{\infty,[0,t]}\leq\frac{b-a}{2} \|\bar{h}(\bar{\theta}_i)-\nu_x+2 \pi k - \int_0^t \gamma \|\hat{s}_i(\tau)-s(\tau)\|_{\varepsilon} d\tau\|_{\infty,[0,t]} \end{equation} Denoting \[ \begin{split} c&=\frac{D_\theta}{\varphi_{\min}}\frac{b-a}{2}; \ \ h(t,\bar{\theta}_i,k)=\bar{h}(\bar{\theta}_i)-\nu_x+2 \pi k - \int_0^t \gamma \|\hat{s}_i(\tau)-s(\tau)\|_{\varepsilon} d\tau \end{split} \] and taking into account (\ref{eq:theorem:4}), (\ref{eq:theorem:6}) we can conclude that the following holds along the solutions of (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}): \begin{equation}\label{eq:theorem:7} \begin{split} \|e_i(t)\|_{\varepsilon}&\leq e^{-\varphi_{\min} t} \|e_i(0)\|_{\varepsilon} + c \|h(\tau,\bar{\theta}_i,k)\|_{\infty,[0,t]};\\ h(0,\bar{\theta}_i,k)&-h(t,\bar{\theta}_i,k) = \int_0^t \gamma \|e_i(\tau)\|_{\varepsilon} d\tau \end{split} \end{equation} According to \shortcite{ArXive:Non-uniform:2006} (Theorem 1 and Corollaries 2, 3) there exist $\gamma^\ast\in\mathbb{R}_{>0}$ and $h^\ast$ such that for a given bounded $e_i(0)$, all $\gamma\in\mathbb{R}_{>0}$, $\gamma<\gamma^\ast$ and $h(0,\bar{\theta}_i,k)\geq h^\ast$ the norm $\|e_i(\tau)\|_{\infty,[0,\infty]}$ is bounded and \begin{equation}\label{eq:theorem:7.5} \lim_{t\rightarrow\infty}h(t,\bar{\theta}_i,k)\in [0,h(0,\bar{\theta}_i,k)]. 
\end{equation} The value of $\gamma^\ast$, according to Corollary 3 in \cite{ArXive:Non-uniform:2006}, can be determined from the following inequality: \begin{equation}\label{eq:convergence_condition:1} 0<\gamma^\ast< \frac{\varphi_{\min}}{c} \left(\ln\left(\frac{\kappa}{d}\right)\frac{\kappa}{\kappa-1}\left(2+\frac{\kappa}{1-d}\right)\right)^{-1}, \ \kappa\in \mathbb{R}_{>1}, \ d\in(0,1)\subset\mathbb{R}. \end{equation} The value of $h^\ast$ can be estimated from: \begin{equation}\label{eq:convergence_condition:2} \|e_i(t_0)\|_{\varepsilon}\leq \left(\frac{\varphi_{\min}}{\gamma^\ast}\left(\ln \frac{\kappa}{d}\right)^{-1}\frac{\kappa -1}{\kappa}-c\left(2+\frac{\kappa}{1-d}\right)\right) h^\ast \end{equation} Given that $\|e_i(t_0)\|_{\varepsilon}$ in (\ref{eq:convergence_condition:2}) is bounded from above for all $t_0\geq 0$, namely $\|e_i(t_0)\|_{\varepsilon}\leq s_{\max} - s_{\min} + D_\theta (b-a)/\varphi_{\min}$, condition \begin{equation}\label{eq:convergence_condition:3} h^\ast \geq \left((s_{\max}-s_{\min})+\frac{D_\theta (b-a)}{\varphi_{\min}}\right) \left(\frac{\varphi_{\min}}{\gamma^\ast}\left(\ln \frac{\kappa}{d}\right)^{-1}\frac{\kappa -1}{\kappa}-c\left(2+\frac{\kappa}{1-d}\right)\right)^{-1} \end{equation} together with (\ref{eq:convergence_condition:1}) implies that for all $\hat{s}_i(t_0)\in\Omega_s$ and $h(0,\bar{\theta}_i,k)\geq h^\ast$ the norm $\|e_i(\tau)\|_{\infty,[0,\infty]}$ is bounded and property (\ref{eq:theorem:7.5}) holds. Notice that in the definition of $h(0,\bar{\theta}_i,k)$: \begin{equation}\label{eq:theorem:7.25} h(0,\bar{\theta}_i,k)=\bar{h}(\bar{\theta}_i)-\nu_x+2 \pi k \end{equation} the value of $k$ can be chosen arbitrarily large. Moreover, $\bar{h}(\bar{\theta}_i)\in[0,\pi]$ for all $\bar{\theta}_i\in[a,b]$. This implies that there exists a finite $k'$ such that condition $h(0,\bar{\theta}_i,k')\geq h^\ast$ will be satisfied for any fixed $h^\ast$ (i.e.
for all $\gamma^\ast$ satisfying (\ref{eq:convergence_condition:1})) and all $\bar{\theta}_i\in[a,b]$. In addition, the following will hold: \begin{equation}\label{eq:theorem:7.75} \lim_{t\rightarrow\infty}h(t,\bar{\theta}_i,k')\in [0,h(0,\bar{\theta}_i,k')]\subset [0, \pi - \nu_x + 2\pi k'] \ \ \forall \ \bar{\theta}_i\in[a,b]. \end{equation} Taking (\ref{eq:theorem:5}) into account, we can conclude that the solutions $x_i(t,x',y')$ converge to a point in the interval $[-1,1]$ as $t\rightarrow\infty$, and the vector $(x_i(t,x',y'),y_i(t,x',y'))$ makes no more than $k'$ full rotations around the origin for all $\theta_i\in[\theta_{\min},\theta_{\max}]$. Hence for a given initial condition $x_i(0)=x'$, $y_i(0)=y'$, $\hat{s}_{i,0}\in \Omega_s$ and ${\theta}_i\in[\theta_{\min},\theta_{\max}]$ the estimate $\hat{\theta}_i(t)=a+(b-a)/2 \cdot (x_i(t,x',y')+1)$ converges to a point in $[a,b]$ as $t\rightarrow\infty$. We denote this point by the symbol $\hat{\theta}_i^\ast$. Given that $\hat{\theta}_i(t)$ converges to a limit, there exists a time instant $t^\ast$ such that for all $t\geq t^\ast$ the following condition holds: $|\hat{\theta}_i(t)-\hat{\theta}_i^\ast|<\mu_\infty$, where $\mu_\infty \in\mathbb{R}_{>0}$ is an arbitrarily small constant. Therefore, taking condition (\ref{eq:Lipschitz}) into account, we can conclude that for all $t\geq t^\ast$ the derivative $\dot{e}_i$ satisfies the following equation: \begin{equation}\label{eq:theorem:8} \dot{e}_i=-\alpha(t) e_i + f_i(\xi(t),\theta_i) - f_i(\xi(t),\hat{\theta}_i^\ast)+\mu_i(t) + \eta_i(t) \end{equation} where $|\mu_i(t)|\leq D_\theta \ \mu_\infty$ is a continuous function. Now we will show that the norm $\normstwo{\theta_i}{E_i(\hat{\theta}_i^\ast)}$ can be bounded from above by a $\mathcal{K}_\infty$-function of $\Delta_\eta$. Consider the term $f_i(\xi(t),\theta_i) - f_i(\xi(t),\hat{\theta}_i^\ast)$.
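Before estimating this term, the convergence mechanism just described can be checked numerically: the phase of the planar subsystem advances by $\gamma\int_0^t\|e_i(\tau)\|_\varepsilon d\tau$, which stays finite, so $(x_i,y_i)$, and with it $\hat{\theta}_i$, converges to a point. The sketch below is our own toy illustration (forward-Euler integration, with the weight $\gamma\|\hat{s}_i-s\|_\varepsilon$ replaced by an integrable stand-in $e^{-t}$; all names are ours, not from the paper's simulations):

```python
import math

def simulate(x0, y0, dt=1e-3, t_end=20.0):
    """Euler-integrate the planar prototype
         x' = omega(t) * (x - y - x*(x^2 + y^2))
         y' = omega(t) * (x + y - y*(x^2 + y^2))
    with omega(t) = exp(-t) (an integrable stand-in for gamma*||e_i||),
    returning the final radius and the accumulated phase advance."""
    x, y, t, phase = x0, y0, 0.0, 0.0
    while t < t_end:
        omega = math.exp(-t)          # integrable weight: total phase advance is finite
        r2 = x * x + y * y
        dx = omega * (x - y - x * r2)
        dy = omega * (x + y - y * r2)
        x, y = x + dt * dx, y + dt * dy
        phase += dt * omega           # d(nu)/dt = omega in polar coordinates
        t += dt
    return math.sqrt(x * x + y * y), phase

r, phase = simulate(1.0, 0.0)
# starting on the unit circle the radius stays (near) 1, and the total phase
# advance equals int_0^infty exp(-t) dt = 1, so the trajectory stops rotating
# and converges to a fixed point on the circle
```

Because the total phase advance is finite, $\hat{\theta}_i(x_i(t))$ settles, which is exactly the behaviour exploited in the argument above.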
According to (\ref{eq:nonlinear_persistency}) there exists a sequence of monotonically increasing time instants $t_j$, $j=1,2,\dots$ such that $t_{j+1}-t_j\leq 2 T$ and $|f_i(\xi(t_j),\theta_i) - f_i(\xi(t_j),\hat{\theta}_i^\ast)|\geq \rho (\normstwo{\theta_i}{E_i({\hat{\theta}_i^\ast})})$. Furthermore, according to (\ref{eq:Lipschitz_xi}), (\ref{eq:xi_derivative}), the time-derivative of $f_i(\xi(t),\theta_i) - f_i(\xi(t),\hat{\theta}_i^\ast)$ is bounded: \[ \left|\frac{d}{dt}\left( f_i(\xi(t),\theta_i) - f_i(\xi(t),\hat{\theta}_i^\ast)\right)\right|\leq 2 D_\xi \cdot {\partial} \xi_\infty=D_f \] Hence the following estimate holds: \begin{equation}\label{eq:theorem:9} \begin{split} &\int_{t}^{t+L} |f_i(\xi(\tau),\theta_i) - f_i(\xi(\tau),\hat{\theta}_i^\ast)| d\tau \geq \frac{\rho (\normstwo{\theta_i}{E_i({\hat{\theta}_i^\ast})})^2}{2 D_f}\\ & L = \max\left\{2 T, \frac{\rho(b-a)}{D_f}\right\} \end{split} \end{equation} \noindent In order to proceed further we will need the following lemma. \begin{lem}\label{lem:filtered_pe} Consider the following differential equation \begin{equation} \label{eq:filtered_pe_system} \dot{z}=-\varphi(t,z) + u(t) + \eta(t), \ z_0=z(0)\in[z_{\min},z_{\max}]\subset \mathbb{R} \end{equation} Let us suppose that 1) $\varphi(t,z)z\geq 0, \ \varphi_{\min}\leq {{\partial} \varphi(t,z)}/{{\partial} z}\leq \varphi_{\max}$; 2) $u(t)\in L_{\infty}[0,\infty]\cap \mathcal{C}^1, \ \|u(t)\|_{\infty,[0,\infty]}\leq u_\infty$, $\|\dot{u}(t)\|_{\infty,[0,\infty]}\leq {\partial} u_{\infty}$; 3) $\eta(t)\in L_\infty[0,\infty]$, $\|\eta(t)\|_{\infty,[0,\infty]}\leq \Delta$; 4) there exist constants $L$, $\delta$ such that for all $t\geq 0$ \begin{equation}\label{eq:pe} \int_{t}^{t+L}|u(\tau)|d\tau\geq \delta \end{equation} 5) finally, assume that the following inequality holds: \begin{equation}\label{eq:filtered_pe_condition} \left(\frac{\delta}{L}\right)^2 - \Delta u_\infty > 0.
\end{equation} Then for any $p\in\mathbb{R}_{>0}$ there exist constants $L^\ast>0$ and $\delta^\ast \geq ({\delta^2}/{L} - \Delta u_\infty L)/p$ such that \begin{equation}\label{eq:filtered_pe_result} \int_{t}^{t+L^\ast}|z(\tau)|d\tau\geq \delta^\ast\geq \frac{1}{p}\left(\frac{\delta^2}{L} - \Delta u_\infty L \right) \ \forall \ t\geq 0 \end{equation} \end{lem} {\it Proof of Lemma \ref{lem:filtered_pe}.} We prove the lemma along the lines of an argument provided in \shortcite{ArXive:Loria:2003} (Property 1). Consider the time-derivative of $z u$: \begin{equation}\label{eq:lemma_pe:1} \frac{d}{dt}\left(z u\right) = (-\varphi(t,z) + u +\eta) u + z \dot{u} \geq u^2 - |z|\left(\varphi_{\max}+{\partial} u_\infty\right) - |u|\Delta \end{equation} According to (\ref{eq:lemma_pe:1}), for all $t,t_0\in\mathbb{R}_{\geq 0}$, $t\geq t_0$ the following inequality holds: \begin{equation}\label{eq:lemma_pe:2} z(t)u(t)-z(t_0)u(t_0)\geq \int_{t_0}^t u^2(\tau)d\tau - \left(\varphi_{\max}+{\partial} u_\infty\right)\int_{t_0}^t |z(\tau)|d\tau - \Delta \int_{t_0}^t|u(\tau)|d\tau \end{equation} Rearranging terms in (\ref{eq:lemma_pe:2}) yields \[ \begin{split} & \left(\varphi_{\max}+{\partial} u_\infty\right)\int_{t_0}^t |z(\tau)|d\tau\geq z(t_0)u(t_0)-z(t)u(t) + \int_{t_0}^t u^2(\tau)d\tau - \Delta \int_{t_0}^t|u(\tau)|d\tau \end{split} \] Notice that $z(t_0)u(t_0)-z(t)u(t)$ is bounded from below for all $t\geq 0$. We denote this bound by the symbol $M$. Furthermore, according to the H\"older inequality and property (\ref{eq:pe}), the following estimate holds for all $t\geq 0$: \[ \frac{\delta^2}{L}\leq\frac{1}{L} \left(\int_{t}^{t+L}|u(\tau)|d\tau \right)^2 \leq \int_{t}^{t+L} u^2(\tau)d\tau.
\] Hence for all time instants $t$: $(n+1) L \geq t-t_0\geq n L$, where $n$ is a positive integer, we have \begin{equation}\label{eq:lemma_pe:3} \begin{split} & \left(\varphi_{\max}+{\partial} u_\infty\right)\int_{t_0}^t |z(\tau)|d\tau\geq M + n \frac{\delta^2}{L} - \Delta \int_{t_0}^t|u(\tau)|d\tau\\ & \geq M + n \frac{\delta^2}{L} - (n+1)\Delta u_\infty L = (M -\Delta u_\infty L) + n \left(\frac{\delta^2}{L} - \Delta u_\infty L\right) \end{split} \end{equation} According to the requirements of the lemma, namely inequality (\ref{eq:filtered_pe_condition}), the difference ${\delta^2}/{L} - \Delta u_\infty L$ is a positive constant. Therefore, there exists $n=n'$ such that the right-hand side of (\ref{eq:lemma_pe:3}) exceeds some $\delta'= ({\delta^2}/{L} - \Delta u_\infty L)/p' \in\mathbb{R}_{>0}$, $p'\in\mathbb{R}_{>0}$. Choosing $t'$ as the smallest $t$ satisfying $t-t_0\geq n' L$ we can conclude that \begin{equation}\label{eq:lemma_pe:4} \left(\varphi_{\max}+{\partial} u_\infty\right)\int_{t_0}^{t'} |z(\tau)|d\tau \geq \delta' \end{equation} Given that we could choose the value of $t_0$ arbitrarily in the domain $\mathbb{R}_{\geq 0}$, inequality (\ref{eq:lemma_pe:4}) is equivalent to \[ \int_{t}^{t+L^\ast} |z(\tau)|d\tau \geq \delta^\ast, \] where $L^\ast = t'-t_0$, $\delta^\ast=\delta'/\left(\varphi_{\max}+{\partial} u_\infty\right)=({\delta^2}/{L} - \Delta u_\infty L)/p$, $p=p'\left(\varphi_{\max}+{\partial} u_\infty\right)$. {\it The lemma is proven.} Denoting $f_i(\xi(t),\theta_i) - f_i(\xi(t),\hat{\theta}_i^\ast)=u(t)$, $\eta_i(t)+\mu_i(t)=\eta(t)$ we can observe that equation (\ref{eq:theorem:8}) is of the same class as (\ref{eq:filtered_pe_system}) in the formulation of Lemma \ref{lem:filtered_pe}.
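The conclusion of Lemma \ref{lem:filtered_pe} can also be sanity-checked numerically. In the sketch below (our own toy instance, not the system of the theorem) we take $\varphi(t,z)=z$, the persistently exciting input $u(t)=\sin t$ (so $\delta=4$ over windows of length $L=2\pi$, and $(\delta/L)^2\approx 0.41 > \Delta u_\infty = 0.05$, satisfying (\ref{eq:filtered_pe_condition})), and a small bounded disturbance:

```python
import math

def window_integrals(dt=1e-3, n_windows=4, window=2 * math.pi, t_skip=4 * math.pi):
    """Integrate z' = -z + u(t) + eta(t) with u(t) = sin(t) (persistently
    exciting) and |eta| <= 0.05, and return the integrals of |z| over
    consecutive windows of fixed length after a transient is skipped."""
    z, t = 0.0, 0.0
    out, acc, t_mark = [], 0.0, t_skip
    while len(out) < n_windows:
        u = math.sin(t)                    # PE input
        eta = 0.05 * math.cos(3.0 * t)     # bounded disturbance
        z += dt * (-z + u + eta)
        t += dt
        if t > t_skip:
            acc += dt * abs(z)
            if t - t_mark >= window:
                out.append(acc)
                acc, t_mark = 0.0, t
    return out

# every window integral of |z| stays above a positive constant, as
# (eq:filtered_pe_result) predicts
```

Here every window integral of $|z|$ stays well above a positive constant, which is the content of (\ref{eq:filtered_pe_result}).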
Furthermore, the following inequalities hold: \begin{equation}\label{eq:theorem:10} \Delta\leq \Delta_{\eta} + D_\theta \ \mu_\infty; \ \ \|u(t)\|_{\infty,[0,\infty]} \leq D_\theta \normstwo{\theta_i}{E_i(\hat{\theta}_i^\ast)}\leq D_\theta (b-a) \end{equation} Notice that the value of $\mu_\infty$ in (\ref{eq:theorem:10}) can be made arbitrarily small because $\hat{\theta}_i(t)$ converges to a limit, and $\hat{\theta}_i^\ast$ can be chosen from its arbitrarily small vicinity. Let us therefore choose $\hat{\theta}_i^\ast$ such that $D_\theta \mu_\infty\leq \Delta_\eta$. Hence, in accordance with Lemma \ref{lem:filtered_pe}, condition \begin{equation}\label{eq:theorem:11} \left(\frac{\rho^2(\normstwo{\theta_i}{E_i(\hat{\theta}_i^\ast)})}{2 D_f L}\right)^2 > 2 \Delta_\eta D_\theta (b-a) \end{equation} implies the existence of constants $L^\ast$, $p\in\mathbb{R}_{>0}$ such that \begin{equation}\label{eq:theorem:12} \begin{split} & \int_{t}^{t+L^\ast} |e_i(\tau)|d\tau \geq \frac{1}{p} \left(\left(\frac{\rho^2(\normstwo{\theta_i}{E_i(\hat{\theta}_i^\ast)})}{2 D_f}\right)^2\frac{1}{L}- \Delta u_\infty L \right)=\delta^\ast>0 \ \ \forall t\geq t^\ast. \end{split} \end{equation} We will now show that the norm $\normstwo{\theta_i}{E_i(\hat{\theta}_i^\ast)}$ is bounded from above by a function $\varepsilon_\theta(\Delta_\eta)\in\mathcal{K}_\infty$ for all sufficiently small $\Delta_\eta$. Let us parameterize $\Delta_\eta$ as follows: \begin{equation}\label{eq:theorem:12.5} \Delta_\eta = \left(\frac{\rho^2(\varepsilon^\ast)}{2 D_f L}\right)^2 \frac{1}{2 D_\theta (b-a)}, \ \varepsilon^\ast\in\mathbb{R}_{> 0}. \end{equation} Parametrization (\ref{eq:theorem:12.5}) is always possible because $\rho(\cdot)\in\mathcal{K}_\infty$. For all $\normstwo{\theta_i}{E_i(\hat{\theta}_i^\ast)}> \varepsilon^\ast$ condition (\ref{eq:theorem:11}) is satisfied. Hence, according to Lemma \ref{lem:filtered_pe} there exist constants $L^\ast$, $p$ such that inequality (\ref{eq:theorem:12}) holds.
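Inverting (\ref{eq:theorem:12.5}) with respect to $\varepsilon^\ast$ is a purely algebraic step, and it yields exactly the $\mathcal{K}_\infty$-bound $\rho^{-1}\big((8\Delta_\eta D_\theta(b-a)D_f^2L^2)^{1/4}\big)$ used at the end of this part of the proof. A short mechanical check, with $\rho$ taken to be the identity (a legitimate $\mathcal{K}_\infty$-function) and arbitrary illustrative constants of our own choosing:

```python
def delta_eta(eps_star, D_f, L, D_theta, a, b):
    # equation (12.5) with rho = identity:
    # Delta_eta = (rho(eps*)^2 / (2 D_f L))^2 / (2 D_theta (b - a))
    return (eps_star ** 2 / (2.0 * D_f * L)) ** 2 / (2.0 * D_theta * (b - a))

def recovered_eps(d_eta, D_f, L, D_theta, a, b):
    # argument of rho^{-1} in the closing bound:
    # (8 * Delta_eta * D_theta * (b - a) * D_f^2 * L^2)^(1/4)
    return (8.0 * d_eta * D_theta * (b - a) * D_f ** 2 * L ** 2) ** 0.25

eps_star = 0.3
d = delta_eta(eps_star, D_f=2.0, L=5.0, D_theta=1.5, a=0.0, b=1.0)
# recovered_eps(d, 2.0, 5.0, 1.5, 0.0, 1.0) reproduces eps_star = 0.3
# up to floating-point error, confirming the inversion
```

The constants $D_f=2$, $L=5$, $D_\theta=1.5$, $[a,b]=[0,1]$ are illustrative only; the identity holds symbolically for any positive values.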
Given that $\delta^\ast, \ L^\ast, \ \varphi_{\min} \in\mathbb{R}_{>0}$, there will always exist a number $\Delta_\eta^\ast\in\mathbb{R}_{>0}$ such that $\Delta_\eta^\ast< (L^\ast)^{-1} \delta^\ast \varphi_{\min}/2$. This implies that for all $\Delta_\eta\leq \Delta_\eta^\ast$ the following inequality holds: \begin{equation}\label{eq:theorem:13} \begin{split} & \int_{t}^{t+L^\ast} \|e_i(\tau)\|_{\varepsilon}d\tau \geq \frac{\delta^\ast}{2}, \ \varepsilon=\frac{\Delta_\eta}{\varphi_{\min}}. \end{split} \end{equation} Let us suppose that the norm $\normstwo{\theta_i}{E_i(\hat{\theta}_i^\ast)}$ is greater than $\varepsilon^\ast$. In this case (\ref{eq:theorem:11}), (\ref{eq:theorem:13}) hold and the integral \begin{equation}\label{eq:theorem:14} \int_{t^\ast}^t \|e_i(\tau)\|_{\varepsilon}d\tau \end{equation} grows unboundedly with $t$. On the other hand, according to (\ref{eq:theorem:7}), (\ref{eq:theorem:7.5}), integral (\ref{eq:theorem:14}) is bounded. Hence we have reached a contradiction. This implies that $\normstwo{\theta_i}{E_i(\hat{\theta}_i^\ast)}\leq\varepsilon^\ast$. Given that $\rho(\cdot)\in \mathcal{K}_\infty$, the inverse $\rho^{-1}(\cdot)$ is well defined and is a $\mathcal{K}_\infty$-function. Therefore, taking (\ref{eq:theorem:12.5}) into account, we can conclude that the latter inequality is equivalent to: \begin{equation}\label{eq:theorem:15} \normstwo{\theta_i}{E_i(\hat{\theta}_i^\ast)} \leq \rho^{-1}\left( \left(8 \Delta_\eta D_\theta (b-a) D_f^2 L^2\right)^{1/4} \right) \end{equation} Thus we have just shown that there exist a point $(x',y')$ in system (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) and parameters $\gamma$ and $\varepsilon$ such that the system trajectories starting from this point converge into a small neighborhood of $E_i(\theta_i)$ in finite time for all $\hat{s}_{i,0}\in\Omega_s$ and any given $\theta_i\in[\theta_{\min},\theta_{\max}]$.
The size of this neighborhood can be characterized by a $\mathcal{K}_\infty$-function of $\Delta_{\eta}$, when $\Delta_\eta$ is sufficiently small. Let us now show that this convergence is uniform with respect to $\theta_i$. {\it 2. Uniformity.} Consider equation (\ref{eq:theorem:7.75}). According to (\ref{eq:theorem:7}), (\ref{eq:theorem:7.75}) trajectories passing through a point $(x',y')$ satisfying (\ref{eq:theorem:4.5}) at $t=0$ also satisfy the following constraint: \begin{equation}\label{eq:theorem:16} \exists k'\in \mathbb{Z}: \ \ h(0)-h(\infty)=\gamma\int_{0}^\infty\|e_i(\tau,e_i(0),\theta_i,\eta_i(\tau))\|_\varepsilon d\tau \leq {\pi - \nu_x + 2\pi k'}<\infty \ \end{equation} for all $\theta_{i}\in[\theta_{\min},\theta_{\max}]$ and $e_i(0)$. We will use this property to demonstrate that there is a point $(x', y')$, $\sqrt{x'^2+y'^2} = 1$, $\|\hat{\theta}_i(x')\|_{E_i(\theta_i)}\geq \Delta_0$, $\Delta_0\in\mathbb{R}_{>0}$, such that for any $\theta_i\in[\theta_{\min},\theta_{\max}]$ the estimate $\hat{\theta}_i(x_i(t,x',y'))$ converges into a set \begin{equation}\label{eq:theorem:17} \normstwo{\theta_i}{E_i(\hat{\theta}_i)}\leq \rho^{-1}\left( \left(8 \Delta_\eta D_\theta (b-a) D_f^2 L^2\right)^{1/4} \right) \end{equation} in finite time $T'(\theta_i)$ for all $t_0$, $\hat{s}_{i,0}\in\Omega_s$, and stays there for all $t\geq t_0 + T'(\theta_i)$. Furthermore, the value of $T'(\theta_i)$ is bounded from above for all $\theta_{i}\in[\theta_{\min},\theta_{\max}]$. In other words, there exists $T'_{\max}\in\mathbb{R}_{>0}$: \begin{equation}\label{eq:theorem:18} T'(\theta_i)\leq T'_{\max} \ \forall \ \theta_i\in[\theta_{\min},\theta_{\max}]. \end{equation} The fact that estimate $\hat{\theta}_i$ converges into a set specified by (\ref{eq:theorem:17}) in finite time $T'(\theta_i)$ and stays there for $t\geq t_0 + T'(\theta_i)$ for all $x', \ y': \ \sqrt{x'^2+y'^2}=1$ follows immediately from (\ref{eq:theorem:15}). 
We must show, however, that (\ref{eq:theorem:18}) holds. According to (\ref{eq:non-degeneracy_recognition}), (\ref{eq:theorem:prototype_parameters}) there is a point $\theta_0\in[a,b]$ such that $\|\theta_{0}\|_{E_i(\theta_i)}\geq \Delta_\theta$ for every $\theta_i\in \Omega_\theta$. Hence, there exists a point $\theta_{i,1}\in[a,b]$ such that \[ \inf_{\bar{\theta}_i\in E_i(\theta_i)\cap[a,b]}\| \bar{\theta}_i-\theta_{i,1}\|= \Delta_\theta \] Without loss of generality, suppose that the set $\Omega_1=\{\bar{\theta}_i\in E_i(\theta_i)\cap [a,b]| \ \theta_{i,1}>\bar{\theta}_i \}$ is not empty\footnote{If $\Omega_1$ is empty then $\Omega_2=\{\bar{\theta}_i\in E_i(\theta_i)\cap [a,b]| \ \theta_{i,1}<\bar{\theta}_i\}$ is not empty. We can proceed with the same argument replacing interval $[0,\pi]$ with $[\pi, 2\pi]$ and $\sup$ with $\inf$ when appropriate.}. By the symbol $\theta_{i,\max}$ we denote $\theta_{i,\max}=\sup \{\Omega_1\}$. Let us pick a point $\theta_{i,2}\in[a,b]$ according to the following constraints \begin{equation}\label{eq:theorem:18.03} \begin{split} |\theta_{i,2}-\theta_{i,1}|&=|\theta_{i,2}-\theta_{i,\max}|=\Delta_{\theta}/2, \\ & \theta_{i,1}>\theta_{i,2}>\theta_{i,\max}, \end{split} \end{equation} and choose the value of $\nu_x$ in (\ref{eq:theorem:5}) such that \[ \theta_{i,2}=a + \frac{b-a}{2}(\cos(\nu_x)+1), \ \nu_x\in[0,\pi]. \] According to (\ref{eq:theorem:5.75}) there exist $\bar{h}(\theta_{i,\max})$, $k$ such that \[ \begin{split} \theta_{i,\max}&=a + \frac{b-a}{2}(\cos(\bar{h}(\theta_{i,\max})+2\pi k)+1), \ \bar{h}(\theta_{i,\max})\in[0,\pi], \ k\in\mathbb{N}. \end{split} \] Given that $\theta_{i,2}>\theta_{i,\max}$ we set the value of $k=0$ and choose $\bar{h}(\theta_{i,\max})$ in accordance with the following inequality: \begin{equation}\label{eq:theorem:18.0625} \nu_x < \bar{h}(\theta_{i,\max}).
\end{equation} Because $ |\hat{\theta}_i(\cos(\nu_x))-\hat{\theta}_i(\cos(\nu_x'))|\leq \frac{b-a}{2}|\nu_x-\nu_x'|$ for all $\nu_x,\nu_x'\in\mathbb{R}$, conditions (\ref{eq:theorem:18.03}), (\ref{eq:theorem:18.0625}) ensure existence of a constant $\nu_x'\leq \bar{h}(\theta_{i,\max})$, $\nu_x'=\nu_x+\Delta_\theta/(2(b-a))$ such that \begin{equation}\label{eq:theorem:18.07} |\hat{\theta}_i(\cos(\nu_x))-\hat{\theta}_i(\cos(\nu_x''))|\leq \Delta_{\theta}/4 \ \ \forall \ \nu_x''\in[\nu_x,\nu_x']. \end{equation} Hence, \[ \|\hat{\theta}_i(\cos(\nu_x''))\|_{E_i(\theta_i)}\geq \frac{\Delta_{\theta}}{4} \ \ \forall \ \nu_x''\in[\nu_x,\nu_x']. \] The inequality above implies that the values of $\hat{\theta}_i(\cos(\nu_x''))$ are outside of the $\Delta_\theta/4$-neighborhood of $E_i(\theta_i)$ for all $\nu_x''\in[\nu_x,\nu_x']$. Furthermore, because $\hat{\theta}_i(\cos(\cdot))$ is monotone (non-increasing) over $[\nu_x,\bar{h}(\theta_{i,\max}))$, and $\theta_{i,2}>\theta_{i,\max}$, there are no values of $\nu_x''\in[\nu_x,\bar{h}(\theta_{i,\max}))$ such that $\|\hat{\theta}_i(\cos(\nu_x''))\|_{E_i(\theta_i)}=0$. Let us consider solutions of system (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) passing through the following point $x_i(0)=\cos(\nu_x)$, $y_i(0)=\sin(\nu_x)$, $\hat{s}_i(0)\in\Omega_s$. Suppose that $0<\gamma<\gamma^\ast$, and $\gamma^\ast$ satisfies (\ref{eq:convergence_condition:3}) with $h^\ast=\Delta_{\theta}/(2(b-a))$. Then, according to \shortcite{ArXive:Non-uniform:2006} the sum $\nu_x + \gamma \int_{0}^t \|e_i(\tau)\|_\varepsilon d\tau$ converges to a point in $[\nu_x,\bar{h}(\theta_{i,\max})]$. Taking monotonicity and continuity of function $\hat{\theta}_i(\cos(\nu_x''))$ for $\nu_x''\in[\nu_x,\bar{h}(\theta_{i,\max})]$ into account, we can conclude that trajectory $\hat{\theta}_i(x_i(t,x'(\theta_i)))$ enters the $\varepsilon^\ast$-neighborhood of $\theta_{i,\max}$ only once for all $t\in[0,\infty]$. 
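The argument above relies on two elementary properties of the reparametrization $\hat{\theta}_i(\cos(\cdot))$: the Lipschitz bound with constant $(b-a)/2$ and monotonicity on $[0,\pi]$. Both can be confirmed directly on a grid (our own check, with illustrative endpoints $a=0$, $b=1$):

```python
import math

a, b = 0.0, 1.0                                        # illustrative interval endpoints
theta_hat = lambda x: a + (b - a) / 2.0 * (x + 1.0)    # maps [-1, 1] onto [a, b]

grid = [math.pi * i / 500.0 for i in range(501)]       # nu in [0, pi]
vals = [theta_hat(math.cos(nu)) for nu in grid]

# monotone non-increasing on [0, pi], since cos is decreasing there
mono = all(v1 >= v2 for v1, v2 in zip(vals, vals[1:]))

# Lipschitz with constant (b - a)/2, since |cos u - cos v| <= |u - v|
lip = all(abs(theta_hat(math.cos(u)) - theta_hat(math.cos(v)))
          <= (b - a) / 2.0 * abs(u - v) + 1e-12
          for u in grid[::25] for v in grid[::25])
```

The endpoints confirm that $\nu=0$ is mapped to $b$ and $\nu=\pi$ to $a$, so $[0,\pi]$ covers all of $[a,b]$ exactly once, which is what makes the choice of $\nu_x$ and $\bar{h}(\theta_{i,\max})$ above possible.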
Let us show that the amount of time required for the system to enter this neighborhood is bounded from above for all $\theta_i\in\Omega_\theta$. Given that the trajectory $\hat{\theta}_i(x_i(t,x',y'))$ enters the $\varepsilon^\ast$-neighborhood of $\theta_{i,\max}$ only once, we shall show that the amount of time the system spends outside of this neighborhood is bounded from above for all $\theta_i\in\Omega_\theta$. We prove this by contradiction. Suppose that for any fixed $T'_0\in\mathbb{R}_{>0}$ there is a $\theta_i\in[\theta_{\min},\theta_{\max}]$ such that $T'(\theta_i)\geq T'_0$. Consider the dynamics of (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) when $s(t)=s_i(t,s_{i,0},\theta_i,\eta_{i}(t))$. Let us pick a sequence of time instants $\{t_j\}_{j=1}^{\infty}$, such that $t_{j+1}-t_{j}=D_T$, and $D_T\geq L^\ast$. For each interval $[t_j,t_{j+1}]$ we consider two possibilities: 1) the norm $\|\hat{\theta}_i(t_j)-\hat{\theta}_i(\tau)\|_{\infty,[t_j,t_{j+1}]}\leq \epsilon$, $\epsilon\in\mathbb{R}_{>0}$, $\epsilon\leq D_\theta^{-1} \Delta_\eta$, and 2) the norm $\|\hat{\theta}_i(t_j)-\hat{\theta}_i(\tau)\|_{\infty,[t_j,t_{j+1}]}> \epsilon$. \noindent In case the first alternative applies, according to (\ref{eq:theorem:13}) the following estimate holds: $\int_{t_j}^{t_{j+1}}\|e_i(\tau)\|_\varepsilon d\tau \geq \delta^\ast$. Hence $h(t_j)-h(t_{j+1})>\gamma \delta^\ast$. When the second alternative holds, i.e. $\|\hat{\theta}_i(t_j)-\hat{\theta}_i(\tau)\|_{\infty,[t_j,t_{j+1}]} > \epsilon$, we can conclude, using inequality (\ref{eq:theorem:6}), that \[ \|\gamma \int_{t_j}^{\tau}\|e_i(\tau_1)\|_{\varepsilon}d\tau_1\|_{\infty,[t_j,t_{j+1}]} > \epsilon \frac{2}{b-a}. \] Given that $h(t)$ is monotone with respect to $t$ we obtain that $h(t_j)-h(t_{j+1})>\epsilon 2/(b-a)$.
Thus we have shown that \[ h(t_j)-h(t_{j+1})>\min\{\gamma \delta^\ast, \epsilon 2/(b-a)\}=\Delta_h \] for all $j$ such that $\normstwo{\hat{\theta}_i(\tau)}{E_i(\theta_i)}\geq \varepsilon^\ast$ for all $\tau\in [t_j,t_{j+1}]$. Given that $h(t)$ is non-increasing and $T'$ is arbitrarily large, there will be a time instant $t_m\leq T'$ such that $\sum_{j=1}^{m} \left(h(t_{j})-h(t_{j+1})\right) \geq m \Delta_h > {\pi - \nu_x + 2\pi k'}$. This, however, contradicts (\ref{eq:theorem:16}). Hence property (\ref{eq:theorem:18}) is proven. {\it 3. Structurally stable prototype.} So far we have shown that for the given system (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) there exists a non-empty set of parameters $\gamma$, $\varepsilon$, and $x', y': \sqrt{{x'}^2+{y}'^2}=1$ such that the trajectories $x_i(t,x',y')$, $y_i(t,x',y')$ converge to a point on the unit circle in $\mathbb{R}^2$, and the variable $\hat{\theta}_i(x_i(t,x',y'))$ reaches a given small vicinity of $E_i(\theta_i)$ (see (\ref{eq:theorem:17})) within finite time $T'_{\max}$ for all $\theta_i\in[\theta_{\min},\theta_{\max}]$. Let us now consider the perturbed system (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}), where $\delta\in\mathbb{R}_{>0}$ and the initial conditions are selected in a neighborhood of $(x', y')$: \begin{equation}\label{eq:theorem:18.25} (x_i(0),y_i(0))\in\Omega(x',y')=\{(x,y)\in\mathbb{R}^2| \sqrt{(x-x')^2+(y-y')^2}\leq \delta_r \}, \ \delta_r\in\mathbb{R}_{>0}. \end{equation} In order to distinguish solutions of (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) from the solutions of the unperturbed system (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}), we denote the latter by the symbols $x_i^\ast(t,x_i(0),y_i(0))$, $y_i^\ast(t,x_i(0),y_i(0))$, and $\hat{s}_i^\ast(t,\theta_i,s_{i,0},\eta_i(t))$.
For the sake of notational compactness we also denote the state vector of the $i$-th subsystem of (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) as $\mathbf{q}_i^\ast=(\hat{s}_i^\ast,x_i^\ast,y_i^\ast)$, and the state vector of the $i$-th subsystem of (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) as $\mathbf{q}_i$. Solutions of (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) are bounded: \begin{equation}\label{eq:theorem:18.5} \begin{split} \|\hat{s}_i(t,\hat{s}_{i,0},\eta_i(t))\|_{\infty,[0,\infty]}&\leq |\hat{s}_{i,0}|+ (\max\{|a|,|b|\} D_{\theta}+\Delta_\eta)/\varphi_{\min},\\ \|x_i(t,x_i(0),y_i(0))\|_{\infty,[0,\infty]}&\leq \max\{1,\sqrt{x_i(0)^2+y_i(0)^2}\},\\ \|y_i(t,x_i(0),y_i(0))\|_{\infty,[0,\infty]}&\leq \max\{1,\sqrt{x_i(0)^2+y_i(0)^2}\}. \end{split} \end{equation} Hence for all $\hat{s}_{i}(0), x_i(0), y_i(0)\in \Omega_s\times\Omega(x',y')$ there exists a constant $D_0$ such that $\|\mathbf{q}_i(t)\|_{\infty,[0,\infty]}\leq D_0$ for all $\theta_i$. 
Let us rewrite (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) as follows: \begin{equation}\label{eq:theorem:18.75} \begin{split} \dot{\hat{s}}_i&=-\varphi_i(\hat{s}_i)+f_i(\xi(t),\hat{\theta}_i(x_i))\\ \dot{x}_i&=\gamma \|\hat{s}_i-s\|_{\varepsilon} \left(x_i-y_i-x_i(x_i^2+y_i^2)\right) + \gamma \delta \cdot \varepsilon_x(x_i,y_i) \\ \dot{y}_i&=\gamma \|\hat{s}_i-s\|_{\varepsilon}\left(x_i+y_i-y_i(x_i^2+y_i^2)\right) + \gamma \delta \cdot \varepsilon_y(x_i,y_i), \end{split} \end{equation} where \[ \begin{split} \varepsilon_{x}(x_i(t),y_i(t))&=x_i(t)-y_i(t)-x_i(t)(x_i^2(t)+y_i^2(t)); \\ \varepsilon_{y}(x_i(t),y_i(t))&=x_i(t)+y_i(t)-y_i(t)(x_i^2(t)+y_i^2(t)) \end{split} \] The right-hand side of (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) is locally Lipschitz in $\hat{s}_i$, $x_i$, $y_i$ (and so is the right-hand side of (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2})). We denote its corresponding Lipschitz constant in the domain specified by (\ref{eq:theorem:18.5}) by symbol $L_i(D_0)$. Furthermore, provided that (\ref{eq:theorem:18.5}) holds, $\varepsilon_x(x_i(t),y_i(t))$, $\varepsilon_y(x_i(t),y_i(t))$ are globally bounded with respect to $t$. Let us denote this bound by symbol $B$: \[ \max \left\{ \|\varepsilon_x(x_i(t),y_i(t))\|_{\infty,[0,\infty]}, \|\varepsilon_y(x_i(t),y_i(t))\|_{\infty,[0,\infty]} \right\} = B \] For the sake of notational compactness let us rewrite (\ref{eq:theorem:18.75}) as follows: \begin{equation}\label{eq:theorem:18.875} \dot{\mathbf{q}}_i=\mathbf{f}(\mathbf{q}_i,s(t),\xi(t))+ \gamma \delta \cdot \mathbf{g}(\mathbf{q}_i), \end{equation} where $\mathbf{f}(\mathbf{q}_i,s(t),\xi(t))$ and $\mathbf{g}(\mathbf{q}_i)$ are defined to copy the right-hand side of (\ref{eq:theorem:18.75}). Notice that $\|\mathbf{f}(\mathbf{q}_i,s(t),\xi(t))\|\leq L_i(D_0) \|\mathbf{q}_i\|$, $\|\mathbf{g}(\mathbf{q}_i)\|\leq B\sqrt{2}$. 
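Bounds of this form feed into a standard Gr\"onwall-type continuous-dependence estimate. The shape of such estimates can be illustrated on a scalar toy ODE $\dot q=\sin(q)$ (Lipschitz constant $1$), entirely unrelated to the prototype itself; the perturbed copy carries an additive term of size $d$, and the deviation stays under the exponential envelope at every step:

```python
import math

def deviation_vs_bound(q0=0.5, dq0=0.01, d=0.02, Lc=1.0, dt=1e-4, t_end=3.0):
    """Integrate q' = sin(q) and its perturbed copy q' = sin(q) + d from
    nearby initial conditions, and verify the Gronwall-type bound
        |q_pert(t) - q(t)| <= |dq0| e^{Lc t} + (d/Lc)(e^{Lc t} - 1)
    at every step (Lc = 1 is the Lipschitz constant of sin)."""
    q, qp, t = q0, q0 + dq0, 0.0
    ok = True
    while t < t_end:
        q += dt * math.sin(q)
        qp += dt * (math.sin(qp) + d)
        t += dt
        bound = abs(dq0) * math.exp(Lc * t) + (d / Lc) * (math.exp(Lc * t) - 1.0)
        ok = ok and (abs(qp - q) <= bound + 1e-9)
    return ok
```

The same mechanism, with $L_i(D_0)$ in place of the scalar Lipschitz constant and $\delta\gamma B\sqrt{2}$ in place of $d$, produces the estimate invoked next in the proof.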
According to the theorem on continuous dependence of the solutions of an ODE on parameters and initial conditions (see, for instance, \shortcite{Khalil:2002}, Theorem 3.4, page 96) the following holds: \begin{equation}\label{eq:theorem:19} \|\mathbf{q}_i(t)-\mathbf{q}_i^\ast(t)\|\leq \|\mathbf{q}_i(t_0)-\mathbf{q}_i^\ast (t_0)\|e^{L_i(D_0)(t-t_0)} + \frac{\delta \gamma B \sqrt{2}}{L_i(D_0)}\left(e^{L_i(D_0)(t-t_0)}-1\right). \end{equation} When the values of $\hat{s}_{i,0}$ and $\hat{s}_{i,0}^\ast$ coincide, estimate (\ref{eq:theorem:19}) implies that \begin{equation}\label{eq:theorem:19.5} \|\mathbf{q}_i(t)-\mathbf{q}_i^\ast(t)\|\leq \delta_r e^{L_i(D_0)(t-t_0)} + \frac{\delta \gamma B\sqrt{2}}{L_i(D_0)}\left(e^{L_i(D_0)(t-t_0)}-1\right). \end{equation} This ensures the existence of $\delta_r\in\mathbb{R}_{>0}$, $\delta\in\mathbb{R}_{>0}$ such that for a fixed, yet arbitrarily large, time $T''(\delta_r,\delta)>T'_{\max}$ solutions of system (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) passing through a point from $\Omega(x',y')$ at $t=t_0$ will remain within a fixed, yet arbitrarily small, neighborhood of a solution of system (\ref{eq:convergence_prototype})--(\ref{eq:convergence_prototype:2}) with initial conditions $x_i(t_0)=x'$, $y_i(t_0)=y'$. The value of $T_{\max}'$ does not depend on $\delta_r$, $\delta$. Taking (\ref{eq:theorem:5.5}) into account, we can conclude that the set $x_i^2+y_i^2=1$ is globally attracting in the state space of system (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) for almost all initial conditions (except when $x_i(t_0)=0$, $y_i(t_0)=0$). This implies that solutions starting in $\Omega(x',y')$ will remain there. In addition, according to (\ref{eq:theorem:5}), for any $t_0\geq 0$ a $\delta_r$-vicinity of $(x',y')$ will be visited at some time $t'\leq t_0 + 2\pi/(\gamma \cdot \delta)$.
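The last claim reflects the fact that on the unit circle the $\gamma\delta$-term alone rotates the state with angular velocity $\gamma\delta$, so the nominal return period is $2\pi/(\gamma\delta)$. A toy check (our own, with $\|\hat{s}_i-s\|_\varepsilon$ set to zero so that only the perturbation term drives the rotation, and the combined gain $\gamma\delta$ normalized to $1$):

```python
import math

def revisit_error(gd=1.0, dt=1e-4):
    """Integrate x' = gd*(x - y - x r^2), y' = gd*(x + y - y r^2) from (1, 0)
    over one nominal period 2*pi/gd and return the distance back to the
    starting point (gd plays the role of gamma*delta)."""
    x, y, t = 1.0, 0.0, 0.0
    period = 2.0 * math.pi / gd
    while t < period:
        r2 = x * x + y * y
        x, y = (x + dt * gd * (x - y - x * r2),
                y + dt * gd * (x + y - y * r2))
        t += dt
    return math.hypot(x - 1.0, y)
```

After one period the trajectory returns to a small neighborhood of its starting point, so any $\delta_r$-vicinity of $(x',y')$ on the circle is indeed revisited within $2\pi/(\gamma\delta)$.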
Hence we have just shown that for all $t_0\geq0$ solutions starting in $\Omega_s\times \Omega(x',y')$ approach the target set within a fixed time $T_{\max}'$ and stay in its vicinity for an arbitrarily long time $T''(\delta_r,\delta)$. The latter time is a function of $\delta_r$, $\delta$: the smaller the values of $\delta_r$, $\delta$, the larger the value of $T''(\delta_r,\delta)$. {\it 4. Realizability.} Let us finally show that system (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) can be realized by a recurrent neural network. More precisely, we wish to prove that there exists a system (\ref{eq:RNN_recurrent}) such that $\mathbf{x}=\zetavec_1\oplus\zetavec_2\oplus\cdots\oplus\zetavec_{N_f}$, $\zetavec_i\in\mathbb{R}^3$, $\zetavec_{i}=\zeta_{i,1}\oplus\zeta_{i,2}\oplus\zeta_{i,3}$, $i=\{1,\dots,N_f\}$ and solutions $\zetavec_i(t,\mathbf{q}_{i,0})$ are sufficiently close to $\mathbf{q}_i(t,\mathbf{q}_{i,0})$, where $\mathbf{q}_{i,0}\in \Omega_s\times\Omega(x',y')\subset\mathbb{R}^3$. It is clear that the right-hand side of (\ref{eq:convergence_prototype_struct_stable}), (\ref{eq:convergence_prototype_struct_stable:2}) is a continuous and locally Lipschitz function. To proceed further we use the following result by Cybenko \shortcite{Cybenko}: \begin{thm}[Cybenko, 1989]\label{theorem:Cybenko} Let $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ be any continuous sigmoid-type function. Then finite sums of the form \[ G(\zetavec)=\sum_{j=1}^N \alpha_j \sigma (\omegavec_j^T \zetavec + \beta_j), \ \zetavec\in\mathbb{R}^m, \ \omegavec_j\in\mathbb{R}^m, \ \alpha_j, \ \beta_j\in\mathbb{R} \] are dense in $\mathcal{C}([0,1]^m)$.
\end{thm} According to Theorem \ref{theorem:Cybenko}, for any arbitrarily small $\varepsilon_N\in\mathbb{R}_{>0}$, any given bounded intervals $\Omega_x\subset\mathbb{R}$, $\Omega_y\subset\mathbb{R}$, and any \[ s(t),\xi(t): \ \max\{\|s(t)\|_{\infty,[0,\infty]}, \|\xi(t)\|_{\infty,[0,\infty]}\}< M, \ M\in\mathbb{R}_{>0}, \] there exist $N\in\mathbb{N}$, $\omegavec_{j}\in\mathbb{R}^5$, $\alpha_{j}\in\mathbb{R}$, $\beta_{j}\in\mathbb{R}$, $j=1,2,\dots,N$ such that \begin{equation}\label{eq:theorem:20} \left|\sum_{j=1}^N \alpha_j \sigma (\omegavec_j^T \cdot (\xi(t)\oplus s(t)\oplus\zetavec_i ) + \beta_j) - \mathbf{f}(\zetavec_i,s(t),\xi(t)) - \gamma \delta \cdot \mathbf{g}(\zetavec_i)\right|<\varepsilon_N, \end{equation} where $\zetavec_i\in\Omega_s\times\Omega_x\times\Omega_y$. It follows from (\ref{eq:theorem:20}) that there exist $N$, $\omegavec_j$, $\alpha_j$, $\beta_j$ such that \begin{equation}\label{eq:theorem:20.5} \sum_{j=1}^N \alpha_j \sigma (\omegavec_j^T \cdot (\xi(t)\oplus s(t)\oplus\zetavec_i ) + \beta_j) = \mathbf{f}(\zetavec_i,s(t),\xi(t)) + \gamma \delta \cdot \mathbf{g}(\zetavec_i) + \Delta(\zetavec_i,s(t),\xi(t)), \end{equation} where $\Delta(\zetavec_i,s(t),\xi(t))$ is continuous and \[ |\Delta(\zetavec_i,s(t),\xi(t))| < \varepsilon_N. \] Let us choose $\Omega_x=[-v,v]$, $\Omega_y=[-v,v]$, where $v\in\mathbb{R}_{>0}$, $v > 1$, and consider the dynamics of \begin{equation}\label{eq:theorem:21} \dot{\zetavec}_i=\mathbf{f}(\zetavec_i,s(t),\xi(t))+\gamma \delta \cdot \mathbf{g}(\zetavec_i) + \Delta(\zetavec_i,s(t),\xi(t)). \end{equation} System (\ref{eq:theorem:21}) has a globally attracting invariant set (for almost all initial conditions) which can be characterized as follows \[ \{\zetavec_i\in\mathbb{R}^3| \ 1-\rho(\varepsilon_N)\leq {\zeta_{i,2}^2+\zeta_{i,3}^2}\leq 1+\rho(\varepsilon_N)\}, \ \rho\in\mathcal{K}_\infty.
\] This follows immediately from the fact that (\ref{eq:theorem:18.875}) is structurally stable and has a globally attracting invariant set (for almost all initial conditions). Furthermore, for any given $\varepsilon_N$ and a bounded set of initial conditions $\Omega_\zeta (r)=\{\zetavec_i\in\mathbb{R}^3| \ \|\zetavec_i\|\leq r, \ r\in\mathbb{R}_{>0}\}$ there exists a constant $B_1$ such that $\|\zetavec_i(t)\|_{\infty,[0,\infty]}<B_1$. Hence solutions of the system \begin{equation}\label{eq:theorem:22} \dot{\zetavec}_i=\sum_{j=1}^N \alpha_j \sigma (\omegavec_j^T \cdot (\xi(t)\oplus s(t)\oplus \zetavec_i) + \beta_j) \end{equation} are bounded for all initial conditions from $\Omega_\zeta (r)$ provided that inequality (\ref{eq:theorem:20}) holds over sufficiently large intervals $\Omega_x$, $\Omega_y$ (for sufficiently large $v$). Furthermore, given that $\varepsilon_N$ is sufficiently small, solutions of (\ref{eq:theorem:22}) enter the domain $\Omega_s\times\Omega(x',y')$ specified by (\ref{eq:theorem:18.25}) in finite time. Finally, according to equality (\ref{eq:theorem:20.5}) and Theorem 3.4 in \shortcite{Khalil:2002}, solutions of (\ref{eq:theorem:22}) starting in $\Omega(x',y')$ satisfy the following inequality: \begin{equation}\label{eq:theorem:23} \|\mathbf{q}_i(t,\mathbf{q}_{i,0})-\zetavec_i(t,\mathbf{q}_{i,0})\|\leq \frac{\varepsilon_N}{L_i(D_0)}\left(e^{L_i(D_0)(t-t_0)}-1\right), \ \mathbf{q}_{i,0}\in\Omega_s\times\Omega(x',y'). \end{equation} Hence, for any $t\geq 0$, solutions of (\ref{eq:theorem:22}) starting from $\Omega_\zeta (r)$ approach the target set within a fixed time (dependent on $\delta$) and stay in its vicinity arbitrarily long provided that $\delta$ and $\varepsilon_N$ are sufficiently small. The possibility of the latter follows from Theorem \ref{theorem:Cybenko}.
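Theorem \ref{theorem:Cybenko} is purely existential. For intuition, a constructive one-dimensional instance of such a sigmoidal sum (our own illustration, unrelated to the actual network realizing (\ref{eq:theorem:22})) is the classical telescoping-step construction: steep logistic units are stacked so that their weighted sum traces the increments of the target function.

```python
import math

def sigmoid(t):
    # logistic sigmoid with an underflow guard for large negative arguments
    return 1.0 / (1.0 + math.exp(-t)) if t > -60.0 else 0.0

def step_sum(f, n=200, k=2000.0, lo=-1.0, hi=1.0):
    """Return G(x) = f(lo) + sum_j alpha_j * sigmoid(k*(x - m_j)), the
    telescoping-step approximant of f on [lo, hi]: alpha_j is the increment
    of f over the j-th cell and m_j is the cell midpoint."""
    h = (hi - lo) / n
    terms = [(f(lo + j * h) - f(lo + (j - 1) * h),   # alpha_j
              lo + (j - 0.5) * h)                    # midpoint m_j
             for j in range(1, n + 1)]
    f_lo = f(lo)
    return lambda x: f_lo + sum(a * sigmoid(k * (x - m)) for a, m in terms)

f = lambda x: x - x ** 3           # smooth target on [-1, 1]
G = step_sum(f)
grid = [-1.0 + 0.002 * i for i in range(1001)]
sup_err = max(abs(f(x) - G(x)) for x in grid)
# with n = 200 steep units the sup-error on the grid is small; Theorem
# (Cybenko) guarantees it can be driven arbitrarily low by increasing n
```

The construction is inefficient compared to what the density theorem promises, but it makes the mechanism behind (\ref{eq:theorem:20}) concrete.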
Taking (\ref{eq:theorem:23}), (\ref{eq:theorem:19.5}), (\ref{eq:theorem:1}), (\ref{eq:convergence_prototype_struct_stable}) into account, we conclude the proof by choosing $h_{f,i}(\mathbf{x},s)$, $h_{\theta,i}(\mathbf{x},s)$ as follows: \begin{equation}\label{eq:theorem:24} \begin{split} h_{f,i}(\mathbf{x},s)&=h_{f,i}(\zetavec_1\oplus\cdots\oplus\zetavec_{N_f},s)=s-\zeta_{i,1},\\ h_{\theta,i}(\mathbf{x},s)&=h_{\theta,i}(\zetavec_1\oplus\cdots\oplus\zetavec_{N_f},s)=a + \frac{b-a}{2}(\zeta_{i,2}+1). \end{split} \end{equation} {\it The theorem is proven.} \noindent Before concluding this section we would like to provide several remarks regarding Theorem \ref{theorem:RNN_existence}. \begin{rem}[Read-out from the outputs]\normalfont As follows from the theorem, the class of the signal $s(t)=s_i(t,s_{i,0},\theta_i,\eta_i(t))$, i.e. the parameter $i$, can be inferred from the values of $h_{f,j}(\mathbf{x}(t),s(t))$, $j=\{1,\dots,N_f\}$ within a finite interval of time. The values of $h_{f,i}(\mathbf{x}(t),s(t))$ should approach a small neighborhood of zero and stay there for a sufficiently long time. The estimate of $\theta_i$, up to its equivalence class, is available from the values of $h_{\theta,i}(\mathbf{x}(t),s(t))$ over the same interval. From a practical viewpoint, however, it is preferable to read out from the RNN outputs explicitly, rather than having to satisfy ourselves with the existence of two sets of read-out functions, for state and input, respectively, of the RNN. Even though this option is not stated explicitly in Theorem \ref{theorem:RNN_existence}, it can easily be shown that the preferred option can, indeed, be realized. Adding to the recurrent subsystem (\ref{eq:RNN_recurrent}) a {\it feed-forward} part realizing the continuous ``output'' functions (\ref{eq:theorem:24}) enables explicit read-out from the RNN outputs.
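The feed-forward read-out invoked above rests on the same sigmoidal approximation property as Theorem \ref{theorem:Cybenko}. A minimal sketch, fitting only the output weights $\alpha_j$ of a sum of sigmoids by least squares for randomly drawn $\omegavec_j$, $\beta_j$ (the target function and all numerical values are illustrative):

```python
import numpy as np

# Sketch of a one-layer feed-forward read-out sum_j a_j * sigma(w_j*x + b_j)
# fitted by least squares to a continuous target on a compact interval,
# illustrating the Cybenko-type approximation used in the construction.
rng = np.random.default_rng(0)
N = 200                                    # number of sigmoidal units
x = np.linspace(-3.0, 3.0, 400)
target = np.sin(x)                         # any continuous "output" function
w = rng.normal(0.0, 2.0, N)                # random input weights
b = rng.uniform(-6.0, 6.0, N)              # random biases
Phi = np.tanh(np.outer(x, w) + b)          # sigmoidal features, shape (400, N)
a, *_ = np.linalg.lstsq(Phi, target, rcond=None)   # output weights alpha_j
err = np.max(np.abs(Phi @ a - target))
print(err)   # uniform error on the grid; small for large enough N
```

Only the linear read-out layer is trained here; the random hidden weights are fixed, mirroring the fixed-weight setting of the theorem.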
\end{rem} \begin{rem}[Convergence to an attractor]\normalfont Theorem \ref{theorem:RNN_existence} does not imply that recognition of a class of the input signal $s(t)$ involves convergence of the RNN state to an attractor. Yet its formulation does not exclude this option either. In fact, when $f_i(\xi(t),\theta_i)$ satisfies some additional restrictions (e.g. linear or monotone parametrization with respect to $\theta_i$), it is possible to replace (\ref{eq:convergence_prototype:1}), (\ref{eq:convergence_prototype:2}) with another prototype system: one that converges to a point attractor exponentially \shortcite{IEEE_TAC_2007}. This implies that whether the state of a network will behave intermittently or asymptotically converge to an attractor depends substantially on the properties of $f_i(\xi(t),\theta_i)$. It is important, however, that in both cases the recognition problem will be successfully solved by an RNN. \end{rem} \begin{rem}[Multidimensional uncertainty]\normalfont Even though the theorem applies to the case where $\theta_i$ is a scalar, it can be trivially extended to the case where uncertain parameters are vectors from a bounded domain $\Omega_{\theta,d}\subset\mathbb{R}^d$. To do so, one needs to find a Lipschitz mapping $\lambdavec:\mathbb{R}\rightarrow\mathbb{R}^d$ such that for a given small $\varepsilon_{\lambda}\in\mathbb{R}_{>0}$ the following property holds: \[ \forall \ \thetavec_i\in\Omega_{\theta,d} \ \exists \ \theta_i\in\Omega_{\theta}: \ \|\thetavec_i-\lambdavec(\theta_i)\|<\varepsilon_\lambda. \] Hence the problem reduces to the scalar case, to which Theorem \ref{theorem:RNN_existence} applies. \end{rem} \section{Conclusion}\label{Conclusion} We provided a theoretical answer to the important question of why an RNN with fixed weights can serve as a universal adaptive classifier of both static and dynamic inputs.
In addition to providing an existence proof, we have shown that the number of dynamical states in an RNN recognizing $n$ different signals $s_i(t)$ can be as small as $3 n$, i.e. it grows linearly with the size of the set of uncertain signals to be classified. We stated the classification and recognition problems in a behavioral context in which, over time, the desired input-output relationship is achieved. Finding a solution corresponds to a network dynamics in which the state reaches a given neighborhood of the a priori specified set and stays there for a sufficiently long time, provided that the input to the network belongs to a given class (Problem \ref{problem:RNN_existence}). With these ramifications, RNNs solve the problem of adaptively classifying time-dependent signals. We did not set out to guarantee, however, that the state of the RNN will asymptotically converge to an equilibrium or its small vicinity as a result of recognition. On the other hand, the amount of time a network would spend in the vicinity of a target set can be made sufficiently large to qualify as a practical solution to the classification problem. For classification, after all, asymptotic convergence is not needed. In physics and nonlinear dynamics the phenomenon in which the state of a system reaches a neighborhood of a set and stays there for a long time, yet inevitably escapes -- only to be caught again -- is called (chaotic) itinerancy \shortcite{Chaos:Kaneko:2003}; the set is referred to as an attractor-ruin. These descriptive concepts are currently recognized as a possible mathematical basis for modeling brain activity \shortcite{Tsuda:1991,Tsuda:2004}. We envisage that our current result supports this idea, by showing the considerable power of these systems to perform adaptive classification. \bibliographystyle{apacite}
\section{Introduction} Landau originally derived his celebrated theory of Fermi liquids~\cite{Landau1,*Landau2} assuming that the non-interacting many-body excited states evolve adiabatically into the fully interacting ones upon gradually turning on the interaction. The theory was later derived microscopically~\cite{Nozieres&Luttinger-1,*Nozieres&Luttinger-2} by means of the newly developed diagrammatic many-body perturbation theory~\cite{Luttinger&Ward,Luttinger-1}. A famous by-product of the diagrammatic formalism is the so-called Luttinger theorem~\cite{Luttinger}, which, in conventional Landau Fermi liquids, predicts that the volume fraction enclosed by the quasiparticles' Fermi surface is just the electron filling fraction. Over the years, Landau's adiabatic hypothesis and Luttinger's theorem have become almost synonymous, in the sense that if one is violated, so is the other. Such a belief has been mostly triggered by the anomalous properties of many strongly correlated materials, especially underdoped copper-oxide superconductors.\\ However, the traditional derivation~\cite{Luttinger,AG&D,Langer-PR1961,Langreth-PRB1975} of Luttinger's theorem simply relies on the existence of a Luttinger-Ward functional~\cite{Luttinger&Ward}, which can be constructed non-perturbatively~\cite{Potthoff-2006}. Therefore, it is not at all evident why Luttinger's theorem should be violated at the breakdown of perturbation theory, as it is likewise not true that Landau's Fermi liquid theory applies only in the perturbative regime~\cite{mio,mio-2}.\\ The topological arguments by Oshikawa~\cite{Oshikawa-PRL2000} in periodic models clarify Luttinger's theorem violation in non-trivial examples that host fractionalised excitations~\cite{Fractionalised-Senthil-PRL2003,Senthil-PRB2004} or topological order~\cite{Vishwanath-PRB2004,Hastings_2005}, but do not allow one to identify at which point the traditional proof may go wrong.
Moreover, they are not instructive in non-magnetic Mott insulators at an integer number of electrons per site, like the model discussed by Rosch~\cite{Rosh-2007}, where Luttinger's theorem does not yield the correct electron number, nor in models that lack translation symmetry, like quantum impurity models, where Luttinger's theorem is still applicable and can be violated~\cite{Hewson-2018}. \\ The detailed analysis of Heath and Bedell~\cite{Heath_2020} highlights which properties the self-energy must possess for Luttinger's theorem to hold true, even in non-periodic models. However, it leaves open the question of how to count the number of particles when Luttinger's theorem is violated. \\ Indeed, there are by now several examples of Luttinger's theorem violation, see, e.g., Refs.~\cite{Altshuler-EPL1998,Georges-PRB2001,Fractionalised-Senthil-PRL2003,Senthil-PRB2004,Vishwanath-PRB2004,Rosh-2007,Phillips-PRL2013,Hewson-2018}. In addition, there is numerical evidence that Luttinger's theorem fails in models of doped Mott insulators below a critical doping~\cite{Kotliar-PRB2006,Prelovsek-PRB2007,Becca-PRB2012,Georges-PNAS2018,Georges-PRX2018,Trivedi-PRB2021} that seems to be associated with the birth of a Luttinger surface~\cite{Igor-PRB2003}, which, according to Ref.~\cite{Heath_2020}, does violate the requirements for Luttinger's theorem validity. \\ In this work, we try to shed further light on such a fundamental issue, beyond the great progress that has already been made~\cite{Fractionalised-Senthil-PRL2003,Vishwanath-PRB2004,Heath_2020,Else-PRX2021,Wen-PRB2021}. We do so by paying particular attention to the role of Luttinger surfaces, or, more generally, to the zeros of the single-particle Green's function at zero imaginary frequency, a concept that does not require translation invariance.
\section{Luttinger's theorem} \label{Luttinger's theorem} We start by deriving Luttinger's theorem in a slightly different way from the conventional one~\cite{Luttinger,AG&D}, somewhat closer to Refs.~\cite{Langer-PR1961,Langreth-PRB1975}, which better highlights under which circumstances that theorem may fail. Moreover, the derivation below, though based on old-fashioned many-body theory, naturally leads to the concept of 'quasiparticles', and their Fermi or Luttinger surfaces~\cite{mio-2}. \\ \noindent We consider a system of interacting electrons with annihilation operators $c^{\phantom{\dagger}}_\alpha$ corresponding to a complete basis of single-particle wavefunctions labelled by $\alpha=1,\dots,K$, with $K\to\infty$ in the thermodynamic limit. The Hamiltonian admits a set of conserved quantities $Q$, represented by hermitian matrices $\hat{Q}$ with components $Q_{\alpha\beta}$ defined in such a way that the eigenvalues are integers. $Q_{\alpha\beta}=\delta_{\alpha\beta}$ corresponds to the total number $N$ of electrons, while all other independent $Q$'s are represented by traceless matrices $\hat Q$. We hereafter consider the evolution of the operators in imaginary time and use the Matsubara formalism, which is more convenient~\cite{Igor-PRB2003} since on the imaginary frequency axis the single-particle Green's function and self-energy cannot have singularities except, possibly, at the origin.
Moreover, to avoid any issue related to the discontinuity at zero imaginary time of the Green's functions, we use instead of $N$ the deviation $N-K/2$ of the electron number with respect to half-filling, so that we can write the expectation value of any conserved quantity as \begin{eqs} Q &= \fract{1}{2}\,\sum_{\alpha\beta}\, Q_{\beta\alpha}\,\Big( \langle\,c^\dagger_\beta\,c^{\phantom{\dagger}}_\alpha\,\rangle - \langle\,c^{\phantom{\dagger}}_\alpha\,c^\dagger_\beta\,\rangle\Big)\\ &= T\,\sum_{n}\,\mathrm{Tr}\Big(\hat{G}(i{\epsilon}_n)\,\hat{Q}\Big)\,,\label{Q1} \end{eqs} where $\hat{G}(i{\epsilon}_n)=\hat{G}(-i{\epsilon}_n)^\dagger$ is the Green's function matrix in Matsubara frequencies ${\epsilon}_n=(2n+1)\,\pi T$. According to Dyson's equation, \begin{eqs} \hat{G}^{-1}(i{\epsilon}_n) &= i{\epsilon}_n\,\hat{I} -\hat{H}_0 -\hat{\Sigma}(i{\epsilon}_n)\,,\label{Dyson} \end{eqs} with $\hat{I}$ the identity matrix, and $\hat{H}_0$ the non-interacting Hamiltonian, including the chemical potential term, represented in the chosen basis. $\hat{\Sigma}(i{\epsilon}_n)=\hat{\Sigma}(-i{\epsilon}_n)^\dagger$ is the self-energy matrix that accounts for all interaction effects. We can equivalently write Eq.~\eqn{Q1} as \begin{eqs} Q &= -T\sum_n \fract{\partial}{\partial i{\epsilon}_n}\, \mathrm{Tr}\Big(\ln\hat{G}(i{\epsilon}_n)\;\hat{Q}\Big) + I_L(Q)\,,\label{Q} \end{eqs} where \begin{eqs} I_L(Q) &= T\sum_n\,\mathrm{Tr}\bigg(\hat{G}(i{\epsilon}_n)\,\fract{\partial\hat{\Sigma}(i{\epsilon}_n)} {\partial i{\epsilon}_n}\;\hat{Q}\bigg) \,.\label{I_L} \end{eqs} Hereafter, we denote Eq.~\eqn{I_L} as the Luttinger integral for the conserved quantity $Q$, and use simply $I_L$ for the case $\hat{Q}=\hat{I}$. \\ We just note that at particle-hole symmetry $I_L(Q)$ vanishes identically for all non particle-hole invariant $Q$'s, thus also the total electron number, in which case Luttinger's theorem holds trivially. 
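As a minimal numerical check of Eq.~\eqn{Q1}, consider non-interacting fermions, where $\hat{G}$ is diagonal with elements $1/(i{\epsilon}_n-\xi_k)$: measuring $N$ from half filling pairs each ${\epsilon}_n$ with $-{\epsilon}_n$ and makes the Matsubara sum converge without any convergence factor (the levels $\xi_k$ and the temperature below are illustrative):

```python
import numpy as np

# Free-fermion check of Eq. (Q1): measuring N relative to half filling,
# T * sum_n G(i eps_n) converges once +eps_n and -eps_n are paired,
#   1/(i e - xi) + 1/(-i e - xi) = -2 xi / (e^2 + xi^2),
# and reproduces sum_k (f(xi_k) - 1/2) with the Fermi function f.
T = 0.1
xi = np.array([-0.7, -0.2, 0.3, 1.1])      # single-particle levels minus mu
n = np.arange(0, 200000)
eps = (2 * n + 1) * np.pi * T              # positive Matsubara frequencies
Q = sum(T * np.sum(-2 * xk / (eps**2 + xk**2)) for xk in xi)
exact = np.sum(1.0 / (np.exp(xi / T) + 1.0) - 0.5)
print(Q, exact)   # the truncated sum matches the Fermi-function result
```

The truncation error of the paired sum falls off as the inverse cutoff, so a modest number of frequencies already gives high accuracy.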
Similarly, $I_L(Q)=0$ in the absence of interaction, where $\hat{\Sigma}(i{\epsilon}_n)=0$.\\ In more general circumstances, we consider the Luttinger-Ward functional $\Phi[G]$ satisfying~\cite{Luttinger&Ward,Potthoff-2006} \begin{eqs} \delta\Phi[G] &= T\,\sum_n\,\esp{i{\epsilon}_n\eta}\, \mathrm{Tr}\Big(\hat{\Sigma}(i{\epsilon}_n)\,\delta\hat{G}(i{\epsilon}_n)\Big)\,,\label{Phi[G]-1} \end{eqs} where $\eta>0$ must be sent to zero after performing the summation. In perturbation theory, the explicit expression of $\Phi[G]$ reads~\cite{Luttinger&Ward} \begin{eqs} \Phi[G] &= T\sum_n\,\esp{i{\epsilon}_n\eta}\, \sum_{m\geq 1}\fract{1}{2m}\,\mathrm{Tr}\Big(\hat{G}(i{\epsilon}_n)\,\hat{\Sigma}^{(m)}(i{\epsilon}_n)\Big)\\ &\equiv T\sum_n\,\esp{i{\epsilon}_n\eta}\,\Phi(i{\epsilon}_n)\,,\label{Phi[G]-2} \end{eqs} where $\hat{\Sigma}^{(m)}(i{\epsilon}_n)$ is the sum of all $m$-th order skeleton diagrams. We assume that the non-perturbative $\Phi[G]$~\cite{Potthoff-2006} can still be written as a series of terms $\Phi(i{\epsilon}_n)$ as in Eq.~\eqn{Phi[G]-2}. From Eqs.~\eqn{Phi[G]-1} and \eqn{Phi[G]-2} it readily follows that \begin{eqs} \fract{\delta\Phi[G]}{\delta i{\epsilon}}&\equiv T\,\sum_n\, \mathrm{Tr}\left(\hat{\Sigma}(i{\epsilon}_n)\,\fract{\partial\hat{G}(i{\epsilon}_n)}{\partial i{\epsilon}_n}\right)\\ &= T\sum_n\,\fract{\partial\Phi(i{\epsilon}_n)}{\partial i{\epsilon}_n}\;,\label{Phi[G]-3} \end{eqs} where we set $\eta=0$ before performing the sum since the function decays faster than $1/{\epsilon}_n$ for ${\epsilon}_n\to\pm\infty$.
Eq.~\eqn{Phi[G]-3} allows us to rewrite $I_L(Q)$ of Eq.~\eqn{I_L} for $\hat{Q}=\hat{I}$ simply as \begin{eqs} I_L &= T\sum_n\,\mathrm{Tr}\bigg(\hat{G}(i{\epsilon}_n)\,\fract{\partial\hat{\Sigma}(i{\epsilon}_n)} {\partial i{\epsilon}_n}\bigg)\\ &=T\sum_n\,\fract{\partial I_L(i{\epsilon}_n)}{\partial i{\epsilon}_n}\;,\label{I_L-deriv} \end{eqs} where \begin{eqs} I_L(i{\epsilon}_n) &= \mathrm{Tr}\Big(\hat{\Sigma}(i{\epsilon}_n)\,\hat{G}(i{\epsilon}_n)\Big) -\Phi(i{\epsilon}_n)\,. \end{eqs} In other words, it is always possible to represent the Luttinger integral as a sum over ${\epsilon}_n$ of a derivative. It follows that the total number of electrons can be written as \beal\nonumber N &= \fract{K}{2} -T\sum_n \;\fract{\partial}{\partial i{\epsilon}_n}\, \mathrm{Tr}\Big(\ln\hat{G}(i{\epsilon}_n)\Big) \\ &\qquad + T\sum_n\,\fract{\partial I_L(i{\epsilon}_n)}{\partial i{\epsilon}_n}\\ &\;\xrightarrow[T\to 0]{} \fract{K}{2} -\int_{-\infty}^\infty \fract{d{\epsilon}}{2\pi}\;\fract{\partial}{\partial i{\epsilon}}\, \mathrm{Tr}\Big(\ln\hat{G}(i{\epsilon})\Big) \\ &\qquad + \int_{-\infty}^\infty \fract{d{\epsilon}}{2\pi}\;\fract{\partial I_L(i{\epsilon})}{\partial i{\epsilon}}\;. \end{eqs} Since $\hat{G}(-i{\epsilon})=\hat{G}(i{\epsilon})^\dagger$ and, similarly, $I_L(-i{\epsilon})=I_L(i{\epsilon})^*$, if we define, through the polar decomposition of $\hat{G}(i{\epsilon})$, the matrix \begin{eqs} \hat{\delta}({\epsilon}) &\equiv \arg\big(\hat{G}(i{\epsilon})\big) = \text{Im}\,\ln \big(\hat{G}(i{\epsilon})\big)\,,\label{LT: hat-delta} \end{eqs} then, for $T\to 0$, and noticing that $\text{Im}\,I_L(i{\epsilon})\to 0$ while $\hat{\delta}({\epsilon})\to -\pi/2\; \hat{I}$ for ${\epsilon}\to\infty$, \begin{eqs} N &= \fract{K}{2} + \int_{-\infty}^\infty \fract{d{\epsilon}}{2\pi}\;\mathrm{Tr}\Big(\hat{G}(i{\epsilon})\Big)\\ &= K + \fract{1}{\pi}\;\mathrm{Tr}\Big(\hat{\delta}(0^+)\Big) -\fract{1}{\pi}\; \text{Im}\,I_L(i0^+)\,. 
\label{LT: general statement} \end{eqs} This expression is exact. It is still not Luttinger's theorem but a kind of generalisation of it, and it is remarkable as it shows that a quantity requiring integration over all frequencies can alternatively be calculated through boundary terms. \\ In fact, the statement of Luttinger's theorem is that $\text{Im}\,I_L(i0^+)=0$ in Eq.~\eqn{LT: general statement}, which is not to be expected a priori. Nonetheless, the proof goes as follows. The Luttinger-Ward functional $\Phi[G]$ is invariant if the Matsubara frequency of each internal Green's function is replaced, see Eq.~\eqn{Dyson}, by $i{\epsilon}_n\,\hat{I}+ i\omega\,\hat{Q}$ for any conserved $Q$, where $\omega=2\pi\,T$. Therefore, \begin{equation} 0= \fract{\Delta^Q\Phi[G]}{i\omega}= T\sum_n\, \mathrm{Tr}\bigg(\hat{\Sigma}(i{\epsilon}_n)\, \fract{\Delta^Q \hat{G}(i{\epsilon})}{i\omega}\bigg),\label{delta Phi = 0} \end{equation} with \begin{eqs} \fract{\Delta^Q\hat{G}(i{\epsilon})}{i\omega} \equiv \fract{\hat{G}(i{\epsilon}_n+i\omega\,\hat{Q})-\hat{G}(i{\epsilon}_n)}{i\omega}\;, \end{eqs} the finite difference of $\hat{G}(i{\epsilon})$. For $\hat{Q}=\hat{I}$ that implies \begin{eqs} 0&= T\sum_n\, \mathrm{Tr}\bigg(\hat{\Sigma}(i{\epsilon}_n)\, \fract{\;\hat{G}(i{\epsilon}_n+i\omega)-\hat{G}(i{\epsilon}_n)\;}{i\omega}\bigg)\\ &= - T\sum_n\, \mathrm{Tr}\bigg(\hat{G}(i{\epsilon}_n)\, \fract{\;\hat{\Sigma}(i{\epsilon}_n+i\omega)-\hat{\Sigma}(i{\epsilon}_n)\;}{i\omega}\bigg)\\ &\equiv -T\sum_n\;\fract{\;I_L(i{\epsilon}_n+i\omega)-I_L(i{\epsilon}_n)\;}{i\omega}\equiv -I^\Delta_L\;, \label{finite-difference} \end{eqs} which just means that the convergence of the series allows the change of variable $i{\epsilon}_n+i\omega\to i{\epsilon}_n$ that makes $I^\Delta_L$ trivially vanish.
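In the non-interacting limit, where $I_L=0$ identically, the boundary-term formula Eq.~\eqn{LT: general statement} can be verified directly: $\hat{\delta}(0^+)$ has elements $\arg\big(1/(i0^+-\xi_k)\big)$, equal to $0$ for occupied levels and $-\pi$ for empty ones, so $N=K+\mathrm{Tr}\,\hat{\delta}(0^+)/\pi$ counts the occupied levels. A minimal sketch with illustrative levels:

```python
import numpy as np

# Non-interacting check of the boundary-term formula: with Sigma = 0 the
# Luttinger integral vanishes and N = K + (1/pi) Tr delta(0+), where
# delta = arg G is evaluated just above zero Matsubara frequency.
xi = np.array([-0.7, -0.2, 0.3, 1.1])      # levels measured from mu
K = len(xi)
eps = 1e-9                                  # stands in for 0+
G = 1.0 / (1j * eps - xi)
delta0 = np.angle(G)                        # arg G(i 0+), one entry per level
N_boundary = K + np.sum(delta0) / np.pi
N_direct = np.sum(xi < 0)                   # occupied levels at T = 0
print(N_boundary, N_direct)   # both equal 2
```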
It is tempting to assume that $I^\Delta_L$, i.e., the sum over ${\epsilon}_n$ of the finite difference, coincides with $I_L$ in Eq.~\eqn{I_L-deriv}, i.e., the sum over ${\epsilon}_n$ of the derivative, in the limit $T\to 0$, thus $\omega\to 0$. That is actually what is commonly assumed in the proof of Luttinger's theorem, in which case $I_L=0$ follows, and thus $\text{Im}\,I_L(i0^+)=0$ in Eq.~\eqn{LT: general statement}. However, that apparently reasonable assumption is not at all guaranteed, as we now discuss. \\ In the Supplementary Notes of Ref.~\cite{mio-2} it has been shown that, at leading order in $T$, \begin{eqs} I_L & =-\fract{1}{\pi}\,\text{Im}\,I_L(i0^+)= I_L -I^\Delta_L\\ &\simeq -\fract{1}{4\pi i}\,\lim_{{\epsilon}\to 0^+}\,S(i{\epsilon}) \,.\label{LT: condition} \end{eqs} where \begin{eqs} S(i{\epsilon}) \equiv \mathrm{Tr}\bigg[ \Big(\hat{G}(i{\epsilon})+ \hat{G}(i{\epsilon})^\dagger\Big) \,\Big( \hat{\Sigma}(i{\epsilon}) - \hat{\Sigma}(i{\epsilon})^\dagger\Big)\bigg]\,. \label{LT: def S(iep)} \end{eqs} It follows that, if $S(i{\epsilon})$ is finite for ${\epsilon}\to 0^+$, then Luttinger's theorem is definitely violated. That happens, e.g., in the Sachdev-Ye-Kitaev model~\cite{Sachdev&Ye-PRL1993,Georges-preprint2021}. On the contrary, one can readily prove that $S(i{\epsilon}\to 0^+)=0$ when perturbation theory holds. 
Indeed, if we define the `quasiparticle' residue \begin{eqs} \sqrt{\,\hat{Z}(i{\epsilon})^\dagger{^{-1}}\,\hat{Z}(i{\epsilon})^{-1}\;} &\equiv \hat{I} -\fract{\;\hat{\Sigma}(i{\epsilon}) - \hat{\Sigma}(i{\epsilon})^\dagger\;}{2i{\epsilon}}\;, \label{LT: def Z} \end{eqs} where $\hat{Z}(i{\epsilon})=\hat{Z}(-i{\epsilon})$, we do know that perturbatively $\hat{Z}(0)=\hat{Z}(0)^\dagger$ is positive definite, so that \beal\nonumber \hat{\Sigma}(i{\epsilon}) - \hat{\Sigma}(i{\epsilon})^\dagger \xrightarrow[{\epsilon}\to 0]{} 2\,\Big(\hat{I} -\hat{Z}(0)^{-1}\Big)\,i{\epsilon}\,, \end{eqs} and thus $S(i{\epsilon})$ vanishes as ${\epsilon}\to 0^+$. However, $S(i{\epsilon}\to 0^+)=0$, though necessary for $I_L=0$, is not a sufficient condition. The reason is that the right-hand side of Eq.~\eqn{LT: condition} is just the leading term of an expansion in $T$. Its vanishing means that each term of the series expansion goes to zero as $T\to 0$, which does not guarantee that the whole series vanishes~\cite{mio-2}. In other words, while we can safely state that, in the regime where perturbation theory is valid, $S(i{\epsilon}\to 0^+)=0$ does imply that $I_L=0$, and thus that Luttinger's theorem holds true, we cannot exclude that the theorem is violated when perturbation theory breaks down.\\ Let us nonetheless assume the necessary condition $S(i{\epsilon}\to i0^+)= 0$ and draw its consequences. By definition, the single-particle density of states $A$ at the chemical potential is \beal\nonumber A &= -\lim_{{\epsilon}\to 0^+}\,\fract{1}{2\pi i}\,\mathrm{Tr}\Big(\hat{G}(i{\epsilon}) - \hat{G}(i{\epsilon})^\dagger\Big)\\ &\equiv \lim_{{\epsilon}\to 0^+}\,\mathrm{Tr}\Big(\hat{A}(i{\epsilon})\Big)\,, \end{eqs} where $\hat{A}(i{\epsilon}) = \hat{A}(i{\epsilon})^\dagger=-\hat{A}(-i{\epsilon})$.
Through $\hat{A}(i{\epsilon})$, we can write \beal\nonumber \hat{\Sigma}(i{\epsilon}) - \hat{\Sigma}(i{\epsilon})^\dagger &= 2i{\epsilon} -2\pi i\,\hat{G}(i{\epsilon})^{-1}\,\hat{A}(i{\epsilon})\,\hat{G}(i{\epsilon})^\dagger{^{-1}}\,, \end{eqs} and thus $S(i{\epsilon})$ in Eq.~\eqn{LT: def S(iep)} becomes \beal\nonumber S(i{\epsilon}) &= 2i{\epsilon}\, \mathrm{Tr}\Big(\hat{G}(i{\epsilon})+ \hat{G}(-i{\epsilon})\Big)\\ &\quad -2\pi i\, \mathrm{Tr}\bigg[ \Big(\hat{G}(i{\epsilon})^{-1}+ \hat{G}(i{\epsilon})^\dagger{^{-1}}\Big) \,\hat{A}(i{\epsilon})\bigg]\,. \end{eqs} We now formally filter out the 'quasiparticle' Green's function through the 'quasiparticle' residue Eq.~\eqn{LT: def Z}, \begin{eqs} \hat{G}_\text{qp}(i{\epsilon})^{-1} &\equiv \sqrt{\,\hat{Z}(i{\epsilon})^\dagger\;}\; \hat{G}(i{\epsilon})^{-1}\; \sqrt{\,\hat{Z}(i{\epsilon})\;} \\ &= i{\epsilon}\,\hat{I}-\hat{\Xi}(i{\epsilon}) \,, \label{LT: def quasiparticle Green} \end{eqs} where \begin{eqs} \hat{\Xi}(i{\epsilon}) &\equiv \sqrt{\,\hat{Z}(i{\epsilon})^\dagger\;}\;\Big(\hat{H}_0 +\text{Re}\,\hat{\Sigma}(i{\epsilon})\Big)\; \sqrt{\,\hat{Z}(i{\epsilon})\;}\,, \label{LT: def Xi} \end{eqs} is a $K\times K$ hermitian matrix, and thus has real eigenvalues ${\epsilon}_{*\ell}({\epsilon})={\epsilon}_{*\ell}(-{\epsilon})$, $\ell=1,\dots,K$. 
Therefore, if we further define \begin{eqs} &\hat{A}_\text{qp}(i{\epsilon}) \equiv -\fract{1}{2\pi i}\,\Big(\hat{G}_\text{qp}(i{\epsilon}) - \hat{G}_\text{qp}(i{\epsilon})^\dagger\Big) \\ &\quad = \fract{{\epsilon}}{\pi }\, \hat{G}_\text{qp}(i{\epsilon})\,\hat{G}_\text{qp}(i{\epsilon})^\dagger = \fract{{\epsilon}}{\pi }\,\fract{1}{\;{\epsilon}^2 + \hat{\Xi}(i{\epsilon})^2\;} \\ &\quad= \sqrt{\,\hat{Z}(i{\epsilon})^{-1}\;}\;\hat{A}(i{\epsilon})\;\sqrt{\,\hat{Z}(i{\epsilon})^\dagger{^{-1}}\;}\,, \label{LT: quasiparticle A} \end{eqs} which is diagonal in the basis that diagonalises $\hat{\Xi}(i{\epsilon})$ with elements \beal\nonumber A_{\text{qp}\,\ell}(i{\epsilon}) = \fract{1}{\pi}\;\fract{{\epsilon}}{\;{\epsilon}^2+ {\epsilon}_{*\ell}({\epsilon})^2\;}\;, \end{eqs} then \begin{eqs} S(i{\epsilon}) &= 2i{\epsilon}\, \mathrm{Tr}\Big(\hat{G}(i{\epsilon})+ \hat{G}(-i{\epsilon})\Big)\\ &\qquad +4\pi i\, \mathrm{Tr}\bigg[\, \hat{\Xi}(i{\epsilon})\,\hat{A}_\text{qp}(i{\epsilon})\,\bigg]\\ &= 2i{\epsilon}\, \mathrm{Tr}\Big(\hat{G}(i{\epsilon})+ \hat{G}(-i{\epsilon})\Big)\\ &\quad +4\pi i\,\sum_{\ell=1}^K\, \fract{\;{\epsilon}_{*\ell}({\epsilon})\;}{\pi}\;\fract{{\epsilon}}{\;{\epsilon}^2+{\epsilon}_{*\ell}({\epsilon})^2\;}\;. \label{LT: def S} \end{eqs} Since the first term on the right hand side of Eq.~\eqn{LT: def S} vanishes for ${\epsilon}\to0$, the necessary condition for Luttinger's theorem to hold becomes \begin{eqs} &\lim_{{\epsilon}\to 0^+}\, \mathrm{Tr}\Big[\, \hat{\Xi}(i{\epsilon})\,\hat{A}_\text{qp}(i{\epsilon})\,\Big]\\ &\qquad = \lim_{{\epsilon}\to 0^+}\,\sum_{\ell=1}^K\, \fract{\;{\epsilon}_{*\ell}({\epsilon})\;}{\pi}\;\fract{{\epsilon}}{\;{\epsilon}^2+{\epsilon}_{*\ell}({\epsilon})^2\;} = 0\,. 
\label{LT: condition 2} \end{eqs} In the thermodynamic limit, $K\to\infty$, ${\epsilon}_{*\ell}({\epsilon})$ defines a continuous spectrum where $\ell$ runs in a $d$-dimensional space, with $d$ the spatial dimension of the system times the number of internal degrees of freedom. For instance, in the periodic case, $\ell$ labels the momentum within the Brillouin zone, the band index and the spin. Any $\ell$ such that ${\epsilon}_{*\ell}({\epsilon}\to 0)\not= 0$ yields a contribution to the sum \eqn{LT: condition 2} that trivially vanishes as ${\epsilon}\to 0$. Let us instead consider the manifold $\ell=\ell_*$ such that ${\epsilon}_{*\ell_*}({\epsilon}\to 0)= 0$. If, for a given $\ell_*$, ${\epsilon}_{*\ell_*}({\epsilon}\to 0)\sim c_*\,|{\epsilon}|^\alpha$, with $\alpha>0$, its contribution to the sum \eqn{LT: condition 2} is \beal\nonumber &\fract{\;{\epsilon}_{*\ell_*}({\epsilon})\;}{\pi}\;\fract{{\epsilon}}{\;{\epsilon}^2+{\epsilon}_{*\ell_*}({\epsilon})^2\;} \\ &\qquad \xrightarrow[{\epsilon}\to 0^+]{} \fract{\;c_*\,|{\epsilon}|^\alpha\;}{\pi}\;\fract{{\epsilon}}{\;{\epsilon}^2+c_*^2\,|{\epsilon}|^{2\alpha}\;} \;, \end{eqs} and vanishes only if $\alpha >1$, which thus becomes the necessary condition for the validity of Luttinger's theorem. We can further distinguish two different cases. For instance, if we assume that \begin{eqs} \mathbf{\bullet}\; \hat{\Xi}(i{\epsilon})~\text{is,~at~leading~order,~analytic~at~}{\epsilon}=0\,, \label{LT: condition quasiparticles} \end{eqs} then $\alpha =2$ since ${\epsilon}_{*\ell}({\epsilon})$ is even in ${\epsilon}$, which automatically satisfies the necessary condition for Luttinger's theorem to hold. In this case, ${\epsilon}_{*\ell}({\epsilon}\to 0) \simeq {\epsilon}_{*\ell}(0)+ O\big({\epsilon}^{2}\big)$, where ${\epsilon}_{*\ell}(0)\equiv {\epsilon}_{*\ell}$ are the eigenvalues of \begin{eqs} \hat{H}_* &\equiv \sqrt{\,\hat{Z}(0)^\dagger\;}\,\Big(\hat{H}_0 +\hat{\Sigma}(0)\Big)\, \sqrt{\,\hat{Z}(0)\;}\,. 
\label{LT: def H_*} \end{eqs} Accordingly, the `quasiparticle' Green's function and density of states at the chemical potential are \begin{eqs} \hat{G}_\text{qp}(i{\epsilon}) &\xrightarrow[{\epsilon}\to 0]{} \fract{1}{\;i{\epsilon}\,\hat{I} -\hat{H}_* \;}\;,\\ A_\text{qp} &= \lim_{{\epsilon}\to 0^+}\,\mathrm{Tr}\Big(\hat{A}_\text{qp}(i{\epsilon})\Big) = \sum_\ell\,\delta\big({\epsilon}_{*\ell}\big)\,, \label{LT: quasiparticle conjecture} \end{eqs} and correspond to those of free particles, thus the `quasiparticles', described by the `quasiparticle' Hamiltonian $\hat{H}_*$ with eigenvalues ${\epsilon}_{*\ell}$. \\ On the contrary, if $\hat{\Xi}(i{\epsilon})$ is non-analytic and yet satisfies the necessary condition for Luttinger's theorem, then $1<\alpha<2$, since any non-analyticity yielding non-integer $\alpha>2$ will be hidden by the ever-present analytic terms. That is precisely what happens for interacting electrons in one dimension. Those systems do not sustain quasiparticles in the sense of Eq.~\eqn{LT: quasiparticle conjecture}, and yet Luttinger's theorem is valid~\cite{Bedell-PRL1997,Affleck-PRL1997}. The same occurs in marginal Fermi liquids~\cite{Varma-PRL1989}, or metals with quantum critical behaviour~\cite{Chubukov-AdvPhys2003}, which, despite a non-analytic self-energy, satisfy Luttinger's theorem~\cite{Heath_2020}. Conversely, since $S(i{\epsilon}\to i0^+)=0$ is not sufficient for Luttinger's theorem to hold, we must also conclude that `quasiparticles' may exist even when Luttinger's theorem is violated~\cite{mio-2}.\\ We also emphasise that $1<\alpha<2$ entails singularities in perturbation theory. Therefore, Eq.~\eqn{LT: condition quasiparticles} must always be satisfied when perturbation theory is well defined, which is equivalent to saying that quasiparticles always exist in the perturbative regime, in agreement with Landau's adiabatic hypothesis.
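The role of the exponent $\alpha$ is easy to check numerically on a single term of the sum in Eq.~\eqn{LT: condition 2} with ${\epsilon}_{*\ell_*}({\epsilon})=c\,|{\epsilon}|^\alpha$: the term vanishes for $\alpha>1$, while the marginal case $\alpha=1$ leaves the finite value $c/\pi(1+c^2)$ (the constant $c$ below is illustrative):

```python
import numpy as np

# Behaviour of a single Luttinger-surface term in the condition (LT: condition 2)
# when eps_* ~ c * |eps|^alpha: the term
#   (c eps^alpha / pi) * eps / (eps^2 + c^2 eps^(2 alpha))
# tends to zero for alpha > 1 and, for alpha = 1, equals c / (pi (1 + c^2))
# identically, independent of eps.
c = 0.8

def term(e, a):
    return (c * e**a / np.pi) * e / (e**2 + (c * e**a) ** 2)

for alpha in (2.0, 1.5, 1.0):
    print(alpha, [term(e, alpha) for e in (1e-2, 1e-4, 1e-6)])
limit_marginal = c / (np.pi * (1 + c**2))   # alpha = 1 limit
```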
\\ Hereafter, we assume the analyticity condition \eqn{LT: condition quasiparticles}, thus Eq.~\eqn{LT: quasiparticle conjecture}. We believe that this choice, though limiting, may be pertinent to doped Mott insulators in dimensions $d>1$~\cite{Kotliar-PRB2006,Prelovsek-PRB2007,Becca-PRB2012,Georges-PNAS2018,Georges-PRX2018,Trivedi-PRB2021}. In that case, $\hat{\delta}({\epsilon})$ is diagonal in the basis that diagonalises $\hat{H}_*$ with elements $-\pi + \pi\,\theta\big(-{\epsilon}_{*\ell}\big)$. It follows that Eq.~\eqn{LT: general statement} becomes \beal\nonumber N &= K + \fract{1}{\pi}\;\mathrm{Tr}\Big(\hat{\delta}(0^+)\Big) -\fract{1}{\pi}\; \text{Im}\,I_L(i0^+)\\ &= \sum_{\ell=1}^K\,\theta\big(-{\epsilon}_{*\ell}\big)-\fract{1}{\pi}\; \text{Im}\,I_L(i0^+)\,, \end{eqs} which represents the general statement \eqn{LT: general statement} of Luttinger's theorem when `quasiparticles' exist. We note that $N$ is an integer at $T=0$ and so is the sum over $\ell$, which implies that the Luttinger integral $I_L$ is quantised in integer values when Eq.~\eqn{LT: condition quasiparticles} holds. Therefore, \begin{eqs} N &= \sum_{\ell=1}^K\,\theta\big(-{\epsilon}_{*\ell}\big) + \mathcal{L}\,,& \mathcal{L} &\in\mathbb{Z}\,, \label{LT: conventional and unconventional statement} \end{eqs} where $\mathcal{L}=0$ in the perturbative regime, in which case the conventional Luttinger's theorem holds, while $\mathcal{L}$ may be finite when perturbation theory breaks down. \subsection{Generalised Luttinger's theorem in presence of quasiparticles and in periodic systems} \label{Generalised Luttinger's theorem in presence of quasiparticles and in periodic systems} In a single-band periodic system invariant under spin $SU(2)$ symmetry, we can further elaborate on the meaning of `quasiparticle'.
In this case, $\hat{G}(i{\epsilon})$ is diagonal in momentum and spin with elements $G(i{\epsilon},\mathbf{k})$ independent of spin, and thus $\hat{\Xi}(i{\epsilon})$ is diagonal, too, with elements ${\epsilon}_*({\epsilon}, \mathbf{k})$ equal for spin $\sigma=\uparrow$ and $\downarrow$, now defined, see Eq.~\eqn{LT: def Xi}, as \begin{equation} {\epsilon}_*({\epsilon}, \mathbf{k}) = \big|Z(i{\epsilon},\mathbf{k})\big|\;\Big({\epsilon}(\mathbf{k})+\text{Re}\,\Sigma(i{\epsilon},\mathbf{k})\Big) \,. \label{LT: def ep_*(ep,k)} \end{equation} Correspondingly, the quasiparticle, $A_\text{qp}$, and physical electron, $A$, density of states at the chemical potential are, in units of the number of sites $V$, see Eq.~\eqn{LT: quasiparticle conjecture}, \begin{eqs} A_\text{qp} &= \fract{1}{V}\,\sum_{\mathbf{k}\sigma}\,\delta\big({\epsilon}_*(\mathbf{k})\big)\,,\\ A &= \fract{1}{V}\,\sum_{\mathbf{k}\sigma}\,Z(i{\epsilon}\to i0^+,\mathbf{k})\,\delta\big({\epsilon}_*(\mathbf{k})\big)\,, \label{LT: quasiparticle and physical particle DOS} \end{eqs} where ${\epsilon}_*(\mathbf{k})={\epsilon}_*({\epsilon}\to 0, \mathbf{k})$. We already know that Eqs.~\eqn{LT: condition 2} and \eqn{LT: condition quasiparticles} imply that, if a manifold $\mathbf{k}=\mathbf{k}_*$ exists such that ${\epsilon}_*(0, \mathbf{k}_*)=0$, then ${\epsilon}_*({\epsilon}\to 0, \mathbf{k}_*)\simeq {\epsilon}^2$. We observe that ${\epsilon}_*(0, \mathbf{k}_*)=0$ may occur \begin{description} \item[Fermi Surface] if $\mathbf{k}_*=\mathbf{k}_F$, with $\mathbf{k}_F$ such that ${\epsilon}(\mathbf{k}_{F})+\Sigma(0,\mathbf{k}_F)=0$ while $0<Z(0,\mathbf{k}_F)<1$, which defines a conventional Fermi surface $\mathbf{k}=\mathbf{k}_F$ through the roots of $G(0,\mathbf{k})^{-1}$ in momentum space. The Fermi surface contribution to the physical electron DOS Eq.~\eqn{LT: quasiparticle and physical particle DOS} is finite since $Z(0,\mathbf{k}_F)\not=0$. 
\item[Luttinger Surface] if $\mathbf{k}_*=\mathbf{k}_L$, with $\mathbf{k}_L$ such that ${\epsilon}(\mathbf{k}_{L})+\Sigma(0,\mathbf{k}_L)\not=0$ but \begin{eqs} \lim_{{\epsilon}\to 0^+}\,\big|Z(i{\epsilon},\mathbf{k}_L)\big| &= \lim_{{\epsilon}\to 0^+}\,\fract{{\epsilon}}{\;{\epsilon} -\text{Im}\,\Sigma(i{\epsilon},\mathbf{k}_L)\;} \\ &\sim \lim_{{\epsilon}\to 0}\,{\epsilon}^2 = 0\,, \label{LT: Luttinger surface} \end{eqs} which implies $\Sigma(i{\epsilon},\mathbf{k}_L)\sim 1/i{\epsilon}$ and, correspondingly, $G(i{\epsilon},\mathbf{k}_L)\to 0$ as ${\epsilon}\to 0$. Therefore, Eq.~\eqn{LT: Luttinger surface} defines the so-called Luttinger surface~\cite{Igor-PRB2003}, i.e., the manifold of roots $\mathbf{k}=\mathbf{k}_L$ of $G(0,\mathbf{k})$ in momentum space, whose existence is due to a singular self-energy and thus signals the breakdown of perturbation theory. Remarkably, even though the Luttinger surface contribution to the quasiparticle DOS, $A_\text{qp}$ in Eq.~\eqn{LT: quasiparticle and physical particle DOS}, is finite, its contribution to the physical electron DOS vanishes~\cite{mio-2}. \end{description} Therefore, under the analyticity assumption \eqn{LT: condition quasiparticles}, Fermi and Luttinger surfaces are both defined by the one and only equation ${\epsilon}_*(0, \mathbf{k}_{F/L})=0$~\cite{mio-2}. 
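A toy scalar self-energy $\Sigma(i{\epsilon},\mathbf{k}_L)=\Delta^2/i{\epsilon}$ reproduces the behaviour at a Luttinger surface defined by Eq.~\eqn{LT: Luttinger surface}: $|Z(i{\epsilon},\mathbf{k}_L)|={\epsilon}^2/({\epsilon}^2+\Delta^2)$ vanishes as ${\epsilon}^2$ while $G(i{\epsilon},\mathbf{k}_L)\to 0$ (the values of $\Delta$ and $\xi$ below are illustrative):

```python
import numpy as np

# Toy Luttinger-surface self-energy Sigma(i eps) = Delta^2 / (i eps),
# the hallmark singularity at k = k_L: the residue |Z| vanishes as eps^2
# while the Green's function itself goes to zero at zero frequency.
Delta, xi = 0.5, 0.3                        # illustrative parameters
for e in (1e-1, 1e-2, 1e-3):
    Sigma = Delta**2 / (1j * e)             # Im Sigma = -Delta^2 / eps
    G = 1.0 / (1j * e - xi - Sigma)
    Z = e / (e - Sigma.imag)                # = eps^2 / (eps^2 + Delta^2)
    print(e, abs(Z), abs(G))                # both shrink with eps
```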
Moreover, as we earlier mentioned, if perturbation theory is valid, there are always quasiparticles, only a Fermi surface may exist within the Brillouin zone, and, see Eq.~\eqn{LT: conventional and unconventional statement} at $\mathcal{L}=0$, \begin{eqs} N &= \sum_{\mathbf{k}\sigma}\,\theta\big(-{\epsilon}_*(\mathbf{k})\big)\,, \label{LT: conventional statement periodic} \end{eqs} which is the standard perturbative Luttinger's theorem statement that the fraction of the quasiparticle Fermi volume, i.e., the manifold of $\mathbf{k}:\,{\epsilon}_*(\mathbf{k})<0$, with respect to the whole Brillouin zone is equal to the electron filling fraction $\nu=N/2V$.\\ When perturbation theory breaks down without breaking translational and spin $SU(2)$ symmetries, and Luttinger surfaces appear inside the Brillouin zone, we must use the more general formula \begin{eqs} N &= \sum_{\mathbf{k}\sigma}\,\theta\big(-{\epsilon}_*(\mathbf{k})\big) + \mathcal{L}\,,& \mathcal{L}&\in\mathbb{Z}\,, \label{LT: unconventional statement periodic} \end{eqs} and thus the quasiparticle Fermi volume fraction no longer accounts for the electron filling fraction.\\ \noindent In order to proceed in this case, we use Oshikawa's topological approach to Luttinger's theorem in periodic systems~\cite{Oshikawa-PRL2000}. We first note that the above 'quasiparticle' derivation holds even when the system is a non-symmetry-breaking Mott insulator, provided it has a Luttinger surface within the Brillouin zone. In the single-band model we are discussing, that may occur only at half-filling. Following Oshikawa~\cite{Oshikawa-PRL2000}, we imagine adiabatically threading through the above Mott insulator a fictitious flux quantum $\Phi_0$ that only couples to one spin species, whose particle number is conserved by charge $U(1)$ and spin $SU(2)$ symmetry, assuming, e.g., a gauge in which the vector potential has a finite $x$-component only.
The final state differs from the initial one by a lattice momentum of $\pi$ in the $x$-direction~\cite{Oshikawa-PRL2000}, and that must be supplied by the 'quasiparticles' at the Luttinger surface~\cite{mio-2}. The same result holds true if we couple the flux to the other spin species. On the contrary, if the flux couples to both spin species, the system acquires a momentum $2\pi\equiv 0$, which suggests that each spin species contributes momentum $\pi$. The conclusion is that the Luttinger surface, whatever its shape and volume, contributes one electron per site to the particle count. If that remains true even when, upon doping the Mott insulator, Fermi pockets appear in the Brillouin zone, then Oshikawa's argument implies that the electron filling fraction $\nu$ is given by \begin{eqs} \nu &= \fract{1}{2} + v_{EP} - v_{HP}\,,\label{Oshikawa result} \end{eqs} where $v_{EP}$ and $v_{HP}$ are the fractions of electron-like and hole-like Fermi pockets with respect to the whole Brillouin zone. This result is consistent with the proposal of Yang, Rice and Zhang~\cite{Rice-PRB2006,Rice-RPP2011} in the pseudo-gap phase of underdoped cuprates, but also with that of fractionalised Fermi liquids~\cite{Fractionalised-Senthil-PRL2003}. Equation~\eqn{Oshikawa result} is graphically shown in Fig.~\ref{count}. \begin{figure}[thb] \centerline{\includegraphics[width=0.4\textwidth]{count.pdf}} \caption{Graphical representation of the electron count when perturbation theory is valid, panel (a), or, panel (b), when it breaks down and a Luttinger surface appears, green line in the figure. Electron-like, i.e., ${\epsilon}_*(\mathbf{k})<0$, and hole-like, i.e., ${\epsilon}_*(\mathbf{k})>0$, Fermi pockets are shown, respectively, in red and blue and have volume fractions $v_{EP}$ and $v_{HP}$ with respect to the whole Brillouin zone.
When perturbation theory is valid, the electron filling fraction $\nu=N/2V$, where $N$ is the total number of electrons and $V$ the number of sites, is simply given by $\nu=v_{EP}$, panel (a). When a Luttinger surface exists, the filling fraction is obtained through $\nu=1/2 +v_{EP}-v_{HP}$, panel (b). } \label{count} \end{figure} \begin{figure}[htb] \centerline{\includegraphics[width=0.4\textwidth]{ReG_met_side.pdf}} \caption{Real part of the local Green's function as obtained by dynamical mean field theory in the Hubbard model with a very weak chemical potential breaking particle-hole symmetry. Upon increasing the Hubbard $U$, the model has a transition between a metal and a Mott insulator, which, away from particle-hole symmetry, is first order. In the figure we show the evolution of $\text{Re}\mathcal{G}(i{\epsilon})$ starting from the weak-coupling metal and raising $U$. Note that a double zero first appears in the metal at $U\simeq 2$ at finite ${\epsilon}$, which signals the birth of the Hubbard bands. Upon further increasing $U$, that double zero splits into two, one moving towards ${\epsilon}=0$. The value of $U\simeq 2.9$ at which the root reaches ${\epsilon}=0$ corresponds to the metal spinodal point, above which the only stable phase is insulating.} \label{Rea G DMFT} \end{figure} To better understand how the situation depicted in Fig.~\ref{count} may occur, let us start from the perturbative regime and, upon varying the Hamiltonian parameters $\lambda$, like the interaction strength or the doping, reach the point $\lambda_c$ at which perturbation theory breaks down, i.e., its convergence radius. For convenience, we assume that $\lambda<\lambda_c$ identifies the perturbative regime, and $\lambda>\lambda_c$ the non-perturbative one.
Therefore, $\lambda=\lambda_c$ corresponds to the birth of a Luttinger surface and a concomitant dramatic change within the Brillouin zone: a large Fermi surface either disappears or abruptly turns into small hole and/or electron Fermi pockets, consistently with Eq.~\eqn{Oshikawa result}. Let us try to imagine how that may occur. In general, $\text{Re}\,G(i{\epsilon},\mathbf{k}) =\text{Re}\,G(-i{\epsilon},\mathbf{k})$ has an even number of roots $2\ell_\mathbf{k}$ on the imaginary frequency axis, symmetrically located around ${\epsilon}=0$. If we borrow the results obtained in the Hubbard model by single-site dynamical mean field theory (DMFT)~\cite{DMFT-review}, see Fig.~\ref{Rea G DMFT}, and translate them to finite dimensions, we expect that at fixed ${\epsilon}={\epsilon}_r>0$, which is a function of $\lambda$ and vanishes as $\lambda\to \lambda_c$ from below, there is a surface of roots of $\text{Re}\,G\big(i{\epsilon}_r,\mathbf{k}_L({\epsilon}_r)\big)=\text{Re}\,G\big(-i{\epsilon}_r,\mathbf{k}_L({\epsilon}_r)\big)$, or, equivalently, of ${\epsilon}_*\big({\epsilon}_r,\mathbf{k}_L({\epsilon}_r)\big)$, which smoothly evolves into the Luttinger surface as $\lambda\to\lambda_c$. Similarly, we can always define at any small ${\epsilon}$, thus also at ${\epsilon}_r$ when $\lambda\lesssim\lambda_c$, a surface of zeros of ${\epsilon}_*\big({\epsilon},\mathbf{k}_F({\epsilon})\big)$ that are instead smoothly connected to the Fermi surface at ${\epsilon}=0$, i.e., the roots of ${\epsilon}_*(0,\mathbf{k}_F)$. Since ${\epsilon}_*({\epsilon}_r,\mathbf{k})$ are the eigenvalues of a Hermitian operator, if the two surfaces, $\mathbf{k}_L({\epsilon}_r)$ and $\mathbf{k}_F({\epsilon}_r)$, cross within the Brillouin zone, those are actually avoided crossings. That simply rationalises the Fermi surface reshaping predicted by Eq.~\eqn{Oshikawa result}, see Fig.~\ref{count}, as $\lambda\to\lambda_c$, thus ${\epsilon}_r\to 0$.
In the case of Fig.~\ref{Rea G DMFT}, where the breakdown corresponds to the metal spinodal point, the two zeros at ${\epsilon}=\pm{\epsilon}_r$ simply annihilate each other when ${\epsilon}_r\to 0$ as $\lambda\to\lambda_c$. Beyond single-site DMFT, we cannot exclude that the Luttinger surface, and thus Eq.~\eqn{Oshikawa result}, survives after the breakdown, changes shape and eventually disappears, as in the case discussed in Ref.~\cite{Rosh-2007}. Once that has happened, namely once the two zeros that had merged at ${\epsilon}=0$ finally annihilate each other, it is difficult to ascertain from the behaviour at ${\epsilon}=0$ whether the system is in the perturbative regime, where we can use conventional Luttinger's theorem, or whether perturbation theory has broken down and, in that case, how to count the electron number. There is, however, a circumstance in which we can make a firm statement, namely, when the self-energy is local, as in single-site DMFT~\cite{DMFT-review}, see Fig.~\ref{Rea G DMFT}, or in impurity models. In that case, the sign of the real part of the impurity Green's function $\mathcal{G}(i{\epsilon})$, which is the local Green's function in DMFT, is fixed as ${\epsilon}\to\infty$, i.e., in the Hartree-Fock regime: it is negative if the impurity is less than half-filled, the case of Fig.~\ref{Rea G DMFT}, and positive otherwise. It follows that, when perturbation theory is valid and Luttinger's theorem holds, the sign of $\text{Re}\,\mathcal{G}(i{\epsilon})$ at ${\epsilon}=0$ must be the same as at ${\epsilon}\to\infty$. When it breaks down, the sign must be opposite, corresponding to the two zeros of $\text{Re}\,\mathcal{G}(i{\epsilon})$ that have annihilated each other at ${\epsilon}=0$.
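This sign criterion can be checked on a toy local Green's function, $\mathcal{G}(i{\epsilon})^{-1}=i({\epsilon}+\Gamma)-{\epsilon}_d-\Sigma(i{\epsilon})$ for ${\epsilon}>0$, comparing a regular self-energy, $\Sigma=0$, with a singular one, $\Sigma(i{\epsilon})=\Delta^2/(i{\epsilon}-\bar\mu)$; all parameter values below are invented for illustration, not taken from any microscopic calculation:

```python
# Toy local Green's function on the upper Matsubara axis:
#   G(i eps)^{-1} = i(eps + Gamma) - ed - Sigma(i eps),  eps > 0,
# with either a regular (Sigma = 0) or a singular
# Sigma(i eps) = Delta^2/(i eps - mu_bar) self-energy.
# All parameter values are illustrative.
ed, Gamma, Delta, mu_bar = 0.1, 0.1, 0.3, 0.05

def ReG(eps, singular):
    sig = Delta**2 / (1j * eps - mu_bar) if singular else 0.0
    return (1.0 / (1j * (eps + Gamma) - ed - sig)).real

def count_zeros(singular):
    # number of sign changes of Re G(i eps) on 0 < eps < infinity
    grid = [1e-3 * 1.05**n for n in range(300)]
    vals = [ReG(e, singular) for e in grid]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

for singular in (False, True):
    same_sign = ReG(1e-4, singular) * ReG(1e4, singular) > 0
    print(singular, count_zeros(singular), same_sign)
```

The regular case gives an even number of zeros (here none) with equal signs at ${\epsilon}=0$ and ${\epsilon}\to\infty$; the singular case gives an odd number with opposite signs, as argued above.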
Therefore, the expectation value of the impurity occupation number close to half-filling is \begin{eqs} n &= \sum_\sigma \Bigg(\,\fract{1}{2} - \int_0^\infty \fract{d{\epsilon}}{\pi}\,\fract{\partial \delta({\epsilon})}{\partial{\epsilon}}\Bigg)\\ &\qquad -\sum_\sigma\,\fract{1-(-1)^{\ell}}{4}\,\text{sign}\big(\text{Re}\,\mathcal{G}(0)\big)\,, \label{PT-NO-AIM} \end{eqs} where now $\delta({\epsilon})=\text{arg}\big(\mathcal{G}(i{\epsilon})\big)$, and $\ell$ is simply the number of roots of $\text{Re}\,\mathcal{G}(i{\epsilon})$ on the semi-axis $0<{\epsilon}<\infty$. In what follows, we discuss a few solvable cases where perturbation theory breaks down and Luttinger's theorem is violated, and test the validity of Eqs.~\eqn{Oshikawa result} and \eqn{PT-NO-AIM}. \section{SDW fluctuation state} \label{SDW} The first example that we analyse is the model studied in Ref.~\cite{Altshuler-EPL1998} as representative of nearly antiferromagnetic Fermi liquids. The model consists of electrons on a $V$-site cubic or square lattice, with non-interacting dispersion ${\epsilon}(\mathbf{k})$. The electrons exchange critical longitudinal spin fluctuations, with dynamical susceptibility \begin{eqs} \chi(i\,\omega,\mathbf{q}) = \fract{\Delta^2}{g}\,\fract{\delta_{\omega,0}}{T}\,V\,\delta_{\mathbf{q},\mathbf{Q}}\,, \label{SDW-U} \end{eqs} where $\mathbf{Q}=(\pi,\dots,\pi)$ and $g$ is the exchange constant.
The exact self-energy in the paramagnetic phase reads~\cite{Altshuler-EPL1998,Chubukov-PhsRep1997} \begin{equation} \Sigma(i{\epsilon},\mathbf{k}) = \fract{\Delta^2}{\;i{\epsilon}-{\epsilon}(\mathbf{k}+\mathbf{Q})\;} = \Delta^2\,G_0(i{\epsilon},\mathbf{k}+\mathbf{Q})\;, \end{equation} where $G_0(i{\epsilon},\mathbf{k})$ is the non-interacting Green's function, hence \begin{eqs} G^{-1}(i{\epsilon},\mathbf{k}) &= G_0^{-1}(i{\epsilon},\mathbf{k}) -\Delta^2\,G_0(i{\epsilon},\mathbf{k}+\mathbf{Q})\,.\label{SDW-G} \end{eqs} \begin{figure} \centerline{\includegraphics[width=0.48\textwidth]{{Figure1}.pdf}} \caption{Top panels: band structure Eq.~\eqn{SDW:ep+ep-} on a square lattice with nearest-neighbour, $t$, and next-nearest-neighbour, $t'=-0.2t$, hopping, at $\Delta=-0.3$ and different chemical potentials corresponding to hole, left panel, and electron, right panel, doping with respect to half-filling, middle panel. Bottom panels: corresponding Luttinger surface, green line, and Fermi pockets, hole-like in orange and electron-like in blue. The non-interacting Fermi surface is also shown, black dotted line. In the present case, our conjecture \eqn{Oshikawa result} predicts that all $\mathbf{k}$ points in the Brillouin zone contribute one electron to the total electron number, with the exception of those inside the Fermi pockets, which contribute zero or two electrons if the pockets are, respectively, hole- or electron-like.} \label{FermiPockets} \end{figure} In this case, a Luttinger surface always exists and Luttinger's theorem is violated at any $\Delta\not=0$~\cite{Altshuler-EPL1998}.
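That $G(0,\mathbf{k})$ of Eq.~\eqn{SDW-G} indeed vanishes on the manifold ${\epsilon}(\mathbf{k}_L+\mathbf{Q})=0$ can be checked directly; the sketch below uses an illustrative square-lattice dispersion with nearest-neighbour hopping only:

```python
import math

# Green's function of Eq. (SDW-G):
#   G(i eps, k)^{-1} = i eps - eps(k) - Delta^2 / (i eps - eps(k+Q)).
# Hopping t, chemical potential mu and Delta are illustrative values.
t, mu, Delta = 1.0, 0.2, 0.3
Q = (math.pi, math.pi)

def ek(kx, ky):
    return -2.0 * t * (math.cos(kx) + math.cos(ky)) - mu

def G(eps, kx, ky):
    eQ = ek(kx + Q[0], ky + Q[1])
    return 1.0 / (1j * eps - ek(kx, ky) - Delta**2 / (1j * eps - eQ))

# Luttinger-surface point: eps(k_L + Q) = 2t(cos kx + cos ky) - mu = 0;
# choosing ky = pi/2 gives kx = acos(mu / 2t).
kxL, kyL = math.acos(mu / (2 * t)), math.pi / 2
eps = 1e-4                        # i eps -> i0^+

print(abs(G(eps, kxL, kyL)))      # ~ eps/Delta^2: G vanishes on the surface
print(abs(G(eps, 0.3, 0.3)))      # generic k: G stays finite
```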
Through the exact Green's function \eqn{SDW-G} one readily finds~\cite{Altshuler-EPL1998} that, for a single spin species, \begin{eqs} n(\mathbf{k})+n(\mathbf{k}+\mathbf{Q}) = \theta\big(-{\epsilon}_+(\mathbf{k})\big) + \theta\big(-{\epsilon}_-(\mathbf{k})\big)\,, \label{SDW-exact} \end{eqs} where \begin{eqs} {\epsilon}_{\pm}(\mathbf{k}) &= \fract{{\epsilon}(\mathbf{k})+{\epsilon}(\mathbf{k}+\mathbf{Q})}{2} \\ &\qquad \pm \sqrt{\left(\fract{{\epsilon}(\mathbf{k})-{\epsilon}(\mathbf{k}+\mathbf{Q})}{2}\right)^2 + \Delta^2\,}\;, \label{SDW:ep+ep-} \end{eqs} see Fig.~\ref{FermiPockets}, so that \begin{itemize} \item $n(\mathbf{k})+n(\mathbf{k}+\mathbf{Q})= 2\theta\big(-{\epsilon}(\mathbf{k})\big)$ if ${\epsilon}(\mathbf{k}){\epsilon}(\mathbf{k}+\mathbf{Q}) >\Delta^2>0$, \item $n(\mathbf{k})+n(\mathbf{k}+\mathbf{Q})=1$ if $\Delta^2>{\epsilon}(\mathbf{k}){\epsilon}(\mathbf{k}+\mathbf{Q})$. \end{itemize} The quasiparticle residue at ${\epsilon}=0$ is now \begin{eqs} Z(\mathbf{k}) = \fract{{\epsilon}(\mathbf{k}+\mathbf{Q})^2}{\Delta^2+{\epsilon}(\mathbf{k}+\mathbf{Q})^2}\;, \end{eqs} so that the Luttinger surface is defined by $\mathbf{k}_L:\,Z(\mathbf{k}_L)=0$, i.e., $\mathbf{k}_L:\,{\epsilon}(\mathbf{k}_L+\mathbf{Q})=0$, while the quasiparticle energy is given by \begin{eqs} {\epsilon}_*(\mathbf{k}) = Z(\mathbf{k})\,\fract{1}{{\epsilon}(\mathbf{k}+\mathbf{Q})}\, \Big({\epsilon}(\mathbf{k})\,{\epsilon}(\mathbf{k}+\mathbf{Q})-\Delta^2\Big)\,, \label{SDW: quasiparticle energy} \end{eqs} which allows defining the Fermi surface by $\mathbf{k}_F:\,{\epsilon}(\mathbf{k}_F)\,{\epsilon}(\mathbf{k}_F+\mathbf{Q})=\Delta^2$. The non-interacting Fermi surface, the interacting Luttinger one, and the interacting Fermi pockets are shown in Fig.~\ref{FermiPockets} for a few exemplary cases. Let us now apply Eq.~\eqn{Oshikawa result} to calculate the momentum distribution.
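The two cases above follow from elementary algebra, since ${\epsilon}_+{\epsilon}_-={\epsilon}(\mathbf{k})\,{\epsilon}(\mathbf{k}+\mathbf{Q})-\Delta^2$; they can also be verified numerically by sampling random momenta on a square lattice and evaluating the band occupations $\theta(-{\epsilon}_+)+\theta(-{\epsilon}_-)$ (the values of $t$, $\mu$ and $\Delta$ below are illustrative):

```python
import math, random

# Numerical check of the two cases above: with eps_pm from Eq. (SDW:ep+ep-),
# theta(-eps_+) + theta(-eps_-) equals 2*theta(-eps(k)) for
# eps(k)*eps(k+Q) > Delta^2, and 1 for Delta^2 > eps(k)*eps(k+Q).
# t, mu, Delta are illustrative values.
t, mu, Delta = 1.0, 0.4, 0.3

def theta(x):
    return 1 if x > 0 else 0

def ek(kx, ky):
    return -2.0 * t * (math.cos(kx) + math.cos(ky)) - mu

random.seed(1)
for _ in range(2000):
    kx = random.uniform(-math.pi, math.pi)
    ky = random.uniform(-math.pi, math.pi)
    e1, e2 = ek(kx, ky), ek(kx + math.pi, ky + math.pi)
    if abs(e1 * e2 - Delta**2) < 1e-9:   # skip the measure-zero boundary
        continue
    avg = 0.5 * (e1 + e2)
    rad = math.hypot(0.5 * (e1 - e2), Delta)
    n = theta(-(avg + rad)) + theta(-(avg - rad))
    assert n == (2 * theta(-e1) if e1 * e2 > Delta**2 else 1)
print("both cases verified on 2000 random momenta")
```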
Through ${\epsilon}_*(\mathbf{k})$ in Eq.~\eqn{SDW: quasiparticle energy} we realise that the Fermi pockets, when they exist, include all $\mathbf{k}$ such that ${\epsilon}(\mathbf{k})\,{\epsilon}(\mathbf{k}+\mathbf{Q})\geq \Delta^2$, and are electron-like if ${\epsilon}(\mathbf{k})<0$ and hole-like otherwise. This observation, together with Eq.~\eqn{Oshikawa result}, directly yields Eq.~\eqn{SDW-exact}. Despite its simplicity, this model is very instructive and yields insights that we believe are rather general. Since the interaction is a $\delta$-function in frequency, it is rather easy to express the self-energy as a functional of the interacting Green's functions \eqn{SDW-G} and of the interaction strength $\Delta$. We find that \begin{equation} \Sigma(i{\epsilon},\mathbf{k}) = \fract{\;\sqrt{1+X(i{\epsilon},\mathbf{k},\mathbf{k}+\mathbf{Q})\;}-1\;}{2G(i{\epsilon},\mathbf{k})}\,,\label{SDW:Sigma.vs.G} \end{equation} where \begin{eqs} X(i{\epsilon},\mathbf{k},\mathbf{k}+\mathbf{Q}) \equiv 4\Delta^2\,G(i{\epsilon},\mathbf{k})\,G(i{\epsilon},\mathbf{k}+\mathbf{Q})\,,\label{SDW-X} \end{eqs} through which the Luttinger integral can be written as \begin{widetext} \begin{eqs} I_L(\mathbf{k},\mathbf{k}+\mathbf{Q}) &= \int_{-\infty}^\infty \fract{d{\epsilon}}{2\pi}\;\Bigg\{ G(i{\epsilon},\mathbf{k})\fract{\partial\Sigma(i{\epsilon},\mathbf{k})}{\partial i{\epsilon}} +G(i{\epsilon},\mathbf{k}+\mathbf{Q})\fract{\partial\Sigma(i{\epsilon},\mathbf{k}+\mathbf{Q})}{\partial i{\epsilon}}\Bigg\}\\ &= \int_{-\infty}^\infty \fract{d{\epsilon}}{2\pi}\;\fract{\partial}{\partial i{\epsilon}}\,\ln\fract{\sqrt{1+X(i{\epsilon},\mathbf{k},\mathbf{k}+\mathbf{Q})\,}+1}{2}\\ &= -\fract{1}{\pi}\;\text{Im}\,\ln\fract{\sqrt{1+X(i0^+,\mathbf{k},\mathbf{k}+\mathbf{Q})\,}+1}{2}\\ &= -\theta\big(\Delta^2-{\epsilon}(\mathbf{k})\,{\epsilon}(\mathbf{k}+\mathbf{Q})\big)\,
\theta\big({\epsilon}(\mathbf{k})\,{\epsilon}(\mathbf{k}+\mathbf{Q})\big)\,\text{sign}\big({\epsilon}(\mathbf{k})+{\epsilon}(\mathbf{k}+\mathbf{Q})\big) \,,\label{nota-bene} \end{eqs} \end{widetext} consistently with Eq.~\eqn{I_L-deriv}. It is worth noticing that $I_L$ entangles the phases $\delta(0,\mathbf{k})$ and $\delta(0,\mathbf{k}+\mathbf{Q})$ of the two Green's functions, which appear as independent quantities in conventional Luttinger's theorem. We believe this is the key role of the Luttinger integral whenever it is finite.\\ We can take a step further and explicitly build the Luttinger-Ward functional \begin{equation} \Phi[G] = \sum_{\mathbf{k}}T\sum_n\esp{i{\epsilon}_n\eta}\, \Phi\big[G(i{\epsilon}_n,\mathbf{k}),G(i{\epsilon}_n,\mathbf{k}+\mathbf{Q})\big]\,, \end{equation} where the sum over $\mathbf{k}$ is within the reduced Brillouin zone, by solving \begin{eqs} \fract{\delta \Phi\big[G(i{\epsilon},\mathbf{k}),G(i{\epsilon},\mathbf{k}+\mathbf{Q})\big]}{\delta G(i{\epsilon},\mathbf{k})} = \Sigma(i{\epsilon},\mathbf{k})\,. \end{eqs} We find that $\Phi\big[G(i{\epsilon}_n,\mathbf{k}),G(i{\epsilon}_n,\mathbf{k}+\mathbf{Q})\big]$ is actually a functional $\Phi[X]$ of $X$ in Eq.~\eqn{SDW-X}, specifically \begin{eqs} \Phi[X] &= \sqrt{1+X\,}-1 - \ln\fract{\sqrt{1+X\,}+1}{2}\;. \end{eqs} We end by noticing that the square root in the expression \eqn{SDW:Sigma.vs.G} of $\Sigma(i{\epsilon},\mathbf{k})$ implies that the inverse of Dyson's equation \begin{eqs} G_0(i{\epsilon},\mathbf{k})^{-1} &= G(i{\epsilon},\mathbf{k})^{-1} + \fract{\delta\Phi[G]}{\delta G(i{\epsilon},\mathbf{k})}\;, \end{eqs} generally admits two solutions $G_0(i{\epsilon},\mathbf{k})$, only one of which is physical. This result agrees with several pieces of evidence~\cite{Georges-PRL2015,Schafer-PRB2016,Toschi-PRL2017,Chalupa-PRB2018} that the Luttinger-Ward functional may become multivalued upon increasing the interaction strength.
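The functional relation $\delta\Phi/\delta G(i{\epsilon},\mathbf{k})=\Sigma(i{\epsilon},\mathbf{k})$ can be verified by finite differences, treating $G(i{\epsilon},\mathbf{k})$ and $G(i{\epsilon},\mathbf{k}+\mathbf{Q})$ as two independent complex variables; the test values below are arbitrary:

```python
import cmath

# Finite-difference check that dPhi/dG(k) reproduces Sigma of
# Eq. (SDW:Sigma.vs.G), with Phi[X] = sqrt(1+X) - 1 - ln((sqrt(1+X)+1)/2)
# and X = 4 Delta^2 G(k) G(k+Q).  Delta, Ga, Gb are arbitrary test values.
Delta = 0.3
Ga, Gb = 0.4 - 0.7j, -1.1 + 0.2j      # G(i eps, k) and G(i eps, k+Q)

def phi(ga, gb):
    s = cmath.sqrt(1 + 4 * Delta**2 * ga * gb)
    return s - 1 - cmath.log((s + 1) / 2)

def sigma(ga, gb):
    s = cmath.sqrt(1 + 4 * Delta**2 * ga * gb)
    return (s - 1) / (2 * ga)

h = 1e-6                              # real step; phi is holomorphic in Ga
dphi_dGa = (phi(Ga + h, Gb) - phi(Ga - h, Gb)) / (2 * h)
print(dphi_dGa, sigma(Ga, Gb))        # the two agree to O(h^2)
```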
\section{Pseudo-gap impurity model} \label{AIM} Let us now discuss the failure of Luttinger's theorem in the impurity model studied in Ref.~\cite{Hewson-2018} by numerical renormalisation group (NRG). For convenience, we consider a slightly different model with the same physical properties, which was thoroughly investigated in Ref.~\cite{Lorenzo-PRB2004} thus saving us from recalculating the whole phase diagram. The model represents a two-orbital Anderson impurity with inverted Hund's rules. The Hamiltonian is \begin{eqs} H &=H_0+H_\text{imp}\,,\label{HAIM} \end{eqs} where \begin{equation} H_0 = \sum_{i=1}^2\sum_{\mathbf{k}\sigma}\Big[{\epsilon}_\mathbf{k}\,c^\dagger_{i\mathbf{k}\sigma}\,c^{\phantom{\dagger}}_{i\mathbf{k}\sigma} + V_\mathbf{k}\,\big(c^\dagger_{i\mathbf{k}\sigma}\,d^{\phantom{\dagger}}_{i\sigma} + H.c.\big)\Big]\,,\label{H0AIM} \end{equation} is the sum of two equivalent resonant level models, and \begin{equation} H_\text{imp} = {\epsilon}_d\,\big(n-2\big) + \fract{U}{2}\big(n-2\big)^2-2J\,\Big( \bd{T}\cdot\bd{T} - T_3^2\Big)\,,\label{Hlocal} \end{equation} where $n=\sum_{i\sigma} n_{i\sigma}$, with $n_{i\sigma}=d^\dagger_{i\sigma}\,d^{\phantom{\dagger}}_{i\sigma}$ the occupation number of the impurity orbital $i=1,2$ with spin $\sigma$, while $\bd{T} = \big(T_1,T_2,T_3\big)$ is a pseudo-spin operator with \begin{eqs} T_a &=\fract{1}{2}\,\sum_\sigma\,\sum_{ij}\, d^\dagger_{i\sigma}\,\big(\hat{\tau}_a\big)_{ij}\,d^{\phantom{\dagger}}_{j\sigma}\,,& a=1,2,3\,, \end{eqs} and $\hat{\tau}_a$ the Pauli matrices in the two-orbital space. We assume that $H_0$ in \eqn{H0AIM} is particle-hole (p-h) symmetric, so that a finite ${\epsilon}_d$ in \eqn{Hlocal} is the only source of p-h symmetry breaking. 
In the following calculations we take a hybridisation width \begin{eqs} \Gamma({\epsilon}) &\equiv \pi\,\sum_\mathbf{k}\,V_\mathbf{k}^2\,\delta\big({\epsilon}-{\epsilon}_\mathbf{k}\big) = \Gamma\,\theta\big(1-|{\epsilon}|\big)\,, \end{eqs} with $\Gamma=0.1$, which also defines our unit of energy, and $J=0.004\ll\Gamma$.\\ When $U$ is large, the impurity is occupied by two electrons that can form a spin-triplet orbital-singlet ($S=1$, $T=0$), or a spin-singlet orbital-triplet ($S=0$, $T=1$). If $J>0$, as we assume, the lowest energy state \begin{eqs} \fract{1}{\sqrt{2}}\,\big(d^\dagger_{1\uparrow}\,d^{\phantom{\dagger}}_{2\downarrow}+d^\dagger_{2\uparrow}\,d^{\phantom{\dagger}}_{1\downarrow} \big)\ket{0}\,,\label{0} \end{eqs} has $S=0$, $T=1$, and $T_3=0$. If we regard the two orbitals as the single orbitals of two impurities, state \eqn{0} simply represents the two impurities coupled into a spin-singlet configuration. In other words, for large $U$ the Hamiltonian \eqn{HAIM} is actually equivalent to two spin-1/2 impurities, each Kondo coupled to its own bath, and coupled to each other by an antiferromagnetic exchange, which is the model studied in Ref.~\cite{Hewson-2018}. The phase diagram of this model depends on the magnitude of $J$ relative to the Kondo temperature $T_K$ at $J=0$. If $J\ll T_K$, each impurity is Kondo screened by its bath, leading to a conventional Kondo effect. On the contrary, if $J\gg T_K$, the two impurities lock into a Kondo-inert spin-singlet state. These two regimes, which we denote as 'screened' and 'unscreened' phases, are separated by a quantum critical point~\cite{Jones&Varma,*Jones&VarmaPRB}, actually a whole critical line at ${\epsilon}_d\not=0$~\cite{Lorenzo-PRB2004}. Since we work at constant $\Gamma$ and $J\ll \Gamma$, and $T_K$ decreases with increasing $U$, the critical point is reached upon increasing $U$. Specifically, with the chosen $\Gamma$ and $J$, its location is at $U_c\simeq 1.85$ when ${\epsilon}_d=0$. 
In Fig.~\ref{phase diagram} we sketch the phase diagram as a function of $U$ and ${\epsilon}_d>0$.\\ \begin{figure} \vspace{-0.5cm} \centerline{\includegraphics[width=0.45\textwidth]{Figure2.pdf}} \vspace{-0.5cm} \caption{Phase diagram of the impurity model \eqn{HAIM} at fixed hybridisation width $\Gamma$ and $J\ll \Gamma$, as a function of $U$ and ${\epsilon}_d>0$. The case ${\epsilon}_d<0$ is symmetric. } \label{phase diagram} \end{figure} The screened, $U<U_c$, and unscreened, $U>U_c$, phases are both local Fermi liquids in Nozi\`eres' sense~\cite{Nozieres-localFL}, even though the unscreened phase is not adiabatically connected to the non-interacting limit $U=J=0$. For instance, at p-h symmetry, ${\epsilon}_d=0$, the impurity self-energy in the unscreened phase diverges at the Fermi level~\cite{Lorenzo-PRB2004}, the local counterpart of a Luttinger surface, which leads to a pseudo gap in the density-of-states that is gradually filled in when ${\epsilon}_d\not= 0$~\cite{Lorenzo-PRB2004}; a behaviour totally different from that of a non-interacting resonant level model. \subsection{Fate of Luttinger's theorem in the impurity model} The Hamiltonian \eqn{HAIM} at ${\epsilon}_d\not=0$ is invariant under global spin $SU(2)$, separate charge $U(1)$ rotations in each channel $i=1,2$, each of which includes the conduction bath and the corresponding impurity level, as well as under the $Z_2$ symmetry $1\leftrightarrow 2$. If the conduction bandwidth is large enough, as we assume hereafter, the conserved quantities become effectively those at the impurity site, since the fluctuations in the bath are negligible.
The impurity Green's function is \begin{equation} \mathcal{G}(i{\epsilon}) = \fract{1}{i{\epsilon} -{\epsilon}_d +i\,\Gamma-\text{Re}\,\Sigma(i{\epsilon}) -i\,\text{Im}\,\Sigma(i{\epsilon})}\;, \end{equation} and, by symmetry, is independent of $i=1,2$ and $\sigma=\uparrow,\downarrow$, and therefore Eq.~\eqn{PT-NO-AIM} reads \begin{eqs} n_{i\sigma} &= \left(\fract{1}{2} - \fract{ \text{arg}\big(\mathcal{G}(i\infty)\big)- \text{arg}\big(\mathcal{G}(0)\big)}{ \pi}\right)\\ &\qquad - \fract{1-(-1)^\ell}{4}\; \text{sign}\big(\text{Re}\,\mathcal{G}(0)\big) \;,\label{n-2-AIM} \end{eqs} where the term in parentheses is just the conventional statement of Luttinger's theorem, which was shown in Ref.~\cite{Hewson-2018} not to yield the correct result in the unscreened phase at ${\epsilon}_d\not=0$. The last term in Eq.~\eqn{n-2-AIM}, which corrects that result when Luttinger's theorem fails, is finite only when the number $\ell$ of zeros of $\text{Re}\,\mathcal{G}(i{\epsilon})$ for $0<{\epsilon}<\infty$ is odd. Fig.~\ref{ReG} shows $\text{Re}\,\mathcal{G}(i{\epsilon})$ in the screened and unscreened phases at ${\epsilon}_d=0.1$. Not surprisingly, $\ell$ is even in the screened phase, and odd in the unscreened one, in which case the last term in Eq.~\eqn{n-2-AIM} is finite and equal to $-1/2$. For ${\epsilon}_d<0$ the correction is actually $+1/2$, since the real part of $\mathcal{G}(i{\epsilon})$ changes sign under a particle-hole transformation that brings ${\epsilon}_d\to-{\epsilon}_d$.\\ The $\pm 1/2$ correction is exactly the missing quantised term noticed in Ref.~\cite{Hewson-2018}, and thus Eq.~\eqn{n-2-AIM} does reproduce the correct electron number. \begin{figure}[ht] \vspace{-0.5cm} \centerline{\includegraphics[width=0.49\textwidth]{Figure3.pdf}} \vspace{-0.8cm} \caption{$\text{Re}\,\mathcal{G}(i{\epsilon})$ versus the Matsubara frequency ${\epsilon}$ at ${\epsilon}_d=0.1$ in the screened phase, $U=1.75$ (red curve), and in the unscreened one, $U=2$ (blue curve).
} \label{ReG} \end{figure} We note that Fig.~\ref{ReG} explicitly demonstrates that, crossing the point at which perturbation theory breaks down, $\ell$ changes by one, from $\ell=2$ in the screened phase to $\ell=1$ in the unscreened one, as earlier discussed. Besides the electron number, $n=\sum_{i\sigma}\,n_{i\sigma}$, the Hamiltonian \eqn{HAIM} admits other conserved quantities, e.g., the magnetisation $m=\sum_{i}\,(n_{i\uparrow}-n_{i\downarrow})$ and the relative orbital occupancy $n_f=\sum_{\sigma}\,(n_{1\sigma}-n_{2\sigma})$. A field that couples to any of those conserved quantities does not spoil the quantum critical point~\cite{Lorenzo-PRB2004}. We may then wonder whether conventional Luttinger's theorem also fails in providing the values of those quantities as it does for the electron number when crossing the critical point. Let us consider, for instance, the magnetisation $m$. According to Luttinger's theorem, we could calculate $m$ through \begin{eqs} m_L = \sum_{i=1}^2\, \fract{\text{arg}\big(\mathcal{G}_{i\uparrow}(0)\big)- \text{arg}\big(\mathcal{G}_{i\downarrow}(0)\big) }{ \pi}\;.\label{m_L} \end{eqs} Evidently, both $m$ and $m_L$ vanish when $SU(2)$ symmetry holds. Therefore, we add to the Hamiltonian \eqn{HAIM} with ${\epsilon}_d=0.1$ a Zeeman splitting term $-B\,m$, with very small $B=0.0001$ that nonetheless makes $\mathcal{G}_{1\uparrow}(i{\epsilon})= \mathcal{G}_{2\uparrow}(i{\epsilon})\not = \mathcal{G}_{1\downarrow}(i{\epsilon})= \mathcal{G}_{2\downarrow}(i{\epsilon})$. \begin{figure}[ht] \centerline{\includegraphics[width=0.49\textwidth]{Figure4.pdf}} \caption{The behaviour of $\Delta n=n_L-n$ and $\Delta m=m_L-m$ as function of $U$ at ${\epsilon}_d=0.1$ and $B=0.0001$. 
Here, $n_L$ and $m_L$ are, respectively, the electron number and magnetisation calculated through Luttinger's theorem, while $n$ and $m$ are their actual values.} \label{magnetic} \end{figure} In Fig.~\ref{magnetic} we show the deviation $\Delta m$ of $m_L$ in Eq.~\eqn{m_L} from the actual value $m$ as a function of $U$. For comparison, we also plot the deviation $\Delta n$ of the Luttinger's theorem prediction for the number of particles, \begin{eqs} n_L &= 4 + \sum_{i\sigma}\,\fract{\;\text{arg}\big(\mathcal{G}_{i\sigma}(0)\big)\;} {\pi}\;,\label{n_L} \end{eqs} from the correct result $n$. We observe that while $\Delta n$ jumps from 0 to 2 upon crossing the critical point, consistent with the missing contribution from the Luttinger integral, see Eq.~\eqn{n-2-AIM}, $\Delta m$ always remains zero, showing that the corresponding Luttinger integral vanishes also in the unscreened phase, despite the breakdown of perturbation theory. If we instead take $B\gg{\epsilon}_d$, the situation is reversed: $\Delta m$ jumps from 0 to $-2$, while $\Delta n$ remains zero. \\ More generally, if we add different fields ${\epsilon}_d$, $B$ and $B_f$ that couple to $n$, $m$ and $n_f$, respectively, the strongest one identifies the channel where Luttinger's theorem breaks down, whereas the theorem still applies for the other two channels. \subsection{The unscreened phase as paradigm of a pseudo-gapped metal} \label{The unscreened phase as paradigm of a pseudo-gapped metal} A great deal of effort has been put over the past decades into modelling the self-energy of the pseudo-gap phase in underdoped cuprates~\cite{Rice-PRB2006,Rice-RPP2011,Imada-PRL2011,Alexei-RPP2019}, a phase also revealed by cluster extensions of dynamical mean field theory in the Hubbard model doped away from the half-filled Mott insulator \cite{Kotliar-PRB2006,Civelli-PRL2016,Georges-PNAS2018,Georges-PRX2018}.
Since the unscreened phase of the impurity model \eqn{HAIM} is also pseudo-gapped~\cite{Lorenzo-PRB2004}, it is worth modelling its self-energy, which is easily accessible by NRG at and away from p-h symmetry, as well as at zero and finite temperature. \\ \begin{figure}[ht] \centerline{\includegraphics[width=0.49\textwidth]{{Figure5}.pdf}} \caption{Real and imaginary parts of the retarded self-energy at $U=2$, ${\epsilon}_d=0.1$ and temperatures $T=10^{-8}$, left panels, and $T=2\times 10^{-5}$, right panels, together with the ansatz \eqn{fit} with fitted parameters (dotted lines).} \label{fit-Sigma} \end{figure} We find that the retarded impurity self-energy $\Sigma_+({\epsilon}) \equiv \Sigma({\epsilon}+i0^+)$ in the unscreened phase is well fitted at low energy ${\epsilon}$ and temperature $T$ by~\cite{mio,mio-2}, see Fig.~\ref{fit-Sigma}, \begin{eqs} \Sigma_+({\epsilon}) = \fract{\Delta^2}{\;{\epsilon} - \mu + i\,\gamma\,\big({\epsilon}^2+\pi^2\,T^2\big)\;}\;, \label{fit} \end{eqs} where the real parameters $\Delta$, $\mu$ and $\gamma$ depend on $U$, $T$ and on the strength ${\epsilon}_d$ of the p-h symmetry breaking term. In particular, $\Delta^2$ and $1/\gamma$ vanish quadratically approaching the critical line $U=U_c$~\cite{Lorenzo-PRB2004}, while, consistently with Fig.~\ref{ReG}, $\mu$ has the same sign as ${\epsilon}_d$ and vanishes at ${\epsilon}_d=0$. In Fig.~\ref{fit-par} we show the parameters $\Delta$ and $\gamma$ extracted from the fit as functions of $T$ for different $U>U_c$ at ${\epsilon}_d=0.1$. \\ \begin{figure}[ht] \centerline{\includegraphics[width=0.49\textwidth]{Figure6.pdf}} \vspace{-0.4cm} \caption{Temperature dependence of the parameters $\Delta$ and $\gamma$ in Eq.~\eqn{fit} fitted through NRG results at ${\epsilon}_d=0.1$ and different $U>U_c$.
We note that both $\Delta^2$ and $1/\gamma$ vanish approaching $U_c$, although $1/\gamma \ll \Delta^2$.} \label{fit-par} \end{figure} \noindent At $T={\epsilon}_d=0$, \begin{eqs} \Sigma_+({\epsilon}) = \fract{\Delta^2}{\;{\epsilon} + i\,\gamma\,{\epsilon}^2\;}\underset{{\epsilon}\to 0}{\simeq} \fract{\Delta^2}{\;{\epsilon}\;} - i\,\gamma\,\Delta^2\,,\label{Sigma-ph} \end{eqs} corresponds to the highly singular expression found in Ref.~\cite{Lorenzo-PRB2004}, which, as earlier mentioned, is the local counterpart of a Luttinger surface. On the contrary, at ${\epsilon}_d\not= 0$ and for ${\epsilon},T \ll \mu$, \begin{equation} \Sigma_+({\epsilon}) \simeq -\fract{\Delta^2}{\mu} - \fract{\Delta^2}{\mu^2}\;{\epsilon} -i\,\fract{\Delta^2\,\gamma}{\mu^2}\;\big({\epsilon}^2+\pi^2\,T^2\big)\,,\label{Sigma-no-ph} \end{equation} has a conventional Fermi-liquid behaviour, despite the pseudo-gap in the spectral function, and the `Luttinger surface' has disappeared.\\ The quantum critical point entails the existence at finite temperature of a quantum critical region delimited by a crossover temperature $T_*$ that, in the unscreened phase, can be identified with the temperature below which the pseudo gap opens, see Fig.~\ref{DOS.vs.T} where we plot the impurity density-of-states (DOS) $\rho({\epsilon})$, at ${\epsilon}_d=0.1$, $U=2$ and different $T$. \begin{figure}[hbt] \centerline{\includegraphics[width=0.4\textwidth]{Figure7.pdf}} \caption{Impurity density of states at $U=2$, ${\epsilon}_d=0.1$ and different temperatures below and above the pseudogap temperature $T_*$.} \label{DOS.vs.T} \end{figure} The phase diagram in Fig.~\ref{phase diagram} shows that the critical line can also be crossed starting from the unscreened phase at particle-hole symmetry and raising ${\epsilon}_d$, namely, by doping.
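The Fermi-liquid expansion \eqn{Sigma-no-ph} of the ansatz \eqn{fit} is straightforward to check numerically; the parameter values below are invented for illustration and are not the NRG fits:

```python
import math

# Check that Sigma_+(e) = Delta^2/(e - mu + i*gamma*(e^2 + pi^2 T^2)) of
# Eq. (fit) reduces, for e, T << mu, to the Fermi-liquid form Eq. (Sigma-no-ph).
# Delta, mu, gamma are illustrative values, not the NRG fit parameters.
Delta, mu, gamma = 0.2, 0.1, 5.0

def sigma_fit(e, T):
    return Delta**2 / (e - mu + 1j * gamma * (e**2 + (math.pi * T)**2))

def sigma_FL(e, T):
    w = e**2 + (math.pi * T)**2
    return -Delta**2 / mu - Delta**2 / mu**2 * e - 1j * Delta**2 * gamma / mu**2 * w

e, T = 1e-3, 1e-4
err = abs(sigma_fit(e, T) - sigma_FL(e, T)) / abs(sigma_fit(e, T))
print(err)    # relative error of order (e/mu)^2
```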
In Fig.~\ref{mu T} we show how the parameter $\mu$ behaves as a function of ${\epsilon}_d$ from 0 up to the critical point ${\epsilon}_d\simeq 0.26$ at $U=2$ and almost zero temperature.\\ \begin{figure}[hbt] \centerline{\includegraphics[width=0.4\textwidth]{Figure9.pdf}} \caption{Parameter $\mu$ in Eq.~\eqn{fit} at $T=10^{-8}$ and $U=2$ as a function of ${\epsilon}_d$ from 0 to above the critical point, see Fig.~\ref{phase diagram}.} \label{mu T} \end{figure} We remark that a key feature of the self-energy \eqn{fit} is the imaginary part in the denominator, i.e., $\gamma\big({\epsilon}^2+\pi^2\,T^2\big)$, vanishing quadratically for ${\epsilon},T\to 0$, see Fig.~\ref{fit-ima}. This guarantees the existence of a well-defined `quasiparticle' excitation, namely one whose decay rate at $T=0$, \begin{eqs} \gamma({\epsilon}) &\equiv -Z({\epsilon})\,\text{Im}\,\Sigma_+({\epsilon})\propto {\epsilon}^2\,, \end{eqs} with \begin{eqs} Z({\epsilon}) = \left(1-\fract{\partial\text{Re}\,\Sigma_+({\epsilon})}{\partial{\epsilon}}\right)^{-1}\,, \end{eqs} vanishes at zero energy, even in the singular case at p-h symmetry~\cite{mio,mio-2}. Such a property distinguishes Eq.~\eqn{fit} from all model self-energies introduced to describe the pseudo-gap phase of underdoped cuprates, where the imaginary part is either missing or assumed to be constant. We believe that Eq.~\eqn{fit}, though referring to a specific impurity model, is actually representative of generic pseudo-gap metal phases, and thus can be regarded as paradigmatic of such physical systems. \begin{figure}[hbt] \centerline{\includegraphics[width=0.45\textwidth]{{Figure10}.pdf}} \caption{ Imaginary part of the self-energy in Eq.~\eqn{Sigma-no-ph} divided by $\Delta^2\,\gamma\,\big({\epsilon}^2 +\pi^2\,T^2\big)$ as a function of $\big({\epsilon}^2+\pi^2\,T^2\big)$ for different $T$ at ${\epsilon}_d=0.1$ and $U=2$.
Note the collapse of all curves at low Matsubara frequencies.} \label{fit-ima} \end{figure} \section{Luttinger's theorem in a non-magnetic Mott insulator} \label{Rosh} We now discuss the failure of Luttinger's theorem in the non-magnetic two-orbital Mott insulator analysed in Ref.~\cite{Rosh-2007}. The model is essentially the bulk generalisation of the two-orbital Anderson impurity model \eqn{HAIM}, with $U$ and $J$ much larger than the width $W$ of the conduction band, whose dispersion is assumed to be ${\epsilon}_{ij}(\mathbf{k})=\delta_{ij}\,{\epsilon}_\mathbf{k}$, with $i,j=1,2$ the orbital indices. In that limit and for small enough deviation $\mu$ from particle-hole symmetry, the ground state of the model represents a non-magnetic Mott insulator with two electrons per site, each in a different orbital, locked into a spin singlet. The spin and orbital independent Green's function is \begin{eqs} G(i{\epsilon},\mathbf{k}) &= \fract{1}{\;i{\epsilon} -{\epsilon}_\mathbf{k}+\mu-\Sigma(i{\epsilon},\mathbf{k})\;}\;,\label{G-Rosh} \end{eqs} where, absorbing the Hartree-Fock self-energy in $\mu$, and to leading order in the hopping~\cite{Rosh-2007}, \begin{eqs} \Sigma(i{\epsilon},\mathbf{k}) &\simeq \fract{\Delta^2}{i{\epsilon}+\mu}\;,\label{Sigma-Rosh} \end{eqs} with $2\Delta= U+6J$ for the Hamiltonian \eqn{HAIM}. Rigorously speaking, the expression \eqn{Sigma-Rosh} is valid if also $|\mu| \gg W$, otherwise additional $\mathbf{k}$-dependent terms appear in the denominator~\cite{Rosh-2007}. Therefore, we hereafter assume consistently that \begin{eqs} U\,,J\, ,|\mu| \gg W\,,\label{Rosh-assume} \end{eqs} which also implies that a Luttinger surface is absent. The Green's function \eqn{G-Rosh} describes an insulator lacking a Fermi surface if \begin{eqs} \text{Re}\,G(0,\mathbf{k})^{-1}=G(0,\mathbf{k})^{-1} &= -{\epsilon}_\mathbf{k}+\mu -\fract{\Delta^2}{\mu}\;, \end{eqs} never vanishes within the Brillouin zone. 
That implies either $0<\mu<\mu_+$, in which case $G(0,\mathbf{k})^{-1}<0$, or $\mu_-<\mu<0$, in which case $G(0,\mathbf{k})^{-1}>0$, where~\cite{Rosh-2007} \begin{equation} \mu_++\mu_- = {\epsilon}_\mathbf{k}\,,\quad \mu_+-\mu_- = \sqrt{{\epsilon}_\mathbf{k}^2 + 4\Delta^2\;}\simeq 2\Delta\,. \end{equation} If we used the conventional Luttinger theorem, according to which the number of electrons per site is simply $n = 2 + 2\,\text{sign}\big( G(0,\mathbf{k})^{-1}\big)$, we would obtain the wrong result that $n=4$ if $\mu_-<\mu<0$ and $n=0$ if $0<\mu<\mu_+$. However, in this case we can use Eq.~\eqn{PT-NO-AIM} to calculate the correct electron number. Indeed, since \begin{eqs} \lim_{{\epsilon}\to\infty}\text{Re}\,G(i{\epsilon},\mathbf{k})^{-1} &= \mu-{\epsilon}_\mathbf{k} \simeq \mu\,, \end{eqs} $\text{Re}\,G(i{\epsilon},\mathbf{k})^{-1}$ crosses zero an odd number of times from ${\epsilon}=0$ to ${\epsilon}=\infty$. According to Eq.~\eqn{PT-NO-AIM}, valid for a local self-energy close to half-filling, that implies $n_{1\mathbf{k}\sigma}+n_{2\mathbf{k}\sigma} =1$, which is indeed correct. \\ If the condition $|\mu|\gg W$ is not fulfilled, the no longer negligible momentum-dependent terms in the denominator of the self-energy Eq.~\eqn{Sigma-Rosh} yield a true Luttinger surface for a small interval of $\mu$ around zero~\cite{Rosh-2007}, in which case Eq.~\eqn{Oshikawa result} provides the correct electron number. \subsection{Atomic limit of the $SU(N)$ Hubbard model} \label{SU(N)} At $J=0$ the previous model becomes the $N=4$ $SU(N)$ Hubbard model at half-filling, which admits, for strong enough $U$, a Mott insulating state at any integer density $n=1,\dots,N-1$. In the atomic limit, $W =0$, this model too strongly violates Luttinger's theorem~\cite{Phillips-PRL2013}. However, the ground state in the atomic limit has an extensive degeneracy, $\binom{N}{n}$ per site, and thus divergent susceptibilities.
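The odd-crossing argument above can be illustrated with a short numerical sketch. The values of $\Delta$, $\mu$ and ${\epsilon}_\mathbf{k}$ below are made up, chosen only to realise the regime $\Delta\gg W$ with $0<\mu<\mu_+$.

```python
# sign changes of Re G^{-1}(i*eps, k) along the imaginary axis, with the
# self-energy Sigma(i*eps) = Delta^2/(i*eps + mu) of Eq. (Sigma-Rosh)
Delta, mu, ek = 5.0, 0.3, 0.2   # illustrative: Delta >> W, 0 < mu < mu_+

def re_G_inv(eps):
    # Re[i*eps - ek + mu - Delta^2/(i*eps + mu)]
    return -ek + mu - Delta**2 * mu / (eps**2 + mu**2)

grid = [0.01 * n for n in range(10_001)]   # eps from 0 to 100
crossings = sum(1 for a, b in zip(grid, grid[1:])
                if re_G_inv(a) * re_G_inv(b) < 0)

assert re_G_inv(grid[0]) < 0 < re_G_inv(grid[-1])
assert crossings % 2 == 1   # an odd number of zero crossings
```

Here there is a single crossing, consistent with the sign change between ${\epsilon}=0$ and ${\epsilon}\to\infty$.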
In this situation, one does not expect Luttinger's theorem to apply~\cite{Igor-PRB2003,Potthoff-2006}.\\ Nonetheless, to make a connection with the previous discussion, we note that the sum of the $N$ local Green's functions in the atomic limit at $T=0$~\cite{Phillips-PRL2013}, \begin{eqs} N\,G(i{\epsilon}) &= \fract{n}{\;i{\epsilon}+{\epsilon}_-\;}+\fract{N-n}{\;i{\epsilon}-{\epsilon}_+\;}\\ &= \fract{\partial}{\partial i{\epsilon}}\,\ln\big(i{\epsilon}+{\epsilon}_-\big)^n\, \big(i{\epsilon}-{\epsilon}_+\big)^{N-n}\\ &\equiv -\fract{\partial \ln G_N(i{\epsilon})}{\partial i{\epsilon}}\,, \label{G-SU(N)} \end{eqs} where ${\epsilon}_+=Un-\mu > 0$ and ${\epsilon}_-=\mu-U(n-1)>0$ are, respectively, the energies for adding an electron to and removing an electron from the atomic $n$-particle ground state. Therefore, \begin{eqs}\nonumber N\,G(i{\epsilon})\,\fract{\partial\Sigma(i{\epsilon})}{\partial i{\epsilon}} &= N\,G(i{\epsilon}) + \fract{\partial\ln G(i{\epsilon})^N}{\partial i{\epsilon}} \\ &= \fract{\partial}{\partial i{\epsilon}}\,\ln \fract{\;G(i{\epsilon})^N\;}{G_N(i{\epsilon})}\;, \end{eqs} is consistent with Eq.~\eqn{I_L-deriv} and quantised. In this case, it trivially follows that the role of the Luttinger integral is to freeze the occupation per orbital at $n/N$ rather than at the value predicted by Luttinger's theorem, which is either 0 or $1$~\cite{Phillips-PRL2013} depending on $\mu$, similarly to what we found close to a half-filled Mott insulator. This suggests a natural extension of our results to multi-band models close to a Mott insulator at fractional filling $n/N$. \\ We end by observing that $G_N(i{\epsilon})$ is equivalent to the determinant of the $N\times N$ Green's function matrix corresponding to the same Hamiltonian but in the presence of an infinitesimally small symmetry breaking field that lowers $n$ orbitals with respect to the other $N-n$ ones.
In this case, Luttinger's theorem does hold, as Logan \textit{et al.} have explicitly demonstrated in the simpler $SU(2)$ case~\cite{Logan-JPC2015}. \section{Discussion} We have shown that the Luttinger integral, which provides the missing contribution to the electron count when Luttinger's theorem is violated, is a boundary zero-energy term and is quantised in integer values when the self-energy is analytic at any non-zero imaginary frequency. Specifically, in a periodic single-band model of interacting electrons, Luttinger's theorem is violated when perturbation theory breaks down and a Luttinger surface appears in the Brillouin zone. Properly taking into account the quantised contribution from the Luttinger surface, we have found that the volume fraction of the Fermi pockets only measures the doping fraction away from half-filling rather than the full filling fraction. \\ In addition, a by-product of our derivation of Luttinger's theorem is the prediction that quasiparticles do exist even in half-filled non-symmetry-breaking Mott insulators, provided they possess a Luttinger surface, thus extending the results of Ref.~\cite{mio-2} to the case of a hard gap. We emphasise that our formal construction in Sec.~\ref{Luttinger's theorem} relies just on the assumption \eqn{LT: condition quasiparticles}, with no reference to a model Hamiltonian. However, the analogy with the so-called $U(1)$ spin-liquid insulators~\cite{Moessner-PRL2001,Motrunich-PRL2002,Fisher&Balents-PRB2004,Balents-PRL2008,Senthil-PRX2016} is self-evident, and suggests that the quasiparticles are actually spinons, and the Luttinger surface their `Fermi' surface. \begin{acknowledgments} We acknowledge helpful discussions with Alessandro Toschi. This work received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, Grant agreement No. 692670 ``FIRSTORM''. \end{acknowledgments} \bibliographystyle{apsrev4-2}
\section{Introduction} The Penrose process (PP) \cite{pen} is one of the remarkable physical effects typical of general relativity and other theories of gravity. Let us suppose that in a space-time there exists a region (called the ergosphere) where the Killing energy $E$ measured at infinity can be negative. If a particle 0 within the ergosphere splits into two fragments 1 and 2, one of them can have $E_{1}<0$. Then, the conservation of energy entails that the second fragment has $E_{2}>E_{0}$. This is just the PP. Originally, it was found in the vicinity of rotating black holes. However, it was understood later that a similar effect should occur in the background of static charged black holes \cite{ruf}. It was investigated further in detail \cite{den} - \cite{dad3}. Moreover, there exists a flat space-time limit in which the effect under discussion persists \cite{df}, \cite{flat}. If the energy of a new particle 2 can be arbitrarily large, such a kind of PP is called the super-Penrose process (SPP). In the recent decade, a new avenue for the PP became popular after the discovery of the so-called Ba\~{n}ados-Silk-West (BSW) effect \cite{ban}. It consists in the acceleration of particles to high energies due to collisions near black holes. If two particles collide near the horizon, under certain conditions their energy $E_{c.m.}$ in the center of mass frame can become indefinitely large. However, one should not confuse $E_{c.m.}$ and $E$. It turns out that for collisions of neutral particles near black holes the PP does indeed occur (it is called the collisional Penrose process) but the SPP is forbidden (see, e.g. \cite{is} and references therein). However, the SPP is indeed possible for charged black holes, even for pure radial particle motion in the Reissner-Nordstr\"{o}m (RN) background \cite{rn}, \cite{nem}. (More general scenarios can include both the electric charge and rotation \cite{hlz}.) In the present work, we show that there exists one more type of the SPP.
It is realized in the background of naked singularities. This is different from the high energy processes near the RN naked singularity considered earlier \cite{naked}, \cite{ns}. In the aforementioned papers (i) collisions of shells were studied, (ii) the high energy process implies high $E_{c.m.}$. Meanwhile, we show that (i) the effect under discussion is valid for test particles, (ii) it involves ultra-high energies $E$, (iii) there are no collisions at all; the process represents a standard PP, not a collisional one. The examples of the PP known before share a common feature. If one wants to gain large energy in the PP, the electric charge should also be large. This concerns the standard and collisional PP as well as the confined one \cite{conf}. However, the electric charge of elementary particles, atoms or nuclei cannot be arbitrarily large \cite{rn}, \cite{axis}. Also, there exist similar restrictions for macroscopic bodies \cite{nem}. Meanwhile, we demonstrate that for naked singularities not only the PP but also the SPP does exist for a finite value of the particle charge. As is known, the electric charge of astrophysical objects is rather small. (Although it can, in principle, lead to observable effects \cite{tur}.) However, in some respects the electric charge can model what happens in more complicated realistic astrophysical systems with rotation. Meanwhile, the RN metric is much simpler than, say, the Kerr metric describing the vacuum solution of the Einstein equations with rotation. Anyway, the PP is one of the most universal and nontrivial processes in gravitating systems, so it is necessary to study all its potential manifestations. We use the geometric system of units in which the fundamental constants $G=c=1$.
\section{Basic equations} We consider the Reissner-Nordstr\"{o}m metric \begin{equation} ds^{2}=-dt^{2}f+\frac{dr^{2}}{f}+r^{2}d\omega ^{2}\text{,} \end{equation} where $d\omega ^{2}=d\theta ^{2}+\sin ^{2}\theta d\phi ^{2}$ and \begin{equation} f=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\text{.} \end{equation} Here $M$ and $Q$ are the mass and electric charge, respectively. We take $Q>0$. We assume that $M<Q$, so there is a naked singularity at $r=0$ in this space-time. Now, we consider the motion of test particles. We restrict ourselves to pure radial motion. This is sufficient to demonstrate that the effect under discussion does exist. Then, the equations of motion read \begin{equation} m\dot{t}=\frac{X}{f}, \end{equation} \begin{equation} X=E-q\varphi \text{,} \label{X} \end{equation} \begin{equation} m\dot{r}=\sigma P\text{, }P=m\sqrt{U}\text{, }U=\frac{X^{2}}{m^{2}}-f, \label{P} \end{equation} where $\sigma =\pm 1$ depending on the direction of motion, and the dot denotes differentiation with respect to the proper time $\tau $. Here, $q$ is the particle's electric charge, $m$ its mass, and $E$ the energy. The forward-in-time condition $\dot{t}>0$ entails \begin{equation} X>0. \label{ft} \end{equation} The electric Coulomb potential is \begin{equation} \varphi =\frac{Q}{r}\text{.} \label{pot} \end{equation} Hereafter, we use the notations $\varepsilon =\frac{E}{m}$, $\tilde{q}=\frac{q}{m}$. Then \begin{equation} U(r)=(\varepsilon -\frac{\tilde{q}Q}{r})^{2}-(1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}})\text{.} \label{U} \end{equation} Then, the possible turning points $r_{t}$ can be found from the condition $P=0$, whence \begin{equation} r_{t}^{\pm }=\frac{1}{\varepsilon ^{2}-1}(\varepsilon \tilde{q}Q-M\pm \sqrt{C})\text{,} \label{tp} \end{equation} \begin{equation} C=(M-\varepsilon \tilde{q}Q)^{2}+(1-\tilde{q}^{2})Q^{2}(\varepsilon ^{2}-1).
\label{C} \end{equation} \section{Scenario of decay} In what follows, we will be mainly interested in the situation when the decay occurs near the singularity, so the point of decay $r_{0}\rightarrow 0$. Then, for fixed $q_{0}$ and $E_{0}$, the forward-in-time condition (\ref{ft}) requires $q_{0}<0$. Assuming also that the particle moves from infinity, so $\varepsilon _{0}>1$, we see from (\ref{tp}) that no more than one turning point $r_{t}^{+}$ can exist, provided $\left\vert \tilde{q}_{0}\right\vert <1$. Let particle 0 with $\varepsilon_{0} >1$ fall from infinity. At some point $r_{0}$ it decays into two new fragments 1 and 2. We assume the conservation of the energy and electric charge at the point of decay, so \begin{equation} E_{0}=E_{1}+E_{2}\text{,} \label{e} \end{equation} \begin{equation} q_{0}=q_{1}+q_{2}\text{.} \label{q} \end{equation} The necessary condition that makes the decay possible is \begin{equation} m_{0}\geq m_{1}+m_{2}\text{.} \label{m} \end{equation} For given characteristics $E_{0}$, $m_{0}$, $q_{0}$ of particle 0, one can solve (\ref{e}), (\ref{q}) with (\ref{P}) taken into account. We can take advantage of already obtained results - see eqs. (19) - (25) of Ref. \cite{centr}. Only minimal changes are required: (i) instead of indices 3, 4 we use here 1, 2, (ii) the quantity $X$ is defined according to (\ref{X}) instead of eq. (5) of \cite{centr}.
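Before proceeding, the turning-point formula (\ref{tp}) with (\ref{C}) can be checked directly against the effective potential (\ref{U}). The parameter values in the sketch below are purely illustrative.

```python
import math

# Illustrative naked-singularity parameters in geometric units (M < Q)
M, Q = 0.8, 1.0
eps, qt = 1.2, -0.5        # eps = E/m > 1 and qt = q/m < 0, |qt| < 1

def U(r):
    # effective potential of Eq. (U)
    return (eps - qt * Q / r)**2 - (1.0 - 2.0 * M / r + Q**2 / r**2)

# turning points r_t^{+-} of Eq. (tp), with C from Eq. (C)
C = (M - eps * qt * Q)**2 + (1.0 - qt**2) * Q**2 * (eps**2 - 1.0)
rt_plus = (eps * qt * Q - M + math.sqrt(C)) / (eps**2 - 1.0)
rt_minus = (eps * qt * Q - M - math.sqrt(C)) / (eps**2 - 1.0)

# for q0 < 0 and |qt| < 1 only r_t^+ is positive, i.e. physical
assert rt_plus > 0 > rt_minus
assert abs(U(rt_plus)) < 1e-9   # U vanishes at the turning point
```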
Then \begin{equation} X_{1}=\frac{1}{2m_{0}^{2}}\left( X_{0}\Delta _{+}+P_{0}\delta \sqrt{D}\right) , \label{X1} \end{equation} \begin{equation} X_{2}=\frac{1}{2m_{0}^{2}}\left( X_{0}\Delta _{-}-P_{0}\delta \sqrt{D}\right) , \label{X2} \end{equation} \begin{equation} P_{1}=\left\vert \frac{P_{0}\Delta _{+}+\delta X_{0}\sqrt{D}}{2m_{0}^{2}}\right\vert \text{,} \end{equation} \begin{equation} P_{2}=\left\vert \frac{P_{0}\Delta _{-}-\delta X_{0}\sqrt{D}}{2m_{0}^{2}}\right\vert , \label{P2} \end{equation} where $\delta =\pm 1$, \begin{equation} \Delta _{\pm }=m_{0}^{2}\pm (m_{1}^{2}-m_{2}^{2}), \end{equation} \begin{equation} D=\Delta _{+}^{2}-4m_{0}^{2}m_{1}^{2}=\Delta _{-}^{2}-4m_{0}^{2}m_{2}^{2}. \label{D} \end{equation} It follows from (\ref{m}) that $D\geq 0$. The equality holds only if $m_{0}=m_{1}+m_{2}$. The solutions (\ref{X1}) - (\ref{D}) are classified according to the four quantities $(\sigma _{2}$, $h_{2}$, $h_{1}$, $\delta )$. The corresponding allowed combinations are listed in eq. (30) of \cite{centr}. (A reader should bear in mind that the role of particle 3 in \cite{centr} is played now by particle 2.) Here, \begin{equation} h_{1}=\mathrm{sgn}\,H_{1}\text{, }H_{1}=\Delta _{+}\sqrt{f}-2m_{1}X_{0}\text{,} \end{equation} \begin{equation} h_{2}=\mathrm{sgn}\,H_{2}\text{, }H_{2}=\Delta _{-}\sqrt{f}-2m_{2}X_{0}\text{.} \label{H2} \end{equation} We are interested in the situation when particle 2 escapes. If after the decay it moves immediately to infinity, $\sigma _{2}=+1$. We will consider this type of scenario first. (Afterwards, we will also discuss an alternative scenario when particle 2 bounces back from the potential barrier.) Then, according to \cite{centr}, there are only two possibilities: $1(+,+,+,-)$ and $2(+,+,-,-)$. Thus $\delta =-1$ and we should also have \begin{equation} H_{2}>0 \label{h2} \end{equation} while $H_{1}$ can have any sign.
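The solutions (\ref{X1}) - (\ref{P2}) automatically satisfy the conservation laws at the decay point and put both fragments on shell. This can be verified numerically; the parameter values below are again purely illustrative.

```python
import math

# Decay kinematics at radius r0 in the RN background; all numbers illustrative
M, Q = 0.8, 1.0
m0, m1, m2 = 1.0, 0.3, 0.5          # m0 >= m1 + m2, so D >= 0
E0, q0, r0 = 1.5, -2.0, 0.1
delta = -1.0

f = 1.0 - 2.0 * M / r0 + Q**2 / r0**2
X0 = E0 + abs(q0) * Q / r0          # q0 < 0, hence X0 = E0 - q0*Q/r0 > 0
P0 = math.sqrt(X0**2 - m0**2 * f)

Dp = m0**2 + (m1**2 - m2**2)        # Delta_+
Dm = m0**2 - (m1**2 - m2**2)        # Delta_-
D = Dp**2 - 4.0 * m0**2 * m1**2

X1 = (X0 * Dp + P0 * delta * math.sqrt(D)) / (2.0 * m0**2)
X2 = (X0 * Dm - P0 * delta * math.sqrt(D)) / (2.0 * m0**2)
# signed radial momenta (P_1 and P_2 are their absolute values)
p1 = (P0 * Dp + delta * X0 * math.sqrt(D)) / (2.0 * m0**2)
p2 = (P0 * Dm - delta * X0 * math.sqrt(D)) / (2.0 * m0**2)

assert abs((X1 + X2) - X0) < 1e-9   # conservation of energy and charge
assert abs((p1 + p2) - P0) < 1e-9   # conservation of radial momentum
assert abs((X1**2 - p1**2) - m1**2 * f) < 1e-8   # fragment 1 on shell
assert abs((X2**2 - p2**2) - m2**2 * f) < 1e-8   # fragment 2 on shell
```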
\section{Energy extraction} The Penrose process implies that particle 1 moves towards the center with $E_{1}<0$, whereas particle 2 escapes to infinity with $E_{2}>0$. In doing so, $\sigma _{1}=-1$ and $\sigma _{2}=+1$. Our goal is to make the energy extraction as large as possible. According to (\ref{X}), \begin{equation} E_{2}=X_{2}+\frac{q_{2}Q}{r_{0}}. \label{e2} \end{equation} Taking into account (\ref{ft}), we see that if $q_{2}>0$ and $r_{0}\rightarrow 0$, the energy $E_{2}$ is unbounded. Is it possible to achieve this goal in our scenario? When $r_{0}\rightarrow 0$, condition (\ref{ft}) requires $q_{0}<0$. Thus we should have \begin{equation} q_{0}<0\text{, }q_{2}>0\text{.} \end{equation} Further, we should consider two different cases depending on whether or not there is a turning point for particle 0. \subsection{No turning point} This case is realized if $\left\vert \tilde{q}_{0}\right\vert >1$ since both roots (\ref{tp}) become negative. Then, we can take the limit $r_{0}\rightarrow 0$ directly. In this limit, it follows from (\ref{X}), (\ref{P}), (\ref{pot}) that \begin{equation} X_{0}\approx \frac{\left\vert q_{0}\right\vert Q}{r_{0}}, \label{xoa} \end{equation} \begin{equation} P_{0}\approx \frac{Q}{r_{0}}\sqrt{q_{0}^{2}-m_{0}^{2}}. \label{pa} \end{equation} Now, using (\ref{X2}), (\ref{e2}), (\ref{xoa}) and (\ref{pa}), one obtains in the main approximation \begin{equation} E_{2}\approx \frac{Q}{r_{0}}[q_{2}+\frac{1}{2m_{0}^{2}}(\left\vert q_{0}\right\vert \Delta _{-}+\sqrt{q_{0}^{2}-m_{0}^{2}}\sqrt{D})] \label{no} \end{equation} and $E_{2}\rightarrow \infty $ when $r_{0}\rightarrow 0$. Now \begin{equation} H_{2}\approx \frac{Q}{r_{0}}(\Delta _{-}-2m_{2}\left\vert q_{0}\right\vert ). \end{equation} Eq. (\ref{h2}) is valid if \begin{equation} \left\vert q_{0}\right\vert <\frac{\Delta _{-}}{2m_{2}}\text{.} \end{equation} It is consistent with $\left\vert \tilde{q}_{0}\right\vert >1$. This is not the end of the story.
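The leading-order formula (\ref{no}) can be cross-checked against the exact decay kinematics at small $r_{0}$; a minimal sketch with made-up parameter values:

```python
import math

# Cross-check of the leading-order formula (no) against exact kinematics.
# All parameter values are made up for illustration only.
M, Q = 0.8, 1.0
m0, m1, m2 = 1.0, 0.3, 0.5        # m0 >= m1 + m2
E0, q0, q2 = 1.5, -2.0, 0.7       # |q0|/m0 > 1 (no turning point), q2 > 0
r0 = 1e-6                          # decay point close to the singularity

f = 1.0 - 2.0 * M / r0 + Q**2 / r0**2
X0 = E0 + abs(q0) * Q / r0
P0 = math.sqrt(X0**2 - m0**2 * f)
Dm = m0**2 - (m1**2 - m2**2)                    # Delta_-
D = (m0**2 + m1**2 - m2**2)**2 - 4.0 * m0**2 * m1**2

# exact energy of the escaping fragment (delta = -1 branch)
X2 = (X0 * Dm + P0 * math.sqrt(D)) / (2.0 * m0**2)
E2_exact = X2 + q2 * Q / r0

# leading-order estimate, Eq. (no)
E2_est = (Q / r0) * (q2 + (abs(q0) * Dm
          + math.sqrt(q0**2 - m0**2) * math.sqrt(D)) / (2.0 * m0**2))

assert E2_exact > 0
assert abs(E2_exact - E2_est) / E2_est < 1e-3
```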
We must check that particle 2 does escape, i.e., that it has no new turning point, so $U>0$ for all $r>r_{0}$. We should verify that $U_{2}(r)>U_{2}(r_{0})$ for any $r>r_{0}$. Using (\ref{U}), it is easy to find that \begin{equation} U_{2}(r)-U_{2}(r_{0})\approx \frac{(r-r_{0})}{rr_{0}^{2}}B\text{, } \end{equation} \begin{equation} B=Q^{2}[1+\tilde{q}_{2}^{2}+\frac{r_{0}(1-\tilde{q}_{2}^{2})}{r}+\frac{q_{2}}{m_{2}^{2}m_{0}^{2}}(\left\vert q_{0}\right\vert \Delta _{-}+\sqrt{q_{0}^{2}-m_{0}^{2}}\sqrt{D})]+2(\frac{E_{0}q_{2}Q}{m_{2}^{2}}-M)r_{0}. \end{equation} Obviously, if $r_{0}\rightarrow 0$, $B>0$ for any $r>r_{0}$. Thus, \begin{equation} U_{2}(r)>U_{2}(r_{0})>0 \label{U2} \end{equation} and there are no additional turning points, so particle 2 escapes to infinity freely. \subsection{Decay in the turning point} Let us suppose now that the turning point for particle 0 does exist. As we try to obtain the maximum possible $E_{2}$, it makes sense to choose (for given values of the other parameters) the minimum possible value of $r_{0}$. To this end, we put $r_{0}=r_{t}^{+}$. We assumed (as explained above) that $q_{0}<0$. Then, the existence of a turning point requires $\left\vert \tilde{q}_{0}\right\vert \leq 1$. In this point we have $P_{0}=0$ by definition, so it follows from (\ref{X1}) - (\ref{P2}) that \begin{equation} X_{1}=\frac{X_{0}\Delta _{+}}{2m_{0}^{2}}\text{,} \label{x1} \end{equation} \begin{equation} X_{2}=\frac{X_{0}\Delta _{-}}{2m_{0}^{2}}, \label{x2} \end{equation} \begin{equation} P_{1}=P_{2}=\frac{X_{0}}{2m_{0}^{2}}\sqrt{D}\text{,} \label{p2} \end{equation} where now \begin{equation} X_{0}=E_{0}+\frac{\left\vert q_{0}\right\vert Q}{r_{0}}, \label{x0} \end{equation} and \begin{equation} E_{2}=\frac{E_{0}\Delta _{-}}{2m_{0}^{2}}+\frac{Q}{r_{0}}(q_{2}+\left\vert q_{0}\right\vert \frac{\Delta _{-}}{2m_{0}^{2}})\text{.} \label{e2t} \end{equation} In doing so, $X_{0}>0$ for any $r_{0}$ due to $q_{0}<0$, so condition (\ref{ft}) holds.
As we want to minimize $r_{0}$, we choose \begin{equation} \left\vert \tilde{q}_{0}\right\vert =1-\beta \text{, }\beta \ll 1\text{.} \label{qb} \end{equation} Then, it follows from (\ref{tp}) that \begin{equation} r_{0}\approx \frac{Q^{2}\beta }{M+\varepsilon Q}\text{,} \end{equation} which can be made as small as one likes. As a result, we have from (\ref{e2t}) that \begin{equation} E_{2}\approx \frac{(M+\varepsilon Q)}{Q\beta }(q_{2}+\left\vert q_{0}\right\vert \frac{\Delta _{-}}{2m_{0}^{2}}). \end{equation} When $\beta \rightarrow 0$, $E_{2}\rightarrow \infty $, so the SPP does exist. Eq. (\ref{U2}) is valid in the case under consideration as well. It is worth noting that if $q=0$, the expression (\ref{tp}) coincides with eq. (13) of \cite{naked}. However, we saw that in both versions of the scenario under consideration (with a turning point or without it) it is essential for the SPP that $q_{0}\neq 0$. Thus, this process is possible for charged particles and is absent for neutral ones. \section{Alternative type of scenario} For completeness, we must consider the case when particle 2 after the decay moves in the same direction as particle 1, so $r$ continues to decrease. However, immediately after the decay particle 2 bounces back from the potential barrier. This means that $r_{0}$ is the turning point for particle 2, so $P_{2}=0$ and, therefore, \begin{equation} X_{2}=m_{2}\sqrt{f}\text{.} \end{equation} When $r_{0}\rightarrow 0$, \begin{equation} X_{2}\approx \frac{m_{2}Q}{r_{0}}\text{.} \label{x2a} \end{equation} According to (\ref{P2}), we must take $\delta =+1$ and \begin{equation} P_{0}\Delta _{-}=X_{0}\sqrt{D}\text{.} \end{equation} It is easy to check that this is equivalent to $H_{2}=0$ in (\ref{H2}), so \begin{equation} \Delta _{-}\sqrt{f}=2m_{2}X_{0}\text{.} \label{h20} \end{equation} If $q_{0}<0$, there is no turning point for particle 0.
In the limit $r_{0}\rightarrow 0$, $X_{0}\approx \frac{\left\vert q_{0}\right\vert Q}{r_{0}}$, $\sqrt{f}\approx \frac{Q}{r_{0}}$, and we obtain from (\ref{h20}) \begin{equation} \left\vert q_{0}\right\vert \approx \frac{\Delta _{-}}{2m_{2}}\text{.} \end{equation} If $q_{0}>0$, choosing $r_{0}$ to be a turning point for particle 0 as well, we have $X_{0}=m_{0}\sqrt{f}$, so it follows from (\ref{h20}) that $\Delta _{-}=2m_{0}m_{2}$, whence $m_{0}=m_{1}+m_{2}$. In both cases, according to (\ref{x2a}), \begin{equation} E_{2}\approx \frac{Q(m_{2}+q_{2})}{r_{0}}\text{.} \end{equation} Here, we should take $q_{2}>-m_{2}$. Thus the SPP does exist in this case also. \section{Flat space-time limit} Decay in the case of flat space-time is of special interest. More precisely, we put $M=Q=0$ in the metric, thus neglecting the influence of the electromagnetic field on the space-time. However, we take into account the electric charge in the equations of motion. Again, let particle 0 decay at the point $r_{0}$. Then, eqs. (\ref{X1}), (\ref{X2}) are now valid with $f=1$. Now we will show that the SPP is still possible for a finite value of $\left\vert q_{0}\right\vert $ (the corresponding scenarios were overlooked in our previous paper \cite{flat}). If the turning point $P_{0}=0$ exists, its coordinate is given by \begin{equation} r_{0}=\frac{q_{0}Q}{E_{0}-m_{0}}\text{.} \label{r0f} \end{equation} Here, it is assumed that $E_{0}>m_{0}$. Now there are two different cases. \subsection{No turning point} Let $q_{0}<0$. Then, the turning point for particle 0 is absent. We want particle 2 to escape to infinity, so the turning point for particle 2 should be absent as well, i.e., $q_{2}=-\left\vert q_{2}\right\vert <0$.
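In passing, the flat-space turning-point relation (\ref{r0f}) is easily verified numerically (illustrative values of the parameters):

```python
# Flat space-time limit: f = 1 but phi = Q/r is kept in the equations of motion
Q = 1.0
E0, m0, q0 = 1.5, 1.0, 0.2     # illustrative values with E0 > m0 and q0 > 0
r0 = q0 * Q / (E0 - m0)        # Eq. (r0f)

X0 = E0 - q0 * Q / r0          # X = E - q*phi
P0_sq = X0**2 - m0**2          # P^2 = X^2 - m^2*f with f = 1

assert abs(X0 - m0) < 1e-12    # X0 = m0, hence P0 = 0: a turning point
assert abs(P0_sq) < 1e-12
```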
Proceeding along the same lines as before, we obtain \begin{equation} E_{2}\approx E_{0}+\frac{Q}{r_{0}}[\frac{\left\vert q_{0}\right\vert }{2m_{0}^{2}}(\Delta _{-}+\sqrt{D})-\left\vert q_{2}\right\vert ]. \label{e2f} \end{equation} This expression differs from (\ref{no}) since in (\ref{no}) it was implied that $f\sim \frac{Q^{2}}{r_{0}^{2}}\rightarrow \infty $, whereas now $f=1$. The positivity of $E_{2}$ for small $r_{0}$ requires \begin{equation} \left\vert q_{2}\right\vert <\frac{\left\vert q_{0}\right\vert }{2m_{0}^{2}}(\Delta _{-}+\sqrt{D})\text{.} \end{equation} Formally, (\ref{e2f}) grows indefinitely when $r_{0}\rightarrow 0$. Actually, as we neglected the corresponding term $\frac{Q^{2}}{r^{2}}$ in the RN metric, there is an additional constraint on $r_{0}$. Restoring dimensionality, we have $r_{0}\gg \frac{\sqrt{G}Q}{c^{2}}$, so there is the restriction $E_{2}\ll \frac{c^{2}}{\sqrt{G}}[\frac{\left\vert q_{0}\right\vert }{2m_{0}^{2}}(\Delta _{-}+\sqrt{D})-\left\vert q_{2}\right\vert ]$ that is rather weak in the case $G\rightarrow 0$. Now, \begin{equation} H_{2}=\Delta _{-}-2m_{2}(E_{0}+\frac{\left\vert q_{0}\right\vert Q}{r_{0}}). \label{H2+} \end{equation} It is seen that for a fixed $\left\vert q_{0}\right\vert $ and $r_{0}\rightarrow 0$ eq. (\ref{h2}) cannot be fulfilled. It means that the scenario under discussion cannot be realized, so particle 2 falls towards the center along with particle 1 and does not escape. The situation changes if we take $\left\vert q_{0}\right\vert =\alpha r_{0}$, where $\alpha =O(1)$. Then, we have from (\ref{H2+}) that \begin{equation} \alpha <Q^{-1}(\frac{\Delta _{-}}{2m_{2}}-E_{0}). \label{al} \end{equation} \subsection{Decay in the turning point} Now, $P_{0}=0$, so eqs. (\ref{x1}) - (\ref{x0}) apply. As a particle falls from infinity, $E_{0}>m_{0}$. According to (\ref{r0f}), we must also take $q_{0}>0$. For particle 2 we take $q_{2}<0$ to exclude a possible turning point after the decay.
As we want $r_{0}\rightarrow 0$, it is seen from (\ref{r0f}) that we must assume $q_{0}\rightarrow 0$. Then, in eq. (\ref{e2t}) the main contribution becomes negative, so the SPP is absent in this case. \subsection{Alternative scenario} Is the alternative scenario possible in the flat case? It implies that particle 2 bounces back from the turning point. Then, $P_{2}=0$, so \begin{equation} X_{2}=m_{2}\text{.} \label{xm} \end{equation} Now, one should put $f=1$ in eq. (\ref{h20}) typical of a turning point. Then, putting there $X_{0}\approx \frac{\left\vert q_{0}\right\vert Q}{r_{0}}$, we obtain \begin{equation} \left\vert q_{0}\right\vert \approx \frac{r_{0}}{Q}\frac{\Delta _{-}}{2m_{2}}. \end{equation} Thus for a fixed $q_{0}$ we have \begin{equation} E_{2}=m_{2}+\frac{q_{2}Q}{r_{0}}\text{,} \end{equation} where we took into account (\ref{xm}). Then \begin{equation} E_{2}\approx \frac{q_{2}\Delta _{-}}{2m_{2}\left\vert q_{0}\right\vert }\text{,} \end{equation} where it is assumed that $q_{2}>0$. If $q_{0}\rightarrow 0$, $E_{2}\rightarrow \infty $, so the SPP occurs. As now $X_{2}$ is a monotonically increasing function of $r$, there are no other turning points and particle 2 escapes. \section{Conclusions} Thus we have found one more type of system for which the SPP is possible. It exists in almost all scenarios considered above. It is instructive to compare the case under discussion with the collisional PP relevant in the context of the BSW effect near black holes \cite{rn}. To gain large energy $E$ in the latter case, one is led to prepare a particle in a fine-tuned state (the so-called critical one). But there is a bound $q<Ze$, where $e$ is the value of an elementary charge and $Z=170$ comes from quantum electrodynamics. This places a restriction on the fractional enhancement of energy in collisions. As the energy of a particle produced in a collision turns out to be proportional to the charge $q$ \cite{rn}, this gives rise to a restriction on the efficiency of the PP.
Meanwhile, in the configuration considered in the present paper (i) the SPP can occur without any fine-tuning at all and (ii) to gain large $E$, there is no need to have large $q$. More precisely, fine-tuning is required only for the subcase with the turning point (\ref{qb}) but is absent if such a point is missing. One more difference between the present version of the SPP and that near a black hole consists in that now the original particle need not be ultrarelativistic, having any finite value $\varepsilon >1$ (cf. Sec. IV C3 in \cite{axis}). In this sense, the present version of the SPP is less restrictive than the previous one typical of the BSW effect. We would also like to stress that the effect under discussion concerns indefinitely large Killing energy $E$, whereas $E_{c.m.}$ is irrelevant since we considered particle decay, not collision. In combination with the previous results \cite{ruf}, \cite{den}, \cite{rn}, \cite{df}, \cite{flat}, this means that any type of the Reissner-Nordstr\"{o}m space-time (black hole, flat space-time, naked singularity) is pertinent to the SPP. A separate interesting question is whether and how the SPP can be produced by neutral rotating naked singularities.
\section{Introduction} Inflationary cosmology predicts the existence of primordial gravitational waves (PGWs) that stem from quantum fluctuations. Hence, the detection of PGWs would give strong evidence for inflationary cosmology. In particular, if we could observe the quantum nature of PGWs, it would imply the discovery of gravitons. Therefore, there are several experimental projects for detecting PGWs~\cite{Hazumi:2019lys,LiteBIRD:2022cnt,Kawamura:2011zz, Amaro-Seoane:2012aqc}. Remarkably, the quantum state of gravitons gets squeezed during inflation~\cite{Grishchuk:1989ss,Grishchuk:1990bj,Albrecht:1992kf,Polarski:1995jg,Kanno:2021vwu}. Hence, one way to prove the quantum nature of PGWs would be to find evidence of the squeezed state of gravitons. However, it has remained a challenge to detect PGWs with laser interferometers through the statistical properties of the squeezed state~\cite{Allen:1997ad,Allen:1999xw}. Some other ideas for detecting non-classical PGWs using their squeezed state have been proposed. One is to use Hanbury Brown-Twiss interferometry, which can distinguish non-classical particles from classical ones by measuring intensity-intensity correlations~\cite{Giovannini:2010xg,Kanno:2018cuk}. Another idea is to detect primordial gravitons indirectly by measuring their noise in interferometers~\cite{Parikh:2020nrd,Kanno:2020usf,Parikh:2020kfh,Parikh:2020fhy} or by measuring the decoherence time of a quantum object caused by the surrounding primordial gravitons~\cite{Kanno:2021gpt}. We expect that the gravitons that went through inflation keep their squeezed states until today unless environmental effects on them are considered. However, if the gravitons were surrounded by matter fields during inflation, they may not be able to keep their squeezed states anymore. We can think of a scalar field (the inflaton) and a vector field as the matter fields during inflation.
Since the inflaton couples to PGWs only through its gradient, the coupling to a vector field is expected to be more effective. From the observational point of view, primordial magnetic fields may have existed during inflation. In fact, there are observations that cannot be explained without the presence of primordial magnetic fields~\cite{Grasso:2000wj,Ando:2010rb,Neronov:2010gir,Essey:2010nd,Tavecchio:2010ja,Finke:2015ona}. Furthermore, if magnetic fields with mega-parsec coherence lengths were found, one would need to consider magnetic fields generated during inflation~\cite{Turner:1987bw,Ratra:1991bn}. Hence, in this paper, we investigate whether gravitons surrounded by primordial magnetic fields can keep their squeezed states. The presence of background magnetic fields causes the conversion of gravitons into photons and vice versa~\cite{Gertsenshtein:1962,Raffelt:1987im}. These photons could be the dark photon~\cite{Masaki:2018eut}. Therefore, we need to investigate the effect of the conversion process on the squeezed state of gravitons. In our previous paper, we assumed the presence of primordial magnetic fields at the beginning of inflation and examined the evolution of the squeezing parameters of gravitons and photons in the process of graviton-photon conversion mediated by the background magnetic field~\cite{Kanno:2022ykw}. There, it turned out that the squeezing of gravitons was robust against the conversion process, because the background magnetic field decays rapidly due to inflation. We then concluded that gravitons keep their squeezed states even in the presence of background magnetic fields. However, if the magnetic fields decay slowly during inflation, gravitons may lose their squeezed states: the graviton-photon conversion never ends as long as the magnetic fields survive during inflation.
Hence, in this paper, we study the conversion process of gravitons in the presence of magnetic fields that decay slowly during inflation and examine whether the gravitons can keep their squeezed states until today. Remarkably, we find that the magnetic fields generate maximal entanglement between gravitons and photons. As a consequence, the quantum state of gravitons becomes a mixed state instead of the squeezed (pure) state. Furthermore, we find that the number of gravitons significantly decreases compared to the case without entanglement. Namely, the quantum entanglement between gravitons and photons partially destroys the squeezed state of gravitons. Thus, the power spectrum of the PGWs turns out to be blue, $\Omega_{\rm GW}(k) \propto k^2$, instead of scale-invariant. Since this suppression depends on direction, there exists statistical anisotropy in the PGW spectrum, with the specific direction determined by the magnetic field. The organization of the paper is as follows. In section 2, we introduce a model describing the situation where magnetic fields are persistently generated. Then, we review the graviton-photon conversion during inflation. In section 3, we solve the dynamics and calculate the Bogoliubov coefficients describing the time evolution of the quantum state. We obtain a four-mode squeezed state as a consequence of graviton-photon conversion. In section 4, we calculate the entanglement entropy between gravitons and photons. We discuss the quantum state at present in the presence of the entanglement. In particular, we reveal the effects of quantum entanglement on the power spectrum of PGWs.
\section{Graviton-photon conversion } We begin with the Einstein-Hilbert action and the action for a $U(1)$ gauge field coupled to a scalar field: \begin{eqnarray} S=S_g+S_\phi+S_A =\int d^4x \sqrt{-g}\, \left[ \frac{M_{\rm pl}^2}{2} \,R -\frac{1}{2}(\partial_\mu \phi)(\partial^\mu \phi)-V(\phi) -\frac{1}{4} f^2(\phi) F^{\mu\nu} F_{\mu\nu} \right] \label{original action}\,, \end{eqnarray} where $M_{\rm pl}=1/\sqrt{8\pi G}$ is the Planck mass. The gauge field $A_\mu$ represents photons and the field strength is defined by $F_{\mu\nu}=\partial_\mu A_{\nu}-\partial_\nu A_{\mu}$. The background inflationary dynamics is determined by the metric \begin{eqnarray} ds^2=a^2(\eta)\left[-d\eta^2+\delta_{ij} dx^idx^j\right]\,, \end{eqnarray} and the inflaton $\phi (\eta)$. Once the background is given, the coupling function can be regarded as a function of the conformal time $\eta$; $ f = f(\eta) \ . $ We also assume the presence of constant magnetic fields $ B_i = {\rm constant} \,. $ It should be emphasized that the physical magnetic fields are not $B_i$ but $f B_i$. In the next section, we consider quantum evolution of gravitons and photons in the above background. \subsection{Primordial gravitational waves} We consider gravitons in a spatially flat expanding background represented by tensor mode perturbations in the three-dimensional metric $h_{ij}$, \begin{eqnarray} ds^2=a^2(\eta)\left[-d\eta^2+\left(\delta_{ij}+h_{ij}\right)dx^idx^j\right]\,, \end{eqnarray} where $h_{ij}$ satisfies the transverse traceless conditions $h_{ij}{}^{,j}=h^i{}_i=0$. The spatial indices $i,j,k,\cdots$ are raised and lowered by $\delta^{ij}$ and $\delta_{k\ell}$. In the case of de Sitter space, the scale factor is given by $a(\eta)=-1/(H\eta)$ where $-\infty<\eta<0$. 
Expanding the Einstein-Hilbert action up to the second order in perturbations $h_{ij}$, we have \begin{eqnarray} \delta S_g=\frac{M_{\rm pl}^2}{8}\int d^4x\,a^2\left[ h^{ij\prime}\,h_{ij}^\prime-h^{ij,k}h_{ij,k} \right]\,, \label{action:g} \end{eqnarray} where a prime denotes the derivative with respect to the conformal time. At this quadratic order of the action, it is convenient to expand $h_{ij}(\eta,x^i)$ in Fourier modes, \begin{eqnarray} h_{ij}(\eta,x^i)=\frac{2}{M_{\rm pl}} \sum_{P}\frac{1}{(2\pi)^{3/2}} \int d^3 k\,h^{P}_{\bm k}(\eta)\, e_{ij}^{P}(\bm{k})\,e^{i\bm{k}\cdot\bm{x}} \ , \label{fourier_h} \end{eqnarray} where three-vectors are denoted by bold math type and $e_{ij}^{P}(\bm{k})$ are the polarization tensors for the ${\bm k}$ mode normalized as $e^{ijP}(\bm{k})e_{ij}^{Q}(\bm{k})=\delta^{PQ}$ with $P,Q=+,\times$. Then the action (\ref{action:g}) in the Fourier modes becomes \begin{eqnarray} \delta S_g=\frac{1}{2}\sum_{P}\int d^3k\,d\eta\,a^2\left[\, |h_{\bm k}^{P\prime}|^2-k^2|h_{\bm k}^P|^2 \,\right]\,. \label{action_fourier_h} \end{eqnarray} \subsection{Primordial magnetic fields} Next, we consider the action for the photon up to the second order in perturbations $A_i$, which is given by \begin{eqnarray} \delta S_A=\frac{1}{2}\int d^4x\ f^2 \left[A_i^{\prime\, 2}-A_{k,i}^2\right]\,, \label{action:A} \end{eqnarray} where the photon field satisfies the Coulomb gauge $A_0=0$ and $A^i{}_{,i}=0$. If we expand the $A_i(\eta,x^i)$ in the Fourier modes, we find \begin{align} A_i(\eta,x^i)=\sum_{P} \frac{\pm i}{(2\pi)^{3/2}} \int d^3 k\,A^{P}_{\bm k}(\eta)\,e_i^{P}(\bm{k})\, e^{i\bm{k}\cdot\bm{x}} \label{fourier_A} \ , \end{align} where $e_i^{P}(\bm{k})$ are the polarization vectors for the ${\bm k}$ mode normalized as $e^{iP}(\bm{k}) e_i^{Q}(\bm{k})=\delta^{PQ}$ with $P,Q=+,\times$. The sign of $\pm i$ corresponds to the $P,Q=+,\times$. 
The action (\ref{action:A}) in the Fourier modes is \begin{eqnarray} \delta S_A=\frac{1}{2}\sum_{P}\int d^3k\,d\eta\,f^2\left[\, |A_{\bm k}^{P\prime}|^2-k^2|A_{\bm k}^P|^2 \,\right]\,. \label{action_fourier_A} \end{eqnarray} \subsection{Graviton-photon interaction} The action for the interaction between the graviton and the photon up to second order in perturbations $h_{ij}, A^i$ is found to be \begin{eqnarray} \delta S_{\rm I}=\int d^4x \left[ \varepsilon_{i\ell m} f^2 B_m h^{ij}\left(\partial_j A_\ell -\partial_\ell A_j\right) \right]\,. \label{action:I} \end{eqnarray} Note that $B_m=\varepsilon_{mj\ell}\,\partial_j A_\ell$ is a constant background magnetic field whose presence we assume from the beginning of inflation. In the Fourier modes defined in Eqs.~(\ref{fourier_h}) and (\ref{fourier_A}), \begin{eqnarray} \delta S_I = \frac{2}{M_{\rm pl}}\sum_{P,Q}\int d^3 k\,d\eta\,f^2\left[ \varepsilon_{i\ell m}\,B_m\,h_{\bm k}^PA_{-\bm k}^Q \,e_{ij}^P(\bm k)\Bigl\{ik_\ell\,e_{j}^Q(-\bm k)-ik_j\,e_{\ell }^Q(-\bm k)\Bigr\}\right] \label{action:I2}\,, \end{eqnarray} where $k=|\bm k|$. The polarization vectors $e^{i+}, e^{i\times}$ and the unit vector $k^i/k$ constitute an orthonormal basis. Without loss of generality, we assume the constant background magnetic field is in the ($k^i, e^{i \times}$)-plane as depicted in FIG.~\ref{Configuration}.
\begin{figure}[H] \centering \includegraphics[keepaspectratio, scale=0.55]{Configuration4.pdf} \renewcommand{\baselinestretch}{3} \caption{Configuration of the polarization vector ${\bm e}^P(\bm k)$, wave number ${\bm k}$, and background magnetic field ${\bm B}$.} \label{Configuration} \end{figure} \noindent The polarization tensors can be written in terms of polarization vectors $e^{i+}$ and $e^{i\times}$ as \begin{align} &e_{ij}^+(\bm{k}) =\frac{1}{\sqrt{2}} \Bigl\{ e^+_i(\bm{k}) e^+_j(\bm{k})-e^\times_i(\bm{k}) e^\times_j(\bm{k}) \Bigr\}\,,\\ &e_{ij}^\times(\bm{k}) =\frac{1}{\sqrt{2}} \Bigl\{ e^+_i(\bm{k}) e^\times_j(\bm{k})+e^\times_i(\bm{k}) e^+_j(\bm{k}) \Bigr\}\, . \end{align} Below, we assume $ e_i^\times(-\bm{k})=-e_i^\times(\bm{k}) $. The action (\ref{action:I2}) is then written as \begin{eqnarray} \delta S_I&=&\int d^3k\,d\eta\,f^2\, \lambda(\bm k) \left[\,h_{\bm k}^+(\eta)\,A_{-\bm k}^+(\eta)+\,h_{\bm k}^\times(\eta)\,A_{-\bm k}^\times(\eta)\,\right]\, , \label{action:I3} \end{eqnarray} where we defined the coupling between the graviton and the photon as \begin{align} \lambda(\bm{k}) \equiv \frac{\sqrt{2}}{M_{\rm pl}} \varepsilon^{i\ell m}\,e_i^+\,k_\ell\,B_m\,. \label{coupling} \end{align} Here, the reality conditions for the graviton and the photon read $h_{-\bm k}^{+,\times}(\eta)=h_{\bm k}^{*\,+,\times}(\eta)$ and $A_{-\bm k}^{+,\times}(\eta)=-A_{\bm k}^{*\,+,\times}(\eta)$\,. In the following, we focus on the plus polarization and omit the index $P$ unless confusion may arise.
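As a quick numerical cross-check of the contraction in Eq.~(\ref{coupling}), the sketch below evaluates $\lambda(\bm k)$ in an illustrative frame ($\bm k$ along the $z$-axis, $\bm e^+$ along $x$, $\bm e^\times$ along $y$, $\bm B$ in the $(\bm k,\bm e^\times)$-plane, arbitrary units for $M_{\rm pl}$ and $|\bm B|$; these values are illustrative, not from the paper) and confirms the magnitude $\sqrt{2}\,k|{\bf B}|\sin\theta/M_{\rm pl}$ quoted later in Eq.~(\ref{coupling2}).

```python
import math

# Illustrative orthonormal frame: k along z, e^+ along x, e^x along y,
# with the background magnetic field B in the (k, e^x)-plane (FIG. 1).
M_pl = 1.0          # arbitrary units (assumption for illustration)
k = 2.0
Bmag = 0.3
theta = 0.7         # angle between B and k

e_plus = (1.0, 0.0, 0.0)
k_vec  = (0.0, 0.0, k)
B_vec  = (0.0, Bmag * math.sin(theta), Bmag * math.cos(theta))

def levi_civita(i, j, l):
    # totally antisymmetric symbol eps_{ijl}
    return (i - j) * (j - l) * (l - i) / 2

# lambda(k) = sqrt(2)/M_pl * eps_{ilm} e^+_i k_l B_m, Eq. (coupling)
lam = math.sqrt(2) / M_pl * sum(
    levi_civita(i, j, l) * e_plus[i] * k_vec[j] * B_vec[l]
    for i in range(3) for j in range(3) for l in range(3))

# magnitude agrees with sqrt(2) k |B| sin(theta) / M_pl, Eq. (coupling2)
assert abs(abs(lam) - math.sqrt(2) * k * Bmag * math.sin(theta) / M_pl) < 1e-12
```

Only $\lambda$'s magnitude is frame-independent here; its sign depends on the orientation conventions chosen for the basis vectors.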
\subsection{Total action in canonical variables} If we use the canonical variables $y^P_{\bm k}(\eta)=a\,h_{\bm k}^{P}(\eta)$ and $x^P_{\bm k}(\eta)=f\,A_{\bm k}^{P}(\eta)$, the total action of Eqs.~(\ref{action_fourier_h}), (\ref{action_fourier_A}) and (\ref{action:I3}) is written as \begin{eqnarray} \delta S&=&\delta S_y+\delta S_x+\delta S_I \nonumber\\ &=&\frac{1}{2}\int d^3 k\,d\eta\left[\, |y_{\bm k}^{\prime}|^2 -\left(k^2-\left(\frac{a^\prime}{a}\right)^2\right)|y_{\bm k}|^2 -\frac{a^\prime}{a}\left( y_{\bm k}\,y_{-\bm k}^{\prime} +y_{-\bm k}\,y_{\bm k}^{\prime} \right) \right]\nonumber\\ &&+\frac{1}{2}\int d^3 k\,d\eta\left[\, |x_{\bm k}^{\prime}|^2 -\left(k^2-\left(\frac{f^\prime}{f}\right)^2\right)|x_{\bm k}|^2 -\frac{f^\prime}{f}\left( x_{\bm k}\,x_{-\bm k}^{\prime} +x_{-\bm k}\,x_{\bm k}^{\prime} \right) \right]\,\nonumber\\ &&+\int d^3 k\,d\eta\left[\,\frac{f}{a}\,\lambda(\bm{k})\, y_{\bm k}\,x_{-\bm k} \,\right]\,. \label{totalaction} \end{eqnarray} The variation of the action (\ref{totalaction}) with respect to the graviton and the photon fields gives \begin{align} &y_{\bm{k}}''+\left(k^2-\frac{a^{\prime\prime}}{a}\right)y_{\bm{k}}=-\lambda f \frac{x_{\bm k}}{a(\eta)} , \label{eom:graviton} \\ &x_{\bm{k}}''+\left(k^2-\frac{a^{\prime\prime}}{a}\right)x_{\bm{k}}=-\lambda f \frac{y_{\bm k}}{a(\eta)} . \label{eom:photon} \end{align} In this paper, we suppose $B_m/M_{\rm pl}\ll 1$ so that the coupling between graviton and photon~(\ref{coupling}) is weak. We then solve Eqs.~(\ref{eom:graviton}) and (\ref{eom:photon}) iteratively up to second order in the coupling in the next section. We assume the gauge kinetic function in the form \begin{eqnarray} f(\eta) =a(\eta)^{-2c}, \label{f} \end{eqnarray} where $c$ is a constant parameter. We take $c=-1/2$ to make the analysis easier. For this parameter, the power spectrum of the electromagnetic fields $A_\mu$ is scale-invariant~\cite{Kanno:2009ei}.
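Note that for $c=-1/2$ one has $f=a$, so $f/a=1$ and Eqs.~(\ref{eom:graviton})-(\ref{eom:photon}) reduce to two identical oscillators with a constant coupling $\lambda$; the combinations $y\pm x$ then decouple with $k^2\to k^2\pm\lambda$. The sketch below (illustrative real-valued initial data and parameter values, not from the paper) verifies this decoupling with a simple RK4 integration.

```python
# RK4 integration of the coupled mode equations for c = -1/2 (f/a = 1):
#   y'' + (k^2 - 2/eta^2) y = -lam * x
#   x'' + (k^2 - 2/eta^2) x = -lam * y
# The combinations w± = y ± x decouple with k^2 -> k^2 ± lam.
k, lam = 1.0, 0.05                      # illustrative values
eta_i, eta_f, n = -20.0, -1.0, 20000
h = (eta_f - eta_i) / n

def rhs(eta, s, ksq_shift=None):
    if ksq_shift is None:               # coupled system: s = (y, y', x, x')
        y, yp, x, xp = s
        w2 = k*k - 2.0/eta**2
        return (yp, -w2*y - lam*x, xp, -w2*x - lam*y)
    w, wp = s                           # decoupled oscillator, shifted k^2
    return (wp, -(ksq_shift - 2.0/eta**2)*w)

def rk4(s, f):
    eta = eta_i
    for _ in range(n):
        k1 = f(eta, s)
        k2 = f(eta + h/2, tuple(a + h/2*b for a, b in zip(s, k1)))
        k3 = f(eta + h/2, tuple(a + h/2*b for a, b in zip(s, k2)))
        k4 = f(eta + h,   tuple(a + h*b   for a, b in zip(s, k3)))
        s = tuple(a + h/6*(b + 2*c + 2*d + e)
                  for a, b, c, d, e in zip(s, k1, k2, k3, k4))
        eta += h
    return s

y0, yp0, x0, xp0 = 1.0, 0.0, 0.3, 0.2
yf, ypf, xf, xpf = rk4((y0, yp0, x0, xp0), rhs)
wp_f, _ = rk4((y0 + x0, yp0 + xp0), lambda e, s: rhs(e, s, k*k + lam))
wm_f, _ = rk4((y0 - x0, yp0 - xp0), lambda e, s: rhs(e, s, k*k - lam))
assert abs(yf - 0.5*(wp_f + wm_f)) < 1e-6   # y = (w+ + w-)/2
assert abs(xf - 0.5*(wp_f - wm_f)) < 1e-6   # x = (w+ - w-)/2
```

The exact decoupling only holds for a constant $\lambda$; it is used here purely as a numerical consistency check of the coupled system, while the paper proceeds perturbatively in $\lambda$.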
The spectrum of the magnetic fields, on the other hand, becomes $P(B) \propto k^2$. One could instead take $c=-1$ in order to obtain scale-invariant primordial magnetic fields. However, we will see that $c=-1/2$ is sufficient to obtain a significant modification of the quantum state of gravitons. \section{Time evolution of quantum state} Using the basic equations presented in the previous section, we perturbatively derive the mode functions in de Sitter space in this section. If we define the Lagrangian of the action (\ref{totalaction}) by $\delta S=\int d\eta\,L$, the conjugate momenta of the graviton, $p_{\bm k}$, and of the photon, $\pi_{\bm k}$, are respectively given by \begin{align} &p_{\bm{k}}(\eta)=\frac{\partial L}{\partial y^\prime_{-\bm k}}=y_{\bm{k}}'(\eta)+\frac{1}{\eta}y_{\bm{k}}(\eta) \ , \label{p}\\ &\pi_{\bm{k}}(\eta)=\frac{\partial L}{\partial x^\prime_{-\bm k}}=x_{\bm{k}}'(\eta)+\frac{1}{\eta}x_{\bm{k}}(\eta) \ . \label{pi} \end{align} Now we promote the variables $y_{\bm k}(\eta)\,,x_{\bm k}(\eta)$ and their momenta $p_{\bm k}(\eta), \pi_{\bm k}(\eta)$ to operators. The annihilation operators for the graviton and the photon are respectively expressed in terms of the canonical variables as \begin{eqnarray} &&\hat{a}_y(\eta,{\bm k})=\sqrt{\frac{k}{2}}\hat{y}_{\bm k}(\eta)+\frac{i}{\sqrt{2k}}\hat{p}_{\bm k}(\eta)\,, \label{y:annihi} \\ &&\hat{a}_x(\eta,{\bm k})=\sqrt{\frac{k}{2}}\hat{x}_{\bm k}(\eta)+\frac{i}{\sqrt{2k}}\hat{\pi}_{\bm k}(\eta)\,. \label{A:annihi} \end{eqnarray} The commutation relations $[\hat{a}_y(\eta,{\bm k}),\hat{a}^\dag_y(\eta,-{\bm k}^\prime)]=\delta({\bm k}+{\bm k}^\prime)$ and $[\hat{a}_x(\eta,{\bm k}),\hat{a}^\dag_x(\eta,-{\bm k}^\prime)]=\delta({\bm k}+{\bm k}^\prime)$ guarantee the canonical commutation relations $[y_{\bm k}(\eta),p_{{\bm k}^\prime}(\eta)]=i\delta({\bm k}-{\bm k}^\prime)$ and $[x_{\bm k}(\eta),\pi_{{\bm k}^\prime}(\eta)]=i\delta({\bm k}-{\bm k}^\prime)$. Note that the annihilation operators become time dependent through the time dependence of the canonical variables.
Thus, the vacuum defined by $\hat{a}(\eta,{\bm k})|0\rangle =0$ is time dependent as well; in this formalism, the vacuum is defined at every moment. \subsection{Mode functions} \subsubsection{Zeroth order} By setting $\lambda=0$ in Eqs.~(\ref{eom:graviton}) and (\ref{eom:photon}), the equations of the zeroth-order approximation become \begin{align} &\hat{y}_{\bm{k}}^{(0)\prime\prime}+\left(k^2-\frac{2}{\eta^2}\right)\hat{y}_{\bm{k}}^{(0)}=0\,, \label{GWeqs} \\ &\hat{x}_{\bm{k}}^{(0)\prime\prime}+\left(k^2-\frac{2}{\eta^2}\right) \hat{x}_{\bm{k}}^{(0)}=0 \, , \label{EMeqs} \end{align} where the superscript $(0)$ denotes the zeroth order. Since Eqs.~(\ref{GWeqs}) and (\ref{EMeqs}) have the same form, the mode functions for the graviton and the photon at zeroth order are identical. The solutions of the above equations can then be written as \begin{align} &\hat{y}_{\bm k}^{(0)}(\eta)=u_{\bm k}^{(0)}(\eta) ~\hat{c} +u_{\bm k}^{(0)*}(\eta)~\hat{c}^\dagger\,, \label{0th:graviton}\\ &\hat{x}_{\bm k}^{(0)}(\eta)=u_{\bm k}^{(0)}(\eta) ~\hat{d} +u_{\bm k}^{(0)*}(\eta)~\hat{d}^\dagger, \label{0th:photon} \end{align} where $\hat{c}$\,($\hat{d}$) and its conjugate $\hat{c}^\dag$\,($\hat{d}^\dag$) are constant operators of integration. We choose the properly normalized positive-frequency mode in the remote past as a basis, which is expressed as \begin{align} u_{\bm k}^{(0)}(\eta)=\frac{1}{\sqrt{2k}} \biggl(1-\frac{i}{k\eta}\biggr) e^{-ik\eta}.
\label{u} \end{align} \subsubsection{First order} Plugging the zeroth-order solutions (\ref{0th:graviton}) and (\ref{0th:photon}) into the right-hand side (rhs) of Eqs.~(\ref{eom:graviton}) and (\ref{eom:photon}) as source terms, the equations of the first-order approximation are written as \begin{align} &\hat{y}_{\bm{k}}^{(1)\prime\prime}+\left(k^2-\frac{2}{\eta^2}\right)\hat{y}_{\bm{k}}^{(1)} =-\lambda \hat{x}_{\bm{k}}^{(0)}\,, \label{GWeq1} \\ &\hat{x}_{\bm{k}}^{(1)\prime\prime}+\left(k^2-\frac{2}{\eta^2}\right)\hat{x}_{\bm{k}}^{(1)}=-\lambda \hat{y}_{\bm{k}}^{(0)} \,. \label{EMeq1} \end{align} The photon enters the graviton equation~(\ref{GWeq1}) and the graviton enters the photon equation~(\ref{EMeq1}) as a source term. Because both zeroth-order mode functions are the same, we focus on the graviton solution in the following. The Green function is given by \begin{eqnarray} G_{\rm dS}(\eta,\eta')= \frac{1}{2ik} \biggl(1+\frac{i}{k\eta'}\biggr) \biggl(1-\frac{i}{k\eta}\biggr) e^{-ik(\eta-\eta')} -\frac{1}{2ik} \biggl(1-\frac{i}{k\eta'}\biggr) \biggl(1+\frac{i}{k\eta}\biggr) e^{ik(\eta-\eta')}\,. \end{eqnarray} Then the first-order graviton solution is obtained in the form \begin{align} \hat{y}^{(1)}_{\bm k}(\eta) &=\int_{\eta_i}^\eta d\eta' G_{\rm dS} (\eta,\eta') \,\lambda\,\hat{x}^{(0)}_{\bm k}(\eta')\nonumber\\ &=\int_{\eta_i}^\eta d\eta' G_{\rm dS} (\eta,\eta') \,\lambda\, u^{(0)}_{\bm k}(\eta')~\hat{d} +\int_{\eta_i}^\eta d\eta' G_{\rm dS} (\eta,\eta') \,\lambda\, u^{(0)*}_{\bm k}(\eta') ~\hat{d}^\dagger \nonumber\\ &\equiv u^{(1)}_{\bm k}(\eta)~\hat{d}+u^{(1)*}_{\bm k}(\eta)~\hat{d}^\dagger\,, \label{1st:graviton} \end{align} where $\eta_i$ is the initial time. From the first line to the second line we used Eq.~(\ref{0th:photon}).
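As a numerical consistency check of this construction (a sketch with illustrative parameter values, not from the paper), one can verify that $u^{(0)}_{\bm k}$ solves Eq.~(\ref{GWeqs}) with the canonical Wronskian $u\,u^{*\prime}-u'\,u^{*}=i$, that $G_{\rm dS}$ vanishes at coincidence with slope $-1$ (as required for $\hat{y}^{(1)}=\int G_{\rm dS}\,\lambda\,\hat{x}^{(0)}$ to reproduce the source $-\lambda\hat{x}^{(0)}$ in Eq.~(\ref{GWeq1})), and that the Green-function quadrature agrees with a direct integration of the first-order equation.

```python
import cmath

k, lam, eta_i, eta_f = 1.0, 0.02, -15.0, -4.0   # illustrative values

def u0(eta):
    # zeroth-order mode function, Eq. (u)
    return (1 - 1j/(k*eta)) * cmath.exp(-1j*k*eta) / cmath.sqrt(2*k)

def G_dS(eta, etap):
    # de Sitter Green function G_dS(eta, eta')
    a = (1 + 1j/(k*etap)) * (1 - 1j/(k*eta)) * cmath.exp(-1j*k*(eta - etap))
    b = (1 - 1j/(k*etap)) * (1 + 1j/(k*eta)) * cmath.exp(1j*k*(eta - etap))
    return (a - b) / (2j*k)

h, eta = 1e-4, -3.0

# (i) u0 solves u'' + (k^2 - 2/eta^2) u = 0 (finite-difference check)
upp = (u0(eta + h) - 2*u0(eta) + u0(eta - h)) / h**2
assert abs(upp + (k*k - 2/eta**2) * u0(eta)) < 1e-5

# (ii) canonical Wronskian u u*' - u' u* = i
up = (u0(eta + h) - u0(eta - h)) / (2*h)
W = u0(eta) * up.conjugate() - up * u0(eta).conjugate()
assert abs(W - 1j) < 1e-6

# (iii) G vanishes at coincidence with slope -1
assert abs(G_dS(eta, eta)) < 1e-12
dG = (G_dS(eta + h, eta) - G_dS(eta - h, eta)) / (2*h)
assert abs(dG + 1) < 1e-6

# (iv) the quadrature of Eq. (u1Green) agrees with direct RK4 integration
#      of y'' + (k^2 - 2/eta^2) y = -lam u0 with y(eta_i) = y'(eta_i) = 0
def u1_quad(eta, n=4000):
    he = (eta - eta_i) / n              # composite Simpson rule
    s = G_dS(eta, eta_i)*u0(eta_i) + G_dS(eta, eta)*u0(eta)
    for j in range(1, n):
        s += (4 if j % 2 else 2) * G_dS(eta, eta_i + j*he) * u0(eta_i + j*he)
    return lam * he/3 * s

def u1_ode(n=20000):
    he = (eta_f - eta_i) / n
    f = lambda e, y, yp: (yp, -(k*k - 2/e**2)*y - lam*u0(e))
    y, yp, e = 0j, 0j, eta_i
    for _ in range(n):
        k1 = f(e, y, yp)
        k2 = f(e + he/2, y + he/2*k1[0], yp + he/2*k1[1])
        k3 = f(e + he/2, y + he/2*k2[0], yp + he/2*k2[1])
        k4 = f(e + he, y + he*k3[0], yp + he*k3[1])
        y += he/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        yp += he/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        e += he
    return y

assert abs(u1_quad(eta_f) - u1_ode()) < 1e-7
```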
In the last line, we defined the first-order correction to the positive-frequency mode of the graviton, due to the photon source, by \begin{align} u^{(1)}_{\bm k}(\eta)&\equiv \int_{\eta_i}^\eta d\eta' G_{\rm dS} (\eta,\eta') \, \lambda \, u^{(0)}_{\bm k}(\eta') \ . \label{u1Green} \end{align} After integration, we have \begin{eqnarray} u^{(1)}_{\bm k}(\eta)&=& \frac{\lambda\,e^{-ik\eta}}{4\sqrt{2}\,k^{9/2}\,\eta\,\eta_i} \biggl[ -2-2ik\eta+e^{2ik(\eta-\eta_i)}(i+k\eta)(-2i+k\eta_i) \nonumber\\ &&\hspace{2.2cm} -k\eta_i\Bigl\{3i+k\eta\Bigl(3+2ik(\eta-\eta_i)\Bigr)+2k\eta_i\Bigr\} \biggr] . \label{u1} \end{eqnarray} Similarly, the photon solution is written with the same mode function, \begin{align} \hat{x}^{(1)}_{\bm k}(\eta) = u^{(1)}_{\bm k}(\eta)~\hat{c}+u^{(1)*}_{\bm k}(\eta)~\hat{c}^\dagger . \label{1st:photon} \end{align} \subsubsection{Second order} By plugging the first-order solutions (\ref{1st:graviton}) and (\ref{1st:photon}) into the rhs of Eqs.~(\ref{eom:graviton}) and (\ref{eom:photon}) as source terms, the equations of the second-order approximation are \begin{align} &y_{\bm{k}}^{(2)\prime\prime}+\left(k^2-\frac{2}{\eta^2}\right)y_{\bm{k}}^{(2)} =-\lambda\,x_{\bm{k}}^{(1)}\,, \label{GWeq2} \\ &x_{\bm{k}}^{(2)\prime\prime}+\left(k^2-\frac{2}{\eta^2}\right)x_{\bm{k}}^{(2)}=-\lambda \,y_{\bm{k}}^{(1)} \,. \label{EMeq2} \end{align} At this order, the effect of the graviton itself enters Eq.~(\ref{GWeq2}).
The solution is written by the Green function $G_{\rm dS}$ such as \begin{align} \hat{y}^{(2)}_{\bm k}(\eta) &=\int_{\eta_i}^\eta d\eta' G_{\rm dS} (\eta,\eta') \,\lambda\, \hat{x}^{(1)}_{\bm k} (\eta') \nonumber\\ &=\int_{\eta_i}^\eta d\eta' G_{\rm dS} (\eta,\eta') \,\lambda\, \, u^{(1)}_{\bm k}(\eta')~\hat{c} +\int_{\eta_i}^\eta d\eta' G_{\rm dS} (\eta,\eta') \,\lambda\, \, u^{(1)*}_{\bm k}(\eta')~\hat{c}^\dagger \nonumber\\ &=\int_{\eta_i}^{\eta} d\eta' G_{\rm dS} (\eta,\eta')\, \lambda \Biggl(\int_{\eta_i}^{\eta'}d\eta'' G_{\rm dS}(\eta',\eta'')\lambda \, u^{(0)}_{\bm k}(\eta'') \Biggr)~ \hat{c} \nonumber\\ &\qquad+\int_{\eta_i}^{\eta} d\eta' G_{\rm dS} (\eta,\eta') \lambda \, \Biggl(\int_{\eta_i}^{\eta'}d\eta''G_{\rm dS}(\eta',\eta'') \lambda \, u^{(0)*}_{\bm k}(\eta'') \Biggr)~ \hat{c}^\dagger \nonumber\\ &\equiv u^{(2)}_{\bm k}(\eta)~\hat{c}+u^{(2)*}_{\bm k}(\eta)~\hat{c}^\dagger, \end{align} where we defined \begin{align} u^{(2)}_{\bm k}(\eta) &\equiv \int_{\eta_i}^\eta d\eta' G_{\rm dS} (\eta,\eta') \,\lambda\, \, u^{(1)}_{\bm k}(\eta') \nonumber\\ &=\int_{\eta_i}^\eta d\eta' G_{\rm dS} (\eta,\eta') \,\lambda \, \int_{\eta_i}^{\eta'} d\eta'' G_{\rm dS}(\eta',\eta'') \, \lambda \, u^{(0)}_{\bm k} (\eta''). \label{u2Green} \end{align} By performing the integration, the explicit form of the $u^{(2)}_{\bm k}(\eta)$ is found to be \begin{eqnarray} u^{(2)}_{\bm k}(\eta) &=& \frac{\lambda ^2 e^{-i \eta k}}{16 \sqrt{2} \eta \eta_i k^{13/2}} \biggl[ 2i\eta_i k^3 \left(3 \eta ^2-4 \eta \eta_i+\eta_i^2\right)\nonumber\\ &~&~~~~~~~~~~~~~~~~~~~~ +k^2\left(-4 \eta ^2+15 \eta \eta_i-8 \eta_i^2\right) -2\eta \eta_i k^4 (\eta-\eta_i)^2\nonumber\\ &~&~~~~~~~~~~~~~~~~~~~~ \hspace{-2cm} +e^{2 i k (\eta -\eta_i)} \Bigl(2 i \eta \eta_i k^3 (\eta -\eta_i)+k^2 (\eta -2 \eta_i) (4 \eta -\eta_i)+3 i k (4 \eta -3 \eta_i)-12\Bigr) \nonumber\\ &~&~~~~~~~~~~~~~~~~~~~~ +3 i k (4 \eta -5 \eta_i)+12 \biggr] \ . 
\label{u2} \end{eqnarray} In the same way, the effect of the photon itself enters Eq.~(\ref{EMeq2}), and the solution is given by \begin{align} \hat{x}^{(2)}_{\bm k}(\eta) =u^{(2)}_{\bm k}(\eta)~\hat{d}+u^{(2)*}_{\bm k}(\eta)~\hat{d}^\dagger. \end{align} \subsection{Bogoliubov coefficients} By solving Eqs.~(\ref{eom:graviton}) and (\ref{eom:photon}) iteratively up to second order, we can take into account the backreaction on the graviton and the photon, respectively. For the graviton, the field and its conjugate momentum are now given by \begin{eqnarray} \hspace{-1cm} \hat{y}_{\bm k}(\eta)&=&\Bigl(u^{(0)}_{\bm k}+u^{(2)}_{\bm k}\Bigr)\,\hat{c} +u^{(1)}_{\bm k}\,\hat{d} +{\rm H.c.} \,, \\ \label{pGWsol} \hat{p}_{\bm k}(\eta)&=&\Bigl(u^{(0)\,\prime}_{\bm k}+u^{(2)\,\prime}_{\bm k}\Bigr)\,\hat{c} +u^{(1)\,\prime}_{\bm k}\,\hat{d} +\frac{1}{\eta}\Bigl\{\left( u^{(0)}_{\bm k}+u^{(2)}_{\bm k} \right)\hat{c}+u^{(1)}_{\bm k}\,\hat{d}\Bigr\}+{\rm H.c.}\,, \end{eqnarray} where we used Eq.~(\ref{p}) and H.c. represents the Hermitian conjugate. For the photon, the field and its conjugate momentum become \begin{eqnarray} \hat{x}_{\bm k}(\eta)&=&\Bigl(u^{(0)}_{\bm k}+u^{(2)}_{\bm k}\Bigr)\,\hat{d} +u^{(1)}_{\bm k}\,\hat{c} +{\rm H.c.}\,,\\ \hat{\pi}_{\bm k}(\eta)&=& \Bigl(u^{(0)\,\prime}_{\bm k}+u^{(2)\,\prime}_{\bm k}\Bigr)\,\hat{d} +u^{(1)\,\prime}_{\bm k}\,\hat{c} +\frac{1}{\eta}\Bigl\{\left( u^{(0)}_{\bm k}+u^{(2)}_{\bm k} \right)\hat{d}+u^{(1)}_{\bm k}\,\hat{c}\Bigr\} +{\rm H.c.}\,, \label{pEMsol} \end{eqnarray} where we used Eq.~(\ref{pi}).
Then the annihilation operators for the graviton and the photon are obtained from Eqs.~(\ref{y:annihi}) and (\ref{A:annihi}) as \begin{align} \hat{a}_{y}(\eta,\bm{k}) &= \Bigl( \psi_{p}^{(0)} +\psi_{p}^{(2)} \Bigr)\hat{c} +\Bigl( \psi_{m}^{(0)*} +\psi_{m}^{(2)*} \Bigr)\hat{c}^\dagger +\psi_{p}^{(1)} \hat{d} +\psi_{m}^{(1)*} \hat{d}^\dagger\,, \label{y:annihifull} \\ \hat{a}_{x}(\eta,\bm{k}) &= \Bigl( \psi_{p}^{(0)} +\psi_{p}^{(2)} \Bigr)\hat{d} +\Bigl( \psi_{m}^{(0)*} +\psi_{m}^{(2)*} \Bigr)\hat{d}^\dagger +\psi_p^{(1)} \hat{c} +\psi_m^{(1)*} \hat{c}^\dagger\,. \label{A:annihifull} \end{align} Here, we defined the new variables \begin{align} &\psi_{p}^{(j)} =\sqrt{\frac{k}{2}} u^{(j)}_{\bm k}(\eta) +\frac{i}{\sqrt{2k}} \Bigl(u^{(j)\prime}_{\bm k}(\eta)+\frac{1}{\eta}u^{(j)}_{\bm k} (\eta) \Bigr) ,\label{psip}\\ &\psi_{m}^{(j)} =\sqrt{\frac{k}{2}} u^{(j)}_{\bm k}(\eta) -\frac{i}{\sqrt{2k}} \Bigl(u^{(j)\prime}_{\bm k}(\eta)+\frac{1}{\eta}u^{(j)}_{\bm k}(\eta) \Bigr), \label{psim} \end{align} where $j=0,1,2$ denotes the order of perturbations. All mode functions other than the zeroth-order one, given in Eqs.~(\ref{u1Green}) and (\ref{u2Green}), vanish at the initial time $\eta_i$. Thus only the zeroth order of Eqs.~(\ref{psip}) and (\ref{psim}) remains at the initial time. This means that the annihilation operators in Eqs.~(\ref{y:annihifull}) and (\ref{A:annihifull}) at the initial time are expressed by the zeroth-order variables, \begin{align} \hat{a}_y(\eta_i,\bm{k}) &=\left(1-\frac{i}{2k\eta_i}\right)e^{-ik\eta_i}\,\hat{c} +\frac{i}{2k\eta_i} e^{ik\eta_i}\,\hat{c}^\dagger, \label{ycRel} \\ \hat{a}_x(\eta_i,\bm{k}) &=\left(1-\frac{i}{2k\eta_i}\right)e^{-ik\eta_i}\,\hat{d} +\frac{i}{2k\eta_i} e^{ik\eta_i}\,\hat{d}^\dagger. \label{AdRel} \end{align} Combining Eqs.
(\ref{ycRel}) and (\ref{AdRel}) with their complex conjugate, we can express the $\hat{c}$ and the $\hat{d}$ by the initial creation and annihilation operators as \begin{eqnarray} \hat{c} &=& \left( 1+\frac{i}{2k\eta_i}\right)e^{ik\eta_i}\,\hat{a}_y(\eta_i,\bm{k} ) -\frac{i}{2k\eta_i} e^{ik\eta_i}\,\hat{a}_y^\dagger (\eta_i,-\bm{k}) \ , \\ \hat{d} &=& \left( 1+\frac{i}{2k\eta_i}\right)e^{ik\eta_i}\,\hat{a}_x(\eta_i,\bm{k} ) -\frac{i}{2k\eta_i} e^{ik\eta_i}\,\hat{a}_x^\dagger (\eta_i,-\bm{k}) \ . \end{eqnarray} Inserting the above back into Eqs. (\ref{y:annihifull}) and (\ref{A:annihifull}), the time evolution of annihilation operator of graviton is described by the Bogoliubov transformation in the form \begin{align} &\hat{a}_y(\eta,\bm{k})= \Biggl[ \biggl( \psi_{p}^{(0)} +\psi_{p}^{(2)} \biggr) \Bigl( 1+\frac{i}{2k\eta_i} \Bigr) e^{ik\eta_i} +\biggl( \psi_{m}^{(0)*} +\psi_{m}^{(2)*} \biggr) \frac{i}{2k\eta_i} e^{-ik\eta_i} \Biggr] \hat{a}_y(\eta_i,\bm{k})\nonumber\\ &\hspace{1.5cm} +\Biggl[ \biggl( \psi_{p}^{(0)} +\psi_{p}^{(2)} \biggr)\Bigl(-\frac{i}{2k\eta_i} \Bigr) e^{ik\eta_i} +\biggl( \psi_{m}^{(0)*} +\psi_{ m}^{(2)*} \biggr) \Bigl(1-\frac{i}{2k\eta_i}\Bigr) e^{-ik\eta_i} \Biggr]\hat{a}_y^\dagger(\eta_i,-\bm{k}) \nonumber\\ &\hspace{1.5cm} +\Biggl[ \psi_{p}^{(1)} \Bigl(1+\frac{i}{2k\eta_i} \Bigr) e^{ik\eta_i} + \psi_{m}^{(1)*} \frac{i}{2k\eta_i} e^{-ik\eta_i} \Biggr]\hat{a}_x(\eta_i,\bm{k}) \nonumber\\ &\hspace{1.5cm} +\Biggl[ \psi_{p}^{(1)} \Bigl(-\frac{i}{2k\eta_i} \Bigr) e^{ik\eta_i} +\psi_{ m}^{(1)*} \Bigl(1-\frac{i}{2k\eta_i}\Bigr) e^{-ik\eta_i} \Biggr]\hat{a}_x^\dagger(\eta_i,-\bm{k}), \label{y:bogoliubov1} \end{align} and the time evolution of annihilation operator of photon is expressed by the Bogoliubov transformation such as \begin{align} &\hat{a}_x(\eta,\bm{k})= \Biggl[ \biggl( \psi_{p}^{(0)} +\psi_{p}^{(2)} \biggr) \Bigl( 1+\frac{i}{2k\eta_i} \Bigr) e^{ik\eta_i} +\biggl( \psi_{m}^{(0)*} +\psi_{ m}^{(2)*} \biggr) \frac{i}{2k\eta_i} e^{-ik\eta_i} 
\Biggr] \hat{a}_x(\eta_i,\bm{k})\nonumber\\ &\hspace{1.5cm} +\Biggl[ \biggl( \psi_{p}^{(0)} +\psi_{p}^{(2)} \biggr)\Bigl(-\frac{i}{2k\eta_i} \Bigr) e^{ik\eta_i} +\biggl( \psi_{m}^{(0)*} +\psi_{m}^{(2)*} \biggr) \Bigl(1-\frac{i}{2k\eta_i}\Bigr) e^{-ik\eta_i} \Biggr]\hat{a}_x^\dagger(\eta_i,-\bm{k}) \nonumber\\ &\hspace{1.5cm} +\Biggl[ \psi_{p}^{(1)} \Bigl(1+\frac{i}{2k\eta_i} \Bigr) e^{ik\eta_i} + \psi_{m}^{(1)*} \frac{i}{2k\eta_i} e^{-ik\eta_i} \Biggr]\hat{a}_y(\eta_i,\bm{k}) \nonumber\\ &\hspace{1.5cm} +\Biggl[ \psi_{p}^{(1)} \Bigl(-\frac{i}{2k\eta_i} \Bigr) e^{ik\eta_i} +\psi_{m}^{(1)*} \Bigl(1-\frac{i}{2k\eta_i}\Bigr) e^{-ik\eta_i} \Biggr]\hat{a}_y^\dagger(\eta_i,-\bm{k}). \label{A:bogoliubov1} \end{align} We see that the Bogoliubov transformations for the graviton and the photon are symmetric under the interchange of the two. The Bogoliubov transformations describe particle production during inflation and the mixing between graviton and photon. Let us introduce a matrix form of the Bogoliubov transformations for the calculations below. The Bogoliubov transformations (\ref{y:bogoliubov1}) and (\ref{A:bogoliubov1}) and their Hermitian conjugates can be combined into a single $4\times 4$ matrix $M$, \begin{eqnarray} \begin{pmatrix} a_y(\eta)\\ a_y^{\dagger}(\eta)\\ a_x(\eta)\\ a_x^{\dagger}(\eta)\\ \end{pmatrix} =M \begin{pmatrix} a_y(\eta_i)\\ a_y^{\dagger}(\eta_i)\\ a_x(\eta_i)\\ a_x^{\dagger}(\eta_i)\\ \end{pmatrix} = \begin{pmatrix} A & B\\ B & A\\ \end{pmatrix} \begin{pmatrix} a_y(\eta_i)\\ a_y^{\dagger}(\eta_i)\\ a_x(\eta_i)\\ a_x^{\dagger}(\eta_i)\\ \end{pmatrix}\ . \hspace{-6mm} \label{bogoliubov} \end{eqnarray} Here, $M$ consists of the $2\times 2$ blocks $A$ and $B$.
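The entries of these blocks are built from the $\psi$'s of Eqs.~(\ref{psip})-(\ref{psim}). As a quick consistency check (a sketch with illustrative $k$ and $\eta$), with $u^{(0)}_{\bm k}$ from Eq.~(\ref{u}) the zeroth-order combinations evaluate to $\psi_p^{(0)}=\bigl(1-\tfrac{i}{2k\eta}\bigr)e^{-ik\eta}$ and $\psi_m^{(0)}=-\tfrac{i}{2k\eta}\,e^{-ik\eta}$, reproducing the coefficients in Eqs.~(\ref{ycRel})-(\ref{AdRel}) at $\eta=\eta_i$ and satisfying the Bogoliubov normalization $|\psi_p^{(0)}|^2-|\psi_m^{(0)}|^2=1$.

```python
import cmath

k, eta = 0.8, -5.2        # illustrative values

def u0(e):
    # zeroth-order mode function, Eq. (u)
    return (1 - 1j/(k*e)) * cmath.exp(-1j*k*e) / cmath.sqrt(2*k)

h = 1e-5
up = (u0(eta + h) - u0(eta - h)) / (2*h)   # u0' by central difference
mom = up + u0(eta)/eta                      # u0' + u0/eta, the momentum part

psi_p = cmath.sqrt(k/2)*u0(eta) + 1j/cmath.sqrt(2*k)*mom   # Eq. (psip), j = 0
psi_m = cmath.sqrt(k/2)*u0(eta) - 1j/cmath.sqrt(2*k)*mom   # Eq. (psim), j = 0

# closed forms of the zeroth-order entries
assert abs(psi_p - (1 - 1j/(2*k*eta))*cmath.exp(-1j*k*eta)) < 1e-8
assert abs(psi_m + (1j/(2*k*eta))*cmath.exp(-1j*k*eta)) < 1e-8

# Bogoliubov normalization |psi_p|^2 - |psi_m|^2 = 1
assert abs(abs(psi_p)**2 - abs(psi_m)**2 - 1) < 1e-8
```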
The block $A$ is composed of the zeroth- and second-order solutions, $A=A_0 +A_2$, with \begin{eqnarray} A_0= \begin{pmatrix} K_0^* & -L_0^* \\ -L_0 & K_0 \\ \end{pmatrix}\,, \end{eqnarray} where we defined \begin{eqnarray} &&K_0=\left( 1+\frac{i}{2k\eta}\right)\left( 1-\frac{i}{2k\eta_i}\right)e^{ik(\eta-\eta_i)} -\frac{1}{4k^2\eta\eta_i }e^{-ik(\eta-\eta_i)}\ ,\\ &&L_0=-\frac{i}{2k\eta_i }\left( 1+\frac{i}{2k\eta}\right) e^{ik(\eta-\eta_i)} +\frac{i}{2k\eta }\left( 1+\frac{i}{2k\eta_i}\right) e^{-ik(\eta-\eta_i)} \ . \end{eqnarray} The second-order part $A_2$ is \begin{eqnarray} A_2= \begin{pmatrix} K_2^*&-L_2^*\\ -L_2&K_2\\ \end{pmatrix}\ , \end{eqnarray} where \begin{eqnarray} K_2&=&\psi^{(2)*}_p \Bigl( 1-\frac{i}{2 k \eta_i} \Bigr) e^{-ik\eta_i} - \psi^{(2)}_m \frac{i}{2 k \eta_i} e^{ik\eta_i}, \\ L_2&=&-\psi^{(2)*}_p \frac{i}{2 k \eta_i} e^{-ik\eta_i} - \psi^{(2)}_m \Bigl( 1+\frac{i}{2 k \eta_i} \Bigr) e^{ik\eta_i}. \end{eqnarray} The block $B$ comes from the first-order solution and is expressed as \begin{eqnarray} B = \begin{pmatrix} K_1^* & -L_1^*\\ -L_1&K_1\\ \end{pmatrix}\,, \end{eqnarray} where we defined \begin{eqnarray} K_1&=&\psi^{(1)*}_p \Bigl( 1-\frac{i}{2 k \eta_i} \Bigr) e^{-ik\eta_i} - \psi^{(1)}_m \frac{i}{2 k \eta_i} e^{ik\eta_i}, \\ L_1&=&-\psi^{(1)*}_p \frac{i}{2 k \eta_i} e^{-ik\eta_i} - \psi^{(1)}_m \Bigl( 1+\frac{i}{2 k \eta_i} \Bigr) e^{ik\eta_i}. \end{eqnarray} \subsection{Inversion of the Bogoliubov transformation} In the previous subsection, we obtained the Bogoliubov transformations that mix the operators $\hat{a}_y(\eta,{\bm k})$, $\hat{a}_x(\eta,{\bm k})$ and their Hermitian conjugates $\hat{a}^\dagger_y(\eta,-{\bm k})$, $\hat{a}^\dagger_x(\eta,-{\bm k})$. The initial state is then defined by \begin{eqnarray} \hat{a}_y(\eta_i,\bm{k}) |\overline{{\rm BD}}\rangle= \hat{a}_x(\eta_i,\bm{k}) |\overline{{\rm BD}}\rangle =0\,.
\label{BD} \end{eqnarray} Here, $|\overline{{\rm BD}}\rangle$ is a vacuum that deviates from the Bunch-Davies vacuum due to the presence of the constant background magnetic field. In order to impose these conditions, we need to invert the Bogoliubov transformations (\ref{y:bogoliubov1}) and (\ref{A:bogoliubov1}) into the form \begin{eqnarray} \hat{a}_y(\eta_i,\bm{k}) &=&\alpha\,\hat{a}_y (\eta,\bm{k})+ \beta\,\hat{a}_y^\dagger(\eta,-\bm{k}) +\gamma\,\hat{a}_x(\eta,\bm{k})+ \delta\,\hat{a}_x^\dagger(\eta,-\bm{k})\,, \label{y:invert}\\ \hat{a}_x(\eta_i,\bm{k}) &=&\gamma \,\hat{a}_y(\eta,\bm{k})+ \delta \,\hat{a}_y^\dagger(\eta,-\bm{k}) +\alpha \,\hat{a}_x (\eta,\bm{k})+ \beta \,\hat{a}_x^\dagger(\eta,-\bm{k})\,, \label{A:invert} \end{eqnarray} where $\alpha$, $\beta$, $\gamma$, $\delta$ are the Bogoliubov coefficients. In order to find these coefficients, we need the inverse of the matrix $M$, which is calculated as \begin{eqnarray} M^{-1}= \begin{pmatrix} \left( A - B A^{-1} B\right)^{-1} & -\left( A B^{-1}A - B\right)^{-1}\\ -\left( A B^{-1}A - B\right)^{-1} & \left( A - B A^{-1} B\right)^{-1}\\ \end{pmatrix} \, .\label{inverse1} \end{eqnarray} From Eqs.~(\ref{y:invert}) and (\ref{A:invert}), $M^{-1}$ is also written as \begin{eqnarray} M^{-1}= \begin{pmatrix} \alpha & \beta & \gamma & \delta \\ \beta^* & \alpha^* & \delta^* & \gamma^* \\ \gamma & \delta & \alpha & \beta \\ \delta^* & \gamma^* & \beta^* & \alpha^* \\ \end{pmatrix} \, . \label{inverse2} \end{eqnarray} By comparing Eq.~(\ref{inverse1}) with (\ref{inverse2}), we can obtain the Bogoliubov coefficients $\alpha$, $\beta$, $\gamma$, $\delta$ numerically. \subsection{Squeezed state} In the previous subsection, we obtained the Bogoliubov coefficients of Eqs.~(\ref{y:invert}) and (\ref{A:invert}).
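The block-inversion formula (\ref{inverse1}) holds for any $M=\begin{pmatrix}A&B\\B&A\end{pmatrix}$ with the required blocks invertible; a short sketch with generic (illustrative, non-physical) $2\times2$ complex blocks confirms $MM^{-1}=1$ blockwise, i.e., $AP+BQ=1$ and $BP+AQ=0$ for the proposed $P$ and $Q$.

```python
# 2x2 complex matrix helpers (lists of rows, pure stdlib)
def mul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def inv(X):
    det = X[0][0]*X[1][1] - X[0][1]*X[1][0]
    return [[ X[1][1]/det, -X[0][1]/det],
            [-X[1][0]/det,  X[0][0]/det]]

# illustrative (not physical) blocks A and B
A = [[1 + 0.5j, 0.2], [-0.3j, 1.1]]
B = [[0.4, -0.1j], [0.25j, 0.5]]

# blocks of M^{-1} according to Eq. (inverse1)
P = inv(sub(A, mul(B, mul(inv(A), B))))            # (A - B A^{-1} B)^{-1}
Q = [[-x for x in row] for row in
     inv(sub(mul(A, mul(inv(B), A)), B))]          # -(A B^{-1} A - B)^{-1}

# M M^{-1} = 1 blockwise: A P + B Q = 1 and B P + A Q = 0
AP, BQ, BP, AQ = mul(A, P), mul(B, Q), mul(B, P), mul(A, Q)
for i in range(2):
    for j in range(2):
        assert abs(AP[i][j] + BQ[i][j] - (1 if i == j else 0)) < 1e-10
        assert abs(BP[i][j] + AQ[i][j]) < 1e-10
```

The off-diagonal condition $BP+AQ=0$ gives $Q=-A^{-1}BP$, and substituting back yields exactly the two inverse blocks quoted in Eq.~(\ref{inverse1}).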
Applying Eqs.~(\ref{y:invert}) and (\ref{A:invert}) to the definition of the initial state~(\ref{BD}) and using the relations $[\hat{a}_y(\eta,{\bm k}),\hat{a}^\dag_y(\eta,-{\bm k}^\prime)]=\delta({\bm k}+{\bm k}^\prime)$\,, $[\hat{a}_x(\eta,{\bm k}),\hat{a}^\dag_x(\eta,-{\bm k}^\prime)]=\delta({\bm k}+{\bm k}^\prime)$ and $[\hat{a}_y(\eta,{\bm k}),\hat{a}_x(\eta,-{\bm k}^\prime)]=0$, the initial state can be written in terms of squeezing parameters $\Lambda$ and $\Xi$ in the form \begin{eqnarray} |\overline{{\rm BD}}\rangle = \prod_{\bm k}\exp\left[\frac{\Lambda}{2}\, \hat{a}_y^\dag (\eta,\bm{k}) \hat{a}_y^\dag (\eta,-\bm{k})+\Xi\,\hat{a}_y^\dag (\eta,\bm{k}) \hat{a}_x^\dag (\eta,-\bm{k}) +\frac{\Lambda}{2}\,\hat{a}_x^\dag (\eta,\bm{k}) \hat{a}_x^\dag (\eta,-\bm{k})\right]|0\rangle,\nonumber \label{BD0} \\ \end{eqnarray} where $|0\rangle$ is the instantaneous vacuum defined by \begin{eqnarray} \hat{a}_y(\eta,{\bm k}) |0\rangle=\hat{a}_x(\eta,{\bm k}) |0\rangle =0 \,. \end{eqnarray} Note that $\Lambda$ and $\Xi$ are complex parameters. The squeezing of graviton-graviton pairs and of photon-photon pairs is described by the same parameter $\Lambda$, because the Bogoliubov transformations (\ref{y:bogoliubov1}) and (\ref{A:bogoliubov1}) are symmetric. The squeezing of graviton-photon pairs is expressed by $\Xi$. This describes a four-mode squeezed state of pairs of the graviton $y$ and the photon $x$. In a different context, a four-mode squeezed state of two free massive scalar fields is discussed in~\cite{Albrecht:2014aga,Kanno:2015ewa}. If we expand the exponential in a Taylor series, we find \begin{eqnarray} |\overline{{\rm BD}}\rangle = \prod_{\bm k} \sum_{p\,,q\,,r=0}^{\infty} \frac{\Lambda^{p+r}\,\Xi^q}{2^{p+r}p!\,q!\,r!} |p+q \rangle_{y,{\bm k}} \otimes |p \rangle_{y,-{\bm k}} \otimes |r \rangle_{x,{\bm k}} \otimes |q+r \rangle_{x,-{\bm k}}\,.
\end{eqnarray} This is a four-mode squeezed state which consists of an infinite number of entangled particles in the ${\cal H}_{y,{\bm k}}\otimes{\cal H}_{y,{-\bm k}}\otimes{\cal H}_{x,{\bm k}}\otimes{\cal H}_{x,-{\bm k}}$ space. In particular, in the high squeezing limit $\Lambda\,,\Xi\rightarrow 1$, the Bunch-Davies vacuum becomes the maximally entangled state from the point of view of the instantaneous vacuum. Now we find the squeezing parameters $\Xi$ and $\Lambda$. The condition $\hat{a}_y(\eta_i,{\bm k})|\overline{{\rm BD}}\rangle=0$ of Eq.~(\ref{BD}) yields \begin{eqnarray} \alpha \Lambda +\beta +\gamma \Xi =0 \ , \qquad \alpha \Xi +\gamma \Lambda +\delta =0\,, \end{eqnarray} and the other condition $\hat{a}_x(\eta_i,{\bm k}) |\overline{{\rm BD}}\rangle=0$ gives the same equations. Then, we obtain the two squeezing parameters $\Lambda$ and $\Xi$ in the form \begin{eqnarray} \Lambda= \frac{\gamma\delta -\beta \alpha}{\alpha^2 -\gamma^2} \ , \qquad \Xi= \frac{\beta\gamma - \alpha\delta}{\alpha^2 -\gamma^2} \, . \label{squeezingparameters} \end{eqnarray} The results of numerical calculations for the squeezing parameters $\Lambda$ and $\Xi$ versus $a(\eta)$ with different values of $k$ are plotted in FIGs.~\ref{graph1} and \ref{graph2}, respectively, where we normalized the scale factor at the end of inflation as $a(\eta_f)=1$. We note that the coupling in Eq.~(\ref{coupling}) is expressed as \begin{align} \lambda(\bm{k}) \equiv \frac{\sqrt{2}}{M_{\rm pl}} k\,|{\bf B}| \sin\theta \, , \label{coupling2} \end{align} where $\theta$ is the angle between the magnetic field and the wave number vectors depicted in FIG.~\ref{Configuration}. The magnitude of $\lambda(\bm{k})$ thus depends on $\theta$, and therefore the squeezing parameters depend on the direction of the wave-number vector of the gravitons for a fixed magnetic field. For simplicity, we take $\theta =\pi/2$ in the following, but we consider the effects of the angle in Section~4.3.
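Eq.~(\ref{squeezingparameters}) is simply the solution of the linear system $\alpha\Lambda+\gamma\Xi=-\beta$, $\gamma\Lambda+\alpha\Xi=-\delta$ quoted above, which can be verified directly; the complex Bogoliubov coefficients below are randomly chosen, purely illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative complex Bogoliubov coefficients (not the physical values).
alpha, beta, gamma, delta = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# Squeezing parameters, Eq. (squeezingparameters).
det = alpha**2 - gamma**2
Lam = (gamma * delta - beta * alpha) / det
Xi = (beta * gamma - alpha * delta) / det

# Check the annihilation conditions that define Lambda and Xi.
assert abs(alpha * Lam + beta + gamma * Xi) < 1e-9
assert abs(alpha * Xi + gamma * Lam + delta) < 1e-9
```

Both defining conditions are satisfied to numerical precision, confirming the quoted closed forms.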
In FIG.~\ref{graph1}, we plotted the squeezing parameter $\Lambda$ as a function of the scale factor $a(\eta)$. We see that the amplitude of $\Lambda$ goes to unity after the horizon exit, and graviton and photon pair production becomes maximal during inflation. That is, maximally entangled pairs of gravitons and photons are produced. FIG.~\ref{graph2} shows that graviton-photon pair production occurs, but the production keeps decreasing after the horizon exit. \begin{figure}[H] \centering \includegraphics[keepaspectratio,width=0.68\linewidth]{graph1.pdf} \renewcommand{\baselinestretch}{3} \caption{The squeezing parameter of gravitons or photons as a function of the scale factor $a(\eta)$. Other parameters are set as $H=10^{14}\,{\rm GeV}, \eta_i=-1\,{\rm GeV}^{-1}, \eta_f=-10^{-14}\,{\rm GeV}^{-1}, a(\eta_i)=10^{-14},a(\eta_f)=1$.} \label{graph1} \end{figure} \noindent \begin{figure}[H] \centering \includegraphics[keepaspectratio,width=0.68\linewidth]{graph2.pdf} \renewcommand{\baselinestretch}{3} \caption{The squeezing parameter of the graviton and photon pair as a function of $a(\eta)$. Other parameters are set as $H=10^{14}\,{\rm GeV}, \eta_i=-1\,{\rm GeV}^{-1}, \eta_f=-10^{-14}\,{\rm GeV}^{-1}, a(\eta_i)=10^{-14},a(\eta_f)=1$.} \label{graph2} \end{figure} \noindent \section{Graviton state at present} The standard lore about PGWs is that the quantum state of PGWs (gravitons) becomes squeezed during inflation due to the mechanism we discussed in the previous section. Hence, it is believed that finding squeezed gravitons would prove inflation. In this section, we show that the quantum state of gravitons can be a mixed state in the presence of magnetic fields. This result tells us that we have to reconsider how to estimate the non-classicality of the primordial gravitational waves. \subsection{Schmidt decomposition} In the previous section, we found that the squeezing of the graviton-photon pair is produced but eventually disappears during inflation.
In this subsection, we reveal the entanglement between the graviton-photon pairs. We compute the entanglement entropy between gravitons and photons by tracing over the photons using the method developed in~\cite{Maldacena:2012xp,Kanno:2014lma}. The initial state is expressed in Eq.~(\ref{BD0}), and in this form it is difficult to trace over the photon degrees of freedom. Thus, we perform the following Bogoliubov transformation \begin{eqnarray} \hat{C}_{y,{\bm k}}=\Phi\,\hat{a}_{y,{\bm k}}+\Psi\,\hat{a}_{y,-{\bm k}}^\dagger \ , \quad ~~ \hat{C}_{x,{\bm k}}=\Upsilon\,\hat{a}_{x,{\bm k}}+\Omega\,\hat{a}_{x,-{\bm k}}^\dagger \ , \label{transform} \end{eqnarray} where $|\Phi|^2-|\Psi|^2=1$, $|\Upsilon|^2-|\Omega|^2=1$, so that the state $|\overline{{\rm BD}}\rangle$ takes the Schmidt form \begin{eqnarray} |\overline{{\rm BD}}\rangle=\prod_{{\bm k}=-\infty}^\infty \exp\bigl[ \,\rho\,\hat{C}^\dagger_{y,{\bm k}}\,\hat{C}^\dagger_{x,-{\bm k}} \bigr] |0'\rangle_{y,{\bm k}}|0'\rangle_{x,-{\bm k}}\,. \label{newBD} \end{eqnarray} Note that we allow different Bogoliubov coefficients $(\Phi\,,\Psi)$ and $(\Upsilon\,,\Omega)$ because $\Lambda$ and $\Xi$ in Eq.~(\ref{BD0}) are complex parameters. Here the new vacuum states are defined by \begin{eqnarray} \hat{C}_{y,{\bm k}}\,|0'\rangle_{y,{\bm k}}=0 \ , \quad \hat{C}_{x,{\bm k}}\,|0'\rangle_{x,{\bm k}}=0\ . \end{eqnarray} Acting with the new operators $\hat{C}_{y,{\bm k}}$ and $\hat{C}_{x,{\bm k}}$ on Eq.~(\ref{newBD}), we obtain the following relations, \begin{eqnarray} &&\hat{C}_{y,{\bm k}}\, |\overline{{\rm BD}}\rangle =\rho\,\hat{C}^\dagger_{x,-{\bm k}}\, |\overline{{\rm BD}}\rangle,\\ &&\hat{C}_{x,{\bm k}}\, |\overline{{\rm BD}}\rangle =\rho\,\hat{C}^\dagger_{y,-{\bm k}}\, |\overline{{\rm BD}}\rangle\,.
\end{eqnarray} By using Eq.~(\ref{transform}), the above relations lead to the equations for the Bogoliubov coefficients, \begin{eqnarray} \begin{pmatrix} \Lambda & 1& 0&-\rho\Xi\\ \Xi&0&-\rho&-\rho\Lambda \\ -\rho^*&-\rho^*\Lambda^*&\Xi^*&0\\ 0&-\rho^*\Xi^*&\Lambda^*&1 \end{pmatrix} \begin{pmatrix} \Phi\\ \Psi\\ \Upsilon^*\\ \Omega^* \end{pmatrix} =0. \label{newbogoliubov} \end{eqnarray} In order to find a nontrivial solution, the determinant of the above $4\times4$ matrix has to be zero. That is, $|\rho|^2$ satisfies \begin{eqnarray} |\rho|^2=Q -\sqrt{Q^2-1}\,, \end{eqnarray} where we have defined \begin{eqnarray} Q =\frac{\left(|\Lambda|^2-1\right)^2+|\Xi|^4-2{\rm Re}(\Xi^2\Lambda^{*2})}{2|\Xi|^2} \ . \nonumber\\ \end{eqnarray} \begin{figure}[H] \centering \includegraphics[keepaspectratio,width=0.68\linewidth]{graph3.pdf} \renewcommand{\baselinestretch}{3} \caption{Plots of the parameter $|\rho (a)|^2$ as a function of $a(\eta)$. Other parameters are set as $H=10^{14}\,{\rm GeV}, \eta_i=-1\,{\rm GeV}^{-1}, \eta_f=-10^{-14}\,{\rm GeV}^{-1},\lambda=10\,{\rm GeV}^2, a(\eta_i)= 10^{-14},a(\eta_f)=1$.} \label{graph3} \end{figure} \noindent In FIG.~\ref{graph3}, we plotted $|\rho|^2$ versus $a(\eta)$ for various values of $k$ under a fixed value of $|{\bm B}|$. Here, $\lambda$ is automatically determined once $k$ is fixed because of Eq.~(\ref{coupling2}), where we take $\theta=\pi/2$. We see that $|\rho|^2$ goes to unity irrespective of the value of $k$ after the horizon exit if the value of $|{\bm B}|$ is fixed. Hence, the squeezing of the graviton-photon pair in the basis $|0'\rangle_{y,{\bm k}}|0'\rangle_{x,-{\bm k}}$ turns out to be almost maximal, while $\Xi$ in the basis of $|0\rangle$ eventually vanishes as shown in FIG.~\ref{graph2}. \subsection{Entanglement entropy} Since gravitons and photons are coupled to each other through $\lambda$, as in Eqs.~(\ref{eom:graviton}) and (\ref{eom:photon}), they are expected to get entangled eventually.
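Since $|\rho|^2=Q-\sqrt{Q^2-1}$ is equivalent to $|\rho|^4-2Q|\rho|^2+1=0$, the quoted branch is the root satisfying $|\rho|^2\le 1$, as required for a normalizable Schmidt state. A quick numerical sketch, with illustrative sample values of $Q\ge 1$:

```python
import numpy as np

# Illustrative values of Q >= 1 (Q as defined above in terms of Lambda and Xi).
for Q in [1.0, 1.5, 10.0, 1e4]:
    rho2 = Q - np.sqrt(Q**2 - 1)          # branch quoted in the text
    # Root of the quadratic |rho|^4 - 2 Q |rho|^2 + 1 = 0 ...
    assert np.isclose(rho2**2 - 2 * Q * rho2 + 1, 0, atol=1e-6)
    # ... lying in the physical range for a normalizable Schmidt form.
    assert 0 < rho2 <= 1
```

The other root, $Q+\sqrt{Q^2-1}$, exceeds unity for $Q>1$ and is therefore unphysical.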
In the previous subsection, we found that the squeezing of the graviton-photon pair becomes almost maximal in the basis of $|0'\rangle_{y,{\bm k}}|0'\rangle_{x,-{\bm k}}$ but eventually vanishes in the basis of $|0\rangle$. In order to clarify whether they get entangled or not, we compute the entanglement entropy as a measure of entanglement. The entanglement entropy is basis independent. We define the density operator of the vacuum $|\overline{{\rm BD}}\rangle$ in Eq.~(\ref{newBD}) by \begin{eqnarray} \sigma &=& |\overline{{\rm BD}}\rangle \langle \overline{{\rm BD}}|\nonumber\\ &=&\left(1-|\rho|^2\right)\prod_{{\bm k},-{\bm k}}\sum_{n',m'=0}^\infty\rho^{n'}\,\rho^{*m'}\,|n'\rangle_{y,{\bm k}}\,|n'\rangle_{x,{-\bm k}}\,\, {}_{y,{\bm k}}\,\langle m'|\,{}_{x,-{\bm k}}\langle m'| \ . \end{eqnarray} The reduced density operator for the gravitons is obtained by tracing over the photon degrees of freedom as \begin{eqnarray} \sigma_y &=& {\rm Tr}_x\,|\overline{{\rm BD}}\rangle \langle \overline{{\rm BD}}|=\sum_i{}_{x,{\bm k'}}\langle\,i\,|\overline{{\rm BD}}\rangle \langle \overline{{\rm BD}}|\,i\,\rangle_{x,{\bm k'}}\nonumber\\ &=&\left(1-|\rho|^2 \right)\sum_{n'=0}^\infty |\rho|^{2n'}\, |n'\,\rangle_{y,{\bm k}}\,\,{}_{y,{\bm k}}\langle n'\,| \label{graviton_state}\ . \end{eqnarray} The entanglement entropy between the graviton and photon can be characterized by \begin{eqnarray} S&=&-{\rm Tr}_y\,\sigma_y \log \sigma_y =-\sum_{n'=0}^\infty\left(1-|\rho|^2\right)|\rho|^{2n'} \Bigl(\log\left(1-|\rho|^2\right)+n'\log|\rho|^2\Bigr) \nonumber\\ &=&-\log \left(1-|\rho|^2\right) - \frac{|\rho|^2}{1-|\rho|^2} \log |\rho|^2 \ . \end{eqnarray} \begin{figure}[H] \centering \includegraphics[keepaspectratio,width=0.68\linewidth]{graph4.pdf} \renewcommand{\baselinestretch}{3} \caption{Entanglement entropy between graviton and photon as a function of $a(\eta)$.
Other parameters are set as $H=10^{14}{\rm GeV}$, $\eta_i=-1{\rm GeV}^{-1}$, $\eta_f=-10^{-14}{\rm GeV}^{-1}$, $a(\eta_i)= 10^{-14}$, and $a(\eta_f)=1$.} \label{graph4} \end{figure} \noindent In FIG.\,\ref{graph4}, we plotted the entanglement entropy for various values of $k$ under a fixed value of $|{\bm B}|$, which clearly shows that the graviton and photon are highly entangled during inflation. As with the result of FIG.~\ref{graph3}, the asymptotic value of $S(a)$ becomes the same irrespective of the value of $k$. \begin{figure}[H] \centering \includegraphics[keepaspectratio,width=0.68\linewidth]{graph5.pdf} \renewcommand{\baselinestretch}{3} \caption{Entanglement entropy of the graviton and photon fields induced by the coupling to the background magnetic field. Other parameters are set as $H=10^{14}\,{\rm GeV}, \eta_i=-1\,{\rm GeV}^{-1},$ and $\eta_f=-10^{-14}\,{\rm GeV}^{-1}$, $a(\eta_i)= 10^{-14}$, and $a(\eta_f)=1$.} \label{graph5} \end{figure} \noindent In FIG.\,\ref{graph5}, the entanglement entropy for various values of $\lambda$ under a fixed value of $k$ is plotted. In this case, different $\lambda$ corresponds to different $|{\bm B}|$ because of Eq.~(\ref{coupling2}), where $\theta=\pi/2$. We see that the entanglement entropy grows fast, but the asymptotic value of $S(a)$ decreases as $\lambda$ increases. We will explain in the next subsection why the asymptotic value becomes smaller as the value of $\lambda$ becomes larger. \subsection{Observational implications} In conventional inflation, the quantum state of gravitons is a squeezed state, that is, a pure state. In our setup, there exists entanglement between gravitons and photons. Therefore, the state of gravitons is a mixed state described by the density operator (\ref{graviton_state}). Then, the spectrum of the number operator is expected to be different. In this subsection, we calculate the spectrum of the graviton number operator.
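The closed form for $S$ given above follows from geometric sums over the thermal-like distribution $p_{n'}=(1-|\rho|^2)|\rho|^{2n'}$; a truncated numerical sum reproduces it. The cutoff and the sample value of $|\rho|^2$ below are illustrative:

```python
import numpy as np

r2 = 0.9  # illustrative value of |rho|^2 < 1

# Closed form quoted in the text.
S_closed = -np.log(1 - r2) - r2 / (1 - r2) * np.log(r2)

# Direct evaluation of S = -sum_n p_n log p_n with p_n = (1 - r2) r2^n.
n = np.arange(2000)
p = (1 - r2) * r2**n
S_series = -np.sum(p * np.log(p))

assert np.isclose(S_closed, S_series)
```

The truncation error is negligible here since $|\rho|^{2n'}$ decays geometrically.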
If there is no interaction between gravitons and photons, the Bogoliubov transformation in Eq.~(\ref{y:invert}) is written only in terms of $\alpha$ and $\beta$. Then the number density of gravitons is described by the graviton squeezing parameter $\Lambda$ as \begin{eqnarray} N_0 \equiv \langle {\rm BD}|\,n\,|{\rm BD}\rangle= \langle {\rm BD}|\,\hat{a}_y^\dag (\eta,\bm{k})\hat{a}_y (\eta,-\bm{k})\,|{\rm BD}\rangle =\frac{|\Lambda|^2}{1-|\Lambda|^2} \ , \label{standard} \end{eqnarray} where $|{\rm BD}\rangle$ is the Bunch-Davies vacuum. The above number density can be estimated analytically as $N_0 \sim 1/4k^2 \eta^2 \sim 0.25\times10^{24}$ for the graviton wave number $k=10^2$~GeV and the time of the end of inflation $\eta_f =10^{-14}$~GeV$^{-1}$~\cite{Maggiore:1999vm}. On the other hand, if gravitons and photons interact with each other, the graviton operator becomes Eq.~(\ref{transform}). Then we need to invert the Bogoliubov transformation as \begin{eqnarray} \hat{a}_{y,{\bm k}}=\Phi^* \hat{C}_{y,{\bm k}}- \Psi\hat{C}_{y,-{\bm k}}^\dagger \ . \end{eqnarray} The number density of gravitons can be calculated as \begin{eqnarray} N_{B} \equiv \langle \overline{{\rm BD}}|\,n\,|\overline{{\rm BD}}\rangle &=& {\rm Tr}_y\,\sigma_y\,\hat{a}_{y,-{\bm k}}^\dagger\, \hat{a}_{y,{\bm k}} = \left(1-|\rho|^2 \right)\sum_{n'=0}^\infty |\rho|^{2n'} \langle n' | \hat{a}_{y,-{\bm k}}^\dagger \hat{a}_{y,{\bm k}} |n' \rangle \nonumber \\ &=& \left(1-|\rho|^2 \right)\sum_{n'=0}^\infty |\rho|^{2n'} \langle n' | \left(\Phi\,\hat{C}^\dag_{y,-{\bm k}}- \Psi^*\hat{C}_{y,{\bm k}}\right) \left(\Phi^* \hat{C}_{y,{\bm k}}- \Psi\,\hat{C}_{y,-{\bm k}}^\dagger\right) |n' \rangle \nonumber \\ &=& \left(1-|\rho|^2 \right)\sum_{n'=0}^\infty |\rho|^{2n'} \left(n' + |\Psi|^2 + 2n' |\Psi|^2 \right) \nonumber \\ &=& \frac{(|\Psi|^2+1)|\rho|^2+|\Psi|^2}{1-|\rho|^2} \label{gravitons} \ .
\end{eqnarray} where we used $|\Phi|^2-|\Psi|^2=1$, and $|\Psi|^2$ is obtained by solving three of the four equations in Eq.~(\ref{newbogoliubov}) as \begin{eqnarray} |\Psi|^2 = \frac{\left|\Lambda + |\rho|^2 \frac{\Xi}{ \Xi^*} \Lambda^*\right|^2}{\left|1-|\Xi|^2 |\rho|^2 +\frac{\Xi}{\Xi^*}\Lambda^{*2} |\rho|^2 \right|^2-\left|\Lambda + |\rho|^2 \frac{\Xi }{ \Xi^*} \Lambda^*\right|^2} \ . \end{eqnarray} \begin{figure}[H] \centering \includegraphics[keepaspectratio,width=0.68\linewidth]{graph6.pdf} \renewcommand{\baselinestretch}{3} \caption{Expectation values of the number operator $N_B$ as a function of $a(\eta)$. Dotted lines represent the case of $\lambda=0\,{\rm GeV}^2$. Other parameters are set as $H=10^{14}\,{\rm GeV}, \eta_i=-1\,{\rm GeV}^{-1},$ and $\eta_f=-10^{-14}\,{\rm GeV}^{-1}$, $a(\eta_i)= 10^{-14}$, and $a(\eta_f)=1$.} \label{graph6} \end{figure} \noindent \begin{figure}[H] \centering \includegraphics[keepaspectratio,width=0.68\linewidth]{graph7.pdf} \renewcommand{\baselinestretch}{3} \caption{Expectation value of the number operator $N_B$ as a function of $a(\eta)$. A dotted line represents the case of $\lambda=0\,{\rm GeV}^2$. Other parameters are set as $H=10^{14}\,{\rm GeV}$, $\eta_i=-1\,{\rm GeV}^{-1},$ and $\eta_f=-10^{-14}\,{\rm GeV}^{-1}$, $a(\eta_i)= 10^{-14}$, and $a(\eta_f)=1$.} \label{graph7} \end{figure} \noindent Plots in FIG.\,\ref{graph6} show a comparison of $N_B$ for $\lambda=0$ (dotted lines) and $\lambda\neq 0$ (solid lines). $N_B$ for various values of $k$ under a fixed value of $|{\bm B}|$ is plotted. We see that the three solid lines for various values of $k$ converge to the same value irrespective of $k$ under a fixed value of $|{\bm B}|$. Namely, the spectrum of the number of gravitons is scale-invariant. In FIG.\,\ref{graph7}, we plotted $N_B$ for various values of $\lambda$ under a fixed value of $k$. We find that the number of gravitons decreases as $\lambda$ increases.
This is because an increase of $\lambda$ corresponds to an increase of $|{\bm B}|$, and as $|{\bm B}|$ increases, the entanglement between gravitons and photons grows faster, as shown in FIG.~\ref{graph5}. As the entanglement grows faster, $N_B$ decreases. Because of this decrease of $N_B$, the asymptotic value of $S(a)$ decreases, as shown in FIG.~\ref{graph5}. In FIG.\,\ref{graph8}, the ratio of Eq.~(\ref{gravitons}) to Eq.~(\ref{standard}) is plotted for various values of $k$ under a fixed value of $|{\bm B}|$. We can read off that the ratio increases by two orders of magnitude as $k$ increases by one order of magnitude, so the ratio is proportional to $k^2$. \begin{figure}[H] \centering \includegraphics[keepaspectratio,width=0.68\linewidth]{graph8.pdf} \renewcommand{\baselinestretch}{3} \caption{Ratios of the expectation values of the number operator with and without the background magnetic field are depicted. Other parameters are set as $H=10^{14}{\rm GeV}, \eta_i=-1{\rm GeV}^{-1},$ and $\eta_f=-10^{-14}{\rm GeV}^{-1}$, $a(\eta_i)= 10^{-14}$, and $a(\eta_f)=1$.} \label{graph8} \end{figure} \noindent In the absence of magnetic fields, it is known that the graviton number density is proportional to $k^{-4}$ at present. In the presence of magnetic fields, we find $N_B \propto k^0$ during inflation; if the gravitons then pass through the radiation-dominated era, the number density decreases as $k^{-2}$. Thus $N_B$ at present would be proportional to $k^{-2}$. In terms of the density parameter~\cite{Maggiore:1999vm}, \begin{eqnarray} \Omega_{\rm GW} (k) = \frac{d\,\rho_{\rm GW}}{d\log k} = \frac{k^4 }{\pi^2}\ \langle \overline{{\rm BD}}|\,n\,|\overline{{\rm BD}}\rangle_{\rm present} \ , \end{eqnarray} where the number density of gravitons at present is expressed by $ \langle \overline{{\rm BD}}|\,n\,|\overline{{\rm BD}}\rangle_{\rm present}\propto k^{-2}$, we can expect that the power spectrum at present becomes \begin{eqnarray} \Omega_{\rm GW} (k) \propto k^2 \ .
\end{eqnarray} Thus, a blue spectrum is expected to be observed. This means that it is difficult to observe primordial gravitational waves on the CMB scales. Next, let us discuss the angular dependence of the power spectrum of PGWs. Since $\lambda({\bm k})$ depends on $\sin \theta$ as shown in Eq.~(\ref{coupling2}), we can obtain the angular dependence of the power spectrum of PGWs once we find its $\lambda({\bm k})$ dependence. In fact, we can read off the $\lambda({\bm k})$ dependence by fitting the numerical results in FIG.\,\ref{graph8} as \begin{eqnarray} \Omega_{\rm GW} (k) \sim 10^{25} \lambda^{-4\sqrt{1+\tanh{({\log{\lambda}}})}} \,. \end{eqnarray} Then, using Eq.~(\ref{coupling2}), we can read off the angular dependence of the power spectrum. It would be intriguing to understand this statistical anisotropy by comparing it with other mechanisms producing statistical anisotropy~\cite{Watanabe:2009ct,Watanabe:2010fh,Dulaney:2010sq,Gumrukcuoglu:2010yc}. \section{Conclusion} We studied primordial gravitational waves (PGWs) in the presence of magnetic fields that survive during inflation. In contrast to conventional inflation, where only the PGWs are highly squeezed, we considered a system in which the electromagnetic fields are highly squeezed as well. We showed that graviton-to-photon conversion and its inverse never end as long as inflation lasts, and that gravitons and photons get highly entangled. We derived the reduced density matrix of the gravitons and used it to calculate their entanglement entropy. We revealed that quantum states of the primordial gravitons observed today are not squeezed (pure) states but mixed states. Consequently, the prediction for PGWs becomes quite different. That is, the number of primordial gravitons is significantly decreased due to the entanglement between gravitons and photons. Hence, the power spectrum of PGWs is not scale-invariant but blue, $\Omega_{\rm GW}(k) \propto k^2$.
Remarkably, due to the specific direction of the magnetic field, statistical anisotropy appears in the PGW spectrum. Our findings have important implications for the quantum state of primordial gravitons. So far, the states of primordial gravitons have been regarded as squeezed pure states. However, if magnetic fields had coupled with gravitons during inflation, the primordial gravitons observed today would be mixed states. Then the estimation of observables has to be changed. At least, the power spectrum of the PGWs is significantly decreased on large scales due to the entanglement with the magnetic field during inflation. This result makes it difficult to measure the quantum nature of gravitons today. Hence, we need to come up with a new strategy to probe the non-classical PGWs. Our results also open up the possibility of probing primordial magnetic fields through observations of the non-classicality of primordial gravitational waves. As we mentioned below Eq.~(\ref{f}), we considered the coupling parameter $c=-1/2$ instead of the scale-invariant one $c=-1$. In the case of $c=-1/2$, the physical magnetic fields decay as $fB\propto 1/a$, which is slower than the normal scaling $\propto 1/a^2$. On the other hand, in the case of $c=-1$, the physical magnetic fields $fB$ do not decay during inflation. Hence, we can expect even more drastic effects on the quantum state of gravitons. We leave the analysis of this case for future work. \section*{Acknowledgments} S.\ K. was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Number JP22K03621. J.\ S. was in part supported by JSPS KAKENHI Grant Numbers JP17H02894, JP17K18778, JP20H01902, JP22H01220. K.\ U. was supported by the JSPS KAKENHI Grant Number 20J22946. \printbibliography \end{document}
\section{Introduction} (Ga,Mn)As and related ferromagnetic semiconductors are unique due to their dilute moment nature and the strong spin-orbit coupling.\cite{Matsukura:2002_a,Jungwirth:2006_a} Doped with only $\sim$1-10\% of Mn magnetic moments, the saturation magnetization, $M_s$, and the magnetic dipole interaction fields are $\sim$100-10 times weaker in these materials than in conventional ferromagnets. This could make possible dense integration of ferromagnetic semiconductor microelements with minimal dipolar cross-links. Despite the low $M_s$ the magnetic anisotropy fields, $H_a$, routinely reach $\sim$10~mT due to the large, spin-orbit coupling induced magnetocrystalline terms.\cite{Dietl:2001_b,Abolfath:2001_a} The magnetocrystalline anisotropy can, therefore, take the role normally played by dipolar shape anisotropy fields in the conventional systems. The combination of appreciable and tunable $H_a$ and low $M_s$ leads to outstanding micromagnetic characteristics. One particularly important example is the orders of magnitude lower critical current in the spin-transfer-torque magnetization switching\cite{Sinova:2004_b,Chiba:2004_b} than observed for dense moment conventional ferromagnets, which follows from the approximate scaling of $j_c\sim H_aM_s$. Critical currents for domain wall switching of the order 10$^{5}$~Acm$^{-2}$ have been reported and the effect thoroughly explored in perpendicularly magnetized (Ga,Mn)As thin film devices at temperatures close to the Curie temperature.\cite{Yamanouchi:2004_a,Chiba:2006_a,Yamanouchi:2006_a} Here we demonstrate that it is possible to locally tune and control spin-orbit coupling induced magnetocrystalline anisotropies in (Ga,Mn)As, which is achieved in our devices by lithographically producing strain relaxation. This is the central result of our work and it represents the necessary prerequisite for future highly integrated microdevices fabricated in the dilute-moment ferromagnets. 
It also makes possible a range of new studies of extraordinary magnetotransport and magnetization dynamics effects in such systems. As a demonstration we link the achieved local control of magnetocrystalline anisotropy with a study of current induced domain wall switching which is currently one of the most hotly debated areas of theoretical and experimental spintronics research.\cite{Freitas:1984_a,Yamanouchi:2004_a,Yamaguchi:2004_a,Saitoh:2004_a,Tatara:2004_a,Li:2004_b,Tatara:2004_b,Barnes:2005_a,Thiaville:2005_a,Chiba:2006_a,Yamanouchi:2006_a,Gould:2006_a,Hayashi:2006_a,Thomas:2006_a,Dugaev:2006_a,Xiao:2006_b,Ohe:2006_b,Araujo:2006_b,Duine:2006_a} We report in-plane domain-wall switchings well below the Curie temperature at $j_c\sim$10$^{5}$~Acm$^{-2}$ whose characteristics strongly depend on the locally induced changes of magnetic anisotropy. The phenomenology of the current induced switching we observe shows signatures of domain wall spin-transfer-torque effects. The paper is organized as follows: In Section~\ref{relax_exp} we introduce the studied (Ga,Mn)As microstructures and the anisotropic magnetoresistance (AMR) technique for detecting local magnetization orientation along the channels.\cite{Hayashi:2006_a} This technique is particularly useful in dilute moment ferromagnets where direct imaging methods, such as the magneto-optical Kerr effect, lack the required sensitivity due to the low $M_s$. Numerical simulations of the lattice relaxation in the microbars and microscopic calculations of the corresponding changes of magnetocrystalline anisotropies are discussed in Section~\ref{relax_theor}. Current induced switching experiments in our structures with locally controlled anisotropies are presented in Section~\ref{dw}. A brief summary of the main results is given in Section~\ref{summary}. 
\section{Lattice relaxation and local control of magnetic anisotropy} \subsection{Experiment} \label{relax_exp} Fig.~\ref{figure1} shows scanning electron micrographs of one of the devices studied. The structure consists of a macroscopic Van der Pauw device and an L-shaped channel patterned on the same wafer, the arms of which are Hall-bars aligned along the [1$\overline{1}$0] and [110] directions. The trench-isolation patterning was done by e-beam lithography and reactive ion etching in a 25~nm thick Ga$_{0.95}$Mn$_{0.05}$As epilayer, which was grown along the [001] crystal axis on a GaAs substrate. Results for two samples are reported: device~A(B) has 4(1)~$\mu$m wide, 80(20)~$\mu$m long Hall bars. Isolated magnetic elements with the dimensions of these Hall bars and $M_s\sim$50~mT of the Ga$_{0.95}$Mn$_{0.05}$As would have in-plane shape anisotropy fields below $\sim$1~mT, which is an order of magnitude lower than the magnetocrystalline anisotropy fields. In-plane shape anisotropies are further reduced in our devices as they are defined by narrow (200~nm) trenches with the remaining magnetic epilayer left in place. The Curie temperature of 100~K was obtained from Arrott plots of anomalous Hall data. A hole density of 5$\times$10$^{20}$cm$^{-3}$ was estimated from high-field Hall measurements. At this doping the compressive strain in the Ga$_{0.95}$Mn$_{0.05}$As epilayer grown on the GaAs substrate produces a strong magnetocrystalline anisotropy which forces the magnetization vector to align parallel with the plane of the magnetic epilayer.\cite{Dietl:2001_b,Abolfath:2001_a} Magnetization orientations in the individual microbars are monitored locally by measuring longitudinal and transverse components of the AMR at in-plane magnetic fields. The magnetization rotation experiments at saturation magnetic field measured on device~B and on the macroscopic Van der Pauw device are presented in Figs.~\ref{figure2}(a) and (b).
(For the detailed discussion of the origins of the AMR and microscopic modeling of this extraordinary magnetoresistance coefficient in (Ga,Mn)As see Ref.~\onlinecite{Rushforth:2007_a}.) \begin{figure}[h] \vspace{-1.5cm} \hspace*{-.8cm}\includegraphics[width=1.2\columnwidth,angle=-0]{Fig01.pdf} \vspace*{-1.5cm} \caption{(a) Scanning electron micrograph of the L-shaped microdevice~B and the macroscopic Van der Pauw device. (b) Detail of the L-shaped microdevice with the longitudinal (L) and transverse (T) resistance contacts in the bars and the corner (C) resistance contacts. Positive hole current in the p-type (Ga,Mn)As is defined to propagate from the [1$\overline{1}$0]-bar to the [110]-bar.} \label{figure1} \end{figure} Examples of magnetoresistance measurements for external magnetic field sweeps in which the field angle $\theta$, measured from the [1$\overline{1}$0]~axis, is constant are shown in Figs.~\ref{figure2}(c) and (d). The strongly $\theta$-dependent low-field magnetoresistance is attributed to magnetization rotations. At high fields, the magnetoresistance becomes purely isotropic, i.e., the differences between resistances for different angles $\theta$ become independent of the magnitude of the external field. This property and the much smaller magnitude of the isotropic magnetoresistance compared to the low-field anisotropic magnetoresistance allows us to use the high-field measurements in Figs.~\ref{figure2}(a),(b) for determining the one-to-one correspondence between a change in the low-field resistance and a change in magnetization orientation. Note that the 45$^{\circ}$ phase shift between the longitudinal and transverse AMR traces (see Figs.~\ref{figure2}(a),(b)) allows us to determine unambiguously the change in the magnetization angle if both resistance components are measured simultaneously.
The technique of detecting magnetization rotations via AMR measurements is exploited in Section~\ref{dw} where we compare field induced and current induced magnetization switchings. Importantly, the multiterminal design of our L-shaped microbars also allows us to apply this electrical measurement of magnetization angle locally at the corner and at different parts of the L-shaped Hall bars and, therefore, to track the propagation of domain walls if present in the system. In this section we use the fixed-$\theta$ magnetoresistance measurements to first determine local magnetic anisotropies in the individual microbars. Values of $\theta$ corresponding to easy-axis directions have the smallest low-field magnetoresistance. For values of $\theta$ not corresponding to easy-axis directions the magnetization undergoes a (partially) continuous rotation at low fields resulting in different orientations, and hence different measured resistances, at saturation and remanence. We find that the technique can be used to determine the easy-axis directions within $\pm 1^{\circ}$. The effect of microfabrication on the magnetic anisotropy is apparent in Fig.~\ref{figure3}. In the bulk, magnetization angle $30^{\circ}$ corresponds to an easy-axis while $7^{\circ}$ and $55^{\circ}$ are significantly harder. For device~B, $7^{\circ}$ is an easy-axis in the [1$\overline{1}$0]-bar and $55^{\circ}$ is an easy-axis in the [110]-bar. All easy-axes found in devices~A and B and in the bulk are summarized in Tab.\ref{tab}. The bulk material has the cubic anisotropy of the underlying zincblende structure plus an additional uniaxial [1$\overline{1}$0] anisotropy, as is typical of (Ga,Mn)As epilayers.\cite{Sawicki:2004_a} This results in two easy-axes tilted by 15$^{\circ}$ from the [100] and [010] cube edges towards the [1$\overline{1}$0] direction.
In the microdevices, the easy-axes are rotated from their bulk positions towards the direction of the respective bar and the effect increases with decreasing bar width. \begin{figure}[h] \hspace*{-0.8cm}\includegraphics[width=1.2\columnwidth,angle=-0]{Fig02.pdf} \caption{Device B longitudinal (a) and transverse (b) AMRs measured at 4.2~K in a rotating 4~T in-plane field with the field angle measured from the [1$\overline{1}$0] axis, and bulk transverse AMR measured in the Van der Pauw device with current lines oriented along the [010] axis. ($\Delta\rho\equiv \rho-\overline{\rho}$ where $\overline{\rho}$ is the average value over all angles.) In-plane, fixed-angle field sweep measurements of the longitudinal magnetoresistances of the (c) [1$\overline{1}$0]-bar and (d) [110]-bar of device~B. (Same average resistances as in (a) and (b) are subtracted to obtain $\Delta R$.)} \label{figure2} \end{figure} \begin{figure}[h] \vspace{-1cm} \hspace*{-4cm}\includegraphics[width=1.8\columnwidth,angle=0]{Fig03.pdf} \vspace{-1cm} \caption{Comparison of the low-field measurements at 4.2~K of the transverse resistance in the bulk Van der Pauw device (upper panel) and of the longitudinal resistance of the [1$\overline{1}$0] and [110]-bars in device B (lower panels).} \label{figure3} \end{figure} \begin{table}[h] \begin{tabular}{c|c|c|c|c|c} sample & bulk & A [1$\overline{1}$0] & A [110] & B [1$\overline{1}$0] & B [110] \\ \hline easy-axis angle & $\pm30^{\circ}$ & $\pm15^{\circ}$ & $\pm36^{\circ}$ & $+7^{\circ}$,$-8^{\circ}$ & $+55^{\circ}$,$-63^{\circ}$ \\ \end{tabular} \caption{Easy-axes angles, measured from the [1$\overline{1}$0] crystal direction, determined by magnetoresistance measurements in the macroscopic Van der Pauw device (bulk) and in the [1$\overline{1}$0] and [110]-bars of the L-shaped devices~A and B.} \label{tab} \end{table} \subsection{Theory} \label{relax_theor} The local changes in the magnetocrystalline anisotropy can be understood in the following way.
Ga$_{0.95}$Mn$_{0.05}$As epilayers grown on GaAs substrate are compressively strained in the (001) plane with the typical value of the strain parameter $f\equiv(a^{\ast}_{GaMnAs}-a^{\ast}_{GaAs})/a^{\ast}_{GaAs}\approx 0.2-0.3$\%, where $a^{\ast}_{GaAs}$ and $a^{\ast}_{GaMnAs}$ are the lattice parameters of the cubic fully relaxed GaAs and (Ga,Mn)As film, respectively. With the (Ga,Mn)As material removed in the trenches along the bars, the lattice can relax in the transverse direction and the corresponding extension can be roughly estimated as $f t/w\sim 0.01\%$, where $t=25$~nm is the thickness of the (Ga,Mn)As film and $w$ is the bar width. On a quantitative level, the strength of the lattice relaxation in the microbars is obtained from numerical elastic theory simulations for the realistic sample geometry. (GaAs values of the elastic constants are considered for the whole wafer including the Ga$_{0.95}$Mn$_{0.05}$As epilayer.) Results of such calculations are illustrated in Fig.~\ref{figure4} for the [1$\overline{1}$0]-bar of device B. In panel (a) we show the strain component along the growth-direction [001]-axis with respect to the lattice parameter of a fully relaxed cubic GaAs, $e_{[001]}=(a_{[001]}-a^{\ast}_{GaAs})/a^{\ast}_{GaAs}$. Since all strain components scale linearly with $f$ we plot $e_{[001]}/f$. The figure highlights the growth induced lattice matching strain; because of the in-plane compression of the (Ga,Mn)As lattice the elastic medium reacts by expanding the lattice parameter in the growth direction, as compared to $a^{\ast}_{GaMnAs}$, i.e., $e_{[001]}/f>1$. Within the plane, the lattice can relax only in the direction perpendicular to the microbar orientation. The corresponding strain component, calculated again with respect to the GaAs, is plotted in Fig.~\ref{figure4}(b) over the entire cross-section of device B and, in Figs.~\ref{figure4}(c) and (d), along various cuts through the [001]-[110] plane. 
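The order-of-magnitude estimate $f\,t/w$ quoted above can be reproduced directly (taking $f$ in the middle of the quoted 0.2--0.3\% range and a 1~$\mu$m bar width):

```python
# Back-of-the-envelope estimate of the patterning induced in-plane lattice
# relaxation: roughly a fraction t/w of the growth strain f can relax once
# the material beside the bar is etched away.
f = 0.25e-2   # growth strain, mid-range of the quoted 0.2-0.3 %
t = 25e-9     # (Ga,Mn)As film thickness [m]
w = 1e-6      # bar width [m]

e_relax = f * t / w
print(f"{100 * e_relax:.3f} %")   # -> 0.006 %, same order as the ~0.01 % above
```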
While in the center of the bar the in-plane relaxation is relatively weak, i.e. the lattice parameter remains similar to that of the GaAs substrate, the lattice is strongly relaxed near the edges of the bar. Averaged over the entire cross-section of the (Ga,Mn)As bar we obtain a relative in-plane lattice relaxation of several hundredths of a per cent, i.e., of the same order as estimated by the $f t/w$ expression. The microscopic magnetocrystalline energy calculations discussed in the following paragraphs confirm that these seemingly small lattice distortions can fully account for the observed easy-axis rotations in the strongly spin-orbit coupled (Ga,Mn)As. Our microscopic calculations of the magnetization angle dependent total energies are based on combining the six-band ${\bf k}\cdot {\bf p}$ description of the GaAs host valence band with a kinetic-exchange model of the coupling to the local Mn$_{\rm Ga}$ $d^5$-moments.\cite{Dietl:2001_b,Abolfath:2001_a} The theory is well suited for the description of spin-orbit coupling phenomena at the top of the valence band, whose spectral composition and related symmetries are dominated, as in the familiar GaAs host, by the $p$-orbitals of the As sublattice. The ${\bf k}\cdot {\bf p}$ modeling also provides straightforward means of accounting for the effects of lattice strains on the (Ga,Mn)As band structure.\cite{Dietl:2001_b,Abolfath:2001_a} (As in the above macroscopic simulations we assume that the elastic constants in (Ga,Mn)As have the same values as in GaAs.)
This theory, which uses no adjustable free parameters, describes accurately the sign and magnitude of the AMR data in Fig.~\ref{figure2}.\cite{Rushforth:2007_a} It has also explained the previously observed transitions between in-plane and out-of-plane easy magnetization orientations in similar (Ga,Mn)As epilayers grown under compressive and tensile strains, and provided a consistent account of the signs and magnitudes of the corresponding AMR effects.\cite{Jungwirth:2006_a} For the modeling of the magnetocrystalline energy of the microbars we assume a homogeneous strain in the (Ga,Mn)As layer corresponding to the average value of $e_{[110]}$ obtained in the macroscopic elastic theory simulations. The input parameters of the microscopic calculations\cite{Dietl:2001_b,Abolfath:2001_a} are then the strain components, relative to the fully relaxed cubic (Ga,Mn)As lattice, in the [100]-[010]-[001] ($x$-$y$-$z$) coordinate system, given by: \begin{eqnarray} e_{ij} &=& \left(\begin{array}{ccc} e_{xx} & e_{xy}& 0 \\ e_{yx} & e_{yy}& 0 \\ 0 & 0 & e_{zz} \\ \end{array}\right)\nonumber \\ \nonumber \\ &=& \left(\begin{array}{ccc} \frac{e_{[110]}}{2} - f& \pm \frac{e_{[110]}}{2}& 0 \\ \pm \frac{e_{[110]}}{2} & \frac{e_{[110]}}{2} - f & 0 \\ 0 & 0 & e_{[001]} - f \\ \end{array}\right)\;, \end{eqnarray} where the upper and lower signs correspond to the [1$\overline{1}$0]-bar and the [110]-bar, respectively. In Fig.~\ref{figure5}(b) we plot the calculated magnetocrystalline energies as a function of the in-plane magnetization angle for $f=0.3$\% and $e_{xy}$ ranging from zero (no in-plane lattice relaxation) to typical values expected for the [1$\overline{1}$0]-bar ($e_{xy}>0$) and for the [110]-bar ($e_{xy}<0$). Consistent with the experiment, the minima at [100] and [010] for $e_{xy}=0$ move towards the [1$\overline{1}$0] direction for lattice expansion along the [110] direction ($e_{xy}>0$) and towards the [110] direction for lattice expansion along the [1$\overline{1}$0] direction ($e_{xy}<0$).
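The strain tensor above is straightforward to assemble. In the sketch below the numerical values of $e_{[110]}$ and $e_{[001]}$ are illustrative assumptions only (of the magnitude discussed in the text), not the simulated device values:

```python
import numpy as np

def strain_tensor(e_110, e_001, f, bar="[1-10]"):
    """Strain tensor relative to fully relaxed cubic (Ga,Mn)As in the
    [100]-[010]-[001] coordinate system, transcribing the matrix above.
    The sign of the shear entry distinguishes the two bar orientations."""
    sign = 1.0 if bar == "[1-10]" else -1.0   # [110]-bar -> lower sign
    diag = 0.5 * e_110 - f                    # e_xx = e_yy
    e_xy = sign * 0.5 * e_110
    return np.array([[diag, e_xy, 0.0],
                     [e_xy, diag, 0.0],
                     [0.0,  0.0,  e_001 - f]])

# Illustrative numbers: f = 0.3 %, average e_110 = 0.04 %, and e_001
# slightly above f (the lattice expands along the growth direction).
e = strain_tensor(e_110=0.04e-2, e_001=0.35e-2, f=0.3e-2)
print(e[0, 1])   # e_xy = +0.0002, i.e. +0.02 % for the [1-10]-bar
```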
Note that the asymmetry between the experimental easy-axes rotations in the two bars is due to a [110]-uniaxial component, already present in the bulk material, whose microscopic origin is not known but can be modeled\cite{Sawicki:2004_a} by an intrinsic (not induced by micropatterning) strain $e^{bulk}_{xy}\sim+0.01$\%. \begin{figure}[h] \hspace*{-1.2cm}\includegraphics[width=1.3\columnwidth,angle=0]{Fig04.pdf} \caption{Numerical simulations of lattice parameters in the 1~$\mu$m wide [1$\overline{1}$0]-bar of device B defined by 200~nm wide and 75~nm deep trenches in the 25~nm thick (Ga,Mn)As film on a GaAs substrate. (a) Strain component along the [001]-axis with respect to the lattice parameter of a fully relaxed cubic GaAs, $e_{[001]}=(a_{[001]}-a^{\ast}_{GaAs})/a^{\ast}_{GaAs}$. The epitaxial growth induced strain parameter $f$ is defined as $f=(a^{\ast}_{GaMnAs}-a^{\ast}_{GaAs})/a^{\ast}_{GaAs}$, where $a^{\ast}_{GaMnAs}>a^{\ast}_{GaAs}$ is the lattice parameter of the cubic fully relaxed (Ga,Mn)As film. (b) Same as (a) for the in-plane strain component $e_{[110]}$ in the direction perpendicular to the bar orientation. (c) and (d) Strain components $e_{[110]}$ along different cuts through the [001]-[110] plane. The cuts and the corresponding $e_{[110]}/f$ curves are highlighted by colored arrows in (b) and the corresponding color coding of curves in (c) and (d).} \label{figure4} \end{figure} \begin{figure}[h] \hspace*{-0cm}\includegraphics[width=1\columnwidth,angle=-0]{Fig05.pdf} \caption{(a) Schematics of the easy-axes orientations in the [1$\overline{1}$0] and [110]-bars of the L-shaped devices A and B. Arrows indicate the direction and strength of the patterning induced lattice relaxation.
(b) Theoretical magnetocrystalline energies as a function of the in-plane magnetization angle for zero shear strain (black line), for $e_{xy}=0.004,\ldots,0.02$\% (red lines) corresponding to lattice extension along the [110] axis, and for $e_{xy}=-0.004,\ldots,-0.02$\% (blue lines) corresponding to lattice extension along the [1$\overline{1}$0] axis. The magnetic easy-axes at $e_{xy}=0$, 0.02\%, and -0.02\% are highlighted by black, red, and blue arrows, respectively. Lattice deformations breaking the [1$\overline{1}$0]-[110] symmetry of the microscopic magnetocrystalline energy profile are illustrated by the diamond-like unit cells extended along the [110] axis for the [1$\overline{1}$0]-bar (red diamond) and along the [1$\overline{1}$0] axis for the [110]-bar (blue diamond). } \label{figure5} \end{figure} \section{Demonstration in current induced switching} \label{dw} The L-shaped geometry of our devices is well suited for a systematic study of the link between the locally adjusted magnetic anisotropies in the individual microbars and their current induced switching characteristics. Apart from the distinct magnetocrystalline anisotropy fields, the two bars in each device have identical material parameters and lithographical dimensions. They can also be expected to share a common domain-wall nucleation center at the corner of the L-shaped channel, since in this region the lattice relaxation effects and the corresponding enhancement of the magnetocrystalline anisotropies are less pronounced. In addition, domain wall nucleation at the corner can be expected to be further supported by enhanced current induced heating in this part of the device. The basic phenomenology of the current induced switchings that we observe in all our L-shaped microbars is illustrated in Figs.~\ref{figure6} and \ref{figure7}. The particular field-assisted switching data plotted in the figures were measured in the [110]-bar of device A at $\theta=7^{\circ}$.
At this off-easy-axis angle the current induced switching can be easily triggered and detected due to the hysteretic bistable character of the low field magnetization and the clear AMR signal upon reversal (see Fig.~\ref{figure7}(a)). We start by assessing the role of heating in the current induced switching experiments. Figs.~\ref{figure6}(a) and (b) compare the temperature dependence of the longitudinal resistance at low current density (10$^{3}$~Acm$^{-2}$) with the dependence on current density measured in liquid helium. As seen from the plots, the maximum current density of 1$\times$10$^{6}$~Acm$^{-2}$ used in the experiments corresponds to heating the sample by approximately 20~K, which is well below the Curie temperature of 100~K. Nevertheless, near the switching fields, heating can suppress the effective barrier between metastable and stable states and trigger thermally induced reversals; these effects have to be considered when analyzing the current induced switching experiments below. The measurements presented in Figs.~\ref{figure7}(b)-(f) were performed by first applying a saturation field and then reversing the field and setting it to a value close to but below the switching field in the field-sweep experiment (see Fig.~\ref{figure7}(a)). Then, the first current ramp was applied, which triggered the reversal, followed by subsequent control current ramps of the same polarity which showed no further changes in the magnetization. A constant current sweep rate of $5\times 10^{4}$~Acm$^{-2}$s$^{-1}$ was used in all experiments. In Figs.~\ref{figure7}(b)-(f) we plot the difference, $\delta R$, between the resistances of the first and the subsequent current ramps. We note that no switchings were observed in these experiments up to the highest applied currents in the [1$\overline{1}$0]-bar.
In this bar with the stronger magnetocrystalline anisotropy, the magnitude of the low current (10$^{3}$~Acm$^{-2}$) switching field at $\theta=7^{\circ}$ is $\approx 8$~mT, as compared to the $\approx 5.5$~mT switching field in the [110]-bar. First we discuss the data in Fig.~\ref{figure7}(b) and (c), taken at -4~mT external field and negative current ramps. The two independent experiments (panels (b) and (c), respectively) performed under nominally identical conditions demonstrate the high degree of reproducibility achieved in our devices. This includes the step-like features which we associate with domain wall depinning/pinning events preceding full reversal. To understand this process in more detail we complement the longitudinal (black curve) and transverse (red curve) resistance measurements in the [110]-bar with the resistance measurements at the corner (blue curve) of the L-shaped channel. The schematic plot of the respective voltage probes is shown in the inset. The first magnetization switching event at $j\approx-5\times$10$^{5}$~Acm$^{-2}$ is detected by the step in the $\delta R_C$ signal, i.e., it occurs in the corner region between the $R_C$ contacts. For current densities in the range between $j\approx-5\times$10$^{5}$~Acm$^{-2}$ and $j\approx-6\times$10$^{5}$~Acm$^{-2}$ the domain wall remains pinned in the corner region. The next domain wall propagation and pinning event in $\delta R_C$ is observed between $j\approx-6\times$10$^{5}$~Acm$^{-2}$ and $j\approx-7\times$10$^{5}$~Acm$^{-2}$, and for $|j|>7\times$10$^{5}$~Acm$^{-2}$ the region between the $R_C$ contacts is completely reversed. The depinning events at $j\approx-5\times$10$^{5}$~Acm$^{-2}$ and $j\approx-6\times$10$^{5}$~Acm$^{-2}$ are also registered by the $R_L$ and $R_T$ contacts through noise spikes in the respective $\delta R_L$ and $\delta R_T$ signals.
However, beyond these spikes, $\delta R_L$ and $\delta R_T$ remain constant for $|j|<7\times$10$^{5}$~Acm$^{-2}$, indicating that the domain wall has not reached the section of the [110]-bar between the $R_L$ contacts at these current densities. The constant $\delta R_C$ and the step-like changes in $\delta R_L$ and $\delta R_T$ at $|j|>7\times$10$^{5}$~Acm$^{-2}$ are signatures of the domain wall leaving the corner section and entering the part of the [110]-bar between the $R_L$ contacts. The reversal of this part is completed at $j\approx-8\times$10$^{5}$~Acm$^{-2}$. Note that both $\delta R_L$, averaging over the whole bar between the longitudinal contacts, and $\delta R_T$, reflecting the local structure near the respective transverse contacts, show switching at the same current, and the sense and magnitude of the overall change in $\delta R_L$ and $\delta R_T$ are consistent with those observed in the field sweep measurement (see Fig.~\ref{figure7}(a)). This indicates that the contacts have a negligible effect on the anisotropy in this bar and allows us to unambiguously determine the magnetization angles of the initial state, $39\pm1^{\circ}$, and of the final state, $211\pm1^{\circ}$. This -4~mT field assisted current induced switching is not observed at positive current ramps up to the highest experimental current density of $j=1\times$10$^{6}$~Acm$^{-2}$, which indicates that spin-transfer-torque effects may be contributing to the reversal.
Note also that the domain wall propagates in the direction opposite to the applied hole current, in agreement with previous spin-transfer-torque studies of perpendicularly magnetized (Ga,Mn)As films.\cite{Chiba:2006_a} (The anomalous direction of the domain wall propagation is assigned to the antiferromagnetic alignment of hole spins with respect to the total moment in (Ga,Mn)As.\cite{Yamanouchi:2004_a,Chiba:2006_a,Yamanouchi:2006_a}) \begin{figure}[h] \hspace*{-1.3cm}\includegraphics[width=1.3\columnwidth,angle=0]{Fig06.pdf} \caption{ (a) Temperature dependence of ${\rm rel}^T R_L \equiv[R_L(T)-R_L(4.2)]/R_L(4.2)$ at current density 10$^{3}$~Acm$^{-2}$. For completeness, ${\rm rel}^T R_L$ over a wide range of temperatures below and above the Curie temperature is shown in the inset. (b) First (solid line) and second (dashed line) current ramps at -4~mT field applied along $\theta=7^{\circ}$; relative resistances are plotted with respect to the zero-current resistance in the first ramp. Switching at $j_c\approx-7.5\times$10$^{5}$~Acm$^{-2}$ is marked. } \label{figure6} \end{figure} \begin{figure}[h] \vspace*{-.5cm} \hspace*{-.2cm}\includegraphics[width=1.05\columnwidth,angle=-0]{Fig07.pdf} \vspace{-.5cm} \caption{ (a) Field-sweep measurements at $\theta=7^{\circ}$ in the [110]-bar of device~A. (b) Differences between the first and second negative current ramps for the longitudinal (black lines) and transverse (red lines) resistance in the [110]-bar and in the corner (blue lines) of device~A at -4~mT external field applied along $\theta=7^{\circ}$. Arrows indicate the current ramp direction. (c) Same as (b) for the second independent experiment. (d) Same as (b) and (c) for positive current ramps. The inset shows contacts used for measurements of $R_L$, $R_T$, and $R_C$ in all panels. 
(e),(f) -5~mT field assisted current induced switching experiments.} \label{figure7} \end{figure} A suppression of the role of the spin-transfer-torque relative to the thermally assisted switching mechanism is expected at fields closer to the coercive field. The data taken at -5~mT field, shown in Fig.~\ref{figure7}(e) and (f), are fully consistent with this expectation. Current induced switchings are observed here at lower critical currents and for both current polarities. Nevertheless, the asymmetry between the negative and positive critical currents is still apparent and consistent with a picture of cooperative effects of heating and spin-transfer-torque for negative currents and competing effects of the two mechanisms for positive currents. The distinct current induced switching characteristics achieved by patterning one bar along the [110] direction and the other bar along the [1$\overline{1}$0] direction are illustrated in Figs.~\ref{figure8} and \ref{figure9} on a set of experiments in device~B. The measurements shown in Figs.~\ref{figure8}(b)-(d) were taken on the [1$\overline{1}$0]-bar in an external field of -9~mT applied along $\theta=0^{\circ}$ (see the corresponding field sweep measurements in Fig.~\ref{figure8}(a)). Up to the highest experimental current densities, the switching (from magnetization angle 9$^{\circ}$ to 180$^{\circ}$) is observed only for the positive current polarity. A less detailed tracking of the domain wall is possible in this experiment compared to the data in Fig.~\ref{figure7}, due to the larger magnitude of the external field (larger coercive field of device B) and the smaller separation of the contacts used to monitor $R_C$ in this device. Nevertheless, the -9~mT field assisted reversal process shown in Fig.~\ref{figure8} is clearly initiated in the corner and, again, the domain wall propagates in the direction opposite to the applied hole current.
Since for the opposite magnetic field sweep we observe the current induced switching at +9~mT also at positive currents (compare Figs.~\ref{figure8}(b) and (d)), the Oersted fields are unlikely to be the dominant switching mechanism. Note also that the Oersted fields generated by our experimental currents are estimated to be two orders of magnitude weaker than the anisotropy fields.\cite{Yamanouchi:2006_a} \begin{figure}[h] \vspace*{-1cm} \hspace*{-0.5cm}\includegraphics[width=1.1\columnwidth,angle=-0]{Fig08.pdf} \vspace{-3cm} \caption{(a) Field-sweep measurements at $\theta=0^{\circ}$ in the [1$\overline{1}$0]-bar of device~B. (b) Difference between the first and second positive current ramps in the [1$\overline{1}$0]-bar of device~B at -9~mT field applied along $\theta=0^{\circ}$. Note that $-\delta R_T$ is plotted for clarity. (c) Same as (b) for negative current ramps at $\pm 9$~mT. The inset shows the contacts used for measurements of $R_L$ and $R_T$ in all panels. (d) Same as (b) at $+9$~mT field.} \label{figure8} \end{figure} The character of the current induced switching in device B at -9~mT is completely different in the [110]-bar compared to the [1$\overline{1}$0]-bar, as shown in Figs.~\ref{figure9}(c) and (d). The switching occurs at much lower current densities due to the lower coercive field of the [110]-bar at $\theta=0^{\circ}$ (compare Figs.~\ref{figure8}(a) and \ref{figure9}(a)), and the asymmetry between the positive and negative switching currents is small, suggesting that heating plays an important role in this experiment. Although we see clear jumps in $\delta R_L$, which are consistent with the field-sweep data in Fig.~\ref{figure9}(a), the absence of the $\delta R_T$ switching signal in the [110]-bar hinders the unambiguous determination of the switching angles.
This feature is ascribed to a fabrication induced strong pinning at the $R_T$ contacts; indeed, the field-sweep measurements for the [110]-bar show an incomplete switching at 10~mT in the longitudinal resistance and no clear signature of switching for the transverse resistance contacts at this field. (Full saturation of the entire bar, including the transverse contacts region, is achieved at 100~mT.) \begin{figure}[h] \vspace*{-1cm} \hspace*{-.4cm}\includegraphics[width=1.1\columnwidth,angle=-0]{Fig09.pdf} \vspace{-2cm} \caption{ (a) Field-sweep measurements at $\theta=0^{\circ}$ in the [110]-bar of device~B. (b) Difference between the first and second negative current ramps in the [110]-bar of device~B at zero field. (c) Difference between the first and second negative current ramps at -9~mT field applied along $\theta=0^{\circ}$. (d) Same as (c) for positive current ramps. The inset shows the contacts used for measurements of $R_L$ and $R_T$ in all panels.} \label{figure9} \end{figure} In Fig.~\ref{figure9}(b) we exploit the pinning at the $R_T$ contacts to study current induced switching at zero magnetic field. Note that if the switching of the whole bar were complete, the zero-field 180$^{\circ}$ rotation from negative to positive easy-axis directions would be undetectable by the AMR measurement. We again see no switching signal in $\delta R_T$ but a clear step in $\delta R_L$. As for all field-assisted experiments, the sense and magnitude of the jump in $\delta R_L$ for zero field correlate well with the field sweep measurements (see the dashed line in Fig.~\ref{figure9}(a)). Also consistent with the trends in the field-assisted experiments, the switching occurs at a larger current than in the -9~mT field assisted switching.
Up to the highest experimental current density, the zero-field switching is observed only in the negative current ramp, as we would expect for domain wall propagation from the corner (see the $\delta R_c$ signal in Fig.~\ref{figure9}(b)) to the [110]-bar due to spin-transfer-torque. We emphasize, however, that a detailed understanding of the origin of the observed current induced switchings in our L-shaped devices is beyond the scope of this work. Our main aim was to demonstrate that the local control of the magnetocrystalline anisotropy we achieved in these dilute moment ferromagnetic structures is a powerful new tool for investigating spin dynamics phenomena. \section{Summary} \label{summary} In summary, (Ga,Mn)As microchannels with locally controlled magnetocrystalline anisotropies and inherently weak dipolar fields represent a new favorable class of systems for exploring magneto-electronic effects at the microscale. We have observed easy-axes rotations which depend on the width and crystal orientation of the microchannel. Based on numerical simulations of the strain distribution for the experimental geometry and microscopic calculations of the corresponding spin-orbit coupled band structures, we have explained the effect in terms of lattice relaxation induced changes in the magnetocrystalline anisotropy. The observation and explanation of the micropatterning controlled magnetocrystalline anisotropy of the (Ga,Mn)As dilute moment ferromagnet represents the central result of our paper. In addition, we have demonstrated that the structures are well suited for a systematic study of current induced switching phenomena well below the Curie temperature at relatively low current densities. We have found indications that domain-wall spin-transfer-torque effects contribute strongly to the observed switchings.
This suggests that our structures represent a new favorable system for exploring these technologically important yet still physically controversial spin dynamics phenomena. {\em Note added}: After the completion of our work, independent and simultaneous studies of the lattice relaxation induced changes of magnetocrystalline anisotropies in (Ga,Mn)As have been posted on the Los Alamos Archives, and some of them were published during the processing of our manuscript.\cite{Humpfner:2006_a,Pappert:2007_a,Wenisch:2007_a} The crystal orientations and widths of the nanochannels considered in these works differ from those in our study. Nevertheless, the reported effects are of the same origin, and our works provide a mutual confirmation that seemingly tiny changes in the lattice constant can completely overwrite the magnetocrystalline energy landscape of the host (Ga,Mn)As epilayer. \section*{Acknowledgment} We acknowledge discussions with A. H. MacDonald and V. Nov\'ak, and support from EU Grant IST-015728, from EPSRC Grant GR/S81407/01, from GACR and AVCR Grants 202/05/0575, 202/06/0025, 202/04/1519, FON/06/E002, AV0Z1010052, LC510, from MSM Grant 0021620834, from NSF Grant DMR-0547875, and from ONR Grant N000140610122.
\subsection{Benchmark data} \noindent We compare our MILP models to a standard formulation of DARP from the literature. For this purpose, we use the basic mathematical model of DARP introduced by \cite{Cordeau2006}. Note that a tighter 2-index formulation is given by \cite{Ropke2007}. However, this comes at the price of an exponentially growing number of constraints. It is thus better suited for solution within a B\&C framework, and we did not include it in our comparison. The model introduced by \cite{Cordeau2006}, referred to as C-DARP in the following, is based on a complete directed graph. The node set comprises all pick-up and drop-off locations and two additional nodes $0$ and $2n+1$ for the depot. Thus, the node set is equal to the set $P\cup D \cup \{0, 2n+1\}$. We use the MILP formulation with a reduced number of variables and constraints, described in \cite{Cordeau2006}, which includes aggregated variables $B_j$ (modelling the beginning of service time) and $Q_j$ (modelling the vehicle load) at every node $j$ except the origin and destination depot. The objective is to minimize the total routing costs. We do not add any additional valid inequalities to either of the MILP formulations in this comparison. We use the two sets of benchmark instances\footnote{The instances are available at \url{http://neumann.hec.ca/chairedistributique/data/darp/branch-and-cut/}.}, set $a$ and set $b$, created by the same author to compare C-DARP to Models~\ref{mod2} and \ref{mod3}. The characteristics of the instances are summarized in Table~\ref{tab:charbm}. In all test instances we tighten the remaining time windows, i.e.\ the time windows not given by the pick-up time of inbound requests or by the drop-off time of outbound requests, as described in \cite{Cordeau2006}: the bounds of the missing time windows can be calculated according to equations \eqref{inbound} and \eqref{outbound} stated earlier in Section~\ref{sec:model}.
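For concreteness, the tightening of the missing time windows can be sketched in a few lines. This transcribes the standard rules of \cite{Cordeau2006} (the equations referenced above are the authoritative form), and the request data in the usage example are made up:

```python
def tighten(e, l, s, t_direct, L, T, inbound):
    """Derive the time window of the unspecified stop of a request.
    e, l: given window; s: service time at the specified stop; t_direct:
    direct ride time; L: maximum ride time; T: planning horizon."""
    if inbound:
        # Pick-up window given -> derive the drop-off window.
        return (e + s + t_direct, min(l + s + L, T))
    else:
        # Drop-off window given -> derive the pick-up window.
        return (max(0, e - L - s), min(l - t_direct - s, T))

# A made-up inbound request: pick-up window [30, 45], service time 1,
# direct ride time 12, maximum ride time 18, horizon T = 480.
print(tighten(30, 45, 1, 12, 18, 480, inbound=True))    # -> (43, 64)
# A made-up outbound request with drop-off window [100, 120]:
print(tighten(100, 120, 1, 12, 18, 480, inbound=False)) # -> (81, 107)
```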
\ctable[botcap, caption=Characteristics of the benchmark test instances., label =tab:charbm, nosuper, doinside=\footnotesize]{ccccccp{4ex}cccccc}{}{ \cmidrule(lr){1-6} \cmidrule(lr){8-13} Instance & $Q$ & $n$ & $\vert K\vert$ & $L_i$ & $T$ && Instance & $Q$ & $n$ & $\vert K\vert$ & $L_i$ & $T$ \\ \cmidrule(lr){1-6} \cmidrule(lr){8-13} a2-16& 3 & 16 & 2 & 30 & 480 && b2-16 & 6 & 16 & 2 & 45 & 480 \\ a2-20& 3 & 20 & 2 & 30 & 600 && b2-20 & 6 & 20 & 2 & 45 & 600 \\ a2-24& 3 & 24 & 2 & 30 & 720 && b2-24 & 6 & 24 & 2 & 45 & 720 \\ a3-18& 3 & 18 & 3 & 30 & 360 && b3-18 & 6 & 18 & 3 & 45 & 360 \\ a3-24& 3 & 24 & 3 & 30 & 480 && b3-24 & 6 & 24 & 3 & 45 & 480 \\ a3-30& 3 & 30 & 3 & 30 & 600 && b3-30 & 6 & 30 & 3 & 45 & 600 \\ a3-36& 3 & 36 & 3 & 30 & 720 && b3-36 & 6 & 36 & 3 & 45 & 720 \\ a4-16& 3 & 16 & 4 & 30 & 240 && b4-16 & 6 & 16 & 4 & 45 & 240 \\ a4-24& 3 & 24 & 4 & 30 & 360 && b4-24 & 6 & 24 & 4 & 45 & 360 \\ a4-32& 3 & 32 & 4 & 30 & 480 && b4-32 & 6 & 32 & 4 & 45 & 480 \\ a4-40& 3 & 40 & 4 & 30 & 600 && b4-40 & 6 & 40 & 4 & 45 & 600 \\ \cmidrule(lr){1-6} \cmidrule(lr){8-13} } \ctable[width=\textwidth, botcap, caption = Solution values and computing times for the benchmark test instances 'a' using JULIA., label =tab:ainstances, nosuper, doinside=\footnotesize]{cXXXXXXXXXc}{ \tnote[-- ]{Set up of MILP not completed within the time limit.} \tnote[N/A ]{Not applicable. No integer solution found within the time limit.} }{ \toprule & \multicolumn{4}{c}{C-DARP} & \multicolumn{3}{c}{Model~\ref{mod2}} & \multicolumn{3}{c}{Model~\ref{mod3}} \\ \cmidrule(lr){2-5}\cmidrule(lr){6-8}\cmidrule(lr){9-11}\addlinespace[0.1em] Inst. & Obj.v.& Gap & CPU & $\text{CPU}^\star$ & Obj.v. & CPU & $\text{CPU}^\star$ & Obj.v. 
& CPU & $\text{CPU}^\star$ \\ \midrule a2-16 & 294.2 & &2.11 &2.20 &294.2&0.96 &8.05 &294.2&0.34 &2.81\\ a2-20 & 344.8 & &19.61 &19.75 &344.8&3.80 &19.60 &344.8&1.14 &7.04\\ a2-24 & 431.1 & &108.70 &108.92&431.1&12.07&55.87 &431.1&2.78 &15.28\\ a3-18 & 300.5 & &268.18&268.30&300.5&2.23 &11.12 &300.5&0.75 &4.62\\ a3-24 & 346.8 &15.6&7200 &7200 &344.8&33.35&77.35 &344.8&14.05 &26.25\\ a3-30 & 498.0 &26.0&7200 &7200 &494.8&40.43&267.43&494.8&10.89 &44.89\\ a3-36 & 587.8 &19.2&7200 &7200 & -- & -- & -- &583.2&103.55&196.45\\ a4-16 & 282.7 &12.3&7200 &7200 &282.7&2.22 &7.58 &282.7&0.96 &7.68\\ a4-24 & 375.0 &27.1&7200 &7200 &375.0&12.47&56.57 &375.0&3.66 &16.46\\ a4-32 & N/A & N/A&7200 &7200 &485.5&73.99&426.99 &485.5&22.18 &78.28\\ a4-40 & N/A & N/A &7200 &7200 & -- & -- & -- &557.7&279.18&469.18\\ \bottomrule } \ctable[width=\textwidth, botcap, caption = Solution values and computing times for the benchmark test instances 'b' using JULIA., label =tab:binstances, nosuper, doinside=\footnotesize]{cXXXXXXXXXc}{ \tnote[N/A ]{Not applicable. No integer solution found within the time limit.} }{ \toprule & \multicolumn{4}{c}{C-DARP} & \multicolumn{3}{c}{Model~\ref{mod2}} & \multicolumn{3}{c}{Model~\ref{mod3}} \\ \cmidrule(lr){2-5}\cmidrule(lr){6-8}\cmidrule(lr){9-11}\addlinespace[0.1em] Inst. & Obj.v. & Gap & CPU & $\text{CPU}^\star$ & Obj.v. & CPU & $\text{CPU}^\star$ & Obj.v. 
& CPU & $\text{CPU}^\star$ \\ \midrule b2-16 &309.4& &13.97 &15.05 &309.4&0.36&2.29&309.4&0.19&1.37\\ b2-20 &332.6& &8.70 &8.84 &332.6&0.03&0.39&332.6&0.03&0.31\\ b2-24 &444.7& &90.46 &90.68 &444.7&0.18&1.72&444.7&0.10&1.09\\ b3-18 &301.6& &372.63 &372.75 &301.6&0.08&0.36&301.6&0.08&0.31\\ b3-24 &394.5& &1613.31&1613.52&394.5&1.62&11.05&394.5&0.86&4.82\\ b3-30 &531.9&35.1&7200 &7200 &531.4&1.06&7.98&531.4&0.43&3.62\\ b3-36 &614.5&22.6&7200 &7200 &603.8&12.29&41.09&603.8&3.02&12.46\\ b4-16 &297.0& &804.32 &804.42 &297.0&0.04&0.10&297.0&0.04&0.96\\ b4-24 &371.4&16.8&7200 &7200 &371.4&0.56&3.53&371.4&0.30&2.02\\ b4-32 &501.8&28.2&7200 &7200 &494.8&0.13&1.39&494.8&0.09&1.06\\ b4-40 &N/A &N/A &7200 &7200 &656.6&11.83&57.83&656.6&4.02&16.72 \\ \bottomrule } A summary of the computational results can be found in Tables~\ref{tab:ainstances} and \ref{tab:binstances}. For each of the three considered models C-DARP, Model~\ref{mod2} and \ref{mod3}, the objective value (Obj.v.) of the cost objective, the computational time in seconds (CPU) and the computational time in seconds including the time needed to set up the MILP in JULIA before it is handed over to CPLEX ($\text{CPU}^{\star}$) are reported. The last quantity is included because the solve time returned by CPLEX for Model~\ref{mod2} and \ref{mod3} is rather low, but the time needed to set up the MILP is rather high compared to the set up time of C-DARP. This is due to the fact that, as explained in Section \ref{sec:model}, in Model~\ref{mod2} and \ref{mod3} the number of variables and constraints is bounded from above by $O(n^{Q+1})$ and $O(n^{2Q-1})$, respectively, while there are at most $O(n^2) $ variables and constraints in C-DARP. For example, in instance a2-16 the lp-file generated from Model~\ref{mod2} and \ref{mod3} has a size of 180 MB and 24~MB, respectively, while for C-DARP its size is 474 KB. 
This is reflected by the large difference between the values in columns CPU and $\text{CPU}^{\star}$ for Models~\ref{mod2} and \ref{mod3}. For C-DARP we also report the relative gap (Gap), as some of the instances have not been solved to optimality within the time limit of 7200 seconds. By taking a closer look at Tables~\ref{tab:ainstances} and \ref{tab:binstances}, it becomes evident that Model~\ref{mod3} outperforms Model~\ref{mod2} in all of the instances, with only one exception at instance b4-16. In all of the instances the CPU time needed by CPLEX to solve Model~\ref{mod3} is smaller than the solution time for Model~\ref{mod2}. Instances a3-36 and a4-40 are the two largest instances in terms of the number of users and the possible user allocations in the vehicle (note that $q_i=1$ for all $i\in R$ in the a-instances). For these instances, the MILP for Model~\ref{mod2} could not be set up within 7200 seconds, so that we had to interrupt the execution of the program. Modelling the same instances with Model~\ref{mod3} took about 1.5 minutes (instance a3-36) and about 3 minutes (instance a4-40). Moreover, one can see clearly from the results that Model~\ref{mod3} yields a more efficient formulation than C-DARP: starting from instance a3-24, the a-instances could not be solved to optimality within two hours (or even no integer solution was found) with C-DARP. The computational time needed to solve these instances with Model~\ref{mod3} ranges from 7.68 seconds to less than eight minutes. In general, with Model~\ref{mod3} the b-instances are easier to solve than the a-instances, as all instances but b3-36 and b4-40 have been solved in less than five seconds. This reflects the fact that the size of the MILP decreases tremendously when users request more than one seat, which can be traced back to the fact that the size of the event-based graph decreases in this case.
With C-DARP, CPLEX was able to solve more of the b-instances than of the a-instances within the time limit, but C-DARP still performs poorly when compared to Model~\ref{mod3}. \subsection{Artificial data -- City of Wuppertal} \noindent Ride-hailing services are usually operated by taxis or mini-buses, whose passenger capacity is often equal to three or six. Thus, in the artificially created instances we consider the case that $Q\in\{3,6\}$. In the case that $Q=3$, we assume that each user requests one seat, while for $Q=6$ the number of requested seats per user is chosen randomly from $\{1,\ldots, 6\}$. Moreover, the service time $s_j$ associated with location $j$ is set to be equal to the number of requested seats $q_j$. This is in accordance with the benchmark test instances for DARP created in \cite{Cordeau2006}. The instance size is determined by the number of users. For both $Q=3$ and $Q=6$ and for each number of users $n\in \lbrace 10, 15, 20, 25, 30, 35, 40\rbrace$ we generate a set of $5$ instances with $n$ users each. We denote the instances by $Q3.n.m$ and $Q6.n.m$, indicating the $m$-th instance with $n$ users, $m\in\lbrace1, \ldots,5\rbrace$. Pick-up and drop-off locations are chosen randomly from a list of streets in Wuppertal, Germany. The taxi depot is chosen to be located next to the main train station in Wuppertal. The cost $c_a$ for an arc $a$ in the event-based network is computed as the length of a shortest path from its tail to its head in an OpenStreetMap network corresponding to the city of Wuppertal. The shortest path is calculated using the shortest path method of the Python package NetworkX. Due to slowly moving traffic within the city center, the average travel speed is set to 15~km/h, so that the travel time in minutes is equal to $t_a = 4 \,c_a$.
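The arc-cost computation just described can be sketched as follows. This is a minimal stdlib-only example: the paper uses NetworkX on OpenStreetMap data, whereas the toy street network, node names, and edge lengths in kilometers below are invented purely for illustration.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src in a weighted graph.
    adj: {node: [(neighbor, edge_length_km), ...]}"""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Invented toy street network (node: [(neighbor, length in km)]).
street = {
    "depot":     [("junction", 1.5)],
    "junction":  [("depot", 1.5), ("pickup_1", 2.0), ("dropoff_1", 3.5)],
    "pickup_1":  [("junction", 2.0)],
    "dropoff_1": [("junction", 3.5)],
}

c_a = dijkstra(street, "pickup_1")["dropoff_1"]  # arc cost: shortest-path length in km
t_a = 4 * c_a                                    # travel time in minutes at 15 km/h
print(c_a, t_a)                                  # 5.5 22.0
```

With $c_a$ measured in kilometers, a speed of 15~km/h gives $t_a = 60\,c_a/15 = 4\,c_a$ minutes, which is the formula used above.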
Earliest pick-up times are chosen randomly from five-minute intervals within the next 15--60 minutes (we consider inbound requests only) and the time windows are chosen to be equal to 15~minutes, as we assume that users of ride-hailing services in a city, where public transport operates at high frequencies, want to be picked-up without long waiting times. A user $i$'s maximum ride time $L_i$ is set to $1.5$ times the ride time for a direct ride from the pick-up to the drop-off location. The maximum duration of service $T$ is set to $150$ minutes. A summary of the artificial instances' remaining characteristics can be found in Table~\ref{tab:characteristics}. \ctable[botcap, caption=Characteristics of the Wuppertal artificial test instances., label =tab:characteristics, nosuper, doinside=\footnotesize]{cccccccc}{}{ \toprule Instances & $Q$ & $n$ & $\vert K\vert$ & Instances & $Q$ & $n$ & $\vert K\vert$ \\ \midrule Q3.10.1-5& 3 & 10 & 6 & Q6.10.1-5& 6 & 10 & 6 \\ Q3.15.1-5& 3 & 15 & 7 & Q6.15.1-5& 6 & 15 & 9 \\ Q3.20.1-5& 3 & 20 & 9 & Q6.20.1-5& 6 & 20 & 11\\ Q3.25.1-5& 3 & 25 & 9 & Q6.25.1-5& 6 & 25 & 15 \\ Q3.30.1-5& 3 & 30 & 11 & Q6.30.1-5& 6 & 30 & 16 \\ Q3.35.1-5& 3 & 35 & 12 & Q6.35.1-5& 6 & 35 & 18 \\ Q3.40.1-5& 3 & 40 & 15 & Q6.40.1-5 & 6 & 40 & 20 \\ \bottomrule } It has been shown in the previous subsection that Model~\ref{mod3} performs better than Model~\ref{mod2}. Therefore, we restrict the following tests to Model~\ref{mod3}. We compare the impact of employing different objective functions from Section~\ref{sec:obj} on the economic efficiency and the customer satisfaction of the final routing solution. The respective objective functions are used in Model~\ref{mod3} in the place of \eqref{costobj}. Moreover, for the objective function $f_{rce}$ we add variables $p_i$, $i\in R$ to Model~\ref{mod3} and replace constraints \eqref{passpickedup} by constraints \eqref{passpick?}. 
In case of the objective functions $f_e$, $f_{ce}$ and $f_{rce}$, we add variables $d_i \geq 0$, $i\in R$ and constraints \eqref{excess} to Model~\ref{mod3}. In case of the objective functions involving the users' \emph{maximum excess ride time}, i.e.\ $f_{e_{\max}}$ and $f_{ce_{\max}}$, we additionally add the variable $d_{\max} \geq 0$ and constraints \eqref{maxexcess} to Model~\ref{mod3}. The weights in the objective functions involving more than one criterion are chosen from a user perspective and based on extensive numerical tests. In the first part of the computational experiments we consider the single-objective functions $f_c$, $f_e$ and $f_{e_{\max}}$. The weights in $f_{ce}$ and $f_{ce_{\max}}$ are then chosen so that the values of total and maximum excess ride time, respectively, in $f_{ce}$ and $f_{ce_{\max}}$ deviate not more than 2\% from the optimal objective values for $f_e$ and $f_{e_{\max}}$. After some preliminary testing the weights are set to $\alpha=3$ and $\beta=\frac{3n}{5}$, which yields \begin{align*} f_{ce} & = \sum_{a \in A}c_a \, x_a + 3 \sum_{i\in R}d_i \quad \text{and} \\ f_{ce_{\max}} & = \sum_{a \in A}c_a \, x_a + \frac{3n}{5}\, d_{\max}. \end{align*} Note that choosing the weighting parameter $\beta$ as a function of $n$ ensures that not only the first term in $f_{ce_{\max}}$ grows with the number of users. Some pick-up and drop-off times or locations may force the drivers to make large detours, which may induce significantly increased routing costs or excess ride times. By using the objective function $f_{rce}$ we can measure the impact of denying certain requests on the served users' excess ride time and the total routing costs. The weighting parameter $\gamma$ in $f_{rce}$ defines the trade-off between the general requirement of answering as many requests as possible on one hand, and the goal of cost and time efficient transport solutions on the other hand. 
After testing several weights, it turns out that $\gamma = 60$ is a reasonable choice, yielding \begin{equation*} f_{rce} = \sum_{a \in A} c_a \, x_a + 3\sum_{i \in R} d_i + 60\left(n - \sum_{i \in R} p_i \right). \end{equation*} In our numerical experiments, on average at most 10\% of the requests are rejected when using these weights. Note that several authors using weighted sum objectives base their choice of weights on \cite{Jorgensen2007} (see e.g.\ \cite{Mauri2009, Kirchler2013}). However, the total routing costs, the total/maximum excess ride time and the number of unanswered requests depend strongly on the test instances and may vary considerably. Since in this section we create a new class of test instances, we refrain from using these predetermined weights. \ctable[ botcap, caption = {Average values on the Q3 and Q6 test instances solved with the objective functions $f_c$ and $f_e$.}, label =tab:ce, nosuper, doinside=\footnotesize]{cccccccccc }{ }{ \toprule & \multicolumn{4}{c} {$f_c$} & \multicolumn{5}{c}{$f_e$}\\ \cmidrule(lr){2-5}\cmidrule(lr){6-10}\addlinespace[0.1em] {$n$} & {$f_c$} & {$f_e$} & {CPU} & {$\text{CPU}^\star$} & {$f_c$} & {$f_e$} & {Gap} & {CPU} & {$\text{CPU}^\star$} \\ \midrule \textbf{Q3} & & & & & & & & & \\ \addlinespace[0.2em] 10&78.8 &128.2&0.17 &1.04 &88.1 &15.3 & &0.14&0.93\\ 15&109.1&191.3&0.75 &5.20 &128.8&34.2 & &0.79&4.85\\ 20&129.3&240.3&7.04 &22.36 &154.5&55.6 & &7.39&21.17\\ 25&156.6&288.1&69.79 &111.14 &172.7&111.6& &73.99&111.86\\ 30&184.7&353.3&179.01 &279.99 &202.7&141.1& &180.42&274.20\\ 35&199.0&403.6&1191.93&1403.76&231.3&131.1&6 &3606.83&3756.82\\ 40&241.2&498.0&2944.48&3366.15&280.3&144.6&16&5946.63&6066.63\\ \midrule \textbf{Q6} & & & & & & & & & \\ \addlinespace[0.2em] 10&80.5 &92.5 &0.02 &0.26 &86.3 &15.2& &0.02&0.25\\ 15&119.1&122.4&0.05 &0.63 &128.7&20.9 & &0.05&0.61\\ 20&152.4&178.8&0.20 &2.00 &164.6&48.0 & &0.23&1.88\\ 25&183.6&228.0&0.88 &6.56 &211.1&29.9 & &0.93&6.14\\ 30&222.8&256.5&1.02 &7.14 
&244.6&46.7 & &1.06&6.39\\ 35&238.0&340.4&45.51&98.10&277.5&44.7 & &43.80&86.40\\ 40&296.0&360.4&8.65 &31.66&324.1&127.5& &9.24&30.51\\ \bottomrule \addlinespace[0.2em] } \setlength{\tabcolsep}{2.3pt} \ctable[width=\textwidth, botcap, caption = {Average values on the Q3 and Q6 test instances solved with the objective function $f_{ce}$ and $f_{rce}$.}, label =tab:cerce, nosuper, doinside=\footnotesize]{cccc@{\hspace{1.2pt}}c@{\hspace{1.2pt}}cccccc@{\hspace{1.2pt}}c@{\hspace{1.2pt}}cc }{ }{ \toprule & \multicolumn{6}{c}{$f_{ce}$} & \multicolumn{7}{c}{$f_{rce}$} \\ \cmidrule(lr){2-7}\cmidrule{8-14}\addlinespace[0.1em] {$n$} & {Obj.v.} & {$f_c$} & {$f_e$} & {Gap} & {CPU} & {$\text{CPU}^\star$} & {Obj.v.}& {$f_c$} & {$f_e$} & {a.r.} & {Gap} & {CPU} & {$\text{CPU}^\star$} \\ \midrule \textbf{Q3}& & & & & & & & & & & & & \\ \addlinespace[0.2em] 10&133.6&87.8 &15.3 & &0.10 &0.73 &129.4 &86.2 &10.4&98& &0.09&0.73 \\ 15&230.9&128.2&34.2 & &0.40 &3.56 &226.1 &121.7&22.4&96& &0.35&3.48 \\ 20&318.9&151.1&56.0 & &4.07 &14.48 &277.9 &136.0&15.3&92& &3.56&13.50 \\ 25&506.0&171.1&111.7& &38.91&64.27 &409.0 &152.9&33.4&90& &35.92&61.36 \\ 30&624.2&199.7&141.5& &99.38&154.37 &515.5&183.2&58.8&91& &88.15&142.85 \\ 35&621.8&227.9&131.3&3&3004.61&3102.21&554.6&216.1&56.8&92&1&2016.91&2113.72 \\ 40&712.7&277.3&145.1&9&4945.03&5123.23&638.3&260.7&65.9&92&6&3611.54&3823.74 \\ \midrule \textbf{Q6}& & & & & & & & & & & & & \\ \addlinespace[0.2em] 10&131.0&85.4&15.2& &0.02&0.26&129.7&81.7&8.0&96& &0.02&0.26\\ 15&191.0&128.2&20.9& &0.05&0.65&189.8&125.6&17.4&99& &0.05&0.61\\ 20&308.4&164.2&48.1& &0.22&2.04&281.8&154.0&22.6&95& &0.18&2.00\\ 25&296.9&206.3&30.2& &0.93&6.55&283.9&202.4&19.2&98& &0.92&6.57\\ 30&383.7&243.6&46.7& &1.04&6.92&359.7&231.8&22.6&97& &0.94&6.80\\ 35&410.3&275.3&45.0& &43.81&88.24&407.2&266.5&30.9&98& &43.23&88.14\\ 40&704.1&318.3&128.6& &9.16&31.96&613.6&297.4&53.4&93& &8.10&30.94\\ \bottomrule\addlinespace[0.2em] } \ctable[ botcap, caption = {Average values on the Q3 
and Q6 test instances solved with the objective functions $f_{e_{\max}}$ and $f_{ce_{\max}}$.}, label =tab:emaxcemax, nosuper, doinside=\footnotesize]{cccccccccccc }{ }{ \toprule & \multicolumn{5}{c}{$f_{e_{\max}}$}& \multicolumn{6}{c}{$f_{ce_{\max}}$} \\ \cmidrule(lr){2-6}\cmidrule(lr){7-12}\addlinespace[0.1em] {$n$} &{$f_c$} & {$f_e$} &{$f_{e_{\max}}$}&{CPU}&{$\text{CPU}^\star$}&{Obj.v.}&{$f_c$} &{$f_e$} &{$f_{e_{\max}}$}&{CPU}&{$\text{CPU}^\star$}\\ \midrule \textbf{Q3}& & & & & & & & & & & \\ \addlinespace[0.2em] 10&88.5&40.2&7.3&0.08&0.67&130.0&85.4&46.0&7.4&0.10&0.76 \\ 15&130.2&64.2&10.0&0.37&3.35&216.5&126.0&77.8&10.1&0.39&3.63 \\ 20&157.0&110.6&11.1&3.70&13.51&280.0&145.5&130.7&11.2&3.69&14.13 \\ 25&178.1&179.5&12.3&36.09&59.96&354.4&169.4&199.1&12.3&36.34&62.16 \\ 30&210.5&216.0&13.3&87.24&137.65&442.7&201.1&223.5&13.4&87.90&143.22 \\ 35&237.2&222.5&10.4&457.55&571.89&448.3&228.6&232.8&10.5&562.73&684.52 \\ 40&291.5&258.9&11.4&1139.52&1354.48&552.1&279.1&283.2&11.4&1190.24&1443.64 \\ \midrule \textbf{Q6}& & & & & & & & & & & \\ \addlinespace[0.2em] 10 &86.1 &27.3 &6.1 &0.02 &0.25&121.6&84.9&26.9&6.1&0.02&0.26 \\ 15 &130.1&58.6 &8.6 &0.04 &0.59&203.0&125.4&74.0&8.6&0.06&0.63 \\ 20 &164.8&93.7 &9.6 &0.21 &1.90&276.0&160.3&102.2&9.7&0.21&2.03 \\ 25 &212.5&70.8 &6.9 &0.87 &6.00&309.6&206.1&83.7&6.9&0.92&6.35 \\ 30 &249.2&137.0&11.1&1.02 &6.25&436.2&236.4&159.6&11.1&1.06&6.79 \\ 35 &281.4&131.5&9.4 &42.73&85.14&459.6&261.5&191.9&9.4&43.18&88.42 \\ 40 &330.2&252.0&12.6&8.72 &29.82&620.0&318.2&272.7&12.6&8.79&31.75 \\ \bottomrule \addlinespace[0.2em] } In Tables~\ref{tab:ce}, \ref{tab:cerce} and \ref{tab:emaxcemax} the results are summarized. All figures reported are average values over five instances $Q3.n.m$, $m\in \{1,\ldots,5\}$ and $Q6.n.m$, $m\in \{1,\ldots,5\}$, respectively. 
The tables contain information about the total routing costs, the total excess ride time (and where applicable the maximum excess ride time and the percentage of answered requests) for each number of user requests $n\in\{10,15,20,25,30,35,40\}$ of the instance sets $Q3$ and $Q6$. Furthermore, the relative gap, the CPU time returned by CPLEX and the CPU time including the set-up time in JULIA are reported. If no relative gap is shown, all instances have been solved to optimality. The computational times show that the artificial instances are harder to solve than the benchmark instances. This may be explained by a smaller planning horizon (240--720 minutes for the benchmark instances and 150 minutes for the artificial instances) during which the same number of user requests has to be served, i.e.\ the ratio of user requests per time is higher. This is also reflected by the number of required vehicles. While there are only two to four vehicles in the benchmark test instances to serve between $16$ and $40$ user requests, there are $6$ to $20$ vehicles required in the artificial instances. The computational time needed to solve the Q6-instances is considerably lower than the time needed to solve the Q3-instances. This is in accordance with the results of the benchmark instances, where the $b$-instances are much faster to solve than the $a$-instances. While the scenario that users may request any number of seats between one and six reflects the conditions under which ride-hailing services operate in reality, a uniform distribution of $q_i$ in the set $\{1, \ldots, 6\}$ is probably not realistic. In future work, this should be tested on real-world data. Nevertheless, in the context of a static framework, the results show that Model~\ref{mod3} can be solved within reasonable time: For up to $30$ users, the average solve time over five Q6 instances is at most seven seconds. The solve time is less than one second for $n\in \{10,15\}$.
Thus, Model~\ref{mod3} can indeed be applied in a rolling horizon approach for medium sized instances. \ctable[ botcap, caption = {Comparison of the change in total routing costs, total excess ride time and $\text{CPU}^\star$ (all in percent) on the Q3 and Q6 test instances solved with the objective functions $f_c$, $f_e$, $f_{ce}$, $f_{e_{\max}}$ and $f_{ce_{\max}}$.}, label =tab:compcecerce, nosuper, doinside=\footnotesize]{ccccccccccccc}{ }{ \toprule & \multicolumn{2}{c} {$f_e$ vs. $f_c$ } & \multicolumn{2}{c}{$f_{ce}$ vs. $f_c$}& \multicolumn{2}{c}{$f_{ce}$ vs. $f_e$} & \multicolumn{2}{c}{$f_{e_{\max}}$ vs. $f_c$} & \multicolumn{2}{c}{$f_{ce_{\max}}$ vs. $f_c$} & \multicolumn{2}{c}{$f_{ce_{\max}}$ vs. $f_{e_{\max}}$}\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}\cmidrule(lr){8-9}\cmidrule(lr){10-11}\cmidrule(lr){12-13}\addlinespace[0.1em] $n$ & $\Delta f_c$ & $\Delta f_e$ & $\Delta f_c$ & $\Delta \text{CPU}^\star$ & $\Delta f_e$ & $\Delta \text{CPU}^\star$ & $\Delta f_c$ & $\Delta f_e$ & $\Delta f_c$ & $\Delta \text{CPU}^\star$ & $\Delta f_{e_{\max}}$ & $\Delta \text{CPU}^\star$ \\ \midrule \textbf{Q3} & & & & & & & & & & & & \\ \addlinespace[0.2em] $10$ & $12$ & $-88$ & $11$ & $-30$ & $0$ & $-22$ & $12$ & $-69$ & $8$ & $-27$ & $2$ & $14$ \\ $15$ & $18$ & $-82$ & $18$ & $-32$ & $0$ & $-27$ & $19$ & $-66$ & $15$ & $-30$ & $0$ & $8$ \\ $20$ & $19$ & $-77$ & $17$ & $-35$ & $1$ & $-32$ & $21$ & $-54$ & $12$ & $-37$ & $1$ & $5$ \\ $25$ & $10$ & $-61$ & $9$ & $-42$ & $0$ & $-43$ & $14$ & $-38$ & $8$ & $-44$ & $0$ & $4$ \\ $30$ & $10$ & $-60$ & $8$ & $-45$ & $0$ & $-44$ & $14$ & $-39$ & $9$ & $-49$ & $1$ & $4$ \\ $35$ & $16$ & $-68$ & $15$ & $121$ & $0$ & $-17$ & $19$ & $-45$ & $15$ & $-51$ & $1$ & $20$ \\ $40$ & $16$ & $-71$ & $15$ & $52$ & $0$ & $-16$ & $21$ & $-48$ & $16$ & $-57$ & $0$ & $7$ \\ Avg. 
& $15$ & $-72$ & $13$ & $-1$ & $0$ & $-28$ & $17$ & $-51$ & $12$ & $-42$ & $1$ & $9$ \\ \midrule \textbf{Q6} & & & & & & & & & & & &\\ \addlinespace[0.2em] $10$ & $7$ & $-84$ & $6$ & $-1$ & $0$ & $1$ & $7$ & $-71$ & $6$ & $3$ & $0$ & $8$ \\ $15$ & $8$ & $-83$ & $8$ & $3$ & $0$ & $7$ & $9$ & $-52$ & $5$ & $0$ & $0$ & $7$ \\ $20$ & $8$ & $-73$ & $8$ & $2$ & $0$ & $9$ & $8$ & $-48$ & $5$ & $2$ & $0$ & $7$ \\ $25$ & $15$ & $-87$ & $12$ & $0$ & $1$ & $7$ & $16$ & $-69$ & $12$ & $-3$ & $0$ & $6$ \\ $30$ & $10$ & $-82$ & $9$ & $-3$ & $0$ & $8$ & $12$ & $-47$ & $6$ & $-5$ & $0$ & $9$ \\ $35$ & $17$ & $-87$ & $16$ & $-10$ & $1$ & $2$ & $18$ & $-61$ & $10$ & $-10$ & $1$ & $4$ \\ $40$ & $9$ & $-65$ & $8$ & $1$ & $1$ & $5$ & $12$ & $-30$ & $7$ & $0$ & $0$ & $6$ \\ Avg. & $11$ & $-80$ & $9$ & $-1$ & $0$ & $6$ & $12$ & $-54$ & $7$ & $-2$ & $0$ & $7$ \\ \bottomrule \addlinespace[0.2em] } \ctable[ botcap, caption = {Comparison of the change in total routing costs, total excess ride time, $\text{CPU}^\star$ and number of answered requests (all in percent) on the Q3 and Q6 test instances solved with the objective functions $f_{ce}$, $f_{ce_{\max}}$ and $f_{rce}$.}, label =tab:cecemaxrce, pos=htb, nosuper, doinside=\footnotesize]{cccccccc}{ }{ \toprule & \multicolumn{3}{c} {$f_{ce_{\max}}$ vs. $f_{ce}$ } & \multicolumn{4}{c}{$f_{rce}$ vs. $f_{ce}$}\\ \cmidrule(lr){2-4}\cmidrule(lr){5-8}\addlinespace[0.1em] $n$ & $\Delta f_c$ & $\Delta f_e$ & $\Delta \text{CPU}^\star$ & $\Delta f_c$ & $\Delta f_e$ & $\Delta \text{CPU}^\star$& $\Delta $ a.r. 
\\ \midrule \textbf{Q3} & & & & & & & \\ \addlinespace[0.2em] $10$ & $ -3$ & $201$ & $4 $ & $-2$ & $-32$ & $0$ & $-2$ \\ $15$ & $ -2$ & $127$ & $2 $ & $-5$ & $-35$ & $-2$ & $-4$ \\ $20$ & $ -4$ & $133$ & $-2 $ & $-10$ & $-73$ & $-7$ & $-8$ \\ $25$ & $ -1$ & $78$ & $-3 $ & $-11$ & $-70$ & $-5$ & $-10$ \\ $30$ & $ 1$ & $58$ & $-7 $ & $-8$ & $-58$ & $-7$ & $-9$ \\ $35$ & $ 0$ & $77$ & $-78 $ & $-5$ & $-57$ & $-32$ & $-8$ \\ $40$ & $ 1$ & $95$ & $-72 $ & $-6$ & $-55$ & $-25$ & $-8$ \\ Avg. & $ -1$ & $110$ & $-22 $ & $-7$ & $-54$ & $-11$ & $-7$ \\ \midrule \textbf{Q6} & & & & & & & \\ \addlinespace[0.2em] $10$ & $ 0$ & $77$ & $4 $ & $-4$ & $-47$ & $2$ & $-4$ \\ $15$ & $ -2$ & $254$ & $-3 $ & $-2$ & $-17$ & $-6$ & $-1$ \\ $20$ & $ -2$ & $113$ & $-1 $ & $-6$ & $-53$ & $-2$ & $-5$ \\ $25$ & $ 0$ & $177$ & $-3$ & $-2$ & $-36$ & $0$ & $-2$ \\ $30$ & $ -3$ & $242$ & $-2 $ & $-5$ & $-52$ & $-2$ & $-3$ \\ $35$ & $ -5$ & $326$ & $0 $ & $-3$ & $-31$ & $0$ & $-2$ \\ $40$ & $ 0$ & $112$ & $-1 $ & $-7$ & $-58$ & $-3$ & $-7$ \\ Avg. & $-2$ & $186$ & $-1 $ & $-4$ & $-42$ & $-2$ & $-3$ \\ \bottomrule \addlinespace[0.2em] } A comparison of the effects of different objective functions in Model~\ref{mod3} can be found in Tables~\ref{tab:compcecerce} and \ref{tab:cecemaxrce}. In the second and third column of Table~\ref{tab:compcecerce} we illustrate the change (in percent) in total routing costs and total excess ride time when replacing $f_c$ by $f_e$. While the excess ride time decreases by an average of 72\% and 80\% (Q3- and Q6-instances, respectively), the routing costs only increase by 15\% and 11\% on average. This shows that by including the criterion of excess ride time in the objective function we can spare users a large amount of unnecessary ride or waiting time. This comes at the cost of higher routing costs. However, even from a service provider's perspective it might be reasonable to accept a rather small loss in order to improve user convenience and to remain competitive. 
In column six of the same table we demonstrate that the weights chosen in the cost-excess objective $f_{ce}$ indeed reflect a user perspective: There is an increase of at most 1\% in total excess ride time compared to solely optimizing w.r.t.\ $f_e$. Moreover, there is an average increase in routing costs of 13\% (Q3-instances) and 9\% (Q6-instances) compared to the costs when using routing costs as the only optimization criterion, i.e.\ when using $f_c$ as the objective function. Comparing $f_{ce}$ to $f_c$, we observe a decrease in $\text{CPU}^\star$ time for $Q=3$ and $n\in \{10,15,20,25,30\}$ ranging from 30\% to 45\% and resulting in an average decrease of 1\% for the Q3-instances. For the Q6-instances the change (in percent) in $\text{CPU}^\star$ time ranges from -10\% to 3\%. In comparison to $f_e$ we observe an average decrease in $\text{CPU}^\star$ time of 28\% for the Q3-instances but an average increase of 6\% for the Q6-instances. Similar results are obtained for the objective functions $f_c$, $f_{e_{\max}}$ and $f_{ce_{\max}}$, although the average decrease in excess ride time when comparing $f_{e_{\max}}$ to $f_c$ is only 51\% (Q3-instances) and 54\% (Q6-instances). The meaningful choice of the weighting parameter in $f_{ce_{\max}}$ is demonstrated by the second last column in Table~\ref{tab:compcecerce}. Since both of the weighted sum objectives $f_{ce}$ and $f_{ce_{\max}}$ improve user convenience and increase routing costs compared to $f_c$, we evaluate which of the objective functions is more effective in this respect. Columns two to four in Table~\ref{tab:cecemaxrce} illustrate that the average computational time over five instances decreases by up to 78\% when using $f_{ce_{\max}}$ instead of $f_{ce}$. For the Q3-instances we observe an average decrease of $\text{CPU}^\star$ time of 22\%, for the Q6-instances there is an average decrease of 1\%. 
Despite its computational superiority and the fact that the objective function $f_{ce_{\max}}$ generally improves the user satisfaction of optimal tours, it has some shortcomings. Indeed, if there is one user with a high excess ride time $d_i$, for instance, because he or she is the last user in a tour to be dropped-off, the time loss of all other users $j$ with $d_j < d_i$ has no impact on the objective function. Therefore, the other users may not be driven to their drop-off points as quickly as possible. This is reflected by the increase in excess ride time using the objective $f_{ce_{\max}}$ as compared to $f_{ce}$, shown in Table~\ref{tab:cecemaxrce}: 110\% (Q3-instances) and 186\% (Q6-instances). The routing costs remain roughly the same; there is an average decrease of 1\% (Q3-instances) and 2\% (Q6-instances). Due to these shortcomings, we do not consider a tri-criterion weighted sum objective function $f_{rce_{\max}}$, but restrict our attention to $f_{rce}$ in the remaining discussion. 
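The shortcoming just described can be made concrete with a small numeric sketch. The routing cost and excess ride times below are invented; the weights $\alpha = 3$ and $\beta = \frac{3n}{5}$ are those chosen above.

```python
def f_ce(route_cost, excess):
    """f_ce = sum of arc costs + 3 * total excess ride time."""
    return route_cost + 3 * sum(excess)

def f_ce_max(route_cost, excess, n):
    """f_ce_max = sum of arc costs + (3n/5) * maximum excess ride time."""
    return route_cost + (3 * n / 5) * max(excess)

n = 10
# Two hypothetical schedules with identical routing cost and identical
# maximum excess ride time, but different total excess ride time.
excess_a = [12, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # one late user, rest on time
excess_b = [12, 8, 8, 8, 8, 0, 0, 0, 0, 0]   # several further users delayed

cost = 150.0
print(f_ce_max(cost, excess_a, n), f_ce_max(cost, excess_b, n))  # 222.0 222.0
print(f_ce(cost, excess_a), f_ce(cost, excess_b))                # 186.0 282.0
```

Both schedules score identically under $f_{ce_{\max}}$, since only the single worst-off user enters the objective, whereas $f_{ce}$ additionally penalizes the time loss of all other users.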
\begin{figure} \centering \includegraphics[width=0.7\textwidth, clip,bb= 50 40 388 335]{images/Q3n103ce_+-.png} \caption{Vehicle routes of instance $Q3n10.3$ solved using the objective function $f_{ce}$.} \label{figce} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth, clip,bb= 50 40 388 335]{images/Q3n103rce_+-.png} \caption{Vehicle routes of instance $Q3n10.3$ solved using the objective function $f_{rce}$.} \label{figrce} \end{figure} \ctable[ botcap, caption = {Vehicle routes (without depot) of instance $Q3n10.3$ solved using the objective functions $f_{ce}$ and $f_{rce}$.}, label =tab:figce, nosuper, doinside=\footnotesize]{@{\extracolsep{1ex}}ll@{}p{0.01cm}llllp{0.1cm}llll}{ }{ \toprule & & & \multicolumn{4}{c}{$f_{ce}$} & & \multicolumn{4}{c}{$f_{rce}$} \\ \cmidrule(lr){4-7}\cmidrule(lr){9-12}\addlinespace[0.2em] \textbf{Tour 1\quad} & Location & & $9^+$ & $9^-$ & & & & $9^+$ & $9^-$ & & \\ &Time[m] & & 20.0 & 54.0 & & & & 20.0 & 54.0 & & \\ \cmidrule(lr){4-7}\cmidrule(lr){9-12}\addlinespace[0.2em] \textbf{Tour 2} &Location & & $3^+$ & $3^-$ & $2^+$ & $2^-$ & & $3^+$ & $3^-$ & $2^+$ & $2^-$ \\ &Time[m] & & 15.0 & 22.3 & 32.3 & 45.1 & &15.0 & 22.3 & 32.3 & 45.1 \\ \cmidrule(lr){4-7}\cmidrule(lr){9-12}\addlinespace[0.2em] \textbf{Tour 3} &Location& & $5^+$ & $5^-$ & $4^+$ & $4^-$& & $4^+$ & $4^-$ & & \\ &Time[m] & & 20.0 & 25.3 & 45.8 & 74.6 & & 35.0 & 63.8 & & \\ \cmidrule(lr){4-7}\cmidrule(lr){9-12}\addlinespace[0.2em] \textbf{Tour 4} &Location & & $8^+$ & $8^-$ & $7^+$ & $7^-$& & $8^+$ & $8^-$ & $7^+$ & $7^-$ \\ &Time[m] & & 20.0 & 28.3 & 48.7 & 59.9 & &20.0 & 28.3 & 48.7 & 59.9 \\ \cmidrule(lr){4-7}\cmidrule(lr){9-12}\addlinespace[0.2em] \textbf{Tour 5} &Location & & $10^+$ & $10^-$ & & & & $10^+$ & $10^-$ & & \\ &Time[m] & & 30.0 & 47.0 & & & & 30.0 & 47.0 & & \\ \cmidrule(lr){4-7}\cmidrule(lr){9-12}\addlinespace[0.2em] \textbf{Tour 6} &Location & & $6^+$ & $6^-$ & $1^+$ & $1^-$ & & $5^+$ & $5^-$ & $1^+$ & $1^-$\\ &Time[m] & & 20.8 & 49.1 & 57.7 & 65.0 & & 20.0 & 25.3 & 45.0 & 52.3 \\ \bottomrule } The last four columns of Table~\ref{tab:cecemaxrce}, $f_{rce}$ vs. $f_{ce}$, illustrate the decrease in the overall routing costs and excess ride time that is obtained when rejecting some of the requests. This is illustrated on the instance $Q3n10.3$ in Figure~\ref{figce}, showing an optimal tour w.r.t.\ $f_{ce}$, and Figure~\ref{figrce}, where an optimal tour w.r.t.\ $f_{rce}$ is shown. In Figure~\ref{figce} the driver has to make a large detour to transport request $6$. When we allow unprofitable requests to be rejected by using $f_{rce}$, variables $p_i \in \{0,1\}$, $i\in R$ and constraints \eqref{passpick?} instead of $f_{ce}$ and constraints \eqref{passpickedup} in Model~\ref{mod3}, it becomes obvious from Figure~\ref{figrce} that the service provider benefits from a large decrease in routing costs. Table~\ref{tab:figce} contains the corresponding vehicle routes including pick-up and drop-off times in minutes after the start of service. In the vehicle tours computed using $f_{rce}$, users $1$ and $4$ are transported without any time loss. For the Q3- and Q6-instances, on average 7\% and 3\% fewer requests are answered in comparison to the results obtained with the objective function $f_{ce}$. In turn, we observe an average decrease of the total routing costs and the total excess ride time for all instance sizes. While the decrease in routing costs ranges from 2\% to 11\%, there are huge savings in excess ride time: There is an average decrease of 54\% and 42\% (Q3- and Q6-instances, respectively).
Furthermore, the computational time decreases by 11\% (Q3-instances) and 2\% (Q6-instances). While most real-world instances of DARPs are much larger, the proposed approach could still prove useful as a subroutine for realistically-sized instances. In particular, the extension to the online version of DARP, in which requests arrive over time, could be a promising application for our models. Indeed, often only very few new requests arrive simultaneously, and re-routing of already scheduled users is only acceptable if it decreases their arrival time. Consequently, such a rolling-horizon version of a dial-a-ride problem involves relatively few simultaneous users and could potentially be solved exactly using one of our models. Note that this does not, in general, lead to a globally optimal solution of the offline problem. \section{Introduction}\label{sec:intro} \input{intro} \section{Literature Review}\label{sec:lit} \input{lit} \section{Problem Description}\label{sec:prob} \input{prob} \section{Mixed Integer Linear Programming Formulations}\label{sec:model} \input{model} \section{Numerical Experiments}\label{sec:experiments} \input{experiments} \section{Conclusions}\label{sec:concl} \input{conclusion} \section*{Acknowledgements} \noindent This work was partially supported by the state of North Rhine-Westphalia (Germany) within the project ``bergisch.smart.mobility''. \input{main_ridehailing.bbl} \end{document} \subsection{Event-based graph model}\label{subsec:event} \noindent The event-based MILP formulations presented in this paper are motivated by the work of \cite{bertsimas19online}, who propose an optimization framework for taxi routing where only one passenger is transported at a time. Their algorithm can handle more than $25,000$ users per hour. \cite{bertsimas19online} propose a graph-based formulation in which an arc $(i, j)$ represents the decision to serve passenger $j$ directly after dropping off passenger $i$.
The allocation of users to a vehicle with capacity $Q$ can be written as a $Q$-tuple. Unassigned user slots (i.e., empty seats) are indicated by zeros. If, for example, $Q=3$ and two requests, request $1$ and request $2$ with $q_1=q_2=1$, are assigned to the same vehicle, the user allocation may be, for instance, represented by the tuple $\left(2,1,0\right)$. Accordingly, an empty vehicle is represented by $(0,0,0)$. To additionally incorporate information about the most recent pick-up or drop-off location visited, we write \begin{equation*} (2^+,1,0) \quad \text{or} \quad (2^-,1,0), \end{equation*} if user $1$ is seated and user $2$ has just been picked-up or dropped-off, respectively. Note that this encoding implicitly specifies the location of the vehicle since, in this example, user $2$ is picked-up at his or her pick-up location $2^+$, i.e., the $3$-tuple $(2^+,1,0)$ can be associated with the location $2^+$, and user $2$ is dropped-off at his or her drop-off location $2^-$, i.e., the $3$-tuple $(2^-,1,0)$ can be associated with the location $2^-$. The node $\boldsymbol{0}=(0,0,0)$ is associated with the depot. Since all permutations of such a $Q$-tuple specify the same user allocation, we order the components of the $Q$-tuple such that the first component contains the information regarding the last pick-up or drop-off stop and the remaining $Q-1$ components are sorted in descending order. Users can only be placed together in a vehicle if the total number of requested seats does not exceed the vehicle capacity. This constraint limits the possible combinations of users in the vehicle and hence the set of possible $Q$-tuples representing user allocations. Now DARP can be represented by a directed graph $G = (V,A)$, where the node set $V$ represents events rather than geographical locations. The set of event nodes corresponds to the set of all feasible user allocations. 
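The tuple encoding and its ordering convention can be sketched in a few lines of Python. The helper names below are hypothetical and not part of the model formulation; they merely illustrate how a node is normalized (last event first, remaining users in descending order, zero-padded) and how the seat-capacity restriction is checked.

```python
def normalize(event_user, event_sign, others, Q):
    """Build a Q-tuple node: the first slot holds the most recent pick-up ('+')
    or drop-off ('-') event; the remaining users are sorted in descending
    order and padded with zeros for empty slots."""
    rest = sorted(others, reverse=True) + [0] * (Q - 1 - len(others))
    return (f"{event_user}{event_sign}",) + tuple(rest)

def feasible(event_user, others, q, Q):
    """Users may share the vehicle only if their requested seats sum to at most Q."""
    return q[event_user] + sum(q[u] for u in others) <= Q

q = {1: 1, 2: 1}                 # both users request one seat
Q = 3
node = normalize(2, "+", [1], Q)
print(node)                      # ('2+', 1, 0): user 2 just picked up, user 1 seated
print(feasible(2, [1], q, Q))    # True: 1 + 1 <= 3
```

A node such as `('2+', 1, 0)` thus corresponds to the tuple $(2^+,1,0)$ from the text, and allocations whose seat demands exceed $Q$ are simply never generated.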
The set of all nodes that represent an event in which a request (or user) $i\in R$ is picked up is called the set of \emph{pick-up nodes} and is given by \begin{equation*} \begin{split} V_{i^+} \coloneqq \Biggl\lbrace (v_1, v_2, &\ldots, v_Q) \colon v_1 = i^+,\; v_j \in R \cup \{0\} \setminus \{i\} \; \forall j \in\{2,\ldots,Q\},\\ &\Bigl(v_j > v_{j+1} \vee v_{j+1}=0 \Bigr) \; \forall j \in\{2,\ldots,Q-1\}, \; \sum_{j=1}^Q q_{v_j} \leq Q\Biggr\rbrace. \end{split} \end{equation*} Similarly, the set of \emph{drop-off nodes} corresponds to events where a request (or user) $i \in R$ is dropped off and is given by \begin{equation*} \begin{split} V_{i^-} \coloneqq \Biggl\lbrace (v_1, v_2, &\ldots, v_Q) \colon v_1 = i^-,\; v_j \in R\cup \{0\} \setminus \{i\} \; \forall j \in\{2,\ldots,Q\},\\ &\Bigl(v_j > v_{j+1} \vee v_{j+1}=0 \Bigr) \; \forall j \in\{2,\ldots,Q-1\}, \; \sum_{j=1}^Q q_{v_j} \leq Q\Biggr\rbrace. \end{split} \end{equation*} Note that for each request $i \in R$ there is only one pick-up and one drop-off location, but more than one potential pick-up node and drop-off node. Hence, there is a unique mapping of nodes to locations, while a location may be associated with many different nodes. For convenience, we write $V_0 \coloneqq \{\boldsymbol{0}\}$. The overall set of nodes $V$ is then given by \begin{equation*} V = V_0 \cup \bigcup_{i = 1}^n V_{i^+} \cup \bigcup_{i = 1}^n V_{i^-}. \end{equation*} The arc set $A$ of $G$ is defined by the set of possible transits between pairs of event nodes in $V$.
It is composed of six subsets, i.e., \[ A = \bigcup_{i = 1}^6 A_i, \] where the subsets $A_i$, $i=1,\dots,6$ are defined as follows: \begin{itemize} \item Arcs that describe the transit from a pick-up node in a set $V_{i^+}$, i.e., from a user $i$'s pick-up location, to a drop-off node in $V_{j^-}$, i.e., to the drop-off location of a user $j$, where $j=i$ is possible, but where $j$ may also be another user from the current passengers in the vehicle: \begin{equation*} \begin{split} A_1 \coloneqq \Bigl\lbrace & \bigl(\bigl(i^+, v_2, \ldots, v_Q\bigr), \bigl(j^-, w_2, \ldots, w_Q\bigr)\bigr)\in V\times V \colon \\ & \{j, w_2, \ldots, w_Q\}=\{i, v_2, \ldots, v_Q\}\Bigr\rbrace. \end{split} \end{equation*} Note that such arcs reflect the case that the vehicle travels from a user $i$'s pick-up location to a user $j$'s drop-off location, and all users except user $j$ (if there are any) remain seated. \item Arcs that describe the transit from a pick-up node in a set $V_{i^+}$, i.e., from a user $i$'s pick-up location, to another pick-up node from a set $V_{j^+}$ with $j\neq i$, i.e., to another user $j$'s pick-up location: \begin{equation*} \begin{split} A_2 \coloneqq \Bigl\lbrace & \bigl(\bigl( i^+, v_2, \ldots, v_{Q-1}, 0\bigr) , \bigl(j^+, w_2, \ldots, w_Q\bigr)\bigr)\in V\times V \colon \\ & \{i, v_2, \ldots, v_{Q-1}\}=\{w_2, \ldots, w_Q\} \Bigr\rbrace. \end{split} \end{equation*} Arcs in $A_2$ thus represent the trip of the vehicle from a user $i$'s pick-up location to another user $j$'s pick-up location, where user $j$ additionally enters the vehicle. \item Arcs that describe the transit from a drop-off node in a set $V_{i^-}$, i.e., from a user $i$'s drop-off location, to a pick-up node in a set $V_{j^+}$, $j\neq i$, i.e., to another user $j$'s pick-up location: \begin{equation*} A_3 \coloneqq \Bigl\lbrace \bigl(\bigl(i^-, v_2, \ldots, v_Q\bigr) , \bigl( j^+, v_2, \ldots, v_Q\bigr)\bigr) \in V\times V \colon i \neq j \Bigr \rbrace. 
\end{equation*} \item Arcs that describe the transit from a drop-off node in a set $V_{i^-}$, i.e., from a user $i$'s drop-off location, to a node in $V_{j^-}$, $j\neq i$, i.e., to another user $j$'s drop-off location: \begin{equation*} \begin{split} A_4 \coloneqq \Bigl\lbrace &\bigl(\bigl( i^-, v_2, \ldots, v_Q\bigr) , \bigl( j^-, w_2, \ldots, w_{Q-1}, 0\bigr)\bigr)\in V\times V \colon \\ &\{ v_2, \ldots, v_Q\}=\{ j, w_2, \ldots, w_{Q-1}\} \Bigr\rbrace. \end{split} \end{equation*} Thus, the arcs in $A_4$ reflect the case that a vehicle travels from a user $i$'s drop-off location to a user $j$'s drop-off location, and all users except user $i$ remain in the vehicle until user $j$ is dropped off. In particular, user $j$ must already be in the vehicle when user $i$ is dropped off. \item Arcs that describe the transit from a drop-off node in a set $V_{i^-}$, i.e., from a user $i$'s drop-off location, to the depot: \begin{equation*} A_5 \coloneqq \Bigl\lbrace \bigl(\bigl( i^-, 0, \ldots,0\bigr) , (0,\ldots,0)\bigr)\in V\times V \Bigr\rbrace. \end{equation*} \item Arcs that describe the transit from the depot to a pick-up node in a set $V_{i^+}$, i.e., to a user $i$'s pick-up location: \begin{equation*} A_6 \coloneqq \Bigl\lbrace \bigl((0,\ldots,0) , \bigl(i^+,0,\ldots,0\bigr)\bigr)\in V\times V \Bigr\rbrace. \end{equation*} \end{itemize} \begin{example} We give an example of the event-based graph $G=(V,A)$ with three users and vehicle capacity $Q=3$. Let $R=\{1,2,3\}$, $q_1 = q_2 = 1$ and $q_3 = 3$. By the above definitions we obtain the graph illustrated in Figure~\ref{fig:example}. Note that there are no nodes $v\in V$ that simultaneously contain users $1$ (i.e., $1^+$ or $1^-$) and $3$ (i.e., $3^+$ or $3^-$) as the total number of requested seats specified by these users exceeds the vehicle capacity. Similarly, the seats requested by users $2$ and $3$ together exceed the vehicle capacity of three.
Two feasible tours for a vehicle in $G$ are given, for example, by the dicycles \begin{equation*} \begin{split} C_1 =\Bigl\lbrace & \big(\big(0,0,0\big),\big(1^+,0,0\big)\big),\ \big(\big(1^+,0,0\big),\big(2^+,1,0\big)\big),\ \big(\big(2^+,1,0\big),\big(2^-,1,0\big)\big), \\ & \big(\big(2^-,1,0\big),\big(1^-,0,0\big)\big),\ \big(\big(1^-,0,0\big),\big(0,0,0\big)\big)\Bigr\rbrace \end{split} \end{equation*} and \begin{equation*} C_2 = \Bigl\lbrace \big(\big(0,0,0\big),\big(3^+,0,0\big)\big),\ \big(\big(3^+,0,0\big), \big(3^-,0,0\big)\big),\ \big(\big(3^-,0,0\big),\big(0,0,0\big)\big)\Bigr\rbrace. \end{equation*} \begin{figure} \begin{scaletikzpicturetowidth}{\textwidth} \footnotesize \begin{tikzpicture}[scale = \tikzscale] \useasboundingbox (-12,-11) rectangle (12,0.75); \node (null) at (0,0) {$(0,0,0)$}; \node (100) at (-10,-3) {$\left(1^+,0,0\right)$}; \node (200) at (-2,-3) {$\left(2^+,0,0\right)$}; \node (300) at (6,-3) {$\left(3^+,0,0\right)$}; \path (null) edge[-latex] (100); \path (null) edge[-latex] (200); \path (null) edge[-latex] (300); \node (400) at (-6,-3) {$\left(1^-,0,0\right)$}; \node (500) at (2,-3) {$\left(2^-,0,0\right)$}; \node (600) at (10,-3) {$\left(3^-,0,0\right)$}; \path (100) edge[-latex] (400) (200) edge[-latex] (500) (300) edge[-latex] (600); \path (400) edge[-latex] (null) (500) edge[-latex] (null) (600) edge[-latex] (null); \path (400) edge[-latex] (200) (400) edge[-latex, bend angle = 20, bend right] (300); \path (500) edge[-latex, bend angle = 20, bend left] (100) (500) edge[-latex] (300); \path (600) edge[-latex, bend angle = 20, bend left] (100) (600) edge[-latex, bend angle = 20, bend left] (200); \node (210) at (-8,-6) {$\left(2^+,1,0\right)$}; \path (100) edge[-latex] (210); \node (120) at (0,-6) {$\left(1^+,2,0\right)$}; \path (200) edge[-latex] (120); \node (420) at (-6,-9) {$\left(1^-,2,0\right)$}; \node (510) at (-2,-9) {$\left(2^-,1,0\right)$}; \path (210) edge[-latex] (420) (210) edge[-latex] (510); \path (120) edge[-latex] 
(420) (120) edge[-latex] (510); \draw (510) edge[-latex, bend angle = 15, bend right] (400); \draw (420) edge[-latex, bend angle = 10, bend left] (500); \end{tikzpicture} \end{scaletikzpicturetowidth} \caption{Graph representation of an example with three users\label{fig:example}.} \end{figure} \end{example} In order to evaluate the complexity of this event-based graph representation of DARP, we first determine the respective cardinalities of the node set $V$ and of the arc set $A$. Note that the number of nodes and arcs in the event-based graph model depends on the vehicle capacity, the number of users and the number of requested seats per user. Given $Q$ and $n$, the number of nodes and arcs is maximal if all users request only one seat, i.e.\ if $q_i=1$ for all $i\in R$. In this case, it is easy to see that \begin{equation*} \vert V \vert = 1 + 2\,n \sum_{j=0}^{Q-1} \binom{n-1}{j} \end{equation*} and \begin{equation*} \begin{split} \vert A\vert =\ & 2\,n + n \sum_{j=0}^{Q-1} \binom{n-1}{j}(j+1) + 3\,n\,(n-1)\sum_{j=0}^{Q-2} \binom{n-2}{j} \\ &+ \frac{n (n-1)\cdot\ldots\cdot (n-Q)}{(Q-1)!}, \end{split} \end{equation*} where we use the convention that $\binom{m}{k}\coloneqq 0 $ when $k > m$. From the above formulas, we deduce that the number of nodes is bounded by $\mathcal{O}(n^Q)$ for $n\geq Q$ and the number of arcs is bounded by $\mathcal{O}(n^{Q+1})$ for $n\geq Q+1$. This is in general considerably more than what is obtained when a classical, geometrical DARP model on a complete directed graph is used, which has only $\mathcal{O}(n)$ nodes and $\mathcal{O}(n^2)$ arcs, see e.g.\ \cite{Cordeau2006, Ropke2007}. However, in practice, ride-hailing services are usually operated by taxis or mini-buses, so that $Q \in \{3,6\}$. Moreover, the number of nodes, and thereby the number of arcs, reduces substantially if we do not consider the ``worst-case-scenario'' $q_i = 1$ for all $i\in R$, in which all combinations of requests are possible user allocations in the vehicle.
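The counting formulas above can be checked numerically. The following sketch (function names are ours) evaluates $\vert V\vert$ and $\vert A\vert$ for unit demands $q_i=1$ and a few instance sizes:

```python
from math import comb, factorial

def falling(n, Q):
    """The product n (n-1) ... (n-Q), i.e. Q+1 consecutive factors;
    it vanishes for n <= Q."""
    p = 1
    for k in range(Q + 1):
        p *= n - k
    return max(p, 0)

def num_nodes(n, Q):
    # |V| = 1 + 2 n sum_{j=0}^{Q-1} binom(n-1, j)
    return 1 + 2 * n * sum(comb(n - 1, j) for j in range(Q))

def num_arcs(n, Q):
    # |A| as in the closed-form expression above (comb(m, k) = 0 for k > m)
    return (2 * n
            + n * sum(comb(n - 1, j) * (j + 1) for j in range(Q))
            + 3 * n * (n - 1) * sum(comb(n - 2, j) for j in range(Q - 1))
            + falling(n, Q) // factorial(Q - 1))

for n in (3, 5, 10):
    print(n, num_nodes(n, 3), num_arcs(n, 3))
```

For instance, for $n=3$ and $Q=3$ this yields $\vert V\vert = 25$ nodes, and the growth over $n$ illustrates the $\mathcal{O}(n^Q)$ and $\mathcal{O}(n^{Q+1})$ bounds.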
Besides that, the event-based formulation has the clear advantage that important constraints like vehicle capacity constraints, pairing constraints and precedence constraints that have to be formulated in classical models are implicitly handled using the event-based graph model, as will be seen in the next section. \subsection{Event-based MILP models} \noindent With the above definitions, DARP can be modeled as a minimum cost integer flow problem with additional constraints, where both the source and the sink are represented by the node $\boldsymbol{0}$. We will formulate corresponding MILP models in the subsequent subsections. To this end, we need the following additional parameters and variables, which are also summarized in Tables~\ref{tab:paras} and \ref{tab:vars}. Since each node in $V$ can be associated with a unique request location $j \in P\cup D \cup \{0\}$, we can associate a routing cost $c_a$ and a travel time $t_a$ with each arc $a=(v,w)\in A$. More precisely, both values $c_a$ and $t_a$ are calculated by evaluating the actual routing cost and travel time from the location associated with $v$ to the location associated with $w$. We assume that all routing costs and all travel times are nonnegative and satisfy the triangle inequality. Finally, let $\delta^{\text{in}}(v)\coloneqq \{(u,w) \in A \colon w = v\}$ and $\delta^{\text{out}}(v)\coloneqq\{(u,w) \in A \colon u = v\}$ denote the set of incoming arcs of $v$ and the set of outgoing arcs of $v$, respectively. Moreover, for $a\in A$ let the variable value $x_a = 1$ indicate that arc $a$ is used by a vehicle, and let $x_a = 0$ otherwise. Thus, a vehicle tour is represented by a sequence of events in a dicycle $C$ in $G$ where $x_a=1$ for all $a \in C$. A request is matched with a vehicle if the vehicle's tour, i.e., the sequence of events in the corresponding dicycle, contains the events of picking up and dropping off the corresponding user.
Since we allow users' requests to be denied, let the variable value $p_i = 1$ indicate that request $i\in R$ is accepted, and let $p_i = 0$ otherwise. Let the variable $B_v$ store the information on the beginning of service time at node $v \in V$, which can be deduced from the variable values $x$ and $p$. Recall that the beginning of service time has to be within the associated time window of the respective location of node $v$. \ctable[width=\textwidth, botcap, caption=List of parameters., label=tab:paras, doinside=\small ]{cX}{}{ \toprule Parameter & Description \\ \midrule $n$ & number of transport requests \\ $R$ & set of transport requests \\ $i^+$, $i^-$ & pick-up and drop-off location of request $i$ \\ $P$, $D$ & set of pick-up locations/requests, set of drop-off locations\\ $K$ & fleet of vehicles \\ $Q$ & vehicle capacity \\ $q_j$ & load associated with location $j$ \\ $s_j$ & service duration associated with location $j$ \\ $\left[e_j, \ell_j \right]$ & time window associated with location $j$ \\ $T$ & maximum duration of service \\ $L_i$ & maximum ride time associated with request $i$ \\ $V$ & node set \\ $V_{i^+}$, $V_{i^-} $ & set of pick-up nodes, set of drop-off nodes corresponding to request $i$ \\ $A$ & arc set \\ $c_a$ & routing cost on arc $a$ \\ $t_a$ & travel time on arc $a$ \\ $t_i$ & travel time along the shortest path for request $i$ \\ $\delta^{\text{in}}(v)$, $\delta^{\text{out}}(v)$ & incoming arcs, outgoing arcs of node $v$ \\ \bottomrule } \ctable[width=\textwidth, botcap, caption=List of variables., label=tab:vars, doinside=\small ]{cX}{}{ \toprule Variable & Description\\ \midrule $p_i$ & binary variable indicating if user $i$ is transported or not \\ $B_v$ & continuous variable indicating the start of service time at node $v$ \\ $x_a$ & binary variable indicating if arc $a$ is used or not \\ $d_i$ & excess ride time of user $i$ w.r.t.\ $e_{i^-}$ \\ $d_{\max}$ & maximum excess ride time\\ \bottomrule } Based on the event-based graph 
model, we are now ready to formulate our first MILP model for DARP. \subsection{Basic MILP for DARP} \noindent In this subsection, we propose an event-based mixed integer linear program for DARP. First, we present a nonlinear mixed integer programming formulation, which is based on the event-based graph model presented in Section~\ref{subsec:event} above. In a second step, this model is transformed into an MILP by a reformulation of time window and ride time constraints, i.e., constraints involving the variables $B_v$, $v\in V$, using a big-M method. The DARP can be formulated as the following nonlinear mixed integer program: \begin{subequations} \begin{align} \min \;& \sum_{a\in A} c_a\,x_a \label{costobj_first}\\[1ex] \text{\textit{s.\,t.}}\; &\sum_{a\in \delta^{\text{in}}(v)} \!\!\!x_a - \sum_{a\in \delta^{\text{out}}(v)}\!\!\! x_a = 0 \quad \forall v\in V, \label{flowconservation1} \\ & \sum_{\substack{a\in\delta^{\text{in}}(v)\\ v \in V_{i^+}}}\!\!\! x_a = 1 \quad \forall i \in R, \label{passpickedup1} \\ & \sum_{a \in \delta^{\text{out}}(\boldsymbol{0})} \! \! \! \!\! \! x_a \leq \vert K\vert, \label{numberofvehicles1} \\ & e_j \leq B_v \leq \ell_j \quad \forall j \in P \cup D \cup \{0\},\, v\in V_j, \label{timewindows} \\ & (B_w - B_v - s_{i^+})\!\! \sum_{a \in \delta^{\text{in}}(v)}\!\!\! x_a\! \! \sum_{a \in \delta^{\text{in}}(w)}\!\!\! x_a \leq L_i \quad \forall i \in R,\, v \in V_{i^+}, \, w \in V_{i^-}, \label{ridetime_nonlin} \\ & B_w \geq (B_v + s_{v_1} + t_{(v,w)})\,x_{(v,w)} \quad \forall (v,w) \in A, \label{traveltime_nonlin} \\ & x_a \in \{0,1\} \quad \forall a \in A, \\ & B_v \geq 0 \quad \forall v \in V. \end{align} \end{subequations} The objective function \eqref{costobj_first} minimizes the total routing cost.
While constraints \eqref{flowconservation1} are flow conservation constraints, it is ensured by constraints \eqref{passpickedup1} that each request $i\in R$ is accepted and that exactly one node of all nodes which contain the request's pick-up location is reached by exactly one vehicle. Together with constraints \eqref{flowconservation1} and \eqref{passpickedup1}, the number of feasible dicycles in $G$ is bounded from above by the number of vehicles in constraints \eqref{numberofvehicles1}. For all nodes in $V$ the start of service has to take place within the time window corresponding to the associated location of the node, which is handled by constraints \eqref{timewindows}. An upper bound on the ride time is ensured by constraints \eqref{ridetime_nonlin}. Note that we only impose a bound on the variables $B_w$, $B_v$, if both $v\in V_{i^+}$ and $w\in V_{i^-}$ are in fact the pick-up and drop-off nodes that are used to serve request $i$. Finally, constraints \eqref{traveltime_nonlin} define the difference in time needed to travel from one node to another. Vehicle capacity, pairing and precedence constraints are ensured by the structure of the underlying network. This formulation is nonlinear due to constraints \eqref{ridetime_nonlin} and \eqref{traveltime_nonlin}. In the following MILP these constraints are substituted by a linearized reformulation: \begin{model}\label{mod2} \begin{subequations} \begin{align} \min \;& \sum_{a\in A} c_a\,x_a \label{costobj1}\\[1ex] \text{s.\,t.}\; &\text{constraints}\ \eqref{flowconservation1} - \eqref{timewindows} \notag \\ &B_w - B_v - s_{i^+} \leq L_i + M_i\,\biggl(1-\sum_{a \in \delta^{\text{in}}(v)}x_a\!\!
+ 1-\sum_{a \in \delta^{\text{in}}(w)}x_a \biggr) \notag\\ &\hspace*{5cm}\forall i \in R,\, v \in V_{i^+},\, w \in V_{i^-}, \label{ridetime1} \\ &B_w \geq B_v + s_{v_1} + t_{(v,w)} - \tilde{M}_{v,w}\,(1-x_{(v,w)}) \quad \forall (v,w) \in A, \label{traveltime1} \\ & x_a \in \{0,1\} \quad \forall a \in A, \\ & B_v \geq 0 \quad \forall v \in V, \end{align} \end{subequations} where $M_i\geq \ell_{i^-} - e_{i^+} - L_i - s_{i^+}$ and $\tilde{M}_{v,w}\geq \ell_{v_1} - e_{w_1} + s_{v_1} + t_{(v,w)}$ are sufficiently large constants. \end{model} To include the option to deny requests in DARP, variables $p_i \in \lbrace 0, 1\rbrace$, $i \in R$ have to be added to Model~\ref{mod2} and constraints \eqref{passpickedup1} have to be changed to \begin{equation} \sum_{\substack{a\in\delta^{\text{in}}(v)\\ v \in V_{i^+}}}\!\!\! x_a = p_i \qquad \forall i \in R. \label{passpick?} \end{equation} Hence, if a user is not picked up (i.e., if $p_i=0$), then none of the nodes which contain his or her pick-up location are traversed by any vehicle. Note that in this case a reasonable objective function (see Section~\ref{sec:obj}) has to penalize the denial of user requests since otherwise an optimal solution is given by $p = 0$, $x=0$ and $B_v = e_{v_1}$ for all $v\in V$. In the computational experiments in Section~\ref{sec:experiments} we consider both cases, i.e., the scenario that all users have to be served and the scenario that some requests may be denied. Assuming $q_i=1$ for all requests $i\in R$ and $n\geq Q+1$, the total number of variables in Model~\ref{mod2} can be bounded by $\mathcal{O}(n^{Q+1})$ with $\mathcal{O}(n^{2Q-1})$ constraints, of which $\mathcal{O}(n^{2Q-1})$ constraints are ride time constraints \eqref{ridetime1}. If $q_i \in \{2, \ldots, Q\}$ for some requests, the number of variables and constraints decreases.
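The stated lower bounds on the big-M constants translate directly into code; a minimal sketch with made-up instance data (all names and numbers are illustrative, not taken from the experiments):

```python
def big_M_ride(e_pick, l_drop, s_pick, L):
    # smallest admissible M_i:  M_i >= l_{i-} - e_{i+} - L_i - s_{i+}
    return l_drop - e_pick - L - s_pick

def big_M_travel(l_v, e_w, s_v, t_vw):
    # smallest admissible M~_{v,w}:  M~_{v,w} >= l_{v_1} - e_{w_1} + s_{v_1} + t_{(v,w)}
    return l_v - e_w + s_v + t_vw

# toy request: pick-up window [0, 30], drop-off window [10, 60],
# service duration 2, maximum ride time 40, travel time 7
print(big_M_ride(0, 60, 2, 40))    # -> 18
print(big_M_travel(30, 10, 2, 7))  # -> 29
```

Choosing the constants as small as the bounds allow keeps the linear relaxation as tight as possible.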
In each of the ride time constraints in Model~\ref{mod2}, the sums $\sum_{a \in \delta^{\text{in}}(v)}x_a $ and $\sum_{a \in \delta^{\text{in}}(w)}x_a$ are evaluated. This is computationally expensive, as will be demonstrated in the computational tests carried out in Section~\ref{sec:experiments}. By taking advantage of the relationship between the pick-up and drop-off time windows associated with request $i$, we show in the following how constraints \eqref{ridetime1} can be reformulated without using big-M constraints, resulting in a second MILP formulation referred to as Model~\ref{mod3}. \subsection{Reformulation of time-related constraints} The MILP model presented in this subsection differs from the previous model, Model~\ref{mod2}, in the formulation of the time window and ride time constraints. In Model~\ref{mod2}, the ride time constraints are modeled as big-M constraints that are used to deactivate the respective constraints for pick-up and drop-off nodes that are not contained in a vehicle's tour. By reformulating the time window constraints and using the relationship between earliest pick-up and latest drop-off times, the numerically unfavorable big-M constraints can be replaced by simpler constraints in the following model (Model~\ref{mod3}). This model is faster to solve, which is verified by the numerical experiments presented in Section~\ref{sec:experiments}. In this model, the ride time constraints, which ensure that a user does not spend more than $L_i$ minutes in the vehicle, are given by \begin{equation} B_w - B_v - s_{i^+} \leq L_i \qquad \forall i \in R,\,\forall v \in V_{i^+}, \,\forall w \in V_{i^-}. \label{ridetime_test} \end{equation} To show that these constraints, together with a reformulation of constraints \eqref{timewindows}, reflect the modeling assumptions, we first observe that in general applications of DARP, as described for example in \cite{Cordeau2006}, users often formulate \emph{inbound} requests and \emph{outbound} requests.
In the first case, users specify a desired departure time from the origin, while in the case of an outbound request, users specify a desired arrival time at the destination. In both cases a time window is constructed around the desired time, so that we end up with a pick-up time window for inbound requests and a drop-off time window for outbound requests. Now the remaining time window is constructed as follows (based on \cite{Cordeau2006}): For an inbound request the drop-off time window is given by the bounds \begin{equation} e_{i^-} = e_{i^+} + s_{i^+} + t_{i} \qquad \text{and} \qquad\ell_{i^-} = \ell_{i^+} + s_{i^+} + L_i, \label{inbound} \end{equation} where $t_i$ denotes the direct travel time $t_{(v,w)}$ from a node $v\in V_{i^+}$ to a node $w\in V_{i^-}$, i.e.\ $t_i = t_{(v,w)}$ with $v = (i^+,0,\ldots,0)$ and $w = (i^-,0,\ldots,0)$. Similarly, for an outbound request the pick-up time window is defined by \begin{equation} e_{i^+} = e_{i^-} - L_i - s_{i^+} \qquad \text{and} \qquad \ell_{i^+} = \ell_{i^-} - t_i - s_{i^+}. \label{outbound} \end{equation} Secondly, we define the notion of an \emph{active} node $v\in V$: We say that a node $v\in V$ is \emph{active} if at least one of its incoming arcs is part of a dicycle flow, i.e., if \begin{equation*} \sum_{a\in \delta^{\text{in}}(v)}x_a = 1. \end{equation*} Otherwise, we call $v$ \emph{inactive}. Note that due to constraints \eqref{passpickedup1}, for each request $i\in R$ we have exactly one associated active pick-up node and one associated active drop-off node. In case we include the option of denying user requests, i.e.\ we use constraints \eqref{passpick?} instead of constraints \eqref{passpickedup1} and add variables $p_i \in \{0,1\}$, $i\in R$ to the MILP, each request either has exactly one active pick-up and drop-off node (in this case we have $p_i=1$), or no associated active node at all (this is the case when $p_i=0$).
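The window constructions \eqref{inbound} and \eqref{outbound} are simple arithmetic and can be sketched as follows (function names and numbers are illustrative):

```python
def drop_window_inbound(e_pick, l_pick, s_pick, t_direct, L):
    # eq. (inbound): e_{i-} = e_{i+} + s_{i+} + t_i,  l_{i-} = l_{i+} + s_{i+} + L_i
    return e_pick + s_pick + t_direct, l_pick + s_pick + L

def pick_window_outbound(e_drop, l_drop, s_pick, t_direct, L):
    # eq. (outbound): e_{i+} = e_{i-} - L_i - s_{i+},  l_{i+} = l_{i-} - t_i - s_{i+}
    return e_drop - L - s_pick, l_drop - t_direct - s_pick

# inbound request: desired pick-up window [100, 115], service duration 3,
# direct travel time 20, maximum ride time 45 (all values illustrative)
print(drop_window_inbound(100, 115, 3, 20, 45))   # -> (123, 163)
print(pick_window_outbound(123, 163, 3, 20, 45))  # -> (75, 140)
```

The derived windows are the widest intervals compatible with the service duration, the direct travel time and the maximum ride time.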
Now, if both $v$ and $w$ in constraints \eqref{ridetime_test} are active nodes, inequalities \eqref{ridetime_test} and \eqref{ridetime1} coincide. In case both $v$ and $w$ are inactive, the values $B_v$, $B_w$ can be ignored in an interpretation of an optimal solution, as $v$ and $w$ are not contained in any of the vehicle tours. Hence, the critical two cases are the cases where one of the nodes is active and the other node is inactive. Let $v^{\text{off}}$ and $v^{\text{on}}$ denote the inactive and the active node from the set $\{v,w\}$, respectively. Then, we do not want that $B_{v^{\text{off}}}$ influences the value of $B_{v^{\text{on}}}$ in constraints \eqref{ridetime_test}: \begin{description} \item[Case 1: $v$ is active, $w$ is inactive.] Resolving \eqref{ridetime_test} for $B_v$ we obtain $B_v \geq B_w - L_i - s_{i^+}$. Now, we do not want to impose any additional constraints on $B_v$. Thus, we demand $B_w - L_i - s_{i^+} \leq e_{i^+}$. Accordingly, it has to hold that $B_w \leq e_{i^+} + L_i + s_{i^+}$. Recall that here we assume $w$ to be inactive. If $w$ is active, $B_w \leq \ell_{i^-}$ needs to hold. Putting these restrictions together for an inbound request, we get \begin{align} B_w &\leq e_{i^+} + L_i + s_{i^+} + (\ell_{i^-}- (e_{i^+} + L_i + s_{i^+}))\sum_{a \in \delta^{\text{in}}(w)} x_a \notag \\ & = e_{i^+} + L_i + s_{i^+} + (\ell_{i^+} + L_i + s_{i^+} - (e_{i^+} + L_i + s_{i^+}))\sum_{a \in \delta^{\text{in}}(w)} x_a \notag \\ &= e_{i^+} + L_i + s_{i^+} + (\ell_{i^+} - e_{i^+} )\sum_{a \in \delta^{\text{in}}(w)} x_a \label{B_w_bound1}, \end{align} using the reformulation of $\ell_{i^-}$ from equations \eqref{inbound} in the second step. In the same manner, we use equations \eqref{outbound} to substitute $e_{i^+} = e_{i^-} - L_i - s_{i^+}$ and obtain that \begin{equation} B_w \leq e_{i^+} + L_i + s_{i^+} + (\ell_{i^-} - e_{i^-} )\sum_{a \in \delta^{\text{in}}(w)} x_a\label{B_w_bound2} \end{equation} has to hold for an outbound request. 
Without loss of generality, we may assume that the time window $\left[e_{i^+}, \ell_{i^+}\right]$ constructed around an inbound request has the same length as the time window $\left[e_{i^-}, \ell_{i^-}\right]$ constructed around an outbound request, hence the two formulations \eqref{B_w_bound1} and \eqref{B_w_bound2} coincide. \item[Case 2: $v$ is inactive, $w$ is active.] Resolving \eqref{ridetime_test} for $B_w$ we obtain $B_w \leq L_i + B_v + s_{i^+}$. In order for the latter inequality to be redundant for $B_w$, we demand that $L_i + B_v + s_{i^+} \geq \ell_{i^-}$. It follows that $B_v \geq \ell_{i^-} - L_i - s_{i^+}$ has to hold. For an inbound request, using the equivalence from \eqref{inbound}, this can be resolved to \begin{equation*} B_v \geq (\ell_{i^+} + L_i + s_{i^+}) - L_i - s_{i^+} = \ell_{i^+}. \end{equation*} Recall that we assumed that $v$ is inactive. In case $v$ is active, the weaker constraint $B_v \geq e_{i^+}$ has to hold, so that we arrive at \begin{equation*} B_v \geq e_{i^+} + (\ell_{i^+} - e_{i^+}) \,\biggl(1-\sum_{a \in \delta^{\text{in}}(v)} x_a\biggr). \end{equation*} In a similar fashion, we obtain \begin{equation*} B_v \geq e_{i^+} + (\ell_{i^-} - e_{i^-}) \,\biggl(1-\sum_{a \in \delta^{\text{in}}(v)} x_a\biggr), \end{equation*} for an outbound request. With the same argumentation as above, we conclude that the two lower bounds on $B_v$ coincide. \end{description} As desired, by reformulating the constraints \eqref{timewindows} on the variables $B_v$, $v\in V$, we obtain a simpler version of the ride time constraints \eqref{ridetime1}. We put these results together in a second MILP formulation of DARP.
\begin{model}\label{mod3} \begin{subequations} \begin{align} \min \;& \sum_{a\in A} c_a\,x_a \label{costobj}\\[1ex] \text{s.\,t.}\; & \sum_{a\in \delta^{\text{in}}(v)} x_a - \sum_{a\in \delta^{\text{out}}(v)} x_a = 0 \quad \forall v\in V, \label{flowconservation} \\ & \sum_{\substack{a\in\delta^{\text{in}}(v)\\ v \in V_{i^+}}} x_a = 1 \quad \forall i \in R, \label{passpickedup} \\ & \sum_{a \in \delta^{\text{out}}(\boldsymbol{0})} x_a \leq \vert K\vert, \label{numberofvehicles} \\ & e_0 \leq B_{\boldsymbol{0}} \leq \ell_0,\label{timewindows_0}\\ & e_{i^+} + (\ell_{i^+} - e_{i^+})\,\biggl(1- \sum_{a \in \delta^{\text{in}}(v)}\!\!x_a\biggr) \leq B_v \leq \ell_{i^+} \quad \forall i \in R,\, v\in V_{i^+}, \label{timewindows_pickup} \\ & e_{i^-} \leq B_v \leq e_{i^+} + L_i + s_{i^+} + (\ell_{i^+} - e_{i^+} ) \!\sum_{a \in \delta^{\text{in}}(v)}\!\!x_a \quad \forall i\in R,\, v \in V_{i^-}, \label{timewindows_dropoff} \\ & B_w - B_v - s_{i^+}\leq L_i \quad \forall i \in R,\, v \in V_{i^+}, \, w \in V_{i^-}, \label{ridetime} \\ & B_w \geq B_v + s_{v_1} + t_{(v,w)} - \tilde{M}_{v,w}\,(1-x_{(v,w)}) \quad \forall (v,w) \in A, \label{traveltime} \\ & x_a \in \{0,1\} \quad \forall a \in A, \\ & B_v \geq 0 \quad \forall v \in V. \label{lastconstr} \end{align} \end{subequations} \end{model} We thus replace the ride time constraints \eqref{ridetime1} by the simpler version \eqref{ridetime}. In return, we use a more complex version of the time window constraints \eqref{timewindows_0} -- \eqref{timewindows_dropoff} instead of the short form \eqref{timewindows}. Similar to Model~\ref{mod2}, variables $p_i \in \{0,1\}$, $i\in R$ may be added to Model~\ref{mod3} and constraints \eqref{passpickedup} may be substituted by constraints \eqref{passpick?} to include the option of denying user requests. In this case, the objective function should be modified to contain a term penalizing the denial of requests.
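The case analysis behind the reformulated time window constraints can be checked numerically: if the pick-up window specified for an inbound request has the same length as the drop-off window specified for an outbound request, the two upper bounds \eqref{B_w_bound1} and \eqref{B_w_bound2} on $B_w$ agree for both an inactive ($z=0$) and an active ($z=1$) node. A sketch with illustrative data (names and numbers are ours):

```python
def bound_inbound(e_p, l_p, s_p, L, z):
    # eq. (B_w_bound1): e_{i+} + L_i + s_{i+} + (l_{i+} - e_{i+}) z
    return e_p + L + s_p + (l_p - e_p) * z

def bound_outbound(e_p, s_p, L, e_d, l_d, z):
    # eq. (B_w_bound2): e_{i+} + L_i + s_{i+} + (l_{i-} - e_{i-}) z
    return e_p + L + s_p + (l_d - e_d) * z

e_p, l_p = 100, 115          # specified pick-up window (length 15)
e_d, l_d = 123, 138          # specified drop-off window of the same length
s_p, L = 3, 45               # service duration and maximum ride time
for z in (0, 1):             # node inactive / active
    assert bound_inbound(e_p, l_p, s_p, L, z) == bound_outbound(e_p, s_p, L, e_d, l_d, z)
print("upper bounds coincide for equal window lengths")
```

This mirrors the without-loss-of-generality argument used in Case 1 above.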
As there are $\mathcal{O}(n^{2Q-1})$ ride time constraints, for each of which two sums $\sum_{a \in \delta^{\text{in}}(v)} x_a$ have to be evaluated in the longer version \eqref{ridetime1}, but only $\mathcal{O}(n^Q)$ time window constraints, for each of which one sum of the above form has to be evaluated in the longer version \eqref{timewindows_0} -- \eqref{timewindows_dropoff}, we obtain a more efficient new MILP formulation. Similar to Model~\ref{mod2}, for $n\geq Q+1$ there are at most $\mathcal{O}(n^{Q+1})$ variables and at most $\mathcal{O}(n^{2Q-1})$ constraints, of which $\mathcal{O}(n^{2Q-1})$ constraints are ride time constraints \eqref{ridetime}. \subsection{Objective functions}\label{sec:obj} \noindent In most of the research on DARP only one objective is used, which is often the minimization of total routing costs. An excellent overview is given by \cite{Ho2018}. Other popular objectives are, for example, the minimization of total route duration, number of vehicles used, users' waiting time, drivers' working hours, or deviation from the desired pick-up and drop-off times. In this paper, we focus on three prevalent (and possibly conflicting) criteria, namely the total routing cost, the total number of unanswered requests and the total excess ride time or the maximum excess ride time, and combine these three criteria into weighted sum objective functions. The first and probably most important criterion is the total routing cost, which can be computed as \begin{equation} f_c(x) \coloneqq \sum_{a \in A} c_a \, x_a. \label{f_c} \end{equation} We refer to $f_c$ as the \emph{cost-objective}. The second objective function, \begin{equation*} f_{n} (p)\coloneqq n - \sum_{i\in R} p_i, \end{equation*} measures the total number of unanswered requests.
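To make this comparison concrete, one can count, for unit demands, the constraints that actually contain flow sums: Model~\ref{mod2} evaluates two sums in each of the $n\,\vert V_{i^+}\vert\,\vert V_{i^-}\vert$ big-M ride time constraints \eqref{ridetime1}, while Model~\ref{mod3} evaluates one sum in each of the $2n\,\vert V_{i^+}\vert$ reformulated time window constraints. A sketch (function names are ours):

```python
from math import comb

def nodes_per_request(n, Q):
    # |V_{i+}| = |V_{i-}| = sum_{j=0}^{Q-1} binom(n-1, j) for unit demands
    return sum(comb(n - 1, j) for j in range(Q))

def big_M_ridetime_constraints(n, Q):
    # Model 2: one big-M constraint (two flow sums) per i, v in V_{i+}, w in V_{i-}
    return n * nodes_per_request(n, Q) ** 2

def sum_timewindow_constraints(n, Q):
    # Model 3: one flow sum per pick-up node and per drop-off node
    return 2 * n * nodes_per_request(n, Q)

for n in (5, 10, 20):
    print(n, big_M_ridetime_constraints(n, 3), sum_timewindow_constraints(n, 3))
```

Already for $n=5$ and $Q=3$ this gives $605$ big-M ride time constraints against $110$ time window constraints, and the gap widens as $n$ grows, in line with the $\mathcal{O}(n^{2Q-1})$ versus $\mathcal{O}(n^Q)$ bounds.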
The next optimization criterion relates to customer satisfaction: We measure the response time to a service request by assessing a user's total excess ride time, which aims at penalizing overly long travel times as well as possibly delayed pick-up times. Let the variable $d_i \geq 0$, $i \in R$ measure the difference in time compared to a user's earliest possible arrival time. We refer to $d_i$ as a user's \emph{excess ride time}. Moreover, let the variable $d_{\max} \geq 0$ measure the \emph{maximum excess ride time}. By introducing constraints \begin{align} d_i &\geq B_v - e_{i^-} \qquad\forall i\in R,\, \forall v \in V_{i^-}, \label{excess}\\ d_{\max} & \geq d_i \qquad \forall i \in R, \label{maxexcess} \end{align} we can now minimize the total or average excess ride time, or the maximum excess ride time (i.e., the excess ride time in the worst case), respectively. The total excess ride time is thus given by the \emph{excess-objective} \begin{equation} f_e(d) \coloneqq \sum_{i \in R} d_i, \label{f_e} \end{equation} while the \emph{maximum-excess-objective} is given by \begin{equation} f_{e_{\max} }(d_{\max})\coloneqq d_{\max}. \label{f_emax} \end{equation} The discussion above highlights the fact that we have to consider different and generally conflicting objective functions that are relevant when solving DARP. While the cost objective $f_c$ aims at minimizing total travel cost and thus takes the perspective of the service provider, the quality of service, which rather reflects a user's perspective, is better captured by objective functions like $f_n$, $f_e$ and $f_{e_{\max}}$. We approach this technically bi- or multi-objective problem by using a weighted sum approach with fixed weights, i.e., by combining the relevant objective functions into one weighted sum objective.
When using the total excess time as quality criterion, we obtain \begin{equation} f_{ce} (x, d)\coloneqq \sum_{a \in A} c_a \, x_a + \alpha\sum_{i \in R} d_i \label{f_ce} \end{equation} which will be referred to as the \emph{cost-excess-objective}, and when using the maximum excess time as quality criterion, we get \begin{equation} f_{ce_{\max}} (x, d_{\max})\coloneqq \sum_{a \in A}c_a \, x_a + \beta d_{\max} \label{f_cemax} \end{equation} which will be referred to as the \emph{cost-max-excess-objective}. The parameters $\alpha>0$ and $\beta>0$ are weighting parameters that can be selected according to the decision maker's preferences. We refer to \cite{ehrgott05multicriteria} for a general introduction to the field of multi-objective optimization. Last but not least, we consider a form of DARP in which it is allowed to deny certain user requests. Denying requests can be reasonable if accepting them would require large detours and in turn substantially increase ride times or waiting times of other users. This is accomplished by substituting constraints \eqref{passpickedup1} in Model~\ref{mod2} or \eqref{passpickedup} in Model~\ref{mod3}, respectively, by constraints \eqref{passpick?} and adding variables $p_i \in \{0,1\}$, $i\in R$ to Model~\ref{mod2} or \ref{mod3}. In this case, the number of accepted requests has to be maximized or, equivalently, the number of unanswered requests has to be minimized. At the same time, routing costs and excess ride time should be as small as possible. The optimization of these opposing criteria is reflected by the \emph{request-cost-excess-objective} given by \begin{equation} f_{rce}(x, d, p) \coloneqq \sum_{a \in A} c_a \, x_a + \alpha\sum_{i \in R} d_i + \gamma \left(n - \sum_{i \in R} p_i \right), \label{f_rce} \end{equation} where $\gamma>0$ is an additional weighting parameter.
While the third part of the objective refers to the number of unanswered requests and is equal to $\gamma\, n $ at maximum (i.e., if all requests are accepted), the values of the total routing costs and of the total excess ride time strongly depend on the underlying network and request data. Note that meaningful choices of the weighting parameters have to reflect this in order to avoid situations where one part of the objective overrides the others. This will be discussed in Section~\ref{sec:experiments}. We emphasize at this point that the objective functions $f_{ce}$, $f_{ce_{\max}}$ and $f_{rce}$ can be interpreted as weighted sum objectives of bi-objective and tri-objective optimization problems, respectively.
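For fixed values of $x$, $d$ and $p$, the scalarized objectives are straightforward to evaluate; the following sketch uses illustrative data and weights (all numbers are ours, not from the experiments):

```python
def objectives(used_costs, d, p, alpha, beta, gamma):
    """Evaluate f_c, f_ce, f_cemax and f_rce for given variable values:
    used_costs: costs c_a of the arcs with x_a = 1,
    d: excess ride times d_i of the served users,
    p: acceptance indicators p_i."""
    f_c = sum(used_costs)
    f_e = sum(d)                       # total excess ride time
    f_emax = max(d, default=0.0)       # maximum excess ride time
    f_n = len(p) - sum(p)              # number of denied requests
    return (f_c,
            f_c + alpha * f_e,                 # cost-excess-objective f_ce
            f_c + beta * f_emax,               # cost-max-excess-objective f_cemax
            f_c + alpha * f_e + gamma * f_n)   # request-cost-excess-objective f_rce

print(objectives([4.0, 7.5, 3.5], [2.0, 0.0, 5.0], [1, 1, 1, 0],
                 alpha=0.5, beta=1.0, gamma=10.0))
# -> (15.0, 18.5, 20.0, 28.5)
```

Varying $\alpha$, $\beta$ and $\gamma$ traces out different trade-offs between the three criteria, which is exactly the weighted sum scalarization discussed above.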
\section{} Quasars are complex sources, characterized by broad band spectra from radio through optical to X-ray band, with numerous emission and absorption features. This complexity leads to rich diagnostics. However, \citet{bg92} used Principal Component Analysis (PCA), and with this analysis they were able to show significant correlations between the measured parameters. The leading component, related to Eigenvector 1 (EV1), was dominated by the anticorrelation between the Fe${\mathrm{II}}$ optical emission and the [OIII] line, and EV1 alone contained 30\% of the total variance. This opened the way to defining a quasar main sequence, in close analogy to the stellar main sequence on the Hertzsprung-Russell (HR) diagram (\citealt{sul01}). The question still remains which of the basic theoretically motivated parameters of an active nucleus (Eddington ratio, black hole mass, accretion rate, spin, and viewing angle) is the main driver behind the EV1. Here we limit ourselves to the optical waveband, and concentrate on the theoretical modelling of the Fe${\mathrm{II}}$ to H$\mathrm{\beta}$ ratio, and we test the hypothesis that the physical driver of EV1 is the maximum of the accretion disk temperature, reflected in the shape of the spectral energy distribution (SED). We performed computations of the H$\mathrm{\beta}$ and optical Fe${\mathrm{II}}$ emission for a broad range of SED peak positions using the CLOUDY photoionisation code. We assumed that both H$\mathrm{\beta}$ and Fe${\mathrm{II}}$ emission come from the Broad Line Region represented as a constant density cloud in a plane-parallel geometry. We expected that a hotter disk continuum will lead to more efficient production of Fe${\mathrm{II}}$, but our computations show that the Fe${\mathrm{II}}$ to H$\mathrm{\beta}$ ratio actually drops with the rise of the disk temperature. Thus either the hypothesis is incorrect, or the approximations used in our paper for the description of the line emissivity are inadequate.
\tiny \fontsize{8}{11}\helveticabold { \section{Keywords:} quasars: broad line region, Eigenvector 1, FeII strength, accretion disk temperature, constant density, photoionisation: CLOUDY} \end{abstract} \section{Introduction} Quasars are rapidly accreting supermassive black holes at the centres of massive galaxies. In type 1 AGN we see the nucleus directly: the continuum emission dominating the energy output in the optical/UV band comes from an accretion disk surrounding a supermassive black hole \cite[e.g.][]{Czerny1987,cap2015}, while the broad emission lines, the Fe${\mathrm{II}}$ pseudo-continuum and the Balmer component in the optical/UV are usually considered to come from the Broad Line Region (BLR) clouds. Broad band spectral properties and line emissivity are highly correlated \cite{bg92,sul00,sul02,sul07,yip04,sh14,sun15}, and PCA is a powerful tool herein. As suggested by \cite{sul01}, those correlations allow the identification of the quasar main sequence, analogous to the stellar main sequence on the HR diagram, where the classification was also based purely on spectral properties of the stellar atmospheres. The stellar main sequence reflected the dependence of the spectra on the effective temperature of the stars. The quasar main sequence was suggested to be driven mostly by the Eddington ratio \cite{bg92,sul00,sh14}, with additional effects of the black hole mass, viewing angle and intrinsic absorption \cite{sh14,sul00,kura09}. \par We postulate that the true driver behind R$_{\mathrm{FeII}}$ is the maximum of the temperature in a multicolor accretion disk, which is also the basic parameter determining the broad band shape of the quasar continuum emission. The hypothesis seems natural because the spectral shape determines both broad band spectral indices as well as emission line ratios, and it has already been suggested by \cite{b07}. We expect an increase in the maximum of the disk temperature as R$_{\mathrm{FeII}}$ increases. 
According to Figure 1 from \cite{sh14}, an increase in R$_{\mathrm{FeII}}$ implies an increase in the Eddington ratio or a decrease in the black hole mass. We expect that this maximum temperature depends not only on the Eddington ratio \citep{c06} but on the ratio of the Eddington ratio to the black hole mass (or, equivalently, on the ratio of the accretion rate to the square of the black hole mass). \section{Theory} Most of the quasar radiation comes from the accretion disk and forms the Big Blue Bump (BBB) in the optical-UV (\citealt{c87, rich06}), and this thermal emission is accompanied by an X-ray emission coming from a hot, optically thin, mostly compact plasma, frequently referred to as a corona \cite{c87,haa91,Fabian2015}. The ionizing continuum emission thus consists of two physically different spectral components. We parameterize this emission in the following way. For convenience, the BBB component is parameterized by the maximum temperature of an accretion disk. In the standard accretion disk model this temperature is related to the black hole mass and accretion rate \begin{equation} \text{T}_{\text{BBB}} = \left[\frac{3\text{GM}\dot{\text{M}}}{8\pi \sigma \text{r}^3}\left(1 - \sqrt{\frac{\text{R}_{\text{in}}}{\text{r}}}\right)\right]^{0.25} = 1.732\times 10^{19} \left(\frac{\dot{\text{M}}}{\text{M}^{2}}\right)^{0.25}\label{eq:01} \end{equation} where $\text{T}_{\text{BBB}}$ - maximum temperature corresponding to the Big Blue Bump; G - gravitational constant; M - black hole mass; $\dot{\text{M}}$ - black hole accretion rate; r - radial distance from the centre; $\mathrm{R_{in}}$ - radius corresponding to the innermost stable circular orbit. $\text{M}$ and $\dot{\text{M}}$ are in cgs units. A similar formalism has been used by \cite{b07}, although the coefficient differs by a factor of 2.6 from Equation 1. This maximum is achieved not at the innermost stable orbit around a non-rotating black hole (3R$_{\mathrm{Schw}}$) but at 4.08$\bar{3}$ R$_{\mathrm{Schw}}$. 
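A minimal numeric sketch of the maximum-temperature formula in Equation 1 (cgs units; the example mass and accretion rate are illustrative values, not results from the paper):

```python
M_SUN_G = 1.989e33  # solar mass [g]

def t_bbb(mass_g, mdot_g_per_s):
    """Maximum disk temperature of Eq. (1), cgs units:
    T_BBB = 1.732e19 * (Mdot / M**2)**0.25  [K]."""
    return 1.732e19 * (mdot_g_per_s / mass_g**2) ** 0.25

m = 1e8 * M_SUN_G   # illustrative 1e8 M_sun black hole
mdot = 1.3e25       # illustrative accretion rate [g/s] (~0.2 M_sun/yr)
print(t_bbb(m, mdot))  # maximum disk temperature in K (here a few times 1e4)
```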
The SED component peaks at the frequency \begin{equation} \nu_{\text{max}} \sim \left[\frac{\frac{\text{L}}{{\text{L}_{\text{Edd}}}}}{\text{M}}\right]^{\text{0.25}},\label{eq:02} \end{equation} where $\nu_{\text{max}}$ - frequency corresponding to $\text{T}_{\text{BBB}}$; L - accretion luminosity $\left (=\eta \dot{\text{M}} \text{c}^2 \right )$; $L_{\text{Edd}}$ - Eddington limit $\left (= \mathrm{\frac{4\pi GMm_{p}c}{\sigma_{T}}}, \text{where\;} \mathrm{m_{p}} - \text{mass\; of\; a\; proton}, \sigma_{\text{T}} - \text{Thomson \;cross\; section}\right )$. The exact value of the proportionality coefficient has to be calculated numerically, and for a standard Shakura-Sunyaev disk $\mathrm{h \nu_{max}/k T_{BBB}} = 2.092$. We expect that the thin-disk formalism applies to all the Type 1 AGN radiating above 0.01$L_{\text{Edd}}$ and below 0.3$L_{\text{Edd}}$. Instead of a full numerical model of an accretion disk spectrum, we simply use a power law with a fixed slope, $\alpha_{uv}$, and the value of $\text{T}_{\text{BBB}}$ to determine an exponential cut-off. The X-ray coronal component shape is defined by the slope ($\mathrm{\alpha_{x}}$) and has an X-ray cut-off. The relative contribution is determined by fixing the broad band spectral index $\mathrm{\alpha_{ox}}$, and finally the absolute normalization of the incident spectrum is set assuming the source bolometric luminosity. We fix most of the parameters, and $\text{T}_{\text{BBB}}$ is the basic parameter of our model. Some of this radiation is reprocessed in the BLR, which produces the emission lines. In order to calculate the emissivity, we need to assume the mean hydrogen density ($\mathrm{n_H}$) of the cloud, and a limiting column density (N$_\mathrm{H}$) to define the outer edge of the cloud. The ionization state of the clouds also depends on the distance of the BLR from the nucleus. 
We fix it using the observational relation by \cite{b13} \begin{equation} \left(\frac{\text{R}_{\text{BLR}}}{1\;\text{lt-day}}\right) = 10^{\left[1.555 + 0.542\; \text{log}\left(\frac{\lambda \text{L}_{\lambda}}{10^{44} \;\text{erg s}^{-1}}\right)\right]}\label{eq:03} \end{equation} The values of the constants in Equation 3 are taken from the Clean $\mathrm{H \beta\; R_{BLR} - L}$ model of \citet{b13}, where $\lambda$ = 5100 \AA. \section{Results and Discussions} As a first test we check the dependence of R$_{\mathrm{FeII}}$ on the accretion disk maximum temperature, T$_\mathrm{{BBB}}$, at constant values of L$_\mathrm{bol}$, $\mathrm{\alpha_{uv}}$, $\mathrm{\alpha_{ox}}$, $\mathrm{n_H}$ and $\mathrm{N_H}$. We fix the bolometric luminosity of the AGN, $\mathrm{L_{bol}}$ = $\mathrm{10^{45} \;erg\; s^{-1}}$, with accretion efficiency $\epsilon$ = 1/12, since we consider a non-rotating black hole in the Newtonian approximation (see Eq.~1). This determines the accretion rate, $\dot M$. The BBB's exponential cutoff value is determined by the maximum temperature of the disk. Our branch of solutions covers the disk temperature range between $1.06\times 10^4$\;K and $1.53\times 10^5$\;K. The corresponding black hole mass range obtained from Equation (1) is [$2.35\times 10^7\; \mathrm{M_{\odot}}$, $4.90\times 10^9\; \mathrm{M_{\odot}}$], which implies the Eddington ratio ($\mathrm{L/L_{Edd}}$) range [0.002, 0.33] calculated from the mentioned range of maximum disk temperatures. A large disk temperature corresponds to a low black hole mass, since we fix the bolometric luminosity. Finally, we use a two-power-law SED with optical-UV slope $\mathrm{\alpha_{uv}}$ = -0.36 and X-ray slope $\mathrm{\alpha_{x}}$ = -0.91 \citep{roz14}. The exponential cutoff for the X-ray component is fixed at 100 keV (\citealt{fra02} and references therein). 
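Equation 3 can be evaluated directly; a minimal sketch (the input luminosity is an illustrative value):

```python
import math

def r_blr_lt_days(lambda_L_5100):
    """Eq. (3): R_BLR in light-days from lambda*L_lambda at 5100 A [erg/s],
    using the 'Clean' H-beta fit constants (1.555, 0.542)."""
    return 10.0 ** (1.555 + 0.542 * math.log10(lambda_L_5100 / 1e44))

print(r_blr_lt_days(1e44))  # -> roughly 36 light-days
```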
By setting a value for the spectral index, $\mathrm{\alpha_{ox}}$ = -1.6, we specify the optical-UV and X-ray luminosities. An example SED is shown in the upper panel of Figure~\ref{fig:SED}. \par We now use this one-dimensional family of SEDs to calculate the line emission. We have dropped the X-ray power-law component in the subsequent analyses; we plan to re-introduce it once we start to see the expected trend in the R$_{\mathrm{FeII}}$ - $\mathrm{T_{BBB}}$ relation. As a start, we use the values of the parameters from \cite{bv08}, i.e. $\mathrm{log[n_H/(cm^{-3})] = 11}$, $\mathrm{log[N_H/(cm^{-2})] = 24}$, without including microturbulence (the motion that occurs within a cloud's line-forming region, to whose variation the line formation and the emission spectrum are sensitive). The distance of the cloud from the source depends on the adopted disk temperature. From the incident continuum, we estimate $\mathrm{L_{5100 \AA}}$, which in turn is used to calculate the inner radius of the BLR cloud using Equation 3. \par Knowing the irradiation, we produce the intensities of the broad Fe${\mathrm{II}}$ emission lines from the corresponding levels of transitions present in CLOUDY 13.04 \citep{f13}. We calculate the Fe${\mathrm{II}}$ strength (R$_{\mathrm{FeII}}$ = EW$_{\mathrm{Fe{II}}}$ / EW$_{\mathrm{H\beta}}$), which is the ratio of the Fe${\mathrm{II}}$ EW within 4434-4684 \AA\ to the broad H$\beta$ EW. This prescription is taken from \cite{sh14}. \par The results are shown in the lower panel of Figure~\ref{fig:SED}. The rise of the disk temperature initially leads to only a weak change of the cloud distance from the source, since the SED maximum is close to 5100 \AA~ for such massive black holes, and later the distance decreases. The Fe${\mathrm{II}}$ intensity changes monotonically with $\mathrm{T_{BBB}}$, but it is a decreasing, not an increasing, trend. 
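The R$_{\mathrm{FeII}}$ prescription amounts to summing the Fe${\mathrm{II}}$ equivalent widths inside the 4434-4684~\AA\ window and dividing by the broad H$\beta$ equivalent width; a minimal sketch (the line list is hypothetical, standing in for CLOUDY output):

```python
def r_feii(feii_lines, ew_hbeta):
    """R_FeII = EW(FeII, 4434-4684 A) / EW(broad H-beta).

    feii_lines: iterable of (wavelength_A, equivalent_width_A) pairs.
    """
    ew_feii = sum(ew for wl, ew in feii_lines if 4434.0 <= wl <= 4684.0)
    return ew_feii / ew_hbeta

# Hypothetical line list; the last entry falls outside the window:
lines = [(4450.0, 10.0), (4520.0, 15.0), (4924.0, 30.0)]
print(r_feii(lines, ew_hbeta=50.0))  # -> 0.5
```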
This is not what we expected --- high temperatures should correspond to low-mass, high accretion rate sources (\citealt{ss73}), Narrow Line Seyfert 1 galaxies, which show a strong Fe${\mathrm{II}}$ component. This monotonic trend appears despite the non-monotonic change with the disk temperature of both H$\beta$ and Fe${\mathrm{II}}$ itself. \par We thus extend our study to a broader parameter range, allowing for $\mathrm{log(n_H)}$ in the range 10 - 12, and $\mathrm{log(N_H)}$ in the range 22.0 - 24.0. The range of values obtained for R$_{\mathrm{FeII}}$ went up from [0.005, 0.4] to [0.4, 1.95] with increasing $\mathrm{N_H}$. The change in the local density is also important. For a constant $\mathrm{log(N_H)} = 24$, changing $\mathrm{log(n_H)}$ from 10 to 12 shifts the maximum of R$_{\mathrm{FeII}}$ from 1.93 (for $\mathrm{log(n_H)}$ = 10) down to 0.095 (for $\mathrm{log(n_H)}$ = 12); thus, there is a declining trend in the maximum of R$_{\mathrm{FeII}}$ with an increase in $\mathrm{n_H}$ at constant $\mathrm{N_H}$. Going from lower to higher mean density, we see a definite change in the character of the $\mathrm{T_{BBB} - {R_{Fe{II}}}}$ dependence. In the lower $\mathrm{n_H}$ case, we see the turnover peak close to $\mathrm{log[T_{BBB}(K)] = 4.2}$, which could not be reproduced by the models generated using higher values of $\mathrm{n_H}$ and $\mathrm{N_H}$ owing to non-convergence of the CLOUDY code at lower values of T$_{\mathrm{BBB}}$. But on the higher end of T$_{\mathrm{BBB}}$ we still get the same declining behaviour of R$_{\mathrm{FeII}}$. The two extreme cases of changing both parameters are shown in Figure~\ref{fig:R_Tbb_2}. We thus find that the obtained values of R$_{\mathrm{FeII}}$ are heavily affected by the change in the maximum temperature of the BBB component. 
The range of R$_{\mathrm{FeII}}$ is well covered in comparison with the plots of \cite{sh14}: higher density solutions reproduce the large values, and lower values are obtained by lowering the local density and the column density. But, in general, there is a decay in the $\mathrm{Fe{II}}$ strength with the rise of the disk temperature, while it was expected to follow a rising curve. \par To understand the nature of this trend in our CLOUDY computations we plot the H$\beta$ and Fe${\mathrm{II}}$ emissivity profiles (Figures~\ref{fig:prof1} and~\ref{fig:prof2}), where we consider only the first five Fe${\mathrm{II}}$ transitions in the 4434-4684 \AA\;range. We compute these profiles by varying $\mathrm{n_H}$ ($\mathrm{log(n_H)}$ = [10,12]) and $\mathrm{N_H}$ ($\mathrm{log(N_H)}$ = [22,24]), and by testing the dependence on $\mathrm{T_{BBB}}$ for three different temperature cases. H$\beta$ nearly always dominates over the selected Fe${\mathrm{II}}$ emissions. But close to the outer surface of the cloud, i.e. as $\mathrm{log(N_H) \rightarrow 24}$, the H$\beta$ emission starts to drop while the Fe${\mathrm{II}}$ emission increases with increasing $\mathrm{N_H}$, and there is some overlap region (see Figures~\ref{fig:prof1} and~\ref{fig:prof2}). In Figure~\ref{fig:prof1}, we find that with increasing $\mathrm{n_H}$, the peak of the H$\beta$ formation shifts closer to the inner surface of the cloud, so the relative contribution of Fe${\mathrm{II}}$ rises. However, with increasing $\mathrm{T_{BBB}}$ the emissivity zones move deeper, and the relative role of H$\beta$ (see the extreme right panel of Figure~\ref{fig:prof2}) goes down. \par In general, the emissivity profile is much shallower for H$\beta$, while the Fe${\mathrm{II}}$ emission is more concentrated towards the back of the cloud. 
Thus, an increase in $\mathrm{N_H}$ brings the R$_{\mathrm{FeII}}$ ratio up, but increasing irradiation pushes the H$\beta$ and Fe$\text{II}$ emitting regions deeper into the cloud and R$_{\mathrm{FeII}}$ drops (see Figures~\ref{fig:prof1} and~\ref{fig:prof2}). Our sequence of solutions for fixed bolometric luminosity and rising accretion disk temperature creates an increasing irradiation, and apparently the change of the SED shape cannot reverse the trend. \par Therefore, the question is whether our hypothesis of the dominant role is incorrect or the set of computations is not satisfactory. To answer it we used two objects with well measured SEDs as well as R$_{\mathrm{FeII}}$: RE J1034+396 (\citealt{c16}) and an X-Shooter quasar composite (\citealt{sel16}). In order to determine the parameter $\text{T}_{\text{BBB}}$ for those sources we created a set of full-GR disk models following the Novikov-Thorne prescription, simulating an array of SED curves with an $\mathrm{L_{Edd}}$ parametrization that considers the simultaneous dependence on spin ($0 \le \mathrm{a} \le 0.998$) and accretion rate ($0.01 \le \mathrm{\dot{m}} \le 10$; where $\mathrm{\dot{m}} = \mathrm{\dot{M}/{\dot{M}_{Edd}}}$, $\mathrm{\dot{M}_{Edd}} = 1.678\times 10^{18}\mathrm{\frac{M}{M_{\odot}}}$). The value of the black hole mass has been taken from \cite{cap16}. The two objects represent the extreme tails of the possible trend, with the X-Shooter composite having the SED peak in the UV and RE J1034+396 peaking in the soft X-rays. The corresponding points are shown in Figure~\ref{fig:R_Tbb_2}. Observations show a rise in the value of R$_{\mathrm{FeII}}$ with increasing $\mathrm{T_{BBB}}$, which the simulations have been unable to reproduce so far. However, the rise in R$_{\mathrm{FeII}}$ is not very large, from 0.3 to 0.5, despite the huge disk temperature difference implied by the observed SEDs of the two objects. 
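The dimensionless accretion rate defined above, $\dot{\mathrm{m}} = \dot{\mathrm{M}}/\dot{\mathrm{M}}_{\mathrm{Edd}}$ with $\dot{\mathrm{M}}_{\mathrm{Edd}} = 1.678\times 10^{18}\,(\mathrm{M/M_{\odot}})$ in cgs units, can be sketched numerically (example values are illustrative):

```python
M_SUN_G = 1.989e33  # solar mass [g]

def mdot_edd_g_per_s(mass_g):
    """Eddington accretion rate normalization quoted in the text:
    Mdot_Edd = 1.678e18 * (M / M_sun)  [g/s]."""
    return 1.678e18 * (mass_g / M_SUN_G)

def mdot_dimensionless(mdot_g_per_s, mass_g):
    return mdot_g_per_s / mdot_edd_g_per_s(mass_g)

m = 1e8 * M_SUN_G  # illustrative 1e8 M_sun black hole
print(mdot_dimensionless(1.678e25, m))  # -> ~0.1
```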
\section{Future} The reason for starting the project from purely theoretical modeling of the line ratios is the fact that the determinations of the black hole mass, the accretion rate and the observational parameter R$_{\mathrm{FeII}}$ available in the literature are not accurate enough to be used to test our hypothesis about the nature of EV1 \citep{s17}. The subsequent tasks will be to check the R$_{\mathrm{FeII}}$ dependence on the other parameters (see \citealt{sul00, mao09, marz15}) which are used (L$_\mathrm{bol}$, $\mathrm{\alpha_{uv}}$, $\mathrm{\alpha_{ox}}$, $\mathrm{n_H}$, $\mathrm{N_H}$, $\mathrm{cos}(i)$, $\mathrm{a}$ and others). We intend to incorporate the microturbulence as suggested in \cite{bv08}. A better, more physical description of the SED may be needed, i.e. a model of a disk + corona with full GR, and a more complex geometry of the BLR using \cite{c11}, \cite{ch11}, and \cite{c15}. We intend to implement the constant density LOC (Locally Optimized Cloud) model and subsequently the constant pressure model to repeat the tests and check for discrepancies with respect to the current model. The next stage to consider is the possibility of shielding of some BLR regions by the puffed inner disk \cite[e.g.][]{Wang2014}, or to consider independent production regions of H$\beta$ and Fe${\mathrm{II}}$. Different studies have proposed that Fe${\mathrm{II}}$ is mainly produced in the BLR (\citealt{bv08,shi10}), while many others have suggested that these emissions are mostly produced in the accretion disk (\citealt{mar15} and references therein). Finally, we have to test our theory observationally for more sources with known SED peak position. To have an overview of EV1, it is necessary to study it in other frequency ranges, including the X-ray, radio, UV and IR spectral ranges. Considerable progress along these lines has been made by \cite{Sul2007}, \cite{Sul17} (UV range), \cite{dul99} (Figure 4), and \cite{mar15} (IR range). 
\section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. \section*{Author Contributions} SP has tested the basic model and carried out the photoionisation simulations based on the idea and formalism proposed by BC. CW has provided computational assistance and helped solve the $\mathrm{T_{BBB}}$ issue. \section*{Funding} Part of this work was supported by Polish grant Nr. 2015/17/B/ST9/03436/. \section*{Acknowledgments} The authors would like to acknowledge the anonymous referees for their comments and suggestions to bring the paper to its current state. SP would like to acknowledge the organising committee and the participants of the “Quasars at all Cosmic Epochs” conference held during 02nd Apr - 07th Apr 2017 in Padova, Italy, and for providing the opportunity to present a talk on his research after being adjudged with the Best Poster award. SP would also like to extend his gratitude to the Center for Theoretical Physics and the Nicolaus Copernicus Astronomical Center, Warsaw, the National Science Center (NSC) OPUS 9 grant for financing the project, and Dr. Gary Ferland and collaborators for the photoionisation code CLOUDY. SP would like to acknowledge the unending academic and personal support from Mr. Tek Prasad Adhikari. SP is also obliged to the Strong Gravity group at CAMK, Warsaw, for engaging discussions resulting in this work. \bibliographystyle{frontiersinSCNS_ENG_HUMS}
\section{Introduction} The Drell--Yan (DY) production of high-energy lepton pairs at the LHC and at its High-Luminosity successor (HL-LHC) is a powerful probe of new physics encapsulated in Effective Field Theory (EFT) operators~\cite{deBlas:2013qqa,Farina:2016rws,Dawson:2018dxp,Ricci:2020xre}. It is particularly sensitive to the operators that induce growing-with-energy effects in the partonic cross-sections through their interference with the Standard Model (SM) contribution. These are the flavor-universal operators constructed with one quark and one lepton current, listed in Table~\ref{tab:GenOp}.\footnote{Operators that are diagonal in the fermion mass basis, but not flavor-universal, can also interfere. However, they are generically too strongly constrained by flavor measurements to be relevant.} Thanks to the high energy that is available at the LHC, DY measurements can extend the current experimental sensitivity to these operators by more than one order of magnitude. This generically holds also for operator combinations that are already accurately probed by the ElectroWeak Precision Tests (EWPT) performed at LEP and other low-energy experiments~\cite{Farina:2016rws}. The relevance of high-energy DY measurements actually extends beyond their intrinsic sensitivity to new physics. The DY processes are the simplest concrete incarnation of the so-called ``high-$p_T$'' probes of new physics, aimed at capturing growing-with-energy EFT effects in the tail of kinematical distributions. Other candidate processes include diboson and boson-plus-Higgs~\cite{Butter:2016cvz,Zhang:2016zsp,Liu:2018pkg,Green:2016trm,Biekoetter:2014jwa,Falkowski:2015jaa,Baglio:2017bfe,Franceschini:2017xkh,Panico:2017frx,Grojean:2018dqj,Banerjee:2018bio}, diquark and top production~\cite{Alioli:2017nzr,Farina:2018lqo} and others~\cite{Greljo:2017vvb,Henning:2018kys}. 
Robust phenomenological studies of DY, possibly accompanied by preliminary LHC experimental results and HL-LHC projections, are instrumental in revealing issues and outlining solutions towards the development of the high-$p_T$ measurement program and its extension to the other processes listed above. For instance, the DY process is the simplest playground for the assessment of the impact of Parton Distribution Function (PDF) uncertainties and of the potential advantages for new physics sensitivity of reducing them with a simultaneous fit to the EFT and to the PDF~\cite{Carrazza:2019sec}. The DY final state is so simple that it can be characterized completely with fully-differential cross-section measurements. Fully (i.e.~triply) differential measurements of the neutral DY process have already been performed by ATLAS with early run-$1$ data~\cite{Aaboud:2017ffb}.\footnote{Double differential measurements have been performed in Refs.~\cite{Aad:2016zzw,CMS:2014hga}.} Quantifying the sensitivity to the EFT operators of fully-differential measurements of the neutral ($\ell^+\ell^-$) and charged ($\ell\nu$) DY processes at the LHC, and its improvement relative to the sensitivity of the single-differential analysis, is the purpose of the present paper. We employ the framework of Ref.~\cite{Ricci:2020xre} to obtain accurate DY predictions in the presence of the $7$ EFT operators of Table~\ref{tab:GenOp} by analytic event reweighting. This allows us to explore the $7$-dimensional EFT parameter space efficiently, based on a few Monte Carlo simulations performed under the SM hypothesis. The large dimensionality of the EFT parameter space, combined with the large number of analysis bins that are employed for the fully-differential analysis, would have made this project computationally too demanding had we employed a Monte Carlo implementation not based on reweighting. The statistical treatment of uncertainties is also borrowed from Ref.~\cite{Ricci:2020xre}. 
The paper is organized as follows. In Section~\ref{treedits} we develop a qualitative understanding of the fully-differential measurement potential to improve the sensitivity to new physics in several directions of the $7$-dimensional EFT parameter space. Sensitivity projections for HL-LHC are presented in Section~\ref{WYSubSec} focusing on the $2$-dimensional ``Universal'' subspace that corresponds to the W and Y parameters~\cite{Farina:2016rws,Ricci:2020xre}. The $7$-dimensional global analysis results are presented in Section~\ref{GCCSubSec} by employing a basis in the parameter space that is designed to outline the hierarchy in sensitivity among the various directions and to make the differential analysis advantages manifest. In Section~\ref{sec:Zprime} we quantify the fully-differential analysis potential concretely in a simple Beyond the SM (BSM) Z$^\mathbf{\prime}$ scenario. We report our conclusions in Section~\ref{sec:conc}. Finally, in the appendices we discuss the definition of the kinematical variables beyond the tree-level approximation (Appendix~\ref{app:Kin}), we give the explicit expressions for the tree-level amplitude (Appendix~\ref{app:Form}) and we report some selected results of our analysis for the LHC run 3 (Appendix~\ref{app:LHCProj}). \begin{table} \centering \begin{tabular}{l} current-current quark-lepton operators\\ \hline \hspace{1em}\rule{0pt}{1.5em}${\cal O}_{lq}^{(3)}=(\bar{l}_L\sigma_I \gamma^\mu {l}_L)(\bar{q}_L\sigma_I\gamma_\mu {q}_L)$\\ \hspace{1em}\rule{0pt}{1.5em}${\cal O}_{lq}^{(1)}=(\bar{l}_L\gamma^\mu {l}_L) (\bar{q}_L\gamma_\mu {q}_L)$\\ \hspace{1em}\rule{0pt}{1.5em}${\cal O}_{eu}=(\bar{e}_R\gamma^\mu {e}_R) (\bar{u}_R\gamma_\mu {u}_R)$\\ \hspace{1em}\rule{0pt}{1.5em}${\cal O}_{ed}=(\bar{e}_R\gamma^\mu {e}_R) (\bar{d}_R\gamma_\mu {d}_R)$\\ \hspace{1em}\rule{0pt}{1.5em}${\cal O}_{lu}=(\bar{l}_L \gamma^\mu {l}_L) (\bar{u}_R \gamma_\mu {u}_R)$\\ \hspace{1em}\rule{0pt}{1.5em}${\cal O}_{ld}\hspace{0.11cm}=(\bar{l}_L \gamma^\mu
{l}_L) (\bar{d}_R \gamma_\mu {d}_R)$\\ \hspace{1em}\rule{0pt}{1.5em}${\cal O}_{qe}=(\bar{q}_L \gamma^\mu {q}_L) (\bar{e}_R \gamma_\mu {e}_R)$ \end{tabular} \hspace{2em} \begin{tabular}{ll} \multicolumn{2}{c}{W and Y current-current operators}\\ \hline \hspace{1em}\rule{0pt}{1.5em}${\cal O}'_{2W}=J_L^{a,\mu} J^a_{L,\mu}$ & $J_L^{a,\mu} = \sum_f \bar f \gamma^\mu T^a f$\\ \hspace{1em}\rule{0pt}{1.5em}${\cal O}'_{2B} = J_Y^{\mu} J_{Y,\mu}$ & $J_Y^{\mu} = \sum_f \bar f \gamma^\mu Y_f f$\\ \hspace{1em}\rule{0pt}{3.0em}$G_{Lq}^{(3)} = \frac{1}{2} G'_{2W}$&\\ \hspace{1em}\rule{0pt}{1.5em}$G_{Lq}^{(1)} = -\frac{1}{6} G'_{2B}$ & $G_{eu} = -\frac{4}{3} G'_{2B}$\\ \hspace{1em}\rule{0pt}{1.5em}$G_{ed} = \frac{2}{3} G'_{2B}$ & $G_{lu} = - \frac{2}{3} G'_{2B}$\\ \hspace{1em}\rule[-.7em]{0pt}{2.2em}$G_{ld} = \frac{1}{3} G'_{2B}$ & $G_{qe} = - \frac{1}{3} G'_{2B}$ \end{tabular} \caption{Left table: Quark-lepton current-current operators in the Warsaw basis~\cite{Grzadkowski:2010es}. The operators are flavor universal and thus the generation indices have been suppressed. Right table: The operators related to the W and Y parameters, together with their rewriting in terms of Warsaw basis operators. \label{tab:GenOp}} \end{table} \section{Fully-differential Drell-Yan}\label{sec:diff_distr} We start our investigation of the fully-differential DY dilepton production and its sensitivity to new physics by developing a semi-analytic qualitative understanding based on the structure of the tree-level distributions. Quantitative estimates of the sensitivity are performed in Sections~\ref{WYSubSec} and~\ref{GCCSubSec}. \subsection{Tree-level distributions}\label{treedits} Consider the neutral process $q\bar q \to \ell^+ \ell^-$. 
The fully-differential cross-section is given by \begin{align}\label{eq:sigma_ll} \begin{aligned} \frac{d^3\sigma}{d m_{\ell\ell}^2\, dc_*\, dy}=\frac{\tau}{3\cdot 64\,\pi\, m_{\ell\ell}^4} \sum_{q} \Big\{\left[ (1+c_*)^2 \mathcal{L}_q(\tau,y) +(1-c_*)^2 \mathcal{L}_q(\tau,-y)\right]P_s^q (m_{\ell\ell})\qquad \\ + \left[(1-c_*)^2 \mathcal{L}_q(\tau,y)+(1+c_*)^2 \mathcal{L}_q(\tau,-y) \right]P_o^q(m_{\ell\ell}) \Big\}\,, \end{aligned} \end{align} where $m_{\ell\ell}=\sqrt{\hat{s}}$ is the dilepton invariant mass and $\tau=\hat{s}/S$ (with $\sqrt{S}$ the collider energy), while $y$ is the absolute value of the rapidity (relative to the beam axis) of the dilepton system. We define $c_*=\cos\theta_*$ as the cosine of the angle formed, in the rest frame of the dilepton pair, by the charge-minus lepton and the direction of motion of the dilepton rest frame relative to the lab frame. At tree-level, $\theta_*$ as defined above is the angle between the $\ell^-$ and the most energetic incoming parton.\footnote{It is essential \emph{not} to define $\theta_*$ with respect to a fixed beam-axis orientation. With that definition, the fully-differential cross-section in eq.~(\ref{eq:sigma_ll}) would depend only on the combination $(P_s^q +P_o^q)$ like the single-differential cross-section in eq.~(\ref{eq:sigma_s}), and all the advantages of the fully-differential analysis would be lost.\label{foot1}} The detailed definition of the kinematical variables beyond tree-level is reported in Appendix~\ref{app:Kin}. The sum in eq.~(\ref{eq:sigma_ll}) spans over the light quarks $q=\{u,d,c,s,b\}$, and, for each quark species $q$, ${\mathcal{L}}_q$ is the product of the corresponding $q$ and $\overline{q}$ parton distribution functions (PDFs), namely \begin{equation}\displaystyle\label{pdf} {\mathcal{L}}_q(\tau,y)=f_q(\sqrt{\tau}\, e^y;m_{\ell\ell}^2)\,f_{\overline{q}}(\sqrt{\tau} \,e^{-y};m_{\ell\ell}^2)\,. 
\end{equation} The coefficient functions $P_s^q$ ($P_o^q$) parametrize the contributions, including both SM and new physics, from the subprocesses where the chirality of the incoming quarks is the same as (opposite to) that of the outgoing leptons. Our target new physics operators are flavor-universal, like the SM contribution to the scattering amplitudes. Therefore the coefficient functions are the same for all the up-type and for all the down-type quarks, for a total of four independent functions $P_s^{u,d}$ and $P_o^{u,d}$. In the high energy regime $m_{\ell\ell}\gg m_Z$, and at the linear interference level in the new physics contribution, the coefficient functions read \begin{equation}\displaystyle\label{P} \begin{array}{l} P_s^{u,d} (m_{\ell\ell}) \simeq P_{{\rm{SM}},s}^{u,d} +m_{\ell\ell}^2 \left( \vec{V}_s^{u,d} \cdot \vec{G} \right) \equiv P_{{\rm{SM}},s}^{u,d} + m_{\ell\ell}^2 \,G_s^{u,d} \,,\\ \rule{0pt}{2.em}P_o^{u,d} (m_{\ell\ell}) \simeq P_{{\rm{SM}},o}^{u,d} + m_{\ell\ell}^2 \left( \vec{V}_o^{u,d} \cdot \vec{G} \right) \equiv P_{{\rm{SM}},o}^{u,d} + m_{\ell\ell}^2 \,G_o^{u,d} \,, \end{array} \end{equation} where $\vec G$ denotes the Wilson coefficients of the seven effective four-fermion operators defined in Table~\ref{tab:GenOp}. In these expressions both the SM terms $P_{{\rm{SM}},s}^{u,d}$ and $P_{{\rm{SM}},o}^{u,d}$, and the vectors $\vec{V}_{s,o}^{u,d}$, are kinematics-independent numerical coefficients, reported in Appendix~\ref{app:Form}. At the linear level, and up to tiny effects suppressed by $m_Z^2/m_{\ell\ell}^2$, the neutral DY cross-section depends on new physics only through the four linear combinations $\smash{{G}^{u,d}_{s,o}=\vec{V}_{s,o}^{u,d} \cdot \vec{G}}$ of the seven Wilson coefficients. With the fully-differential analysis we can probe each of these four directions in the new physics parameter space independently, at least in principle. Consider for comparison the single-differential cross-section $d \sigma/d m_{\ell\ell}^2$. 
By integrating eq.~(\ref{eq:sigma_ll}) over $c_*$ and $y$, we get \begin{equation}\displaystyle\label{eq:sigma_s} \frac{d\sigma}{d m_{\ell\ell}^2} = \frac{\tau}{72\, \pi\, m_{\ell\ell}^4} \sum_q \frac{d {\cal L}_{q}}{d \tau} \left[P_s^q(m_{\ell\ell}) + P_o^q(m_{\ell\ell})\right]\,, \end{equation} where the parton luminosities are defined as \begin{equation}\displaystyle \frac{d {\cal L}_{q}}{d \tau} = \int\limits_{-y_{\rm{max}}}^{y_{\rm{max}}} \!\!\! dy\, {\cal L}_q(\tau, y)\,,\;\;\;\;\;y_{\rm{max}} = - {\frac{1}{2} \log \tau}\,. \end{equation} We see that the single-differential cross-section only depends on the sum of the ``$s$'' and ``$o$'' coefficient functions. At linear level, using eq.~(\ref{P}), it is thus only sensitive to ${G}^{u}_{s}+{G}^{u}_{o}$ and ${G}^{d}_{s}+{G}^{d}_{o}$, i.e. to two combinations of the four directions in the EFT parameter space that the fully-differential analysis can probe. Actually it is not difficult to see that the single-differential analysis is not even sensitive to ${G}^{u}_{s}+{G}^{u}_{o}$ and ${G}^{d}_{s}+{G}^{d}_{o}$ independently, but only to the combination \begin{equation}\displaystyle\label{singdir} (G_o^u + G_s^u) + 0.5\,(G_o^d + G_s^d)\,. \end{equation} This is because the ratio between the up- and down-quark luminosities (which dominate over those of the other quark flavors) is nearly constant in $\tau$ in the most sensitive energy range $m_{\ell\ell}\sim 1 - 2$~TeV. The ratio is approximately equal to $2$ owing to the valence-quark content of the proton. The advantages of performing a fully-differential measurement can now be appreciated by analyzing the various regions in the $(c_*,y)$ kinematic space. In the kinematical regime with small center of mass rapidity ($y \simeq 0$), one has ${\cal L}_q(\tau, y) \simeq {\cal L}_q(\tau, -y)$, and the cross-section in eq.~(\ref{eq:sigma_ll}) becomes proportional to $(1+c_*^2) {\cal L}_q(\tau, 0)(P_s^q + P_o^q)$. 
Hence this region provides sensitivity to the same combination of Wilson coefficients that can be probed through the $d \sigma/d m_{\ell\ell}^2$ distribution. On the other hand, in the region with large $y$ we have ${\cal L}_q(\tau, y) \gg {\cal L}_q(\tau, -y)$ for the (dominant) up and down quarks, because the valence quarks are typically more energetic than the sea anti-quarks. Therefore the cross-section is proportional to $(1+c_*)^2 P_s^q + (1-c_*)^2 P_o^q$ and it is sensitive to both $P_s^q$ and $P_o^q$, for $c_* \simeq +1$ and $c_* \simeq -1$ respectively. Measuring the fully-differential distribution can also mitigate the degeneracy between the up and down quark contributions that is due, as previously discussed, to the similar shape of the parton luminosities. Indeed the dependence of ${\cal L}_q(\tau, y)$ on $y$ is significantly different for the two quark species. In particular, the up quark distribution is peaked at larger values of $y$ than that of the down quark.\footnote{A detailed discussion of this feature can be found in Ref.~\cite{Panico:2016ary} (see in particular Figure~3).} The discussion above shows that the fully-differential cross-section measurement has the potential to disentangle the four $G_{s,o}^{u,d}$ linear combinations of Wilson coefficients. This is a significant improvement relative to the single-differential measurement, which is sensitive to only one combination. The quantitative assessment of this improvement is postponed to Section~\ref{GCCSubSec}. It should be stressed that our findings are based on the dependence of the cross-section on the Wilson coefficients at the linear order. At the quadratic level, all Wilson coefficients enter the $P_{s,o}^q$ functions with comparable coefficients (see the explicit expressions for the amplitudes in Appendix~\ref{app:Form}). Therefore, the combinations of parameters that do not enter, or are suppressed in, the linear term can also be determined through their quadratic contributions.
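The two kinematic mechanisms discussed in this section can be summarized in a short numerical sketch (not part of the analysis code of this work; the luminosity values are placeholders, only their ratio $\simeq 2$ matters):

```python
def invariant_mass_direction(L_u, L_d, Gu, Gd):
    """Linear term of d(sigma)/d(m_ll^2): L_u*(G_s^u + G_o^u) + L_d*(G_s^d + G_o^d).
    With L_u/L_d ~ 2, only the single direction Gu + 0.5*Gd is probed."""
    return L_u * Gu + L_d * Gd

def large_y_angular_weight(c_star, P_s, P_o):
    """Large-rapidity limit: same- and opposite-chirality terms enter
    with (1 + c*)^2 and (1 - c*)^2 respectively."""
    return (1 + c_star)**2 * P_s + (1 - c_star)**2 * P_o

# c* -> +1 isolates P_s, while c* -> -1 isolates P_o:
assert large_y_angular_weight(+1.0, 1.0, 0.0) == 4.0
assert large_y_angular_weight(-1.0, 0.0, 1.0) == 4.0
# up/down luminosity ratio ~ 2 => only the combination Gu + 0.5*Gd enters:
assert invariant_mass_direction(2.0, 1.0, 0.3, -0.6) == 2.0 * (0.3 + 0.5 * (-0.6))
```

The first function reproduces the origin of the single direction of eq.~(\ref{singdir}); the second reproduces the angular separation of the same- and opposite-chirality coefficient functions at large rapidity.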
Such quadratic-level combinations are all expected to be tested less effectively than the ones contributing to the linear terms, but with similar precision among them. A similar analysis can be performed for the charged DY process $q\bar q' \to \ell \nu$. In this case, however, a fully-differential measurement has a milder impact. The reason for this is twofold. First, the charged process is only affected by one operator, namely ${\mathcal{O}}_{lq}^{(3)}$, so that no issue in disentangling various new physics contributions is present. Second, due to the presence of a neutrino, only two independent kinematic variables can be accessed, for instance the transverse momentum of the charged lepton $p_{T,\ell}$ and its rapidity $\eta_\ell$. The new-physics contributions depend on the center-of-mass energy $\sqrt{\hat s}$, which is closely correlated with $p_{T,\ell}$, but has a very mild correlation with $\eta_\ell$. The additional benefit of considering both kinematic variables rather than only $p_{T,\ell}$ is therefore expected to be small. However, it should be taken into account that more differential information in the charged channel might help reduce the impact of PDF uncertainties in the combination with the fully-differential neutral DY measurements. Indeed, some advantage of the doubly-differential measurement in charged DY will be observed in the analyses presented below. \subsection{Bounds on the Universal parameters W and Y}\label{WYSubSec} As a first quantitative analysis we focus on the specific set of dimension-6 operators related to the Universal parameters W and Y.
As in Refs.~\cite{Farina:2016rws,Ricci:2020xre}, we define the W and Y parameters \begin{equation}\displaystyle G'_{2W} = - \frac{g^2}{2 m_W^2} \textrm{W}\,, \qquad \quad G'_{2B} = - \frac{g'^2}{2 m_W^2} \textrm{Y}\,, \end{equation} in terms of the coefficients of the four-fermion operators \begin{equation}\displaystyle {\cal O}'_{2W} = J_L^{a,\mu} J^a_{L,\mu}\,,\qquad {\cal O}'_{2B} = J_Y^{\mu} J_{Y,\mu}\,, \end{equation} built from the $\textrm{SU}(2)_L$ and hypercharge $\textrm{U}(1)_Y$ currents \begin{equation}\displaystyle J_L^{a,\mu} = \sum_f \bar f \gamma^\mu T^a f\,, \qquad J_Y^{\mu} = \sum_f \bar f \gamma^\mu Y_f f\,. \end{equation} In the DY processes, ${\cal O}'_{2W}$ and ${\cal O}'_{2B}$ are equivalent to lepton-quark operators.\footnote{This only holds up to EW running effects. These effects are included in our prediction as in Ref.~\cite{Ricci:2020xre}, duly taking into account the contribution to the running of the quark-quark and lepton-lepton components of ${\cal O}'_{2W}$ and ${\cal O}'_{2B}$. The operator coefficients, and in turn W and Y, are defined at the renormalization scale of $10$~TeV.\label{runfoot}} The explicit correspondence is given in Table~\ref{tab:GenOp}. There are some crucial differences between the W and the Y parameter, which make the latter more difficult to test. While W can be probed in both the charged and the neutral DY channels, Y only affects the neutral DY process. Furthermore, if the neutral channel is analyzed by fitting only the invariant-mass distribution, the single combination of Wilson coefficients that is probed at the linear level, in eq.~(\ref{singdir}), turns out to be proportional to $\textrm{W} + 0.6\,\textrm{Y}$ (using Table~\ref{tab:VSVD}). Therefore the sensitivity to Y is almost a factor of two weaker than the one to W.
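The numerical map between $(\textrm{W},\textrm{Y})$ and the dimensionful four-fermion coefficients can be sketched as follows (the coupling values below are indicative tree-level inputs, introduced purely for illustration, and do not reproduce the precise scheme and scale used in the fit):

```python
# Indicative SM inputs (assumptions for illustration only), in TeV units:
g2  = 0.65**2     # SU(2)_L coupling squared
gp2 = 0.36**2     # U(1)_Y coupling squared
mW2 = 0.0804**2   # m_W^2 in TeV^2

def Gp_2W(W):
    """G'_2W = -g^2/(2 m_W^2) * W, in TeV^-2."""
    return -g2 / (2.0 * mW2) * W

def Gp_2B(Y):
    """G'_2B = -g'^2/(2 m_W^2) * Y, in TeV^-2."""
    return -gp2 / (2.0 * mW2) * Y

# With these inputs, a bound W ~ 3e-5 maps to |G'_2W| ~ 1e-3 TeV^-2,
# i.e. a contact-interaction scale of a few TeV.
```

Note the overall minus sign in the definitions: a positive W or Y corresponds to a negative four-fermion coefficient.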
\begin{figure} \centering \includegraphics[width=0.635\textwidth]{WY1TeV.pdf} \hfill \includegraphics[width=0.343\linewidth]{XS.pdf} \caption{Left panel: Absolute value of the logarithmic derivative of the tree-level differential cross-section with respect to the W and Y parameters in the $(c_*,y)$ plane. Right panel: Level contours of the tree-level SM differential cross-section (in arbitrary units). Both plots are obtained at fixed $m_{\ell\ell} = 1\;\textrm{TeV}$. \label{Fig:WYlogder}} \end{figure} As we discussed above, a fully-differential analysis in the neutral channel can help to disentangle different new physics contributions. This also happens for the W and Y parameters. To illustrate this point we show in the left panel of Figure~\ref{Fig:WYlogder}\footnote{We employed the package of Ref.~\cite{Clark:2016jgm} to obtain semi-analytic expressions for the PDFs.} the logarithmic derivatives of the tree-level differential cross-section with respect to W and Y, namely \begin{equation}\displaystyle \left.\frac{1}{d \sigma} \frac{\partial d \sigma}{\partial \textrm{W}} \right|_{\text{SM}}\,, \qquad\quad \left.\frac{1}{d \sigma} \frac{\partial d \sigma}{\partial \textrm{Y}} \right|_{\text{SM}}\,, \end{equation} evaluated at the SM point $\textrm{W}=\textrm{Y}=0$. For definiteness, the dilepton invariant mass has been set to $m_{\ell\ell} = 1\;\textrm{TeV}$ in the figure. The logarithmic derivative scales like $m_{\ell\ell}^2$ as a function of the mass. As expected, in most of the kinematic space, i.e.~for small rapidity and for $c_* \gtrsim 0$, the cross-section dependence on W is roughly twice as strong as the one on Y. In particular this happens in the regions with larger cross-section, as can be seen from the plot in the right panel of Figure~\ref{Fig:WYlogder}. The behavior, however, drastically changes in the corner with $c_* \lesssim -0.5$ and $y/y_{{\rm{max}}} \gtrsim 0.5$.
For these configurations the differential cross-section mostly depends on Y, while the sensitivity to W is small. This feature can be easily understood from the analysis we performed in the previous section. For large rapidity and $c_* \sim -1$ the differential cross-section is controlled by $P_o^q$, which gets contributions from subprocesses with opposite fermion chiralities. Since W corresponds to an operator with only left-handed fields, it can contribute only to the same-chirality subprocesses and not to $P_o^q$. Exploiting the fully-differential distribution for the fit is thus expected to improve the determination of Y. It must however be noticed that the differential cross-section in the $y \sim y_{\rm{max}}$ and $c_* \sim -1$ corner is somewhat suppressed, and is an order of magnitude smaller than in the $c_* > 0$ region. This means that a significant improvement in the Y determination can be obtained only when a large number of signal events is collected, so that the $y \sim y_{{\rm{max}}}$ and $c_* \sim -1$ region is sufficiently populated at high $m_{\ell\ell}$. To give an idea, at the HL-LHC, out of $\sim\!12000$ SM events with $m_{\ell\ell} > 1.1\;\textrm{TeV}$, only $210$ events are expected in the region with $y/y_{\rm{max}} > 0.4$ and $c_* < -0.6$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{2Dvs1DNew.pdf} \caption{$95{\%}$ CL allowed regions for W and Y at the $14\;\textrm{TeV}$ HL-LHC. The green and orange shaded regions correspond to the bounds from charged and neutral DY, respectively, whereas the combined bounds are given by the blue shaded regions.
The fully-differential analysis results are reported with solid contours, while the dashed contours represent the sensitivity of the single-differential measurements.}\label{Fig:WYbound} \end{figure} We show in Figure~\ref{Fig:WYbound} the comparison of the projected exclusion reach on the W and Y parameters obtained from a fit taking into account the fully-differential distribution or the single-differential (invariant mass or transverse momentum for neutral and charged DY, respectively) distributions. To obtain the bounds we considered the HL-LHC benchmark, with collider energy $14\;\textrm{TeV}$ and ${\cal L} = 3\;\textrm{ab}^{-1}$ integrated luminosity, and we assumed that the experimental measurements of the cross-section coincide with the SM predictions.\footnote{Results for the LHC run 3 benchmark are reported in Appendix~\ref{app:LHCProj}.} The fit of the charged DY process was obtained by considering a set of bins in the transverse momentum and rapidity of the charged lepton, whose boundaries are \begin{eqnarray} p_{T,\ell}: && \{150, 180, 225, 300, 400, 550, 750, 1000, 1300, 7000\}\;\textrm{GeV}\,,\\ \rule{0pt}{1.25em}\eta_\ell/\eta_{\rm{max}}: && \{0, 1/3, 2/3, 1\}\,, \end{eqnarray} where $\eta_{\rm{max}}$ is the minimum between the acceptance cut of $2.5$ and the maximal kinematically allowed rapidity $\arctanh[(1- 4 p_{T,\ell}^2/S)^{1/2}]$. For the neutral DY channel we instead used a binning in $m_{\ell\ell}$, $c_*$ and $y$, with boundaries \begin{eqnarray} m_{\ell\ell}: && \{300, 360, 450, 600, 800, 1100, 1500, 2000, 2600, 14000\}\;\textrm{GeV}\,,\\ \rule{0pt}{1.25em}c_*: && \{-1, -0.6, -0.2, 0.2, 0.6, 1\}\,,\\ \rule{0pt}{1.25em}y/y_{\rm{max}}: && \{0, 0.2, 0.4, 0.6, 1\}\,. \end{eqnarray} The cross-section predictions are obtained as in Ref.~\cite{Ricci:2020xre}, at NLO in QCD combined with parton showering (based on {\tt{POWHEG}}~\cite{Alioli:2008gx} and {\tt{PYTHIA 8}}~\cite{Sjostrand:2014zea}) and at NLL order in the EW expansion.
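For reference, the bin edges defined above can be collected in a short bookkeeping sketch (assuming $\sqrt{S}=14$~TeV for the kinematic rapidity limit):

```python
import math

# Bin edges quoted in the text (GeV for dimensionful variables):
pT_edges     = [150, 180, 225, 300, 400, 550, 750, 1000, 1300, 7000]
mll_edges    = [300, 360, 450, 600, 800, 1100, 1500, 2000, 2600, 14000]
cstar_edges  = [-1, -0.6, -0.2, 0.2, 0.6, 1]
y_frac_edges = [0, 0.2, 0.4, 0.6, 1]

def eta_max(pT, sqrt_S=14000.0):
    """Minimum of the acceptance cut (2.5) and the kinematic limit
    arctanh[(1 - 4 pT^2/S)^(1/2)] for lepton transverse momentum pT (GeV)."""
    x = 1.0 - 4.0 * pT**2 / sqrt_S**2
    return min(2.5, math.atanh(math.sqrt(x)) if x > 0.0 else 0.0)

# Neutral-channel bin count: 9 x 5 x 4 = 180. Together with the charged
# channel (9 pT bins x 3 eta bins x 2 lepton charges = 54) this gives the
# total of 234 bins quoted in the text.
n_neutral = (len(mll_edges) - 1) * (len(cstar_edges) - 1) * (len(y_frac_edges) - 1)
n_charged = (len(pT_edges) - 1) * 3 * 2
assert n_neutral + n_charged == 234
```

For the lowest $p_{T,\ell}$ bins the acceptance cut dominates, so $\eta_{\rm max}=2.5$; at the kinematic endpoint $p_{T,\ell}=\sqrt{S}/2$ the allowed rapidity shrinks to zero.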
The effects due to the W and Y parameters (and the EW logarithms) are included through reweighting, which enables fast and accurate Monte Carlo predictions in the relatively large number of bins (a total of $234$) that we consider in the fully-differential analysis. The ${\cal O}'_{2W}$, ${\cal O}'_{2B}$ operators have been defined at the renormalization scale of $10$~TeV. The projected bounds take into account, following again Ref.~\cite{Ricci:2020xre}, the PDF uncertainties estimated through the Hessian set~\textsc{PDF4LHC15\_nlo\_30\_pdfas}~\cite{Buckley:2014ana, PDF1,PDF2,PDF3,PDF4}, and a $2\%$ luminosity uncertainty. We considered an $80\%$ reconstruction efficiency for each muon and $65\%$ for each electron. The results do not take into account any additional experimental systematic uncertainty. This is because we expect that the size and the correlation of these uncertainties will strongly depend on the binning and will be quite different in the fully-differential measurement and in the single-differential one. Since a quantitative estimate of the uncertainties is not available, we set them to zero for a fair comparison of the two analysis procedures. A qualitative assessment of their potential impact is presented in Section~\ref{sec:uncertainty}. From Figure~\ref{Fig:WYbound} we see that the fully-differential analysis gives a strong boost to the sensitivity of the neutral DY channel, improving in particular the sensitivity along the $\textrm{W} + 0.6\,\textrm{Y}=0$ line, which is weakly probed by the single-differential analysis, as previously discussed. The charged DY sensitivity also improves. However, it should be taken into account that the single-differential analysis is performed (like in Refs.~\cite{Farina:2016rws,Ricci:2020xre}) on the sum of the charge plus and charge minus cross-sections in each $p_{T,\ell}$ bin.
The two charges are instead separately measured and combined in the fully-differential analysis, which helps mitigate the impact of PDF uncertainties. The improvement we observe in the charged channel is partly due to this effect. Interestingly, the improvement of the fully-differential analysis is quite significant for the combination of the neutral and charged DY channels. The $95\%$~CL single-parameter bounds from the combined fit are given by \begin{equation}\displaystyle\label{eq:boundsWY} \begin{array}{rll} \textrm{W}: &\quad [-2.9, 3.0] \times 10^{-5} \quad & ([-4.9, 5.2] \times 10^{-5})\,,\\ \rule{0pt}{1.5em}\textrm{Y}: & \quad [-6.8, 7.1] \times 10^{-5} \quad & ([-8.3, 8.8] \times 10^{-5})\,, \end{array} \end{equation} where the numbers in brackets correspond to the single-differential fit. The constraint on W becomes nearly a factor $2$ more stringent, whereas the determination of Y improves more mildly. It is worth noticing that part of the improvement in the W determination does not come from the naive sum of the log likelihoods for the neutral and charged processes, but is instead a consequence of the reduced impact of the PDF uncertainties. The PDF errors, in fact, are strongly correlated in the two channels, so that including both of them simultaneously in the fit allows one to distinguish their effects from the contributions due to new physics. \subsection{General quark-lepton interactions}\label{GCCSubSec} We now consider the impact of the fully-differential analysis on the determination of the complete set of lepton-quark current-current operators listed in Table~\ref{tab:GenOp}. In order to make the comparison with the single-differential analysis more straightforward, it is convenient to choose a basis in the space of Wilson coefficients which is aligned with the directions that appear in the invariant-mass distribution for the neutral dilepton channel.
As we discussed in Section~\ref{treedits}, the $d \sigma/d m_{\ell\ell}^2$ distribution depends at the linear level only on two particular combinations of parameters, $G_s^u + G_o^u$ and $G_s^d + G_o^d$, with $G_{s,o}^{u,d}$ defined as in eq.~(\ref{P}). Moreover, the ratio of up and down parton luminosities singles out one combination of Wilson coefficients~(\ref{singdir}) that is most effectively probed in the invariant-mass distribution. We thus include in our basis the combination \begin{equation}\label{eq:GE+def}\displaystyle G_E^+ = (G_o^u + G_s^u) + 0.5\,(G_o^d + G_s^d)\,, \end{equation} and the orthogonal one, which we denote by \begin{equation}\label{eq:GE-def}\displaystyle G_E^- = 0.5\,(G_o^u + G_s^u) - (G_o^d + G_s^d)\,. \end{equation} We further consider the two remaining combinations of parameters, which contribute to the fully-differential distribution at the linear level but not to the invariant-mass distribution \begin{equation}\label{eq:GOdef}\displaystyle G_O^+ = (G_o^u - G_s^u) + 0.5\,(G_o^d - G_s^d)\,,\qquad\qquad G_O^- = 0.5\,(G_o^u - G_s^u) - (G_o^d - G_s^d)\,. \end{equation} Finally, we complete the seven-dimensional basis with the $\widetilde G_{lq}^{(3)}=G_{lq}^{(3)}$ coefficient and two additional combinations, $G_s^{\bot}$ and $G_o^{\bot}$, that are orthogonal to all the others. The explicit expressions are reported in Appendix~\ref{app:Form}. The $G_s^{\bot}$ and $G_o^{\bot}$ coefficients contribute (at the quadratic level) to the same-chirality and opposite-chirality subprocesses, respectively. It is important to stress that the change of basis we are performing is not orthogonal. In particular, this means that the $G_{lq}^{(3)}$ coefficient of the Warsaw basis not only gives rise to $\smash{\widetilde G_{lq}^{(3)}}$ (with the same value), but also gives a correlated contribution to the $\smash{G_{E,O}^\pm}$ coefficients.
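The change of basis just described amounts to the following linear map, sketched here for eqs.~(\ref{eq:GE+def})--(\ref{eq:GOdef}) only (the orthogonal $G_{s,o}^\bot$ directions are omitted):

```python
def new_basis(Gs_u, Go_u, Gs_d, Go_d):
    """Map the four combinations G_{s,o}^{u,d} to (G_E^+, G_E^-, G_O^+, G_O^-)."""
    GE_p = (Go_u + Gs_u) + 0.5 * (Go_d + Gs_d)
    GE_m = 0.5 * (Go_u + Gs_u) - (Go_d + Gs_d)
    GO_p = (Go_u - Gs_u) + 0.5 * (Go_d - Gs_d)
    GO_m = 0.5 * (Go_u - Gs_u) - (Go_d - Gs_d)
    return GE_p, GE_m, GO_p, GO_m

# If the same- and opposite-chirality combinations coincide (G_s = G_o for
# both quark species), the "O" coordinates vanish and only the directions
# visible in the invariant-mass distribution are populated:
assert new_basis(1.0, 1.0, -2.0, -2.0)[2:] == (0.0, 0.0)
```

Since the map is built from sums and differences weighted by the luminosity ratio, it is linear but, as stressed above, not orthogonal.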
Fully-differential measurements improve the determination of the parameters defined above, relative to the single-differential analyses, to varying extents. The $\widetilde G_{lq}^{(3)}$ coefficient is mainly tested in the charged DY process, where the impact of the differential analysis is less pronounced. Its (single-operator) determination can thus improve only mildly from the combination with the neutral channel and the associated possible reduction of the impact of the PDF uncertainties. The $G_s^{\bot}$ and $G_o^{\bot}$ coefficients contribute only at quadratic order or through very small subleading terms in the $m_Z^2/m_{\ell\ell}^2$ expansion, both to the single- and to the fully-differential cross-section. Therefore they will be tested with lower accuracy and will not improve significantly with the fully-differential analysis. We thus focus on the remaining four coefficients $G_{E,O}^{\pm}$. Among those, $G_{E}^{+}$ will not improve much, since it is already effectively probed in the invariant-mass distribution. A significant improvement is instead possible for the other ones. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{FlatDirNew.pdf} \caption{Logarithmic derivative of the tree-level differential cross-section with respect to the $G_{E,O}^{\pm}$ coefficients in the $(c_*,y)$ plane. The plots are obtained setting $ m_{\ell\ell}= 1$~TeV.\label{fig:log_der_4ferm}} \end{figure} For a first assessment of the prospects for improvement, we show in Figure~\ref{fig:log_der_4ferm} the logarithmic derivative of the tree-level fully-differential cross-section for the neutral DY channel with respect to the four $G_{E,O}^{\pm}$ parameters. In the low-rapidity region ($y/y_{\rm{max}} \lesssim 0.5$) the cross-section is dominantly sensitive to $G_E^+$.
This is not surprising since, as we saw in Section~\ref{treedits}, the distribution in the low-rapidity region depends on the same combination of coefficient functions that enters the invariant-mass distribution. The high-rapidity configurations, on the contrary, show very different sensitivity patterns to the $G_{E,O}^{\pm}$ coefficients. One can see, in particular, that the $c_* > 0$ region, which has a high SM cross-section (see the right panel of Figure~\ref{Fig:WYlogder}), shows a relatively large logarithmic derivative with respect to $G_O^+$. The fully-differential analysis is therefore expected to improve significantly the determination of this coefficient and to disentangle it from $G_E^+$, whose logarithmic derivative has a different dependence on $y$. On the other hand, the $G_O^-$ coefficient affects the distribution mainly in the $c_* \sim -1$, $y \sim y_{\rm{max}}$ corner, in which the cross-section is rather small. For this reason we expect its determination to remain relatively poor. Finally, the $G_E^-$ coefficient is in an intermediate situation. The related logarithmic derivative is significantly smaller than for $G_E^+$ and $G_O^+$, but nevertheless shows a distinctive pattern in the region with $y/y_{\rm{max}} \lesssim 0.5$, where the cross-section is sizable. We thus expect that the fully-differential analysis could provide some improvement on its determination. \begin{figure}[t] \centering \includegraphics[width=.85\linewidth]{Figures/6PlanesNewB2.pdf} \caption{Allowed regions at $95\%$ CL on the six coordinate planes along the four $G_{E,O}^{\pm}$ coefficients. Solid contours correspond to the fully-differential analysis, while the dashed ones are obtained with the single-differential measurements.
The blue shaded regions include the full dependence on the Wilson coefficients in the cross-section, while only the linear terms are retained in the orange shaded regions.}\label{fig:6Planes} \end{figure} To estimate the sensitivity to $G_{E,O}^{\pm}$ we performed the same analysis presented in Section~\ref{WYSubSec} for the W and Y parameters. The two-dimensional $95\%$~CL contours for each pair of coefficients, setting the others to zero, are shown in Figure~\ref{fig:6Planes}.\footnote{The plots are obtained by combining the neutral and charged DY channels. Notice that the charged channel does not depend on the $G_E^{\pm}$ and $G_O^{\pm}$ coefficients, so its contribution is only indirect, through a reduction of the impact of the PDF uncertainties. This effect is however small.} Different sets of bounds are compared in the plots. The solid contours correspond to the $95\%$ CL constraints from the fully-differential analysis, whereas the dashed ones are obtained exploiting the invariant-mass distribution in the neutral channel and the transverse-momentum distribution (summed over the two charges as discussed in Section~\ref{WYSubSec}) in the charged channel. The blue shaded regions are obtained by considering the full dependence on the Wilson coefficients in the cross-section, while the orange shaded regions are found by taking into account only the linear terms. The axes of the ellipses for the fully-differential analysis at the linear level are aligned with the reference axes of each plane, owing to our judicious choice of the basis. We also report, in Table~\ref{tab:SBCC}, the expected sensitivity to all the seven parameters $G_{E,O}^{\pm}$, $G_{s,o}^{\bot}$ and $\smash{\widetilde G_{lq}^{(3)}}$. We list both the single-parameter bounds obtained by setting all the others to zero and the bounds profiled over the other parameters. In the case of the fully-differential analysis we also report the results of the linearized fit. 
We see from the figure and the table that the single-operator determination of $G_E^+$ is only marginally modified, with a modest improvement of order $10\%$. This was expected, as previously discussed, since the determination of $G_E^+$ from the invariant-mass distribution is already quite good. Since $G_E^+$ contributes at the linear level and is well probed, no significant difference is present between the full fit and the linearized one in this direction. A strong improvement is instead found in the sensitivity to $G_O^+$, as anticipated. The bound from the full fit (i.e.~including both the linear and quadratic dependence on the Wilson coefficient in the cross-section) improves roughly by a factor of $3$. The improvement in the linearized fit is even more dramatic, since $G_O^+$ does not contribute to the invariant-mass distribution at the linear level, up to small effects, as previously discussed. Correspondingly, an approximate flat direction is present for $G_O^+$ (see for instance the middle plot on the top row of Figure~\ref{fig:6Planes}) in the single-differential linearized contour. The fully-differential analysis is instead strongly sensitive to $G_O^+$ at the linear level, and the linearized and the full fit agree very well. The impact of the fully-differential analysis on the $G_E^-$ and $G_O^-$ parameters follows a slightly different pattern. In the full fit a mild improvement of the bounds, of order $15\%$, is found. The results, however, change drastically at the linearized level. In this case the fully-differential analysis is able to significantly improve the constraints on both parameters (see for instance the middle plot on the second row of Figure~\ref{fig:6Planes}).
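The relation between the single-parameter and profiled bounds quoted in Table~\ref{tab:SBCC} can be illustrated with a toy two-parameter Gaussian likelihood (the inputs below are arbitrary illustration values, not the actual fit):

```python
import math

def bounds_95(sigma1, rho):
    """95% CL half-widths on theta_1 (Delta chi^2 = 3.84) for a Gaussian
    likelihood with standard deviation sigma1 and correlation rho."""
    single   = 1.96 * sigma1 * math.sqrt(1.0 - rho**2)  # theta_2 fixed to zero
    profiled = 1.96 * sigma1                            # theta_2 minimized over
    return single, profiled

# A nonzero correlation always weakens the profiled bound relative to the
# single-parameter one, as seen in the table:
s, p = bounds_95(1.0, 0.8)
assert s < p
```

In this toy the ratio of the two bounds is $\sqrt{1-\rho^2}$, so large differences between the single-parameter and profiled columns directly signal strong correlations among the coefficients.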
\begin{table}[t] \centering {\small \begin{tabular}{@{\hspace{.2em}}c@{\hspace{.2em}}|c@{\hspace{.65em}}c@{\hspace{.65em}}c|c@{\hspace{.65em}}c@{\hspace{.65em}}c@{\hspace{.2em}}} {\bf{95\%CL}}&\multicolumn{3}{c|}{single parameter}&\multicolumn{3}{c}{profiled}\\ \rule[-.6em]{0pt}{1.85em}$[10^{-3}\,\textrm{TeV}^{-2}]$ & fully diff. & fully diff. lin. & {\rm{single diff.}} & fully diff. & fully diff. lin. & single diff.\\ \hline \rule{0pt}{1.35em}$\widetilde{G}_{lq}^{(3)}$ & $[-0.36 ,\, 0.35]$ & $[-0.36,\, 0.36]$ & $[-0.49 ,\, 0.50]$ & $[-0.53 ,\, 0.48]$ & $[-0.57,\, 0.57]$ & $[-0.97 ,\, 0.77]$\\ \rule{0pt}{1.35em}${G}_{E}^{+}$ & $[ -0.20,\, 0.19]$ & $[-0.20,\, 0.20]$ & $[-0.27 ,\, 0.25]$ & $[-0.55 ,\, 0.32]$ & $[-0.60,\, 0.60]$ & $[-1.19 ,\, 0.45]$\\ \rule{0pt}{1.35em}${G}_{E}^{-}$ & $[-0.61 ,\, 0.65]$ & $[-1.02,\, 1.02]$ & $[-0.77 ,\, 0.90]$ & $[-1.52 ,\, 1.30]$ & $[-1.62,\, 1.62]$ & $[-3.31 ,\, 1.94]$\\ \rule{0pt}{1.35em}${G}_{O}^{+}$ & $[-0.38,\, 0.35]$ & $[-0.37,\, 0.37]$ & $[-1.01 ,\, 1.19]$ & $[-0.44 ,\, 0.60]$ & $[-0.82,\, 0.82]$ & $[-1.32,\, 2.17]$\\ \rule{0pt}{1.35em}${G}_{O}^{-}$ & $[-0.77,\, 0.65]$ & $[-2.74,\, 2.74]$ & $[-0.95 ,\, 0.79]$ & $[-1.58,\, 1.80]$ & $[-5.85,\, 5.85]$ & $[-2.06 ,\, 3.20]$\\ \rule{0pt}{1.35em}${G}_s^{\bot}$ & $[-2.14,\, 1.44]$ & $[-3.74,\, 3.74]$ & $[-2.24 ,\, 1.59]$ & $[-2.46 ,\, 2.19]$ & $[-10.5,\, 10.5]$ & $[-2.91,\, 2.41]$\\ \rule[-.75em]{0pt}{2.1em}${G}_o^{\bot}$ & $[-0.69,\, 0.67]$ & $[-18.6,\, 18.6]$ & $[-0.85 ,\, 0.84]$ & $[-0.98,\, 0.89]$ & $[-25.5,\, 25.5]$ & $[-1.26,\, 1.31]$ \end{tabular} } \caption{$95{\%}$ CL projected bounds (in $10^{-3}\;\textrm{TeV}^{-2}$ units) for the seven coefficients $\widetilde{G}_{lq}^{(3)}$, $G_{E,O}^\pm$ and $G_{s,o}^\bot$. The first three bounds correspond to single-operator fits, in which all other parameters are set to zero, while the last three are profiled over the other parameters. 
For each set of bounds, the three columns correspond to the complete fully-differential fit, the linearized one and the single-differential measurement fit.} \label{tab:SBCC} \end{table} The profiled bounds reported in Table~\ref{tab:SBCC} are more difficult to interpret. They significantly differ from the single-operator ones, signaling the presence of non-negligible correlations among the various parameters. We notice that for many parameters the fully-differential analysis improves the profiled bound more than the single-operator one. This pattern is particularly visible for the $\widetilde G_{lq}^{(3)}$, $G_E^+$ and $G_E^-$ parameters, and, to a lesser degree, for $G_O^+$ and $G_O^-$. The origin of this behavior can be traced back to the reduction of flat directions in the fully-differential fit, which helps reduce the correlations among the various Wilson coefficients. \begin{table}[t] \centering {\small \begin{tabular}{@{\hspace{.2em}}c@{\hspace{.2em}}|c@{\hspace{.65em}}c@{\hspace{.65em}}c|c@{\hspace{.65em}}c@{\hspace{.65em}}c@{\hspace{.2em}}} {\bf{95\%CL}}&\multicolumn{3}{c|}{single parameter}&\multicolumn{3}{c}{profiled}\\ \rule[-.6em]{0pt}{1.85em}$[10^{-3}\,\textrm{TeV}^{-2}]$ & fully diff. & fully diff. lin. & single diff. & fully diff. & fully diff. lin.
& single diff.\\ \hline \rule{0pt}{1.35em}${G}_{lq}^{(3)}$ & $[-0.46,\, 0.44]$ & $[-0.45,\, 0.45]$ & $[-0.80,\, 0.75]$ & $[-0.53,\, 0.48]$ & $[-0.57,\, 0.57]$ & $[-0.97,\, 0.77]$\\ \rule{0pt}{1.35em}${G}_{lq}^{(1)}$ & $[-1.95,\, 2.42]$ & $[-2.15,\, 2.15]$ & $[-2.42,\, 3.55]$ & $[-4.69,\, 5.28]$ & $[-12.8,\, 12.8]$ & $[-5.93,\, 8.75]$\\ \rule{0pt}{1.35em}${G}_{qe}$ & $[-2.13,\, 3.29]$ & $[-2.56,\, 2.56]$ & $[-3.19,\, 6.19]$ & $[-4.31,\, 5.35]$ & $[-139,\, 139]$ & $[-7.11,\, 7.05]$\\ \rule{0pt}{1.35em}${G}_{lu}$ & $[-1.76,\, 2.12]$ & $[-1.92,\, 1.92]$ & $[-2.87,\, 4.22]$ & $[-3.18,\, 5.41]$ & $[-69.2,\, 69.2]$ & $[-7.38,\, 8.57]$\\ \rule{0pt}{1.35em}${G}_{ld}$ & $[-8.98,\, 5.01]$ & $[-7.09,\, 7.09]$ & $[-11.9,\, 6.92]$ & $[-9.89,\, 8.95]$ & $[-148,\, 148]$ & $[-12.6,\, 14.0]$\\ \rule{0pt}{1.35em}${G}_{eu}$ & $[-1.22,\, 1.30]$ & $[-1.26,\, 1.26]$ & $[-1.51,\, 1.65]$ & $[-3.38,\, 5.75]$ & $[-16.3,\, 16.3]$ & $[-4.46,\, 7.29]$\\ \rule[-.75em]{0pt}{2.1em}${G}_{ed}$ & $[-4.74,\, 3.55]$ & $[-4.03,\, 4.03]$ & $[-7.12,\, 4.47]$ & $[-9.25,\, 9.33]$ & $[-41.3,\, 41.3]$ & $[-10.7,\, 12.7]$ \end{tabular}} \caption{$95{\%}$ CL projected bounds for the four-fermion operator coefficients in the Warsaw basis. The bounds are given in $10^{-3}\;\textrm{TeV}^{-2}$ units. } \label{tab:Bounds_Warsaw} \end{table} For completeness, we report in Table~\ref{tab:Bounds_Warsaw} the bounds on the four-fermion operators in the Warsaw basis. In this basis we find that $G_{lq}^{(3)}$ is expected to be determined with much higher precision than the other parameters. Moreover its determination is only mildly affected by profiling, differently from the bounds on the other coefficients that significantly degrade in the profiled fit. This behavior is clearly due to the fact that $G_{lq}^{(3)}$ is tested with high precision in the charged DY channel, which is not affected by the other effective operators. The impact of a fully-differential analysis is quite large for many Warsaw operators. 
In particular, the bounds on $G_{lq}^{(3)}$, $G_{qe}$ and $G_{lu}$ become roughly $40\%$ tighter, while the constraints on the other operators improve by an amount of order $10 - 20\%$. \begin{figure}[t] \centering \includegraphics[width=.52\linewidth]{Ladder.pdf} \hfill \raisebox{1.2em}{\includegraphics[width=.46\linewidth]{LadderChart.pdf}} \caption{Left panel: Expected bounds on the Wilson coefficients as a function of the upper cut on the energy of the events $\Lambda_{\rm{cut}}$. Right panel: Relative change in the bounds in the last two bins in $\Lambda_{\rm{cut}}$. The darker shadowing corresponds to the fully-differential analysis, while the lighter shadowing corresponds to the single-differential analysis.}\label{fig:Ladder} \end{figure} As a last point, we investigate the dependence of our results on the maximal energy scale, $\Lambda_{\rm{cut}}$, of the measurements included in the fit. This gives useful indications on the measurements that contribute most to the final sensitivity and on the energy range of validity of the EFT description of new physics that is theoretically required for the bounds to apply. Following Refs.~\cite{Farina:2016rws,Ricci:2020xre}, we show in the left panel of Figure~\ref{fig:Ladder} how the single-operator bounds change by retaining in the fit only the bins where $m_{\ell\ell} < \Lambda_{\rm{cut}}$ in the neutral channel and $p_{T,\ell} < \Lambda_{\rm{cut}}/2$ in the charged one. We see that the bounds on the most stringently constrained Wilson coefficients, namely $\widetilde G_{lq}^{(3)}$, $G_E^+$ and $G_O^+$, saturate around $\Lambda_{\rm{cut}} \sim 2\;\textrm{TeV}$. In particular, removing the last bin (starting at $2600$~GeV) has an extremely mild impact. The constraints on the other coefficients, on the contrary, receive sizable contributions from the events in the last bin.
This behavior can be explained by recalling that the bounds on the $\widetilde G_{lq}^{(3)}$, $G_E^+$ and $G_O^+$ coefficients in the fully-differential analysis are mainly driven by the linear interference terms in the cross-section prediction. These terms grow linearly with the partonic $\hat s$, so that the energy region with good sensitivity, which we find to be $\sqrt{\hat s} \sim 1 - 2\;\textrm{TeV}$, is where this growth starts being balanced by the decrease of the quark luminosity. For the other parameters, instead, the bounds are driven mostly by the square of the BSM contributions, which grows like $\hat s^2$. The faster growth pushes the sensitive region to higher energies. To better appreciate this point, we show in the right panel of Figure~\ref{fig:Ladder} the relative change in the bounds on the various parameters when the last bin is removed. For the fully-differential fit (darker shadowing), a variation below around $10\%$ is observed for $\smash{\widetilde G_{lq}^{(3)}}$, $G_E^+$ and $G_O^+$. The $\smash{G_O^-}$, $\smash{G_s^\bot}$ and $\smash{G_o^\bot}$ coefficients, instead, show a relatively large change in the bounds ($\simeq 30\%$). Finally, the $G_E^-$ coefficient shows an intermediate behavior, which is explained by the fact that the linear interference terms and the quadratic terms have comparable weight in its determination. It is interesting to notice that the sensitivity of the $G_O^+$ bound to the last bin is considerably lower in the fully-differential fit than in the single-differential one (displayed with lighter shadowing in the figure). This is because in the former case the bound is driven by the linear terms, while in the latter it is mainly driven by the quadratic terms. This difference can also be seen in the single-parameter bounds on $G_O^+$, which improve by roughly a factor $3$ with the fully-differential analysis (see Figure~\ref{fig:6Planes} and Table~\ref{tab:SBCC}).
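A toy scaling argument captures why interference-dominated bounds saturate at lower $\Lambda_{\rm{cut}}$ than quadratic-dominated ones (all exponents below are illustrative assumptions, not fitted to the actual luminosities):

```python
# Per-bin chi^2 ~ N_events * (relative BSM deviation)^2. Modeling the event
# yield as s^-a (steeply falling luminosity; a = 3 chosen for illustration)
# and the deviation as s^p (p = 1 interference, p = 2 quadratic), the chi^2
# density scales like s^(2p - a).
def chi2_density(s_hat, p, a=3.0):
    return s_hat ** (2 * p - a)

edges = [1.0, 1.5, 2.0, 2.6]  # schematic energy bin edges in TeV

def bin_fractions(p):
    """Fraction of the total chi^2 carried by each energy bin."""
    vals = [chi2_density(0.5 * (lo + hi), p) * (hi - lo)
            for lo, hi in zip(edges, edges[1:])]
    tot = sum(vals)
    return [v / tot for v in vals]

# Interference (p = 1): the lowest bin dominates;
# quadratic terms (p = 2): the highest bin dominates.
assert bin_fractions(1)[0] > bin_fractions(1)[-1]
assert bin_fractions(2)[-1] > bin_fractions(2)[0]
```

In this caricature the sign of $2p-a$ decides whether the sensitivity is saturated below $\Lambda_{\rm{cut}}$ or keeps growing with it, mirroring the behavior observed in Figure~\ref{fig:Ladder}.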
\subsubsection*{Running Effects} Our cross-section predictions include EW corrections at single-log accuracy, including those associated with the Renormalization Group evolution of the EFT operators~\cite{Ricci:2020xre}.\footnote{The tools in Ref.~\cite{Celis:2017hod} have been employed for the implementation.} Therefore our results depend, in principle, on the operator renormalization scale. This has been set to $2$~TeV because the measurements at that scale dominate the sensitivity, as previously shown. However the running effects are extremely small and our results do not depend on this choice in practice. We verified this by repeating the fit in two ways. In one case we switched the running off completely, while in the other we fixed the values of the Wilson coefficients at an energy scale $E = 10\;\textrm{TeV}$. In both cases the bounds on the Wilson coefficients, both the single-operator and the profiled ones, change by at most a few percent. Our results can thus be safely applied even to EFT operators defined at several tens of TeV. It is important to keep in mind that only quark-lepton current-current operators are included in our calculation. Other EFT operators do not produce growing-with-energy effects in high-energy DY, therefore their contribution is very suppressed relative to the quark-lepton ones and completely negligible if their size is not anomalously large. In particular this means that operators induced by the quark-lepton ones through running are completely negligible, because running is itself a small effect. The contribution to the running of the quark-lepton operators from the other ones is negligible for the same reason. On the other hand, one cannot firmly exclude the presence of other EFT operators with anomalously large coefficients that are not already excluded or that cannot be probed with other LHC measurements.
Such operators, if found to exist after a more systematic global exploration of the LHC EFT potential, should be included in the predictions. \subsection{Experimental uncertainties}\label{sec:uncertainty} This section is devoted to uncertainties that are not included in the analysis presented above, namely systematic uncertainties in the experimental cross-section measurements other than the luminosity uncertainty that was already taken into account. Like in Ref.~\cite{Ricci:2020xre}, our fits are based on the Poisson likelihood, which automatically takes into account the statistical component of the cross-section measurement errors. The systematic component of the experimental error is instead incorporated through nuisance parameters acting on the expected Poisson counts. Only the nuisance corresponding to a $2\%$ luminosity uncertainty has been included in the analysis, and its effect is very small as expected. The dominant experimental errors are indeed those that, unlike the luminosity, distort the shape of the differential distributions~\cite{Ricci:2020xre}. We cannot rely on any estimate of the size of these uncertainties, nor of their correlations across different analysis bins, which are however expected to have a major impact on the sensitivity to new physics. To get a sense of their possible impact, we adopt a crude parametrization of these effects by introducing a fully-uncorrelated $2\%$ error in all bins, both for the fully-differential and for the single-differential fit. Including these uncorrelated systematic errors, the combined $95\%$~CL bounds on the W and Y parameters become \begin{equation}\displaystyle \begin{array}{rll} {\rm{W}}: &\quad [-3.6, 3.7] \times 10^{-5} \quad & ([-5.3, 5.8] \times 10^{-5})\\ \rule{0pt}{1.5em}{\rm{Y}}: & \quad [-8.4, 8.9] \times 10^{-5} \quad & ([-10.9, 12.0] \times 10^{-5}) \end{array} \qquad \textrm{with}\ 2\%\ \textrm{uncorr. syst.}\,.
\end{equation} The numbers in parentheses refer to the single-differential fit. Comparing with the results in eq.~(\ref{eq:boundsWY}), we see that the bounds from the fully-differential analysis become roughly $25\%$ weaker. In the case of the single-differential analysis, the bounds on Y suffer from a similar change, while the ones on W are less affected and are only $10\%$ weaker. The advantage of a fully-differential analysis is however still evident also in these results. \begin{table}[t] \centering {\small \begin{tabular}{@{\hspace{.2em}}c@{\hspace{.2em}}|c@{\hspace{.65em}}c@{\hspace{.65em}}c|c@{\hspace{.65em}}c@{\hspace{.65em}}c@{\hspace{.2em}}} {\bf{95\%CL}}&\multicolumn{3}{c|}{single parameter}&\multicolumn{3}{c}{profiled}\\ \rule[-.6em]{0pt}{1.85em}$[10^{-3}\,\textrm{TeV}^{-2}]$ & fully diff. & fully diff. lin. & single diff. & fully diff. & fully diff. lin. & single diff.\\ \hline \rule{0pt}{1.35em}$\widetilde{G}_{lq}^{(3)}$ & $[-0.46, 0.46]$&$ [-0.46, 0.46]$&$[-0.62, 0.63]$&$[-0.68, 0.59]$&$[-0.72,0.72]$&$[-1.06, 0.84]$\\ \rule{0pt}{1.35em}${G}_{E}^{+}$ & $[-0.25, 0.23]$&$[-0.24, 0.24]$&$[-0.38, 0.32]$&$[-0.67, 0.36]$&$[-0.73, 0.73]$&$[-1.51, 0.51]$\\ \rule{0pt}{1.35em}${G}_{E}^{-}$ & $[-0.67, 0.70]$&$[-1.45, 1.45]$& $[-0.85, 0.95]$&$[-1.80, 1.61]$&$[-1.96, 1.96]$&$[-3.73, 2.24]$\\ \rule{0pt}{1.35em}${G}_{O}^{+}$ & $[-0.41, 0.37]$&$[-0.40, 0.40]$&$[-1.07, 1.24]$&$[-0.48, 0.72]$&$[-0.97,0.97]$&$[-1.43, 2.49]$\\ \rule{0pt}{1.35em}${G}_{O}^{-}$ & $[-0.79, 0.67]$&$[-3.01, 3.01]$&$[-0.99, 0.83]$&$[-1.84, 2.00]$&$[-6.21, 6.21]$&$[-2.33, 3.61]$\\ \rule{0pt}{1.35em}${G}_s^{\bot}$ & $[-2.18, 1.50]$&$[-4.21, 4.21]$&$[-2.31, 1.70]$&$[-2.59, 2.44]$&$[-11.9,11.9]$&$[-3.17, 2.85]$\\ \rule[-.75em]{0pt}{2.1em}${G}_o^{\bot}$ & $[-0.71, 0.69]$&$[-22.2, 22.2]$&$[-0.89, 0.88]$&$[-1.03, 0.91]$&$ [-31.8, 31.8]$&$ [-1.47, 1.46]$ \end{tabular} } \caption{$95{\%}$ CL projected bounds for the seven coefficients $\widetilde{G}_{lq}^{(3)}$, $G_E^\pm$, $G_O^\pm$ and $G_{s,o}^\bot$, 
including a $2\%$ uncorrelated systematic uncertainty.} \label{tab:SOR} \end{table} \begin{table}[t] \centering {\small \begin{tabular}{@{\hspace{.2em}}c@{\hspace{.2em}}|c@{\hspace{.65em}}c@{\hspace{.65em}}c|c@{\hspace{.65em}}c@{\hspace{.65em}}c@{\hspace{.2em}}} {\bf{95\%CL}}&\multicolumn{3}{c|}{single parameter}&\multicolumn{3}{c}{profiled}\\ \rule[-.6em]{0pt}{1.85em}$[10^{-3}\,\textrm{TeV}^{-2}]$ & fully diff. & fully diff. lin. & single diff. & fully diff. & fully diff. lin. & single diff.\\ \hline \rule{0pt}{1.35em}${G}_{lq}^{(3)}$&$[-0.57, 0.55]$&$ [-0.56, 0.56]$&$[-0.89, 0.82]$&$ [-0.68, 0.59]$&$ [-0.72, 0.72]$&$[-1.06, 0.84]$\\ \rule{0pt}{1.35em}${G}_{lq}^{(1)}$ & $[-2.21, 2.97]$&$[-2.50, 2.50]$&$[-2.90, 6.60]$&$ [-5.28, 5.88]$&$[-14.9,14.9]$&$[-6.65, 9.92]$\\ \rule{0pt}{1.35em}${G}_{qe}$ &$[-2.39, 4.33]$&$[-3.02, 3.02]$&$[-3.78, 7.83]$&$[-4.40, 5.67]$&$[-174,174]$&$ [-7.70, 8.19]$\\ \rule{0pt}{1.35em}${G}_{lu}$ & $[-1.98, 2.53]$&$[-2.21, 2.21]$&$[-3.49, 8.10]$&$[-3.29, 5.74]$&$[-86.7,86.7]$&$[-7.95, 9.81]$\\ \rule{0pt}{1.35em}${G}_{ld}$ & $[-10.1, 5.58]$&$[-8.83, 8.83]$&$[-13.1, 8.00]$&$ [-10.57, 9.19]$&$[-179,179]$&$ [-15.0, 15.3]$\\ \rule{0pt}{1.35em}${G}_{eu}$ & $[-1.48, 1.63]$&$[-1.55, 1.55]$&$[-1.95, 2.30]$&$[-3.46, 6.58]$&$[-18.7,18.7]$&$[-4.65, 8.67]$\\ \rule[-.75em]{0pt}{2.1em}${G}_{ed}$ & $[-7.83, 4.39]$&$[-5.42, 5.42]$&$[-12.7, 5.62]$&$[-11.0, 9.81]$&$[-46.5, 46.5]$&$ [-13.3, 13.5]$ \end{tabular} } \caption{$95{\%}$ CL projected bounds for the four-fermion operator coefficients in the Warsaw basis, including a $2\%$ uncorrelated systematic uncertainty.} \label{tab:SWB} \end{table} In Tables~\ref{tab:SOR} and~\ref{tab:SWB} we give the bounds on the lepton-quark four-fermion operators including the uncorrelated systematic uncertainty. The impact of the uncertainty is relatively large on $\widetilde G_{lq}^{(3)}$ and $G_E^+$, whose determination becomes roughly $25\%$ weaker. 
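The statistical model underlying these fits, a Poisson likelihood with a fully-correlated luminosity nuisance and, in this section, additional uncorrelated per-bin systematic nuisances, can be sketched as follows. This is a minimal illustration, not the actual fitting code of the analysis; the function and variable names are ours.

```python
import math

# Minimal sketch (illustration only) of a binned Poisson likelihood with a
# fully-correlated luminosity nuisance and per-bin uncorrelated systematic
# nuisances, both constrained by 2% Gaussian priors as assumed in the text.

LUMI_SIGMA = 0.02  # luminosity uncertainty, correlated across all bins
SYST_SIGMA = 0.02  # uncorrelated per-bin systematic uncertainty

def neg2_log_likelihood(observed, expected, delta_lumi, deltas_syst):
    """-2 log L up to constants: Poisson terms plus Gaussian constraints."""
    total = (delta_lumi / LUMI_SIGMA) ** 2  # constraint on the lumi nuisance
    for n_obs, n_exp, d in zip(observed, expected, deltas_syst):
        # nuisance parameters rescale the expected counts in each bin
        mu = n_exp * (1.0 + delta_lumi) * (1.0 + d)
        total += 2.0 * (mu - n_obs * math.log(mu)) + (d / SYST_SIGMA) ** 2
    return total
```

Profiling over `delta_lumi` and `deltas_syst` at each point in Wilson-coefficient space then yields the test statistic from which the confidence intervals are derived.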
The reduction in sensitivity for the other coefficients is instead milder, at most of order $10\%$. Similar results are found for the operators in the Warsaw basis (Table~\ref{tab:SWB}). In this basis the most affected operators are $G_{lq}^{(3)}$, $G_{eu}$ and $G_{ed}$, with a loss of sensitivity of order $25\%$, while the bounds on the other operators are quite stable. One can also see that the impact of the systematic uncertainty on the fully-differential fit and on the single-differential one is comparable. \section{Sensitivity to minimal Z$^\mathbf{\prime}$ models}\label{sec:Zprime} For a concrete assessment of the benefits of the fully-differential analysis, we consider in this section a minimal BSM scenario featuring a single additional vector boson that gauges a generic linear combination of the hypercharge $\textrm{U}(1)_Y$ and $B-L$. The Lagrangian describing the new vector boson is \begin{align} \mathcal{L}_{Z'} = -\frac{1}{4}F_{\mu \nu}^2+ \frac{1}{2} {M}^2 A^2 + A_{\mu} J^{\mu}\,, \end{align} with the current \begin{equation}\displaystyle J^{\mu} = \sum_{f}\left[g_Y Y(f)+g_{BL}(B-L)(f)\right]\bar{f} \gamma^{\mu} f + g_Y\frac{i}{2} H^{\dagger} {\scriptstyle \overleftrightarrow{\rule{0pt}{.75em}}}\hspace{-.95em}{D} H\,, \end{equation} where $f$ runs over the SM fermions and $H$ is the Higgs doublet. In the above formula $g_Y$ and $g_{BL}$ are free parameters, while $Y(f)$, $B(f)$ and $L(f)$ are the hypercharge, baryon and lepton numbers of the various fermions, respectively. This model has been studied extensively in the literature, in particular in Ref.~\cite{Salvioni:2009mt}, where a first projection of the LHC direct search sensitivity was given and compared with the indirect constraints from precision measurements (EWPT) performed at LEP and other experiments.\footnote{For the model to be free of gauge anomalies, three right-handed neutrinos with $B-L$ coupling must be present.
Contrary to Ref.~\cite{Salvioni:2009mt}, we take these states to be nearly massless.} \begin{figure}[t] \centering \includegraphics[width=.515\linewidth]{LEPvsLHC.pdf} \hfill \includegraphics[width=.465\linewidth]{DirvsIndir.pdf} \caption{Left panel: $95\%$~CL exclusion reach in the $(g_{Y}/M, g_{BL}/M)$ plane from EWPT (gray shaded region)~\cite{Salvioni:2009mt} and projected exclusion at the HL-LHC (blue and orange shaded regions). Right panel: Comparison of direct (green shaded region) and indirect (blue and orange shaded regions) exclusions at the HL-LHC for a heavy vector of mass $M = 7\;\textrm{TeV}$. In both panels the blue shaded region corresponds to the fully-differential fit, while the orange shaded region is obtained with the single-differential one.} \label{fig:Zprime_comp} \end{figure} When integrated out at tree-level, the massive $Z'$ produces all the flavor-universal lepton-quark operators in Table~\ref{tab:GenOp} except ${\mathcal{O}}_{lq}^{(3)}$, with Wilson coefficients that are readily computed in terms of the three free parameters $g_Y$, $g_{BL}$ and $M$. Clearly the Wilson coefficients are quadratic polynomials in the ratios $g_Y/M$ and $g_{BL}/M$, which are therefore the only two parameter combinations that can be probed by indirect searches\footnote{Strictly speaking, this statement is true only at tree level. In fact, already at our level of accuracy (NLL), the RG flow induces an additional logarithmic dependence on $M$ in our observables. However these effects are extremely small, as can be appreciated in Figure~\ref{fig:Mgplane}, where the indirect-search contours are well approximated by straight lines.}. Furthermore the indirect constraints are symmetric under an overall change of sign of the couplings $(g_{Y},g_{BL})\rightarrow(-g_{Y},-g_{BL})$.
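For reference, the tree-level matching takes the schematic form (up to the overall normalization conventions of Table~\ref{tab:GenOp})
\begin{equation}
G_{f f'}\;\sim\;-\,\frac{\big[g_Y\,Y(f)+g_{BL}(B-L)(f)\big]\big[g_Y\,Y(f')+g_{BL}(B-L)(f')\big]}{M^2}
\end{equation}
for the operator built from the $\bar f\gamma^\mu f$ and $\bar f'\gamma_\mu f'$ currents, which makes manifest both the quadratic dependence on $g_Y/M$ and $g_{BL}/M$ and the symmetry under $(g_{Y},g_{BL})\rightarrow(-g_{Y},-g_{BL})$.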
The $95\%$~CL reach on the model at the HL-LHC is displayed in Figure~\ref{fig:Zprime_comp} in the $(g_{Y}/M, g_{BL}/M)$ plane (left panel) and in the $(g_{Y}, g_{BL})$ plane for a fixed mass $M=7$~TeV (right panel). The bounds are obtained from the fully- and single-differential analyses described in the previous section, but including in this case a $2\%$ uncorrelated experimental uncertainty in the measurements, aiming at a more conservative result. The advantage of the fully-differential analysis over the single-differential one is mainly in the region $g_{BL} \simeq -g_Y$. This region is particularly difficult to probe as it entails the cancellation of the $Z'$ coupling to the right-handed electrons, as well as the suppression of the couplings to the left-handed quark doublets, the right-handed up-type quarks and the left-handed lepton doublets. Therefore in this region $G_{qe}$, $G_{eu}$ and $G_{ed}$ vanish and $G_{lq}^{(1)}$, $G_{lu}$ and $G_{ld}$ are suppressed (and $\smash{G_{lq}^{(3)}}$ is always zero). We notice in passing that the suppression of the couplings to quarks also determines a reduction of the direct production cross-section at the LHC, which makes direct searches less effective. The fully-differential analysis not only improves the sensitivity along the $g_{BL} = -g_Y$ direction, but also mitigates the impact of the quadratic terms in the cross-section prediction. This is shown by the dashed lines in the left panel of Figure~\ref{fig:Zprime_comp}, reporting the results of the linearized fits. The single-differential linearized analysis possesses two very pronounced flat directions that correspond to directions in the $(g_{Y},g_{BL})$ plane where the $G_{E}^{+}$ coefficient cancels. The fully-differential linearized contour is instead quite close to the full fit, thanks to the improved sensitivity to $G_O^+$ at the linear level.
\begin{figure}[t] \centering \includegraphics[width=0.32\linewidth]{gMPlane1.pdf} \hfill \includegraphics[width=0.32\linewidth]{gMPlane2.pdf} \hfill \includegraphics[width=0.32\linewidth]{gMPlane3.pdf} \caption{HL-LHC 95{\%} CL (1 d.o.f) exclusion reach in the mass/coupling plane for three different $Z'$ benchmark models, namely $g_{BL} = 0$, $g_{BL} = - g_Y$ and $g_{BL} = g_Y$. The blue shaded region can be excluded through the fully-differential di-lepton DY analysis, while the orange one can be probed with the invariant-mass fit. The green shaded region corresponds to the exclusion from direct searches.}\label{fig:Mgplane} \end{figure} In the left panel of Figure~\ref{fig:Zprime_comp} we also compare our result with existing EWPT constraints, extracted from Ref.~\cite{Salvioni:2009mt}. With the fully-differential analysis, the HL-LHC improves on EWPT by a factor of around $3$ in $g/M$ along most directions in the $(g_{Y}/M, g_{BL}/M)$ plane, which corresponds to an improvement of one order of magnitude in the sensitivity to the Wilson coefficients, which scale like $(g/M)^2$. Furthermore, notice that the EWPT bounds in the figure are based on actual experimental measurements whose central value, while compatible with the SM, disfavors the $Z'$ model. This is easily verified in the direction $g_{BL}=0$, where integrating out the $Z'$ produces only the ${\cal O}'_{2B}$ operator with a negative coefficient, which corresponds to a positive Y parameter. The central value of Y measured at LEP is instead negative (see e.g.~Ref.~\cite{Farina:2016rws}), making the EWPT exclusion on the model stronger. Our HL-LHC projections assume instead a central value at the SM point. Depending on the sign of the central value that will eventually be observed, the actual sensitivity to the model could be stronger or weaker than the projection.
We turn now to the comparison of our findings with the projected HL-LHC sensitivity of direct searches for the $Z'$ particle, which are most effectively performed in the dilepton final state. The exclusion on the resonant production cross-section times branching ratio is obtained from the projections in Ref.~\cite{Thamm:2015zwa}, slightly improved to take into account the more recent and refined results in Ref.~\cite{CidVidal:2018eel}. The $Z'$ production cross-section is obtained from two {\sc{MadGraph}} \cite{Alwall:2014hca} simulations (at each $Z'$ mass) with the $Z'$ coupling only to up- or to down-type quarks, rescaled based on the analytical calculation of these couplings as a function of $g_{Y}$ and $g_{BL}$. The branching ratio is also computed analytically. The results are reported in Figure~\ref{fig:Mgplane}, in the mass/coupling plane, for three benchmark models $(g_Y=g_*, \, g_{BL} = 0)$, $(g_Y=g_*/\sqrt{2}, \, g_{BL} = -g_*/\sqrt{2})$ and $(g_Y=g_*/\sqrt{2}, \, g_{BL} = g_*/\sqrt{2})$. Notice that the plot extends up to the maximal $g_*$ coupling for which, depending on the model, the width-over-mass ratio $\Gamma/M$ of the $Z'$ is reasonably small ($<0.3$), enabling a perturbative treatment. The indirect reach from our analyses, and from EWPT, is also reported in the plots. We find a substantial improvement of the mass reach for relatively large $g_*$, up to around $30$~TeV in the first and in the second benchmark model. Finally, in the right panel of Figure~\ref{fig:Zprime_comp} we compare direct and indirect searches in the $(g_{Y}, g_{BL})$ plane at a fixed mass $M=7$~TeV, slightly below the threshold of around $8$~TeV beyond which direct searches become ineffective. The direction $g_{BL} = -g_Y$ is difficult to probe also directly, as anticipated. The sensitivity improvement of the fully-differential analysis along this direction is significant.
\subsubsection*{Discovery and characterization} High-energy measurements have the potential to discover the $Z'$. This is shown on the left panel of Figure~\ref{fig:discovery} by comparing the HL-LHC $5\sigma$~discovery reach with the current exclusion bound from EWPT in the $(g_{Y}/M, g_{BL}/M)$ plane. For $M$ of several TeV or more, direct searches are ineffective and high-energy measurements will provide the only evidence for the existence of the $Z'$. While ``indirect'', i.e.~not based on the detection of a resonant peak, this evidence would be a conclusive and convincing proof of the existence of new physics, thanks to the peculiar growing-with-energy behavior of the observed signal and to the possibility of confirming its nature through the study of angular distributions. The fully-differential analysis would clearly play a major role in this context, in addition, of course, to enabling the discovery itself in a larger region of the parameter space. We illustrate the benefits of the fully-differential analysis for the characterization of a putative signal by picking a point ($g_{BL}/M = 0.12\;\textrm{TeV}^{-1}$ and $g_Y=0$) which is discoverable at the HL-LHC, but close enough to the boundary of the discovery region to make characterization more difficult. We assume the presence of the corresponding signal in the data and we obtain the $95\%$~CL likelihood contours on the right panel of Figure~\ref{fig:discovery}. A simple question related to characterization is whether we can establish that the underlying $Z'$ couples to the $B-L$ current, rather than, for instance, to the hypercharge current. The figure shows that this is possible only with the fully-differential analysis. \begin{figure}[t] \centering \raisebox{1em}{\includegraphics[width=.54\linewidth]{Disc.pdf}} \hfill \includegraphics[width=0.4\linewidth]{Car.pdf} \caption{Left panel: $5\sigma$ discovery reach for HL-LHC and $95{\%}$ CL EWPT exclusions.
Right panel: $95{\%}$ CL likelihood contours assuming the presence of a $Z'$ with $g_{BL}/M=0.12 \;\text{TeV}^{-1}$ in the measurements.} \label{fig:discovery} \end{figure} \section{Conclusions}\label{sec:conc} We studied the potential of fully-differential DY measurements to probe the seven flavor-universal current-current operators listed in Table~\ref{tab:GenOp}. We found (see Table~\ref{tab:SBCC}) that five directions in this parameter space can be probed effectively at the linear interference level with the fully-differential analysis, while with single-differential measurements this is possible for only two directions. At the HL-LHC, the strongest single-parameter sensitivity improvement, by a factor of $3$, is for the parameter ${G}_{O}^{+}$, for the reasons explained in Section~\ref{treedits}. Improvements in the ballpark of $10\mathrm{-}30\%$ are observed for the other single-parameter bounds. The improvement is significantly more pronounced for the profiled bounds, owing to the reduction of correlations in the fully-differential fit. The augmented sensitivity at the interference level makes the fully-differential results generically more stable when the quadratic new physics terms are excluded from the predictions. This is beneficial for considerations related to the validity of the EFT, as it lowers the scale of the measurements that drive the sensitivity and reduces the impact of removing the highest-energy cross-section bins, as shown in Figure~\ref{fig:Ladder}. The observed sensitivity improvement is due to two distinct factors. The one with the strongest impact is the extended linear-level sensitivity mentioned above and explained in detail in Section~\ref{treedits}. The second, which is also quite a strong effect, is the reduced impact of PDF uncertainties due to their correlations across different analysis bins.
These correlations are typically different from those of the EFT differential cross-section predictions, making it harder for the PDF nuisance parameters to mimic the signal and degrade the sensitivity. Of course the effect is quantitatively so important because the PDFs are among the dominant sources of uncertainty in our fit, in agreement with the findings of Ref.~\cite{Ricci:2020xre}. Correspondingly, the benefits of the fully-differential analysis are (mildly) reduced when other sources of systematic uncertainty are assumed to be present, lowering the relative impact of PDF uncertainties in the total error budget. We verified this fact in Section~\ref{sec:uncertainty} by including a $2\%$ systematic uncertainty uncorrelated across all bins, on top of the fully correlated luminosity uncertainty that is present (but has a totally negligible impact) in the results of Sections~\ref{WYSubSec} and~\ref{GCCSubSec}. The dependence (at the $10\%$ or $20\%$ level) of our results on the assumed patterns of experimental systematic uncertainties outlines the need for detailed experimental projections for DY measurements. Experimental uncertainties that are fully correlated across all bins, as in Sections~\ref{WYSubSec} and~\ref{GCCSubSec}, are definitely unrealistic. However, assuming the uncertainties to be fully uncorrelated, as in Section~\ref{sec:uncertainty}, is equally unrealistic. We do expect correlations, especially in the fully-differential measurements, whose impact could be beneficial for the sensitivity, analogously to what we found for the PDFs. The final HL-LHC sensitivity could thus be closer to the one in Sections~\ref{WYSubSec} and~\ref{GCCSubSec} than to the one in Section~\ref{sec:uncertainty}. Furthermore, our findings are based on the statistically sub-optimal strategy of comparing cross-section measurements with EFT predictions, rather than comparing the EFT directly with the observed data.
More sophisticated and unbinned strategies could be considered to further improve the sensitivity. A significant improvement in the sensitivity is also found in the LHC Run~3 projections (see Appendix~\ref{app:LHCProj}). The gain is however much milder than for the HL-LHC, mostly because the number of expected events is too low to efficiently reconstruct the full angular distributions at high energy (i.e.~to sufficiently populate all the bins required for a fully-differential analysis). The sensitivity improvement of fully-differential measurements has a direct impact on concrete putative new physics scenarios, as we discussed in Section~\ref{sec:Zprime} for a simple minimal $Z^\prime$ model. The point is that in models where the charged-current $\smash{{\mathcal{O}}_{lq}^{(3)}}$ operator is absent, the single-differential DY analysis is mostly sensitive to a single EFT parameter combination: $\smash{G_{E}^{+}}$. In a new physics model it will generically be possible to suppress $\smash{G_{E}^{+}}$ without particular fine-tuning, making the single-differential analysis lose sensitivity in a large region of the parameter space, as in Figure~\ref{fig:Zprime_comp}. The fully-differential analysis will boost the sensitivity in that region. The results of Section~\ref{sec:Zprime} also outline the effectiveness of high-$p_T$ probes on a well-established new physics benchmark that has been investigated since the beginning of the LHC program. High-$p_T$ probes by high-energy DY measurements extend (see Figure~\ref{fig:Mgplane}) the projected HL-LHC exclusions well beyond the reach of direct searches in a large region of the parameter space, with sensitivity to masses from $10$ to $30$~TeV. Discovery is also possible up to around $20$~TeV. In the event of a discovery, fully-differential measurements will play a crucial role in the characterization of the observed signal, as illustrated in Figure~\ref{fig:discovery}. \section*{Acknowledgements} The work of L.R.
was supported by the Swiss National Science Foundation under contract 200021-178999. G.P.~was supported in part by the MIUR under contract 2017FMJFMW (PRIN2017).
\section{Introduction \label{sec:intro}} The universe starts with a nearly uniform distribution of dark matter, and gravity gradually attracts matter to local density peaks to form virialized halos. The inhomogeneity of the mass density keeps increasing; in particular, the mass density grows non-linearly in higher-density regions \citep[cf.,][]{peebles80}. Galaxies are formed according to the distribution of dark matter halos. As dark matter halos become more massive by merging with surrounding halos, the contrast of the number density of galaxies also increases. In the local universe, as the result of hierarchical structure formation across cosmic time, we can see various regions such as galaxy clusters, groups, filaments, or voids, which compose the large-scale structure of the universe, or cosmic web \citep[e.g.,][]{lapparent86,geller89,alpaslan14,libeskind18}. Galaxy clusters are usually located at the knots of the cosmic web; in particular, massive clusters tend to be surrounded by other clusters or groups, known as ``superclusters,'' which are overdense regions on scales of a few tens of Mpc. About half of local clusters are found to reside in superclusters, based on the Abell cluster catalog or X-ray surveys \citep{bahcall84,chon13}. Therefore, galaxy clusters are key components of the large-scale structure of the universe, where the mass density has drastically increased over the initial small fluctuations. In addition, galaxy clusters are good laboratories for understanding environmental effects on galaxy evolution. In the local universe, it is well known that galaxy properties in higher-density regions are significantly different from those in lower-density regions: red, massive elliptical galaxies tend to reside in galaxy clusters, and cluster galaxies make a tight sequence on a color-magnitude diagram \citep[e.g.,][]{dressler80,lewis02,kauffmann04,bamford09}.
The stellar populations of these galaxies imply that they are generally formed at higher redshifts than their field counterparts, and experience a short and intense star-formation activity, like a star-burst phase, early in their formation history \citep{thomas05,raichoor11,gu18}. Furthermore, the physical properties of brightest cluster galaxies depend on the internal structure or dynamical state of their host clusters at $z\lesssim0.5$ \citep{wen13}, and, in the local universe, the morphology of superclusters correlates with, e.g., the stellar mass and star-formation rate (SFR) of member galaxies \citep{einasto14}. Although we can see differences in galaxy properties in the local universe that are attributed to environmental effects, it is still unclear when and how galaxies are affected by their surrounding environments. In parallel with galaxy evolution in high-density environments, the large-scale structure itself is developing over cosmic time. The redshift evolution of galaxies and that of the large-scale structure are intricately connected due to anisotropic galaxy/mass assembly along filaments \citep{kraljic18}. This complexity would appear most prominently in galaxy clusters, as they are at the knots of the cosmic web. Thus, the existence of environmental effects on galaxy evolution is clearly confirmed by studies of local galaxy clusters. However, in order to reveal the physical mechanisms of environmental effects over the long history of cluster formation, we need to directly observe the early stages of cluster formation, which would allow us to understand the physical properties of these systems on their way to becoming mature galaxy clusters. The progenitors of galaxy clusters at high redshifts, or protoclusters, are good laboratories for investigating the relation between galaxy evolution and cluster formation \citep{overzier16}.
Galaxy clusters having extended X-ray emission are found up to $z=2.5$ \citep{gobat13,wang16}, and quiescent galaxies reside in some clusters at $z\sim2$ \citep{newman14,strazzullo18}. Beyond $z\sim2$, young star-forming galaxies, such as Lyman break galaxies (LBGs) and Ly$\alpha$ emitters (LAEs), tend to be the dominant galaxy population even in high-density regions \citep{kuiper10,spitler12,contini16}, though some quiescent galaxies are also clearly found in protoclusters at $z\sim2\mathrm{-}3$ \citep{kodama07,kubo13,shi19a}. Thus, protoclusters are found to harbor a wide range of galaxy populations. \citet{shimakawa18} showed that protocluster galaxies at $z\sim2\mathrm{-}4$ are more actively forming stars than those in the field, and \citet{forrest17} found that extreme [O{\sc iii}]$+$H$\beta$ emission line galaxies are clustered in an overdense region. Similarly, dusty star-burst galaxies identified by submillimeter imaging are frequently discovered in protoclusters \citep{casey16,umehata17,miller18,zeballos18}. However, \citet{tran17} reported that H$\alpha$ emitters exhibit similar stellar growth regardless of environment. It is still unclear what causes the diversity of protocluster properties, such as star-forming activity. Some studies imply that there is a large amount of cold gas around protoclusters \citep{cucciati14,lemaux18}. Such cold gas could enhance the star formation of protocluster galaxies, or ignite a star-burst if it falls into the core of a halo as a cold stream. Even if the total amount of cold gas around protoclusters is the same, how much star formation is enhanced can vary greatly, because the accretion rate of such cold gas depends on the surrounding large-scale structure, such as the number of filaments connected to knots \citep{dekel09,aragon10,liao19}.
Although there are other possible physical mechanisms, investigating protoclusters from the viewpoint of the large-scale structure is one approach to revealing galaxy evolution in high-density environments. However, the rarity of protoclusters at high redshifts makes it difficult to conduct a systematic study. So far, the number of known protoclusters is only a few tens at $z\gtrsim2$ (only $\sim10$ at $z\gtrsim4$). To find such rare objects, many studies have used radio galaxies (RGs) or quasars (QSOs) as signposts of overdense regions \citep[e.g.,][]{venemans07,wylezalek13}, because such galaxies are expected to be located in massive dark matter halos \citep[e.g.,][]{shen07,orsi16}. However, the relation between these objects and environment is still under debate: \citet{noirot18} confirmed protoclusters around RGs at $1.4<z<2.8$, while \citet{uchiyama18} found that there is no correlation between QSOs and environment at $z\sim4$. The fraction of active galactic nuclei (AGNs) also differs among protoclusters \citep{lehmer13,krishnan17,macuga19}. Thus, the method of using signposts has the potential to pick up only a subset of protoclusters. Complementary protocluster searches based on blank surveys without such signposts have been extensively performed recently. For example, the spectroscopic survey of the VIMOS Ultra-Deep Survey \citep{fevre15} has found many protoclusters at $z\sim3\mathrm{-}5$ by the direct investigation of the spatial and redshift clustering of galaxies \citep{lemaux14,cucciati14,lemaux18,cucciati18}; the wide-field imaging survey of Hyper Suprime-Cam \citep{aihara18} has produced a systematic sample of protocluster candidates up to $z\sim6.6$ based on the projected overdensity of LBGs and LAEs \citep{toshikawa18,higuchi19}. It should be noted that even blank searches would identify only another subset of protoclusters.
Photometric surveys require a particular selection of galaxy population; on the other hand, spectroscopic surveys tend to observe brighter targets than photometric surveys, though most of them reach well below the characteristic luminosity at a given redshift. Through these various searches, the number of known protoclusters is gradually increasing \citep{badescu17,oteo18}, which enables us to see a large variety of protoclusters (e.g., overdensity, size, galaxy population, and physical properties of member galaxies). \citet[][hereafter T16]{toshikawa16} also carried out a blank protocluster search in the $4\,\mathrm{deg^2}$ area of the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) Deep Fields \citep{gwyn12}, and identified 21 protocluster candidates at $z\sim3\mathrm{-}6$, defined as $>4\sigma$ significance overdense regions of $u$-, $g$-, $r$-, or $i$-dropout galaxies. By comparison with a theoretical model \citep{henriques12}, 76\% of the candidates are expected to be real protoclusters. Following this search for protocluster candidates, follow-up spectroscopy was conducted on several of them. Three of the four spectroscopically observed candidates were confirmed as genuine protoclusters at $z=3.13$, 3.24, and 3.67, each with more than five spectroscopically confirmed members. Although we have made follow-up spectroscopy for only four candidates at $z\sim3\mathrm{-}4$, this success rate is consistent with the theoretical expectation. In two other candidates, close galaxy pairs were found at $z=4.89$ and 5.75, suggesting the existence of protoclusters, though more complete follow-up spectroscopy is required to reach a conclusion. In this study, we extend the follow-up spectroscopy of the protocluster candidates found by T16 to discover more protoclusters and to investigate each protocluster closely. 
We observe three overdense regions using KeckII/DEIMOS \citep{faber03}: the first includes a plausible protocluster candidate at $z=4.89$, where a close galaxy pair was found; the second was not observed by the previous follow-up spectroscopy of T16; the last contains a known protocluster at $z=3.67$, for which we make a more detailed investigation by increasing the number of identified member galaxies. The wide field-of-view (FoV) of KeckII/DEIMOS allows us to discuss the spatial and redshift distribution of galaxies from the viewpoint of large-scale structure. Here we present the results of our follow-up spectroscopy, including newly confirmed protoclusters. The structure of this paper is as follows. Section \ref{sec:obs} describes the new observations and the details of the targets. In Section \ref{sec:res}, the results of the follow-up spectroscopy are shown, and the probability of the existence of protoclusters is estimated. We discuss the primordial large-scale structure and the internal structures of protoclusters in Section \ref{sec:disc}. The conclusions are provided in Section \ref{sec:conc}. We assume the following cosmological parameters: $\Omega_\mathrm{M}=0.3, \Omega_\Lambda=0.7, \mathrm{H}_0=70 \mathrm{\,km\,s^{-1}\,Mpc^{-1}}$, and magnitudes are given in the AB system. \section{Observations \label{sec:obs}} \subsection{Targets} We have obtained follow-up spectroscopy of three overdense regions in the CFHTLS Deep Fields identified by T16. Here is a brief description of the imaging dataset and the protocluster search in T16. The CFHTLS Deep Fields consist of four separate fields, and, in each field, five optical broad-band datasets are available over $\sim1\,\mathrm{deg^2}$. The depth is almost uniform from field to field, and the $3\sigma$ limiting magnitudes are $\sim28.1$, 28.3, 27.8, 27.3, and $26.4\,\mathrm{mag}$ in the $u$-, $g$-, $r$-, $i$-, and $z$-bands, respectively. 
This corresponds to about $M^*_\mathrm{UV}+2$ at $z\sim4\mathrm{-}5$ \citep[where $M^*_\mathrm{UV}$ is the characteristic magnitude of the Schechter function;][]{bouwens07,burg10}. From these wide and deep fields, we selected $u$-, $g$-, $r$-, and $i$-dropout galaxies by the standard Lyman break technique over a $\sim4\,\mathrm{deg^2}$ area. The local surface number density of galaxies is calculated by counting dropout galaxies within an aperture of $0.75\,(1.0)\,\mathrm{Mpc}$ radius in physical scale for $u$-, $g$-, and $r$-dropout ($i$-dropout) galaxies, because about 65\% of the mass of the progenitors of $1\mathrm{-}3\times10^{14}\,M_\sun$ halos is enclosed within this radius according to theoretical predictions \citep{chiang13}. Although larger apertures can include protocluster galaxies more completely, the excess number density due to a protocluster would be diluted by the contamination of fore/background galaxies caused by the large redshift uncertainty of the Lyman break technique ($\Delta z\sim1$). The apertures are distributed over the whole CFHTLS Deep Fields, and protocluster candidates are defined as regions where the number density excess over the average is of $>4\sigma$ significance. In comparison with a theoretical model \citep{henriques12}, 76\% of the $>4\sigma$ overdense regions are expected to grow into galaxy clusters with halo masses of $>10^{14}\,\mathrm{M_\sun}$ (refer to T16 for the details). Due to the large redshift uncertainty of the Lyman break selection, the completeness of our protocluster search is very small ($\sim10\%$); the progenitors of smaller galaxy clusters, in particular, would be largely affected. Thus, it should be noted that our search using dropout galaxies preferentially identifies more massive protoclusters. \begin{deluxetable*}{ccccccc} \tablecaption{Overview of the targets of the follow-up spectroscopy \label{tab:target}} \tablewidth{0pt} \tablehead{Name & R.A. (J2000) & Decl. 
(J2000) & Population & Overdensity\tablenotemark{a} & $N_\mathrm{LBG}$\tablenotemark{b} & T16\tablenotemark{c}} \startdata D1RD01 & 02:24:45.3 & $-$04:55:56.5 & $r$-dropout & $4.4\sigma$ & 40 & Yes (15) \\ D1GD02 & 02:25:56.2 & $-$04:48:30.4 & $g$-dropout & $4.2\sigma$ & 153 & No \\ D4GD01 & 22:16:47.3 & $-$17:16:52.7 & $g$-dropout & $4.3\sigma$ & 153 & Yes (60) \\ \enddata \tablenotetext{a}{Overdensity at the peak.} \tablenotetext{b}{Number of dropout galaxies within a $3\,\mathrm{arcmin}$ radius of the overdensity peak. Note that DEIMOS can observe more galaxies, since its FoV is wider than $6\,\mathrm{arcmin}$.} \tablenotetext{c}{The overdense regions observed by follow-up spectroscopy in T16 are marked as ``Yes.'' The number of galaxies spectroscopically observed in T16 within a $3\,\mathrm{arcmin}$ radius of the overdensity peak is given in parentheses.} \vspace{-10mm} \end{deluxetable*} By this criterion, 21 protocluster candidates are identified from $z\sim3$ to $z\sim6$. Eight of them (two at each redshift) were observed by follow-up spectroscopy in T16. Based on the theoretical model, we evaluated the spatial distribution of protocluster members that will merge into the same halo at $z=0$ (see Figure 8 in T16), and a typical redshift extent of protoclusters is found to be $\Delta z\lesssim0.03$. Three of the eight targeted candidates show strong redshift clustering within this redshift range, and we investigated whether the observed concentrations could be reproduced by chance from a random galaxy distribution drawn from the redshift selection function of dropout galaxies. As a result, they cannot be reproduced from a random distribution $>99\%$ of the time. Therefore, these three candidates are confirmed to be real protoclusters at $z=3.13$, 3.24, and 3.67. 
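The random-redshift test just described can be reproduced in outline as follows. This is a minimal sketch, not the T16 code: the Gaussian stand-in for the dropout redshift selection function (its mean and width here are arbitrary) and the trial count are assumptions of this illustration.

```python
import random

def max_in_window(redshifts, dz=0.03):
    """Largest number of galaxies falling within any redshift window of size dz."""
    zs = sorted(redshifts)
    best = 0
    for i, zlo in enumerate(zs):
        best = max(best, sum(1 for z in zs[i:] if z <= zlo + dz))
    return best

def clustering_probability(observed, n_trials=2000, z_mean=3.8, z_sigma=0.4, seed=0):
    """Fraction of random trials whose strongest dz=0.03 clustering matches or
    exceeds the observed one.  Redshifts are drawn from a Gaussian stand-in
    for the dropout selection function (an assumption of this sketch)."""
    rng = random.Random(seed)
    target = max_in_window(observed)
    hits = 0
    for _ in range(n_trials):
        mock = [rng.gauss(z_mean, z_sigma) for _ in range(len(observed))]
        if max_in_window(mock) >= target:
            hits += 1
    return hits / n_trials

# A tight group of 6 out of 10 galaxies is almost never reproduced at random,
# so the estimated probability is close to zero.
obs = [3.671, 3.673, 3.675, 3.676, 3.678, 3.680, 3.2, 3.5, 4.1, 4.4]
print(clustering_probability(obs))
```

A candidate is confirmed when this probability is below 1\% ($>99\%$ of random trials fail to match the observed concentration).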
In the same manner, since the redshift distribution in one overdense region of $g$-dropout galaxies is consistent with a random distribution (the probability is 21\%), it is not regarded as a confirmed protocluster. For the other four candidates of $r$- and $i$-dropout galaxies, we cannot conclude whether they are real protoclusters or not, because the follow-up observations of these higher-redshift candidates are insufficient. The interested reader should refer to T16 for more details. In this study, we focus on protocluster candidates at $z\gtrsim4$ because the number of known protoclusters is particularly small at these redshifts. Also, it would be observationally difficult to confirm protoclusters at $z\sim6$ ($i$-dropouts) due to the shallow $z$-band images of the CFHTLS Deep Fields. Therefore, we have made follow-up spectroscopic observations of the three overdense regions of $r$- and $g$-dropout galaxies in the D1 and D4 fields, which are termed ``D1RD01,'' ``D1GD02,'' and ``D4GD01'' in T16, respectively. Table \ref{tab:target} shows the basic information on these three target regions (e.g., R.A., Decl., and overdensity). The overdense regions of D1RD01 and D4GD01 were already observed by follow-up spectroscopy in T16, while the overdense region of D1GD02 is spectroscopically observed for the first time in this study. In the two overdense regions spectroscopically observed in T16, the slits of the previous observations were allocated to less than half of the dropout galaxies. Thus, even for the previously observed overdense regions, further follow-up spectroscopy is necessary in order to make a closer investigation and to draw firm conclusions about the possible protoclusters. We briefly summarize the follow-up spectroscopy of these three target regions below; further details are given in Section 4 of T16. \begin{description} \item[D1RD01] We have identified the redshifts of only six $r$-dropout galaxies in total. 
Although two galaxies out of the six are clustered in redshift as well as in space, this number is too small to conclude that it is a protocluster. The high overdensity of the projected number of galaxies could be attributed to the chance alignment of some small groups along the line of sight, instead of a single massive structure like a protocluster. \item[D1GD02] This region was not a target of the previous follow-up spectroscopy of T16. \item[D4GD01] The overdense region of D4GD01 is already confirmed to include a protocluster at $z=3.67$ composed of at least eleven member galaxies. An AGN is also found in this region, but it would be hard to regard it as a member of the protocluster because the AGN is located far behind the protocluster (the line-of-sight separation between the AGN and the protocluster is $\Delta z=0.05$, or $\sim8\,\mathrm{Mpc}$ in physical scale at $z=3.7$). \end{description} \subsection{Follow-up Spectroscopy} \begin{deluxetable*}{ccccccc} \tablecaption{Overview of our spectroscopic observations\tablenotemark{*} \label{tab:obs}} \tablewidth{0pt} \tablehead{Target & Date & Grism & $t_\mathrm{exp}$ (min) & Seeing & $N_\mathrm{obs}$\tablenotemark{a} & $N_\mathrm{det}$\tablenotemark{b}} \startdata D1RD01 & 2015 Sep 14 & 830G & 175 & $0\farcs8$ & 47 (4) & 15 \\ & 2016 Sep 9 \& 10 & 900ZD & 313 & $0\farcs9$ & 38 (20) & 8 \\ D1GD02 & 2015 Sep 15 & 900ZD & 157 & $0\farcs8$ & 101 & 21 \\ & 2016 Oct 28 & 900ZD & 270 & $0\farcs7$ & 85 (2) & 29 \\ D4GD01 & 2016 Oct 28 & 900ZD & 200 & $0\farcs7$ & 90 (9) & 10 \\ \enddata \tablenotetext{*}{Each row shows the information on one mask.} \tablenotetext{a}{Number of observed galaxies. The number of galaxies observed by previous observations is given in parentheses. A DEIMOS mask can typically contain $\sim90\mathrm{-}100$ slits; thus, we have also observed dropout galaxies at other redshifts as mask fillers. 
For example, the masks targeting the $r$-dropout overdense region include $g$-dropout galaxies as well. Since such mask fillers do not belong to the protocluster candidates, they are not used in this study.} \tablenotetext{b}{Number of spectroscopically detected galaxies.} \vspace{-10mm} \end{deluxetable*} Our spectroscopic observations of these overdense regions were conducted as part of the Keck Observatory programs U033D in 2015 and U130D in 2016. We used KeckII/DEIMOS in Multi-Object Spectroscopy (MOS) mode. The slits have a length of $4.0\,\mathrm{arcsec}$ at minimum and a width of $1.0\,\mathrm{arcsec}$. We used the 830G and 900ZD gratings, which have high efficiency at wavelengths of $\sim5000\mathrm{-}8000\,\mathrm{\AA}$, corresponding to the wavelength of the redshifted Ly$\alpha$ emission line of $g$- or $r$-dropout galaxies. The spectral resolution of this configuration ($2.1\mathrm{-}2.5\,\mathrm{\AA}$) is high enough to resolve the [O{\sc ii}] doublet (whose wavelength separation is $3.9\mathrm{-}5.6\,\mathrm{\AA}$ in the observed frame). The wide spectral range of DEIMOS enables us to fully cover the expected wavelength range of Ly$\alpha$ emission from $g$- or $r$-dropout galaxies, and to detect the H$\alpha$, H$\beta$, and [O{\sc iii}] emission lines simultaneously if they come from contaminating low-redshift galaxies. Therefore, we can distinguish a single Ly$\alpha$ emission line from these possible low-redshift contaminants. Furthermore, we calculate the weighted skewness, $S_w$, a good indicator of line asymmetry, to distinguish the Ly$\alpha$ emission line from the [O{\sc ii}] doublet \citep{kashikawa06}. If the doublet is detected as a single line due to low spectral resolution, it should not show a large skewness. Section 4.2 of T16 gives more details about line contaminants. We used five masks in total: two masks for the D1RD01 region, two for D1GD02, and one for D4GD01. 
The details of the configuration and sky conditions for each mask are summarized in Table \ref{tab:obs}. We gave higher priority to allocating slits to galaxies that had not been observed by the previous follow-up spectroscopy in T16. Our masks also include observed but unconfirmed galaxies in order to detect their possible faint Ly$\alpha$ emission. In particular, the mask for the overdense region of D1RD01 used in September 2016 contains many duplicated targets. The spec2d pipeline\footnote{The data-reduction pipeline was developed at the University of California, Berkeley, with support from National Science Foundation grant AST 00-71048.} is used to reduce the data taken with DEIMOS \citep{cooper12,newman13}. The pipeline involves dividing the raw images into individual slits, flat-fielding, calculating the wavelength solution, subtracting the sky background, and combining separate exposures into one spectrum with cosmic-ray removal. In addition to the science targets, slits in each mask are allocated to bright stars ($\sim20\,\mathrm{mag}$) to monitor the time variations of the seeing size and atmospheric transmission between exposures. The differences in seeing size and transparency between exposures are found to be $\lesssim0\farcs1$ and $\lesssim10\%$, respectively. Table \ref{tab:obs} also shows the seeing sizes and total exposure times; the integration time of each individual exposure is typically 20 minutes. 
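The weighted-skewness diagnostic used above to separate Ly$\alpha$ from an unresolved [O{\sc ii}] doublet can be sketched as follows. This is a minimal illustration in the spirit of \citet{kashikawa06}: the flux-weighted third-moment skewness of the line profile is multiplied by a characteristic line width, taken here as the interval where the flux exceeds 10\% of its peak. This weighting choice and the toy profiles are assumptions of the sketch, not the exact implementation used for the tabulated $S_w$ values.

```python
import math

def weighted_skewness(wave, flux):
    """Weighted skewness S_w (Angstrom): flux-weighted third-moment skewness
    of the line profile, multiplied by the wavelength interval over which the
    flux exceeds 10% of its peak (an illustrative weighting choice)."""
    total = sum(flux)
    mean = sum(w * f for w, f in zip(wave, flux)) / total
    var = sum((w - mean) ** 2 * f for w, f in zip(wave, flux)) / total
    skew = sum((w - mean) ** 3 * f for w, f in zip(wave, flux)) / (total * var ** 1.5)
    thresh = 0.1 * max(flux)
    above = [w for w, f in zip(wave, flux) if f >= thresh]
    return skew * (max(above) - min(above))

# A symmetric Gaussian line yields S_w near zero, while a Ly-alpha-like
# profile with a sharp blue cutoff and an extended red wing yields a
# clearly positive S_w.
wave = [7000 + 0.5 * i for i in range(41)]  # toy 0.5 A pixels around 7010 A
gauss = [math.exp(-0.5 * ((w - 7010) / 2.0) ** 2) for w in wave]
asym = [0.0 if w < 7010 else math.exp(-(w - 7010) / 3.0) for w in wave]
print(weighted_skewness(wave, gauss), weighted_skewness(wave, asym))
```

An unresolved [O{\sc ii}] doublet, being nearly symmetric, behaves like the Gaussian case and therefore fails a positive-$S_w$ cut.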
\begin{figure*} \epsscale{1.0} \plotone{./hist_imag_spec_T18.eps} \caption{$i$-band magnitude distributions of dropout (gray), spectroscopically-observed (blue), and Ly$\alpha$-detected galaxies (red) within a $3\,\mathrm{arcmin}$ radius of the overdensity peaks of the D1RD01 (left), D1GD02 (middle), and D4GD01 regions (right).} \label{fig:spmag} \end{figure*} Based on the one-dimensional spectra produced by the pipeline, we made a crude identification of possible emission lines with the criterion of three connected pixels having a signal-to-noise ratio ($S/N$) of more than 1.0 per pixel. Fake emission lines caused by sky residuals and ghosts are removed by visual inspection of the two-dimensional spectra. Because the position predicted from the mask design can be shifted by up to a few pixels, the one-dimensional spectra are manually re-extracted so that we can correctly trace the object positions. Then, we estimate the $S/N$ of each emission line by integrating all pixels in the line profile, and insignificant lines with $S/N<3$ are removed from the sample of detected emission lines. For flux calibration, we observed the spectroscopic standard star Feige15 or BD+28d4211 each night with a $1.0\,\mathrm{arcsec}$ long slit and the same grating as the science targets. The standard stars are reduced in the same way as the science targets, and the slit loss is corrected based on the ratio between the slit width and the seeing size, assuming the light profile to be a Gaussian whose width is the seeing size. The sensitivity as a function of wavelength is estimated by the IRAF tasks {\sf standard} and {\sf sensfunc} with the correction of airmass and extinction, and applied to the science targets by the IRAF task {\sf fluxcalib}. In this study, we have allocated 361 slits to dropout galaxies in the overdense regions of $r$- and $g$-dropout galaxies, and 83 galaxies are newly confirmed by the detection of their Ly$\alpha$ emission lines (Table \ref{tab:obs}). 
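The two-step line detection described above (three connected pixels at $S/N>1.0$ per pixel, followed by an integrated $S/N\ge3$ cut) can be sketched as below. The toy spectrum and the quadrature noise sum are illustrative assumptions; the actual measurement also involves the visual inspection and re-extraction steps described in the text.

```python
import math

def find_emission_lines(flux, noise, pix_snr=1.0, min_run=3, int_snr=3.0):
    """Return (start, end, integrated S/N) for each candidate emission line:
    runs of >= min_run consecutive pixels with per-pixel S/N > pix_snr,
    kept only if the flux integrated over the run has S/N >= int_snr."""
    lines = []
    i, n = 0, len(flux)
    while i < n:
        if flux[i] / noise[i] > pix_snr:
            j = i
            while j < n and flux[j] / noise[j] > pix_snr:
                j += 1
            if j - i >= min_run:
                sn = sum(flux[i:j]) / math.sqrt(sum(v * v for v in noise[i:j]))
                if sn >= int_snr:
                    lines.append((i, j, sn))
            i = j
        else:
            i += 1
    return lines

# Toy spectrum: a flat, faint continuum with one planted line at pixels 10-14.
noise = [1.0] * 30
flux = [0.2] * 30
for k, v in zip(range(10, 15), [1.5, 3.0, 5.0, 3.0, 1.5]):
    flux[k] = v
print(find_emission_lines(flux, noise))
```

The per-pixel threshold catches faint candidates, while the integrated cut removes insignificant ones, mirroring the two stages in the text.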
Although our spectroscopy may detect continuum emission from bright galaxies ($\lesssim24\,\mathrm{mag}$), its $S/N$ is not high enough to identify absorption lines, and we cannot precisely determine redshifts from the Lyman break. Therefore, it should be noted that we can spectroscopically confirm only dropout galaxies having Ly$\alpha$ emission. Combined with the previous work of T16, the total numbers of spectroscopically observed (confirmed) galaxies are 76 (29), 184 (50), and 224 (52) in the overdense regions of D1RD01, D1GD02, and D4GD01, respectively. Figure \ref{fig:spmag} shows the magnitude distributions of the dropout and spectroscopically observed/confirmed galaxies in the three target regions. The fractions of spectroscopically observed galaxies among the dropout galaxies located within a $3\,\mathrm{arcmin}$ radius of the overdensity peak are 93\% ($=37/40$), 32\% ($=49/153$), and 53\% ($=81/153$) for the D1RD01, D1GD02, and D4GD01 regions, respectively. Based on the Kolmogorov-Smirnov (KS) test, the magnitude distributions of the spectroscopically observed/confirmed galaxies are consistent with that of the dropout galaxies located in the overdense regions (the $p$-values of the KS test are $p_\mathrm{KS}>0.5$ for all combinations). There is no clear contamination by low-redshift galaxies in our follow-up spectroscopy. However, since the possible contaminants of the color selection of dropout galaxies are mainly dwarf stars or quiescent galaxies rather than H$\alpha$, [O{\sc ii}], or [O{\sc iii}] emitters, continuum detections with high $S/N$ would be needed to spectroscopically confirm them. Although faint low-redshift galaxies or dwarf stars could contaminate the sample of unconfirmed dropout galaxies, the contamination rate expected from the color selection of dropout galaxies is up to $\sim5\%$ at worst (refer to T16 for the details). We use only confirmed dropout galaxies in the following analysis. 
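The two-sample KS comparison of magnitude distributions can be reproduced with a short routine. The asymptotic $p$-value series below is the standard approximation; it is a choice of this sketch, not necessarily the implementation behind the quoted $p_\mathrm{KS}$ values, and the magnitude arrays are hypothetical.

```python
import math

def ks_2samp(a, b):
    """Two-sample Kolmogorov-Smirnov test: returns (D, approximate p-value).
    D is the maximum distance between the two empirical CDFs; the p-value
    uses the standard asymptotic series (adequate for n >~ 10 per sample)."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    d = 0.0
    for x in a + b:
        fa = sum(1 for v in a if v <= x) / na
        fb = sum(1 for v in b if v <= x) / nb
        d = max(d, abs(fa - fb))
    ne = na * nb / (na + nb)
    lam = (math.sqrt(ne) + 0.12 + 0.11 / math.sqrt(ne)) * d
    if lam < 1e-3:
        return d, 1.0
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, min(max(p, 0.0), 1.0)

mags_all = [24.0 + 0.1 * i for i in range(30)]  # hypothetical dropout magnitudes
mags_obs = mags_all[::2]                        # an unbiased subsample
d, p = ks_2samp(mags_all, mags_obs)
print(round(d, 3), p > 0.5)
```

An unbiased subsample, as in this example, gives $p_\mathrm{KS}>0.5$, whereas a magnitude-biased subsample would drive $p_\mathrm{KS}$ toward zero.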
The observed properties of the newly confirmed galaxies, such as redshift, Ly$\alpha$ luminosity ($L_\mathrm{Ly\alpha}$), UV absolute magnitude ($M_\mathrm{UV}$), and rest-frame Ly$\alpha$ equivalent width ($EW_0$), are listed in Table \ref{tab:spec}, and their one- and two-dimensional spectra are shown in Figure \ref{fig:spec}. Their IDs are continued from T16; thus, ID=1-6 in the D1RD01 region and ID=1-42 in the D4GD01 region are also shown in Table 4 and Figure 9 of T16. The redshifts are derived from the peak wavelength of the Ly$\alpha$ emission line, assuming the rest wavelength of Ly$\alpha$ to be 1215.6\,{\AA}. These measurements could be overestimated if there is a galactic outflow. When emission lines are located near strong sky lines, the position of the peak can also be shifted. These effects of sky lines and the wavelength resolution are taken into account when estimating the uncertainty. The observed line flux, $f_\mathrm{Ly\alpha}$, corresponds to the total flux within the line profile. The slit loss is corrected based on the ratio of the slit width to the seeing size for each observation, and its uncertainty is estimated from the combination of the line width and the noise level at wavelengths blueward of Ly$\alpha$. Since the continuum flux is too faint to be detected in the observed spectra, $M_\mathrm{UV}$ is estimated from the broadband photometry. It is derived from the $r$-band ($i$-band) magnitudes for $g$-dropout ($r$-dropout) galaxies after correcting for the absorption by the intergalactic medium (IGM) and the contribution of the Ly$\alpha$ emission. In this calculation, we assume flat UV spectra ($f_\lambda\propto\lambda^\beta$ with $\beta=-2$) and the IGM transmission model of \citet{madau95}. It should be noted that the shape of UV spectra can differ according to various galaxy properties \citep[e.g., dust, age, or metallicity;][]{bouwens12}, and the IGM absorption also varies depending on the line of sight \citep{thomas17}. 
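The conversion from line flux and broadband magnitude to $EW_0$ can be sketched as follows. Unlike the calculation in the text, this minimal version omits the IGM and Ly$\alpha$-contribution corrections to the broadband magnitude, so it only approximates the tabulated values.

```python
LYA_REST = 1215.6  # rest wavelength of Ly-alpha adopted in the text (Angstrom)
C_AAS = 2.998e18   # speed of light in Angstrom/s

def rest_frame_ew(f_lya, mag_ab, z):
    """Rest-frame Ly-alpha equivalent width (Angstrom) from the line flux
    (erg/s/cm^2) and a broadband AB magnitude redward of the line.
    Assumes a beta = -2 UV slope (flat in f_nu) and, unlike the text,
    neglects the IGM and line-contribution corrections to the magnitude."""
    f_nu = 10.0 ** (-0.4 * (mag_ab + 48.6))   # erg/s/cm^2/Hz
    lam_obs = LYA_REST * (1.0 + z)            # observed line wavelength
    f_lam = f_nu * C_AAS / lam_obs ** 2       # continuum flux density at the line
    return (f_lya / f_lam) / (1.0 + z)        # observed EW divided by (1+z)

# Object 10 in D1RD01 (m_i = 24.21, z = 4.648, f_Lya = 20.07e-18) gives an
# EW_0 of roughly 7 Angstrom, close to the tabulated 7.22 Angstrom.
print(round(rest_frame_ew(20.07e-18, 24.21, 4.648), 1))
```

The small residual offset from the tabulated value reflects exactly the IGM and Ly$\alpha$ corrections that this sketch omits.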
Although it is difficult to predict physical properties with these UV spectra, an observed broad-band magnitude can be converted into pure continuum flux with the spectroscopic measurements of $f_\mathrm{Ly\alpha}$ and redshift. We have confirmed that $M_\mathrm{UV}$ changes $\sim5\%$ when UV slope or IGM transmission fluctuate between $\pm1.0$ or $\pm15\%$ respectively. This systematic error is smaller than the photometric error of our dataset. In addition, $EW_0$ is estimated by combining $f_\mathrm{Ly\alpha}$ and $M_\mathrm{UV}$. \startlongtable \begin{deluxetable*}{cccccccccc} \tabletypesize{\scriptsize} \tablecaption{Observed properties of spectroscopically confirmed dropout galaxies. \label{tab:spec}} \tablewidth{0pt} \tablehead{ \colhead{ID} & \colhead{R.A.} & \colhead{Decl.} & \colhead{$m_i$} & \colhead{Redshift} & \colhead{$M_\mathrm{UV}$} & \colhead{$f_\mathrm{Ly\alpha}$} & \colhead{$L_\mathrm{Ly\alpha}$} & \colhead{$EW_0$} & \colhead{$S_w$} \\ \colhead{} & \colhead{(J2000)} & \colhead{(J2000)} & \colhead{(mag)} & \colhead{} & \colhead{(mag)} & \colhead{($10^{-18}\,\mathrm{erg\,s^{-1}\,cm^{-2}}$)} & \colhead{($10^{42}\,\mathrm{erg\,s^{-1}}$)} & \colhead{(\AA)} & \colhead{(\AA)} } \startdata \multicolumn{10}{c}{D1RD01 (23 galaxies)} \\ 7 & 02:24:33.40 & -04:57:58.4 & $26.92\pm0.09$ & $4.473^{+0.001}_{-0.001}$ & $-19.33\pm0.23$ & $1.90\pm0.30$ & $0.38\pm0.06$ & $8.00\pm2.25$ & $10.11\pm1.99$ \\ 8 & 02:24:58.59 & -04:56:25.6 & $26.36\pm0.06$ & $4.602^{+0.001}_{-0.001}$ & $-19.93\pm0.14$ & $3.22\pm0.48$ & $0.69\pm0.10$ & $8.34\pm1.71$ & $4.15\pm0.80$ \\ 9 & 02:25:24.73 & -04:53:10.1 & $25.13\pm0.02$ & $4.635^{+0.001}_{-0.001}$ & $-21.17\pm0.05$ & $6.39\pm0.89$ & $1.39\pm0.19$ & $5.38\pm0.79$ & $4.44\pm1.79$ \\ 10 & 02:24:52.51 & -04:56:08.5 & $24.21\pm0.01$ & $4.648^{+0.001}_{-0.001}$ & $-22.10\pm0.02$ & $20.07\pm1.02$ & $4.40\pm0.22$ & $7.22\pm0.39$ & $9.68\pm0.56$ \\ 11 & 02:25:20.08 & -04:52:54.8 & $26.91\pm0.09$ & $4.671^{+0.001}_{-0.001}$ & 
$-19.41\pm0.23$ & $1.06\pm0.30$ & $0.24\pm0.07$ & $4.62\pm1.69$ & $8.26\pm9.97$ \\ 12 & 02:25:33.61 & -04:56:43.1 & $26.37\pm0.06$ & $4.724^{+0.001}_{-0.001}$ & $-19.96\pm0.15$ & $4.01\pm0.60$ & $0.91\pm0.14$ & $10.80\pm2.24$ & $4.15\pm1.55$ \\ 13 & 02:24:31.90 & -04:55:46.6 & $26.00\pm0.04$ & $4.850^{+0.001}_{-0.001}$ & $-20.12\pm0.14$ & $21.90\pm1.45$ & $5.31\pm0.35$ & $53.77\pm8.25$ & $10.47\pm1.02$ \\ 14 & 02:24:30.17 & -04:55:59.5 & $25.97\pm0.04$ & $4.851^{+0.001}_{-0.001}$ & $-20.24\pm0.13$ & $17.51\pm1.17$ & $4.24\pm0.28$ & $38.77\pm5.46$ & $9.84\pm0.99$ \\ 15 & 02:24:52.95 & -04:57:56.2 & $25.96\pm0.04$ & $4.889^{+0.001}_{-0.001}$ & $-20.31\pm0.12$ & $15.23\pm0.56$ & $3.76\pm0.14$ & $31.99\pm4.03$ & $9.31\pm0.63$ \\ 16 & 02:25:32.22 & -04:55:40.0 & $26.65\pm0.07$ & $4.892^{+0.001}_{-0.001}$ & $-19.87\pm0.19$ & $1.12\pm0.27$ & $0.28\pm0.07$ & $3.56\pm1.08$ & $3.06\pm2.01$ \\ 17 & 02:24:47.88 & -04:54:28.9 & $26.52\pm0.07$ & $4.898^{+0.001}_{-0.001}$ & $-19.71\pm0.21$ & $11.60\pm1.03$ & $2.88\pm0.26$ & $42.64\pm10.04$ & $7.27\pm0.97$ \\ 18 & 02:24:51.79 & -04:54:56.7 & $26.18\pm0.05$ & $4.907^{+0.001}_{-0.001}$ & $-20.33\pm0.13$ & $2.14\pm0.44$ & $0.53\pm0.11$ & $4.46\pm1.07$ & $4.11\pm2.44$ \\ 19 & 02:24:32.96 & -04:55:05.0 & $26.51\pm0.06$ & $4.914^{+0.001}_{-0.001}$ & $-19.79\pm0.20$ & $9.96\pm0.87$ & $2.49\pm0.22$ & $34.43\pm7.65$ & $9.55\pm1.19$ \\ 20 & 02:25:16.85 & -04:57:01.3 & $25.95\pm0.04$ & $4.943^{+0.001}_{-0.001}$ & $-20.62\pm0.10$ & $2.46\pm0.36$ & $0.62\pm0.09$ & $4.00\pm0.71$ & $4.59\pm1.22$ \\ 21 & 02:25:20.12 & -04:53:10.0 & $26.55\pm0.07$ & $4.949^{+0.001}_{-0.001}$ & $-19.94\pm0.18$ & $4.40\pm0.70$ & $1.12\pm0.18$ & $13.37\pm3.26$ & $2.22\pm1.07$ \\ 22 & 02:25:26.32 & -04:54:32.8 & $26.50\pm0.06$ & $4.958^{+0.002}_{-0.001}$ & $-19.54\pm0.27$ & $20.08\pm0.85$ & $5.13\pm0.22$ & $88.61\pm24.87$ & $5.38\pm0.32$ \\ 23 & 02:24:42.76 & -04:55:45.3 & $26.56\pm0.07$ & $5.056^{+0.001}_{-0.001}$ & $-20.06\pm0.20$ & $6.80\pm0.70$ & $1.82\pm0.19$ & 
$19.49\pm4.35$ & $6.61\pm1.21$ \\ 24 & 02:25:16.35 & -04:55:04.9 & $25.70\pm0.03$ & $5.090^{+0.001}_{-0.001}$ & $-21.01\pm0.09$ & $12.58\pm1.31$ & $3.41\pm0.35$ & $15.26\pm2.07$ & $6.29\pm0.88$ \\ 25 & 02:25:32.46 & -04:54:37.8 & $26.51\pm0.06$ & $5.107^{+0.001}_{-0.001}$ & $-20.25\pm0.18$ & $4.08\pm0.81$ & $1.11\pm0.22$ & $10.04\pm2.67$ & $5.92\pm2.33$ \\ 26 & 02:25:18.42 & -04:55:53.6 & $26.61\pm0.07$ & $5.173^{+0.001}_{-0.001}$ & $-20.23\pm0.20$ & $5.39\pm0.47$ & $1.52\pm0.13$ & $13.97\pm3.03$ & $10.67\pm1.41$ \\ 27 & 02:25:17.94 & -04:57:24.9 & $26.86\pm0.09$ & $5.402^{+0.001}_{-0.001}$ & $-19.83\pm0.38$ & $19.41\pm1.23$ & $6.05\pm0.38$ & $80.58\pm34.20$ & $3.96\pm0.88$ \\ 28 & 02:25:24.23 & -04:54:25.4 & $26.75\pm0.08$ & $5.402^{+0.001}_{-0.001}$ & $-20.26\pm0.27$ & $12.94\pm0.78$ & $4.04\pm0.24$ & $36.05\pm10.33$ & $4.32\pm0.70$ \\ 29 & 02:25:33.51 & -04:54:16.7 & $27.07\pm0.11$ & $5.470^{+0.001}_{-0.001}$ & $-20.19\pm0.32$ & $7.26\pm0.54$ & $2.33\pm0.17$ & $22.25\pm7.70$ & $8.17\pm0.89$ \\ \hline \multicolumn{10}{c}{D1GD02 (50 galaxies)} \\ 1 & 02:26:02.53 & -04:49:03.0 & $26.20\pm0.05$ & $3.636^{+0.001}_{-0.001}$ & $-19.70\pm0.08$ & $4.42\pm0.72$ & $0.54\pm0.09$ & $8.11\pm1.47$ & $3.35\pm1.14$ \\ 2 & 02:25:59.84 & -04:50:37.0 & $25.51\pm0.03$ & $3.638^{+0.001}_{-0.001}$ & $-20.54\pm0.04$ & $84.45\pm1.35$ & $10.36\pm0.17$ & $71.69\pm2.88$ & $8.18\pm0.49$ \\ 3 & 02:25:11.74 & -04:47:47.8 & $25.80\pm0.03$ & $3.674^{+0.001}_{-0.001}$ & $-20.02\pm0.06$ & $3.25\pm0.46$ & $0.41\pm0.06$ & $4.56\pm0.70$ & $5.56\pm0.97$ \\ 4 & 02:25:18.03 & -04:49:13.7 & $27.01\pm0.10$ & $3.679^{+0.001}_{-0.001}$ & $-18.43\pm0.26$ & $3.74\pm0.69$ & $0.47\pm0.09$ & $22.71\pm7.36$ & $4.24\pm1.82$ \\ 5 & 02:25:49.68 & -04:48:13.9 & $26.87\pm0.09$ & $3.681^{+0.001}_{-0.001}$ & $-19.08\pm0.15$ & $3.97\pm0.81$ & $0.50\pm0.10$ & $13.30\pm3.33$ & $2.96\pm0.98$ \\ 6 & 02:25:23.34 & -04:45:53.9 & $26.55\pm0.07$ & $3.712^{+0.001}_{-0.001}$ & $-19.21\pm0.14$ & $2.54\pm0.43$ & $0.33\pm0.05$ & 
$7.71\pm1.66$ & $7.25\pm1.35$ \\ 7 & 02:25:42.21 & -04:50:11.4 & $26.21\pm0.05$ & $3.715^{+0.001}_{-0.001}$ & $-19.62\pm0.10$ & $6.08\pm0.93$ & $0.78\pm0.12$ & $12.63\pm2.26$ & $7.51\pm1.81$ \\ 8 & 02:25:52.18 & -04:51:13.7 & $25.62\pm0.03$ & $3.736^{+0.001}_{-0.001}$ & $-20.38\pm0.05$ & $21.77\pm1.18$ & $2.85\pm0.15$ & $22.74\pm1.62$ & $6.37\pm0.54$ \\ 9 & 02:26:01.69 & -04:47:38.7 & $26.04\pm0.04$ & $3.742^{+0.001}_{-0.001}$ & $-19.83\pm0.08$ & $4.57\pm0.89$ & $0.60\pm0.12$ & $7.96\pm1.66$ & $6.81\pm1.61$ \\ 10 & 02:25:21.96 & -04:50:39.9 & $26.93\pm0.10$ & $3.754^{+0.001}_{-0.001}$ & $-19.02\pm0.17$ & $5.14\pm0.91$ & $0.68\pm0.12$ & $19.03\pm4.62$ & $5.60\pm1.56$ \\ 11 & 02:25:23.21 & -04:49:28.5 & $26.40\pm0.06$ & $3.759^{+0.001}_{-0.001}$ & $-19.52\pm0.11$ & $3.27\pm0.64$ & $0.43\pm0.08$ & $7.68\pm1.70$ & $4.70\pm5.55$ \\ 12 & 02:26:02.08 & -04:52:07.2 & $26.72\pm0.08$ & $3.805^{+0.001}_{-0.001}$ & $-19.08\pm0.17$ & $8.87\pm0.96$ & $1.21\pm0.13$ & $31.99\pm6.39$ & $4.27\pm1.43$ \\ 13 & 02:25:49.65 & -04:50:47.4 & $26.83\pm0.09$ & $3.818^{+0.001}_{-0.001}$ & $-19.18\pm0.16$ & $14.51\pm1.28$ & $2.00\pm0.18$ & $48.06\pm8.57$ & $7.21\pm1.33$ \\ 14 & 02:25:44.86 & -04:49:51.6 & $26.75\pm0.08$ & $3.819^{+0.001}_{-0.001}$ & $-19.17\pm0.16$ & $5.55\pm0.79$ & $0.76\pm0.11$ & $18.68\pm3.96$ & $4.11\pm1.00$ \\ 15 & 02:25:44.45 & -04:48:37.0 & $26.30\pm0.05$ & $3.825^{+0.001}_{-0.001}$ & $-19.59\pm0.11$ & $5.70\pm1.02$ & $0.79\pm0.14$ & $13.04\pm2.72$ & $7.71\pm1.43$ \\ 16 & 02:25:40.97 & -04:49:30.9 & $26.94\pm0.10$ & $3.839^{+0.001}_{-0.001}$ & $-19.00\pm0.19$ & $9.68\pm0.80$ & $1.35\pm0.11$ & $38.41\pm7.86$ & $5.27\pm0.85$ \\ 17 & 02:25:55.72 & -04:50:06.2 & $26.12\pm0.04$ & $3.854^{+0.001}_{-0.001}$ & $-19.89\pm0.09$ & $8.36\pm0.92$ & $1.18\pm0.13$ & $14.80\pm2.05$ & $4.58\pm0.72$ \\ 18 & 02:25:48.95 & -04:51:29.9 & $27.07\pm0.11$ & $3.855^{+0.001}_{-0.001}$ & $-18.85\pm0.22$ & $31.52\pm1.60$ & $4.44\pm0.22$ & $145.71\pm32.86$ & $8.54\pm0.65$ \\ 19 & 02:25:45.69 & 
-04:50:26.7 & $25.16\pm0.02$ & $3.879^{+0.001}_{-0.001}$ & $-20.68\pm0.04$ & $22.96\pm1.26$ & $3.28\pm0.18$ & $20.00\pm1.38$ & $3.44\pm0.73$ \\ 20 & 02:25:33.56 & -04:49:31.9 & $27.13\pm0.11$ & $3.890^{+0.001}_{-0.001}$ & $-18.72\pm0.24$ & $3.07\pm0.71$ & $0.44\pm0.10$ & $16.26\pm5.56$ & $3.75\pm1.05$ \\ \hline \multicolumn{10}{c}{D1GD02 (50 galaxies)} \\ 21 & 02:25:42.44 & -04:51:13.2 & $26.91\pm0.09$ & $3.896^{+0.001}_{-0.001}$ & $-18.95\pm0.20$ & $6.79\pm0.66$ & $0.98\pm0.09$ & $29.23\pm6.63$ & $7.86\pm0.90$ \\ 22 & 02:25:43.65 & -04:49:41.9 & $26.91\pm0.09$ & $3.897^{+0.001}_{-0.001}$ & $-19.31\pm0.15$ & $5.38\pm0.82$ & $0.78\pm0.12$ & $16.61\pm3.51$ & $3.36\pm0.75$ \\ 23 & 02:25:51.29 & -04:49:26.1 & $25.98\pm0.04$ & $3.910^{+0.001}_{-0.001}$ & $-20.02\pm0.09$ & $44.70\pm2.76$ & $6.52\pm0.40$ & $72.73\pm7.50$ & $9.63\pm0.60$ \\ 24 & 02:25:18.82 & -04:50:19.4 & $27.17\pm0.12$ & $3.961^{+0.001}_{-0.001}$ & $-18.78\pm0.25$ & $8.84\pm1.17$ & $1.33\pm0.18$ & $46.51\pm13.62$ & $3.92\pm0.78$ \\ 25 & 02:26:00.37 & -04:51:42.6 & $26.20\pm0.05$ & $3.976^{+0.001}_{-0.001}$ & $-19.78\pm0.11$ & $17.44\pm1.68$ & $2.64\pm0.25$ & $36.61\pm5.28$ & $6.08\pm0.91$ \\ 26 & 02:25:32.24 & -04:50:36.8 & $27.08\pm0.11$ & $3.979^{+0.001}_{-0.001}$ & $-18.57\pm0.30$ & $4.71\pm0.97$ & $0.72\pm0.15$ & $30.16\pm11.57$ & $4.40\pm1.84$ \\ 27 & 02:26:10.02 & -04:49:55.6 & $26.40\pm0.06$ & $3.979^{+0.001}_{-0.001}$ & $-19.78\pm0.11$ & $3.75\pm0.74$ & $0.57\pm0.11$ & $7.92\pm1.77$ & $3.63\pm0.95$ \\ 28 & 02:25:56.95 & -04:52:00.6 & $26.76\pm0.08$ & $4.008^{+0.001}_{-0.001}$ & $-18.88\pm0.25$ & $10.97\pm1.52$ & $1.69\pm0.23$ & $53.76\pm15.64$ & $6.23\pm1.80$ \\ 29 & 02:25:30.80 & -04:50:08.0 & $26.34\pm0.06$ & $4.032^{+0.001}_{-0.001}$ & $-19.22\pm0.19$ & $14.39\pm1.34$ & $2.26\pm0.21$ & $52.69\pm11.12$ & $6.40\pm0.86$ \\ 30 & 02:26:02.73 & -04:47:59.7 & $27.06\pm0.11$ & $4.038^{+0.001}_{-0.001}$ & $-19.18\pm0.20$ & $8.21\pm1.16$ & $1.29\pm0.18$ & $31.10\pm7.66$ & $9.22\pm2.40$ \\ 31 & 
02:25:42.96 & -04:49:06.8 & $26.65\pm0.07$ & $4.039^{+0.001}_{-0.001}$ & $-19.45\pm0.16$ & $4.91\pm0.74$ & $0.77\pm0.12$ & $14.61\pm3.19$ & $3.70\pm1.12$ \\ 32 & 02:25:48.14 & -04:50:14.6 & $26.97\pm0.10$ & $4.048^{+0.001}_{-0.001}$ & $-18.98\pm0.24$ & $8.11\pm1.57$ & $1.28\pm0.25$ & $37.22\pm11.60$ & $5.40\pm2.27$ \\ 33 & 02:25:51.19 & -04:49:15.3 & $26.30\pm0.05$ & $4.121^{+0.001}_{-0.001}$ & $-19.86\pm0.12$ & $16.19\pm1.20$ & $2.67\pm0.20$ & $34.34\pm4.79$ & $6.43\pm0.60$ \\ 34 & 02:25:40.55 & -04:49:04.0 & $26.19\pm0.05$ & $4.131^{+0.001}_{-0.001}$ & $-19.88\pm0.12$ & $9.75\pm0.77$ & $1.62\pm0.13$ & $20.46\pm2.87$ & $3.66\pm0.49$ \\ 35 & 02:25:26.36 & -04:50:34.1 & $27.00\pm0.10$ & $4.155^{+0.001}_{-0.001}$ & $-19.19\pm0.22$ & $5.97\pm0.80$ & $1.00\pm0.14$ & $23.99\pm6.36$ & $5.14\pm1.33$ \\ 36 & 02:25:16.65 & -04:49:46.8 & $26.87\pm0.09$ & $4.156^{+0.001}_{-0.001}$ & $-19.23\pm0.22$ & $9.12\pm1.56$ & $1.54\pm0.26$ & $35.44\pm9.97$ & $6.45\pm1.35$ \\ 37 & 02:26:09.32 & -04:51:25.2 & $26.71\pm0.08$ & $4.209^{+0.001}_{-0.001}$ & $-19.25\pm0.23$ & $5.47\pm1.03$ & $0.95\pm0.18$ & $21.43\pm6.50$ & $3.27\pm9.24$ \\ 38 & 02:25:10.34 & -04:48:34.8 & $26.06\pm0.04$ & $4.250^{+0.001}_{-0.001}$ & $-20.25\pm0.10$ & $20.04\pm1.24$ & $3.56\pm0.22$ & $32.14\pm3.79$ & $11.03\pm0.78$ \\ 39 & 02:25:29.32 & -04:47:49.8 & $26.40\pm0.06$ & $4.278^{+0.001}_{-0.001}$ & $-19.91\pm0.15$ & $21.27\pm1.37$ & $3.84\pm0.25$ & $47.48\pm7.55$ & $9.91\pm0.63$ \\ 40 & 02:25:50.08 & -04:50:27.9 & $25.82\pm0.03$ & $4.313^{+0.001}_{-0.001}$ & $-20.14\pm0.13$ & $6.75\pm1.22$ & $1.24\pm0.22$ & $12.37\pm2.70$ & $2.55\pm1.45$ \\ 41 & 02:26:06.44 & -04:49:58.8 & $25.01\pm0.02$ & $4.314^{+0.001}_{-0.001}$ & $-21.00\pm0.06$ & $15.67\pm1.92$ & $2.88\pm0.35$ & $13.01\pm1.75$ & $5.77\pm1.19$ \\ 42 & 02:25:37.04 & -04:48:35.4 & $26.20\pm0.05$ & $4.318^{+0.001}_{-0.001}$ & $-19.91\pm0.15$ & $16.81\pm1.50$ & $3.10\pm0.28$ & $38.10\pm6.72$ & $5.98\pm1.09$ \\ 43 & 02:25:29.80 & -04:50:37.2 & $26.25\pm0.05$ & 
$4.321^{+0.001}_{-0.001}$ & $-20.03\pm0.14$ & $3.68\pm0.71$ & $0.68\pm0.13$ & $7.48\pm1.76$ & $2.64\pm0.77$ \\ 44 & 02:25:17.87 & -04:47:58.1 & $27.12\pm0.11$ & $4.363^{+0.001}_{-0.001}$ & $-18.93\pm0.38$ & $5.83\pm1.44$ & $1.10\pm0.27$ & $33.49\pm16.12$ & $5.23\pm1.65$ \\ 45 & 02:25:21.37 & -04:46:50.2 & $26.10\pm0.04$ & $4.371^{+0.001}_{-0.001}$ & $-20.10\pm0.14$ & $24.74\pm1.26$ & $4.69\pm0.24$ & $48.50\pm7.26$ & $8.80\pm0.48$ \\ 46 & 02:25:34.96 & -04:50:25.5 & $26.16\pm0.05$ & $4.436^{+0.001}_{-0.001}$ & $-19.77\pm0.21$ & $12.31\pm1.97$ & $2.42\pm0.39$ & $34.01\pm9.13$ & $6.12\pm1.65$ \\ 47 & 02:25:36.93 & -04:49:27.1 & $26.09\pm0.04$ & $4.442^{+0.001}_{-0.001}$ & $-20.20\pm0.15$ & $13.10\pm1.36$ & $2.58\pm0.27$ & $24.31\pm4.29$ & $7.61\pm0.92$ \\ 48 & 02:25:11.51 & -04:48:28.9 & $26.78\pm0.08$ & $4.471^{+0.001}_{-0.001}$ & $-19.12\pm0.37$ & $35.58\pm1.71$ & $7.12\pm0.34$ & $182.13\pm75.23$ & $6.68\pm0.46$ \\ 49 & 02:25:11.85 & -04:50:18.0 & $27.15\pm0.12$ & $4.534^{+0.001}_{-0.001}$ & $-19.16\pm0.39$ & $13.66\pm1.83$ & $2.83\pm0.38$ & $69.58\pm31.77$ & $6.02\pm0.93$ \\ 50 & 02:25:45.42 & -04:50:13.7 & $26.73\pm0.08$ & $4.638^{+0.001}_{-0.001}$ & $-19.72\pm0.30$ & $15.65\pm1.31$ & $3.41\pm0.29$ & $50.01\pm16.57$ & $5.87\pm0.77$ \\ \hline \multicolumn{10}{c}{D4GD01 (10 galaxies)} \\ 43 & 22:16:56.09 & -17:19:39.0 & $25.04\pm0.02$ & $3.649^{+0.001}_{-0.001}$ & $-20.68\pm0.04$ & $4.24\pm0.74$ & $0.52\pm0.09$ & $3.18\pm0.57$ & $4.74\pm1.11$ \\ 44 & 22:16:43.36 & -17:16:37.9 & $26.13\pm0.05$ & $3.678^{+0.001}_{-0.001}$ & $-19.89\pm0.08$ & $7.18\pm1.01$ & $0.90\pm0.13$ & $11.32\pm1.81$ & $5.52\pm0.89$ \\ 45 & 22:16:48.25 & -17:20:18.9 & $27.01\pm0.12$ & $3.679^{+0.001}_{-0.001}$ & $-18.72\pm0.22$ & $1.31\pm0.43$ & $0.16\pm0.05$ & $6.10\pm2.43$ & $5.10\pm11.14$ \\ 46 & 22:16:48.21 & -17:21:21.8 & $26.16\pm0.05$ & $3.719^{+0.001}_{-0.001}$ & $-19.96\pm0.08$ & $9.00\pm1.15$ & $1.16\pm0.15$ & $13.67\pm2.02$ & $5.91\pm0.96$ \\ 47 & 22:16:56.07 & -17:15:31.9 & 
$25.68\pm0.03$ & $3.736^{+0.001}_{-0.001}$ & $-20.13\pm0.07$ & $3.64\pm0.82$ & $0.48\pm0.11$ & $4.78\pm1.12$ & $3.98\pm1.88$ \\ 48 & 22:17:10.34 & -17:27:41.6 & $25.55\pm0.03$ & $3.988^{+0.001}_{-0.001}$ & $-20.73\pm0.05$ & $5.20\pm1.22$ & $0.79\pm0.19$ & $4.60\pm1.10$ & $5.93\pm2.48$ \\ 49 & 22:16:47.70 & -17:22:27.6 & $26.38\pm0.07$ & $4.070^{+0.001}_{-0.001}$ & $-19.58\pm0.16$ & $9.87\pm1.33$ & $1.58\pm0.21$ & $26.44\pm5.50$ & $7.73\pm1.38$ \\ 50 & 22:16:47.71 & -17:21:36.2 & $27.05\pm0.12$ & $4.095^{+0.001}_{-0.001}$ & $-19.24\pm0.21$ & $5.40\pm1.12$ & $0.88\pm0.18$ & $20.04\pm6.02$ & $-2.14\pm6.22$ \\ 51 & 22:16:53.40 & -17:21:26.0 & $26.30\pm0.06$ & $4.288^{+0.001}_{-0.001}$ & $-19.54\pm0.22$ & $6.38\pm1.33$ & $1.16\pm0.24$ & $20.06\pm6.08$ & $2.77\pm1.05$ \\ 52 & 22:16:55.57 & -17:30:05.8 & $26.67\pm0.09$ & $4.546^{+0.001}_{-0.001}$ & $-19.76\pm0.26$ & $3.51\pm1.08$ & $0.73\pm0.22$ & $10.35\pm4.26$ & $2.70\pm1.30$ \\ \enddata \end{deluxetable*} \begin{figure*} \epsscale{1.1} \plotone{./multi_spec_all_T18_1.eps} \vspace{-3mm} \caption{Spectra of all dropout galaxies having Ly$\alpha$ emission lines. The field and object IDs are indicated at the upper left corner (column 1 of Table \ref{tab:spec}). The vertical and horizontal dashed lines show the wavelength of Ly$\alpha$ emission and the zero level of flux, respectively.} \label{fig:spec} \end{figure*} \setcounter{figure}{1} \begin{figure*} \epsscale{1.1} \plotone{./multi_spec_all_T18_2.eps} \vspace{-3mm} \caption{(Continued.)} \end{figure*} \section{Results \label{sec:res}} In the following subsections, we will investigate whether or not there are protoclusters in each overdense region based on both the spatial and redshift concentrations of galaxies.
First, in Sections \ref{sec:D1RD01}, \ref{sec:D1GD02}, and \ref{sec:D4GD01}, we perform a statistical test to see whether the number of galaxies contained in a three-dimensional region is high enough to discard the possibility that the overdensity is due to a random fluctuation. In this test, we set the significance level at 5\%. Then, in Section \ref{sec:sum}, we perform a theoretical comparison in order to connect the observed galaxy concentrations to halo masses. If the descendant halo masses at $z=0$ of such significant galaxy concentrations are expected to be $>10^{14}\,\mathrm{M_\sun}$, we will be able to confirm the existence of protoclusters. In this process of protocluster confirmation, we need to set some criteria and assumptions. The first is the box size used to calculate the three-dimensional galaxy overdensity, while the second is related to the observational bias of the galaxy populations used to trace the underlying structures. We discuss these criteria and possible biases for the estimate of the three-dimensional overdensity in the following. In T16, we estimated the expected distribution of protocluster galaxies based on the light-cone models constructed by \citet{henriques12}. The typical sizes of protoclusters in redshift and spatial coordinates ($L_z$, $L_\mathrm{sky}$) are found to be $L_z\lesssim0.03\mathrm{-}0.04$ and $L_\mathrm{sky}\lesssim10\,\mathrm{arcmin}$. The size of protoclusters strongly depends on the descendant mass at $z=0$; for example, the progenitors of $>10^{15}\,M_\sun$ halos are about twice as large as those of $\sim10^{14}\,M_\sun$ halos. Even for such rich protoclusters, a significant excess of galaxy density can be found on the above scale because the galaxy density in a protocluster tends to increase toward the center. Thus, we estimate the strength of galaxy clustering within a three-dimensional window of $L_z\sim0.04$ and $L_\mathrm{sky}\sim10\,\mathrm{arcmin}$ to identify protoclusters.
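Relating the search window ($L_z$, $L_\mathrm{sky}$) to physical scales requires a cosmological conversion. The following is a minimal, self-contained sketch of that conversion; the cosmological parameters (flat $\Lambda$CDM with $H_0=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$ and $\Omega_m=0.3$) are illustrative assumptions, not necessarily the exact values adopted in the paper.

```python
import math

# Assumed cosmology (flat Lambda-CDM). These parameter values are
# illustrative stand-ins, not quoted from the paper.
C_KMS, H0, OMEGA_M = 299792.458, 70.0, 0.3

def efunc(z):
    """Dimensionless Hubble parameter E(z) for a flat Lambda-CDM cosmology."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + (1.0 - OMEGA_M))

def comoving_distance(z1, z2=None, steps=10000):
    """Line-of-sight comoving distance (Mpc) from z1 to z2 (or 0 to z1),
    integrating c/H0 * dz/E(z) with the trapezoidal rule."""
    if z2 is None:
        z1, z2 = 0.0, z1
    dz = (z2 - z1) / steps
    total = 0.5 * (1.0 / efunc(z1) + 1.0 / efunc(z2))
    for i in range(1, steps):
        total += 1.0 / efunc(z1 + i * dz)
    return (C_KMS / H0) * total * dz

def los_physical_mpc(z1, z2):
    """Physical (proper) line-of-sight separation between two redshifts (Mpc)."""
    zbar = 0.5 * (z1 + z2)
    return comoving_distance(z1, z2) / (1.0 + zbar)

def transverse_physical_mpc(z, arcmin):
    """Physical transverse size subtended by an angle in arcmin at redshift z (Mpc)."""
    theta = math.radians(arcmin / 60.0)
    return comoving_distance(z) / (1.0 + z) * theta
```

Under these assumptions, a $10\,\mathrm{arcmin}$ aperture at $z=3.13$ corresponds to $\sim4.6$ physical Mpc and $\Delta z=0.05$ at $z\approx4.9$ to $\sim4.6$ physical Mpc, consistent with the scales quoted in the text.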
It should be noted that redshift depends on both line-of-sight distance and radial velocity; however, the redshift difference between protocluster members is mainly caused by their spatial separation. Based on the light-cone model, the typical difference between apparent and geometrical redshifts is $0.001\mathrm{-}0.004$, which is about ten times smaller than the expected protocluster size in redshift space. Even for the progenitors of rich clusters ($>10^{15}\,\mathrm{M_\sun}$), it is only $0.001\mathrm{-}0.006$. Thus, we effectively regard redshift as a measure of radial distance. \setcounter{figure}{3} \begin{figure*} \epsscale{1.0} \plotone{./cntr_D1_rdrop_spec_2nd.eps} \vspace{-3mm} \caption{Sky distribution of $r$-dropout galaxies and number density contours in and around the D1RD01 region. Ly$\alpha$-detected galaxies are marked by filled symbols (red triangle: protocluster, cyan square: foreground group, green diamond: background group, black circle: field galaxies), and Ly$\alpha$-undetected galaxies are indicated by open circles. The dots are spectroscopically unobserved galaxies. The lines correspond to the contours of the surface overdensity from $4\sigma$ to $0\sigma$ in steps of $1\sigma$. The origin $(0,\,0)$ is $(\mathrm{R.A.},\,\mathrm{Decl.})=(02:25:01.89,\,-04:54:51.5)$.} \label{fig:sky_rdrop} \end{figure*} \setcounter{figure}{2} \begin{figure} \epsscale{1.2} \plotone{./zhist_D1RD01.eps} \vspace{-7mm} \caption{Left-hand panel: redshift distribution of 29 $r$-dropout galaxies with bin size of $\Delta z=0.05$ in and around the D1RD01 region. The yellow dashed line shows the selection function of $r$-dropout galaxies. Right-hand panel: close-up of the protocluster redshift range, with a bin size of $\Delta z=0.01$.
The red, blue, and green lines indicate the galaxies of the protocluster and fore/background groups, respectively.} \label{fig:z_rdrop} \vspace{-3mm} \end{figure} In addition, we have to consider the possible bias of overdensity depending on galaxy population. Since we rely on Ly$\alpha$ emission to determine the redshifts of dropout galaxies, we would miss old or dusty galaxies, from which Ly$\alpha$ emission cannot escape easily. Even among star-forming galaxies, the Ly$\alpha$ strength can vary widely depending on the kinematics, geometry, and column density of the interstellar medium \citep[e.g.,][]{du18,marchi19}. As for the environmental dependence of Ly$\alpha$ emission, there are some controversial results: for example, \citet{dey16} reported that Ly$\alpha$ luminosity is enhanced in a protocluster at $z=3.87$, while \citet{shimakawa17} found Ly$\alpha$ depletion in a protocluster at $z=2.53$. Furthermore, \citet{shi19b} conducted a follow-up investigation of the LAE distribution around a $z=3.13$ protocluster which was initially identified by the overdensity of dropout galaxies in T16. They found that the peak of LAE overdensity (3.6 times higher than the average) is $\sim10\,\mathrm{arcmin}$ ($\sim4.6\,\mathrm{physical\>Mpc}$) away from the protocluster, though the LAE overdense region is elongated toward the protocluster and the protocluster itself is centered on a region where the LAE density is 1.8 times higher than average. These studies suggested that the overdensity fluctuates depending on what galaxy population is used to trace it, as expected. However, taking into account that star-forming galaxies are dominant even in protoclusters and that the strength of Ly$\alpha$ emission is mainly affected by internal galaxy properties rather than environment, it is feasible to confirm the existence of protoclusters by combining the overdensity of dropout galaxies with follow-up spectroscopy of Ly$\alpha$ emission.
It should be noted that such protoclusters would be only a subsample of all protoclusters, and confirmed protocluster galaxies themselves are only a subset of all the members in a protocluster. Deep multi-wavelength observation and complete spectroscopy are necessary to reveal a complete sample of protoclusters, and this is beyond the scope of this study. The results of follow-up observations on each overdense region are described in the following subsections, and Table \ref{tab:pcl} summarizes the results of our protocluster confirmation. \subsection{The Protocluster at $z\sim4.9$ in the D1 Field \label{sec:D1RD01}} In the previous work of T16, it was not clear whether or not a protocluster exists in the overdense region of D1RD01 because the total number of redshift identifications was only six, which was too small to draw a firm conclusion. However, two of them are tightly clustered at $z=4.89$. In this study, we have increased the number of confirmed galaxies by a factor of five. As shown in the updated redshift distribution of Figure \ref{fig:z_rdrop}, there is a significant peak at $z\sim4.9$. As the FoV of DEIMOS ($16.3\times5.0\,\mathrm{arcmin^2}$) is larger than the typical size of protoclusters ($\sim5\,\mathrm{arcmin}$ radius; see Figure 8 of T16), we focus on the part of the FoV of DEIMOS including the peak of overdensity. Six galaxies, with ID=5, 6, 15, 17, 18, and 19, are tightly clustered in both redshift and spatial coordinates ($\Delta z=0.025$ at $z=4.898$ and $3.5\,\mathrm{arcmin}$ radius from the center of (R.A., Decl.)=(02:24:47.03, -04:54:43.3)). Although the redshift of ID16 is within the redshift range of these six galaxies, its sky position is $>10\,\mathrm{arcmin}$ away from them. Using the following method, we have estimated the probability of finding this clustering structure by chance when galaxies are randomly distributed according to the redshift selection function of $r$-dropout galaxies.
In the overdense region, where the six clustering galaxies are located (the area of $\Delta\mathrm{R.A.}<0$ in Figure \ref{fig:sky_rdrop}), there are 16 galaxies including fore/background galaxies. We randomly distribute the same number of galaxies as observed (16) in redshift, following the selection function of $r$-dropout galaxies. Then, we check whether six or more galaxies cluster within $\Delta z\leq0.025$. We have repeated this random realization 10,000 times, and the probability is found to be $<0.2\%$ ($>3.1\sigma$ significance). Since the clustering structure cannot be attributed to just a random coincidence, these six galaxies are very likely to be physically related. This is evidence for the existence of a protocluster in the overdense region of D1RD01. The close galaxy pair found by T16 turns out to be part of this protocluster. These six galaxies are indicated in red in Figures \ref{fig:z_rdrop} and \ref{fig:sky_rdrop}. \setcounter{figure}{5} \begin{figure*} \epsscale{1.0} \plotone{./cntr_D1GD02.eps} \vspace{-3mm} \caption{Sky distribution of $g$-dropout galaxies and number density contours in and around the D1GD02 region. Ly$\alpha$-detected galaxies are marked by filled symbols (red triangle: protocluster, cyan square: background groups, black circle: field galaxies), and Ly$\alpha$-undetected galaxies are indicated by open circles. The dots are spectroscopically unobserved galaxies. The lines correspond to the contours of the surface overdensity from $4\sigma$ to $0\sigma$ in steps of $1\sigma$. The origin $(0,\,0)$ is $(\mathrm{R.A.},\,\mathrm{Decl.})=(02:25:38.81,\,-04:49:16.9)$.} \label{fig:sky_gdrop1} \vspace{-1mm} \end{figure*} \setcounter{figure}{4} \begin{figure} \epsscale{1.2} \plotone{./zhist_D1GD02.eps} \vspace{-7mm} \caption{Left-hand panel: redshift distribution of 50 $g$-dropout galaxies with bin size of $\Delta z=0.05$ in and around the D1GD02 region.
The gray histogram shows all 50 galaxies over the whole FoV of DEIMOS, and the black one shows only those within the typical protocluster size from the overdensity peak (the area of $\Delta\mathrm{R.A.}>0\,\mathrm{arcmin}$ in Figure \ref{fig:sky_gdrop1}). In the black histogram, we find a significant peak at $z\sim3.8$, and the peaks at $z\sim3.7$ or 4.0 shown by the gray histogram turn out to be incidental peaks caused by the FoV being wider than the protocluster size. The yellow dashed line shows the selection function of $g$-dropout galaxies. Right-hand panel: close-up of the protocluster redshift range, with a bin size of $\Delta z=0.01$. The red and blue lines indicate the galaxies of the protocluster and background group, respectively.} \label{fig:z_gdrop1} \vspace{-3mm} \end{figure} We have also found two galaxy groups located close to the protocluster (indicated by blue and green in Figures \ref{fig:z_rdrop} and \ref{fig:sky_rdrop}). Both groups consist of three galaxies, and their redshift and spatial sizes are less than $\Delta z=0.015$ and $2.4\,\mathrm{arcmin}$ radius. Although such a grouping containing three galaxies within $\Delta z=0.015$ can be reproduced by random distributions with a probability of $20\%\mathrm{-}37\%$ ($0.9\sigma\mathrm{-}1.3\sigma$), we have found two nearby groups simultaneously. Furthermore, they are strikingly arranged in the foreground and background of the protocluster, as if they form a filamentary structure in the large-scale structure of the universe. The redshift separation between the protocluster and the fore/background groups is only 0.05, corresponding to $4.6\,\mathrm{Mpc}$ in physical scale. These foreground (blue) and background (green) groups are composed of ID=4, 13, and 14 and ID=20, 21, and 22, respectively. The full redshift range including the protocluster and fore/background groups is $\Delta z=0.12$.
We have formally applied the above calculation of the clustering significance to this large system, and the probability is found to be 3.3\% ($2.1\sigma$). It should be noted that, if there were no neighboring groups, the distribution of six galaxies within $\Delta z=0.12$ could be reproduced by a random distribution with a probability of 46\%. Thus, the fore/background groups are essential components, suggesting the existence of a large-scale structure composed of the protocluster and fore/background groups, though the statistical significance is at the $2\sigma$ level. As the FoV of our follow-up spectroscopy is limited to a part of the surrounding area of the protocluster, we need more follow-up observations to cover the other surrounding areas, which may lead to the discovery of other neighboring groups. It should be noted that, due to the wide redshift window of dropout selection, the spatial clustering of dropout galaxies embedded in a projected overdense region might favor elongated large-scale structures that point toward us. \subsection{The Possible Protocluster at $z\sim3.8$ in the D1 Field \label{sec:D1GD02}} The overdense region of D1GD02 is newly observed in this study, and 50 galaxies are spectroscopically confirmed. From the redshift distribution of all galaxies as shown in Figure \ref{fig:z_gdrop1}, it is difficult to find a clear redshift spike. However, when we focus only on the typical protocluster size including the peak of overdensity (the area of $\Delta\mathrm{R.A.}>0\,\mathrm{arcmin}$ in Figure \ref{fig:sky_gdrop1}), there is a peak at $z\sim3.8$ indicated by the black histogram in the left panel of Figure \ref{fig:z_gdrop1}. The redshift spike consists of six galaxies (ID=13-18), ranging over $\Delta z=0.036$ centered at $z=3.834$. As shown by the red points in Figure \ref{fig:sky_gdrop1}, these six galaxies are also closely clustered in spatial coordinates ($\sim2\,\mathrm{arcmin}$ radius).
In the same way as we did for the $r$-dropout galaxies, we find that such a clustering structure can be reproduced by a random distribution with a probability of 5.9\% ($1.9\sigma$). Based on this probability, the clustering signature of these six galaxies is likely to result from the existence of a protocluster. If these six galaxies form a single protocluster, it is notable that the protocluster seems to exhibit a bimodal redshift distribution (the red histogram in the right panel of Figure \ref{fig:z_gdrop1}). In addition, there is another group including five galaxies (ID=19-23) around $z=3.895$ just behind the protocluster at $z=3.834$ (the blue histogram in the right panel of Figure \ref{fig:z_gdrop1}). The redshift width of these five galaxies is $\Delta z=0.032$, and the probability that five galaxies happen to be located within this redshift width in a random distribution is $\sim20\%$ ($\sim1.3\sigma$). This probability is not small enough to rule out the possibility that the background group is merely an apparent clustering of galaxies that are not physically associated with each other. However, it would be unlikely that these two galaxy groups are closely located by chance because the expected redshift distribution of dropout galaxies spans $\Delta z\sim1$, which is much larger than their redshift separation of $\Delta z=0.061$. The probability that a random distribution reproduces this large system of eleven galaxies within $\Delta z=0.10$ is estimated to be 8.5\% ($1.7\sigma$). In the overdense region of D1GD02, we have found a possible protocluster at $z=3.834$ with a moderate ($1.9\sigma$) significance level, which does not allow us to make a definitive identification of the overdense region as a protocluster. We will need more spectroscopic identifications in order to definitively confirm this as a protocluster and reveal the large-scale structure around it.
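The random-realization significance test used in these subsections (distribute the observed number of galaxies in redshift according to the selection function, and count how often the observed degree of clustering is matched) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; in particular, the Gaussian stand-in for the dropout selection function is a hypothetical assumption.

```python
import random

def clustering_probability(n_gal, n_min, dz_window, draw_z, n_trials=10000, seed=0):
    """Fraction of random realizations in which at least n_min of n_gal
    redshifts (each drawn by draw_z) fall inside some window of width
    dz_window. This mimics the 10,000-realization test in the text."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        zs = sorted(draw_z(rng) for _ in range(n_gal))
        # Slide a window over the sorted redshifts with two pointers.
        j = 0
        clustered = False
        for i in range(len(zs)):
            while zs[i] - zs[j] > dz_window:
                j += 1
            if i - j + 1 >= n_min:
                clustered = True
                break
        if clustered:
            hits += 1
    return hits / n_trials

# Hypothetical stand-in for the r-dropout selection function:
# a Gaussian centered at z = 4.8 with sigma = 0.3 (illustrative only).
p = clustering_probability(16, 6, 0.025, lambda rng: rng.gauss(4.8, 0.3))
```

With this stand-in selection function, the chance probability of six of 16 galaxies clustering within $\Delta z\leq0.025$ comes out well below the 5\% significance level, qualitatively in line with the $<0.2\%$ quoted for D1RD01; the exact value depends on the true selection function.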
\setcounter{figure}{6} \begin{figure} \epsscale{1.2} \plotone{./zhist_D4GD01.eps} \vspace{-7mm} \caption{Left-hand panel: redshift distribution of 52 $g$-dropout galaxies and an AGN with bin size of $\Delta z=0.05$, in and around the D4GD01 region. The yellow dashed line shows the selection function of $g$-dropout galaxies. Right-hand panel: close-up of the protocluster redshift range with a bin size of $\Delta z=0.01$. The red and blue lines indicate the galaxies of the main and background protoclusters, respectively.} \label{fig:z_gdrop2} \vspace{-3mm} \end{figure} \begin{figure} \epsscale{1.2} \plotone{./cntr_D4_gdrop_spec_2nd.eps} \vspace{-8mm} \caption{Sky distribution of $g$-dropout galaxies and number density contours in and around the D4GD01 region. Ly$\alpha$-detected galaxies are marked by filled symbols (red triangle: main protocluster, cyan square: background protocluster, black circle: field galaxies), and Ly$\alpha$-undetected galaxies are indicated by open circles. The dots are spectroscopically unobserved galaxies, and the cyan double square is the AGN. The lines correspond to the contours of the surface overdensity from $4\sigma$ to $0\sigma$ in steps of $1\sigma$.
The origin $(0,\,0)$ is $(\mathrm{R.A.},\,\mathrm{Decl.})=(22:16:54.38,\,-17:22:59.9)$.} \label{fig:sky_gdrop2} \vspace{-3mm} \end{figure} \begin{deluxetable*}{cccccc} \tabletypesize{\normalsize} \tablecaption{Results of the protocluster confirmation \label{tab:pcl}} \tablewidth{0pt} \tablehead{Name & R.A.\tablenotemark{a} & Decl.\tablenotemark{a} & redshift\tablenotemark{a} & $N_\mathrm{mem}$\tablenotemark{b} & $\sigma_v$ ($\mathrm{km\,s^{-1}}$)} \startdata D1RD01 & 02:24:47.03 & -04:54:43.3 & 4.898 & 6 & $502.6\pm171.2$ \\ D1GD02 & 02:25:46.90 & -04:50:02.5 & 3.834 & 6 & $1025.0\pm393.5$ \\ D4GD01 & 22:16:51.37 & -17:18:24.6 & 3.675 & 13 & $329.2\pm73.3$ \\ D4GD01-back & 22:16:48.16 & -17:17:47.0 & 3.721 & 9 & $229.1\pm129.9$ \\ \enddata \tablenotetext{a}{Biweight location of protocluster members.} \tablenotetext{b}{Number of protocluster members.} \vspace{-9mm} \end{deluxetable*} \subsection{The Protoclusters at $z\sim3.7$ in the D4 Field \label{sec:D4GD01}} A protocluster in the overdense region of D4GD01 was originally discovered at $z=3.67$ by T16. This study increases the number of spectroscopically confirmed member galaxies for a more detailed investigation. Out of ten newly confirmed galaxies, two are found to be in the protocluster; thus, there are at least 13 member galaxies (ID=10-20, 44, and 45), which are indicated in red in Figures \ref{fig:z_gdrop2} and \ref{fig:sky_gdrop2}. These 13 member galaxies are in the redshift range of $\Delta z=0.016$ centered at $z=3.675$. Such a clustering structure was not reproduced in any of our 10,000 random realizations. In addition, we notice another clustering structure at $z=3.721$, i.e., in the background of the protocluster, which is composed of nine objects in total within $\Delta z=0.020$: eight galaxies (ID=21-26, 46, and 47) and an AGN.
This background structure also cannot be explained by a random distribution, and it has a number of member galaxies comparable to other known high-redshift protoclusters \citep[e.g.,][]{ouchi05,cucciati14,lemaux18}. Therefore, through the further follow-up spectroscopy in this study, we have not only increased the number of member galaxies in the known protocluster at $z=3.675$, but also discovered another protocluster just behind it. As shown in Figures \ref{fig:z_gdrop2} and \ref{fig:sky_gdrop2}, these two protoclusters are near each other ($\Delta z=0.046$, corresponding to $7.4\,\mathrm{Mpc}$ in physical scale). These two protoclusters are expected to form a large system because we cannot reproduce such a galaxy distribution from random realizations. \subsection{Summary of Protocluster Confirmation \label{sec:sum}} Based on this follow-up spectroscopy, we newly confirm two protoclusters at $z=4.898$ and 3.721, and two member galaxies are additionally found in the known protocluster at $z=3.675$. Furthermore, the overdense region of D1GD02 may also include a protocluster at $z=3.834$, though its statistical significance is marginal. We have estimated the three-dimensional galaxy overdensity for these four protoclusters, including the possible one, by comparing with the other fore/background galaxies as field counterparts. As both protocluster and field galaxies are selected from the same photometric sample and spectroscopically observed in the same masks, there should be little observational bias. However, it should be noted that protocluster galaxies could have physical properties different from those of field galaxies, which could cause different LAE fractions among dropout galaxies between protoclusters and the field. In this study, we assume the same LAE fractions in the protoclusters and the field.
Thus, the three-dimensional galaxy overdensities, $\delta_\mathrm{gal}$ ($=n/\bar{n}$, where $n$ and $\bar{n}$ are the number densities in a protocluster and in the field, respectively), of the protoclusters at $z=4.898$, 3.834, 3.721, and 3.675 are found to be $\delta_\mathrm{gal}=6.0^{+3.6}_{-2.4}$, $3.7^{+2.2}_{-1.5}$, $4.5^{+2.0}_{-1.5}$, and $6.4^{+2.4}_{-1.7}$, respectively. The three-dimensional density enables us to estimate the descendant halo mass at $z=0$ by using theoretical models. As shown in Section \ref{sec:obs}, 76\% of $>4\sigma$ overdense regions are expected to grow into galaxy clusters with halo masses of $>10^{14}\,\mathrm{M_\sun}$ based on a comparison with a theoretical model \citep{henriques12}. In this model comparison, there are 84 $>4\sigma$ overdense regions, of which 82 regions show three-dimensional galaxy concentrations with $\delta_\mathrm{gal}>2$. It should be noted that such galaxy concentrations can be buried in regions of lower surface density. Since the objective of this model comparison is to predict the descendant halo mass of the observed protoclusters, we have focused on three-dimensional galaxy concentrations embedded in $>4\sigma$ overdense regions in the same manner as our observations. Then, we can find the relation between three-dimensional overdensity and descendant halo mass. The result is that protoclusters with $\delta_\mathrm{gal}=3.7$ and $4.5$ are expected to grow into $(1.0\mathrm{-}5.0)\times10^{14}\,\mathrm{M_\sun}$ and $(2.2\mathrm{-}5.1)\times10^{14}\,\mathrm{M_\sun}$ halos (the range between the lower and upper quartiles). As for protoclusters with $\delta_\mathrm{gal}>6$, only two comparable regions are identified in the theoretical model, and their descendant halo masses are $4.0\times10^{14}\,\mathrm{M_\sun}$ or $1.2\times10^{15}\,\mathrm{M_\sun}$. While the descendant halo mass at $z=0$ generally tends to increase with galaxy density at high redshift, there is still a large dispersion.
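The overdensities above and the velocity dispersions in Table \ref{tab:pcl} follow from simple estimators, sketched below. Note the assumptions: the paper quotes biweight statistics for Table \ref{tab:pcl}, replaced here by a plain mean and sample standard deviation, and the member redshifts used are made-up illustrative values, not the measured ones.

```python
import statistics

C_KMS = 299792.458  # speed of light in km/s

def velocity_dispersion(redshifts):
    """Line-of-sight velocity dispersion (km/s) from member redshifts:
    sigma_v = c * sigma_z / (1 + z_mean). A plain mean and sample standard
    deviation stand in for the biweight estimators used in the paper."""
    z_mean = statistics.mean(redshifts)
    sigma_z = statistics.stdev(redshifts)
    return C_KMS * sigma_z / (1.0 + z_mean)

def overdensity(n_members, volume_pc, n_field, volume_field):
    """Three-dimensional galaxy overdensity delta_gal = n / n_bar,
    from member/field counts and the volumes they occupy."""
    return (n_members / volume_pc) / (n_field / volume_field)

# Illustrative member redshifts (hypothetical, evenly spaced around z = 3.675):
members = [3.670, 3.672, 3.674, 3.676, 3.678, 3.680]
sigma_v = velocity_dispersion(members)
```

For six members spread over $\Delta z\sim0.01$ at $z\approx3.675$, this gives $\sigma_v$ of a few hundred $\mathrm{km\,s^{-1}}$, the same order as the tabulated values.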
From this theoretical comparison, the three-dimensional galaxy concentrations identified by this study are found to have overdensities large enough to grow into galaxy clusters ($>10^{14}\,\mathrm{M_\sun}$ halos) by $z=0$. Therefore, we have concluded that the three overdense regions at $z=4.898$, 3.721, and 3.675 are genuine protoclusters, while the overdense region at $z=3.834$ is still only a possible protocluster candidate due to the small number of confirmed galaxies. \section{Discussion \label{sec:disc}} \subsection{Formation of Superclusters} As shown in Section \ref{sec:res}, the two protoclusters at $z=4.898$ and 3.675 are accompanied by neighboring groups/protocluster. In the context of the hierarchical structure formation model, it is expected that galaxy clusters are formed from clumps of galaxy groups or smaller structures through repeated halo mergers. Thus, at high redshift, we would observe some groups around a main progenitor. Furthermore, galaxy clusters themselves frequently reside in larger-scale high-density regions, which include some clusters, groups, or filamentary distributions of galaxies. We can see much larger structures beyond the scale of galaxy clusters in the local universe, which are called superclusters \citep[e.g.,][]{bahcall84}. An open question is how the protoclusters we found at $z=4.9$ and 3.7 will evolve by $z=0$: will they become single rich galaxy clusters by incorporating their neighboring groups/protocluster, or will each of them develop into an independent halo as part of a supercluster? The separation between the main protoclusters and their surrounding groups/protocluster is $\Delta z\sim 0.05$, corresponding to $\sim5\,\mathrm{Mpc}$ and $\sim8\,\mathrm{Mpc}$ in physical scale at $z=4.9$ and 3.7, respectively. The size of protoclusters depends significantly on the descendant halo mass at $z=0$.
\citet{chiang13} estimated an effective radius of protoclusters, within which 40\% of the total mass of a protocluster is distributed, based on $N$-body dark matter simulations \citep{springel05}. A typical effective radius is $\sim1\,\mathrm{Mpc}$ in physical scale at $z\sim4\mathrm{-}5$ for the progenitors of $1\mathrm{-}3\times10^{14}\,\mathrm{M_\sun}$ halos. Even for those of $>10^{15}\,\mathrm{M_\sun}$ halos, the size is $\lesssim2\,\mathrm{Mpc}$. Similarly, \citet{muldrew15} investigated the size of protoclusters at high redshifts based on the stellar mass of protocluster members. Using a combination of an $N$-body dark matter simulation and a semi-analytic galaxy formation model \citep{guo11}, they predicted that 90\% of the stellar mass of all protocluster members is on average enclosed within $\sim2\,(4)\,\mathrm{Mpc}$ in physical scale at $z\sim4\mathrm{-}5$ for the descendants of $1\mathrm{-}6\times10^{14}\,(>10^{15})\,\mathrm{M_\sun}$ halos. Based on these theoretical predictions, only if the descendant halo mass at $z=0$ is $>10^{15}\,\mathrm{M_\sun}$ do the main protoclusters and their neighboring groups/protocluster have the potential to merge into a single cluster by $z=0$. However, according to \citet{toshikawa18}, which used the same method to search for protoclusters as this study, the typical descendant halo mass of $z\sim4$ protoclusters is expected to be $\sim4\mathrm{-}8\times10^{14}\,M_\sun$ at $z=0$ based on clustering analysis and abundance matching. Assuming that the protoclusters in this study have similar descendant halo masses, the separation between the main protoclusters and their neighboring groups/protocluster is larger than these theoretical expectations of the typical protocluster size. This suggests that the neighboring groups/protocluster will grow into halos independent of the main protocluster.
However, we should consider the possibility that there is no physical relation between the protoclusters and the neighboring groups (at least not at the supercluster scale), because the redshift separation between the main protoclusters and neighboring groups/protocluster is only $\Delta z\sim0.05$. This is much smaller than the redshift window of dropout selection ($\Delta z\sim1$). In particular, for the $z=4.898$ protocluster, two galaxy groups are found simultaneously in the foreground and background. Thus, the proximity of galaxy groups would result from the underlying large-scale structure of the universe. As about half of clusters are in superclusters at $z\lesssim0.5$ \citep[e.g.,][]{bahcall84,chow14}, some fraction of protoclusters is expected to reside in primordial superstructures. In the local universe ($z\lesssim0.5$), superclusters typically extend $\sim20\,\mathrm{physical\>Mpc}$ between the most widely separated pair of clusters in a supercluster, and the largest ones reach nearly $100\,\mathrm{physical\>Mpc}$ in length, based on the extended ROSAT-ESO Flux-Limited X-ray Galaxy Cluster survey data \citep{chon13}. Most of such superclusters are composed of two or three galaxy clusters, and a few of them include nearly ten clusters. \citet{alpaslan14} also investigated the large-scale structure of the universe by using the Galaxy And Mass Assembly survey, and the typical length of filamentary large-scale structures is found to be $\sim20\,\mathrm{Mpc}$, containing eight galaxy groups. Although the spatial size of superclusters depends on the definition or search method, it is typically a few tens of Mpc in the local universe. If our large systems are already detached from the Hubble flow, their expected spatial sizes at $z=0$ are less than $10\,\mathrm{Mpc}$, which is smaller than that of typical local superclusters.
If the separations between the protoclusters and accompanying groups/protocluster are increasing with the Hubble flow, their expected separations at $z=0$ are $\sim30\,\mathrm{Mpc}$, comparable to local superclusters. Therefore, the large systems including the protoclusters and accompanying groups/protocluster would be the progenitors of superclusters rather than a chance alignment of totally unrelated groups/protoclusters. At $z=4.9$ and 3.7, we have revealed primordial superclusters through comprehensive follow-up spectroscopy with Keck/DEIMOS, whose FoV is larger than the typical protocluster size. In particular, in the D4GD01 overdensity, we have found a close pair of protoclusters, whose separation is $7.5\,\mathrm{Mpc}$ in physical scale. According to the two-point correlation function of protocluster candidates at $z\sim4$ \citep{toshikawa18}, the expected number of protoclusters within $\sim8\,\mathrm{Mpc}$ in physical scale from another protocluster can be estimated to be $\sim0.20\mathrm{-}0.45$. Therefore, if the protoclusters identified in this study are comparable to typical ones (the progenitors of $\sim3\times10^{14}\,\mathrm{M_\sun}$ halos, not $>10^{15}\,\mathrm{M_\sun}$ halos at $z=0$), the large systems including the protoclusters and neighboring groups would trace primordial large-scale structures rather than multiple progenitors on the same halo merger tree. To predict the fate of these large structures, we need to map out the three-dimensional galaxy distribution more precisely with more spectroscopic follow-up observations. There are other examples of such large-scale structures at $z\sim2\mathrm{-}6$ \citep[e.g.,][]{ouchi05,kuiper12,dey16,topping16,cucciati18}. In particular, \citet{cucciati18} discovered a clear example of a primordial supercluster at $z=2.45$ by identifying seven galaxy groups within a volume of $\sim60\times60\times150\,\mathrm{comoving\,Mpc^3}$.
On the other hand, the large-scale structure in the D1RD01 region includes the protocluster and two groups over a volume of $\sim33\times12\times64\,\mathrm{comoving\,Mpc^3}$, and, in the D4GD01 region, the two protoclusters are embedded in a volume of $\sim15\times10\times50\,\mathrm{comoving\,Mpc^3}$. If the primordial supercluster of \citet{cucciati18} were restricted to the size of the structures we found, only two or three of its seven galaxy groups would be recovered as components. Since this number is comparable to ours, we are likely observing a portion of a primordial supercluster like the one found by \citet{cucciati18}. \citet{topping18} also closely investigated the large-scale structure around the SSA22 protocluster at $z=3.09$. They found two galaxy concentrations within a volume of $\sim12\times14\times43\,\mathrm{comoving\,Mpc^3}$, which are predicted to evolve into two separate halos at $z=0$. Our large-scale structures at $z=4.898$ and 3.675 thus exhibit sizes and numbers of components consistent with these clear examples at lower redshifts. However, according to the theoretical comparison in \citet{topping18}, the occurrence rate of a large-scale structure like that around the SSA22 protocluster is expected to be $7.4\,\mathrm{Gpc^{-3}}$, whereas the total survey volume of our $g$- and $r$-dropout galaxies over the CFHTLS Deep Fields ($\sim4\,\mathrm{deg^2}$) is only $\sim0.06\,\mathrm{Gpc^3}$. Since \citet{topping18} predicted that their two galaxy concentrations grow into a cluster pair of $>10^{15}\,\mathrm{M_\sun}$ and $>10^{14}\,\mathrm{M_\sun}$ halos at $z=0$, the large-scale structures we found may be composed of the progenitors of smaller clusters.
Although we cannot statistically calculate what fraction of protoclusters are in primordial superclusters or isolated, owing to the small and heterogeneous sample of protoclusters, these findings at least indicate that some components of a supercluster form simultaneously at high redshift despite being a few Mpc away from each other. A similar result is found in a supercluster at $z\sim0.9$, in which three clusters and five groups ($0.3\mathrm{-}4.6\times10^{14}\,\mathrm{M_\sun}$ halos) are contained within a $\sim20\,\mathrm{comoving\>Mpc}$ area \citep{lemaux12}. Considering the limited volume of our observations of the primordial superclusters, this supercluster at $z=0.9$ would be comparable to our findings; at the least, it is worth comparing them with a well-known supercluster at $z\sim1$. This shows that superclusters can form around not only rich clusters but also typical ones. Furthermore, \citet{hayashi19} newly found components of this supercluster that are up to $\sim50\,\mathrm{comoving\,Mpc}$ away from each other, and investigated the stellar ages of red-sequence galaxies in each component based on composite spectra. Although the components are $\sim10\mathrm{-}50\,\mathrm{comoving\,Mpc}$ apart, they are found to have similar mean stellar ages, which possibly indicates that each component formed at almost the same redshift. Thus, we should be able to identify multiple protoclusters/groups in the progenitors of superclusters at even higher redshifts, as shown in this study and in \citet{cucciati18}. This is qualitatively predicted by the hierarchical structure formation model; however, each path of structure formation would show large variations due to repeated halo mergers. Therefore, the direct observation of developmental stages will provide an important constraint on such stochastic processes.
The $z=4.898$ protocluster is accompanied by two small groups, while the primordial supercluster at $z=3.675$ in the D4GD01 overdense region is composed of two protoclusters containing almost comparable numbers of member galaxies. The number of components in a primordial supercluster and the ratio of galaxy numbers between components would be hints for understanding the formation of superclusters and the large-scale structure of the universe. It should be noted that our follow-up spectroscopy focuses on the overdensity peak: surrounding regions are only sparsely observed, and the spatial distribution of galaxies is traced only by dropout galaxies having Ly$\alpha$ emission. This incomplete coverage of the surrounding area and galaxy population may cause the apparent difference between the structures at $z=4.898$ and 3.675, through a bias against large-scale galaxy distribution or an oversight of small accompanying groups. Although we need further follow-up observations to cover the whole structure and to investigate the galaxy population in protoclusters, we have directly observed (a part of) primordial superclusters at $z=4.9$ and 3.7. \begin{figure*} \epsscale{1.1} \plottwo{./3D_D4_gdrop_main.eps}{./3D_D4_gdrop_sub.eps} \caption{Left-hand panel: three-dimensional distribution of the main protocluster galaxies in the D4GD01 region. The filled points represent the 13 protocluster galaxies, and the dots are the other $g$-dropout galaxies. The origin $(0,\,0)$ is $(\mathrm{R.A.},\,\mathrm{Decl.})=(22:16:50.44,\,-17:18:41.6)$. Right-hand panel: same as the left-hand panel, but for the background protocluster. The AGN is indicated by the blue double circle.} \label{fig:3D_gdrop} \end{figure*} \subsection{Internal Structure and Morphology of Protoclusters} Next, we focus on the internal galaxy distributions of the protoclusters.
The six member galaxies of the $z=4.898$ protocluster tend to be located on the outskirts of the overdense region rather than at the peak of the overdensity ($(\Delta\mathrm{R.A.},\,\Delta\mathrm{Decl.})=(0.2,\,0.0)$). Furthermore, three of them (ID=5, 6, and 17) appear strongly clustered compared with the other member galaxies. The member galaxies of the possible protocluster at $z=3.834$ may also be distributed so as to avoid the center of the protocluster, since their redshift distribution tends to be bimodal (the $p$-value of Hartigan's dip test is 0.055). A similar internal structure was found in the protocluster at $z=6.01$ \citep{toshikawa14}, in which member galaxies are widely distributed and divided into four subgroups. These internal structures would be a clue to understanding the assembly process toward galaxy clusters. In these two protoclusters, the central concentration of member galaxies is not high, and small substructures still exist. On the other hand, it should be noted that the galaxy overdensity is calculated from dropout galaxies, while protocluster members are identified by detecting Ly$\alpha$ emission. \citet{hathi16} showed that star-forming galaxies with strong Ly$\alpha$ emission have significantly different properties from those without Ly$\alpha$ emission at $z\sim2\mathrm{-}2.5$, though the absolute value of the difference is small. Dropout galaxies with strong Ly$\alpha$ emission are expected to have less dust, lower SFR, and lower mass than those without Ly$\alpha$ emission. Therefore, member galaxies may be segregated in a protocluster depending on their properties: newly-formed young galaxies may lie on the outskirts, and evolved massive galaxies may lie near the center of a protocluster. \citet{cooke13} also found that $\sim60\%$ of Ly$\alpha$-emitting dropout galaxies at $z\sim3$ have a shell-like spatial distribution with a radius of $\sim3\mathrm{-}6\,\mathrm{Mpc}$ in physical scale.
They concluded that dropout galaxies without Ly$\alpha$ emission tend to be in group-like environments, while those with Ly$\alpha$ emission would be distributed on the outskirts. Their claim, based on a statistical clustering analysis, could be attributed to the contribution of protoclusters such as those in this study. This implication needs to be checked by directly identifying protocluster members without Ly$\alpha$ emission in the central regions. As for the pair of protoclusters at $z\sim3.7$, the main protocluster at $z=3.675$ is composed of at least 13 member galaxies, and nine member galaxies are confirmed for the background protocluster at $z=3.721$. These numbers allow a close investigation of the internal structures of both protoclusters. Figure \ref{fig:3D_gdrop} shows the three-dimensional galaxy distributions of the main and background protoclusters. As already discussed in T16, the member galaxies of the main protocluster are spherically distributed, and nearly half of them are concentrated in a small central region. In this study, we have added two member galaxies, and this trend is maintained. On the other hand, the background protocluster can be divided into three subgroups, which include three (ID=21, 22, 46), five (ID=23-26, and the AGN), and one (ID=47) galaxies, located around $(\Delta\mathrm{R.A.},\,\Delta\mathrm{Decl.},\,\Delta z)=(0.0,\,-1.1,\,1.3)$, $(-0.3,\,0.6,\,0.6)$, and $(0.6,\,1.4,\,-1.5)$, respectively. Except for the AGN, we cannot find significant differences in galaxy properties ($M_\mathrm{UV}$ and Ly$\alpha$ $EW_0$) between the main and background protoclusters; however, these two protoclusters appear to have clearly different internal structures, as shown in Figure \ref{fig:3D_gdrop}.
In addition to this visual inspection, a three-dimensional ellipsoid is fitted to the distributions of the member galaxies following the method of \citet{lovell18}, so that we can quantitatively characterize the shape of the protoclusters. Although an ellipsoid may be too simple a model, it is useful for capturing the overall shape. The best-fitting ellipsoid can be determined from the eigenvalues of the moment of inertia tensor: \begin{equation} {\bf I}_{ij} = \sum_{n=1}^{N_\mathrm{gal}} ({\bf r}^2_n\delta_{ij} - r_{n,i}r_{n,j}), \end{equation} where $N_\mathrm{gal}$ is the number of member galaxies, ${\bf r}_n$ is the position vector of the $n$th galaxy, and $i$ and $j$ are the tensor indices. We apply no weighting to the member galaxies in estimating the inertia tensor. The lengths of the primary, secondary, and tertiary axes ($a_1$, $a_2$, $a_3$) are given in terms of the eigenvalues, $I_1\geq I_2\geq I_3$, by: \begin{eqnarray} a_1 = \sqrt{\frac{5}{2N_\mathrm{gal}}(I_1+I_2-I_3)}, \\ a_2 = \sqrt{\frac{5}{2N_\mathrm{gal}}(I_3+I_1-I_2)}, \\ a_3 = \sqrt{\frac{5}{2N_\mathrm{gal}}(I_2+I_3-I_1)}. \end{eqnarray} Using these axis lengths, we calculate the sphericity, $s=a_3/a_1$, and the triaxiality, $T=(a_1^2-a_2^2)/(a_1^2-a_3^2)$. When the shape is spherical (aspherical), $s$ approaches 1 (0). We use $T$ to quantify the form of the asphericity: prolate ($a_1>a_2,a_3$) and oblate ($a_1,a_2>a_3$) ellipsoids have $T\sim1$ and $T\sim0$, respectively. Table \ref{tab:shape} lists these shape parameters for the main and background protoclusters. As expected from the visual inspection, the two protoclusters seem to have different shapes, especially in the parameter $T$. Our identification of protocluster members is far from complete, because the fraction of spectroscopically-observed dropout galaxies is 53\% in this overdense region, and only a portion of dropout galaxies have Ly$\alpha$ emission.
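The shape measurement described above is compact enough to sketch in full. The following is our own minimal implementation of the unweighted inertia-tensor fit (numpy assumed), not the authors' pipeline:

```python
import numpy as np

def shape_parameters(pos):
    """Sphericity s and triaxiality T from member-galaxy positions.

    pos : (N, 3) array-like of comoving coordinates (any consistent units).
    Implements the unweighted inertia-tensor approach described in the text,
    with positions taken relative to the centroid.
    """
    r = np.asarray(pos, dtype=float)
    r = r - r.mean(axis=0)                       # centre on the centroid
    n = len(r)
    # I_ij = sum_n (r_n^2 delta_ij - r_{n,i} r_{n,j})
    inertia = np.eye(3) * (r ** 2).sum() - r.T @ r
    i1, i2, i3 = np.sort(np.linalg.eigvalsh(inertia))[::-1]  # I1 >= I2 >= I3
    a1 = np.sqrt(5.0 / (2.0 * n) * (i1 + i2 - i3))
    a2 = np.sqrt(5.0 / (2.0 * n) * (i3 + i1 - i2))
    a3 = np.sqrt(5.0 / (2.0 * n) * (i2 + i3 - i1))
    s = a3 / a1
    T = (a1 ** 2 - a2 ** 2) / (a1 ** 2 - a3 ** 2)
    return s, T

# A prolate test configuration with axis ratios 1 : 0.5 : 0.25:
pts = [(1, 0, 0), (-1, 0, 0), (0, 0.5, 0), (0, -0.5, 0), (0, 0, 0.25), (0, 0, -0.25)]
s, T = shape_parameters(pts)   # s = 0.25, T = 0.8
```

Note that the recovered axis ratios ($a_2/a_1=0.5$, $a_3/a_1=0.25$) reproduce the input configuration exactly, as they should for this diagonal case.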
This incompleteness would strongly affect the shape estimate, since it is based on only about ten galaxies. We have therefore deduced the intrinsic shape parameters as follows. First, we select protoclusters at $z\sim3.7$ from the Millennium simulation \citep{springel05}; a protocluster is defined as all halos at $z\sim3.7$ that will merge into a single halo of $>10^{14}\,\mathrm{M_\sun}$ by $z=0$. Second, we pick out protocluster member galaxies that resemble dropout galaxies based on galaxy properties, such as stellar mass, SFR, or age, predicted by a semi-analytic model \citep{henriques15}. Counterpart galaxies in the theoretical model are selected by $\mathrm{SFR}>5\,\mathrm{M_\sun\,yr^{-1}}$, which corresponds to the limiting UV luminosity of our observations \citep{kennicutt98}. Then, the shape parameters $s$ and $T$ are calculated for each protocluster using all member galaxies, and these are regarded as the intrinsic parameters. This estimate involves two assumptions, on the descendant halo mass and on the SFR threshold, which are used to select protoclusters and member galaxies from the simulation, respectively. However, the large redshift window of the dropout selection could mean that we identify only protoclusters more massive than assumed, and the conversion from apparent (dust-uncorrected) UV luminosity to SFR involves systematic errors due to the lack of information on dust attenuation, metallicity, and stellar age. We have therefore checked the possible systematic errors on the shape parameters and confirmed that they are not significantly dependent on these changes. The details are shown in Appendix \ref{app:err}. Finally, in order to account for the effect of observational bias, the same number of member galaxies as observed ($N_\mathrm{gal}=13$ and 9 for the main and background protoclusters, respectively) are randomly extracted from each simulated protocluster.
In this selection of protocluster members from the theoretical model, we also apply the same spatial and redshift windows, $L_\mathrm{sky}$ and $L_z$, as in our observations. It should be noted that these windows are large enough to impart no significant bias to the shape parameters (Appendix \ref{app:err}). Using the randomly selected member galaxies, the shape parameters are re-calculated, and we investigate how much they differ from the intrinsic ones. This random realization is repeated 300 times for each protocluster. As shown in Figure \ref{fig:shape}, random sampling results in a systematic offset in the distribution of $s$, though the distribution of $T$ is not significantly affected. \begin{deluxetable}{ccccc} \tabletypesize{\normalsize} \tablecaption{Shape parameters of the protoclusters at $z\sim3.7$ \label{tab:shape}} \tablewidth{0pt} \tablehead{ & \multicolumn{2}{c}{Observed} & \multicolumn{2}{c}{Expected intrinsic} \\ & $s$ & $T$ & $s$ & $T$} \startdata D4GD01 & 0.31 & 0.19 & $0.45\pm0.12$ & $0.47\pm0.20$ \\ D4GD01-back & 0.22 & 0.84 & $0.37\pm0.10$ & $0.80\pm0.23$ \\ \enddata \end{deluxetable} \begin{figure} \vspace{-12mm} \epsscale{1.1} \plotone{model_shape_z3p6_SFR05.eps} \vspace{-4mm} \caption{Shape parameters ($s$ and $T$) of protoclusters at $z\sim3.7$. The red and blue contours show the expected distribution of intrinsic and apparent values, respectively. The lines from inner to outer correspond to the 25th-, 50th-, 75th-, and 90th-percentile contours. The upward and downward triangles are the observed values for the main and background protoclusters, respectively.} \label{fig:shape} \vspace{-4mm} \end{figure} Comparing the shape parameters of the observed protoclusters with the simulated ones including random sampling, we find that the main protocluster exhibits an unusual shape (at the 90th percentile), while the background protocluster has a typical shape (within the 50th percentile).
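The bias-correction step, randomly drawing as many simulated members as were actually observed and repeating the draw 300 times, can be sketched as follows. This is our own illustration; for brevity it uses a simplified axis-ratio statistic in place of the full ellipsoid fit, so the procedure, not the numbers, is the point:

```python
import numpy as np

def axis_ratio(pos):
    """Minor-to-major axis ratio from the singular values of the centred
    position matrix -- a simplified stand-in for the full ellipsoid fit."""
    r = np.asarray(pos, float)
    r = r - r.mean(axis=0)
    sv = np.linalg.svd(r, compute_uv=False)
    return sv[-1] / sv[0]

def subsampling_offset(pos_all, n_obs, n_trials=300, seed=0):
    """Mean offset of the statistic when only n_obs of the members are
    observed, mimicking the 300 random realizations described in the text."""
    rng = np.random.default_rng(seed)
    pos_all = np.asarray(pos_all, float)
    full = axis_ratio(pos_all)          # "intrinsic" value from all members
    draws = [axis_ratio(rng.choice(pos_all, size=n_obs, replace=False))
             for _ in range(n_trials)]
    return float(np.mean(draws) - full)
```

The mean offset (and the scatter of `draws`) quantifies how far an observed shape parameter is expected to sit from the intrinsic one at a given sampling completeness.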
Table \ref{tab:shape} also lists the expected intrinsic values of the shape parameters for the observed protoclusters, determined using the relation between the shape parameters calculated from the full number of galaxies and from random sampling in the theoretical model. The significance of the shape difference between the main and background protoclusters is found to be $1.6\sigma$: the main and background protoclusters indicate pancake-like and filamentary shapes, respectively. Although the significance of the difference is marginal, one possible interpretation is the following. As the background protocluster can be divided into three subgroups, it might be at an earlier stage of cluster formation, in which galaxies or small groups are still accreting along the filamentary cosmic web. In the main protocluster, on the other hand, such building blocks of a galaxy cluster might already be merging into a single structure. \subsection{Redshift Evolution} We have characterized the large-scale and internal structures of the protoclusters, which may help us understand cluster formation in the context of hierarchical structure formation when combined with other protoclusters from the literature. It should be noted that protocluster samples selected by various methods may be heterogeneous, and the definitions of protoclusters differ between studies. The sizes of some protoclusters are also artificially limited by the FoV of their associated observations. In this study, we use the velocity dispersion, which reflects the dynamical state of protoclusters, for comparison with other protoclusters. In estimating the velocity dispersion, we assume that the redshift differences of protocluster members are attributed to velocity differences rather than to differences in line-of-sight distance. The biweight scale \citep{beers90} is used as the estimator of the velocity dispersion (Table \ref{tab:pcl}).
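A minimal version of this estimator (our sketch of the Beers et al. biweight scale with the conventional tuning constant $c=9$, together with the redshift-to-velocity conversion described above) could look like:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def velocities(z_members):
    """Line-of-sight velocities [km/s] relative to the mean redshift,
    attributing redshift differences to motion (as assumed in the text)."""
    z = np.asarray(z_members, float)
    z_mean = z.mean()
    return C_KMS * (z - z_mean) / (1.0 + z_mean)

def biweight_scale(v, c_tune=9.0):
    """Biweight scale of Beers, Flynn & Gebhardt (1990); assumes a
    non-degenerate sample (median absolute deviation > 0)."""
    v = np.asarray(v, float)
    n = len(v)
    med = np.median(v)
    mad = np.median(np.abs(v - med))
    u = (v - med) / (c_tune * mad)
    keep = np.abs(u) < 1.0              # clip strongly deviant members
    num = np.sum(((v - med) ** 2 * (1.0 - u ** 2) ** 4)[keep])
    den = np.sum(((1.0 - u ** 2) * (1.0 - 5.0 * u ** 2))[keep])
    return np.sqrt(n * num) / np.abs(den)
```

For a protocluster one would call `biweight_scale(velocities(z_members))`; the down-weighting of outliers is what makes this estimator preferable to a plain standard deviation for samples of $\sim10$ members.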
In addition, we have compiled the velocity dispersions of known protoclusters at $z>2$ from the literature and investigated the relation between redshift and velocity dispersion (Figure \ref{fig:vel-dis}). We cannot find a significant correlation, though the velocity dispersion is expected to increase as protoclusters grow. The heterogeneous sample of protoclusters compiled from many previous studies might dilute a possible trend between velocity dispersion and redshift, because the relation also depends on the descendant halo mass at $z=0$. However, we can find a distinguishing feature in the histogram of velocity dispersion (the right panel of Figure \ref{fig:vel-dis}). While most protoclusters have velocity dispersions of $\sim200\mathrm{-}600\,\mathrm{km\,s^{-1}}$, some have large velocity dispersions of $\sim1000\,\mathrm{km\,s^{-1}}$. This causes the skewed distribution of velocity dispersion shown in the right panel of Figure \ref{fig:vel-dis}. The skewness of this distribution is found to be 0.71, and the null hypothesis that the observed distribution is drawn from a single normal distribution can be rejected at the 97\% significance level according to the Anderson-Darling test. The lower peak is almost consistent with the redshift evolution of the dark matter velocity dispersion of typical galaxy clusters ($\sim1\mathrm{-}5\times10^{14}\,\mathrm{M_\sun}$); on the other hand, it would be difficult to explain the higher peak by halo evolution under virial equilibrium. Either the protoclusters having higher velocity dispersions ($\gtrsim1000\,\mathrm{km\,s^{-1}}$) could contain subgroups, or the velocity (redshift) distribution of their members may deviate from a normal distribution \citep[the circles in Figure \ref{fig:vel-dis}; e.g.,][]{kuiper12,lemaux14,toshikawa14}.
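The skewness statistic quoted here is straightforward to reproduce; the sketch below is ours, and the Anderson-Darling test itself is assumed to come from a statistics library (e.g. `scipy.stats.anderson`) rather than being re-implemented:

```python
import numpy as np

def sample_skewness(x):
    """Sample skewness g1 = m3 / m2**1.5; positive values indicate a tail
    towards high velocity dispersions, as found for the compiled sample."""
    x = np.asarray(x, float)
    d = x - x.mean()
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

# A symmetric sample gives ~0; a single high outlier gives positive skewness.
print(sample_skewness([0.0, 0.0, 0.0, 0.0, 10.0]))  # -> 1.5
```

Applied to the compiled velocity dispersions, a value like 0.71 signals exactly the kind of high-$\sigma$ tail discussed in the text.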
Some previous studies measured the velocity dispersion of each individual subgroup (the crosses in Figure \ref{fig:vel-dis}), and these are almost comparable to the lower peak. \begin{figure} \epsscale{1.2} \plotone{pcl_vel-dis.eps} \vspace{-9mm} \caption{Left-hand panel: velocity dispersion of protoclusters as a function of redshift. The red points show protoclusters discovered by our protocluster search in the CFHTLS \citep[this study and][]{toshikawa16}, and the blue points are protoclusters from the literature \citep{chanchai19,cucciati14,cucciati18,dey16,galametz13,harikane19,kuiper11,kuiper12,lemaux14,lemaux18,miller18,ouchi05,oteo18,topping16,toshikawa14,venemans07,yuan14}. The sample of \citet{cucciati18} includes the protoclusters discovered by \citet{casey15,chiang15,diener15,wang16}. Protoclusters reported to have subgroups or deviations from normality in the velocity distributions of their member galaxies are indicated by the open black circles. The velocity dispersion of subgroups, if available, is represented by the black crosses. The dashed lines show the redshift evolution of the dark matter velocity dispersion of $1\times10^{14}\,\mathrm{M_\sun}$ and $5\times10^{14}\,\mathrm{M_\sun}$ halos at $z=0$, derived from the evolution of dark matter halo mass by assuming virial equilibrium and the extended Press-Schechter model. Right-hand panel: histogram of the velocity dispersion of protoclusters. The velocity dispersion of subgroups is not included in this histogram. It should be noted that \citet{wang16} detected extended X-ray emission; thus, their system should be categorized as a cluster, not a protocluster.
However, since cluster formation would be seamless from high to low redshifts, the cluster found by \citet{wang16} is also plotted in this figure as an example at $z=2.5$.} \label{fig:vel-dis} \vspace{-2mm} \end{figure} In the local universe, galaxy clusters of $\sim3\times10^{14}\,\mathrm{M_\sun}$ halos have velocity dispersions of $\sim500\,\mathrm{km\,s^{-1}}$, and, even for massive clusters of $\sim1\times10^{15}\,\mathrm{M_\sun}$ halos, the velocity dispersion is $\lesssim1000\,\mathrm{km\,s^{-1}}$ \citep[e.g.,][]{berlind06,tempel14,wilson16}. On the other hand, merging galaxy clusters, which can be traced by radio relics, are found to show high velocity dispersions \citep[$\gtrsim1000\,\mathrm{km\,s^{-1}}$:][]{golovich17}. Therefore, the higher velocity dispersions can be explained if such protoclusters are in a merging phase of galaxy groups on the way to forming more massive structures; their dynamical state may thus be far from virialization, and systematic merger motions would contribute to the measured velocity dispersion in addition to the random motions of protocluster members. \citet{kuiper11} simulated the evolution of the velocity dispersion in the case of the Spiderweb protocluster and found that it changes dramatically from $\sim400\mathrm{-}500\,\mathrm{km\,s^{-1}}$ to $\sim900\,\mathrm{km\,s^{-1}}$ at the point of the merger of two halos. As shown in Figure \ref{fig:vel-dis}, protoclusters with higher velocity dispersions account for about one fourth of the total and are distributed over the whole redshift range. Protoclusters would thus evolve through mergers of galaxy groups as well as steady galaxy accretion, and these two phases are repeated from early to late developmental stages. It should be noted that the higher velocity dispersions may have been overestimated by classifying independent groups as a single protocluster.
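For orientation, the rough scale of the virial-equilibrium curves can be recovered from simple scaling arguments. The sketch below is our back-of-the-envelope version, assuming flat $\Lambda$CDM parameters, the $V_{200}=(10\,G\,H(z)\,M_{200})^{1/3}$ relation of Mo & White, and $\sigma\approx V_{200}/\sqrt{2}$ for a singular isothermal sphere; it is not the extended Press-Schechter calculation used for the figure:

```python
import numpy as np

G = 4.301e-9  # gravitational constant [Mpc (km/s)^2 / Msun]

def hubble(z, h0=70.0, om=0.3, ol=0.7):
    """H(z) [km/s/Mpc] for an assumed flat LCDM cosmology."""
    return h0 * np.sqrt(om * (1.0 + z) ** 3 + ol)

def sigma_virial(m200, z):
    """Rough velocity dispersion [km/s] of a halo of mass m200 [Msun]
    virialised at redshift z, via V200 = (10 G H(z) M200)^(1/3) and
    sigma ~ V200 / sqrt(2)."""
    v200 = (10.0 * G * hubble(z) * m200) ** (1.0 / 3.0)
    return v200 / np.sqrt(2.0)

# A ~3e14 Msun halo virialised today gives sigma of roughly 650-700 km/s,
# consistent with the ~500-1000 km/s range quoted for local clusters.
```

The point is that, for any plausible descendant mass, virialised halos at these redshifts sit near the lower peak of the histogram; the $\gtrsim1000\,\mathrm{km\,s^{-1}}$ tail requires unrelaxed, merging systems.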
As discovered by \citet{cucciati18}, protoclusters would bring together many smaller components and form large-scale structures. It is necessary to map out the galaxy distribution beyond the scale of a protocluster in order to discuss how they grow into mature clusters. \section{Conclusion \label{sec:conc}} In this study, we have presented optical follow-up spectroscopy of three overdense regions of $g$- and $r$-dropout galaxies in the CFHTLS Deep Fields. In the overdense region D4GD01, the existence of a protocluster at $z=3.675$ was already confirmed by \citet{toshikawa16}; this study increases the number of confirmed member galaxies of this protocluster to thirteen. In addition, we have newly discovered a protocluster including nine member galaxies at $z=3.721$, at the same sky position as the $z=3.675$ protocluster. As for the overdense region D1RD01, we have confirmed a protocluster at $z=4.898$, composed of six member galaxies. Furthermore, a possible protocluster is found at $z=3.834$ in the overdense region D1GD02, though it is a tentative detection. From these protoclusters, including the possible one, we have obtained the following major implications. In the vicinity of the $z=4.898$ protocluster, there are two small galaxy groups, each including three galaxies. Since the separations between these two groups and the $z=4.898$ protocluster are only $\Delta z\sim0.05$, the two groups are expected to become part of a supercluster at $z=0$, rather than merge into the protocluster to form a single massive dark matter halo. Similarly, in the overdense regions of both D1GD02 and D4GD01, we have found close pair-like structures, whose redshift separations are only $\Delta z\sim0.05$. These results suggest that large-scale galaxy/group assembly comparable to the size of superclusters starts by $z\gtrsim4$, and the primordial satellite components of superclusters appear at $z\sim4\mathrm{-}5$ in parallel with the formation of the central protoclusters.
It should be noted that this conclusion depends on the descendant halo mass: if this protocluster is the progenitor of a significantly rich cluster ($>10^{15}\,\mathrm{M_\sun}$), it could incorporate the neighboring groups into a single halo by $z=0$. For the protocluster pair at $z\sim3.7$ in the D4GD01 overdensity, the detailed internal structures are investigated by fitting a triaxial ellipsoid to the distribution of member galaxies. In this analysis, after carefully considering sampling bias based on theoretical models, we tentatively find that the two protoclusters have different shapes ($1.6\sigma$ significance). The main protocluster, which has thirteen member galaxies, has a pancake-like shape, while the other protocluster, which is located just behind the main protocluster and is composed of nine galaxies, exhibits a filamentary shape. The background protocluster can be divided into three subgroups, which nearly align in three-dimensional space, as suggested by the ellipsoid fitting. This probably indicates that they are on the way to merging along the cosmic web to make a single dark matter halo. On the other hand, the main protocluster would be expected to be more developed than the background protocluster, judging from the number of confirmed member galaxies and its shape. We have also discussed the redshift evolution of protoclusters based on their velocity dispersions, combined with other known protoclusters from the literature. Although there is no significant dependence of protocluster velocity dispersion on redshift, we have found a distribution skewed towards high velocity dispersion. This could be interpreted in terms of two phases of cluster formation, steady galaxy accretion and mergers of galaxy groups, and would be generally consistent with the picture of the hierarchical structure formation model.
Although it is difficult to perform quantitative investigations due to the small and heterogeneous sample of protoclusters, the incidence of mergers and the mass ratios between merging groups will be keys to understanding the formation of galaxy clusters. As shown above, we have characterized the protoclusters from the viewpoints of shape and large-scale structure: the formation of a supercluster starts in the early universe, and the main and background protoclusters at $z\sim3.7$ show different galaxy distributions. However, since our results are derived from a few protoclusters based on the investigation of a single galaxy population, it is difficult to evaluate whether they are representative of all protoclusters. Through multi-wavelength observations of more protoclusters, we will be able to reveal how large-scale structure is built up from the early universe to the present day, which is related to the cosmological parameters and the initial perturbations of the mass density. This is also linked to galaxy evolution across cosmic time, because environment has an important influence on star-formation activity. In the future, we will systematically observe the protoclusters provided by the Hyper Suprime-Cam Subaru Strategic Program \citep{toshikawa18} in order to discuss the dynamical evolution of protoclusters and its relation to the physical properties of member galaxies. \acknowledgments We are grateful to Dr. Brian Lemaux for valuable comments and suggestions that significantly improved the manuscript. The CFHTLS data used in this study are based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/IRFU, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii.
This study is based in part on data products produced at Terapix available at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. This study is also based on data collected at the W. M. Keck telescope, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The W. M. Keck Observatory was made possible by the generous financial support of the W.M. Keck Foundation. We are grateful to the W. M. Keck Observatory staff for their help with the observations, and we wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. This research was supported by the Japan Society for the Promotion of Science through Grants-in-Aid for Scientific Research 18K13575 and 18J01430. NK acknowledges support from the JSPS grant 15H03645, and RAO is grateful for financial support from FAPERJ, CNPq, and FAPESP. \facilities{CFHT (MegaCam), Keck:II (DEIMOS)}
\newcommand{\resection}[1]{\setcounter{equation}{0} \section{#1}} \newcommand{\appsection}[1]{\addtocounter{section}{1} \setcounter{equation}{0} \section*{Appendix \Alph{section}~~#1}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \newcommand{{\cal G}}{{\cal G}} \newcommand{{\mathbb Z}}{{\mathbb Z}} \newcommand{{\mathbb N}}{{\mathbb N}} \newcommand\al{\alpha} \newcommand\be{\beta} \newcommand\ga{\gamma} \newcommand\te{\theta} \newcommand\bzero{\boldsymbol{0}} \newcommand\bmu{\boldsymbol{\mu}} \newcommand\bnu{\boldsymbol{\nu}} \newcommand\brho{\boldsymbol{\rho}} \newcommand\bLambda{\boldsymbol{\Lambda}} \newcommand\blambda{\boldsymbol{\lambda}} \newcommand\rg{r_{\mathfrak{g}}} \newcommand\Rth{{\mathbb R}} \newcommand\ep{\varepsilon} \newcommand{\ivec}[1]{|\,#1\,\rangle\!\rangle} \newcommand{\tmmathbf}[1]{\ensuremath{\boldsymbol{#1}}} \newcommand{\tmop}[1]{\ensuremath{\operatorname{#1}}} \newenvironment{tab}{\linespread{1.0} \begin{table}}{\end{table}% \linespread{1.3}} \begin{document} \begin{titlepage} \vskip 0.5cm \vskip .7cm \begin{center} {\Large{\bf ABCD and ODEs }} \end{center} \vskip 0.8cm \centerline{Patrick Dorey$^1$, Clare Dunning$^2$, Davide Masoero$^3$, Junji Suzuki$^4$ and Roberto Tateo$^5$} \vskip 0.9cm \centerline{${}^1$\sl\small Dept.\ of Mathematical Sciences, University of Durham,} \centerline{\sl\small Durham DH1 3LE, United Kingdom\,} \vskip 0.3cm \centerline{${}^{2}$\sl\small IMSAS, University of Kent, Canterbury, UK CT2 7NF, United Kingdom} \vskip 0.3cm \centerline{${}^{3}$\sl\small SISSA, via Beirut 2-4, 34014 Trieste, Italy} \vskip 0.3cm \centerline{${}^{4}$\sl\small Department of Physics, Shizuoka University, Ohya 836, SURUGA, Shizuoka, Japan.} \vskip 0.3cm \centerline{${}^{5}$\sl\small Dip.\ di Fisica Teorica and INFN, Universit\`a di Torino,} \centerline{\sl\small Via P.\ Giuria 1, 10125 Torino, Italy} \vskip 0.2cm \centerline{E-mails:} \centerline{p.e.dorey@durham.ac.uk, t.c.dunning@kent.ac.uk,} \centerline{ masoero@sissa.it, sjsuzuk@ipc.shizuoka.ac.jp, tateo@to.infn.it}
\vskip 1.25cm \begin{abstract} \noindent We outline a relationship between conformal field theories and spectral problems of ordinary differential equations, and discuss its generalisation to models related to classical Lie algebras. \end{abstract} \end{titlepage} \setcounter{footnote}{0} \def\fnsymbol{footnote}{\fnsymbol{footnote}} \resection{Introduction} The ODE/IM correspondence~\cite{Dorey:1998pt,Bazhanov:1998wj,Suzuki:1999rj,Dorey:2007zx} has established a link between two dimensional conformal field theory (CFT) and generalised spectral problems in ordinary differential and pseudo-differential equations. It is based on an equivalence between transfer matrix eigenvalues~\cite{Bazhanov:1994ft, Bazhanov:1996dr} and Baxter $Q$-functions in integrable models (IMs), and spectral determinants \cite{Sha,Voros} of ordinary differential equations (ODEs). In statistical mechanics, the transfer matrix and its largest eigenvalue -- denoted by $T$ in the following -- are central objects. For example, consider the six-vertex model defined on a square lattice with $N$ columns and $N'$ rows; $T$ can be written in terms of an auxiliary entire function $Q$ through the so-called Baxter $TQ$ relation. Up to an overall constant, $Q$ is completely determined by the knowledge of the positions of its zeros, the Bethe roots, which are constrained by the Bethe ansatz equations (BAE). Subject to some qualitative information on the positions of the Bethe roots, easily deduced by studying systems with small size, the Bethe ansatz leads to a unique set of ground-state roots. In the $N' \rightarrow \infty$ limit the free energy per site $f$ is simply related to $T$ by \eq f \sim -{ 1 \over N} \ln T~. \en In \cite{Bazhanov:1994ft,Bazhanov:1996dr}, Bazhanov, Lukyanov and Zamolodchikov showed how to adapt the same techniques directly to the conformal field theory (CFT) limit of the six-vertex model. 
In this setting, we consider the conformal field theory with Virasoro central charge $c=1$ corresponding to the continuum limit of the six-vertex model, defined on an infinitely-long strip with twisted boundary conditions along the finite size direction. The largest transfer matrix eigenvalue $T$ depends on three independent parameters: the (rescaled) spectral parameter $\nu$, the anisotropy $\eta$ and the twist $\phi$. Defining $E$, $M$, $l$, $\omega$, $\Omega$ through the following relations \eq E=e^{2 \nu},~~\eta={\pi \over 2} {M \over M+1},~~\omega=e^{i {\pi \over M+1}}, ~~\Omega=\omega^{2M},~~\phi={ (2l+1) \pi \over 2M+2} \en the resulting $TQ$ relation is \eq T(E,l,M)Q(E,l,M)=\omega^{-\fract{2l+1}{2}} Q(\Omega E,l,M)+ \omega^{\fract{2l+1}{2}} Q(\Omega^{-1} E,l,M)~. \label{TTQ} \en The Baxter function $Q$ for this largest eigenvalue is fixed by demanding entirety of both $T$ and $Q$, and reality, positivity and `extreme packing' for $l>-1/2$ of the set $\{ E_i\}$ of zeros of $Q$. The BAE follow from the entirety of $T$ and $Q$ via \eq Q(E_i)=0 \Rightarrow T(E_i)Q(E_i)=0 \Rightarrow { Q(\Omega E_i) \over Q(\Omega^{-1} E_i)} =-\omega^{2l+1}~. \label{BA} \en Surprisingly, equations (\ref{TTQ}) and (\ref{BA}) also emerge from an apparently unrelated context: the study of particular spectral problems for the following differential equation \eq \left (\left(\frac{d}{dx} - \frac{l}{x} \right)\left(\frac{d}{dx} + \frac{l}{x}\right) - x^{2M} +E \right)y(x,E,l)=0~, \label{eq5} \en \noindent with $x$ and $E$ possibly complex. To see the emergence of (\ref{BA}) from (\ref{eq5}), we start from the unique solution $\psi(x,E,l)$ of (\ref{eq5}) on the punctured complex plane $x \in \CC\setminus \{0\}$ which has the asymptotic \eq \psi \sim x^{-M/2} \exp(-\fract{1}{M+1}x^{M+1})~,~~~(M>1) \label{asyp} \en as $|x| \rightarrow \infty$ in any closed sector contained in the sector $|\arg x\,|<\frac{3\pi}{2M+2}$. This solution is entire in $E$ and $x$\,. 
{}From $\psi$ we introduce a family of solutions to (\ref{eq5}) using the `Sibuya trick' (also known as `Symanzik rescaling'): \eq \psi_k = \psi(\omega^{k} x,\;\Omega^{k} E,\; \;l)~. \label{symanzik} \en In (\ref{symanzik}), $k$ takes integer values; any pair $\{\psi_k,\psi_{k{+}1} \}$ constitutes a basis of solutions to (\ref{eq5}). An alternative way to characterize a solution to (\ref{eq5}) is through its behaviour near the origin $x=0$. The indicial equation is \eq (\lambda-1-l)(\lambda+l)=0~, \en and correspondingly we can define two (generally) independent solutions \eq \chi^{+}(x,E)=\chi(x,E,l) \sim x^{l+1} +O(x^{l+3})\,, \label{eq9} \en and $\chi^{-}(x,E)=\chi(x,E,-l-1)$, which transform trivially under Symanzik rescaling as \eq \chi^{+}_k=\chi^{+}(\omega^{k} x,\Omega^{k} E)= \omega^{(l+1)k} \chi^{+}(x,E)~. \label{trs} \en The trick is now to rewrite $\chi^{+}_0=\chi^{+}( x,E)$ respectively in terms of the basis $\{\psi_0, \psi_1 \}$ and $\{\psi_{-1}, \psi_0 \}$: \bea 2 i\chi^{+}_0&{=}&\omega^{-l-\frac{1}{2}} Q( \Omega E)\psi_0 -Q(E) \omega^{-\frac{1}{2}} \psi_{1} \label{r1} \\ \hspace{-15pt} 2i \chi^{+}_0=2i\omega^{l+1}\chi^{+}_{-1}&{=}& \omega^{\frac{1}{2}}Q(E) \psi_{-1}-\omega^{l+\frac{1}{2}}Q( \Omega^{-1} E)\psi_0 \quad \label{r2} \eea where the coefficients have been fixed by consistency among (\ref{r1}), (\ref{r2}) and (\ref{trs}) and \eq Q(E,l) =W[\psi_0,\chi^+_0]~. \en Here $W[f,g]=f \frac{dg}{dx}-g \frac{df}{dx}$ denotes the Wronskian of $f$ and $g$. Taking the ratio (\ref{r1})/(\ref{r2}) evaluated at a zero $E{=}E_i$ of $Q$ leads immediately to the Bethe ansatz equations (\ref{BA}) without the need to introduce the $TQ$ relation, though in this case it can be done very easily (see, for example, the recent ODE/IM review article \cite{Dorey:2007zx}). Correspondingly, at such a zero $\chi$ becomes subdominant as $x \rightarrow \infty$ on the positive real axis: $\chi(x,E_i,l) \propto \psi(x,E_i,l)$.
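For completeness, the indicial equation quoted above follows by acting with the second-order operator on a pure power of $x$:
\[
\left(\frac{d}{dx} - \frac{l}{x} \right)\left(\frac{d}{dx} + \frac{l}{x}\right) x^{\lambda} = (\lambda+l)(\lambda-1-l)\, x^{\lambda-2}~;
\]
since the $x^{2M}$ and $E$ terms of (\ref{eq5}) are subleading as $x \rightarrow 0$, the admissible leading exponents satisfy $(\lambda-1-l)(\lambda+l)=0$, that is $\lambda = l+1$ or $\lambda=-l$, matching the two solutions $\chi^{\pm}$.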
The motivation for dealing with $\chi$, instead of $\psi$ (\ref{asyp}), is two-fold. Firstly, $\chi$ can be obtained by applying the powerful and numerically efficient iterative method proposed by Cheng many years ago~\cite{cheng:1962} in the context of Regge pole theory, and applied to spectral problems of this sort in \cite{DDTb}. To this end we introduce the linear operator $L$, defined through its formal action \eq L[x^p] = \frac{x^{p+2}}{ (p+l+2)(p-l+1) }\,. \en So for any polynomial ${\cal P}(x)$ in $x$\,, \eq \left(\frac{d}{dx} - \frac{l}{x} \right)\left(\frac{d}{dx} + \frac{l}{x}\right) L[{\cal P}(x)]= {\cal P}(x)~, \label{dna} \en and the basic differential equation (\ref{eq5}), with the boundary conditions (\ref{eq9}) at the origin, is equivalent to \eq \chi(x,E,l) = x^{l+1} + L\left[ (x^{2M}-E) \chi (x,E,l) \right]~. \label{chia} \en Equation (\ref{chia}) is solvable by iteration and it allows the predictions of the ODE/IM correspondence to be checked with very high precision. The initial results of~\cite{Dorey:1998pt,Bazhanov:1998wj,Suzuki:1999rj} connected conformal field theories associated with the Lie algebra $A_1$ to (second-order) ordinary differential equations. The generalisation to $A_{n-1}$-models was established in \cite{Suzuki:1999hu, Dorey:2000ma} but it was only recently~\cite{Dorey:2006an} that the ODE/IM correspondence was generalised to the remaining classical Lie algebras $B_n$, $C_n$ and $D_n$. Our attempts to derive generalised $TQ$ relations from the proposed set of pseudo-differential equations were unsuccessful, but a series of well-motivated conjectures led us directly to the BAE, allowing us to establish the relationship between BAE and pseudo-differential equation parameters. Moreover, while the numerics to calculate the analogues of the functions $\psi$ turned out to be very costly in CPU time, the generalisation of Cheng's method proved very efficient and allowed very high precision tests to be performed.
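The iteration is straightforward to script. The sketch below is our own illustration, not code from the original work: $\chi$ is stored as a dictionary mapping powers of $x$ to coefficients, and the inverse operator is normalised as $L[x^p]=x^{p+2}/\big((p+l+2)(p-l+1)\big)$, the normalisation for which the operator identity $D(-l)D(l)L[{\cal P}]={\cal P}$ holds monomial by monomial:

```python
def cheng_solve(E, l, M, n_iter=10):
    """Cheng-style iteration for chi(x,E,l) = x^{l+1} + L[(x^{2M} - E) chi],
    with L[x^p] = x^{p+2} / ((p+l+2)(p-l+1)).  chi is a dict: power -> coefficient."""
    chi = {l + 1.0: 1.0}                       # zeroth iterate: x^{l+1}
    for _ in range(n_iter):
        prod = {}                              # multiply by the potential (x^{2M} - E)
        for p, c in chi.items():
            prod[p + 2 * M] = prod.get(p + 2 * M, 0.0) + c
            prod[p] = prod.get(p, 0.0) - E * c
        chi = {l + 1.0: 1.0}                   # apply L term by term, add back x^{l+1}
        for p, c in prod.items():
            chi[p + 2.0] = chi.get(p + 2.0, 0.0) + c / ((p + l + 2.0) * (p - l + 1.0))
    return chi

def evaluate(chi, x):
    """Evaluate the truncated series at a point x > 0."""
    return sum(c * x ** p for p, c in chi.items())
```

For instance, with $l=0$, $M=1$ and $E=0$ the first iterations reproduce the series $\chi = x + x^5/20 + x^9/1440 + \dots$ of $y''=x^2 y$; for large $|x|$ or $|E|$, correspondingly more iterations are needed before the truncation converges.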
This is our second main reason to deal with solutions defined through the behaviour about $x=0$, rather than $x=\infty$. \resection{Bethe ansatz for classical Lie algebras} \label{BAe} For any classical Lie algebra $\mathfrak{g}$, conformal field theory Bethe ansatz equations depending on a set of $rank(\mathfrak{g})$ twist parameters $ \gamma{=}\{ \gamma_a \}$ can be written in a compact form as \eq \prod_{ b=1}^{rank(\mathfrak{g})} \Omega^{B_{ab}\gamma_b}_{\phantom a} \frac {Q^{(b)}_{B_{ab}}(E^{(a)}_{i},\gamma)} {Q^{(b)}_{-B_{ab}}(E^{(a)}_{i},\gamma)}= -1\,,\qquad i=0,1,2,\dots~ \label{dall0} \en where $ Q^{(a)}_k(E,\gamma)=Q^{(a)}(\Omega^k E,\gamma), $ and the numbers $E^{(a)}_i$ are the (in general complex) zeros of the functions $Q^{(a)}$. In (\ref{dall0}) the indices $a$ and $b$ label the simple roots of the Lie algebra $\mathfrak{g}$, and \eq B_{ab}= { (\alpha_a, \alpha_b) \over |\hbox{\rm long roots}|^2}~,~~~a,b=1,2,\dots,rank(\mathfrak{g}) \label{cab} \en where the $\alpha$'s are the simple roots of $\mathfrak{g}$. The constant $\Omega=\exp \left(i {2\pi \over h^{\vee} \mu} \right)$ is a pure phase, $\mu$ is a positive real number and $h^{\vee}$ is the dual Coxeter number. It turns out that the Bethe ansatz roots generally split into multiplets (strings) with approximately equal modulus $|E_i^{(a)}|$. The ground state of the model corresponds to a configuration of roots containing only multiplets with a common dimension $ d_a=K/ B_{aa}$; the model-dependent integer $K$ corresponds to the degree of fusion~(see for example \cite{Kulish:1981gi}). \resection{ The pseudo-differential equations} To describe the pseudo-differential equations corresponding to the $A_{n-1}$, $B_n$, $C_n$ and $D_n$ simple Lie algebras we first introduce some notation. 
We need an $n^{\rm th}$-order differential operator~\cite{Dorey:2000ma} \eq D_n({\bf g})=D(g_{n-1}-(n{-}1))\,D(g_{n-2}-(n{-}2))\,\dots\, D(g_1-1)\,D(g_0)~, \label{dfactdef} \en \eq D(g)=\left(\frac{d}{dx}-\frac{g}{x}\right)~, \en depending on $n$ parameters \eq {\bf g} {=}\{g_{n-1}, \dots,g_1, g_0 \}~~~,~~{\bf g^{\dagger}} {=}\{ n-1-g_0, n-1-g_1, \dots, n-1 -g_{n-1} \}~. \label{conj} \en Also, we introduce an inverse differential operator $(d/dx)^{-1}$, generally defined through its formal action \eq \left( { d \over dx} \right)^{-1} x^{s}= {x^{s+1} \over s+1}~, \label{def00} \en and we replace the simple `potential' $P(E,x)=(x^{2M}-E)$ of (\ref{eq5}) with \eq P_K(E,x)= ( x^{h^{\vee} M/K}-E)^K~. \label{pk} \en Using the notation of Appendix B in~\cite{Dorey:2006an} the proposed pseudo-differential equations are reported below. \\ {\bf \large $A_{n-1}$ models:}\\ \nobreak \noindent The $A_{n-1}$ ordinary differential equations are \eq D_n({\bf g^\dagger})\chi^{\dagger}_{n-1}(x,E)= P_K(x,E)\chi^{\dagger}_{n-1}(x,E)~, \label{sunnq} \en \goodbreak \noindent with the constraint $\sum_{i=0}^{n-1}g_i{=}\frac{n(n{-}1)}{2}$~ and the ordering $g_i < g_j< n-1,~\forall \ i<j$~. We introduce the alternative set of parameters $\gamma=\gamma({\bf g})=\{ \gamma_a({\bf g}) \}$ \eq \gamma_a= {2 K \over h^\vee M} \left(\sum_{i=0}^{a-1} g_i - {a(h^{\vee}-1) \over 2} \right)~. \en \\ The solution $\chi^{\dagger}_{n-1}(x,E)$ is specified by its $x\sim 0$ behaviour \eq \chi_{n-1}^{\dagger} \sim x^{n-1-g_0}+\hbox{subdominant terms,}~~~ (x \rightarrow 0^{+})~. \label{x=0} \en In general, this function grows exponentially as $x$ tends to infinity on the positive real axis. 
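Note that each factor acts simply on monomials, $D(g)\,x^{s}=(s-g)\,x^{s-1}$, so the action of $D_n({\bf g})$ on a power of $x$ can be scripted in a few lines (an illustrative sketch of ours, not taken from the original work):

```python
def apply_Dn(g, s):
    """Apply D_n(g) = D(g_{n-1}-(n-1)) ... D(g_1-1) D(g_0) to the monomial x^s,
    using D(g) x^s = (s - g) x^{s-1}, with g = [g_0, g_1, ..., g_{n-1}].
    Returns (coefficient, power) of the resulting monomial."""
    coeff, p = 1.0, float(s)
    for k, gk in enumerate(g):        # the rightmost factor D(g_0) acts first
        coeff *= p - (gk - k)
        p -= 1.0
    return coeff, p
```

For example, ${\bf g}=(0,1)$ gives $D(0)D(0)=d^2/dx^2$, while a monomial $x^{g_0}$ is annihilated by the innermost factor, reflecting how the exponents in ${\bf g}$ control the behaviour of solutions near $x=0$.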
In Appendix B of \cite{Dorey:2006an}, it was shown that the coefficient in front of the leading term is, up to an irrelevant overall constant, precisely the function $Q^{(1)}(E,\gamma)$ appearing in the Bethe ansatz, that is \eq \chi_{n-1}^\dagger \sim Q^{(1)}(E,\gamma({\bf g})) \;x^{(1-n){M \over 2}} e^{ {x^{M+1} \over M+1} }+\hbox{subdominant terms,} ~~~ (x \rightarrow \infty)~. \en Therefore, the set of Bethe ansatz roots \eq \{E_i^{(1)} \} \leftrightarrow Q^{(1)}(E_i^{(1)},\gamma)=0 \en coincides with the discrete set of $E$ values in (\ref{sunnq}) such that \eq \chi_{n-1}^\dagger = o \left(x^{(1-n){M \over 2}} e^{ {x^{M+1} \over M+1} } \right)~,~~~ (x \rightarrow \infty)~. \en This condition is equivalent to the requirement of absolute integrability of \eq \left ( x^{(n-1){M \over 2}} e^{ -{x^{M+1} \over M+1} } \right) \chi_{n-1}^\dagger(x,E) \en on the interval $[0,\infty)$. It is important to stress that the boundary problem defined above for the function $\chi_{n-1}^\dagger$ (\ref{x=0}) is in general different from the one discussed in Sections 3 and 4 of \cite{Dorey:2006an} involving $\psi(x,E)$. The latter function is instead a solution to the adjoint equation of (\ref{sunnq}), characterised by recessive behaviour at infinity. Surprisingly, the two problems are spectrally equivalent and lead to identical sets of Bethe ansatz roots. \vspace{0.5cm} {\bf \large $D_{n}$ models:}\\ \noindent The $D_{n}$ pseudo-differential equations are \eq D_{n}({\bf g^{\dagger}}) \left( \frac{d}{dx} \right)^{-1} D_{n}({\bf g}) \chi_{2n-1}(x,E)=\sqrt{P_K (x,E)} \left(\frac{d}{dx} \right) \sqrt{P_K (x,E)} \,\chi_{2n-1}(x,E)~.
\label{so2n02} \en\\ Fixing the ordering $g_i<g_j<h^{\vee}/2$, the ${\bf g} \leftrightarrow \gamma$ relationship is \eq \gamma_a= \frac{2K}{h^\vee M}\left( \sum_{i=0}^{a-1} g_i - {a \over 2} h^{\vee} \right)~,~~(a=1,\dots, n-2) \en \eq \gamma_{n-1}= \frac{K}{h^\vee M} \left( \sum_{i=0}^{n-1} g_i - {n \over 2} h^{\vee} \right),~\gamma_{n}= {K \over h^\vee M} \left( \sum_{i=0}^{n-2} g_i - g_{n-1} - {n-2 \over 2} h^{\vee} \right)~. \en The solution is specified by requiring \eq \chi_{2n-1} \sim x^{h^\vee-g_0}+\hbox{subdominant terms,}~~~(x \rightarrow 0^{+})~, \en \eq \chi_{2n-1} \sim Q^{(1)}(E,\gamma({\bf g})) \;x^{-h^\vee{M \over 2}} e^{ {x^{M+1} \over M+1} }+\hbox{subdominant terms,}~~~(x \rightarrow \infty). \en\\ Figure~\ref{figwav} illustrates $ \Psi(x,E)=x^{h^\vee {M \over 2}} e^{ -{x^{M+1} \over M+1} } \chi_{2n-1}(x,E)$ for the first three eigenvalues of the $D_4$ pseudo-differential equation defined by $K{=}1,M=1/3$ and {\bf g}=(2.95,2.3,1.1,0.2)~. \begin{figure} \centering \includegraphics[width=4.2cm]{d4p3e0.eps} \ \ \ \includegraphics[width=4.2cm]{d4p3e1.eps} \ \ \ \includegraphics[width=4.2cm]{d4p3e2.eps} \caption{Lowest three functions $\Psi(x,E)$ for a $D_4$ pseudo-differential equation.\label{figwav}} \end{figure} \\ {\bf \large $B_{n}$ models:}\\ \noindent The $B_{n}$ ODEs are \eq D_{n}({\bf g^{\dagger}}) D_{n}({\bf g}) \chi^{\dagger}_{2n-1}(x,E) = \sqrt{P_K (x,E)} \left(\frac{d}{dx} \right) \sqrt{P_K (x,E)} \chi^{\dagger}_{2n-1}(x,E)~. \label{so2n102a} \en\\ With the ordering $g_i<g_j<h^{\vee}/2$, the ${\bf g} \leftrightarrow \gamma$ relation is \eq \gamma_a= {2 K \over h^\vee M} \left( \sum_{i=0}^{a-1} g_i - {a \over 2} h^{\vee} \right)~. 
\en The asymptotic behaviours about $x=0$ and $x=\infty$ are respectively \eq \chi_{2n-1}^{\dagger} \sim x^{h^\vee-g_0}+\hbox{subdominant terms,}~~~(x \rightarrow 0^{+})~, \en and \eq \chi_{2n-1}^\dagger \sim Q^{(1)}(E,\gamma({\bf g})) \; x^{-h^\vee{M \over 2}} e^{ {x^{M+1} \over M+1} }+\hbox{subdominant terms,} ~~~(x \rightarrow \infty)~. \en\\ {\bf \large $C_{n}$ models:}\\ \noindent The pseudo-differential equations associated to the $C_{n}$ systems are \eq D_{n}({\bf g^{\dagger}}) \left(\frac{d}{dx} \right)D_{n}({\bf g})\,\chi_{2n+1}(x,E) = P_{K}(x,E) \left( {d \over dx} \right)^{-1} P_{K}(x,E) \,\chi_{2n+1}(x,E) \label{sp2n02} \en\\ with the ordering $g_i<g_j<n$. The relation between the $g$'s and the twist parameters in the BAE is \eq \gamma_a= {2 K \over h^\vee M} \left(\sum_{i=0}^{a-1} g_i - a n \right),~\gamma_n= { K \over h^\vee M} \left( \sum_{i=0}^{n-1} g_i - n^2 \right) \en and \eq \chi_{2n+1}^{\dagger} \sim x^{2n-g_0}+\hbox{subdominant terms,}~~~(x \rightarrow 0^{+})~, \en \eq \chi_{2n+1}^\dagger \sim Q^{(1)}(E,\gamma) x^{-nM} e^{ {x^{M+1} \over M+1} }+\hbox{subdominant terms,}~~~ (x \rightarrow \infty)~. \en\\ Using a generalisation of Cheng's algorithm, the zeros of $Q^{(1)}(E,\gamma)$ can be found numerically and shown to match the appropriate Bethe ansatz roots~\cite{Dorey:2006an}. In general, the `spectrum' of a pseudo-differential equation may be either real or complex. In the $A_{n-1}$, $B_n$, $D_n$ models with $K{=}1$\footnote{ The $C_n$ spectrum is complex for any integer $K{\ge}1$.}, the special choice $g_i=i$ leads to pseudo-differential equations with real spectra, a property which is expected to hold for a range of the parameters ${\bf g}$ (see, for example, \cite{Dorey:2000ma}). The $K{>}1$ generalisation of the potential (\ref{pk}), proposed initially by Lukyanov for the $A_1$ models \cite{Luk-private} but expected to work for all models, introduces a new feature.
The eigenvalues corresponding to the $K{=}2$, $K{=}3$ and $K{=}4$ cases of the $SU(2)$ ODE are illustrated in figure~\ref{figpairs}. \begin{figure} \centering \includegraphics[width=4.1cm]{abcdk2m3.eps} \ \ \ \ \includegraphics[width=4.1cm]{abcdk3m3.eps} \ \ \ \ \includegraphics[width=4.1cm]{abcdk4m3.eps} \caption{Complex $E$-plane: the eigenvalues for the $SU(2)$ model with $M=3$, $g_0=0$ for $K=2$, $3$ and $4$ respectively.\label{figpairs}} \end{figure} The interesting feature appears if we instead plot the logarithms of the eigenvalues, as in figure~\ref{twostrings}. We see that the logarithms of the eigenvalues form `strings', a well-known feature of integrable models. The string solutions approximately lie along lines in the complex plane, the deviations away from which can be calculated \cite{Dorey:2006an} using either WKB techniques, or by studying the asymptotics of the Bethe ansatz equations directly. \begin{figure} \centering \includegraphics[width=4.1cm]{abcdk2m3log.eps} \ \ \ \ \includegraphics[width=4.1cm]{abcdk3m3log.eps} \ \ \ \ \includegraphics[width=4.1cm]{abcdk4m3log.eps} \caption{Complex $(\ln E)$-plane: two-, three- and four-strings. \label{twostrings}} \end{figure} To end this section, we would like to comment briefly on the motivation behind the conjectured pseudo-differential equations of $B_n$, $C_n$ and $D_n$ type. Modulo the generalisation to $K{>}1$, the $A_{n-1}$ type ODEs were derived in \cite{Dorey:2000ma}. We began with the $D_3$ case since it coincides up to relabelling with $A_3$, implying that the $D_3$ function $Q^{(1)}(E,\gamma)$ coincides with the $A_3$ function $Q^{(2)}(E,\gamma)$. Fortunately, the latter is known~\cite{Dorey:2000ma} to encode the spectrum of a differential equation satisfied by the Wronskian of two solutions of the $Q^{(1)}$-related ODE. The generalisation to $D_n$ models with larger $n$ was then clear.
Further supporting evidence came from a relationship between certain $D_n$ lattice models and the sine-Gordon model, which appears as an $SU(2)$ problem. This relationship also extends to a set of $B_n$ models, and leads naturally to the full $B_n$ proposal. Finally, the $C_n$ proposal arose from the $B_n$ cases via a consideration of negative-dimension W-algebra dualities~\cite{Hornfeck:1994is}. Numerical and analytical tests provided further evidence for the connection between these spectral problems and the Bethe ansatz equations for the classical Lie algebras. \section{Conclusions} \label{conclusions} The link between integrable models and the theory of ordinary differential equations is an exciting mathematical fact that has the potential to influence the future development of integrable models and conformal field theory, as well as some branches of classical and modern mathematics. Perhaps the most surprising aspect of the functions $Q$ and $T$, only briefly discussed in this short note, is their variety of possible interpretations: transfer matrix eigenvalues of integrable lattice models in their CFT limit~\cite{Bazhanov:1994ft,Bazhanov:1996dr}, spectral determinants of Hermitian and PT-symmetric~\cite{BB,BBN} spectral problems~(see for example \cite{DDTb}), g-functions of CFTs perturbed by relevant boundary operators~\cite{Bazhanov:1994ft, Dorey:1999cj}, and particular expectation values in the quantum problem of a Brownian particle~\cite{Bazhanov:1998za}. Further, the (adjoint of the) operators (\ref{sunnq}), (\ref{so2n02}), (\ref{so2n102a}) and (\ref{sp2n02}) resemble in form the Miura-transformed Lax operators introduced by Drinfel'd and Sokolov in the context of generalised KdV equations, studied more recently in relation to the geometric Langlands correspondence~\cite{Mukhin:2002fp2, Frenkel:2005fr}. Clarifying this connection is an interesting open task. 
Here we finally observe that the proposed equations respect the well-known Lie algebra relations $D_2 \sim A_1 \oplus A_1$, $A_3 \sim D_3$, $B_1 \sim A_1$, $B_2 \sim C_2$. Also, at special values of the parameters the $C_n$ equations are formally related to the $D_n$ ones by the analytic continuation $n \rightarrow -n$, matching an interesting W-algebra duality discussed by Hornfeck in~\cite{Hornfeck:1994is}: \eq {(\hat{D}_{-n})_K \times (\hat{D}_{-n})_L \over (\hat{D}_{-n})_{K+L}} \sim { (\hat{C}_{n})_{-K/2} \times (\hat{C}_{n})_{-L/2} \over (\hat{C}_{n})_{-K/2 -L/2}}\,. \label{dualdc} \en The relationship between our equations and coset conformal field theories is another aspect worth investigating. We shall return to this point in a forthcoming publication. \medskip \noindent{\bf Acknowledgements --} RT thanks Vladas Sidoravicius, Fedor Smirnov and all the organizers of the conference $M{\cap}\Phi$-- ICMP 2006 in Rio de Janeiro for the invitation to talk at the conference and for the kind hospitality. JS thanks the Ministry of Education of Japan for a `Grant-in-aid for Scientific Research', grant number 17540354. This project was also partially supported by the European network EUCLID (HPRN-CT-2002-00325), INFN grant TO12, NATO grant number PST.CLG.980424 and The Nuffield Foundation grant number NAL/32601, and a grant from the Leverhulme Trust.
\section{Introduction} \label{sec:intro} Designing biological sequences entails searching over vast combinatorial design spaces. Recently, deep sequence generation models trained on large datasets of known, functional sequences have shown promise in generating physically and chemically plausible designs \citep[e.g.][]{biswas2021low,das2021accelerated,gligorijevic2021function}. Whereas these models accelerate the design process, limited resources place a cap on how many designs we can characterize \textit{in vitro} for assessing their suitability. Only once a design is validated \textit{in vitro} and undergoes multiple rounds of optimization can it proceed down the drug development pipeline to preclinical development and clinical trials, where its performance is tested \textit{in vivo}. Because the wet lab cannot provide feedback on all of the candidate designs, we take an iterative, data-driven approach to select the most informative subset to submit to the wet lab. Many drug design applications call for such an active learning approach, as the initial datasets available to train predictive models on our desired properties of interest tend to be small or nonexistent. The measurements returned by the lab in each iteration are appended to our training set and we update our models using the augmented dataset for the next iteration. The wet lab's measurement process can be viewed as a black-box function that is expensive to evaluate. In the context of identifying designs maximizing this function, Bayesian optimization (BO) emerges as a promising, sample-efficient framework that trades off exploration (evaluating highly uncertain designs) and exploitation (evaluating designs believed to carry the best properties) in a principled manner \citep{jones1998efficient}. It relies on a probabilistic surrogate model that infers the posterior distribution over the objectives and an acquisition function that assigns an expected utility value to each candidate.
BO has been successfully applied to a variety of protein engineering applications \citep{pyzer2018bayesian,bellamy2022batched,stanton2022accelerating}. In particular, we cast our problem as multi-objective BO, where multiple objectives are jointly optimized. Our objectives originate from the molecular properties evaluated during \textit{in vitro} validation. This validation process involves producing the design, confirming its pharmacology, and evaluating whether it is active against a given drug target of interest. If found to be potent, the design is then assayed for \textit{developability} attributes---physicochemical properties that characterize its safety, delivery, and manufacturability \citep{jarasch2015developability}. The experimental process of \textit{in vitro} validation imposes a hierarchy among the objectives. Consider the property ``expression'' in the context of antibody design, for instance. A designed antibody candidate must first be expressed in live cell culture. If the level of expression does not meet a fixed threshold, the lab cannot produce it and it cannot be assayed for potency and developability downstream. Supposing now that a design did express in viable amounts, if it does not bind to a target antigen with sufficient ``affinity'' (and is thus not potent), then the design fails as an antibody and there is little practical incentive in assaying it for developability (such as specificity and thermostability)---even if it is possible to do so. The dependency between properties, whether experimental or biological in origin, motivates us to prioritize some objectives before others when selecting the subset of designs to submit to the wet lab. Our primary goal is to identify ``joint positive'' designs, designs that meet the chosen thresholds in all the parent objectives (expressing binders) according to the specified partial ordering and also perform well in the leaf-level objectives (high specificity, thermostability).
To this end, we propose PropertyDAG, a simple framework that operates on top of the traditional multi-objective BO to impose a desired partial ordering on the objectives, e.g. expression $\rightarrow$ affinity $\rightarrow$ $\{$ specificity, thermostability$\}$. Our framework modifies the posterior inference procedure within standard BO in two ways. First, we treat the objectives as \textit{mixed-variable}---in particular, each objective is modeled as a mixture of zeros and a wide dispersion of real-valued, non-zero values. The surrogate model consists of a binary classifier and a regressor, which infer the zero mode and the non-zero mode, respectively. We show that this modeling choice is well-suited for biological properties, which tend to carry excess zero, or null, values and fat tails \citep{jain2017biophysical}. Second, before samples from the posterior distribution inferred by the surrogate model enter the acquisition function, we transform the samples such that they conform to the specified partial ordering of properties. We apply multi-objective BO with PropertyDAG, over multiple simulated active learning iterations, to a penicillin production task, a toy numerical problem, and a real-world antibody design task. In all three tasks, PropertyDAG-BO identifies significantly more joint positives compared to standard BO. After the final iteration, the surrogate models trained under PropertyDAG-BO also output more accurate predictions on the joint positives in a held-out test set than do the standard BO equivalents. \section{Background} \label{sec:bg} \textbf{Bayesian optimization} (BO) is a popular technique for sample-efficient black-box optimization \cite[see][for a review]{shahriari2015taking, frazier2018tutorial}. Suppose our objective $f: \mathcal{X} \rightarrow \mathbb{R}$ is a black-box function of the design space $\mathcal{X}$ that is expensive to evaluate.
Our goal is to efficiently identify a design $\bm{x}^\star \in \mathcal{X}$ maximizing\footnote{ For simplicity, we define the task as maximization in this paper without loss of generality. For minimizing $f$, we can negate $f$, for instance. } $f$. BO leverages two tools, a probabilistic surrogate model and a utility function, to trade off exploration (evaluating highly uncertain designs) and exploitation (evaluating designs believed to maximize $f$) in a principled manner. For each iteration $t \in \mathbb{N}$, we have a dataset $\mathcal{D}_t = \{( \bm{x}^{(1)}, {y}^{(1)} ), \cdots, ( \bm{x}^{(N_t)}, {y}^{(N_t)} )\} \in \mathscr{D}_t$, where each ${y}^{(n)}$ is a noisy observation of $f$. First, the probabilistic model $\hat f: \mathcal{X} \rightarrow \mathbb{R}$ infers the posterior distribution $p(\hat f | \mathcal{D}_t)$, quantifying the plausibility of surrogate objectives $\hat f \in \mathcal{F}$. Next, we introduce a utility function $u: \mathcal{X} \times \mathcal{F} \times \mathscr{D}_t \rightarrow \mathbb{R}$. The acquisition function $a(\bm x)$ is simply the expected utility of $\bm x$ w.r.t. our beliefs about $f$, \begin{align} a(\bm x) = \int u(\bm x, \hat f, \mathcal{D}_t)p(\hat f | \mathcal{D}_t) d\hat f. \end{align} For example, we obtain the expected improvement (EI) acquisition function if we take $ u_{\mathrm{EI}}(\bm x, \hat f, \mathcal{D}) = [\hat f(\bm x) - \max_{(\bm x', y') \in \mathcal{D}} y']_+,$ where $[\cdot]_+ = \max(\cdot, 0)$ \citep{movckus1975bayesian,jones1998efficient}. Generally the integral is approximated by Monte Carlo (MC) with posterior samples $\hat{f}^{(j)} \sim p ( \hat f|\mathcal{D}_t )$. We select a maximizer of $a$ as the new design, measure its properties, and append the observation to the dataset. The surrogate is then retrained on the augmented dataset and the procedure repeats. \paragraph{Multi-objective optimization} When there are multiple objectives of interest, a single best design may not exist.
Suppose there are $K$ objectives, $f: \mathcal{X} \rightarrow \mathbb{R}^K$. The goal of multi-objective optimization (MOO) is to identify the set of \textit{Pareto-optimal} solutions such that improving one objective within the set leads to worsening another. We say that $\bm x$ dominates $\bm x'$, or ${f}(\bm{x}) \succ {f}(\bm{x}')$, if $f_k(\bm{x}) \geq f_k(\bm{x}')$ for all $k \in \{1, \dotsc, K\}$ and $f_k(\bm x) > f_k(\bm x')$ for some $k$. The set of \textit{non-dominated} solutions $\mathscr{X}^*$ is defined in terms of the Pareto frontier (PF) $\mathcal{P}^*$, \begin{align} \label{eq:pareto} \mathscr{X}^\star = \{\bm{x}: f(\bm{x}) \in \mathcal{P}^\star\}, \hspace{4mm} \text{where } \mathcal{P}^\star = \{f(\bm{x}) \: : \: \bm x \in \mathcal{X}, \; \nexists \: \bm{x}' \in \mathcal{X} \textit{ s.t. } f(\bm{x}') \succ f(\bm{x}) \}. \end{align} MOO algorithms typically aim to identify a finite approximation to $\mathscr{X}^\star$, which may be infinite, within a reasonable number of iterations. One way to measure the quality of an approximate PF $\mathcal{P}$ is to compute the hypervolume ${\rm HV}(\mathcal{P} | \bm{r}_{\rm ref})$ of the polytope bounded by $\mathcal{P} \cup \{\bm r_{\mathrm{ref}}\}$, where $\bm r_{\mathrm{ref}} \in \mathbb{R}^K$ is a user-specified \textit{reference point}. We obtain the expected hypervolume improvement (EHVI) acquisition function if we take \begin{align} u_{\mathrm{EHVI}}(\bm x, \hat f, \mathcal{D}) = {\rm HVI}(\mathcal{P}', \mathcal{P} | \bm{r}_{\rm ref}) = [{\rm HV}(\mathcal{P}' | \bm{r}_{\rm ref}) - {\rm HV}(\mathcal{P} | \bm{r}_{\rm ref})]_+, \label{eq:ehvi} \end{align} where $\mathcal{P}' = \mathcal{P} \cup \{\hat f(\bm x)\}$ \citep{emmerich2005single,emmerich2011hypervolume}. \paragraph{Noisy observations} In the noiseless setting, the observed baseline PF is the true baseline PF, i.e. $\mathcal{P}_t = \{\bm{y}: \bm{y} \in \mathcal{Y}_t, \: \nexists \: \bm{y}' \in \mathcal{Y}_t \textit{ s.t. 
} \bm{y}' \succ \bm{y} \}$ where $\mathcal{Y}_t \coloneqq \{\bm{y}^{(n)}\}_{n=1}^{N_t}$. This does not, however, hold in many practical applications, where measurements carry noise. For instance, given a zero-mean Gaussian measurement process with noise covariance $\Sigma$, the feedback for a design $\bm{x}$ is $\bm{y} \sim \mathcal{N}\left( {f}(\bm{x}), \Sigma \right)$, not $f(\bm{x})$ itself. The \textit{noisy} expected hypervolume improvement (NEHVI) acquisition function marginalizes over the surrogate posterior at the previously observed points $\mathcal{X}_t = \{\bm{x}^{(n)}\}_{n=1}^{N_t}$, \begin{align} u_{\mathrm{NEHVI}}(\bm x, \hat f, \mathcal{D}) = {\rm HVI}(\hat{\mathcal{P}}_t', \hat{\mathcal{P}}_t | \bm{r}_{\rm ref}) \label{eq:nehvi}, \end{align} where $\hat{\mathcal{P}}_t = \{\hat f(\bm{x}) \: : \: \bm x \in \mathcal{X}_t, \; \nexists \: \bm{x}' \in \mathcal{X}_t \textit{ s.t. } \hat f(\bm{x}') \succ \hat f(\bm{x}) \}$ and $\hat{\mathcal{P}}' = \hat{\mathcal{P}} \cup \{\hat{f}(\bm{x})\}$ \citep{daulton2021parallel}. \paragraph{Batched (parallel) optimization} Sequential optimization, or querying $f$ for one design per iteration, is impractical for many applications due to the latency in feedback. In protein engineering, for example, it may be necessary to select a batch of designs in a given iteration and wait several months to receive measurements \citep{mayr2009novel,sinai2020primer}. Jointly selecting a batch of $q$ designs from a large pool of $q' \gg q$ candidates requires combinatorial evaluations of the acquisition function. \section{Related Work} \label{sec:related_work} Existing work on multi-objective BO does not account for objectives with a hierarchical dependency structure \citep{gelbart2014bayesian,wada2019bayesian,yang2019multi,daulton2020differentiable,daulton2021parallel}. We refer to \cite{astudillo2021bayesian} for a formulation of single-objective BO with a hierarchy in how the objective is computed. 
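For concreteness, the dominance test and the (two-objective) hypervolume used throughout Section~\ref{sec:bg} can be sketched in a few lines. This illustrative snippet is ours, not an implementation from the literature; it assumes maximization and a reference point dominated by every point on the front:

```python
import numpy as np

def pareto_mask(Y):
    """Boolean mask selecting the non-dominated rows of Y (maximization)."""
    n = Y.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(Y, i, axis=0)
        # row i is dominated if some other row is >= everywhere and > somewhere
        mask[i] = not np.any(
            np.all(others >= Y[i], axis=1) & np.any(others > Y[i], axis=1)
        )
    return mask

def hypervolume_2d(front, ref):
    """Hypervolume dominated by a 2-D non-dominated front, bounded below by ref."""
    pts = sorted(map(tuple, front))   # ascending in objective 0, descending in 1
    hv, x_prev = 0.0, ref[0]
    for x, y in pts:
        hv += (x - x_prev) * (y - ref[1])
        x_prev = x
    return hv
```

For example, the front $\{(1,3),(2,2),(3,1)\}$ with reference point $(0,0)$ has hypervolume $6$.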
A body of work focuses on constrained optimization, which optimizes a black-box function subject to a set of black-box constraints being satisfied \cite{wu2016parallel,gardner2014bayesian,hernandez2016general,ginsbourger2010kriging,letham2019constrained, malkomes2021beyond}. For dealing with mixed-variable objectives, \cite{daulton2022bayesian} propose reparameterizing the discrete random variables in terms of continuous parameters. Our approach here is to model them explicitly using the zero-inflated formalism. \section{Method} \label{sec:method} \paragraph{Overview} Figure~\ref{fig:pipeline} illustrates our proposed PropertyDAG-BO framework alongside standard BO. The candidate generation step is identical in both cases; we first sample a large pool of design candidates from a proposal distribution, often implemented as a generative model. The difference lies in the next step, in which the designs are scored by probabilistic surrogate models. In PropertyDAG-BO, we must first specify a partial ordering of our objectives (Section~\ref{sec:defining_property_dag}). We then explicitly assign a zero-inflated generative model for each objective (Section~\ref{sec:zero_inflation}) such that a probabilistic classifier models its ``zero'' mode and a probabilistic regressor models the remaining continuous-valued ``non-zero'' mode. The raw posterior samples from the surrogates then undergo a ``resampling'' step (Section~\ref{sec:resampling}) that enforces the specified PropertyDAG. Finally, the modified posterior samples enter the multi-objective acquisition function, which scores the design candidates just as in standard BO. \begin{figure}[ht] \centering \includegraphics[scale=0.75]{figs/pipeline_no_boundaries.pdf} \caption{Comparison of a traditional multi-objective BO pipeline (dashed black arrows) with the proposed pipeline enabled by PropertyDAG (solid magenta arrows). 
\label{fig:pipeline}} \end{figure} \subsection{Defining a PropertyDAG} \label{sec:defining_property_dag} Many drug design applications motivate us to prescribe some hierarchy among our objective properties of interest. The partial ordering may arise from an experimental dependency, e.g. a design candidate must pass a certain threshold in one property for its other properties to be measured. In the context of antibody design, a design candidate is a sequence of amino acids representing an antibody that must first be expressed in cell culture. If the level of expression does not exceed some threshold in mass per volume, the lab cannot produce it in viable amounts and it cannot be assayed for other properties, such as binding affinity to a target antigen. Our PropertyDAG may then take the form: expression $\rightarrow$ affinity. An experimental dependency like this creates an asymmetry among the objectives: it reduces the information content of a non-expressing design far more than that of a non-binding one, because non-expressing designs cannot provide binding measurements at all. Alternatively, the partial ordering may encode our preference for the types of designs we want to obtain. We may prioritize a property, for instance, so that we reject designs that perform poorly in this property, no matter how well they perform in all the others. If a designed antibody does not bind to the target antigen, it has failed in its primary function, so we have little interest in its developability properties, such as specificity to the target antigen and thermostability, even though, unlike for non-expressers, they often remain measurable. We then impose the following PropertyDAG: expression $\rightarrow$ affinity $\rightarrow$ $\{$ specificity, thermostability $\}$.
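A PropertyDAG such as the antibody example above can be stored as a simple predecessor map. The snippet below is an illustrative sketch only; the names and the dictionary representation are our own, not a prescribed implementation:

```python
# Illustrative sketch: a PropertyDAG stored as a map from each property
# to its direct parents. All names here are hypothetical.

def predecessors(dag, prop):
    """Return all transitive ancestors ("pred") of `prop` in the DAG."""
    seen, stack = set(), list(dag[prop])
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(dag[parent])
    return seen

# The antibody example from the text:
# expression -> affinity -> {specificity, thermostability}
dag = {
    "expression": [],
    "affinity": ["expression"],
    "specificity": ["affinity"],
    "thermostability": ["affinity"],
}

print(sorted(predecessors(dag, "thermostability")))  # ['affinity', 'expression']
```

In this representation, a property fails (is zeroed out) whenever any of its transitive ancestors fails, which is exactly the set returned by `predecessors`.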
A PropertyDAG can be expressed as ordered sets of properties: $\{y_{0, 0}, \dotsc, y_{0, M_0-1}\} \rightarrow \{y_{1, 0}, \dotsc, y_{1, M_1-1}\} \rightarrow \cdots \rightarrow \{y_{L-1, 0}, \dotsc, y_{L-1, M_{L-1}-1}\}$, where $y_{l, m}$ is the property at level $l \in \{0, \dotsc, L-1\}$ of the $L$-level hierarchy and $m \in \{0, \dotsc, M_l-1\}$ is its index among the $M_l$ sibling properties at the same level $l$. Figure~\ref{fig:pgm} shows an example of a PropertyDAG with three properties and two levels. \begin{figure}[t!] \floatbox[{\capbeside\thisfloatsetup{capbesideposition={right, top},capbesidewidth=8cm}}]{figure}[\FBwidth] {\caption{Example of a PropertyDAG, with three properties and two levels of hierarchy. Each property is denoted $y_{l, m}$, where $l$ indexes the level and $m$ indexes sibling properties at the same level. Magenta arrows refer to the dependencies imposed by PropertyDAG. Black arrows make it explicit that each $y_{l, m}$ is modeled as zero-inflated, where $b_{l, m}$ governs the zero events and $r_{l, m}$ governs the continuous non-zero events.}\label{fig:pgm}} {\includegraphics[width=0.3\textwidth]{figs/pgm.pdf}} \end{figure} \begin{figure}[b!] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[scale=0.48, trim=0.cm 6cm 0.cm 4cm]{figs/pareto_schematic_default.pdf} \caption{Default} \label{fig:pareto_schematic_default} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\linewidth, trim=0.cm 6cm 0.cm 4cm]{figs/pareto_schematic_dag.pdf} \caption{PropertyDAG resampling} \label{fig:pareto_schematic_dag} \end{subfigure} \caption{Effect of resampling the surrogate posteriors on the acquisition function. Suppose the magenta dashed lines represent our threshold for Objective 0 such that we want to acquire candidates that maximize Objective 1 \textit{and} exceed this threshold in Objective 0. \textit{Bottom:} The black dots constitute our baseline Pareto front.
Colors of the grid indicate HVI (Equation~\ref{eq:ehvi}) computed from each posterior sample at the given location in the objective space, in the (a) default and (b) PropertyDAG settings. Consider six samples (white triangles) from the posterior (white contour). PropertyDAG transforms the posterior samples below the threshold in Objective 0 such that their HVI contribution is zero. \textit{Top:} The HVI corresponding to each sample before (a) and after (b) the resampling.} \end{figure} \subsection{Zero-inflated modeling} \label{sec:zero_inflation} Biological properties tend to carry excess zeros, or null values. Their zero-inflated nature motivates us to employ statistical models that account for large incidences of zeros \cite{eggers2015statistical}. The zero-inflated negative binomial (ZINB) distribution has been applied to model discrete counts in single-cell RNA-seq data \citep{grun2014validation}. For each objective $y_k$, our zero-inflated model assigns a binary random variable $b_k \in \{0, 1\}$ to generate the zeros and $r_k \in \mathbb{R}$ to generate the remaining wide dispersion of continuous non-zero values. Assume $f$ is non-negative (or, after shifting by a constant, merely bounded from below). Given a training dataset $\mathcal{D}_t$ available at time $t$, we decompose $p(\hat f_k \mid \mathcal{D}_t)$ as follows: \begin{align} \label{eq:combined} p(\hat f_k(\bm x) = c \mid \mathcal{D}_t) = \begin{cases} p(b_k = 0 \mid \bm x, \mathcal{D}_t) & \text{if } c = 0, \\ p(b_k = 1 \mid \bm x, \mathcal{D}_t) \ p(r_k = c \mid \bm x, \mathcal{D}_t, \theta_r) & \text{else}, \end{cases} \end{align} where $p(r_k \mid \bm x, \mathcal{D}_t, \theta_r)$ is a regressor parameterized by $\theta_r$ trained to predict $f_k$ given $b_k = 1$. For simplicity we have assumed $p(r_k=0|\bm{x}, \mathcal{D}_t, \theta_r)=0$.
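As a concrete sketch of Equation~\ref{eq:combined}, the following hypothetical snippet combines a classifier probability with a continuous regressor density; the Gaussian form of the regressor is our own simplifying assumption for illustration, not part of the method's specification:

```python
import math

def gaussian_pdf(c, mu, sigma):
    """Illustrative stand-in for the regressor density p(r_k | x, D_t)."""
    return math.exp(-0.5 * ((c - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def zero_inflated_density(c, p_nonzero, mu, sigma):
    """Predictive density of Eq. (combined): a point mass at zero mixed
    with a continuous density on the non-zero mode.

    p_nonzero -- classifier output p(b_k = 1 | x, D_t)
    mu, sigma -- parameters of the (assumed Gaussian) regressor
    """
    if c == 0.0:
        return 1.0 - p_nonzero           # probability of the "zero" event
    return p_nonzero * gaussian_pdf(c, mu, sigma)
```

Note that the value at $c=0$ is a probability mass while the non-zero branch is a density, mirroring the mixture of a delta function at zero with a continuous distribution.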
Since Gaussian processes (GPs) are often used as surrogates for BO \citep[see, e.g.][for a review on GPs]{williams2006gaussian} and common GP assumptions fail for sparse, multi-modal data, separating out the non-zero mode of the data can improve posterior inference. To accommodate objective hierarchy, each $p(b_k \mid \bm x, \mathcal{D}_t)$ decomposes further as \begin{align} \label{eq:binary_posterior} p(b_k \mid \bm x, \mathcal{D}_t) = \begin{cases} 0 & \text{if } \exists j \in {\rm pred}(k) \text{ s.t. } b_j = 0, \\ p(b_k \mid \bm x, \mathcal{D}_t, \theta_b) & \text{else}, \end{cases} \end{align} where $p(b_k \mid \bm x, \mathcal{D}_t, \theta_b)$ is a classifier parameterized by $\theta_b$ trained to predict $\mathds{1}\{f_k(\bm x) > 0\}$ and ${\rm pred}(k)$ are the predecessors, or ancestral nodes, in the PropertyDAG corresponding to property $k$. This general framework, presented in terms of a zero-inflated, continuous-valued objective (a mixture of a delta function at zero and a continuous distribution), applies to binary-valued objectives and continuous-valued objectives without zero inflation, which can be viewed as specific cases taking $p(r_k|\bm{x}, \mathcal{D}_t, \theta_r) = p(r_k) = \mathcal{N}(0, \sigma^2)$ with very small $\sigma$ and $p(b_k=1 \mid \bm x, \mathcal{D}_t, \theta_b)=1$, respectively. \subsection{Resampling} \label{sec:resampling} Using a simple resampling trick, we modify the posterior samples from the surrogate models to enforce the parent-child relationships specified in PropertyDAG. As before, consider a property $y_k$ and refer to its predecessors, or ancestral nodes, as ${\rm pred}(k)$. Suppose we have drawn a single posterior sample and obtained $\beta_{k'} \sim p(b_{k'}|\bm{x}) \in \{0, 1\}$ and $\rho_{k'} \sim p(r_{k'}|\bm{x}) \in \mathbb{R}$ for each ${k'} \in \{1, \dotsc, K\}$.
Without any modification, Equation~\ref{eq:combined} would yield the following sample $\gamma_k$ of $y_k$: \begin{align} \label{eq:no_resampling} {\gamma}_k = \begin{dcases*} 0 & if $\beta_k = 0$ \\ \rho_{k} & if $\beta_k = 1$. \end{dcases*} \end{align} Instead, we begin at the top level and proceed down the levels of PropertyDAG to impose dependencies between $\beta_k$ and its predecessor properties $\{\hat{\beta}_{k'}\}_{k' \in {\rm pred}(k)}$. If $y_k$ is a top-level property, then ${\rm pred}(k) = \emptyset$ and $\hat{\beta}_k = \beta_k$. Otherwise, $y_k$ has parent properties and we have \begin{align} \label{eq:binary_resampling} \hat{\beta}_k = \begin{dcases*} 0 & if $\exists j \in {\rm pred}(k) \text{ s.t. } \hat{\beta}_j = 0$ \\ \beta_k & else. \end{dcases*} \end{align} We can then use the modified binary samples $\{\hat{\beta}_k\}_{k=1}^K$ to obtain our effective sample $\hat{\gamma}_k$ of $y_k$: \begin{align} \label{eq:regress_resampling} \hat{\gamma}_k = \begin{dcases*} 0 & if $\hat{\beta}_k = 0$ \\ \rho_k & else. \end{dcases*} \end{align} Let $\bm{\gamma} \coloneqq [ {\gamma}_1, \dotsc, {\gamma}_K ] \in \mathbb{R}^K$ and $\bm{\hat{\gamma}} \coloneqq [ \hat{\gamma}_1, \dotsc, \hat{\gamma}_K ] \in \mathbb{R}^K$ and denote the transformation at the sample level described in Equations~\ref{eq:binary_resampling} and \ref{eq:regress_resampling} as $h: \mathbb{R}^K \rightarrow \mathbb{R}^K$ such that $h(\bm{\gamma}) = \bm{\hat{\gamma}}$. We repeat this resampling procedure for the other posterior samples. The transformed samples are then used to evaluate NEHVI (Equation~\ref{eq:nehvi}) via MC integration.
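The transform $h$ applied to one joint posterior draw can be sketched as follows. This is an illustrative implementation under our own assumptions (a `dag` dictionary mapping each property to its direct parents, and dictionaries of sampled values), not the authors' code:

```python
def resample(dag, beta, rho):
    """Sketch of the transform h: enforce the binary and regression
    resampling rules on one joint posterior draw. `beta[k]` is a draw
    of b_k in {0, 1}; `rho[k]` is a draw of r_k."""
    # process parents before children (simple topological ordering)
    order, remaining = [], set(dag)
    while remaining:
        ready = [k for k in remaining if all(p not in remaining for p in dag[k])]
        order.extend(ready)
        remaining -= set(ready)
    beta_hat, gamma_hat = {}, {}
    for k in order:
        # a zeroed ancestor forces the child into its "zero" mode
        beta_hat[k] = 0 if any(beta_hat[p] == 0 for p in dag[k]) else beta[k]
        gamma_hat[k] = rho[k] if beta_hat[k] == 1 else 0.0
    return gamma_hat
```

Because the properties are visited parents-first, a single zeroed top-level draw propagates down and zeroes out every descendant in the same joint sample.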
More precisely, suppose we draw $S$ posterior samples in parallel for a design candidate $\bm{x}^*$ (reflecting both aleatoric and epistemic uncertainties) and the previously observed designs $\mathcal{X}_t = \{\bm{x}^{(n)}\}_{n=1}^{N_t}$ (reflecting the aleatoric uncertainty) and denote each draw as $\bm{{\gamma}}^*_s$ and $G_s \coloneqq \{\bm{{\gamma}}_s^{(n)}\}_{n=1}^{N_t}$, respectively, for $s=1, \dotsc, S$. Then the MC approximation of NEHVI can be efficiently evaluated as: \begin{align} \label{eq:nehvi_mc} & \bm{\hat{\gamma}}_{s}^* = h(\bm{\gamma}_{s}^*), \: \bm{\hat{\gamma}}_{s}^{(n)} = h(\bm{\gamma}_{s}^{(n)}) \quad \forall s=1, \dotsc, S, \: \forall n=1, \dotsc, N_t \quad \nonumber \\ & u_{\mathrm{NEHVI}} \left( \bm{x}^* \right) \approx \frac{1}{S} \sum_{s=1}^{S} {\rm HVI} \left(\mathcal{P}^{* [s]}_t, \mathcal{P}^{[s]}_t |\bm{r}_{\rm ref} \right), \end{align} where $\hat{G}_s \coloneqq \{\bm{\hat{\gamma}}_s^{(n)}\}_{n=1}^{N_t}$, $\mathcal{P}_t^{[s]} = \{\bm{\hat{\gamma}} \in \hat{G}_s : \: \nexists \: \bm{\hat{\gamma}}' \in \hat{G}_s \textit{ s.t. } \bm{\hat{\gamma}}' \succ \bm{\hat{\gamma}} \}$ and $\mathcal{P}^{* [s]}_t = \mathcal{P}^{[s]}_t \cup \{\bm{\hat{\gamma}}_{s}^*\}$. Figures~\ref{fig:pareto_schematic_default} and \ref{fig:pareto_schematic_dag} illustrate the effect of the PropertyDAG resampling on each posterior sample's HVI contribution to EHVI. \section{Experiments} \label{sec:results} We perform simulated active learning experiments on two synthetic tasks and a real-world antibody design task. We use NEHVI (Equation~\ref{eq:nehvi}) as our acquisition function and evaluate it via MC integration (Equation~\ref{eq:nehvi_mc}). In each experiment, we test three types of acquisitions: (1) batched, multi-objective BO with PropertyDAG (``qNEHVI-DAG''), (2) without PropertyDAG (``qNEHVI''), and (3) random. Our main metric is the number of acquired ``joint positive'' designs, designs that exceed the chosen thresholds in all objectives according to the specified PropertyDAG. We refer to the batch size as $q$.
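For intuition, the MC estimate in Equation~\ref{eq:nehvi_mc} can be sketched in two dimensions as follows. This is a toy illustration under our own assumptions (maximization, an exact two-dimensional hypervolume sweep, resampling already applied to the draws), not the implementation used in our experiments:

```python
def hv_2d(points, ref):
    """Hypervolume dominated by `points` (maximization) above `ref` in 2-D:
    sweep the points by decreasing first objective, accumulating rectangles."""
    pts = sorted(set(points), key=lambda p: (-p[0], -p[1]))
    hv, best_y = 0.0, ref[1]
    for x, y in pts:
        if x > ref[0] and y > best_y:   # dominated points contribute nothing
            hv += (x - ref[0]) * (y - best_y)
            best_y = y
    return hv

def mc_nehvi(candidate_draws, baseline_draws, ref):
    """MC average of per-draw hypervolume improvements, as in Eq. (nehvi_mc).
    candidate_draws[s] is the (already resampled) draw for x*;
    baseline_draws[s] is the list of resampled draws for the observed designs."""
    total = 0.0
    for cand, base in zip(candidate_draws, baseline_draws):
        total += hv_2d(base + [cand], ref) - hv_2d(base, ref)
    return total / len(candidate_draws)
```

A draw that the resampling step has zeroed out in any objective lands at or below the reference point in that dimension and thus contributes no hypervolume improvement, which is the mechanism behind the zeroed HVI contributions in Figures~\ref{fig:pareto_schematic_default} and \ref{fig:pareto_schematic_dag}.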
\subsection{Penicillin production dataset} \label{sec:penicillin_experiment} This task is based on the penicillin production simulator introduced in \cite{liang2021scalable}. We defined the goal as minimizing the CO$_2$ byproduct emission while ensuring that the fermentation time is below a set threshold and the yield exceeds a set threshold ($K=3, \mathcal{X}=\mathbb{R}^7$). We negated the latter two objectives to define a maximization problem and assumed the PropertyDAG: $\{y_{0, 0}\} \rightarrow \{y_{1, 0}\} \rightarrow \{y_{2, 0}\}$, where $y_{0, 0}=$ Yield (``Objective 0''), $y_{1, 0}=$ Negative fermentation time (``Objective 1''), and $y_{2, 0}=$ Negative CO$_2$ byproduct (``Objective 2''). Zero-mean Gaussian noise was added to the input, following \cite{daulton2020differentiable}. We fit an exact GP to model $r_k$ and an approximate GP with the variational evidence lower bound (ELBO) to model $b_k$, separately for each Objective $k$. We drew 512 posterior samples to evaluate qNEHVI. We executed 10 rounds of simulated active learning by initializing the surrogates with 8 training points and selecting $q=4$ designs from a pool of 80 randomly sampled candidate points in each iteration. The three acquisition modes (qNEHVI-DAG, qNEHVI, and Random) were subject to the same initial training points and candidate pool each round. The entire experiment was repeated 5 times. Figure~\ref{fig:penicillin_joint_pos} shows that qNEHVI-DAG identifies significantly more joint positives than do qNEHVI and Random over active learning iterations. Figure~\ref{fig:penicillin_pareto} compares the qNEHVI-DAG, qNEHVI, and Random selections for every pair of objectives, for the final (after Iteration 10) selections stacked across the 5 repeated trials. For every objective, qNEHVI-DAG identifies more examples to the right of the threshold (black dashed lines) than do qNEHVI and Random. \begin{figure}[ht!]
\centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[scale=0.35]{figs/penicillin_joint_pos_v3.pdf} \caption{Penicillin production} \label{fig:penicillin_joint_pos} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{figs/branin_currin_joint_pos_v3.pdf} \caption{Branin-Currin toy problem} \label{fig:bc_joint_pos} \end{subfigure} \caption{Number of joint positives over simulated active learning iterations. Error bands are standard deviations over (a) five and (b) ten trials.} \vspace{-0.3cm} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=0.7\textwidth, trim=0.cm 1cm 0.cm 4cm]{figs/penicillin_pareto_overlaid.pdf} \caption{Pairwise Pareto front visualization for the penicillin production task showing the ground-truth objectives of final selections. We pool the selections over five trials. Black dashed lines are the thresholds we impose. \label{fig:penicillin_pareto}} \end{figure} \subsection{Branin-Currin toy problem} \label{sec:bc_experiment} This task is based on an analytic Branin-Currin test function from \cite{daulton2020differentiable} with $\mathcal{X}=\mathbb{R}^2$ and $K=2$. We reformulated this task to simulate the antibody design task (Section \ref{sec:ab_design_experiment}) in a controlled environment. We defined the PropertyDAG, $\{y_{0, 0}\} \rightarrow \{y_{1, 0}\}$, where $y_{0, 0}=$ Dimension 0 (``Objective 0'') and $y_{1, 0}=$ Dimension 1 (``Objective 1''). Objective 0 was transformed into binary values using a set threshold. Objective 1 was zero-inflated and real-valued. Posterior inference was performed following the same procedure described in Section \ref{sec:penicillin_experiment}. We executed 20 rounds of simulated active learning by initializing the surrogates with 6 training points and selecting $q=4$ designs from a pool of 40 randomly sampled candidate points in each iteration. The entire experiment was repeated 10 times.
Figure~\ref{fig:bc_joint_pos} shows that qNEHVI-DAG identifies significantly more joint positives than do qNEHVI and Random over active learning iterations. Figure~\ref{fig:bc_obj} compares the qNEHVI-DAG, qNEHVI, and Random selections for Objective 1, for the final selections stacked across the 10 repeated trials. Overall, qNEHVI-DAG identifies more examples to the right of the threshold (black dashed lines) than do qNEHVI and Random, and the improvement is more pronounced for the identification of joint positives (middle panel). \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs/bc_acquired_examples_combined.pdf} \caption{Distribution of final selected candidates in the Branin-Currin problem. We stack the selections over 10 repeated trials. \textit{Top}: all selections. \textit{Middle}: selections for which Objective 0 = 1. \textit{Bottom}: selections for which Objective 0 = 0. \label{fig:bc_obj}} \end{figure} \subsection{Antibody design} \label{sec:ab_design_experiment} The antibody design task is derived from a real-world dataset of antibody sequences and their measured \textit{in vitro} properties, affinity and expression. As in the toy problem (Section~\ref{sec:bc_experiment}), we defined the PropertyDAG, $\{y_{0, 0}\} \rightarrow \{y_{1, 0}\}$, where $y_{0, 0}=$ Expression (``Objective 0'') and $y_{1, 0}=$ Affinity (``Objective 1''). Objective 0 was binary-valued, i.e. expressing or not. Objective 1 was zero-inflated and real-valued. We executed 3 iterations of simulated active learning and repeated the entire procedure 5 times. To simulate active learning, we split the entire dataset of 4,022 variable-length protein sequences, designed as antibodies for an anonymized target antigen A, into 5 groups of sizes 1230, 736, 746, and 600.
The first group served as the initial training set for the surrogates, the following three groups as the ``candidate pools'' from which we selected 200 candidates in each iteration, and the last served as a held-out test set. As shown in Figure~\ref{fig:ab_joint_pos_over_iters}, qNEHVI-DAG once again outperforms qNEHVI and Random in the number of joint positives. The log posterior density evaluated at the affinity measurements for the joint positives (expressing binders) in the test set is also highest for qNEHVI-DAG, which indicates that the surrogate models from qNEHVI-DAG had the most accurate beliefs about the joint positives after the final iteration. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[scale=0.35]{figs/ab_joint_pos_3_iters_final.pdf} \caption{Joint positives identified} \label{fig:ab_joint_pos_over_iters} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\linewidth]{figs/ab_log_p.pdf} \caption{Log posterior density for joint positives} \label{fig:ab_log_p} \end{subfigure} \caption{(a) Number of joint positives (expressing binders) for the antibody design problem. Error bands are standard deviations over three data splits for qNEHVI-DAG and qNEHVI and additionally over five repeated trials for each data split for Random. (b) Log posterior density on binding affinity evaluated at the lab affinity measurements ($\log p$) for the test set consisting of 600 datapoints. Each $\log p$ value is averaged over five data splits.} \vspace{-0.3cm} \end{figure} \section{Conclusion} Our proposed method, PropertyDAG, sits on top of the existing multi-objective BO framework to make it amenable to a common scenario in drug design, where a hierarchical structure, or partial ordering, exists among the objectives.
It modifies the surrogate posteriors so that each objective is modeled as zero-inflated (a mixture of excess zeros and a continuous distribution) and parent properties in PropertyDAG are prioritized before their children. Empirical evaluations show that PropertyDAG-BO can identify significantly more designs that are jointly positive (i.e. exceeding a chosen threshold in all properties) than does standard BO. By encapsulating our experimental and biological priors on the relationship between molecular properties, our method promises to accelerate computational drug discovery. \medskip \clearpage { \small \bibliographystyle{unsrtnat}
\section{Introduction} Recall that for a profinite group $G$ and a prime number $p$, the Zassenhaus ($p$-)filtration $(G_{(n)})$ of $G$ is defined inductively by \[ G_{(1)}=G, \quad G_{(n)}=G_{(\lceil n/p\rceil)}^p\prod_{i+j=n}[G_{(i)},G_{(j)}], \] where $\lceil n/p \rceil$ is the least integer which is greater than or equal to $n/p$. (Here for two closed subgroups $H$ and $K$ of $G$, $[H,K]$ means the smallest closed subgroup of $G$ containing the commutators $[x,y]=x^{-1}y^{-1}xy$, $x\in H, y\in K$. Similarly, $H^p$ means the smallest closed subgroup of $G$ containing the $p$-th powers $x^p$, $x\in H$.) Zassenhaus filtrations of groups were introduced in \cite{Zas} and are now recognized as being of fundamental importance in determining the structure and properties of various types of groups. For example, in the case of absolute Galois groups, these filtrations and their subquotients have recently been investigated in \cite{CEM, Ef1,Ef2, EM,MT, EMT}. In the case of arbitrary groups, this filtration has also been referred to as the dimension series, with the subgroups $G_{(n)}$ being called the dimension subgroups in characteristic $p$ (see \cite[Chapters 11, 12]{DDMS}). Our goal is to develop a method for determining the ${\mathbb F}_p$-dimension of subquotients of the Zassenhaus filtration in the case of finitely generated pro-$p$ groups. Let $G$ be a finitely generated pro-$p$-group. For each $n\geq 1$, we set \[ c_n(G) =\dim_{{\mathbb F}_p}(G_{(n)}/G_{(n+1)}). \] Note that since $G$ is finitely generated, $c_n(G)$ is finite for every $n\geq 1$ (see Section 2 for more details). We will proceed to derive an explicit formula for $c_n(G)$ for various families of groups $G$, including finitely generated free pro-$p$-groups, Demushkin groups, and free pro-2 products of finitely many copies of the cyclic group $C_2$ of order 2. 
Galois theory provides much of the underlying motivation, as many of these groups are realizable as Galois groups of maximal $p$-extensions of local fields (see \cite{De1}, \cite{Sha}) and other fields (see \cite[Proposition 1.3]{EH}). Shafarevich \cite{Sha} demonstrated that for certain fields $F$ not containing primitive $p$-th roots of unity, one could show that the Galois group of the maximal $p$-extension of $F$ was a free pro-$p$-group simply by determining the cardinality of some of its filtration quotients. In Remarks~\ref{rmks:characterization} (1) we show that the numbers $c_n(G)$, $n=1,2,\ldots$, are sufficient to determine finitely generated free pro-$p$-groups in the family of all finitely generated pro-$p$-groups. In Remarks~\ref{rmks:characterization} (2) we are able to determine finitely generated free pro-$p$-groups in the family of all Galois groups of the maximal $p$-extensions of fields containing a primitive $p$-th root of unity by just two numbers, $c_1(G)$ and $c_2(G)$. In Remarks~\ref{rmks:SAP} we show that $c_1(G)$ and $c_2(G)$ are sufficient to determine the Galois groups of the maximal $2$-extensions of Pythagorean fields in two significant cases. In Subsection 4.2 we study groups $G$ which are the free products of several copies of the cyclic group of order 2 in the category of pro-$2$-groups. These groups can be realized as the Galois groups of the maximal 2-extensions of Pythagorean SAP fields, and therefore they are significant in Galois theory. Each such group $G$ contains a free pro-2-subgroup $H$ of index 2. In Corollary~\ref{cor:quotient} we are able to use knowledge of the numbers $c_n(G)$ and $c_n(H)$ to obtain the interesting relation $H_{(n)} = H\cap G_{(n)}$ for each $n\geq 2$. This is yet another example illustrating that the numbers $c_n(G)$ can be very useful in group theory and Galois theory. In this paper we provide a unifying principle for deriving the dimensions $c_n(G)$ in a number of interesting cases.
We observe that the formulas obtained look simple, elegant, and potentially useful. We would also like to note that when $S$ is a finitely generated free pro-$p$ group, a formula for $c_n(S)$ is implicitly given in \cite{Ga}, where an ${\mathbb F}_p$-basis for $S_{(n)}/S_{(n+1)}$ is provided. When we interpret the groups $G$ considered in this paper as Galois groups, our formulas lead to formulas for the order of related Galois groups. For example, if $G$ is isomorphic to the maximal pro-$p$-quotient $G_F(p)$ of the absolute Galois group $G_F$ of a field $F$, and if we denote by $F_{(n)}$ the fixed field of $G_F(p)_{(n)}$, then $|{\rm Gal}(F_{(n)}/F)|=p^{\sum_{i=1}^{n-1} c_i(G)}$. As indicated above, these Galois groups play a fundamental role in current Galois theory. Furthermore, we observe in Sections 3 and 5 that our formulas also lead to the determination of the minimal number of topological generators of $G_{(n)}$, for $G$ a free pro-$p$-group or a Demushkin pro-$p$-group. In fact the orders of Galois groups ${\rm Gal}(F_{(n)}/F)$ are of considerable interest in current Galois theory research. In particular, in \cite{Ef1,MT, EMT}, based partially on the special cases in \cite{EM,MS2}, the Kernel Unipotent Conjecture was formulated. If this conjecture is true, we would obtain a characterization of $G_F(p)_{(n)}$, where $n\geq 3,$ as the intersection of the kernels of all Galois representations $\rho\colon G_F(p)\to {\mathbb U}_n({\mathbb F}_p)$. In order to prove the Kernel Unipotent Conjecture in the case when $G_F(p)$ is finitely generated, one may try to produce enough such representations. However, in order to check whether the intersection of the kernels of given representations is in fact $G_F(p)_{(n)}$, it would be useful to know $|{\rm Gal}(F_{(n)}/F)|$. This strategy resembles the previous successful strategy of Shafarevich, mentioned above. 
Another very interesting project in current Galois theory is to study the possible Koszul duality relating the Galois cohomology algebra $H^*(G,{\mathbb F}_p)$ to the Lie algebra $\bigoplus_{n=1}^\infty G_{(n)}/G_{(n+1)}$ and its universal enveloping algebra. In order to check some corollaries of this possible Koszul duality, determination of the numbers $c_n(G)$ could play an important role. The structure of our paper is as follows: In Section 2 we discuss Hilbert-Poincar\'e series and provide a general formula for $c_n(G)$ (see Theorem~\ref{thm:general}). In Section 3 we provide an explicit formula for $c_n(G)$ when $G$ is a free pro-$p$-group of finite rank. In Section 4 we treat the case when $G$ is a free pro-2 product of finitely many copies of $C_2$. We also show that in some significant special cases, knowledge of just $c_1(G)$ and $c_2(G)$ is sufficient to determine certain Galois groups within large families of pro-$2$-groups. In Section 5 we treat the case in which $G$ is a Demushkin group. In the last section we discuss some other groups. \\ \\ {\bf Acknowledgements: } The first author gratefully acknowledges discussions with I. Efrat, J. Labute and A. Topaz; the latter two having provided some extra motivation for the work in this paper. All of the authors would like to thank the referee for valuable suggestions related to the exposition. \section{Hilbert-Poincar\'e series} Let $F$ be a unital commutative ring. A graded free $F$-module $V=\bigoplus_{n=0}^\infty V_n$ is called {\it locally finite} if ${\rm rank}_F(V_n)<\infty$ for all $n\geq 0$. For such a graded free $F$-module $V$, the {\it Hilbert-Poincar\'e series} $P_V(t)\in {\mathbb Z}[[t]]$ of $V$ is the formal power series \[ P_V(t)=\sum_{n=0}^\infty {\rm rank}_F(V_n) t^n. \] Let $G$ be a finitely generated pro-$p$-group. It is convenient to use the completed group algebra ${\mathbb F}_p[[G]]$ of $G$ over ${\mathbb F}_p$ \[ {\mathbb F}_p[[G]]:= \varprojlim_{N} {\mathbb F}_p[G/N].
\] Thus ${\mathbb F}_p[[G]]$ is the topological inverse limit of the usual group rings ${\mathbb F}_p[G/N]$, where $N$ runs through open normal subgroups of $G$. A standard reference for completed group rings is \cite[Chapter 5]{NSW}. We also use the convenient references \cite[Chapter 7]{Ko} and \cite[Chapters 7 and 12]{DDMS}. We recall that $I(G)\subset {\mathbb F}_p[[G]]$ denotes the augmentation ideal of ${\mathbb F}_p[[G]]$ which is the closed two-sided ideal of ${\mathbb F}_p[[G]]$ generated by the elements $g-1$, for $g\in G$. Thus if $\epsilon\colon {\mathbb F}_p[[G]]\to {\mathbb F}_p$ is the homomorphism such that $\epsilon(g)=1$ for all $g\in G$, then $I(G)=\ker\epsilon$. We denote by $I^n(G)$ the $n$-th power of the augmentation ideal $I(G)$. There are two graded ${\mathbb F}_p$-algebras associated to $G$ and ${\mathbb F}_p[[G]]$ respectively, which are defined by \[ {\rm gr}(G)= \bigoplus_{n\geq 1} G_{(n)}/G_{(n+1)} \quad \text{ and } \quad {\rm gr}({\mathbb F}_p[[G]]) =\bigoplus_{n\geq 0} I^n(G)/I^{n+1}(G). \] Then ${\rm gr}(G)$ is a restricted Lie algebra. (See \cite[Chapter 12]{DDMS}.) Furthermore since $G$ is finitely generated, the two graded algebras ${\rm gr}(G)$ and ${\rm gr}({\mathbb F}_p[[G]])$ are locally finite (see \cite[Section 7.4]{Ko}). We recall that $c_n(G)=\dim_{{\mathbb F}_p} G_{(n)}/G_{(n+1)}$ and we let $a_n(G):= \dim_{{\mathbb F}_p} I^n(G)/I^{n+1}(G)$. As pointed out in \cite[page 312]{DDMS}, $a_n(G)=\dim_{{\mathbb F}_p} I_0^n(G)/I_0^{n+1}(G)$, where $I_0(G)$ is the augmentation ideal of ${\mathbb F}_p[G]$ - the usual group ring of $G$. Thus our results below apply to this case as well. In several places we use results from discrete groups which extend in a straightforward way to pro-$p$-groups. We usually mention this, but in some cases we omit explicitly mentioning such a standard extension. 
The following theorem, Theorem~\ref{thm:JL}, is a consequence of a beautiful theory of Jennings and Lazard \cite[Chapters 11 and 12]{DDMS} viewing the Zassenhaus filtration subgroups $G_{(n)}$ as dimension subgroups. (See also \cite{Qu}.) \begin{thm}[Jennings-Lazard] \label{thm:JL} Let the notation be as above. \begin{enumerate} \item The graded algebra ${\rm gr}({\mathbb F}_p[[G]])$ is a restricted universal enveloping algebra of ${\rm gr}(G)$. \item We have \begin{equation} \label{eq:fundamental} P_{{\rm gr}({\mathbb F}_p[[G]])}(t) = \sum_{n=0}^\infty a_n(G) t^n=\prod_{n=1}^\infty \left(\frac{1-t^{np}}{1-t^n}\right)^{c_n(G)}. \end{equation} \end{enumerate} \end{thm} \begin{proof} (1) See \cite[Theorem 12.8]{DDMS}. (2) See \cite[Theorem 12.16]{DDMS} (see also \cite[Proposition 2.3]{Er}). \end{proof} The following lemma is an important technical tool which allows us to derive our results in a concise way. It relies on one fundamental result of Lichtman and also on a simple, but quite remarkable formula which can be traced back to the work of Lemaire in \cite[Chapter 5]{Le}. \begin{lem} \label{lem:free product} Let $G_1$ and $G_2$ be two finitely generated pro-$p$-groups. Let $G=G_1*G_2$ be the free product of $G_1$ and $G_2$ in the category of pro-$p$-groups. Then \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t) = (P_{{\rm gr}({\mathbb F}_p[[G_1]])}^{-1}(t)+P_{{\rm gr}({\mathbb F}_p[[G_2]])}^{-1}(t)-1)^{-1}. \] \end{lem} \begin{proof} By \cite[Theorem 1]{Li}, the graded ${\mathbb F}_p$-algebra ${\rm gr}({\mathbb F}_p[[G]])$ is a free product (i.e., a categorical coproduct) of ${\rm gr}({\mathbb F}_p[[G_1]])$ and ${\rm gr}({\mathbb F}_p[[G_2]])$. The statement then follows from \cite[Equation (1.2), page 56]{PP}. \end{proof} \begin{rmk} \label{rmk:direct product} Let $G=G_1\times G_2$ be the direct product of two finitely generated pro-$p$-groups $G_1$ and $G_2$. 
We first observe that every commutator in $G$ is the product of a commutator in $G_1$ and a commutator in $G_2$, and every $p$-power in $G$ is the product of a $p$-power in $G_1$ and a $p$-power in $G_2$. Then we can check that $G_{(n)}=(G_1)_{(n)}\times (G_2)_{(n)}$, and that \[\frac{G_{(n)}}{G_{(n+1)}}\simeq\frac{(G_1)_{(n)}}{(G_1)_{(n+1)}}\times \frac{(G_2)_{(n)}}{(G_2)_{(n+1)}}.\] This implies that $c_{n}(G)=c_{n}(G_1)+c_{n}(G_2)$, and that \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t) = P_{{\rm gr}({\mathbb F}_p[[G_1]])}(t)\cdot P_{{\rm gr}({\mathbb F}_p[[G_2]])}(t). \] In fact, since ${\rm gr}(G)\simeq {\rm gr}(G_1) \oplus {\rm gr}(G_2)$, one can show that \[{\rm gr}({\mathbb F}_p[[G]])\simeq {\rm gr}({\mathbb F}_p[[G_1]]) \otimes {\rm gr}({\mathbb F}_p[[G_2]]).\] \end{rmk} \begin{examples} \mbox{} In our examples below, $d$ can be any natural number, and in (3), $d=0$ is also meaningful. \begin{enumerate} \item If $G$ is a free pro-$p$-group of rank $d$, then (see Section 3) \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)= \frac{1}{1-dt}. \] \item If $G=C_p$ is a cyclic group of order $p$, then \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)= 1+t+\cdots+t^{p-1}. \] Indeed, since ${\rm gr}(C_p)=C_p$, the graded algebra ${\rm gr}({\mathbb F}_p[[G]])$, which is a restricted universal enveloping algebra of ${\rm gr}(C_p)$ by Theorem~\ref{thm:JL}, is isomorphic to ${\mathbb F}_p[X]/(X^p)$, and the preceding statement follows. \\ \item If $G=C_p*\cdots*C_p$ is a free product of $d+1$ copies of $C_p$, the cyclic group of order $p$, then \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)= \frac{1+t+\cdots + t^{p-1}}{1-dt-\cdots - dt^{p-1}}. \] This follows by induction on $d$, and by using (2) above, and Lemma~\ref{lem:free product}. \\ \item If $G={\mathbb Z}_p^d$, then \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)=\frac{1}{(1-t)^d}. \] This follows from Remark~\ref{rmk:direct product} and (1) above.
\\ \item If $G:={\mathbb Z}_2^d \rtimes C_2$, where $C_2=\langle x \rangle$ and the action of $C_2$ on ${\mathbb Z}_2^d$ is given by $xyx=y^{-1}$, for all $y\in {\mathbb Z}_2^d$, then (see Corollary \ref{cor:superPy}) \[ P_{{\rm gr}({\mathbb F}_2[[G]])}(t)=\frac{1+t}{(1-t)^d}. \] \item If $G$ is a Demushkin pro-$p$-group of rank $d$, then (see Section 5) \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)= \frac{1}{1-dt+t^2}. \] \item If $G$ is a free product of a cyclic group of order 2 and a free pro-$2$-group of rank $d$, then \[ P_{{\rm gr}({\mathbb F}_2[[G]])}(t)= \frac{1+t}{1-dt-dt^2}. \] This follows by using Lemma~\ref{lem:free product} and (1)-(2) above. \qed \end{enumerate} \end{examples} Below we shall describe a general method for deriving a formula for $c_n(G)$ if we know the Hilbert-Poincar\'e series $P_{{\rm gr}({\mathbb F}_p[[G]])}(t)$. So we assume that we are given a power series $P(t)=1+\sum_{n\geq 1}a_nt^n\in {\mathbb Z}[[t]]$. We define $c_n,n=1,2,\ldots$ by \[ P (t)=1+\sum_{n\geq 1}a_n t^n=\prod_{n=1}^\infty \left(\frac{1-t^{np}}{1-t^n}\right)^{c_n}. \] We write $\log P(t)= \sum_{n\geq 1} b_n t^n$. We shall derive a formula for $c_n$ using the values $b_1,\ldots,b_n$. To do this, it is convenient to introduce a new auxiliary sequence $w_1,w_2,\ldots$ defined below. Taking logarithms and using $\log(\dfrac{1}{1-t})=\sum \limits_{\nu=1}^\infty \dfrac{1}{\nu}t^\nu$, we obtain \[ \sum_{n=1}^\infty b_n t^n =\sum_{m=1}^\infty c_m \sum_{\nu=1}^\infty \frac{1}{\nu} (t^{m\nu}-t^{mp\nu}). \] Equating the coefficients of $t^n$, we obtain \[ b_n=\sum_{m\nu=n} \frac{1}{\nu}c_m - \sum_{mp\nu=n}\frac{1}{\nu}c_m. \] Hence \[ nb_n= \sum_{m\mid n} m c_m -\sum_{mp\mid n} mp c_m. \] Recall that for two integers $a$ and $b$, the symbol $a\mid b$ means that $a$ divides $b$. Now we define the sequence $w_n, n=1,2,\ldots$ by \[ w_n=\frac{1}{n}\sum_{m\mid n} \mu(n/m) mb_m.
\] Then by the M{\"o}bius inversion formula, \[ nb_n= \sum_{m\mid n} m w_m. \] Here $\mu$ is the M\"obius function: for a positive integer $d$, \[ \mu(d)= \begin{cases} (-1)^r & \text{ if $d$ is a product of $r$ distinct prime numbers},\\ 0 &\text{ otherwise}. \end{cases} \] \begin{rmk} \label{rmk:wn} From the definition of $w_n$ we see that \[ P (t)=1+\sum_{n\geq 1}a_n t^n=\prod_{n=1}^\infty \frac{1}{(1-t^n)^{w_n}}. \] \end{rmk} \begin{lem} \label{lem:coprime} If $(n,p)=1$ then $c_n=w_n$. \end{lem} \begin{proof} Assume that $(n,p)=1$. Then we have \[nb_n=\sum_{m\mid n} mc_m.\] Hence by the M{\"o}bius inversion formula, we have \[ c_n=\frac{1}{n}\sum_{m\mid n} \mu(n/m) mb_m=w_n. \qedhere \] \end{proof} \begin{lem} \label{lem:not coprime} If $p$ divides $n$, then we have \[ c_n = c_{n/p}+w_n. \] \end{lem} \begin{proof} We proceed by induction on $n$. Clearly $c_p-c_1=\dfrac{pb_p-b_1}{p}=w_p$, hence the statement is true for $n=p$. Therefore we assume now that $n>p$ and $p\mid n$. We assume that the statement is true for every $m$ such that $p\mid m\mid n$, $m\not=n$. We are going to show that the statement is also true for $n$. We have \[ \begin{aligned} nb_n &=\sum_{m\mid n} m c_m -\sum_{pm\mid n} pm c_m\\ &=\sum_{m\mid n} m c_m -\sum_{p\mid m\mid n} m c_{m/p}\\ &= \sum_{m\mid n, (m,p)=1} m c_m +\sum_{p\mid m\mid n} m (c_m-c_{m/p})\\ &= \sum_{m\mid n, (m,p)=1} m w_m +\sum_{p\mid m\mid n,m\not= n} m w_m + n(c_n-c_{n/p})\\ &=\sum_{m\mid n, m\not= n} m w_m +n(c_n-c_{n/p}). \end{aligned} \] Combining with \[ nb_n=\sum_{m\mid n} mw_m, \] we obtain $c_n-c_{n/p}=w_n$. \end{proof} \begin{prop} \label{prop:key} If $n=p^k m $ with $(m,p)=1$, then \[ c_n =w_m +w_{pm}+\cdots + w_{p^km}. \] \end{prop} \begin{proof} This follows from the above two lemmas. \end{proof} \begin{thm} \label{thm:general} Let $G$ be a finitely generated pro-$p$-group. 
We write \[ \log P_{{\rm gr}({\mathbb F}_p[[G]])}(t)=\sum_{n\geq 1}b_nt^n\in {\mathbb Q}[[t]],\] and we define $w_n(G)$ by \[ w_n(G):=\frac{1}{n}\sum_{m\mid n} \mu(n/m) mb_m. \] Let $n=p^k m $ with $(m,p)=1$. Then \[ c_n(G) =w_m(G) +w_{pm}(G)+\cdots + w_{p^km}(G). \] \end{thm} \begin{proof} This follows from Theorem~\ref{thm:JL} and Proposition~\ref{prop:key}. \end{proof} Let $G$ be a finitely generated pro-$p$-group. We write $\log P_{{\rm gr}({\mathbb F}_p[[G]])}(t) = \sum_{n\geq 1} b_n t^n$ and recall that we have defined $w_n(G)$ by \[ w_n(G)=\frac{1}{n}\sum_{m\mid n} \mu(n/m) mb_m. \] At first glance the definition of $w_n$ may look a bit artificial. One may ask whether $w_n$ appears more naturally as the rank or dimension of some free finitely generated abelian group. Below we shall give a partial answer to this question. Recall that for a profinite group $G$, the descending central series $(G_n)$ is defined inductively by \[ G_1=G,\quad G_{n+1}=[G_n,G]. \] Let $J(G)$ be the augmentation ideal of the completed group ring ${\mathbb Z}_p[[G]]$. (Here ${\mathbb Z}_p[[G]]$ and $J(G)$ are defined similarly to ${\mathbb F}_p[[G]]$ and $I(G)$.) Then we have two graded ${\mathbb Z}_p$-algebras associated to $G$ and ${\mathbb Z}_p[[G]]$ respectively which are defined by \[ {\rm gr}_\gamma(G)= \bigoplus_{n\geq 1} G_{n}/G_{n+1} \quad \text{ and } \quad {\rm gr}({\mathbb Z}_p[[G]]) =\bigoplus_{n\geq 0} J^n(G)/J^{n+1}(G). \] \begin{lem} \label{lem:integral version} Let $G$ be a finitely generated pro-$p$-group. Assume that the graded algebra ${\rm gr}_\gamma(G)=\bigoplus_{n\geq1} G_n/G_{n+1}$ is torsion free. Let $e_n(G)={{\rm rank}}_{{\mathbb Z}_p} G_n/G_{n+1}$. \begin{enumerate} \item[(a)] The graded algebra ${\rm gr}({\mathbb Z}_p[[G]])$ is a universal enveloping algebra of ${\rm gr}_\gamma(G)$. 
\item[(b)] $J^n(G)/J^{n+1}(G)$ is a free module over ${\mathbb Z}_p$ of finite rank $d_n(G)$, and \[ P_{{\rm gr}({\mathbb Z}_p[[G]])}(t) = \sum_{n=0}^\infty d_n(G) t^n=\prod_{n=1}^\infty \frac{1}{(1-t^n)^{e_n(G)}}. \] \end{enumerate} \end{lem} \begin{proof} (a) This follows from \cite[Theorem 1.3]{Har}. In \cite[Theorem 1.3]{Har}, a discrete group $G$ is considered, but an adaptation of this proof to the profinite case is straightforward. (b) This follows from (a) and \cite[Proposition 2.5]{La3}. \end{proof} \begin{prop} \label{prop:wn} Let $G$ be a finitely generated pro-$p$-group. Assume that the graded algebra ${\rm gr}_\gamma(G)=\bigoplus_{n\geq1} G_n/G_{n+1}$ is torsion free. The following are equivalent. \begin{enumerate} \item[(a)] ${\rm rank}_{{\mathbb Z}_p} J^n(G)/J^{n+1}(G)=\dim_{{\mathbb F}_p} I^n(G)/I^{n+1}(G)$ for all $n\geq 1$. \item[(b)] $w_n(G)={\rm rank}_{{\mathbb Z}_p} G_n/G_{n+1}$ for all $n\geq 1$. \end{enumerate} \end{prop} \begin{proof} We keep the existing notation as in Lemma~\ref{lem:integral version}. (a) $\Rightarrow$ (b): Assume that ${\rm rank}_{{\mathbb Z}_p} J^n(G)/J^{n+1}(G)=\dim_{{\mathbb F}_p} I^n(G)/I^{n+1}(G)$ for all $n$. Then by Theorem~\ref{thm:JL}, Remark~\ref{rmk:wn} and Lemma~\ref{lem:integral version}, we have \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)=\prod_{n=1}^\infty \frac{1}{(1-t^n)^{w_n(G)}}= P_{{\rm gr}({\mathbb Z}_p[[G]])}(t)=\prod_{n=1}^\infty \frac{1}{(1-t^n)^{e_n(G)}}. \] Therefore $w_n(G)=e_n(G)$ for all $n\geq 1$. (b) $\Rightarrow$ (a): Assume that $w_n(G)=e_n(G)$ for all $n\geq 1$. Then by Theorem~\ref{thm:JL}, Remark~\ref{rmk:wn} and Lemma~\ref{lem:integral version}, we have \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)=P_{{\rm gr}({\mathbb Z}_p[[G]])}(t). \] Therefore ${\rm rank}_{{\mathbb Z}_p} J^n(G)/J^{n+1}(G)=\dim_{{\mathbb F}_p} I^n(G)/I^{n+1}(G)$ for all $n\geq 1$. 
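The series identities manipulated above are easy to check numerically. The following sketch (our own illustration, not part of the formal argument; all names are ours) computes the coefficients $b_n$ of $\log P(t)$ and the numbers $w_n$ from a given series, and then verifies the product formula of Remark~\ref{rmk:wn} for $P(t)=1/(1-2t)$, the series of a free pro-$p$-group of rank $2$:

```python
from fractions import Fraction

N, d = 12, 2   # truncation degree; d = rank of a free pro-p-group, so P(t) = 1/(1 - d t)
a = [Fraction(d) ** n for n in range(N + 1)]          # a_n = d^n

# b_n = coefficients of log P(t), obtained from P' = P * (log P)'
b = [Fraction(0)] * (N + 1)
for n in range(1, N + 1):
    b[n] = (n * a[n] - sum(k * b[k] * a[n - k] for k in range(1, n))) / n

def mu(n):  # Moebius function
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

# w_n = (1/n) * sum_{m | n} mu(n/m) * m * b_m
w = [Fraction(0)] * (N + 1)
for n in range(1, N + 1):
    w[n] = sum(mu(n // m) * m * b[m] for m in range(1, n + 1) if n % m == 0) / n

# for the free group these are the necklace numbers (1/n) * sum_{m | n} mu(m) d^{n/m}
for n in range(1, N + 1):
    assert w[n] == Fraction(sum(mu(m) * d ** (n // m)
                                for m in range(1, n + 1) if n % m == 0), n)

# rebuild P(t) = prod_{n >= 1} (1 - t^n)^{-w_n} up to degree N and compare with a_n
P = [Fraction(1)] + [Fraction(0)] * N
for n in range(1, N + 1):
    fac = [Fraction(0)] * (N + 1)   # (1 - t^n)^{-w_n} = sum_k binom(w_n + k - 1, k) t^{nk}
    coef, k = Fraction(1), 0
    while n * k <= N:
        fac[n * k] = coef
        coef = coef * (w[n] + k) / (k + 1)
        k += 1
    P = [sum(P[i] * fac[j - i] for i in range(j + 1)) for j in range(N + 1)]
assert P == a
```

For the free group of rank $2$ the computed $w_n$ begin $2, 1, 2, 3, 6, \ldots$, in agreement with Witt's formula recalled in the next section.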
\end{proof} \begin{rmk} We shall see in Sections 3 and 5, Remark~\ref{rmk:wn free} and Lemma~\ref{lem:wn Demushkin}, that both a free finitely generated pro-$p$-group and a Demushkin group with a relation of the form $r=[x_1,x_2]\cdots [x_{d-1},x_d]$ satisfy the equivalent statements in Proposition~\ref{prop:wn}. \end{rmk} \begin{question} Let $G$ be a finitely generated pro-$p$-group. We assume that the graded algebra $\bigoplus_{n\geq1} G_n/G_{n+1}$ is torsion free. Is it true that \[{\rm rank}_{{\mathbb Z}_p}(G_n/G_{n+1})=w_n(G)?\] \end{question} \section{Free pro-$p$-groups} Throughout this section we assume that $S$ is a free pro-$p$-group on a finite set of generators $x_1,\ldots,x_d$. We recall the Magnus homomorphism from the completed group algebra ${\mathbb F}_p[[S]]$ to the ${\mathbb F}_p$-algebra ${\mathbb F}_p\langle\langle X_1,\ldots,X_d\rangle\rangle$ of the formal power series in $d$ non-commuting variables $X_1,\ldots,X_d$ over ${\mathbb F}_p$. The homomorphism is given by \[ \psi\colon {\mathbb F}_p[[S]] \to {\mathbb F}_p \langle\langle X_1,\ldots,X_d\rangle\rangle, x_i\mapsto 1+X_i. \] The ${\mathbb F}_p$-algebra ${\mathbb F}_p\langle\langle X_1,\ldots,X_d\rangle\rangle$ is equipped with a natural valuation $v$ given by \[ v(\sum a_{i_1,\ldots,i_k}X_{i_1}\cdots X_{i_k})=\inf\{k\mid a_{i_1,\ldots,i_k}\not=0\}\in {\mathbb Z}_{\geq 0}\cup \{\infty\}, \] making it a compact topological ${\mathbb F}_p$-algebra. One basic result is the following. \begin{lem} \label{lem:0a} The Magnus homomorphism $\psi$ is a (topological) isomorphism. \end{lem} \begin{proof} See, for example, \cite[Chapter I, Proposition 7]{Se} or \cite[Chapter 6]{Laz}. \end{proof} \begin{cor} We have \[ P_{{\rm gr}({\mathbb F}_p[[S]])}(t)=\frac{1}{1-dt}. \] \end{cor} \begin{proof} Via the Magnus homomorphism, the augmentation ideal $I(S)$ is mapped to the ideal $I=(X_1,\ldots,X_d)$ of ${\mathbb F}_p\langle\langle X_1,\ldots,X_d\rangle\rangle$.
Hence \[ a_n(S):= \dim_{{\mathbb F}_p}(I^n(S)/I^{n+1}(S))=\dim_{{\mathbb F}_p}(I^n/I^{n+1}), \] which is equal to the number of non-commutative monomials of degree $n$ in the $d$ variables $X_1,\ldots,X_d$. Hence $a_n(S)=d^n$. The statement then follows. \end{proof} We define $w_n(S)$ by \[w_n(S)=\frac{1}{n}\sum_{m\mid n} \mu(m) d^{n/m}. \] \begin{rmk} \label{rmk:wn free} Let $(S_n)$ be the lower central series of $S$. Then by Witt's result, $S_{n}/S_{n+1}$ is a free ${\mathbb Z}_p$-module of finite rank $w_n(S)$. \end{rmk} Theorem~\ref{thm:general} immediately implies the following result. \begin{prop} \label{prop:cn free} If $n=p^k m $ with $(m,p)=1$, then \[ c_n(S) =w_m(S) +w_{pm}(S)+\cdots + w_{p^km}(S). \qedhere \] \end{prop} \begin{rmks} \label{rmks:characterization} (1) If a finitely generated pro-$p$-group $G$ has the Hilbert-Poincar\'e series of a finitely generated free pro-$p$-group, then $G$ is itself free. In other words, if $ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)=\displaystyle \frac{1}{1-dt}, $ then $G$ is free of rank $d$. Indeed, we first have $c_1(G)=w_1(G)=d$, which is equal to the minimal number of topological generators of $G$. Hence there exists a minimal presentation of $G$: \[ 1 \to R\to S\to G\to 1, \] where $S$ is a free pro-$p$-group of rank $d$. We then have $c_n(G)=c_n(S)$ for all $n\geq 1$. Hence $|S/S_{(n)}|=|G/G_{(n)}|$ for all $n\geq 1$. Thus the natural epimorphism \[ S/S_{(n)} \twoheadrightarrow G/G_{(n)} \] is in fact an isomorphism. This implies that $R\subseteq S_{(n)}$ for all $n\geq 1$. Therefore by \cite[Theorem 7.11]{Ko}, $R=1$ and hence $S\simeq G$. (2) Let $G$ be a finitely generated pro-$p$-group. In the case in which $G$ is realizable as the Galois group of a maximal $p$-extension of a field $F$ containing a primitive $p$-th root of unity, it is worth pointing out that if $c_1(G)=c_1(S)$ and $c_2(G)=c_2(S)$ for some finitely generated free pro-$p$-group $S$, then $G$ is in fact isomorphic to $S$.
Indeed, as $c_1(S)=c_1(G)$, we have a short exact sequence \[ 1 \to R\to S\stackrel{\pi}{\to} G\to 1. \] Since $c_1(S)=c_1(G)$ and $c_2(S)=c_2(G)$, we see that $|S/S_{(3)}|=|G/G_{(3)}|$. Thus the natural epimorphism \[ S/S_{(3)} \twoheadrightarrow G/G_{(3)} \] is in fact an isomorphism. Hence by \cite[Theorem C]{EM} (see also \cite[Theorem D]{CEM} for the case $p=2$) we see that $\pi\colon S\to G$ is an isomorphism. \end{rmks} In Corollary~\ref{cor:unipotent}, relying on Lemma~\ref{lem:coefficient} below, we obtain an interesting purely group-theoretical corollary of our formula for $c_n(S)$. For each positive integer $n$, let ${\mathbb U}_{n+1}({\mathbb F}_p)$ be the group of all upper-triangular unipotent $(n+1)\times(n+1)$-matrices with entries in ${\mathbb F}_p$. \begin{lem} \label{lem:coefficient} Let $n$ be a positive integer. If ${\mathbb U}_{n+1}({\mathbb F}_p)_{(n)}=1$, then $c_n(S)=0$ for every free pro-$p$-group $S$. \end{lem} \begin{proof} Let $S$ be a free pro-$p$-group. Assume that ${\mathbb U}_{n+1}({\mathbb F}_p)_{(n)}=1$. Then for any (continuous) representation $\rho: S\to {\mathbb U}_{n+1}({\mathbb F}_p)$, we have $\rho(S_{(n)})\subseteq {\mathbb U}_{n+1}({\mathbb F}_p)_{(n)}=1$. Hence \[ S_{(n+1)}\subseteq S_{(n)}\subseteq \bigcap \ker(\rho\colon S\to {\mathbb U}_{n+1}({\mathbb F}_p)), \] where $\rho$ runs over the set of all representations (continuous homomorphisms) $S \to {\mathbb U}_{n+1}({\mathbb F}_p)$. On the other hand, we know that the Kernel Unipotent Conjecture is true for $S$ (see \cite{Ef1}, and also \cite{Ef2}, \cite{MT}). This means that we have \[ S_{(n+1)}=\bigcap \ker(\rho\colon S\to {\mathbb U}_{n+1}({\mathbb F}_p)). \] Therefore, $S_{(n+1)}=S_{(n)}$, i.e., $c_n(S)=0$, as desired. \end{proof} \begin{cor} \label{cor:unipotent} Let $n$ be a positive integer. Then $ {\mathbb U}_{n+1}({\mathbb F}_p)_{(n)}\simeq {\mathbb F}_p$ and \[ n=\max\{h\mid {\mathbb U}_{n+1}({\mathbb F}_p)_{(h)}\not=1\}.
\] \end{cor} \begin{proof} We first observe that if $S$ is a free pro-$p$-group of rank $d>1$, then all numbers $w_n(S)$, $n=1,2,\ldots$, are positive. Therefore from Proposition~\ref{prop:cn free} we see that $c_n(S)\not=0$ for all $n\in {\mathbb N}$. Hence by Lemma~\ref{lem:coefficient}, ${\mathbb U}_{n+1}({\mathbb F}_p)_{(n)}\not=1$. On the other hand, it is well-known that ${\mathbb U}_{n+1}({\mathbb F}_p)_{(n+1)}=1$. Hence ${\mathbb U}_{n+1}({\mathbb F}_p)_{(n)}\subseteq Z({\mathbb U}_{n+1}({\mathbb F}_p))\simeq {\mathbb F}_p$, where $Z({\mathbb U}_{n+1}({\mathbb F}_p))$ is the center of ${\mathbb U}_{n+1}({\mathbb F}_p)$. Therefore \[ {\mathbb U}_{n+1}({\mathbb F}_p)_{(n)}=Z({\mathbb U}_{n+1}({\mathbb F}_p))\simeq {\mathbb F}_p, \] and the second assertion is also clear. \end{proof} \begin{ex} Let $S$ be a free pro-$p$-group of finite rank $d$. We have \[ \begin{aligned} c_1(S)&=d,\\ c_2(S)&= \begin{cases} \frac{d^2-d}{2} \text{ if } p\not=2,\\ \frac{d^2+d}{2} \text{ if } p=2, \end{cases}\\ c_3(S) &=\begin{cases} \frac{d^3-d}{3} \text{ if } p\not=3,\\ \frac{d^3+2d}{3} \text{ if } p=3, \end{cases}\\ c_4(S)&= \begin{cases} \frac{d^4-d^2}{4} \text{ if } p\not=2,\\ \frac{d^4+d^2+2d}{4} \text{ if } p=2, \end{cases}\\ c_5(S)&=\begin{cases} \frac{d^5-d}{5} \text{ if } p\not=5,\\ \frac{d^5+4d}{5} \text{ if } p=5. \end{cases} \end{aligned} \] Observe that our numbers $c_n(S)$, $n=1,2,\ldots$, also detect the minimal numbers of generators of $S_{(n)}$. Indeed, by the pro-$p$-version of Schreier's formula, for each open subgroup $T$ of $S$ we have the following expression for the minimal number of generators $d(T)$ of $T$: \[ d(T)=[S:T](d(S)-1)+1. \] Therefore \[ d_n(S):=d(S_{(n)})= p^{\sum_{i=1}^{n-1}c_i(S)}(d-1)+1. \] \end{ex} Next we are going to give an explicit ${\mathbb F}_p$-basis for $S_{(n)}/S_{(n+1)}$, for each $n$. We shall first recall a definition of Hall commutators of weight $n$ and their linear order. This was originally introduced by M.
Hall in \cite[Section 4]{Ha} (see also \cite[Definition 2.3]{Vo}). \begin{defn} The set $C_n$ of {\it Hall commutators of weight $n$} together with a total order $<$ is inductively defined as follows: \begin{enumerate} \item $C_1=\{x_1,\ldots,x_d\}$ with the ordering $x_1>\cdots >x_d$. \item Assume $n>1$ and that we have defined Hall commutators and their ordering for all weights $<n$. Then $C_n$ is the set of all commutators $[c_1,c_2]$ where $c_1\in C_{n_1},c_2\in C_{n_2}$ such that $n_1+n_2=n$, $c_1>c_2$ and if $c_1=[c_3,c_4]$ then we also require that $c_2\geq c_4$. The set $C_n$ is ordered lexicographically, i.e., $[c_1,c_2]<[c_1^\prime,c_2^\prime]$ if and only if $c_1<c_1^\prime$, or $c_1=c_1^\prime$ and $c_2<c_2^\prime$. Finally commutators of weight $n$ are greater than all commutators of smaller weight. \end{enumerate} \end{defn} The following theorem was proved by M. Hall in the discrete case. The extension of his theorem to the pro-$p$ case is straightforward. \begin{thm}[{\cite[Theorem 4.1]{Ha}}] The Hall commutators of weight $n$ represent a basis of $S_n/S_{n+1}$ as a free ${\mathbb Z}_p$-module. In particular $w_n(S)= |C_n|$. \end{thm} The following theorem is due to Lazard (see \cite[Theorem 11.2]{DDMS}). \begin{thm}[Lazard] For each $n$, one has \[ G_{(n)}=\prod_{ip^j\geq n}G_i^{p^j}. \] \end{thm} \begin{cor} Let us write $n=p^km$ with $(m,p)=1$. Then a basis of the ${\mathbb F}_p$-vector space $S_{(n)}/S_{(n+1)}$ can be represented by the following set \[ C_m^{p^k} \bigsqcup C_{pm}^{p^{k-1}}\bigsqcup \cdots \bigsqcup C_{p^{k-1}m}^p\bigsqcup C_n. \] \end{cor} \begin{proof} By Lazard's theorem, we can check that the above set defines a set of generators for the ${\mathbb F}_p$-vector space $S_{(n)}/S_{(n+1)}$. Now by Proposition~\ref{prop:cn free} and by a counting argument, we see that this set defines a basis for the ${\mathbb F}_p$-vector space $S_{(n)}/S_{(n+1)}$. 
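The closed formulas for $c_n(S)$ displayed in the example above can be spot-checked directly against Proposition~\ref{prop:cn free}. The following sketch is our own illustration (the helper names are ours, not from the text): it computes the necklace numbers $w_n(S)$ and sums them along the $p$-power tower.

```python
def mu(n):  # Moebius function
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def w(n, d):
    """Necklace numbers w_n(S) = (1/n) * sum_{m|n} mu(m) d^{n/m}, free group of rank d."""
    s = sum(mu(m) * d ** (n // m) for m in range(1, n + 1) if n % m == 0)
    assert s % n == 0           # the necklace sums are always divisible by n
    return s // n

def c(n, d, p):
    """c_n(S): write n = p^k * m with (m, p) = 1 and sum w over m, pm, ..., p^k m."""
    m = n
    while m % p == 0:
        m //= p
    total, j = 0, m
    while j <= n:
        total += w(j, d)
        j *= p
    return total

# compare with the closed formulas of the example, for several ranks d
for d in range(1, 8):
    assert c(1, d, 2) == d
    assert c(2, d, 3) == (d * d - d) // 2 and c(2, d, 2) == (d * d + d) // 2
    assert c(3, d, 2) == (d**3 - d) // 3 and c(3, d, 3) == (d**3 + 2 * d) // 3
    assert c(4, d, 3) == (d**4 - d**2) // 4 and c(4, d, 2) == (d**4 + d**2 + 2 * d) // 4
    assert c(5, d, 3) == (d**5 - d) // 5 and c(5, d, 5) == (d**5 + 4 * d) // 5
```

The divisibility assertion inside `w` doubles as a check that the necklace sums are integral, as Witt's result requires.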
\end{proof} \section{Free products of a finite number of cyclic groups of order 2} \subsection{Free products of finitely many cyclic groups of order 2} Let $d$ be a non-negative integer. Let $G=C_2*\cdots * C_2$ be a free product in the category of pro-$2$-groups of $d+1$ copies of $C_2$, where $C_2$ is the group of order 2. In this section we shall consider Pythagorean fields. A field $F$ is said to be Pythagorean if each finite sum of squares in $F$ is again a square in $F$. A Pythagorean field is called a formally real field if $-1$ is not a square. Pythagorean fields play an important role in Galois theory, real algebraic geometry and the algebraic theory of quadratic forms. We refer the reader to a beautiful exposition of related topics in \cite{Lam}. Let us recall that a formally real Pythagorean field $K$ with $|K^\times/(K^\times)^2|=2^{d+1}$ is called an SAP field if $K$ admits exactly $d+1$ orderings. These SAP fields form an interesting and well-investigated family of fields. (See \cite[Chapter 17]{Lam}.) \begin{thm} \label{thm:SAP} Let $F$ be a field with $|F^\times/(F^\times)^2|=2^{d+1}$. Then $F$ is an SAP field if and only if $G_F(2)$ is isomorphic to $G=C_2*\cdots *C_2$, the free product of $d+1$ copies of $C_2$. \end{thm} \begin{proof} The ``only if'' part follows from \cite{Mi}. We now prove the ``if'' part. Suppose that $G_F(2)$ is isomorphic to $C_2*\cdots*C_2$ ($d+1$ copies of $C_2$). Then $F$ is formally real Pythagorean, and $|F^\times/(F^\times)^2|=2^{d+1}$. Now we pick any SAP field $K$ which has exactly $d+1$ orderings. From the ``only if'' part, we see that $G_F(2)\simeq G_K(2)$. In particular, $G_F(2)/G_F(2)_{(3)}\simeq G_K(2)/G_K(2)_{(3)}$. Then \cite[Theorem 3.8]{MS2} implies that the Witt ring $WF$ of $F$ is isomorphic to the Witt ring $WK$ of $K$. We know that for a Pythagorean field $L$, the Witt ring $WL$ of $L$ determines the space of orderings $X_L$ of $L$.
Hence the space of orderings of $F$ is isomorphic to the space of orderings of $K$. In particular, $F$ admits exactly $d+1$ orderings. Therefore $F$ is an SAP field. \end{proof} \begin{cor} \label{cor:SAP} Let $F$ be any formally real Pythagorean field, and let $K$ be an SAP field. Assume that $|F^\times/(F^\times)^2|=|K^\times/(K^\times)^2|=2^{d+1}$. Then there exists an epimorphism $G_K(2)\simeq C_2*\cdots*C_2 \twoheadrightarrow G_F(2)$. \end{cor} \begin{proof} By \cite[Remark 17.5]{Lam}, $F$ has at least $d+1$ orderings, and we can choose $d+1$ involutions $\sigma_1,\ldots,\sigma_{d+1}$ in $G=G_F(2)$ such that $\sigma_1,\ldots,\sigma_{d+1}$ minimally generate $G_F(2)$. The statement then follows from the previous theorem. \end{proof} Our treatment below is purely group-theoretical. However, the group $G$ plays an important role as the maximal pro-$2$ quotient of the absolute Galois group of SAP fields. (We refer the interested reader to \cite{Haran}, \cite{Mi}.) Also it is interesting to observe that if $G=C_2*\cdots *C_2$ is the Galois group as above, then $G$ is already determined by its quotient $G/G_{(3)}$. (See also Remarks~\ref{rmks:SAP} for closely related observations.) More precisely, if $H$ is another pro-$2$-group which is realizable as the Galois group of the maximal $2$-extension of a field, and if $H/H_{(3)}\simeq G/G_{(3)}$, then $H\simeq G$. (See \cite{MS1,MS2,Mi}.) The Hilbert-Poincar\'e series of ${\rm gr}({\mathbb F}_2[[G]])$ is \[ P_{{\rm gr}({\mathbb F}_2[[G]])}(t)= \frac{1+t}{1-dt}. \] \begin{rmk} Since the cohomology algebra $H^*(C_2,{\mathbb F}_2)$ is isomorphic to ${\mathbb F}_2[X]$, the polynomial algebra in one variable $X$ over ${\mathbb F}_2$, we have \[ P_{H^*(G,{\mathbb F}_2)}(t)=\sum_{n=0}^\infty \dim_{{\mathbb F}_2}(H^n(G,{\mathbb F}_2)) t^n= 1+(d+1)t+(d+1)t^2+\cdots= \frac{1+dt}{1-t} \] by \cite[Theorems 4.1.4-4.1.5]{NSW}. Therefore, we have \[ P_{H^*(G,{\mathbb F}_2)}(t) P_{{\rm gr}({\mathbb F}_2[[G]])}(-t)=1.
\] This is not a coincidence. One can show that the cohomology algebra $H^*(G,{\mathbb F}_2)$ is Koszul and that ${\rm gr}({\mathbb F}_2[[G]])$ is its Koszul dual. The above equality is just a special case of the well-known relation between the two Hilbert-Poincar\'e series of a Koszul algebra and its dual (see \cite[Corollary 2.2]{PP}). \qed \end{rmk} We have \[ \log P_{{\rm gr}({\mathbb F}_2[[G]])}(t)= \log(\frac{1}{1-dt})-\log(\frac{1}{1+t})=\sum_{n\geq 1} \frac{1}{n} (d^n-(-1)^n)t^n. \] Now we define the sequence $w_n(G), n=1,2,\ldots$ by \[ w_n(G)=\frac{1}{n}\sum_{m\mid n} \mu(n/m) (d^m-(-1)^m). \] \begin{prop} \label{prop:cn free product of C2} If $n=2^k m $ with $(m,2)=1$, then \[ c_n(G) =w_m(G) +w_{2m}(G)+\cdots + w_{2^km}(G). \qed \] \end{prop} \subsection{Free products of the cyclic group of order 2 as semidirect products} Let $G=C_2*\cdots * C_2$ be a free product in the category of pro-$2$-groups of $d+1$ copies of $C_2$. In this subsection we shall show that $G$ is isomorphic to a semidirect product $H\rtimes C_2$ of a free pro-$2$-group $H$ and $C_2$. We also provide a relation between $G_{(n)}$ and $H_{(n)}$. We define the numbers $\epsilon_n$, $n=1,2,\ldots$ by \[ \epsilon_n=\frac{1}{n}\sum_{m\mid n} \mu(n/m)(-1)^m, \] i.e., by the M{\"o}bius inversion formula, \[ \label{eq:epsilon} \tag{*}(-1)^n= \sum_{m\mid n} m \epsilon_m. \] \begin{lem} \label{lem:epsilon} We have $\epsilon_1=-1$, $\epsilon_2=1$ and $\epsilon_n=0$ for $n\geq 3$. \end{lem} \begin{proof} The equation (\ref{eq:epsilon}) determines $\epsilon_n$, $n\in {\mathbb N}$, uniquely. But $\epsilon_1=-1$, $\epsilon_2=1$ and $\epsilon_n=0$ for $n\geq 3$ work, as for these numbers \[ \sum_{m\mid n} m \epsilon_m =\begin{cases} -1 &\;\text{if $n$ is odd},\\ -1+2=1 &\; \text{if $n$ is even}.
\qedhere \end{cases} \] \end{proof} Let us write \[G=C_2*C_2*\cdots *C_2=\langle x_0\mid x_0^2\rangle*\langle x_1\mid x_1^2\rangle *\cdots * \langle x_d\mid x_d^2\rangle.\] For ease of notation, we consider $x_0,x_1,\ldots, x_d$ as elements of $G$. We consider a continuous homomorphism $\varphi:G\to C_2= \langle x\mid x^2\rangle$ defined by $x_i\mapsto x$ for all $i=0,1,\ldots,d$. For each $i=1,\ldots,d$, we set $y_i=x_0x_i\in G$ and let $H$ be the closed subgroup of $G$ generated by $y_1,\ldots,y_d$. \begin{lem} Let the notation be as above. \begin{enumerate} \item $\ker\varphi=H$. \item $H$ is a free pro-$2$-group of rank $d$. \item We have $G\simeq H\rtimes C_2$, where the action of $C_2$ on $H$ is given by $xy_ix=y_i^{-1}$. \end{enumerate} \end{lem} \begin{proof} (1) Clearly $y_i\in \ker\varphi$, hence $H\subseteq \ker\varphi$. Now consider any element $\gamma\in \ker\varphi$. Then for each open neighborhood $U$ of $\gamma$ in $G$, there exists an element $g=x_{i_1}\cdots x_{i_r}\in U$, $i_1,\ldots,i_r\in \{0,1,\ldots,d\}$, such that $1=\varphi(g)=x^r$. Hence $r$ is even. Since $x_{i_j}=x_0y_{i_j}$ (where we set $y_0=1$) and $x_0 y_ix_0= y_i^{-1}$, we obtain \[ g= x_0 y_{i_1}\cdots x_0 y_{i_r}=y_{i_1}^{-1}y_{i_2}\cdots y_{i_{r-1}}^{-1}y_{i_r}. \] Thus $g\in H$. Therefore $\gamma\in H$ and $H=\ker\varphi$. (2) By the profinite analogue, due to E. Binz, J. Neukirch and G. Wenzel, of the well-known Kurosh subgroup theorem for free products of discrete groups, explained in \cite[Theorem 4.2.1 and Remarks below this Theorem]{NSW}, we see that $H$ is indeed a free pro-$2$-group of rank $d$. (3) This follows by observing that the homomorphism $\psi\colon C_2=\langle x\mid x^2 \rangle \to G$ which maps $x$ to $x_1$ is a section of $\varphi$. \end{proof} The following proposition and corollary describe remarkable properties of the pair $\{H,G\}$. \begin{prop} \label{prop:comparison} We have $c_1(H)=d=c_1(G)-1$ and $c_n(H)=c_n(G)$ for all $n\geq 2$.
\end{prop} \begin{proof} It is clear that $c_1(H)=w_1(H)=d$ and $c_1(G)=w_1(G)=d+1$. Hence $c_1(H)=d=c_1(G)-1$. We shall show that $c_n(H)=c_n(G)$ for any $n\geq 2$. We note that \[ w_n(H)-w_n(G)=\frac{1}{n}\sum_{m\mid n}\mu(n/m)(-1)^m =\epsilon_n. \] By Lemma~\ref{lem:epsilon}, one has $w_2(H)=w_2(G)+1$ and $w_n(H)=w_n(G)$ for every $n\geq 3$. If $n>1$ is odd, then \[ c_n(H)=w_n(H)=w_n(G)=c_n(G). \] If $n$ is even, then by writing $n=2^km$ with $m$ odd, we have \[ \begin{aligned} c_n(H)&= w_m(H)+w_{2m}(H) +w_{4m}(H)+\cdots+w_{2^km}(H)\\ &=w_m(G)+w_{2m}(G) +w_{4m}(G)+\cdots+w_{2^km}(G)=c_n(G). \end{aligned} \] (Note that we always have $w_m(H)+w_{2m}(H)=w_m(G)+w_{2m}(G)$ for every $m\geq 1$ odd.) \end{proof} \begin{cor} \label{cor:quotient} Let $n\geq 2$ be an integer. \begin{enumerate} \item $H_{(n)}=H\cap G_{(n)}$. \item $G/G_{(n)}\simeq H/H_{(n)}\rtimes C_2$, where the action of $C_2$ on $H/H_{(n)}$ is given by $\bar{x}\bar{y}_i\bar{x}=\bar{y}_i^{-1}$. \end{enumerate} \end{cor} \begin{proof} (1) Clearly $H_{(n)}\subseteq H\cap G_{(n)}$. We proceed by induction on $n$ to show that $H_{(n)}=H\cap G_{(n)}$. First consider the case $n=2$. We have an exact sequence \[ 1\to H/H\cap G_{(2)} \to G/G_{(2)}\to C_2\to 1. \] This implies that $[H: H\cap G_{(2)}]= [G:G_{(2)}]/2=2^d=[H:H_{(2)}]$. Hence $H_{(2)}=H\cap G_{(2)}$. Assume that $H_{(n)}=H\cap G_{(n)}$ for some $n\geq 2$. Then from the exact sequence \[ 1\to H/H\cap G_{(n)} \to G/G_{(n)}\to C_2\to 1, \] we obtain $[H:H_{(n)}]=[H:H\cap G_{(n)}]=[G:G_{(n)}]/2$. From a similar exact sequence we obtain \[ \begin{aligned} {[H:H\cap G_{(n+1)}]}&=\frac{1}{2}[G:G_{(n+1)}] = \frac{1}{2}[G:G_{(n)}] [G_{(n)}:G_{(n+1)}]\\ &= [H:H_{(n)}] [H_{(n)}:H_{(n+1)}]=[H:H_{(n+1)}]. \end{aligned} \] Here the equality $[G_{(n)}:G_{(n+1)}]=[H_{(n)}:H_{(n+1)}]$ follows from Proposition~\ref{prop:comparison}. Therefore $H_{(n+1)}=H\cap G_{(n+1)}$. (2) This follows from (1).
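Proposition~\ref{prop:comparison} can also be verified numerically from the two explicit $w$-sequences, $w_n(H)=\frac{1}{n}\sum_{m\mid n}\mu(n/m)d^m$ and $w_n(G)=\frac{1}{n}\sum_{m\mid n}\mu(n/m)(d^m-(-1)^m)$. The following sketch is our own check (the function names are ours):

```python
def mu(n):  # Moebius function
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def w_H(n, d):
    """w_n of the free pro-2-group H of rank d (necklace numbers)."""
    s = sum(mu(n // m) * d ** m for m in range(1, n + 1) if n % m == 0)
    assert s % n == 0
    return s // n

def w_G(n, d):
    """w_n(G) for G = C_2 * ... * C_2 (d+1 free factors)."""
    s = sum(mu(n // m) * (d ** m - (-1) ** m) for m in range(1, n + 1) if n % m == 0)
    assert s % n == 0
    return s // n

def c(n, wn):
    """c_n from a w-sequence with p = 2: n = 2^k m, m odd, c_n = w_m + w_{2m} + ... + w_{2^k m}."""
    m = n
    while m % 2 == 0:
        m //= 2
    total, j = 0, m
    while j <= n:
        total += wn(j)
        j *= 2
    return total

for d in range(1, 6):
    assert c(1, lambda j: w_G(j, d)) == d + 1 and c(1, lambda j: w_H(j, d)) == d
    for n in range(2, 13):   # c_n(H) = c_n(G) for all n >= 2
        assert c(n, lambda j: w_G(j, d)) == c(n, lambda j: w_H(j, d))
```

Note that $w_2(G)$ may be negative (for $d=1$ it equals $-1$); the equalities of $c_n$'s nevertheless hold, exactly as in the proof above.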
\end{proof} \subsection{Another semidirect product} In this subsection we consider an example in which $G$ is the semidirect product $G:={\mathbb Z}_2^d \rtimes C_2= H \rtimes \langle x \rangle$, where the action of $C_2$ on $H:={\mathbb Z}_2^d$ is given by $xyx=y^{-1}$, for all $y\in H$. This group $G$ is realizable as the maximal pro-$2$-quotient of the absolute Galois group of a superpythagorean field. Recall that a formally real Pythagorean field $F$ with $|F^\times/(F^\times)^2|=2^{d+1}<\infty$ is called a superpythagorean field if $F$ admits exactly $2^d$ orderings. \begin{prop} Let $F$ be a formally real Pythagorean field with $|F^\times/(F^\times)^2|=2^{d+1}$. Then there exists an epimorphism $G_F(2)\twoheadrightarrow G={\mathbb Z}_2^d\rtimes C_2$. \end{prop} \begin{proof} We choose any ordering $P$ in $F$ and an ${\mathbb F}_2$-basis $[a_1],\ldots, [a_d]$ of $P/(F^\times)^2$. By \cite{Be1} we know that there exists a field $E$, the Euclidean closure of $F$ with respect to $P$, such that $F(2)=E(\sqrt{-1})$, $E$ is a formally real field and $(E^\times)^2\cap F^\times=P$. For each $a_i$ as above, we can pick a sequence \[ \sqrt{a_i}, \sqrt[4]{a_i},\ldots, \sqrt[2^n]{a_i},\ldots, \] such that all $\sqrt[2^n]{a_i}$ are in $E^\times$. Indeed, by induction on $n$ we may assume that $\sqrt[2^n]{a_i}$ is in $E^\times$. Since $E^\times=(E^\times)^2\cup -(E^\times)^2$, we may choose $\sqrt[2^n]{a_i}$ in $(E^\times)^2$, and hence we can pick $\sqrt[2^{n+1}]{a_i}$ in $E^\times$. We set \[ \tilde{M}:=\bigcup_{n=1}^\infty F(\sqrt[2^n]{a_1},\ldots,\sqrt[2^n]{a_d}). \] Then $\tilde{M}$ is formally real since $\tilde{M}$ is a subfield of $E$. We recall that for each $n\in {\mathbb N}$, $F(\sqrt{-1})$ contains a primitive $2^n$-th root of unity $\zeta_{2^n}$. (See \cite[Chapter II, Theorem 8]{Be}.) We may also assume that $\zeta_{2^{n+1}}^2=\zeta_{2^n}$. We set $M:=\tilde{M}(\sqrt{-1})$. Then $M/F$ is a Galois extension. We show that ${\rm Gal}(M/F(\sqrt{-1}))$ is isomorphic to ${\mathbb Z}_2^d$. This follows from Kummer theory.
In fact, let $\tau_1, \ldots, \tau_d$ be elements in ${\rm Gal}(M/F(\sqrt{-1}))$ such that for each $i=1,\ldots,d$, one has \[ \tau_i(\sqrt[2^n]{a_i})=\zeta_{2^n} \sqrt[2^n]{a_i}, \; \text { and }\tau_i(\sqrt[2^n]{a_j})=\sqrt[2^n]{a_j}\; \forall j\not=i. \] Then ${\rm Gal}(M/F(\sqrt{-1}))=(\prod_{i=1}^d\langle \tau_i\rangle) \simeq {\mathbb Z}_2^d$. We observe that the restriction of a nontrivial element of ${\rm Gal}(E(\sqrt{-1})/E)$ to $M$ gives us a nontrivial element $\sigma\in {\rm Gal}(M/\tilde{M})$. Thus we have a splitting \[ {\rm Gal}(M/F)\simeq {\rm Gal}(M/F(\sqrt{-1}))\rtimes \langle \sigma \rangle, \] where $ \langle \sigma \rangle\simeq C_2$, and the action of $C_2$ on ${\rm Gal}(M/F(\sqrt{-1}))$ is given by inversion. The natural projection \[ G_F(2)={\rm Gal}(F(2)/F)\to {\rm Gal}(M/F)\simeq {\mathbb Z}_2^d\rtimes C_2 \] gives the desired epimorphism. \end{proof} \begin{cor} \label{cor:superPytha} Let $F$ be a field with $|F^\times/(F^\times)^2|=2^{d+1}$. Then $F$ is a superpythagorean field if and only if $G_F(2)$ is isomorphic to the group $G={\mathbb Z}_2^d \rtimes C_2$. \end{cor} \begin{proof} Assume that $F$ is a superpythagorean field with $|F^\times/(F^\times)^2|=2^{d+1}$. Let the notation be as in the previous proposition. Then ${\rm Gal}(M/F)\simeq G={\mathbb Z}_2^d \rtimes C_2$. On the other hand, from \cite[Example 3.8, (ii)]{Wa} (see also \cite[Chapter III, Theorem 1]{Be}), we know that ${\rm Gal}(M/F)$ is equal to $G_F(2)$. Hence $G_F(2)\simeq {\mathbb Z}_2^d \rtimes C_2$. The converse direction is proved in a similar fashion to the proof of the ``if'' part in Theorem~\ref{thm:SAP}. \end{proof} \begin{cor} \label{cor:superPy epi} Let $F$ be any formally real Pythagorean field, and let $K$ be a superpythagorean field. Assume that $|F^\times/(F^\times)^2|=|K^\times/(K^\times)^2|=2^{d+1}$. Then there exists an epimorphism $G_F(2) \twoheadrightarrow G_K(2)\simeq {\mathbb Z}_2^d\rtimes C_2$.
\end{cor} \begin{proof} This follows from the previous proposition and corollary. \end{proof} \begin{lem} Let $G=H\rtimes \langle x \rangle={\mathbb Z}_2^d\rtimes C_2$ be as above. Let $n\geq 2$ be an integer, and let $s=\lceil \log_2n \rceil$. Then $G_{(n)}=H^{2^s}$. \end{lem} \begin{proof} We proceed by induction on $n$. We first observe that $[y,x]=y^{-1}x^{-1} y x=(y^{-1})^2$ and $(yx)^2=1$, for every $y\in H$. Hence \[ G_{(2)}=G^2[G,G]=G^2=H^2. \] The lemma is true for $n=2$. We assume that the lemma is true for $j$ with $2\leq j<n$. Then \[ \begin{aligned} G_{(n)}&=G_{(\lceil n/2 \rceil)}^2\prod_{i+j=n}[G_{(i)},G_{(j)}]\\ &=G_{(\lceil n/2\rceil)}^2[G,G_{(n-1)}]\\ &=(H^{2^{s-1}})^2=H^{2^s}. \end{aligned} \] Here we use that $G_{(n-1)}\subseteq H^{2^{s-1}}$, and hence $[G,G_{(n-1)}]\subseteq H^{2^s}$. \end{proof} An immediate consequence of the above lemma is the following result. \begin{cor} \label{cor:cn superPy} For every integer $n\geq 1$, we have \[ c_n(G)= \begin{cases} d+1 &\text{ if $n=1$},\\ d &\text{ if $n=2^s$ for some $1\leq s\in {\mathbb Z}$},\\ 0 & \text{ if $n$ is not a power of $2$}. \end{cases} \] \end{cor} \begin{cor} \label{cor:superPy} We have \[ P_{{\rm gr}({\mathbb F}_2[[G]])}(t)=\frac{1+t}{(1-t)^d}. \] \end{cor} \begin{proof} We write $\log P_{{\rm gr}({\mathbb F}_2[[G]])}= \sum_{n\geq 1} b_n(G) t^n $, and let \[ w_n(G)=\frac{1}{n}\sum_{m\mid n} \mu(n/m) mb_m(G). \] By Lemma~\ref{lem:coprime}, if $n$ is odd then $w_n(G)=c_n(G)$. In particular, $w_1(G)=c_1(G)=d+1$, and $w_{2i+1}(G)=c_{2i+1}(G)=0$ for $i\geq 1$. By Lemma~\ref{lem:not coprime}, $w_2(G)=c_2(G)-c_1(G)=d-(d+1)=-1$. We claim that $w_n(G)=0$ if $n$ is even and $n\geq 4$.
Indeed, if $n=2^s$ with $s\geq 2$, then by Lemma~\ref{lem:not coprime}, \[w_{2^s}(G)=c_{2^s}(G)-c_{2^{s-1}}(G)=d-d=0.\] Now if $n=2m$, where $m$ is not a power of $2$, then also by Lemma~\ref{lem:not coprime}, \[w_{2m}(G)=c_{2m}(G)-c_{m}(G)=0-0=0.\] The corollary then follows from Remark~\ref{rmk:wn}. \end{proof} \begin{rmks} \label{rmks:SAP} It is an interesting fact that $c_1(G)$ and $c_2(G)$ can be sufficient to determine $G$ itself within some large families of pro-$p$-groups. We mentioned an example in Remarks~\ref{rmks:characterization} (2). Here are two other instances. (1) Suppose that $K$ is an SAP field with $|K^\times/(K^\times)^2|=2^{d+1}$. Then $G_K(2)=C_2*\cdots *C_2$, the free product of $d+1$ copies of $C_2$. By Proposition~\ref{prop:cn free product of C2}, one has \[ c_1(G_K(2))=d+1 \text { and } c_2(G_K(2))=\frac{d(d+1)}{2}. \] Now let $F$ be a formally real Pythagorean field with $|F^\times/(F^\times)^2|< \infty$. We assume that $c_1(G_F(2))=d+1$ and that $c_2(G_F(2))=d(d+1)/2$ for some integer $d\geq 0$. Then we claim that $F$ is an SAP field with exactly $d+1$ orderings. So, quite remarkably, within the family of formally real Pythagorean fields with finitely many square classes, the numbers $c_1(G_F(2))$ and $c_2(G_F(2))$ above suffice to characterize SAP fields $F$. We shall now prove this claim. Because $c_1(G_F(2))=d+1$, we see that $G_F(2)$ has $d+1$ minimal generators, and therefore $|F^\times/(F^\times)^2|=2^{d+1}$. We pick any SAP field $K$ with $|K^\times/(K^\times)^2|=2^{d+1}$. By Corollary~\ref{cor:SAP}, there exists an epimorphism $\varphi\colon G_K(2)\twoheadrightarrow G_F(2)$. We have \[ \begin{aligned} \log_2|G_K(2)/{G_K(2)}_{(3)}| &=c_1(G_K(2))+c_2(G_K(2))\\ &=(d+1)+\frac{d(d+1)}{2}\\ &=c_1(G_F(2))+c_2(G_F(2))\\ &=\log_2|G_F(2)/{G_F(2)}_{(3)}|. \end{aligned} \] This implies that the induced epimorphism $G_K(2)/{G_K(2)}_{(3)}\twoheadrightarrow G_F(2)/{G_F(2)}_{(3)}$ is an isomorphism.
By \cite[Theorem D]{CEM}, $\varphi \colon G_K(2)\to G_F(2)$ is an isomorphism. This implies that $F$ is an SAP field by Theorem~\ref{thm:SAP}. (2) Suppose that $K$ is a superpythagorean field with $|K^\times/(K^\times)^2|=2^{d+1}<\infty$. By Corollary~\ref{cor:cn superPy}, one has \[ c_1(G_K(2))=d+1 \; \text { and } c_2(G_K(2))=d. \] Now let $F$ be a formally real Pythagorean field with $|F^\times/(F^\times)^2|< \infty$. We assume that $c_1(G_F(2))=d+1$ and $c_2(G_F(2))=d$ for some integer $d\geq 0$. Then we claim that $F$ is a superpythagorean field. So within the family of formally real Pythagorean fields with finitely many square classes, the numbers $c_1(G_F(2))$ and $c_2(G_F(2))$ above also suffice to characterize superpythagorean fields $F$. We shall now prove this claim. We pick any superpythagorean field $K$ with $|K^\times/(K^\times)^2|=2^{d+1}$. By Corollary~\ref{cor:superPy}, we have an epimorphism $\varphi\colon G_F(2)\twoheadrightarrow G_K(2)$. We have \[ \begin{aligned} \log_2|G_F(2)/{G_F(2)}_{(3)}| &=c_1(G_F(2))+c_2(G_F(2))\\ &=(d+1)+d\\ &=c_1(G_K(2))+c_2(G_K(2))\\ &=\log_2|G_K(2)/{G_K(2)}_{(3)}|. \end{aligned} \] This implies that the induced epimorphism $G_F(2)/{G_F(2)}_{(3)}\twoheadrightarrow G_K(2)/{G_K(2)}_{(3)}$ is an isomorphism. By \cite[Theorem D]{CEM}, $\varphi \colon G_F(2)\to G_K(2)$ is an isomorphism. This implies that $F$ is a superpythagorean field by Corollary~\ref{cor:superPytha}. \qed \end{rmks} \section{Demushkin groups} Recall that a pro-$p$-group $G$ is said to be a Demushkin group if \begin{enumerate} \item $\dim_{{\mathbb F}_p} H^1(G,{\mathbb F}_p)<\infty,$ \item $\dim_{{\mathbb F}_p} H^2(G,{\mathbb F}_p)=1,$ \item the cup product $H^1(G,{\mathbb F}_p)\times H^1(G,{\mathbb F}_p)\to H^2(G,{\mathbb F}_p)$ is a non-degenerate \mbox{bilinear} form. \end{enumerate} By the work of \cite{De1,De2}, \cite{Se1} and \cite{La1}, we now have a complete classification of Demushkin groups.
Let $G$ be a Demushkin group of rank $d=\dim_{{\mathbb F}_p} H^1(G,{\mathbb F}_p)$. Let $c_n=c_n(G)$. Then by \cite[Theorem 5.1 (g)]{La3} (see also \cite{Fo,Ga,LM}), we have the Hilbert-Poincar\'e series \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)= \frac{1}{1-dt+t^2}. \] We write $1-dt+t^2=(1-at)(1-bt)$ so that $a+b=d$ and $ab=1$. Then \[ \log P_{{\rm gr}({\mathbb F}_p[[G]])}(t)= \log(\frac{1}{1-at})+\log(\frac{1}{1-bt})=\sum_{n\geq 1} \frac{1}{n}(a^n+b^n)t^n. \] We define the sequence $w_n(G), n=1,2,\ldots$ by \[ w_n(G)=\frac{1}{n}\sum_{m\mid n} \mu(m) (a^{n/m}+b^{n/m})=\frac{1}{n}\sum_{m\mid n} \mu(n/m) (a^m+b^m). \] \begin{rmk} The numbers $w_n(G)$ are given by the formula \[ w_n(G)=\frac{1}{n}\sum_{m\mid n} \mu(n/m) \left[ \sum_{0\leq i\leq [m/2]} (-1)^i \frac{m}{m-i} {m-i \choose i} d^{m-2i} \right]. \] (See \cite[Proof of Proposition 4]{La2}.) \end{rmk} \begin{prop} If $n=p^k m $ with $(m,p)=1$, then \[ c_n(G) =w_m(G) +w_{pm}(G)+\cdots + w_{p^km}(G). \qedhere \] \end{prop} \begin{ex} Let $G$ be a Demushkin pro-$p$-group of finite rank $d$. We have \[ \begin{aligned} c_1(G)&=d,\\ c_2(G)&= \begin{cases} \frac{d^2-d-2}{2} \text{ if } p\not=2,\\ \frac{d^2+d-2}{2} \text{ if } p=2, \end{cases}\\ c_3(G) &=\begin{cases} \frac{d^3-4d}{3} \text{ if } p\not=3,\\ \frac{d^3-d}{3} \text{ if } p=3, \end{cases}\\ c_4(G)&= \begin{cases} \frac{d^4-5d^2+4}{4} \text{ if } p\not=2,\\ \frac{d^4-3d^2+2d}{4} \text{ if } p=2, \end{cases}\\ c_5(G)&=\begin{cases} \frac{d^5-5d^3+4d}{5} \text{ if } p\not=5,\\ \frac{d^5-5d^3+9d}{5} \text{ if } p=5. \end{cases} \end{aligned} \] Observe that our numbers $c_n(G)$, $n=1,2,\ldots$, also detect the minimal numbers of generators of $G_{(n)}$. Indeed, by a remarkable result of I. V. Ando\v{z}skii, obtained independently by D. Dummit and J. Labute, for each open subgroup $T$ of the Demushkin group $G$ we have the following expression for the minimal number of generators $d(T)$ of $T$: \[ d(T)=[G:T](d(G)-2)+2. \] (See \cite[Theorem 3.9.15]{NSW}.)
Therefore \[ d_n(G):=d(G_{(n)})= p^{\sum_{i=1}^{n-1}c_i(G)}(d-2)+2. \] \end{ex} From now on we assume that $G=F/\langle r\rangle$, where $F$ is a free pro-$p$-group on generators $x_1,x_2,\ldots,x_d$, and \[ r=[x_1,x_2][x_3,x_4]\cdots [x_{d-1},x_d]. \] Then we extract from \cite{La2} the following fact. \begin{lem} \label{lem:wn Demushkin} For every $n$, $w_n(G)={\rm rank}_{{\mathbb Z}_p} G_n/G_{n+1}$. \end{lem} \begin{proof} This follows from \cite[Theorem and proof of Proposition 4]{La2}. (Although \cite{La2} only treats abstract discrete groups, his results are also true for pro-$p$-groups with virtually the same proofs; one only has to replace ${\mathbb Z}$ by ${\mathbb Z}_p$, subgroups by closed subgroups, and group rings by completed group rings.) \end{proof} \begin{cor} Assume that for each $n$, $B_n$ represents a ${\mathbb Z}_p$-basis of $G_n/G_{n+1}$. Let us write $n=p^km$ with $(m,p)=1$. Then a basis of the ${\mathbb F}_p$-vector space $G_{(n)}/G_{(n+1)}$ can be represented by the following set \[ B_m^{p^k} \bigsqcup B_{pm}^{p^{k-1}}\bigsqcup \cdots \bigsqcup B_{p^{k-1}m}^p\bigsqcup B_n. \] \end{cor} \section{Some other groups} \subsection{Free products of a finite number of Demushkin groups and free pro-$p$-groups} Let $G$ be a free pro-$p$ product of $r$ Demushkin groups of ranks $d_1,\ldots, d_r$, and of a free pro-$p$-group of rank $e$. The Hilbert-Poincar\'e series of ${\rm gr}({\mathbb F}_p[[G]])$ is \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)=\frac{1}{1-(d_1+\cdots +d_r+e)t+ rt^2}=: \frac{1}{(1-at)(1-bt)}. \] We define the sequence $w_n(G), n=1,2,\ldots$ by \[ w_n(G)=\frac{1}{n}\sum_{m\mid n} \mu(m) (a^{n/m}+b^{n/m})=\frac{1}{n}\sum_{m\mid n} \mu(n/m) (a^m+b^m). \] \begin{prop} If $n=p^k m $ with $(m,p)=1$, then \[ c_n(G) =w_m(G) +w_{pm}(G)+\cdots + w_{p^km}(G). 
\qedhere \] \end{prop} \subsection{A free product of a cyclic group of order 2 and a free pro-$2$-group} We first consider the case of $p=2$ because this is the case of interest in Galois theory of $2$-extensions, and because this case is a bit simpler than the general case of any prime $p$. This latter case will be covered in the next subsection. Let $G=C_2* S$ be a free pro-$2$ product of the cyclic group $C_2$ of order 2 and a free pro-$2$-group $S$ of rank $d$. The Hilbert-Poincar\'e series of ${\rm gr}({\mathbb F}_2[[G]])$ is \[ P_{{\rm gr}({\mathbb F}_2[[G]])}(t)=\left(\frac{1}{1+t}-dt\right)^{-1}=\frac{1+t}{1-dt- dt^2}=: \frac{1+t}{(1-at)(1-bt)}. \] We define the sequence $w_n(G), n=1,2,\ldots$ by \[ w_n(G)=\frac{1}{n}\sum_{m\mid n} \mu(n/m) (a^m+b^m-(-1)^m). \] \begin{prop} If $n=2^k m $ with $(m,2)=1$, then \[ c_n(G) =w_m(G) +w_{2m}(G)+\cdots + w_{2^km}(G). \qedhere \] \end{prop} \subsection{A free product of a cyclic group of order $p$ and a free pro-$p$-group} Let $G=C_p* S$ be a free pro-$p$ product of the cyclic group $C_p$ of order $p$, and a free pro-$p$-group $S$ of rank $d$. We shall find a formula for $c_n(G)$. The Hilbert-Poincar\'e series of ${\rm gr}({\mathbb F}_p[[G]])$ is \[ P_{{\rm gr}({\mathbb F}_p[[G]])}(t)=\frac{1+t+\cdots+t^{p-1}}{1-dt- dt^2-\cdots-dt^p}=: \frac{(1-\xi_1 t)\cdots(1-\xi_{p-1}t)}{(1-a_1t)\cdots (1-a_pt)}. \] We define the sequence $w_n(G), n=1,2,\ldots$ by \[ w_n(G)=\frac{1}{n}\sum_{m\mid n} \mu(n/m)( a_1^m+\cdots+a_p^m-(\xi_1^m+\cdots+\xi_{p-1}^m)). \] \begin{prop} If $n=p^k m $ with $(m,p)=1$, then \[ c_n(G) =w_m(G) +w_{pm}(G)+\cdots + w_{p^km}(G). \qedhere \] \end{prop} \begin{rmk} Note that \[ \xi_1^n+\cdots+\xi_{p-1}^n =\begin{cases} -1 \text{ if } (n,p)=1,\\ p-1 \text{ if } p\mid n. \end{cases} \] We shall compute $a_1^n+\cdots+a_{p}^n$.
From \[ \frac{1}{(1-a_1t)\cdots(1-a_pt)}=\frac{1}{1-(dt+dt^2+\cdots+dt^p)}, \] taking the logarithms of both sides, we obtain \[ \begin{aligned} & \sum_{n\geq 1}\frac{1}{n}(a_1^n+\cdots+a_p^n)t^n= \sum_{n\geq 1} \frac{1}{n} (dt+dt^2+\cdots +dt^p)^n\\ &= \sum_{n\geq 1} \frac{1}{n}\sum_{\substack{k_1+\cdots+k_p=n,\\k_i \geq 0}} \binom{n}{k_1,\ldots,k_p} (dt)^{k_1} (dt^2)^{k_2}\cdots (dt^p)^{k_p}\\ &= \sum_{M}\sum_{\substack{k_1+2k_2+\cdots+pk_p=M,\\k_i \geq 0}}\\ &\hspace*{60pt} \left[\frac{1}{M-k_2-2k_3-\cdots-(p-1)k_p} \binom{M-k_2-2k_3-\cdots-(p-1)k_p}{k_1,\ldots,k_p} d^{M-k_2-2k_3-\cdots-(p-1)k_p}\right] t^M. \end{aligned} \] Here $M-k_2-2k_3-\cdots-(p-1)k_p=k_1+k_2+\cdots+k_p$ is precisely the number of factors in the corresponding multinomial term. Finally, comparing the coefficients of $t^n$ gives us the required formula for $a_1^n+\cdots+a_{p}^n$: \[ \begin{aligned} &a_1^n+\cdots+a_p^n\\ &=\sum_{\substack{k_1+2k_2+\cdots+pk_p=n,\\k_i \geq 0}}\frac{n}{n-k_2-2k_3-\cdots-(p-1)k_p} \binom{n-k_2-2k_3-\cdots-(p-1)k_p}{k_1,\ldots,k_p} d^{n-k_2-2k_3-\cdots-(p-1)k_p}. \end{aligned} \] \end{rmk}
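All of the $c_n$ recipes in this and the previous section follow the same pattern: Newton power sums of the inverse roots of the Hilbert-Poincar\'e series, M\"obius inversion to obtain $w_n$, and summation along the chain $m, pm,\ldots,p^km$. As a numerical sanity check (ours, not part of the text; the function names are our own), the following Python sketch carries this recipe out for a Demushkin group of rank $d$ and verifies the closed formulas for $c_2(G),\ldots,c_5(G)$ stated in the Demushkin example.

```python
from fractions import Fraction

def mobius(n):
    # Möbius function by trial division
    res, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            res = -res
        p += 1
    if m > 1:
        res = -res
    return res

def newton_sums(d, N):
    # s[n] = a^n + b^n, where a, b are the roots of x^2 - d x + 1,
    # via the Newton recurrence s_n = d s_{n-1} - s_{n-2}
    s = [2, d]
    for _ in range(2, N + 1):
        s.append(d * s[-1] - s[-2])
    return s

def c_n(d, p, n):
    # c_n(G) = w_m + w_{pm} + ... + w_{p^k m} for n = p^k m with (m, p) = 1,
    # where w_r = (1/r) * sum_{m | r} mu(r/m) (a^m + b^m)
    s = newton_sums(d, n)
    w = lambda r: Fraction(sum(mobius(r // m) * s[m]
                               for m in range(1, r + 1) if r % m == 0), r)
    m = n
    while m % p == 0:
        m //= p
    total = Fraction(0)
    while m <= n:
        total += w(m)
        m *= p
    return total
```

For instance, for $d=3$ and $p=2$ this returns $c_2 = (9+3-2)/2 = 5$, in agreement with the closed formula $(d^2+d-2)/2$.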
\section{Introduction} We consider the function $u(x,t): {\mathbb R}^N\times [0,\infty) \to[0,\infty)$ solving the following free boundary problem \begin{equation}\label{eqn: CLMV}\tag{P-V} \left \{ \begin{array}{lll} -\Delta u(x,t) = \lambda(t) & \text{ in }& \{u>0\},\\ \\ \mathcal{V} = F(|Du|)& \text{ on } & \partial\{u>0\}, \end{array}\right. \end{equation} where the Lagrange multiplier $\lambda(t)$ is associated to the volume constraint \begin{equation}\label{volume} \int u(\cdot,t) dx\equiv V \end{equation} (see Figure~\ref{fig: problem}). \begin{figure}[t] \begin{center} \includegraphics[scale=0.7]{dropletpic2.pdf} \end{center} \caption{Description of the problem} \label{fig: problem} \end{figure} $\mathcal{V}=\mathcal{V}_{x,t}$ given above denotes the (outward) normal velocity of the free boundary $\partial\{u>0\}$ at $(x,t)$. The prescribed normal velocity $F:[0,+\infty)\to {\mathbb R}$ is a continuous and strictly increasing function with $F(1)=0$, where $1$ denotes the equilibrium contact angle of the profile $u$ with the surface. See Section 2 for a rigorous formulation of weak solutions of \eqref{eqn: CLMV}. \medskip In two dimensions, \eqref{eqn: CLMV} is a simplified model describing a liquid droplet resting on a flat surface. The model is quasi-static: the contact line is assumed to move on a time scale much slower than that of capillary relaxation; see below for more discussion on the derivation of the model (also see e.g. \cite{AD}, \cite{greenspan}). In this context, $u$ denotes the height of the drop. We call $\Omega_t := \{u(\cdot,t)>0\}$ the {\it wetted set} and the free boundary $\Gamma_t := \partial \{u(\cdot,t)>0\}$ the {\it contact line} between the liquid and the flat surface. The function $$F: [0,+\infty) \to \mathbb{R}$$ represents the dependence of the normal velocity of the contact line $\Gamma_t$ on the contact angle. The deviation of the contact angle $|\nabla u|$ on $\Gamma_t(u)$ from the equilibrium value $1$ is responsible for the motion of the contact line.
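It may help to see concretely how the multiplier $\lambda(t)$ is slaved to the volume constraint. In the toy one-dimensional sketch below (our own illustration, not from the text, with the wetted set $(-r,r)$ frozen in time): solving $-w''=1$ with $w(\pm r)=0$ gives $w(x)=(r^2-x^2)/2$, and by linearity the height profile is $u=\lambda w$, so the constraint $\int u = V$ forces $\lambda = V/\int w = 3V/(2r^3)$. The code recovers this multiplier from a finite-difference solve.

```python
def multiplier(r, V, n=400):
    # Solve -w'' = 1 on (-r, r), w(±r) = 0, by central differences, then use
    # linearity: u = lam * w, so the volume constraint ∫ u = V gives
    # lam = V / ∫ w.  (Exact 1-D answer: lam = 3V / (2 r^3).)
    h = 2.0 * r / n
    m = n - 1                        # number of interior nodes
    b = [2.0] * m                    # diagonal of the tridiagonal system
    d = [h * h] * m                  # right-hand side
    for i in range(1, m):            # Thomas algorithm; off-diagonals are -1
        b[i] = 2.0 - 1.0 / b[i - 1]
        d[i] = d[i] + d[i - 1] / b[i - 1]
    w = [0.0] * m
    w[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):   # back substitution
        w[i] = (d[i] + w[i + 1]) / b[i]
    vol_w = h * sum(w)               # trapezoid rule; boundary values vanish
    return V / vol_w
```

For example, `multiplier(1.0, 2.0)` is close to the exact value $\lambda = 3\cdot 2/(2\cdot 1^3) = 3$; the grid size `n=400` is an arbitrary choice.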
The relation between the velocity of the contact line and the contact angle $|\nabla u |$ is not a settled issue, and various velocity laws have been proposed and studied (e.g. \cite{blake}, \cite{greenspan}, \cite{hocking}) in the fluids literature. This motivates us to study our problem \eqref{eqn: CLMV} with the most general possible $F$, at least for the study of geometric properties of solutions. \medskip In sections 4 and 5, we will focus on $F(|Du|) = |Du|^2 -1$ for simplicity of presentation, since the associated energy structure is simplest for this choice of $F$. The approach we present will nevertheless apply to general $F$, as we will discuss in section 4. \medskip Our aim is to address the long time behavior of the solution of \eqref{eqn: CLMV}. Note that the equilibrium solution $v(x)$ for \eqref{eqn: CLMV} solves \begin{equation}\label{eqn: EQ}\tag{EQ} \left \{ \begin{array}{lll} -\Delta v(x) = \lambda &\text{ in }& \{v>0\}, \\ \\ |Dv|=1 &\text{ on } &\partial\{v>0\}. \end{array}\right. \end{equation} Assuming that $v$ is $C^{2,\alpha}$, the classical result of Serrin \cite{Serrin71} yields that $v$ is radial. We would like to show the dynamic stability of the above result, in the context of our model. More precisely, we would like to show that the solutions of \eqref{eqn: CLMV} converge uniformly to the round drop given by \eqref{eqn: EQ}. Such a stability result is, to the best of the authors' knowledge, new in any model describing the evolution of volume-preserving drops (see \cite{aftalon} and \cite{brandolini} for stability results on stationary problems). \medskip In general, the behavior of evolving drops is highly difficult to analyze, due to the diverse possibility of topological changes the drop may go through and due to the generic non-unique nature of the evolution (see \cite{AD} for examples).
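The relaxation to the round equilibrium can already be seen in a toy radial computation (our own illustration, not from the text). In one space dimension, for a drop of volume $V$ supported on $(-r,r)$, one has $u(x)=\frac{\lambda}{2}(r^2-x^2)$ with $\lambda=3V/(2r^3)$, hence contact angle $|u'|(r)=3V/(2r^2)$, and the volume-preserving evolution of the radius reduces to the ODE $\dot r = F\big(3V/(2r^2)\big)$. For any strictly increasing $F$ with $F(1)=0$ this relaxes monotonically to the radius where the contact angle equals $1$; the sketch below checks this for $F(s)=s^2-1$.

```python
import math

def contact_angle(r, V):
    # |u'|(r) = 3V / (2 r^2) for the 1-D parabolic profile of volume V
    return 3.0 * V / (2.0 * r * r)

def evolve_radius(r0, V, F, dt=1e-3, T=20.0):
    # forward-Euler integration of the radius ODE  dr/dt = F(|u'|(r));
    # dt and T are arbitrary numerical choices
    r = r0
    for _ in range(int(T / dt)):
        r += dt * F(contact_angle(r, V))
    return r

F = lambda s: s * s - 1.0           # the velocity law F(|Du|) = |Du|^2 - 1
V = 2.0
r_star = math.sqrt(3.0 * V / 2.0)   # the contact angle equals 1 exactly here
```

Both an initially "too steep" drop (`r0 = 1`) and an initially "too flat" one (`r0 = 3`) relax to the same equilibrium radius `r_star`.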
On the other hand, it is reasonable to expect that a drop stays simply connected if its initial shape is star-shaped with respect to some ball inside the drop. This is what we confirm in terms of {\it $\rho$-reflection} (see Definition~\ref{rho_reflection}), which measures star-shapedness of a drop with respect to the reflection comparison. Numerical simulations suggest that initially convex drops may develop a wedge while contracting (see e.g. \cite{glasner}), and thus our definition, which allows wedges on the free boundary, seems to be appropriate to address the global-time behavior of the drop. \medskip \begin{thm}[Theorem 5.1]\label{main} Suppose $u_0$ satisfies $\rho$-reflection and $\int u_0 = V$, with $0<\rho \leq \frac{1}{10}V^{\tfrac{1}{N+1}}$. Then the following holds: \begin{itemize} \item[(a)] there exists an ``energy'' solution (see the definition in section 5) $u(x,t)$ of \eqref{eqn: CLMV} with initial data $u_0$ which stays star-shaped for all times and is H\"{o}lder continuous in the space and time variables. \item[(b)] Any energy solution of \eqref{eqn: CLMV} with initial data $u_0$ converges uniformly modulo translation to the unique equilibrium solution $v$ of \eqref{eqn: EQ} with volume $V$. Moreover the set $\{u(\cdot,t)>0\}$ converges, modulo translation, to a ball in the Hausdorff topology. \end{itemize} \end{thm} \medskip \begin{rem}\label{boost} It should be pointed out that the result is not a perturbative one. In fact, for any initial base $\{u_0>0\}$ of the drop which is star-shaped with respect to some ball inside it (e.g. a star or a triangle), our theorem applies provided that the drop has sufficiently large volume. Indeed the proof is based on geometric, moving-plane-type arguments (see section 3.6). As a consequence we obtain explicit estimates on the size of the parameter $\rho$.
\end{rem} \begin{rem} It may be possible to obtain the rate of convergence by further investigation of the formal energy dissipation inequality carried out in the beginning of section 4. Unfortunately, in our setting $u$ is not regular enough for the calculation to go through. \end{rem} \begin{rem} Corresponding results were proved for convex solutions of the volume-preserving, anisotropic mean curvature flow by Bellettini et al. \cite{BCCN09}, but their approach strongly depends on the level set formulation of the problem as well as the convexity preserving and regularizing feature of the mean curvature flow. \end{rem} \medskip It is an open question whether there exist other geometric properties besides $\rho$-reflection that are preserved throughout the evolution \eqref{eqn: CLMV}. In fact, finding such a geometric property is one of the main novel features in our result. Let us point out that, in particular, it is unknown whether the convexity of the drop is preserved in the system \eqref{eqn: CLMV}. \medskip \subsection{The Model}\label{sec: The Model} The energy of a static droplet which occupies a subset of $\mathbb{R}^{N+1}$ resting on the plane $\{x_{N+1}=0\}$ is given by \begin{equation}\label{eq: full energy} J(E) = P(E\cap \{x_{N+1}>0\})+\int_{E\cap \{x_{N+1}=0\}} \sigma d\mathcal{H}^{N}+ \int_{E} \rho g x_{N+1}dx. \end{equation} Here $P(F)$ is the perimeter of a subset $F$ of $\mathbb{R}^{N+1}$, defined at least for bounded domains with Lipschitz boundaries. Stationary droplets correspond to minimizers of the energy $J$ when the volume $|E| = V$ is fixed. The first term is the energy due to surface tension of the free surface of the droplet. The second term is the energy due to the adhesion between the droplet and the surface it rests on; $\sigma$ is called the relative adhesion coefficient and in general it can be spatially dependent. In this paper, however, we will assume $\sigma$ is constant.
The third term is the gravitational potential energy; $\rho$ is the mass per unit volume of the liquid and $g$ is the acceleration due to gravity. We will neglect the effects of gravity and assume that $$ g = 0;$$ one expects this to be a reasonable approximation when the total volume is small. We will make the further assumption that our droplets are the region under the graph of a function $u:\mathbb{R}^N \to [0,+\infty)$, $$ E = \{ (x',x_{N+1}) \in \mathbb{R}^{N}\times [0,+\infty): x_{N+1}<u(x')\}.$$ Now, thinking of the droplet in terms of the function $u$, its height profile, the energy simplifies to become \begin{equation}\label{eq: function energy} J(u) = \int _{u>0} \sqrt{1+|Du|^2} dx+\sigma |\{u>0\}|. \end{equation} The Euler-Lagrange equation corresponding to fixed volume energy minimizers involves the mean curvature of the free surface. Finally, to simplify our analysis, we linearize \eqref{eq: function energy}; taking $\sigma = -1/2$ for concreteness, we get \begin{equation} \mathcal{J}(u) = \int _{\mathbb{R}^N} |Du|^2 dx+|\{u>0\}|. \end{equation} Formally, our problem \eqref{eqn: CLMV} can be written as the gradient flow associated with the energy $\mathcal{J}(u)$ (see the heuristic argument in section 4). Based on this structure, a minimizing movement scheme in the spirit of \cite{Mielke} was carried out for the general energy $J(E)$ in \cite{AD} to derive a weak solution in the continuum limit, where the solution is given as an evolution of the entire drop (sets of finite perimeter in ${\mathbb R}^{N+1}$). In this paper we show that, in the graph setting with the energy $\mathcal{J}(u)$, for a large class of initial data (see Remark~\ref{boost}) we are able to give a pointwise description of the movement of the contact line $\partial\{u>0\}$ using the notion of viscosity solutions.
\begin{comment} We expect that the analysis we carry out for the simplified model \eqref{eq: energy} will in general carry over to similar results for \eqref{eq: function energy} with the added difficulties being mostly technical. \end{comment} \begin{comment} The interesting range of values for this coefficient is $$ -1 < \sigma < 1.$$ Physically smaller values of $\sigma$ correspond to more hydrophilic surfaces, larger to more hydrophobic. In the case $\sigma \geq 1$ the fixed volume energy minimizers are balls of volume $V$ in $\mathbb{R}^{N+1}_+$ which do not touch the surface $\{ x_{N+1}=0\}$. This is an easy consequence of the isoperimetric inequality. In the case $\sigma \leq -1$ the infimum of the energy $J$ is $\leq 0$ and is not achieved. This can be seen by explicit computation for cylinders $B_R\times [0,t]$ sending the height $t \to 0$. \end{comment} \subsection{ Main challenges and strategy} \medskip The major difficulty of this problem is the lack of strong compactness (either in the associated energy given above or in the problem itself), which hinders most approximation arguments without geometric restrictions. In particular the standard comparison principle fails due to $\lambda(t)$, or, more fundamentally, due to the volume-preserving nature of the problem. We emphasize that one cannot simplify the problem by replacing $\lambda(t)$ with a fixed constant, especially when one is interested in the long-time behavior of the solutions: in fact, one can show that if we fix $\lambda$, then most drops either vanish in finite time or grow to infinity. In comparison to other volume-preserving problems such as the volume-preserving mean curvature flow or the Hele-Shaw flow with surface tension, the evolution of our problem \eqref{eqn: CLMV} is driven by the contact angle, not by higher-order regularizing terms such as the mean curvature. This aspect of the problem challenges, for example, the diffusive-interface approach which is present in many other problems.
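The claim that most drops vanish or grow when $\lambda$ is fixed can be illustrated by a one-dimensional caricature (our own, not from the text): for a profile $u=\frac{\lambda}{2}(r^2-x^2)$ over $(-r,r)$ with a fixed multiplier $\lambda$, the contact angle is $|u'|(r)=\lambda r$, so with the velocity law $F(s)=s^2-1$ the radius solves $\dot r=(\lambda r)^2-1$. The stationary radius $r_c=1/\lambda$ is unstable, since $\frac{d}{dr}\big[(\lambda r)^2-1\big]=2\lambda^2 r>0$: slightly smaller drops shrink to extinction while slightly larger ones grow without bound.

```python
def radius_fixed_lambda(r0, lam, dt=1e-4, T=1.2):
    # forward-Euler for  dr/dt = (lam * r)^2 - 1 : the radius ODE for the
    # 1-D profile with a *fixed* multiplier lam and velocity law F(s) = s^2 - 1;
    # dt and T are arbitrary numerical choices
    r = r0
    for _ in range(int(T / dt)):
        r += dt * ((lam * r) ** 2 - 1.0)
    return r

r_c = 1.0   # for lam = 1 the only stationary radius, and it is unstable
```

Starting just below `r_c` the radius collapses, and starting just above it the radius grows rapidly (the exact solution even blows up in finite time), while `r_c` itself is stationary.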
\medskip To get around the aforementioned difficulties and establish the existence of solutions for \eqref{eqn: CLMV}, we utilize the energy structure of the problem. Formally, solutions of the problem \eqref{eqn: CLMV} are gradient flows of an energy associated to the capillary drop problem. This formulation of the problem is best understood from the perspective of the wetted set. The energy of a capillary drop of volume $V$ resting above a domain $\Omega \subset \mathbb{R}^N$ is taken to be \begin{equation}\label{eq: energy} \mathcal{J}(\Omega) := \int_\Omega |Du|^2 +|\Omega| \end{equation} where, for now, $u$ is the height profile above the wetted set $\Omega$ and is defined by \begin{equation} u := \argmin \left\{ \int_\Omega |Dv|^2 \, dx : v \in H^1_0(\Omega),\ \int_{\mathbb{R}^N} v = V \right\}. \end{equation} Then one models the motion of the capillary droplet as a gradient flow of the functional $\mathcal{J}$ in an appropriate space of subsets of $\mathbb{R}^N$. \medskip A rigorous justification of this approach is carried out using a regularized discrete gradient flow scheme in the space of Caccioppoli sets (sets of finite perimeter) in \cite{GrunewaldKim11} (also see \cite{AD} for a more general approach). Since the analysis has been performed in the general setting which allows pinching and merging of droplet components, the resulting continuum solution is rather weak and is unstable under the variation of initial data. In this paper we show that much stronger results can be obtained when the initial data satisfies the $\rho$-reflection property (see Theorem~\ref{main}). Our analysis will be based on the aforementioned energy structure of the problem combined with geometric arguments (the reflection maximum principle), as well as a modified viscosity solution theory where one views $\lambda(t)$ as a prescribed parameter. \medskip Let us explain the geometric argument in more detail.
For {\it a priori} given positive and continuous $\lambda(t)$, let us consider the problem \begin{equation}\label{eqn: CLML}\tag{P-$\lambda$} \left\{ \begin{array}{lll} -\Delta u(x,t) = \lambda(t) &\text{ in }& \{u>0\}, \\ \\ u_t = F(|Du|)|Du| & \text{ on } & \partial\{u>0\}. \end{array}\right. \end{equation} For a given function $\lambda(t)$ as above, under suitable assumptions on the initial data $\Omega_0$ and boundedness and ellipticity assumptions on $F$, a comparison principle holds for \eqref{eqn: CLML} and, by a standard application of Perron's method, there exist global-in-time viscosity solutions. \medskip \begin{comment} In the context of the energy structure mentioned above, it is worthwhile to notice that one can formulate the problem \eqref{eqn: CLML} entirely as an evolution $\Phi$ on domains in $\mathbb{R}^N$ $$ \Phi(t)[\Omega_0] := \Omega_t.$$ Moreover as long as $\Omega_t$ is (for example) a Lipschitz domain we can reduce things further and think of \eqref{eqn: CLML} as an evolution $\Phi$ again by abuse of notation on Lipschitz manifolds $$ \Phi(t)[\Gamma_0] = \Gamma_t. $$ We will switch between these three perspectives frequently. \end{comment} \medskip We show, under an additional assumption on the star-shapedness of the initial data $\Omega_0$ ($\rho$-reflection), the existence of such a function $\lambda(t)$ defined for all times such that a viscosity solution $u$ of \eqref{eqn: CLML} satisfies $$ \int u(x,t) \ dx = \int u(x,0) \ dx =: V$$ for all $t>0$. The proof is carried out by showing that, when $\Omega_0$ is strongly star-shaped, the ``energy solution'', constructed with the discrete time scheme associated with the gradient flow structure mentioned above, stays star-shaped and coincides with the viscosity solution of \eqref{eqn: CLML} with the corresponding $\lambda$.
We point out that it is not a priori clear whether the discrete-time energy solutions preserve the star-shapedness condition, so we incorporate the restriction directly into the approximate scheme (see section 4) to obtain the continuum limit. The introduction of a geometric restriction into the gradient flow scheme seems to be new and of independent interest. \medskip \subsection{Outline of the paper} In section 2 we recall known results about the solutions of $(EQ)$. It turns out that viscosity solutions of $(EQ)$ with connected and Lipschitz positive phase are radial and minimize the associated energy. \medskip In section 3 we investigate the geometric properties of the viscosity solutions of \eqref{eqn: CLML}. In particular we show that the $\rho$-reflection property is preserved in time (see Corollary~\ref{reflectiontime}), based on the reflection maximum principle. We point out that the reflection maximum principle holds only when the solution of \eqref{eqn: CLML} is stable under perturbations, which is the case when the solutions are star-shaped (see section 3.5 as well as the Appendix). \medskip In section 4 we discuss the energy structure of our original problem \eqref{eqn: CLMV}. Motivated by the formal gradient flow structure, we construct a solution of \eqref{eqn: CLMV} as a continuum limit of a discrete-time ``minimizing movement'' scheme following the approach of \cite{Chambolle04}, \cite{Mielke} and \cite{AD}. By imposing the $\rho$-reflection property as a constraint for the minimizing movements, we obtain uniform convergence of solutions in the continuum limit. An important result proved here is Proposition 4.7, which states that the limiting ``energy'' solution is indeed a viscosity solution of \eqref{eqn: CLML} with the volume-preserving property.
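A caricature of such a minimizing movement scheme can be run in one space dimension, where the drop is an interval $(-r,r)$ and each step reduces to a scalar minimization in the contact-line radius (this reduction is our own illustration; the scheme of section 4 acts on sets and assumes no symmetry). With $J(r)=\frac{3V^2}{2r^3}+2r$ (Dirichlet energy plus wetted length for volume $V$) and the dissipation of the two contact points as the metric term, each step minimizes $J(r)+(r-r_k)^2/\tau$; the optimality condition reads $(r_{k+1}-r_k)/\tau=-\tfrac12 J'(r_{k+1})=|u'|(r_{k+1})^2-1$, which is exactly the velocity law $F(s)=s^2-1$.

```python
import math

def mm_step(rk, V, tau):
    # one minimizing-movement step: minimize J(r) + (r - rk)^2 / tau over r,
    # with J(r) = 3 V^2 / (2 r^3) + 2 r, by ternary search
    # (the objective is strictly convex on (0, infinity))
    g = lambda r: 3.0 * V ** 2 / (2.0 * r ** 3) + 2.0 * r + (r - rk) ** 2 / tau
    lo, hi = 1e-2, 10.0            # bracket chosen large enough for this toy run
    for _ in range(80):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def run_scheme(r0, V, tau, steps):
    # iterate the discrete scheme r_{k+1} = argmin J(r) + (r - r_k)^2 / tau
    r = r0
    for _ in range(steps):
        r = mm_step(r, V, tau)
    return r

V = 2.0
r_star = math.sqrt(3.0 * V / 2.0)   # equilibrium radius: contact angle equals 1
```

As $\tau\to 0$ the iterates track the ODE $\dot r = |u'|(r)^2-1$ and relax to the equilibrium radius $r_*=\sqrt{3V/2}$; in particular, for small $\tau$ a single step from $r_0$ moves with speed close to $F(|u'|(r_0))$.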
\medskip In section 5 we make use of the energy structure of the problem to show that any energy solution obtained in section 4 converges uniformly to the radial solution as $t\to\infty$, modulo translation (Theorem~\ref{thm: convergence thm}). We point out that our result does not imply that our solutions converge to a unique radial solution centered at a given point, since the drops may slowly move around a range of round profiles and may not converge to a single one. The physical uniqueness of the limiting profile and the characterization of its center remain open at the moment (though see Proposition~\ref{conditional} for a discussion of uniqueness). \section{The Equilibrium Problem}\label{sec: The Equilibrium Problem} Now consider the volume-constrained minimization problem \begin{equation}\label{equilibrium problem min form} \inf \{J(u): u \in H^1(\mathbb{R}^N) \text{ and } \int_{\mathbb{R}^N} u = V\}. \end{equation} One can show immediately using symmetric decreasing rearrangements that any infimizer of $$ \inf \{J(u): u \in H^1(\mathbb{R}^N) \text{ and } \int_{\mathbb{R}^N} u = V \text{ and } u \text{ radial }\} $$ is also an infimizer of \eqref{equilibrium problem min form}. That the only minimizers are radial is more delicate. One can show this using a theorem from \cite{BrothersZiemer88}. We will not discuss the specifics here since this fact will not be needed for our arguments.
\medskip To find the unique (up to translation) minimizers of \eqref{equilibrium problem min form} among radial functions, first fix a particular radius $r$ for the radially symmetric support set of the droplet $$\{ u>0\}=B_r(0).$$ Now the minimization becomes $$ \inf \{ J(u): u \in H^1_0(B_r(0)) \text{ and } \int_{\mathbb{R}^N} u = V \text{ and } u \text{ radial }\}. $$ The unique infimizer is just the solution of the Dirichlet problem for the Laplace operator with Lagrange multiplier $\lambda=\lambda(r) \in (0,+\infty)$ chosen so that the height profile $$ u_r(x) = \max\{\frac{\lambda}{2N}(r^2 -|x|^2),0\}$$ has the correct volume. Then explicitly calculating $\lambda(r)=N(N+2)V/(\omega_N r^{N+2})$, where $\omega_N$ denotes the volume of the unit ball in $\mathbb{R}^N$, and minimizing $J$ over $r>0$, one can easily check that for $$r_* = \left(\frac{(N+2)V}{\omega_N}\right)^{1/(N+1)} $$ $u_{r_*}$ is the strict minimizer of $J$ among radial $H^1$ functions and therefore -- by the rearrangement argument that was mentioned above -- is also a minimizer among all $H^1$ functions. Alternatively, one can consider the Euler-Lagrange equation for \eqref{equilibrium problem min form}, which is given by \begin{equation}\label{eqn: equilibrium problem euler lagrange}\tag{$EQ$} \left\{ \begin{array}{lll} -\Delta u = \lambda & \text{ in } & \{u>0\}, \\ \\ \lambda>0 & \text{ s.t.} & \int u = V, \\ \\ |Du| = 1 & \text{ on } & \partial\{u>0\}. \end{array}\right. \end{equation} Then there is a classical theorem of Serrin \cite{Serrin71} regarding the uniqueness of solutions of $(EQ)$. \begin{thm}\label{thm: serrin}\textup{(Serrin)} Let $u: \mathbb{R}^N \to [0,+\infty)$ be compactly supported such that $\partial \{u >0\}$ is a $C^2$ hypersurface in $\mathbb{R}^N$ and $u$ is a classical solution of (EQ). Then $\{u>0\} = B_{r^*}(x_0)$ for some $x_0 \in \mathbb{R}^N$ and \begin{equation}\label{eqn: equilibrium radius} r^* = \left(\frac{(N+2)V}{\omega_N}\right)^{1/(N+1)}.
\end{equation} \end{thm} Serrin's proof of this theorem used the method of moving planes and a variant of the Hopf Lemma for domains with corners. The best possible spatial regularity we are able to show for the evolving contact line in \eqref{eqn: CLML} is Lipschitz. In order to apply Serrin's result to show the convergence to equilibrium, we need the same result to hold for viscosity solutions of $(EQ)$ whose positivity sets are Lipschitz domains. It is not clear whether it is possible to use a variant of the moving planes method to show this. Instead we use some regularity results for free boundary problems. First, a theorem of De Silva from \cite{DeSilva09} shows that viscosity solutions of $(EQ)$ with Lipschitz free boundaries are classical solutions. \begin{thm}\label{thm: de silva}\textup{(De Silva) (Lipschitz implies $C^{1,\alpha}$)} Let $\Omega \subseteq \mathbb{R}^N$ be a domain and $u:\mathbb{R}^N\to [0,+\infty)$ be a viscosity solution of the free boundary problem in $\Omega$, \begin{equation}\label{problem} \left\{\begin{array}{lll} -\sum_{i,j=1}^N a_{ij} \partial_{ij}u = f & \text{ in } & \{x \in \Omega : u(x)>0\} \\ \\ |Du| = g & \text{ on } & \Gamma(u) := \partial\{u>0\}\cap\Omega \end{array}\right. \end{equation} where $a_{ij}$, $g \in C^{0,\beta}(\Omega)$ for some $0 < \beta \leq 1$, $f \in C(\Omega) \cap L^{\infty}(\Omega)$ and $g \geq 0$. Moreover, we assume the ellipticity condition: there exist $0<\lambda<\Lambda$ such that for all $\xi \in \partial B_1(0)\subset\mathbb{R}^N$ and all $x \in \Omega$, $$ \lambda \leq \sum a_{ij}(x)\xi_i\xi_j \leq \Lambda. $$ If $x_0 \in \Gamma(u)$, $g(x_0)>0$ and $\Gamma(u)$ is a Lipschitz graph in a neighborhood of $x_0$, then $\Gamma(u)$ is $C^{1,\alpha}$ in a smaller neighborhood of $x_0$, for some $\alpha>0$ depending only on $N$, $\beta$, and the ellipticity constants $\lambda$ and $\Lambda$.
\end{thm} Using the above result, higher regularity of $u$ can be derived from the hodograph method \cite{KN} when the coefficients are smooth and $\Gamma(u)$ is locally Lipschitz. See Appendix C for more details. \begin{cor}\label{smoothness} Let $u$ solve \eqref{problem} with $(a_{ij})$ the identity matrix and $f=g=1$. In addition suppose that $\Gamma(u)$ is locally Lipschitz and bounded. Then $u$ and $\Gamma(u)$ are $C^{\infty}$. \end{cor} \section{Viscosity Solutions}\label{sec: Viscosity Solutions} \subsection{Basic Definitions and Assumptions}\label{sec: Basic Definitions and Assumptions} We recall the basic viscosity solution theory for \eqref{eqn: CLML}. A more detailed exposition for free boundary problems of a similar form can be found in \cite{Kim03}. First we restate the problem under consideration: \begin{equation}\tag{\ref{eqn: CLML}} \left\{ \begin{array}{lll} -\Delta u(x,t) = \lambda(t) & \text{ in } & \Omega_t, \\ \\ u_t = F(|Du|)|Du| & \text{ on } & \Gamma_t. \end{array}\right. \end{equation} Here $\lambda: [0,+\infty) \to [0,+\infty)$ is bounded and continuous. We will work in a space-time parabolic domain $Q = U\times(a,b]$, where $0\leq a <b\leq +\infty$ and $U$ is a domain (possibly unbounded) in $\mathbb{R}^N$, with parabolic boundary $$\partial_p Q := \overline{U}\times \{t=a\} \cup \partial U \times [a,b].$$ The positive phase of a height profile $u: Q \to [0,+\infty)$ is defined as \begin{equation} \Omega(u) := \{u>0\} \quad \text{ and } \quad \Gamma(u) := \partial \Omega(u), \end{equation} and at particular times, $$ \Omega_t(u) = \Omega(u) \cap \{ (y,s) \in Q: s=t\} \quad \text{ and } \quad \Gamma_t(u) = \partial\Omega_t(u). $$ The dependence on $u$ will be omitted when it is unambiguous which droplet profile we are referring to.
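On a ball $\Omega_t = B_r(0)$ the elliptic part of \eqref{eqn: CLML} is solved by the parabolic-cap profile $u=\frac{\lambda}{2N}(r^2-|x|^2)_+$, which appears repeatedly below, and its boundary gradient is $|Du|=\lambda r/N$. A minimal finite-difference sanity check of these two facts in $N=2$ (a sketch; the values $\lambda=1$, $r=1$ and the sample points are ad hoc choices, not from the paper):

```python
# Check that u(x) = lam/(2N) * (r^2 - |x|^2) satisfies -Laplacian(u) = lam in B_r
# and |Du| = lam*r/N on the boundary, for N = 2 (lam, r are demo values).
# The positive part is dropped so we can differentiate across the boundary.
import math

N, lam, r = 2, 1.0, 1.0
u = lambda x, y: lam / (2 * N) * (r**2 - x**2 - y**2)

h = 1e-4
x0, y0 = 0.3, 0.2  # an interior point of B_r(0)

# five-point stencil for the Laplacian (exact for quadratics up to rounding)
lap = (u(x0+h, y0) + u(x0-h, y0) + u(x0, y0+h) + u(x0, y0-h) - 4*u(x0, y0)) / h**2
assert abs(-lap - lam) < 1e-5, "-Laplacian(u) should equal lam in {u>0}"

# gradient magnitude at the boundary point (r, 0) by centered differences
ux = (u(r + h, 0.0) - u(r - h, 0.0)) / (2 * h)
uy = (u(r, h) - u(r, -h)) / (2 * h)
grad = math.hypot(ux, uy)
assert abs(grad - lam * r / N) < 1e-5, "|Du| should equal lam*r/N on {|x| = r}"
print(-lap, grad)
```

The boundary gradient value $\lambda r/N$ is exactly the quantity fed into $F$ in the radial examples below.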
The free boundary velocity $F$ will satisfy the following monotonicity and perturbation assumptions. \begin{hyp}\label{hyp: F incr cond} $F:[0,+\infty)\to{\mathbb R}$ is strictly monotone increasing and continuous, and $F(1)=0$. \end{hyp} \begin{hyp}\label{hyp: F cond} There exists $c>0$ such that for $\epsilon$ sufficiently small, \begin{equation}\label{eqn: F condition} (1+\epsilon)F((1+\epsilon)^{-1}s)+c\epsilon \geq F(s). \end{equation} \end{hyp} Notice that if Assumption \ref{hyp: F cond} holds then necessarily $F$ has at most linear growth at $\infty$: for any $s>0$, using \eqref{eqn: F condition} $\floor{s/\epsilon}$ times and letting $\epsilon\to 0$ yields $$F(s) \leq cs+\max\{(1+s)F(0),0\}.$$ The monotonicity assumption on $F$ implies, at least formally, that the problem \eqref{eqn: CLML} has a comparison principle. This is the underlying reason why viscosity solutions are the natural notion of weak solution for this PDE. The second assumption is important for proving the strong comparison results and thereby the regularity of the viscosity solutions of \eqref{eqn: CLML}. It is not clear to the authors whether this assumption is only technical. Some examples which satisfy Assumptions \ref{hyp: F incr cond} and \ref{hyp: F cond} are the following. \begin{ex} If $F(s) := s^p-1$ for $p\leq 1$ then \eqref{eqn: F condition} is satisfied for all $\epsilon>0$ with $c=1$: since $(1+\epsilon)^{1-p}\geq 1$, $$(1+\epsilon)F((1+\epsilon)^{-1}s)+\epsilon = (1+\epsilon)^{1-p}s^p-(1+\epsilon)+\epsilon \geq s^p-1 = F(s).$$ \end{ex} The following example, a truncation of the case $p=2$, will be important to us later. \begin{ex} Let $M>0$ and $p>1$, and suppose $F(s) := \min\{s^p-1,M\}$. Then \eqref{eqn: F condition} is satisfied for $\epsilon \leq \min\{1,\tfrac{1}{2}(p-1)^{-1}\}$ and $c \geq 2(p-1)(M+2)+1$. The calculation is given below, using $(1+\epsilon)^{1-p}\geq 1-(p-1)\epsilon$, \begin{align*} \min\{(1+\epsilon)[(1+\epsilon)^{-p}s^p-1],(1+\epsilon)M\}+c\epsilon &\geq \min\{s^p-1-(p-1)\epsilon s^p-\epsilon,M\} +c\epsilon \\ & \geq \min\left\{s^p-1-c\epsilon,M\right\} +c\epsilon \\ &\geq \min\{s^p-1,M\}.
\end{align*} Above, in the second line, we have used that if $$ s^p-1-(p-1)\epsilon s^p-\epsilon \leq M$$ then $$ s^p \leq 2(M+2). $$ \end{ex} Now we turn to defining a notion of solution for \eqref{eqn: CLML}. For $E \subset {\mathbb R}^N\times[0,+\infty)$ we use the notation $C^{2,1}(E)$ for functions $f$ with two continuous derivatives in the spatial variables and one continuous derivative in time. First we define a classical solution of the free boundary problem. \begin{DEF} (Classical Solutions) A profile $u: Q \to [0,+\infty)$ is a classical subsolution (supersolution) of the free boundary problem \eqref{eqn: CLML} on $Q$ if $ u \in C^{2,1}(\overline{\Omega(u)}\cap Q)$ and \eqref{eqn: CLML} is satisfied in the pointwise classical sense with the equality signs replaced by $\leq$ (respectively $\geq$ in the supersolution case). Classical solutions are both super- and subsolutions. \end{DEF} For general initial data the contact line motion problem will not have a classical solution. To obtain existence of a weak solution we define the notion of viscosity solution of \eqref{eqn: CLML}. First, though, we define the following standard notion. \begin{DEF}\label{strictlyseparated} (Strictly separated) Let $v$, $w$ be defined on a set $D\subseteq\mathbb{R}^N$. We say that $v$ and $w$ are strictly separated on $D$, and write $ v \prec w$, if $\overline{\Omega(v)}\cap\overline{D}$ is compact and $v<w$ on $\overline{\Omega(v)}\cap D$. \end{DEF} Next we define subsolutions and supersolutions; a viscosity solution is then defined to satisfy both the sub- and supersolution properties. Informally, $u$ is a subsolution of \eqref{eqn: CLML} if $u$ cannot be crossed from above by any strict classical supersolution.
More precisely: \begin{DEF}\label{def: subsolution} (Subsolution) A non-negative upper semi-continuous function $u:Q \to \mathbb{R}_+$ is a subsolution of \eqref{eqn: CLML} if, for any parabolic neighborhood $Q' \subseteq Q$ and any strict classical supersolution $\phi$ with $u \prec \phi$ on $\partial _p Q'$, we have $u \leq (\phi)_+$ in $Q'$. \end{DEF} \begin{DEF}\label{def:supersolution} (Supersolution) A non-negative lower semi-continuous function $u:Q \to \mathbb{R}_+$ is a supersolution of \eqref{eqn: CLML} if, for any parabolic neighborhood $Q' \subseteq Q$ and any strict classical subsolution $\phi$ with $\phi \prec u$ on $\partial _p Q'$, we have $ \phi \leq u$ in $Q'$. \end{DEF} \begin{DEF}\label{viscositysolution} (Solution) A viscosity solution of \eqref{eqn: CLML} is a non-negative continuous function $u$ on $Q$ which is both a supersolution and a subsolution. \end{DEF} Naturally, one can assign boundary data on the parabolic boundary. We will usually have $Q = \mathbb{R}^N \times (0,T]$, in which case this reduces to assigning initial data. \begin{DEF} \label{barrier}(Subsolution with boundary data $g$) Let $g:\partial_p Q \to [0,+\infty)$ be bounded. Then $u$ is a subsolution of \eqref{eqn: CLML} on $Q$ with boundary data $g$ if $u$ is a subsolution of \eqref{eqn: CLML} on $Q$ as defined above and $$ \limsup_{(y,s)\to (x,t)\in\partial_p Q} u(y,s) \leq g(x,t).$$ \end{DEF} Supersolutions and solutions with boundary data are then defined analogously. \medskip For the rest of the paper, in the case $Q={\mathbb R}^N\times (0,T]$, when we assign initial data we will specify the positivity set as an open domain $\Omega_0 \subset {\mathbb R}^N$, and the initial data $u_0$ will then be taken to solve $$ -\Delta u_0 (x) = \lambda(0) \ \text{ in } \Omega_0 \quad \text{ and} \quad u_0(x)=0 \text{ for } x\in\Gamma_0 = \partial\Omega_0. $$ As mentioned in the introduction, it is often more natural to think of the problem as an evolution on the positivity sets.
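As an illustration of this choice of initial data, the following is a minimal finite-difference sketch (not from the paper; the grid size, iteration count, $\lambda(0)=1$, and $\Omega_0=B_1(0)\subset\mathbb{R}^2$ are ad hoc demo assumptions) that solves $-\Delta u_0=\lambda(0)$ in $\Omega_0$ with zero boundary data and compares the result with the exact radial profile $u_0(x)=\frac{\lambda(0)}{2N}(1-|x|^2)$, whose center value is $\lambda(0)/4$ in $N=2$:

```python
# Solve -Laplacian(u0) = lam on the unit disk (zero boundary data) with
# Gauss-Seidel sweeps on a uniform grid; compare the center value with the
# exact radial profile u0(x) = lam/(2N) (1 - |x|^2), i.e. u0(0) = lam/4 here.
n = 41                      # grid points per side on [-1, 1]^2 (demo choice)
h = 2.0 / (n - 1)
lam = 1.0
xs = [-1.0 + i * h for i in range(n)]

inside = [[xs[i]**2 + xs[j]**2 < 1.0 for j in range(n)] for i in range(n)]
u = [[0.0] * n for _ in range(n)]

for _ in range(2000):       # Gauss-Seidel sweeps (enough for this grid)
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if inside[i][j]:
                u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] +
                                  u[i][j-1] + u[i][j+1] + h * h * lam)

center = u[n // 2][n // 2]
assert abs(center - lam / 4.0) < 0.05, center
print("u0(0) approx", center, " exact:", lam / 4.0)
```

The staircase approximation of the circular boundary limits the accuracy, which is why the tolerance above is generous.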
\begin{rem}\label{rem: continuity of solutions} In general, the solution of \eqref{eqn: CLML} is not expected to be continuous. First of all, solutions can vanish or blow up in finite time. The following example in the case $N=1$ demonstrates both behaviors. \begin{ex} Consider in $N=1$ the problem \eqref{eqn: CLML} with $ \lambda(t) = \lambda$ for some $\lambda>0$ and initial data \begin{equation*} \begin{array}{ll} u_0(x) = \tfrac{\lambda}{2N}(1-x^2)_+, & \Omega_0 = [-1,1]. \end{array} \end{equation*} The solution of \eqref{eqn: CLML} with the above initial data and inhomogeneity will take the form $$ u(x,t) = \tfrac{\lambda}{2N}(r(t)^2-x^2)_+ $$ where $r(t)$ solves the ordinary differential equation $$ \dot{r}(t) = F\left(\tfrac{\lambda}{N}r(t)\right) $$ with initial data $r(0) = 1$. From basic ODE theory there are choices of $F$ for which some solutions of the above equation blow up in finite time. For example, if \begin{equation*} \begin{array}{lll} F(s) = s^2-1 & \text{ and } & \lambda \geq 2N, \end{array} \end{equation*} then first of all $\dot{r}(t)>0$ as long as the solution exists, and moreover $$\dot{r}(t) \geq 4r(t)^2-1 \geq 3r(t)^2, $$ an equation for which finite time blow up is well known. (This behavior is ruled out by Assumption \ref{hyp: F cond} on $F$, which implies at most linear growth at $\infty$.) Alternatively, even when $F$ does not have superlinear growth, solutions can disappear in finite time. Consider now any $F$ such that $$F(0)<0,$$ let $0<\delta<-F(0)$, and choose $\lambda$ sufficiently small that $$F\left(\tfrac{\lambda}{N}\right)<-\delta<0.$$ Then we have the following differential inequality for $r(t)$, $$\dot{r}(t) \leq -\delta,$$ so the solution $u(x,t)$ must become extinct before time $1/\delta$. \end{ex} Even without blow up the solution may be discontinuous for certain initial data: one can construct an example in one dimension where two adjacent drops merge after a short time, causing a discontinuity.
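The two regimes in the example above can be observed numerically (a sketch, not from the paper; the Euler step size, thresholds, and the particular values of $\lambda$ are ad hoc choices): integrating $\dot r = F(\lambda r/N)$ in $N=1$ with $F(s)=s^2-1$ shows blow-up for $\lambda = 2$ and extinction for small $\lambda$.

```python
# Explicit Euler for the radius ODE  r' = F(lam * r / N),  N = 1, r(0) = 1,
# with F(s) = s^2 - 1.  Step size and thresholds are ad hoc demo choices.
def integrate(lam, dt=1e-4, t_max=5.0):
    r, t = 1.0, 0.0
    while t < t_max:
        r += dt * ((lam * r) ** 2 - 1.0)
        t += dt
        if r > 1e3:   # numerical blow-up
            return "blowup", t
        if r <= 0.0:  # extinction of the droplet
            return "extinct", t
    return "alive", t

fate_big, t_big = integrate(lam=2.0)      # lam >= 2N gives r' >= 4r^2 - 1
fate_small, t_small = integrate(lam=0.5)  # F(lam/N) = -0.75 < 0 gives r' <= -0.75

assert fate_big == "blowup" and t_big < 0.5
assert fate_small == "extinct" and t_small < 1.5
print(fate_big, round(t_big, 3), fate_small, round(t_small, 3))
```

For $\lambda=2$ the exact blow-up time is $\tfrac14\ln 3\approx 0.27$, consistent with the numerics; for $\lambda=0.5$ the radius reaches zero before time $1/\delta$ with $\delta=0.75$.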
To address the long time behavior of our solutions without the complexity of multiple components, we will make restrictions on the initial data which ensure that our solutions actually are continuous. \end{rem} The following lemma, clarifying the connection between classical and viscosity solutions, is standard in the viscosity solutions theory. For example see \cite{BrandleVazquez05, Kim03}. \begin{lem} Suppose that $u:Q \to [0,+\infty)$ is a classical subsolution (supersolution) of \eqref{eqn: CLML} in $Q$; then it is also a viscosity subsolution (supersolution) in $Q$. Conversely, suppose that $u$ is a viscosity subsolution (supersolution) of \eqref{eqn: CLML} in $Q$ and moreover $u$ is sufficiently regular, in particular $$u \in C^{2,1}(\overline{\Omega(u)}\cap Q);$$ then $u$ is a classical subsolution (supersolution) in $Q$. \end{lem} An important basic property of the viscosity solutions theory is the stability of viscosity solutions under uniform convergence. We state this in the following lemma. For example see \cite{CrandallIshiiLions92}. \begin{lem}\label{lem: uniform stability} Let $u_n:{\mathbb R}^N\times[0,T]\to[0,+\infty)$ be a sequence of viscosity solutions of the problems \begin{equation*} \left\{ \begin{array}{lll} -\Delta u_n(x,t) = \lambda_n(t) & \text{ in } & \Omega_t(u_n), \\ \\ \partial_tu_n = F_n(|Du_n|)|Du_n| & \text{ on } & \Gamma_t(u_n), \end{array}\right. \end{equation*} where the $F_n$ are all monotone increasing and continuous. Suppose that $u_n \to u$, $F_n \to F$ and $\lambda_n \to \lambda$ uniformly on compact sets. Then $u$ is a viscosity solution of \begin{equation*} \left\{ \begin{array}{lll} -\Delta u(x,t) = \lambda(t) & \text{ in } & \Omega_t(u), \\ \\ \partial_tu = F(|Du|)|Du| & \text{ on } & \Gamma_t(u). \end{array}\right. \end{equation*} \end{lem} As mentioned in the introduction, it is sometimes useful to formulate the problem \eqref{eqn: CLML} as an evolution on domains in ${\mathbb R}^N$.
We quickly define what it means for an evolution $$\Omega_t : [0,+\infty) \to \{ \text{ bounded subsets of } {\mathbb R}^N\} $$ to solve \eqref{eqn: CLML} in the viscosity sense. \begin{DEF} The evolution $\Omega_t$ is a supersolution (subsolution) of \eqref{eqn: CLML} if and only if \begin{equation}\label{eqn: drp above} u[\Omega_t] := \left \{ \begin{array}{lll} -\Delta u(x,t) = \lambda(t) & \text{ in }& \Omega_t,\\ \\ u=0 & \text{ on } & \Gamma_t, \end{array}\right. \end{equation} is a supersolution (subsolution) in the sense of Definition \ref{def:supersolution} (Definition \ref{def: subsolution}). \end{DEF} \subsection{Sup and Inf Convolutions} An important property of subsolutions (supersolutions) is that they are closed under sup (inf) convolutions. These convolutions will be used to perturb the original solutions and to overcome the lack of scaling invariance in the problem \eqref{eqn: CLML}. \begin{lem}\label{lem: spaceconvs} \textup{(Convolutions in space) } \begin{enumerate}[(a)] \item Let $u$ be a subsolution of \eqref{eqn: CLML} on $\mathbb{R}^N\times[0,+\infty)$, let $r>0$, $c \geq 0$, and define the sup-convolution of $u$ \begin{equation}\label{eqn: spacesupconv} \widetilde{u}(x,t) := \sup_{y \in B_{r-ct}(x)}u(y,t). \end{equation} Then $\widetilde{u}$ is a subsolution with free boundary speed $F(|D\widetilde{u}|)-c$ as long as $r-ct>0$. \item Let $u$ be a supersolution of \eqref{eqn: CLML} on $\mathbb{R}^N\times[0,+\infty)$, let $r,c>0$, and define the inf-convolution of $u$ \begin{equation}\label{eqn: spaceinfconv} \widetilde{u}(x,t) := \inf_{y \in B_{r-ct}(x)}u(y,t).
\end{equation} Then $\widetilde{u}$ is a supersolution with free boundary speed $F(|D\widetilde{u}|)+c$ as long as $r-ct>0$. \end{enumerate} \end{lem} Before we prove this, let us note the continuity properties of the sup and inf convolutions. \begin{lem}\label{lem: sup inf continuity} \textup{(Continuity of sup and inf convolutions)} Let $f : \mathbb{R}^N\to \mathbb{R}$ and let $S : \mathbb{R}^N \to \mathcal{B}(\mathbb{R}^N)$ be such that $S(x)$ is compact for all $x \in \mathbb{R}^N$ and $S$ is continuous under the Hausdorff distance $d_H$. Define the sup and inf convolutions $$ f^{S}(x) = \sup_{h\in S(x)}f(x+h) \quad \text{ and } \quad f_{S}(x)=\inf_{h \in S(x)}f(x+h). $$ Then we have the following. \begin{enumerate}[(i)] \item If $f$ is upper semi-continuous then $f^S$ is as well. \item If $f$ is lower semi-continuous then $f_S$ is as well. \item If $f$ is continuous then both $f^S$ and $f_S$ are as well. \end{enumerate} \end{lem} \begin{proof} We will prove (i); (ii) is similar, and (iii) follows from (i) and (ii) by noting that suprema of lower semi-continuous functions are lower semi-continuous and infima of upper semi-continuous functions are upper semi-continuous. Let $\alpha \in \mathbb{R}$; we will show that the sublevel set $\{f^S < \alpha\}$ is open. Let $x_0 \in \{f^S < \alpha\}$; then $x_0+S(x_0) \subset \{f <\alpha\}$. Since $x_0+S(x_0)$ is compact, $\{f\geq\alpha\}$ is closed, and they are disjoint, they must be a positive distance apart. That is, there exists $\epsilon>0$ such that $$ \bigcup_{h \in S(x_0)} B_\epsilon(x_0+h) \subseteq \{f<\alpha\}. $$ By the continuity of $S$ there exists $\delta<\epsilon/2$ such that $|x-x_0|<\delta$ implies $d_H(S(x),S(x_0))<\epsilon/2$. Let $x \in B_{\delta}(x_0)$; since upper semi-continuous functions achieve their maximum on compact sets, there exists $h \in S(x)$ such that $f^S(x) = f(x+h)$.
Then there exists $h_0 \in S(x_0)$ such that $|h-h_0| <\epsilon/2$, so that $$ |(x+h)-(x_0+h_0)| <\epsilon\quad \text{ and hence }\quad x+h \in B_{\epsilon}(x_0+h_0) \subseteq \{f<\alpha\}. $$ So $f^S(x) <\alpha$, and therefore $ B_{\delta}(x_0) \subseteq \{f^S<\alpha\}$. \end{proof} {\it The proof of Lemma \ref{lem: spaceconvs}:} We will only prove the inf convolution case (b), since the proof of (a) is essentially the same. As in \eqref{eqn: spaceinfconv}, for $r,c>0$ define the inf convolution of a supersolution $u$, \begin{equation*} \widetilde{u}(x,t) := \inf_{y \in B_{r-ct}(x)}u(y,t). \end{equation*} Suppose that for some parabolic domain $Q \subseteq {\mathbb R}^N \times [0,r/c)$ there exists a strict classical subsolution $\phi\in C^{2,1}(Q \cap \overline{\Omega(\phi)})$ with the free boundary speed $F(|D\phi|) + c$, which satisfies $$ \phi \prec \widetilde{u} \ \hbox{ on } \ \partial_pQ \ \hbox{ and } \ \phi(x_0,t_0)>\widetilde{u}(x_0,t_0) \ \hbox{ for some } (x_0,t_0) \in Q.$$ Let $x_1 \in \overline{B_{r-ct_0}(x_0)}$ be a point where $\widetilde{u}(x_0,t_0)= u(x_1,t_0)$. Now we define the translated (and sup-convolved) test function $$ \psi (x,t) := \sup_{|x-y| \leq c(t_0-t), (y,t)\in Q} \phi (y+(x_0-x_1),t) \hbox{ for } t\leq t_0,$$ and define $\psi_2$ as the solution of the Dirichlet problem $$ -\Delta \psi_2(\cdot,t) = \lambda(t) \hbox{ in } \{\psi>0\}\cap Q, \quad \psi_2=\psi \hbox{ on the lateral boundary of }\{\psi>0\}\cap Q. $$ Note that the free boundary speed of $\psi_2$ has been decreased by $c$ compared with the free boundary speed of $\phi$ in $Q$. In other words, $\psi_2$ is a classical subsolution with free boundary speed $F(|D\psi|)$. Due to the definition of $\widetilde{u}$, we have $\psi_2 \leq u$ on the parabolic boundary of $Q +(x_1-x_0)$ and $\psi_2>u$ at $(x_1,t_0)$, which yields a contradiction.
\hfill$\Box$ \subsection{Comparison}\label{sec: Comparison} We state the general strictly separated comparison principle for viscosity solutions of \eqref{eqn: CLML}. \begin{thm}\label{thm: strictly separated comparison} \textup{(Comparison for strictly separated data)} Suppose $u$ is a supersolution and $v$ a subsolution of \eqref{eqn: CLML} on $Q$. Suppose $u$ and $v$ are strictly separated (Definition \ref{strictlyseparated}) on the parabolic boundary of $Q$, $$v \prec u \quad \text{ on } \quad \partial_p Q; $$ then $v < u$ in $\overline{\Omega(v)} \cap Q$, and in particular $\Gamma(v)$ cannot touch $\Gamma(u)$ from the interior on any compact subset of $Q$. \end{thm} \begin{proof} The proof can be found in various places in the literature \cite{Kim03,BrandleVazquez05,CaffarelliVazquez99, KimPozar12}. We will go into more detail later in Section \ref{sec: Comparison}. \end{proof} As mentioned in Remark \ref{rem: continuity of solutions}, we will need to make a geometric restriction on our initial data to expect the existence of a viscosity solution which is stable under a family of perturbations. First, we recall the definition of a set star-shaped with respect to a point: \begin{DEF} A domain $\Omega \subseteq {\mathbb R}^N$ is called star-shaped with respect to a point $x$ if for every $y \in \partial \Omega$ the line segment between $x$ and $y$ is contained in $\overline{\Omega}$. \end{DEF} The assumption on our initial data $\Omega_0$ will then be called \textit{strong star-shapedness}: \begin{DEF}\label{hyp: starshapedness} We call a domain $\Omega$ \textit{strongly star-shaped} if there exists $r>0$ such that $\Omega$ is star-shaped with respect to every point of $B_r(0)$.
For each $R>0$ we define the class of uniformly bounded strongly star-shaped sets \begin{equation} \begin{array}{ll} \mathcal{S}_{r,R}:= \{\Omega \subset B_{R}(0): \Omega \text{ star-shaped with respect to } B_r(0)\}, & \mathcal{S}_{r} := \bigcup_{R>0} \mathcal{S}_{r,R}, \end{array} \end{equation} and the class of strongly star-shaped sets \begin{equation} \mathcal{S}_0 := \bigcup_{r>0} \mathcal{S}_{r}. \end{equation} \end{DEF} That the ball $B_r(0)$ is centered at the origin is only for convenience; the problem is translation invariant. We will often refer to the strongly star-shaped property by saying that a set $\Omega_0 \in \mathcal{S}_{r,R}$, in order to clarify the role of $r$. We note a basic property of sets in $\mathcal{S}_{r,R}$ -- they are Lipschitz domains with the Lipschitz constant depending only on $r,R$. First, we fix a notation for cones with apex at the origin. For $x \in {\mathbb R}^N$ and $\theta \in (0,\pi)$ the cone in the direction $x$ with opening angle $\theta$ is denoted \begin{equation}\label{eqn: cone def} C(x,\theta):= \left\{y: \langle x , y \rangle \geq (\cos\theta)|x||y|\right\}. \end{equation} We show that sets $\Omega \in \mathcal{S}_{r}$ have interior and exterior cones at every boundary point: \begin{lem}\label{lem: sstolip} The following are equivalent for a bounded domain $\Omega \supset B_r(0)$: \begin{enumerate}[(i)] \item The domain $\Omega \in \mathcal{S}_{r}$. \item There is an $\epsilon_0 \in (0,\infty]$ such that for all $x \in \Omega^C$ there is an exterior cone to $\Omega$, \begin{equation} \begin{array}{lll} \left(x+C\left(x,\theta_x\right)\right)\cap B_{\epsilon_0}(x) \subset \Omega^C & \text{ where } & \sin\theta_x = \frac{r}{|x|}. \end{array} \end{equation} \item For all $x \in \partial \Omega$ there is an interior ``cone'' to $\Omega$, \begin{equation} \begin{array}{lll} \left[\left(x+C\left(-x,\theta_x\right)\right) \cap C\left(x,\tfrac{\pi}{2}-\theta_x\right)\right] \cup B_r(0) \subset \Omega & \text{ where } & \sin\theta_x = \frac{r}{|x|}.
\end{array} \end{equation} \item There exists $\epsilon_0>0$ such that \begin{equation}\label{eqn: inf conv containment} \Omega \subset\subset \bigcap_{|z| \leq a\epsilon} \left[(1+\epsilon)\Omega+z\right] \ \hbox{ for all } \epsilon_0>\epsilon>0, \end{equation} for every $0<a<r$. \end{enumerate} \end{lem} \begin{proof} The proof of the equivalence of parts (i)-(iii) is essentially contained in the picture; see Figure~\ref{fig: strong star shaped}. For part (iv) let us first note that (see Figure~\ref{fig: strong star shaped}) $$ \left(x+C(x,\theta_x)\right)\cap B_{\epsilon_0r}(x) \subset \bigcup_{\epsilon_0>\epsilon>0}\bigcup_{0<a<r} B_{a\epsilon}((1+\epsilon)x).$$ Therefore, by (ii), $\Omega \in \mathcal{S}_r$ if and only if for all $x \in \Omega^C$, all $\epsilon_0>\epsilon>0$, and all $0<a<r$, $$ B_{a\epsilon}((1+\epsilon)x) \subset \subset \Omega^C,$$ or equivalently, for every $\epsilon_0>\epsilon>0$ and all $0<a<r$, $$ \{ x: d(x,(1+\epsilon)\Omega^C) \leq a\epsilon \} \subset\subset \Omega^C. $$ Again equivalently, for every $\epsilon_0>\epsilon>0$ and all $0<a<r$, \begin{align*} \Omega &\subset \subset \{ x: d(x,(1+\epsilon)\Omega^C) \leq a\epsilon \}^C = {\mathbb R}^N\setminus\bigcup_{|z| \leq a \epsilon} [(1+\epsilon)\Omega^C+z] = \bigcap_{|z| \leq a\epsilon} \left[(1+\epsilon)\Omega+z\right]. \end{align*} \begin{figure}[t] \begin{center} \includegraphics[scale=0.5]{strong_star-shaped2.pdf} \end{center} \caption{Interior and exterior cones for a strongly star-shaped domain.} \label{fig: strong star shaped} \end{figure} \end{proof} We prove a comparison principle for reflections through hyperplanes. This will be very useful to us later because the reflection ordering is a property which does not depend on the radius from the strong star-shapedness property.
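Before moving on, here is a quick numerical sanity check of the containment \eqref{eqn: inf conv containment} (a sketch, not from the paper; the ellipse, the constants $r$, $a$, $\epsilon$, and the sampling are ad hoc choices): a convex ellipse containing $B_r(0)$ is star-shaped with respect to $B_r(0)$, and its sampled points indeed lie in every translate $(1+\epsilon)\Omega+z$ with $|z|\leq a\epsilon$, $a<r$.

```python
# Numerically test the containment  Omega subset of  intersection over |z| <= a*eps
# of [(1+eps)*Omega + z]  for the ellipse Omega = {x^2/4 + y^2 < 1}, which is
# convex, hence star-shaped w.r.t. B_r(0) for any r < 1.  Constants are demo choices.
import math

def in_omega(x, y):
    return x * x / 4.0 + y * y < 1.0

r, a, eps = 0.9, 0.5, 0.2          # a < r, so the containment should hold
zs = [(a * eps * math.cos(t), a * eps * math.sin(t))
      for t in [2 * math.pi * k / 16 for k in range(16)]]

pts = [(i * 0.1, j * 0.1) for i in range(-20, 21) for j in range(-10, 11)
       if in_omega(i * 0.1, j * 0.1)]

for (x, y) in pts:
    for (zx, zy) in zs:
        # x in (1+eps)*Omega + z  iff  (x - z)/(1+eps) in Omega
        assert in_omega((x - zx) / (1 + eps), (y - zy) / (1 + eps))
print("checked", len(pts), "points against", len(zs), "translates")
```

For convex $\Omega\supset B_r(0)$ the containment also follows directly from $(1+\epsilon)\Omega = \Omega+\epsilon\Omega \supseteq \Omega+\epsilon B_r(0)$.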
Let $H$ be a hyperplane in $\mathbb{R}^N$ with unit normal $\nu_H$, and fix $y \in H$; the reflection through $H$ is then given by $$ \phi_H(x) = x-2\langle x-y, \nu_H\rangle\nu_H.$$ The symmetry of the problem \eqref{eqn: CLML} with respect to reflections at least formally implies that if a solution $u(\cdot,t)$ and its reflection $u(\phi_H(\cdot),t)$ through $H$ are initially ordered in the half spaces of $H$ then they will remain so. \begin{prop}\label{prop: reflection comparison} \textup{(Reflection Comparison)} Suppose $u:\mathbb{R}^N\times[0,T]\to[0,+\infty)$ is a solution of \eqref{eqn: CLML} such that $\Omega_t(u)\in\mathcal{S}_{r}$ for some $r>0$ and for all $t \in [0,T]$. Let $H$ be a hyperplane in $\mathbb{R}^N$ with $H\cap B_r(0) = \emptyset$, and let $H_+$ and $H_-$ be the half spaces of $H$ such that $B_r(0) \subset H_+$. Suppose that $\phi_H(\Omega_0)\cap H_+ \subseteq\Omega_0 \cap H_+$, i.e. $$ u(\phi_H(x),0) \leq u(x,0) \text{ for } x \in H_+.$$ Then we have the ordering: $$ u(\phi_H(x),t) \leq u(x,t) \text{ for } (x,t) \in H_+\times[0,T].$$ \end{prop} A similar argument gives the more standard strong comparison result. \begin{lem}\label{lem: strong comparison} \textup{(Strong comparison)} Suppose $u$ and $v$ are respectively a sub- and supersolution of \eqref{eqn: CLML} on $\mathbb{R}^N\times [0,+\infty)$ which are initially ordered, $$\Omega_0(u)\subseteq\Omega_0(v).$$ Suppose that there exists $T\geq0$ such that for each $t \in [0,T]$ either $\Omega_t(u)$ or $\Omega_t(v)$ is in $\mathcal{S}_{r,R}$ for some $r,R>0$ independent of $t$. Let $c$ be the constant from Assumption~\ref{hyp: F cond} and define $t_0 := r^2/(Rc)$; then $$ \Omega_t(u) \subseteq \Omega_t(v) \hbox{ for } t\leq T+t_0. $$ \end{lem} \begin{proof} The proof is a slightly easier version of the proof of Proposition \ref{prop: reflection comparison} so we omit it.
\end{proof} In order to get a strong comparison type result we want to slightly perturb the supersolution (or the subsolution) so that strict separation holds initially, so that we can apply Theorem \ref{thm: strictly separated comparison}. To achieve this we use the inf and sup convolutions introduced above. This is where the ``technical'' Assumption \ref{hyp: F cond} on $F$ comes in. \textit{Proof of Proposition \ref{prop: reflection comparison}:} Let $\epsilon_0$ and $c$ be as in Assumption \ref{hyp: F cond}, let $0<a<r$, and define, for $0<\epsilon \leq \epsilon_0$, \begin{align*} v_\epsilon(x,t) &= (1+\epsilon)^2u((1+\epsilon)^{-1}x,t), \hbox { and } \\ u_\epsilon(x,t) &= \inf_{y \in B_{a\epsilon-c\epsilon t}(x)} v_\epsilon(y,t) \ \ \text{ for } \ \ 0\leq t < \frac{a}{c}. \end{align*} Then, from Lemma \ref{lem: sstolip} part (iv) and the strong maximum principle for harmonic functions, we have $$ u(x,t) < u_\epsilon(x,t)\ \text{ in }\ \mathbb{R}^N \times [0,\frac{a}{c})\hbox{ for } \epsilon>0. $$ We check the supersolution property of $u_\epsilon$. Note that $v_\epsilon(x,t)$ is a supersolution of \begin{equation*} \left \{ \begin{array}{lll} -\Delta v_\epsilon(x,t) \geq \lambda(t) & \text{ in } & \Omega_t(v_\epsilon)\\ \\ \partial_tv_\epsilon \geq (1+\epsilon)F((1+\epsilon)^{-1}|Dv_\epsilon|)|Dv_\epsilon| & \text{ on } & \Gamma_t(v_\epsilon). \end{array}\right. \end{equation*} Then, from Lemma \ref{lem: spaceconvs}, $u_\epsilon(\cdot,t)$ is a supersolution of \begin{equation*} \left \{ \begin{array}{lll} -\Delta u_\epsilon(x,t) \geq \lambda(t) & \text{ in } & \Omega_t(u_\epsilon)\\ \\ \partial_tu_\epsilon \geq [(1+\epsilon)F((1+\epsilon)^{-1}|Du_\epsilon|)+c\epsilon]|Du_\epsilon| & \text{ on } & \Gamma_t(u_\epsilon). \end{array}\right. \end{equation*} Using Assumption \ref{hyp: F cond} we conclude that $u_\epsilon$ is a supersolution of \eqref{eqn: CLML}.
Since $u_\epsilon$ is a supersolution of \eqref{eqn: CLML}, by the strictly separated comparison principle, Theorem~\ref{thm: strictly separated comparison}, $$ u(\phi_H(x),t) \leq u_\epsilon(x,t) \text{ in }\ H_+ \times [0,\frac{a}{c}). $$ Since $u_\epsilon \searrow u$ as $\epsilon\to 0$, it follows that $$ u(\phi_H(x),t) \leq u(x,t) \text{ in }\ H_+ \times [0,\frac{a}{c}).$$ We now iterate $\floor{Tc/a}+1$ times to conclude. \hfill$\Box$ \subsection{Short Time Existence} As usual in the viscosity solutions theory, the comparison theorem is the key ingredient needed to use Perron's method to show existence. In the existence proof we will use the following elementary facts about solutions of Poisson's equation at the boundary. \begin{lem}\label{lem: holder in lip domain} \textup{(Boundary H\"{o}lder Estimates in Lipschitz Domains)} Let $\Omega \subset {\mathbb R}^N$ be a domain, let $x_0 \in \partial\Omega$, and suppose that there exist $\theta \in (0,\pi/2)$ and $\nu \in S^{N-1}$ such that $$ x_0+\{x: \langle x,\nu \rangle>\cos\theta|x| \}\subset{\mathbb R}^N\setminus\Omega. $$ In other words, $\Omega$ has an exterior cone of opening angle $\theta$ at $x_0$. Suppose $\Lambda>\lambda>0$ and $u: \Omega \to [0,+\infty)$ satisfies \begin{equation*} \begin{array}{lll} -\Delta u (x)\leq \lambda & \text{ for} & x \in \Omega, \\ \\ u(x) = 0 & \text{ on } & \partial\Omega. \end{array} \end{equation*} Then there exist $\alpha = \alpha(\theta,N) \in (0,1)$ and $C=C(\Lambda,N)>0$ such that for all $h>0$, $$\sup_{|z|\leq h} u(x_0+z) \leq Ch^\alpha .$$ \end{lem} \begin{lem}\label{lem: bdry gradient est} \textup{(Boundary Gradient Estimate in $C^{1,1}$ Domains)} Let $\Omega \subset {\mathbb R}^N$ be a domain, let $x_0 \in \partial\Omega$, and suppose that there exist $r>0$ and $y \in {\mathbb R}^N\setminus\overline{\Omega}$ such that $$ B_r(y) \cap \overline{\Omega } = \{x_0\}.$$ In other words, $\Omega$ has an exterior ball at $x_0$ of radius $r$.
Suppose $\Lambda>\lambda>0$ and $u: \Omega \to [0,+\infty)$ satisfies $u(x_0) = 0$ and \begin{equation*} -\Delta u \leq \lambda \quad \text{ for} \quad x \in \Omega . \end{equation*} Then $$ \lim_{s \to 0}\sup_{z \in \Omega \cap B_s(x_0)} \frac{u(z)}{s} \leq C(\Lambda,N)\frac{\sup_{B_{2r}(x_0)} u}{r}. $$ \end{lem} We move on to the short time existence theorem. In fact the argument shows the existence of a global in time discontinuous viscosity solution without any need for Assumption \ref{hyp: F cond}. The uniqueness and continuity, however, rely on the strong comparison result, which requires Assumption \ref{hyp: F cond} and only holds for a short time. This is to be expected, as noted in Remark \ref{rem: continuity of solutions}. In particular the limiting factor will be the strong comparison principle, Lemma \ref{lem: strong comparison}, which only holds for a short time barring any a priori knowledge about strong star-shapedness. \begin{thm}\label{thm: short time existence} \textup{(Short time existence and uniqueness)} Let $r,R>0$ and $\Omega_0 \in \mathcal{S}_{r,R}$, and let $u_0$ solve $$-\Delta u_0 (x) = \lambda(0) \quad \text{ and} \quad u_0(x)=0 \text{ for } x\in\Gamma_0.$$ Then there is a $t_0>0$ depending only on $r$ and $R$ such that there exists a unique continuous viscosity solution $u$ of \eqref{eqn: CLML} on $Q = \mathbb{R}^N\times(0,t_0]$ with initial data $u_0$. \end{thm} \begin{proof} This proof can be found in \cite{GlasnerKim09}. We construct a sub- and a supersolution which take the initial data continuously and then apply Perron's method. 1. First we construct a subsolution. Let $\gamma>0$ be a constant to be chosen later and define \begin{equation}\label{eqn: perron subsoln} U(x,t) := \left \{ \begin{array}{lll} (1-\gamma t)^2u_0(\frac{x}{1-\gamma t}) & \text{ for} & t<\gamma^{-1}, \\ \\ 0 & \text{ for} & t\geq\gamma^{-1} \end{array}\right.
\end{equation} Then, calculating formally, if $x_0 \in \Gamma_0$ then $(1-\gamma t)x_0$ is in $\Gamma_t$ and $$ \frac{U_t}{|DU|}( (1-\gamma t)x_0) = \gamma x_0 \cdot \frac{Du_0}{|Du_0|}\leq -\gamma |x_0| \sqrt{1-\left(\frac{r}{|x_0|}\right)^2} \leq \min F $$ if $\gamma = \gamma(r,R)$ is chosen small; in particular, $$ \gamma < \min F\left(R^2-r^2\right)^{-1/2}$$ will work. This calculation shows that $U$ is a subsolution of \eqref{eqn: CLML} when $\partial\Omega_0$ is smooth enough that $Du_0$ is defined on $\partial\Omega_0$. The calculation can be transferred to the test functions to show that $U$ is a subsolution in the general case. 2. Next we construct a supersolution which takes the initial data continuously. Let $\alpha=\alpha(r/2,R)\in(0,1)$ be given by Lemma~\ref{lem: holder in lip domain} combined with Lemma~\ref{lem: sstolip}. Define, for any $h>0$, \begin{equation}\label{eqn: perron supersoln} V_h(x,t) = u\left[ \bigcap _{|z| \leq \rho(t)}(1+h)\Omega_0+z\right], \end{equation} where $$\rho(t) = a\left(h-\dfrac{1}{a}\max\{F\left(Ch^{\alpha-1}\right),1\}t\right) \ \text{ for } \ 0 \leq t \leq \max\{F\left(Ch^{\alpha-1}\right),1\}^{-1}. $$ Here $0<a<r$ is chosen so that $u_0 \prec V_h$, as in Lemma \ref{lem: sstolip}. It follows from Lemma \ref{lem: spaceconvs} that $V_h$ satisfies $$ \frac{\partial_tV_h}{|DV_h|}(x,t) \geq \max\{F\left(Ch^{\alpha-1}\right),1\} \quad \text{ on } \quad \Gamma_t(V_h) $$ in the viscosity sense.
Let $x \in \Gamma_t(V_h)$; then there exists $y \in \partial B_{\rho(t)}(x) \cap (1+h)\Gamma_0$, and $\Omega_t(V_h)$ has the exterior ball at $x$: $$ B_{\rho(t)}(y) \subset {\mathbb R}^N\setminus\Omega_t(V_h).$$ Then, for $h$ sufficiently small, $\Omega_t(V_h)$ is an intersection of domains star-shaped with respect to $B_{r/2}(0)$, and we apply Lemma \ref{lem: bdry gradient est} and then Lemma \ref{lem: holder in lip domain} to get $$ |DV_h|(x,t) \leq Ch^{-1}\sup_{|z|\leq 2h}V_h(x+z,t) \leq C h^{\alpha-1}, $$ and it follows from the monotonicity of $F$ that $V_h$ is a supersolution. 3. Now we apply Perron's method. Let us define $$ u(x,t) := \sup \{ v: {\mathbb R}^N\times [0,+\infty) \to [0,+\infty): v \text{ is a subsolution of \eqref{eqn: CLML} with initial data } u_0 \}. $$ From its definition and Theorem \ref{thm: strictly separated comparison} we get the ordering, for all $h>0$, \begin{equation}\label{initial_order} \begin{array}{lll} U(x,t) \leq u(x,t) \leq V_h(x,t) & \text{for} & 0 \leq t \leq \max\{F\left(Ch^{\alpha-1}\right),1\}^{-1}. \end{array} \end{equation} Let us now define the upper and lower semi-continuous envelopes of $u$, respectively $$ u^*(x,t) = \limsup_{(y,s) \to (x,t)} u(y,s) \quad \text{ and } \quad u_*(x,t) = \liminf_{(y,s) \to (x,t)} u(y,s).$$ It is standard in viscosity solution theory (see \cite{user}) to show that $u^*$ is a subsolution and $u_*$ is a supersolution. Due to \eqref{initial_order} one can check that $u(\cdot,t) \to u_0(\cdot)$ uniformly as $t \to 0$, i.e. $u^*=u_*$ at $t=0$. Thus, letting $t_0=t_0(r,R)>0$ be as in Lemma \ref{lem: strong comparison}, we have $u^* \leq u_*$ on $[0,t_0]$. Combining this with the fact that $u_*\leq u^*$, we obtain the existence of a unique continuous viscosity solution of \eqref{eqn: CLML} on $[0,t_0]$. \end{proof} We point out that the above proof yields a modulus of continuity in time of the contact line at $t=0$.
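The time-H\"{o}lder exponent that this modulus produces, namely the $\beta$ of Corollary \ref{cor: equicontinuity} below, can be guessed heuristically by balancing the barrier parameters (a sketch under the assumption that $F(s)$ grows like $s^p$ at infinity; this is not part of the original argument):

```latex
% Heuristic balance for the exponent \beta (sketch; assumes F(s) ~ s^p at infinity)
\[
  \text{speed of } V_h \;\lesssim\; F\!\left(Ch^{\alpha-1}\right)\;\lesssim\; h^{-p(1-\alpha)},
  \qquad
  \text{lifetime of } V_h \;\sim\; \frac{ah}{F\!\left(Ch^{\alpha-1}\right)}
  \;\gtrsim\; h^{1+p(1-\alpha)},
\]
\[
  \text{so the contact line moves at most } \sim h \ \text{ in time } \
  t\sim h^{1+p(1-\alpha)},
  \qquad\text{i.e.}\qquad
  d_H\big(\Gamma_0,\Gamma_t\big)\;\lesssim\; t^{\frac{1}{1+p(1-\alpha)}}.
\]
```

Inverting $t\sim h^{1+p(1-\alpha)}$ gives $h\sim t^{\beta}$ with $\beta=\tfrac{1}{1+p(1-\alpha)}$, matching the H\"{o}lder-in-time estimate proved below.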
For solutions which are uniformly strongly star-shaped on a time interval $[0,T]$ this extends to give a uniform modulus of continuity in time for the contact line on $[0,T]$. We make this explicit in the case when $F$ has polynomial growth below. For the rest of this section we will make the following assumption: \begin{hyp}\label{hyp: polynomial growth} There exists $p>0$ such that: \begin{equation}\label{eqn: polynomial growth} \limsup_{s\to\infty}\frac{F(s)}{s^p}<+\infty \end{equation} \end{hyp} We prove a modulus of continuity for strongly star-shaped solutions of \eqref{eqn: CLML} depending only on $r$, $R$, $p$, $\sup \lambda$ and $\sup F(s)/s^p$: \begin{cor}\label{cor: equicontinuity} Suppose $u:\mathbb{R}^N\times[0,T]\to [0,+\infty)$ is a viscosity solution of \eqref{eqn: CLML} with $0<\lambda(t)\leq \Lambda$ bounded and $F$ satisfying Assumptions \ref{hyp: F incr cond}, \ref{hyp: F cond} and \ref{hyp: polynomial growth}. Moreover suppose that $\Omega_t(u)\in \mathcal{S}_{r,R}$ for some $r,R>0$ and all $t\in[0,T]$. Let $\alpha$ be as in Lemma \ref{lem: bdry gradient est} and $$\beta:=\tfrac{1}{1+p(1-\alpha)};$$ then $\Gamma_t(u)$ is $C^\beta$ in time in the Hausdorff metric, i.e., $$ d_H(\Gamma_{t_1}(u),\Gamma_{t_2}(u))\leq C_2|t_1-t_2|^{\beta} \hbox { for all } t_1, t_2 \in [0,T]. $$ Here $C_2$ depends only on $\sup F(s)/s^p$, $p$, $\Lambda$ and $N$. \end{cor} \begin{rem} An analogous equicontinuity result is true for general $F$ satisfying Assumptions~\ref{hyp: F incr cond} and \ref{hyp: F cond}. In this case the H\"{o}lder modulus of continuity will have to be replaced by a more general modulus of continuity depending on the growth of $F$ at infinity. We restrict to the polynomial growth case for simplicity of presentation. \end{rem} \begin{proof} Let $T>t_2>t_1>0$ and let $\gamma$ be as in the proof of Theorem \ref{thm: short time existence}.
Using the barriers from the previous theorem, \begin{equation} u(x,t_2) \geq U(x,t_2) := \left \{ \begin{array}{lll} (1-\gamma (t_2-t_1))^2u(\frac{x}{1-\gamma (t_2-t_1)},t_1) & \text{ for} & t_2-t_1<\gamma^{-1} \\ \\ 0 & \text{ for} & t_2-t_1>\gamma^{-1}. \end{array}\right. \end{equation} Also for $h=C(t_2-t_1)^{\beta}$ we have that \begin{equation} u(x,t_2) \leq V_h(x,t_2) \leq (1+h)^2 u\left(\frac{x}{1+h},t_1\right). \end{equation} Then the following containments hold: $$ [(1-\gamma(t_2-t_1))\vee0]\Omega_{t_1}(u)\subseteq\Omega_{t_2}(u)\subseteq (1+C(t_2-t_1)^{\beta})\Omega_{t_1}(u). $$ So if $x \in \Gamma_{t_1}(u)$ and $t_2-t_1$ is small, $$ \{y: |y-x| \leq CR(t_2-t_1)^{\beta}\}\cap \Gamma_{t_2}(u) \neq \emptyset $$ and \begin{equation}\label{eqn: bdry holder in time} d_H(\Gamma_{t_1}(u),\Gamma_{t_2}(u))\leq C|t_1-t_2|^{\beta}. \end{equation} \end{proof} \begin{rem} See Lemma \ref{lem: hausdorff estimates} for some implications of \eqref{eqn: bdry holder in time}. \end{rem} We will be able to use this Corollary to show existence of a continuous viscosity solution for \eqref{eqn: CLML} when $F$ has polynomial growth. The idea is to take the unique solution $u_M$ for the problem with the truncated free boundary velocity $\min\{F,M\}$ and let $M \to \infty$. Corollary \ref{cor: equicontinuity} will give the equicontinuity needed to take a convergent subsequence of the $u_M$, as long as $\Omega_t(u_M)\in\mathcal{S}_{r,R}$ with $r,R$ uniform in $M$ and $t$. In order to show this we will need some kind of preservation of the strongly star-shaped property which does not depend on $M$. \begin{lem}\label{lem: sending M to infty} Let $T>0$ and $u_n:\mathbb{R}^N\times[0,T]\to [0,+\infty)$ be viscosity solutions of: \begin{equation}\label{eqn: approx problem} \left\{ \begin{array}{lll} -\Delta u_n(x,t) = \lambda_n(t) & \text{ in } & \Omega_t(u_n), \\ \\ \partial_tu_n = F_n(|Du_n|)|Du_n| & \text{ on } & \Gamma_t(u_n). \\ \\ \Omega_0(u_n) = \Omega_0. \end{array}\right.
\end{equation} Here $\lambda_n \to \lambda>0$ uniformly; in particular there exists $\Lambda>0$ such that: $$\sup_n \sup_t \lambda_n(t) \leq \Lambda<+\infty. $$ We suppose that $F_n \leq F$ for all $n$ and $F_n \to F$ uniformly on compact sets. The limiting free boundary speed $F$ has polynomial growth of order $p>1$, $$ 0<\limsup_{s\to \infty} \frac{F(s)}{s^p}<+\infty. $$ The $F_n$ are assumed to all satisfy Assumptions \ref{hyp: F incr cond}, \ref{hyp: F cond} and \ref{hyp: polynomial growth}, with Assumption~\ref{hyp: polynomial growth} satisfied uniformly: \begin{equation} \begin{array}{lll} \sup_n \sup_s \frac{F_n(s)}{s^p}<+\infty. \end{array} \end{equation} Moreover, suppose that there exist $r,R>0$ such that for all $n$ and all $t \in [0,T]$ we have $\Omega_t(u_n) \in \mathcal{S}_{r,R}$. Note, however, that we do \emph{not} assume that the constants $c_n$ from Assumption \ref{hyp: F cond} are uniformly bounded. Then the $u_n$ converge uniformly on ${\mathbb R}^N \times [0,T]$ to a viscosity solution of: \begin{equation}\label{eqn: limit pde} \left\{ \begin{array}{lll} -\Delta u(x,t) = \lambda(t) & \text{ in } & \Omega_t(u), \\ \\ \partial_tu = F(|Du|)|Du| & \text{ on } & \Gamma_t(u). \\ \\ \Omega_0(u) = \Omega_0. \end{array}\right. \end{equation} If additionally $\lambda_n(t) \leq \lambda(t)$ for all $n$ then $u$ is the minimal viscosity solution of \eqref{eqn: limit pde}. In particular, in this case, the limit does not depend on the approximating sequence $F_n$. \end{lem} \begin{proof} 1. First we show that $\Omega_t(u_n)$ is an equicontinuous sequence of paths in $\mathcal{S}_{r,R}$. From Lemma \ref{lem: hausdorff estimates} and Corollary \ref{cor: equicontinuity} we derive for $t,s>0$: $$\sup_nd_H(\Omega_t(u_n),\Omega_s(u_n))\leq\sup_nd_H(\Gamma_t(u_n),\Gamma_s(u_n)) \leq C|t-s|^\beta.
$$ Then from the compactness lemma \ref{lem: compactness} for paths in $\mathcal{S}_{r,R}$, up to a subsequence the $\Omega_t(u_n)$ converge uniformly to a continuous path $\Omega_t$ in $\mathcal{S}_{r,R}$ with free boundary $\Gamma_t := \partial\Omega_t$. \medskip Let $u(x,t)$ be the solution of \begin{equation} \begin{array}{lll} -\Delta u(x,t) = \lambda(t) & \text{ for } & x \in \Omega_t \\ \\ u(x,t) = 0 & \text{ for } & x \in \Gamma_t. \end{array} \end{equation} We claim that $u_n$ converges uniformly to $u$ in $\mathbb{R}^N\times[0,T]$; we will demonstrate this by showing the following estimate: $$ |u_n-u|(x,t) \lesssim_{r,R} d_H(\Omega_t(u_n),\Omega_t(u))^\alpha+|\lambda_n(t)-\lambda(t)|. $$ Let $x \in \Omega_t(u) \Delta \Omega_t(u_n)$; then $$d(x,\partial(\Omega_t (u)\cup \Omega_t(u_n))) \leq d_H(\Gamma_t(u_n),\Gamma_t(u)) \lesssim_{r,R} d_H(\Omega_t(u_n),\Omega_t(u)). $$ Since $u_n-u=0$ on $\partial(\Omega_t(u) \cup \Omega_t(u_n))$ we get from Lemma \ref{lem: holder in lip domain}, $$|u_n-u|(x,t) \leq C d(x,\partial(\Omega_t(u) \cup \Omega_t(u_n)))^\alpha \lesssim_{r,R}d_H(\Omega_t(u_n),\Omega_t(u))^\alpha. $$ Meanwhile for $x \in \Omega_t(u) \cap \Omega_t(u_n)$ we combine the above inequality, which holds on the boundary of $\Omega_t(u) \cap \Omega_t(u_n)$, with the fact that $\text{diam}(\Omega_t(u) \cap \Omega_t(u_n)) \lesssim_R 1$ to get, $$ |u_n-u|(x,t) \lesssim d_H(\Omega_t(u_n),\Omega_t(u))^\alpha+ |\lambda_n(t)-\lambda(t)|. $$ We apply the stability of viscosity solutions under uniform convergence, Lemma \ref{lem: uniform stability}, to see that $u$ is a viscosity solution of the PDE \eqref{eqn: limit pde} as claimed. \medskip 2. Now we show that if $\lambda_n \nearrow \lambda$ then $u$ must be the smallest viscosity solution of \eqref{eqn: limit pde}. Let $v$ be another viscosity solution of \eqref{eqn: limit pde}.
Note that due to the orderings $F_n \leq F$ and $\lambda_n \leq \lambda$ we have that $v$ is a supersolution of each of the approximating problems \eqref{eqn: approx problem}. Using the strong comparison principle Lemma~\ref{lem: strong comparison}, which holds for each problem \eqref{eqn: approx problem}, we get $$ u_n \leq v \hbox{ on } {\mathbb R}^N\times[0,T]. $$ \end{proof} \subsection{Preservation of the Strongly Star-shaped Property}\label{sec: Preservation of the Strongly Star-shaped Property} Here we describe some of the properties of the viscosity solutions of \eqref{eqn: CLML} and \eqref{eqn: CLMV}. First, we will show that if the droplet is initially in $\mathcal{S}_{r}$ for some $r>0$, then this property persists for a short time, with the radius of strong star-shapedness going to zero in some finite amount of time. This short-time regularity will not be sufficient to prove any kind of long-time behavior. However, in Proposition \ref{prop: reflection comparison} we have shown that a reflection comparison principle holds as long as the positivity set is in $\mathcal{S}_{r,R}$ for any $r,R>0$. The reflection comparison principle in turn will allow us to show, in some situations, that actually there was no loss of star-shapedness. In particular for \eqref{eqn: CLMV} we will show that initial data which is in $\mathcal{S}_{r}$ for a sufficiently large $r$, together with a smallness condition on $$\sup_{x\in \Omega_0} |x|-\inf_{x\in \Omega_0} |x|,$$ will maintain some regularity globally in time. In fact, the solution will be strongly star-shaped for all time with a possibly smaller radius. \medskip We show that strong star-shapedness cannot disappear immediately. The Lemma below is essentially contained in \cite{GlasnerKim09}; we give a different proof.
\begin{lem}\label{lem: GK short time ss} \textup{(Short time strong star-shapedness)} Let $u:{\mathbb R}^N\times[0,+\infty)\to[0,+\infty)$ be the solution of \eqref{eqn: CLML} with initial positive phase $\Omega_0\in \mathcal{S}_{r}$ for some $r>0$. Then $$ \Omega_t \in \mathcal{S}_{r-ct} \ \hbox{ for } \ 0 \leq t < \frac{r}{c} $$ where $c$ is from Assumption \ref{hyp: F cond}. \end{lem} \begin{proof} Let $\epsilon_0>\epsilon>0$ and $0<a<r$; then from Lemma \ref{lem: sstolip} and Assumption \ref{hyp: F cond} we know that \begin{equation*} \begin{array}{lll} u_\epsilon(x,t) := \inf_{|z|\leq a\epsilon-c\epsilon t} (1+\epsilon)^2u((1+\epsilon)^{-1}(x+z),t) & \text{ for } & t \leq \frac{a}{c} \end{array} \end{equation*} is a supersolution of \eqref{eqn: CLML} which satisfies $u(\cdot,0) \prec u_{\epsilon}(\cdot,0)$. Therefore, by the strict comparison result Theorem~\ref{thm: strictly separated comparison}, $u \leq u_\epsilon$ for $t < a/c$ and in particular, $$ \Omega_t(u) \subset \Omega_t(u_\epsilon) = \bigcap_{|z|\leq(a-ct)\epsilon} ((1+\epsilon)\Omega_t(u)+z) \hbox{ for every } \epsilon_0>\epsilon>0. $$ Since $\epsilon$ and $0<a<r$ were arbitrary, the converse direction of Lemma \ref{lem: sstolip} part (iv) implies that $\Omega_t(u)$ has the claimed star-shapedness. \end{proof} We would like to show $\Omega_t(u) \in \mathcal{S}_{r}$ as long as $B_r(0)$ is contained in the positive phase $\Omega_t(u)$. This kind of preservation of the star-shapedness property would be very useful, because we could get regularity results by simply showing that the positivity set must always contain some small ball around the origin. However, the arguments of Lemma \ref{lem: GK short time ss} are insufficient to prove such a result. This is where the reflection comparison principle Proposition \ref{prop: reflection comparison} is useful. The key fact is that reflection comparison holds as long as $\Omega_t(u) \in \mathcal{S}_{r}$ for any $r>0$.
\medskip We begin by defining a slightly stronger property than strong star-shapedness which is defined in terms of reflections. In order to do this we will need some notation. Let $\Omega$ be an open bounded domain in $\mathbb{R}^N$. For $\nu \in S^{N-1}$, let $H=H(\nu)$ be the hyperplane through the origin orthogonal to $\nu$ with the half spaces it defines: $$H_+=H_+(\nu) = \{x: x\cdot \nu > 0\}, \quad H_- =H_-(\nu)= \{x: x\cdot \nu < 0\} . $$ Now for $s>0$ define the translates, $$ H(s) = H+s\nu, \quad H_+(s) = H_++s\nu, \quad H_-(s) = H_-+s\nu.$$ For $s$ sufficiently large $$\Omega \subset H_-(s) $$ and so trivially \begin{equation}\label{eqn: sliding plane} \phi_{H(s)}(\Omega\cap H_+(s)) \subset \Omega\cap H_-(s). \end{equation} See Figure~\ref{fig: reflections} for a depiction of the situation. Now let us slide the hyperplane inwards towards the origin by decreasing $s$ until \eqref{eqn: sliding plane} no longer holds. We will call $s_{\text{min}}(\nu,\Omega)$ the closest to the origin that we can move this plane with \eqref{eqn: sliding plane} always holding. More precisely, we define \begin{equation} s_{\text{min}}(\nu,\Omega) := \inf \{ s>0: \phi_{H(\tau)}(\Omega\cap H_+(\tau)) \subset \Omega\cap H_-(\tau) \text{ for all } \tau>s\}. \end{equation} In the following we omit the dependence of $s_{\text{min}}$ on $\Omega$ when it will not cause any confusion. If $s_{\text{min}}(\nu) = 0$ for every direction $\nu$ then $\Omega$ must be a ball. The basic approach of Serrin in \cite{Serrin71} to showing the symmetry of solutions of \eqref{eqn: EQ} is to show that $s_{\text{min}}(\nu) = 0$ for all $\nu$. In our case it is useful to use this same idea of symmetry but in a weaker form.
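As an elementary illustration of $s_{\text{min}}$, included here only for orientation, consider an off-center ball $\Omega = B_R(z)$. Writing $\phi_{H(s)}(x) = x-2(x\cdot\nu-s)\nu$, for $x \in H_+(s)$ one computes $$ |\phi_{H(s)}(x)-z|^2 = |x-z|^2+4(x\cdot\nu-s)(z\cdot\nu-s), $$ so the reflected cap $\phi_{H(s)}(\Omega\cap H_+(s))$ is contained in $\Omega\cap H_-(s)$ exactly when $s \geq z\cdot\nu$. Hence $$ s_{\text{min}}(\nu,B_R(z)) = \max\{z\cdot\nu,0\} \quad \text{ and } \quad \sup_{\nu \in S^{N-1}}s_{\text{min}}(\nu,B_R(z)) = |z|, $$ consistent with the fact, noted above, that $s_{\text{min}}(\nu)=0$ in every direction forces symmetry about the origin.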
\begin{figure}[t] \begin{center} \includegraphics[scale=0.5]{reflections2.pdf} \end{center} \caption{Reflection comparison will hold.} \label{fig: reflections} \end{figure} \begin{DEF}\label{rho_reflection} We say that a bounded, open set $\Omega$ has \textit{$\rho$-reflection} if $B_\rho(0) \subseteq \Omega$ and \begin{equation}\label{eqn: condition} \sup_{\nu \in S^{N-1}} s_{\text{min}}(\nu) \leq \rho. \end{equation} \end{DEF} \begin{rem} As with strong star-shapedness, given some initial data $\Omega_0$, we will in general assume that $$ \sup_{\nu \in S^{N-1}} s_{\text{min}}(\nu,\Omega_0) = \inf_{z \in {\mathbb R}^N} \sup_{\nu \in S^{N-1}} s_{\text{min}}(\nu,\Omega_0+z).$$ Of course if this is not the case it can be corrected by a spatial translation. \end{rem} \begin{rem} One can show (see Lemma \ref{lem: lip norm closeness}) that, upon fixing a maximal diameter $R$, a set has $\rho$-reflection if its boundary normals deviate from the radial direction by $O(\rho/R)$ and additionally the oscillation of the boundary is $O(\rho^2/R)$. Although we would like to know whether the flow preserves convexity, the $\rho$-reflection property does not seem to be very useful in this direction. In particular $\rho$-reflection does not imply convexity, and neither does the converse hold. \end{rem} Note that the property \eqref{eqn: condition} is preserved over time due to the reflection comparison, as long as the positivity set $\Omega_t(u) \in \mathcal{S}_{r}$ for any $r>0$. We list some basic facts about sets which have $\rho$-reflection. The proofs are postponed to the appendix. The first says that $\rho$-reflection imposes a condition on the spatial location of the boundary of the set. \begin{lem}\label{lem: in an annulus} Suppose $\Omega$ has $\rho$-reflection, then \begin{equation}\label{condition} \sup_{x \in \partial\Omega} |x|-\inf_{x\in \partial\Omega}|x| \leq 4\rho.
\end{equation} \end{lem} We mention that \eqref{eqn: condition} imposes a bound on what kind of normal vectors $\partial\Omega$ can have. In fact $\Omega$ will be strongly star-shaped with radius depending only on $\text{dist}(\partial\Omega,B_\rho(0))$. \begin{lem}\label{lem: almost radiality} Suppose $\Omega$ has $\rho$-reflection. Then $\Omega$ satisfies the following: \begin{itemize} \item[(a)] for all $x\in \partial\Omega$ there is an exterior cone to $\Omega$ at $x$, \begin{equation} \begin{array}{lll} x+C\left(x,\phi_x\right) \subset {\mathbb R}^N\setminus\Omega & \text{ where } & \cos\phi_x = \frac{\rho}{|x|}, \ \phi_x \in (0,\pi/2), \end{array} \end{equation} and $C(x,\phi_x)$ is the cone in direction $x$ of opening angle $\phi_x$ as defined in \eqref{eqn: cone def}; \item[(b)] $\Omega\in \mathcal{S}_{r}$ where $$ r = r(\rho,\inf_{x\in \partial\Omega}|x|) = \left(\left(\inf_{x\in \partial\Omega}|x|\right)^2-\rho^2\right)^{1/2}.$$ \end{itemize} \end{lem} Now, combining the strong star-shapedness from Lemma \ref{lem: almost radiality} with the reflection comparison principle Proposition \ref{prop: reflection comparison}, we obtain that $\rho$-reflection is preserved as long as $B_\rho(0)$ is contained in the evolving positive phase. \begin{lem}\label{lem: preservation} Suppose $u:\mathbb{R}^N\times[0,+\infty)\to[0,+\infty)$ is a solution of \eqref{eqn: CLML} and $\Omega_0$ has $\rho$-reflection for some $\rho>0$. Let $I = [0,T)$ be the maximal time interval containing $0$ on which $\overline{B_\rho(0)}\subseteq \Omega_t(u)$. Then $\Omega_t(u)$ has $\rho$-reflection and in particular $\Omega_t(u) \in \mathcal{S}_{r}$ for some $r>0$ for every $t \in I$. \end{lem} \begin{proof} Note that by comparing with a radial subsolution placed below $u(\cdot,0)$ one can see that $T>0$. Suppose, towards a contradiction, that the lemma fails, that is $$ t_* := \inf \{ t \in [0,T): \Omega_t(u) \text{ does not have $\rho$-reflection }\}<T.
$$ Since $\overline{B_\rho(0)}\subseteq \Omega_{t_*}(u)$ we have that $$ h:=\text{dist}(\Omega_{t_*}(u)^C, B_\rho(0))>0$$ so that by Lemma \ref{lem: almost radiality}, $\Omega_{t_*}(u)$ is star-shaped with respect to $B_{(h^2+2\rho h)^{1/2}}(0)$. Then by Lemma \ref{lem: GK short time ss} \begin{equation} \begin{array}{lll} \Omega_t(u) \in \mathcal{S}_{r} & \text{ for } & r = \frac{1}{2}(h^2+2\rho h)^{1/2} \end{array} \end{equation} on some slightly larger interval $[t_*,t_*+r/c)$. Now let $H$ be a hyperplane which does not intersect $B_\rho(0)$. We can apply the reflection comparison Proposition \ref{prop: reflection comparison} to $u(x,t)$ on $[t_*,t_*+r/c)$ to see that $$ u(\phi_H(x),t) \leq u(x,t) \text{ for } (x,t) \in H_+ \times [t_*,t_*+r/c)$$ and therefore $$\phi_H(\Omega_t(u))\cap H_+ \subseteq \Omega_t(u)\cap H_+. $$ This holds for all admissible $H$, so $\Omega_t(u)$ has $\rho$-reflection on $[t_*,t_*+r/c)$, contradicting the definition of $t_*$. \end{proof} Notice that an immediate consequence of Lemma~\ref{lem: preservation} is that for any initial data $\Omega_0$ which has $\rho$-reflection the evolution $\Omega_t$ has $\rho$-reflection at least for $$ t \leq \frac{\min_{x\in\Gamma_0}|x|-\rho}{\min F}. $$ The time on the right hand side above is independent of $c$ from Assumption \ref{hyp: F cond} and of $\lambda(t)$. \medskip Now we present an application of this idea to the volume preserving problem, showing that for certain initial data any solution of the volume preserving problem must have $\rho$-reflection for all time. One can think of this as an a priori estimate for solutions of \eqref{eqn: CLMV}. If the initial data $\Omega_0$ has $\rho$-reflection then Lemma \ref{lem: preservation} says that the solution $\Omega_t$ of the volume preserving flow also has $\rho$-reflection until such a time as $\Omega_t$ touches $B_\rho(0)$ from the exterior.
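For the reader's convenience we record the elementary radial computation that underlies the lower bound used in the proof of Lemma \ref{lem: reflection for all time} below. If $-\Delta u = \lambda$ in $B_\rho(0)$ with $u=0$ on $\partial B_\rho(0)$, then $u(x) = \frac{\lambda}{2N}(\rho^2-|x|^2)$ and $$ \int_{B_\rho(0)} u \, dx = \frac{\lambda|S^{N-1}|}{2N}\int_0^\rho (\rho^2-s^2)s^{N-1}\,ds = \frac{\lambda|S^{N-1}|\rho^{N+2}}{N^2(N+2)}, $$ so the volume constraint $\int u = V$ forces $$ \lambda = \frac{N^2(N+2)V}{|S^{N-1}|\rho^{N+2}} \quad \text{ and } \quad |Du| = \frac{\lambda\rho}{N} = \frac{N(N+2)V}{|S^{N-1}|\rho^{N+1}} \ \text{ on } \ \partial B_\rho(0). $$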
For a domain $\Omega \subset {\mathbb R}^N$ let us define the associated Lagrange multiplier, $$ \lambda[\Omega] = \inf \{ \int |Dv|^2 : v \in H^1_0(\Omega) \ \hbox{ and } \ \int v = V\}.$$ Then from Lemma~\ref{lem: in an annulus} at the touching time $t_*$ we know that $\Omega_{t_*} \subset B_{5\rho}(0)$. This allows us to get a lower bound on the Lagrange multiplier $\lambda[\Omega_{t_*}]$ and show that $B_{\rho}(0)$ is a strict subsolution near the touching time, in the sense that $$ \frac{\lambda[\Omega_t]}{2N}(\rho^2-|x|^2) $$ is a strict classical subsolution for $|t-t_*|$ small. Of course, this will contradict the supersolution property of $\Omega_t$. We make this precise in the following Lemma. \begin{lem}\label{lem: reflection for all time} Suppose $u:\mathbb{R}^N\times[0,T]\to[0,+\infty)$ is a solution of \eqref{eqn: CLMV} with initial data $\Omega_0$ that has $\rho$-reflection with $B_{\rho}(0) \subset\subset \Omega_0$. Then there is a dimensional constant $C_N$ such that if \begin{equation}\label{eqn: rho small} 0<\rho < C_NV^{\frac{1}{N+1}} \end{equation} then there exists $a>0$ such that $B_{(1+a)\rho}(0)\subseteq\Omega_t(u)$ for all $t>0$. In particular, we establish that $$ C_N \geq \frac{1}{5^{\frac{N+2}{N+1}}}\left(\frac{N(N+2)}{|S^{N-1}|}\right)^{\frac{1}{N+1}}. $$ \end{lem} \begin{rem} The scaling $ \rho \lesssim V^{\frac{1}{N+1}}$ is the natural one for such an inequality. In particular, if $\rho$ is larger than the radius of the unique radial stationary solution, $$\rho >r_* = \left(\frac{N}{N+2}\right)^{\frac{1}{N+1}}V^{\frac{1}{N+1}}, $$ then we can construct a counter-example using radially symmetric solutions of \eqref{eqn: CLMV}. The solution of \eqref{eqn: CLMV} with initial data $B_{r}(0)$ for any $r>\rho$ has $\rho$-reflection and converges as $t \to \infty$ to $B_{r_*}(0)$ so, in particular, there is some time after which $B_\rho(0)$ is no longer contained in the evolving positivity set.
Of course in this case the initial data has $\rho$-reflection for every $\rho>0$. \end{rem} \begin{proof} From the compact containment $B_\rho(0) \subset\subset \Omega_0$ along with the strict inequality \eqref{eqn: rho small}, let $a>0$ be small enough that: \begin{equation*} \begin{array}{lll} B_{(1+a)\rho}(0) \subset\subset \Omega_0 & \text{ and } & \rho < \left(\frac{N(N+2)V}{(1+a+4)^{N+2}|S^{N-1}|}\right)^{\frac{1}{N+1}}. \end{array} \end{equation*} Towards a contradiction assume that $B_{(1+a)\rho}(0)$ touches $\Omega_t(u)$ from the inside for the first time at $0<t_*<T$. More precisely let $$t_* = \inf \{T>t >0: B_{(1+a)\rho}(0)\cap \Omega_t(u)^C \neq \emptyset\}$$ where we have assumed that the set being infimized over is non-empty. Note that $t_*>0$ by comparing $u$ with a radial subsolution starting on some ball slightly larger than $B_{(1+a)\rho}(0)$ but still contained in $\Omega_0$, and by continuity of the free boundary (Corollary \ref{cor: equicontinuity}, for example, applies) $$ B_{(1+a)\rho}(0) \subseteq \Omega_{t_*}(u) \text{ and } \exists \ x_*\in \partial B_{(1+a)\rho}(0) \cap \Gamma_{t_*}(u).$$ Now by Lemma \ref{lem: preservation} and $$ B_\rho(0) \subset\subset \Omega_t(u) \text{ for } t\in [0,t_*]$$ we have that $\Omega_t(u)$ has $\rho$-reflection and therefore by Lemma \ref{lem: almost radiality} $$ B_{(1+a)\rho}(0) \subseteq \Omega_t(u) \subseteq B_{(1+a+4)\rho}(0).$$ Therefore we have $$ \lambda(t_*) > \lambda[B_{(1+a+4)\rho}] = \frac{N^2(N+2)V}{|S^{N-1}|((1+a+4)\rho)^{N+2}}.$$ Now, define $$h(x) := \frac{N(N+2)V}{2|S^{N-1}|((1+a+4)\rho)^{N+2}}(((1+a)\rho)^2-|x|^2), $$ which satisfies $-\Delta h(x) < \lambda(t_*)$ where $h>0$ and $h$ touches $u(x,t)$ from below at $(x_*,t_*)$. On its free boundary, due to the assumption on $\rho$ and $a$, $h$ satisfies $$ F(|Dh|(x)) = F\left(\frac{N(N+2)V(1+a)\rho}{|S^{N-1}|((1+a+4)\rho)^{N+2}}\right)> 0 .
$$ The barrier $h$ is a strict classical subsolution which touches $u(x,t)$ from below at $(x_*,t_*)$, contradicting the fact that $u$ is a viscosity supersolution. \end{proof} The following holds now due to Lemma~\ref{lem: preservation} and Lemma~\ref{lem: reflection for all time}. \begin{cor}\label{reflectiontime} Let $u$ and $\Omega_0$ be given as above. Then $\Omega_t(u)\in\mathcal{S}_{r,R}$ for some $r,R>0$ for all $t>0$. \end{cor} \section{Gradient Flow}\label{sec: Gradient Flow} Now we consider the capillary droplet problem, contact line motion with volume preservation. We recall the PDE here for convenience, \begin{equation}\tag{\ref{eqn: CLMV}} \left \{ \begin{array}{lll} -\Delta u(x,t) = \lambda(t) & \text{ in }& \Omega(u)\\ \\ u_t = F(|Du|)|Du| & \text{ on } & \Gamma(u). \end{array}\right. \end{equation} We will consider the problem when \begin{equation}\label{eqn: FBV unbdd} F(|Du|) = |Du|^2-1. \end{equation} We restrict ourselves to the above choice of $F$ because it gives the problem a simple gradient flow structure, although we expect similar results to hold for general $F$ which satisfy Assumption~\ref{hyp: F incr cond}. The viscosity solution theory developed in the previous section does not apply directly to this choice of free boundary velocity. Therefore we will instead deal with the truncated velocity \begin{equation}\label{eqn: FBV bdd} F(|Du|) = \min\{|Du|^2-1,M\} \end{equation} for $M>0$ large and send $M \to \infty$ to get results for \eqref{eqn: FBV unbdd}. As mentioned in the introduction, the problem \eqref{eqn: CLMV} with free boundary velocity \eqref{eqn: FBV unbdd} is formally a gradient flow of the energy \begin{equation}\tag{\ref{eq: energy}} \mathcal{J}(\Omega) := \int_\Omega |Du|^2 +|\Omega| \end{equation} in the space of compact subsets of ${\mathbb R}^N$ (we leave the metric unspecified for now).
Above we call $$ u = u[\Omega] := \argmin \{\int |Dv|^2: v \in H^1_0(\Omega), \ \ \int_{\mathbb{R}^N} v = V \}. $$ Recall that $u[\Omega]$ solves \begin{equation} \left \{ \begin{array}{lll} -\Delta u(x) = \lambda[\Omega] & \text{ in }& \Omega\\ \\ u(x)=0 & \text{ on } & \partial\Omega, \end{array}\right. \end{equation} where the Lagrange multiplier $\lambda[\Omega]$ is given by $$\lambda[\Omega]:= \min \{\int |Dv|^2: v \in H^1_0(\Omega), \ \ \int_{\mathbb{R}^N} v = V \}.$$ \medskip Now we give the formal derivation of the gradient flow structure as motivation. One can think of the Caccioppoli subsets of ${\mathbb R}^N$ as an infinite dimensional manifold such that the tangent space at $\Omega$ is $$T_\Omega = L^{\infty}(\partial\Omega, |D\chi_\Omega|), $$ and the metric is specified as $$g_{\Omega} (f,g) := \int_{\partial\Omega} fg |D\chi_\Omega|. $$ Then suppose that $\Omega_t$ is a smooth set valued solution of \eqref{eqn: CLMV}, i.e. $u[\Omega_t]$ is a smooth viscosity solution. In the geometric framework $\Omega_t$ is a path with velocity $$ T_{\Omega_t} \ni v(t) = F(|Du|(t))=|Du|(t)^2-1. $$ Then we calculate, \begin{align*} \frac{d}{dt}\mathcal{J}(\Omega_t) &= \int_{\Omega_t} 2Du\cdot Du_t+ \int_{\Gamma_t} (|Du|^2+1)F(|Du|) \\ &= 2\lambda(t)\int_{\Omega_t} u_t +\int_{\Gamma_t} 2u_t Du\cdot n+(|Du|^2+1)F(|Du|) \\ &= 2\lambda(t) \frac{d}{dt}\left(\int_{\Omega_t} u\right)+\int_{\Gamma_t} -2 F(|Du|)|Du|^2+(|Du|^2+1)F(|Du|) \\ &= \int_{\Gamma_t} (1-|Du|^2)F(|Du|) = -g_{\Omega_t}(v(t),v(t)). \end{align*} Here we integrated by parts using $-\Delta u = \lambda(t)$, then used the boundary conditions $u_t = F(|Du|)|Du|$ and $Du\cdot n = -|Du|$ on $\Gamma_t$, the identity $\frac{d}{dt}\int_{\Omega_t}u = \int_{\Omega_t}u_t$ (valid since $u=0$ on $\Gamma_t$), and finally the volume constraint $\int_{\Omega_t}u = V$, which makes the first term vanish. Of course, viscosity solutions of \eqref{eqn: CLMV} lack the required smoothness to make the above framework anything but formal. Additionally, as far as we are aware, the distance associated with the metric $g$ has no nice form even in the smooth case. Instead we use the pseudo-distance \begin{equation}\label{eqn: dist tilde} \widetilde{\text{dist}}^2(\Omega_0,\Omega_1) = \int_{\Omega_0\Delta\Omega_1}d(x,\partial\Omega_0)dx.
\end{equation} This pseudo-distance was used previously for the same problem in \cite{GrunewaldKim11} and for the mean curvature flow in \cite{Chambolle04}, and is motivated by the $L^2$-based Riemannian structure formally given to the space of sets with finite perimeter (see \cite{GrunewaldKim11}). Unfortunately, $\widetilde{\text{dist}}$ is not a distance. For example, it does not satisfy the triangle inequality. We will address this deficiency in Lemma \ref{lem: SQTineq}. \medskip Our plan is to show the existence of viscosity solutions for \eqref{eqn: CLMV} by constructing a gradient flow of the energy $\mathcal{J}$. There is a standard approach to defining curves of maximal slope for energies in metric spaces through a discrete time approximation. Some early works on this subject are \cite{JKO98, LuckhausSturzenhecker95}, and the recent book \cite{AGS08} contains a quite general treatment of the method. As noted previously our problem does not live in a nice metric space; however, the essential idea of the construction is the same. In \cite{GrunewaldKim11} it is shown that the curves of maximal slope of $\mathcal{J}$ with respect to $\widetilde{\text{dist}}$ satisfy a barrier property with respect to strict smooth sub and supersolutions of \eqref{eqn: CLML}. From the a priori estimate Lemma \ref{lem: preservation}, any solution of \eqref{eqn: CLMV} with sufficiently round initial data will be in $\mathcal{S}_r$ for all time. With this in mind, it is reasonable to restrict the class of admissible sets in the gradient flow to only include sets which are uniformly strongly star-shaped with respect to the origin. With this additional regularity it becomes possible to show that the solutions of the discrete gradient flow scheme will converge to a viscosity solution of \eqref{eqn: CLMV}.
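We note in passing the consequence of the formal computation above for the specific velocity law \eqref{eqn: FBV unbdd}: along smooth solutions $$ \frac{d}{dt}\mathcal{J}(\Omega_t) = -\int_{\Gamma_t}\left(|Du|^2-1\right)^2|D\chi_{\Omega_t}| \leq 0, $$ so the energy is strictly decreasing unless $F(|Du|)=|Du|^2-1$ vanishes on the contact line, i.e. unless the equilibrium condition $|Du| \equiv 1$ holds on $\Gamma_t$. This is the decay that the discrete scheme below mimics at the level of \eqref{eqn: discrete energy decay}.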
\subsection{Discrete Gradient Flow Scheme}\label{sec: Discrete Gradient Flow Scheme} We will now define the discrete JKO type scheme to construct approximate solutions of the gradient flow \eqref{eqn: CLMV}. We make a notational choice to denote these solutions with $\omega$. This is to distinguish between solutions of the gradient flow and $\Omega(\cdot)$, which we use to denote the positive phase of a height profile. Let $\omega_0$ be the initial positive phase and let $u_0 = u[\omega_0]$ be the corresponding initial height profile. Under the assumption that $\omega_0\in \mathcal{S}_{r_0}$ for some $r_0>0$, let us define a discrete scheme approximating the gradient flow of the droplet energy $\mathcal{J}$ as follows: \begin{DEF}\label{def: JKO scheme} Let $M,h>0$. We set $\omega_{0,h}=\omega_0$, and define iteratively, \begin{equation}\label{eqn: JKOscheme} \omega_{k+1,h} = \argmin\left\{\mathcal{J}(\omega)+h^{-1}\widetilde{\text{dist}}^2(\omega_{k,h},\omega): \omega \in \mathcal{A}^{h}(\omega_{k,h})\right\}. \end{equation} Here $\mathcal{A}^h(\cdot)$ is the class of admissible sets, which we will take to be \begin{equation}\label{eqn: admissible sets} \mathcal{A}^{h}(\omega) := \{U\in \mathcal{S}_{r_0} : d_H(U,\omega)\leq Mh\}. \end{equation} \end{DEF} We suppress the dependence on $M$, $r_0$ and $h$ in the above notations when there will be no confusion. To show that the iteration defined in \eqref{eqn: JKOscheme} is well defined we apply the compactness Lemma \ref{lem: compactness}: \begin{lem}\label{lem: JKOdef} The iteration \eqref{eqn: JKOscheme} is well defined. \end{lem} \begin{proof} We show that for any $\omega_0 \in \mathcal{S}_{r_0,R}$ the infimum from \eqref{eqn: JKOscheme}, \begin{equation}\label{eqn: JKO inf} \inf \left\{\mathcal{J}(\omega)+h^{-1}\widetilde{\text{dist}}^2(\omega_{0},\omega): \omega \in \mathcal{A}^{h}(\omega_{0})\right\}, \end{equation} is achieved by some $\omega \in \mathcal{A}^{h}(\omega_{0})$.
Let $\omega_j\in \mathcal{A}^{h}(\omega_0)$ be an infimizing sequence for \eqref{eqn: JKO inf}. In particular, from the admissibility conditions \eqref{eqn: admissible sets}, $$ \omega_j \in \mathcal{S}_{r_0,R+Mh}. $$ Applying Lemma~\ref{lem: compactness}, up to a subsequence the $\omega_j$ converge in the $d_H$ metric to some $\omega \in \mathcal{S}_{r_0,R+Mh}$ and we easily derive, $$ d_H(\omega,\omega_0) \leq Mh, $$ so that $\omega \in \mathcal{A}^h(\omega_0)$. Then, from Lemma~\ref{lem: hausdorff estimates}, $\mathcal{J}(\cdot)$ and $\widetilde{\text{dist}}^2(\omega_{0},\cdot)$ are continuous with respect to convergence in $(\mathcal{S}_{r_0,R+Mh},d_H)$, so $\omega$ achieves the infimum in \eqref{eqn: JKO inf}. \end{proof} Note that since $\omega_k$ is admissible at each stage we automatically get the discrete energy decay estimate, \begin{equation}\label{eqn: discrete energy decay} \mathcal{J}(\omega_{k+1})+\frac{1}{h}\widetilde{\text{dist}}^2(\omega_{k+1},\omega_{k})\leq \mathcal{J}(\omega_{k}). \end{equation} We would like to use the triangle inequality (which we don't have) to get an energy decay inequality over time of the form: $$\frac{1}{nh}\widetilde{\text{dist}}^2(\omega_{k+n},\omega_{k})\leq \mathcal{J}(\omega_{k})-\mathcal{J}(\omega_{n+k}).$$ It turns out that this is possible, up to a constant, as long as we are restricting to strongly star-shaped sets. \begin{lem}\label{lem: SQTineq} Let $\omega_1,\ldots,\omega_{n+1} \in \mathcal{S}_{r,R}$; then we have the inequality, \begin{equation}\label{eqn: triineq} \frac{C(r,R,N)}{n}\widetilde{\textup{dist}}^2(\omega_{n+1},\omega_1) \leq \sum_{j=1}^n \widetilde{\textup{dist}}^2(\omega_{j+1},\omega_j). \end{equation} \end{lem} In the following it will be useful to introduce some notation. We say that $A \lesssim B$ if there exists a constant $C>0$ such that $A \leq CB$. We say $A \sim B$, or $A$ is equivalent up to constants to $B$, if $A \lesssim B \lesssim A$.
The dependence of the constant $C$ on various parameters $q_i$ is expressed by writing $\lesssim_{q_1,...,q_n}$ or $\sim_{q_1,...,q_n}$. If no dependence is expressed, then the constant is assumed to depend at most on the dimension $N$. \begin{proof} Let $\omega_a, \omega_b \in \mathcal{S}_{r,R}$. We define the distance between the sets $\omega_a$ and $\omega_b$ in the direction $\theta \in S^{N-1}$, \begin{equation*} \begin{array}{lll} d_\theta(\omega_a,\omega_b) = |X_a(\theta)-X_b(\theta)| & \text{ where } & \{X_i(\theta)\} = \partial\omega_i\cap \{x: \frac{x}{|x|} = \theta\}, \ i \in \{a,b\}. \end{array} \end{equation*} Notice that, due to the strict star-shapedness of the $\omega_i$, there is only one point $X_i(\theta)$ of $\partial\omega_i$ along the direction $\theta$. It is easy to check that $d_\theta$ satisfies the triangle inequality: $$ d_\theta(\omega_a,\omega_b) \leq d_\theta(\omega_a,\omega_c)+d_\theta(\omega_c,\omega_b),$$ for any bounded strictly star-shaped sets $\omega_i$, $i \in \{a,b,c\}$. \begin{Claim} The quantity $\widetilde{\text{dist}}$ is equivalent up to constants to a true distance: \begin{equation}\label{eqn: equiv distances} \widetilde{\text{dist}}(\omega_a,\omega_b) \sim_{r,R,N} \left(\int_{S^{N-1}}d_\theta(\omega_a,\omega_b)^2d\theta\right)^{1/2}. \end{equation} \end{Claim} Using the claim along with Cauchy-Schwarz and the triangle inequality for $d_\theta$ we get, \begin{align*} \sum_{j=1}^n \widetilde{\text{dist}}^2(\omega_{j+1},\omega_j)&\gtrsim_{r,R,N} \sum_{j=1}^n \int_{S^{N-1}}d_\theta(\omega_{j+1},\omega_{j})^2d\theta \geq \frac{1}{n} \int_{S^{N-1}}d_\theta(\omega_{n+1},\omega_{1})^2d\theta \gtrsim_{r,R,N} \widetilde{\text{dist}}^2(\omega_{n+1},\omega_1). \end{align*} So it suffices to show the claimed equivalence \eqref{eqn: equiv distances}.
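As a toy numerical sanity check (not part of the proof), note that for each fixed direction $\theta$ the scalar radii $a_j = |X_j(\theta)|$ satisfy, by the telescoping triangle inequality followed by Cauchy--Schwarz, $(a_{n+1}-a_1)^2 \leq n\sum_{j=1}^n(a_{j+1}-a_j)^2$; this is exactly the chaining step used above and can be verified on random radii:

```python
import numpy as np

# Toy check of the chaining step: for a fixed direction theta the radii
# a_j = |X_j(theta)| satisfy (a_{n+1} - a_1)^2 <= n * sum_j (a_{j+1} - a_j)^2,
# by the telescoping triangle inequality followed by Cauchy-Schwarz.
rng = np.random.default_rng(1)
for _ in range(1000):
    a = rng.uniform(0.5, 2.0, size=8)   # radii along one direction theta
    n = len(a) - 1
    lhs = (a[-1] - a[0]) ** 2
    rhs = n * np.sum(np.diff(a) ** 2)
    assert lhs <= rhs + 1e-12
```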
We calculate: \begin{align*} \widetilde{\text{dist}}^2(\omega_a,\omega_b) &= \int_{\omega_a\Delta\omega_b}d(x,\partial\omega_b)dx =\int_{S^{N-1}}\int_0^R d(\rho\theta,\partial\omega_b){\bf1}_{\omega_a\Delta\omega_b}(\rho\theta)\rho^{N-1} \ d\rho d\theta \\ &\sim_{r,R,N} \int_{S^{N-1}}\int_0^R d(\rho\theta,\partial\omega_b){\bf1}_{\omega_a\Delta\omega_b}(\rho\theta) \ d\rho d\theta \\ & = \int_{S^{N-1}}\int_{\min\{|X_a(\theta)|,|X_b(\theta)|\}}^{\max\{|X_a(\theta)|,|X_b(\theta)|\}}d(\rho\theta,\partial\omega_b) \ d\rho d\theta \\ &\sim_{r,R} \int_{S^{N-1}}\int_{\min\{|X_a(\theta)|,|X_b(\theta)|\}}^{\max\{|X_a(\theta)|,|X_b(\theta)|\}}\left|\rho -|X_b(\theta)|\right| \ d\rho d\theta \\ & = \int_{S^{N-1}}\int_0^{d_\theta(\omega_a,\omega_b)}\rho \ d\rho d\theta = \frac{1}{2}\int_{S^{N-1}}d_\theta(\omega_a,\omega_b)^2d\theta. \end{align*} Here we have used an easy consequence of Lemma \ref{lem: sstolip}: for $x \in \mathbb{R}^N$ the radial distance to $\partial\omega_i$ (i.e. $|X_i(\hat{x})-x|$) is equivalent to $d(x,\partial\omega_i)$ up to constants depending on $r$ and $R$. In particular, for $x \in {\mathbb R}^N\setminus\omega_i$, from Figure~\ref{fig: strong star shaped}, $$ B_{\tfrac{r}{R}|x-X_i(\hat{x})|}(x) \subset {\mathbb R}^N\setminus\omega_i $$ so that $$ |x-X_i(\hat{x})|\geq d(x,\partial\omega_i) \geq \frac{r}{R}|x-X_i(\hat{x})|. $$ \end{proof} As in the proof of Lemma~\ref{lem: JKOdef}, the restrictions on the admissible sets will allow us to take a continuum limit of the discrete time scheme. In particular, the assumption of uniform strong star-shapedness gives equicontinuity of the contact lines $\partial\omega_k$ in space, and the Hausdorff distance movement restriction gives equicontinuity in time. \begin{lem}\label{lem: htozero} \textup{ (Continuum Limit)} Let $\omega_{k,h}$ be given as in \eqref{eqn: JKOscheme}. Then the following holds for the interpolants \begin{equation}\label{eqn: interpolant} \omega_h(t) = \omega_{k,h} \text{ for } t \in [kh,(k+1)h).
\end{equation} \begin{enumerate}[(i)] \item There exists $R=R(r_0,N)>0$ such that $\omega_h(t) \subset B_R(0)$ for all $h,t >0$ and therefore $$\text{Lip}(\partial\omega_h(t)) \leq C(r_0,N).$$ \item $\omega_h(t)$ (up to a subsequence) locally uniformly converges in $t \in [0,+\infty)$ to $\omega(t)$, which satisfies $$ d_H(\omega(t_1),\omega(t_2))\leq M|t_1-t_2| \hbox{ for all } t_1>t_2\geq 0. $$ \item For $C=C(r_0,R, N)$ given as in Lemma \ref{lem: SQTineq}, $\omega_h(t)$ satisfies the energy decay estimate \begin{equation}\label{eqn: cts energy decay} \frac{C}{t_2-t_1}\widetilde{\text{dist}}^2(\omega_h(t_1),\omega_h(t_2))\leq \mathcal{J}(\omega_h(t_1))-\mathcal{J}(\omega_h(t_2)). \end{equation} \end{enumerate} \end{lem} \begin{proof} We give a sketch of the proof for (i). Let $x \in \partial\omega_h(t)$ and recall that $$|\omega_h(t)| \leq \mathcal{J}(\omega_h(t)) \leq \mathcal{J}(\omega_0), $$ and $\omega_h(t)$ has the interior cone at $x$, $$ \{y: y\cdot x\geq 0 \text{ and } (x-y)\cdot x \geq |x-y|\sqrt{|x|^2-r_0^2} \}\cup B_{r_0}(0) \subset \omega_h(t). $$ This cone contains $\sim |x|/r_0$ disjoint balls of radius $r_0/2$ centered along its axis, so that $$ |x| \lesssim R(r_0,N):= r_0^{1-N}\mathcal{J}(\omega_0). $$ Now we prove (ii) and (iii). For simplicity we will take $h= 2^{-j}$ and then call the interpolant \eqref{eqn: interpolant} $\omega_{j}(t)$. From (i) and Lemma \ref{lem: compactness}, for each fixed dyadic rational $t\geq 0$ the sequence $\omega_j(t)$ has a subsequence in $j$ which converges in Hausdorff distance. Diagonalizing, we get a subsequence which we continue to call $j$ along which $\omega_j(t)$ converges in Hausdorff distance to a limit we call $\omega(t)$. From \eqref{eqn: admissible sets} and the triangle inequality we get: $$d_H(\omega_j(t_1),\omega_j(t_2))\leq M|t_1-t_2| \hbox{ for dyadic rationals } t_1, t_2 \hbox{ and for sufficiently large } j ,$$ and $\omega(t)$ inherits the same estimate.
$\omega(t)$ is densely defined and uniformly continuous and thus has a unique $M$-Lipschitz extension to $[0,+\infty)$. The local uniform convergence then directly follows. Estimate \eqref{eqn: cts energy decay} follows from the continuity of $\mathcal{J}(\cdot)$ and $\widetilde{\text{dist}}^2(\cdot,\cdot)$ with respect to convergence in Hausdorff distance: see Lemma \ref{lem: hausdorff estimates}. \end{proof} \subsection{Barrier Properties for the Gradient Flow} In \cite{GrunewaldKim11} a discrete gradient flow is constructed for \eqref{eqn: CLMV} without restrictions on the star-shapedness of the domain or the maximum speed of the free boundary. There it is proven that the discrete solutions satisfy a barrier property with respect to smooth strict sub- and super-solutions of \eqref{eqn: CLML}. An analogous result can be proven for the solutions of the restricted iteration scheme defined in \eqref{eqn: JKOscheme}. The cost of the restriction to nicer sets in \eqref{eqn: JKOscheme} is that the class of barriers for which the sub- and super-solution properties hold is reduced. The necessary conditions on admissible barriers can be deduced by inspecting the proofs in Section 3 of \cite{GrunewaldKim11}. We give a modified statement of the super-solution barrier property, applicable to our situation, below. \begin{thm}\label{lem: barriers for restricted JKO} \textup{(Grunewald, Kim) (Super-solution barrier property) } Let $u_h = u[\omega_h]$ be the profile after one iteration of the scheme with time step size $h>0$, and let $$ \lambda = \min\{\lambda[\omega_0],\lambda[\omega_h]\}.$$ Let $\rho>0$, $x_0 \in {\mathbb R}^N$ and suppose there exist $\phi$ smooth with $|D\phi|\neq 0$ in $B_\rho(x_0) \times [0,h]$ and a $\delta>0$ such that \begin{equation*} \begin{array}{lll} -\Delta \phi(\cdot,t) < \lambda-\delta & \textup{ in } & B_\rho(x_0) \times [0,h] \\ \\ \dfrac{\phi_t}{|D\phi|}-(|D\phi|^2-1)< -\delta & \textup{ on } & \Gamma(\phi)\cap(B_\rho(x_0) \times [0,h]).
\end{array} \end{equation*} Moreover, we require that there exists $C>0$ which does not depend on $h$ such that the sets \begin{align}\label{eqn: barrierrestrict} \omega_h\cup(\{\phi(\cdot,h)>\eta\}\cap B_\rho(x_0)), \ -h^2\leq\eta\leq Ch^{1/2} \end{align} are in the admissible class $\mathcal{A}^h(\omega_0)$. Then, for $0<h<h_0 = h_0(\rho, \phi,\delta)$, the following holds: $$ \hbox{ If }\phi(\cdot,0)\leq u_0(\cdot), \hbox{ then } \phi(\cdot,h) \leq u_h(\cdot). $$ \end{thm} Essentially, \eqref{eqn: barrierrestrict} requires that the sets obtained by perturbing $\omega_h$ with the barrier $\phi$, which crosses $u_h$ from below, stay in the admissible class. An analogous barrier property with respect to supersolutions holds as well. \medskip Now we show that the barrier property carries over to the continuum limit $\omega(t)$. \begin{lem}\label{lem: barrier property for grad flow} \textup{(Restricted super-solution property for the gradient flow)} Let $\omega(t)$ be a continuum limit of the discrete gradient flow described in Lemma \ref{lem: htozero}. Let $\phi \in C^{2,1}(\overline{\Omega(\phi)}\cap\{0\leq t\leq T\})$ such that $|D\phi|\neq 0$ on $\Gamma(\phi)\cap\{0\leq t\leq T\}$, the initial strict separation $\Omega_0(\phi) \subset\subset \omega(0)$ holds and $\phi$ is a strict subsolution, \begin{equation}\label{eqn: strict subsoln} \begin{array}{lll} -\Delta \phi(\cdot,t) < \lambda[\omega(t)] & \textup{ in } & \Omega(\phi)\cap\{0\leq t\leq T\} \\ \\ \phi_t-\min\{|D\phi|^2-1,M\}|D\phi|< 0 & \textup{ on } & \Gamma(\phi)\cap\{0\leq t\leq T\}. \end{array} \end{equation} Then $\phi$ cannot touch $u[\omega(t)]$ from below at any free boundary point $(x_0,t_0) \in \Gamma(u)$ such that \begin{equation} \label{eqn: normal condition} \begin{array}{lll} D\phi (x_0,t_0) \cdot x_0 < -r_0 |D\phi| (x_0,t_0).
\end{array} \end{equation} \end{lem} \begin{rem} Test functions $\phi \in C^{2,1}(\overline{\Omega(\phi)}\cap\{0\leq t\leq T\})$ with $|D\phi|\neq 0$ on $\Gamma(\phi)$ can be extended to be $C^{2,1}({\mathbb R}^N\times[0,T])$. \end{rem} \begin{proof} Suppose not. Then there exists $\phi\in C^{2,1}({\mathbb R}^N\times[0,T])$ which touches $u = u[\omega(t)]$ from below in $\mathbb{R}^N\times(0,T)$ at a free boundary point satisfying \eqref{eqn: normal condition}. From the continuity of the free boundaries of $\phi$ and $u$, this occurs for the first time at some $(x_0,t_0) \in \Gamma(u)\cap\Gamma(\phi)$, with $0<t_0\leq T$. That is, there exists $\rho>0$ such that \begin{equation*} \begin{array}{lll} \phi(x,t) \leq u(x,t) & \text{ for } & (x,t) \in Q:=B_{\rho}(x_0)\times[t_0-\rho,t_0]. \end{array} \end{equation*} We may assume that $\phi$ touches $u$ strictly from below by making the replacement, $$\phi \to (\phi -\epsilon|x-x_0|^2-\epsilon(t_0-t))_+. $$ In particular, we will have, \begin{equation*} \begin{array}{lll} \phi(x,t) < u(x,t) & \text{ for } & (x,t) \in Q\cap \overline{\Omega(\phi)}\setminus \{(x_0,t_0)\}, \end{array} \end{equation*} and for $\epsilon>0$ sufficiently small all the strict subsolution conditions for $\phi$ along with \eqref{eqn: normal condition} still hold. Furthermore, we can make a subsolution which crosses $u$ from below in $Q$ by making a small translation in the normal direction to $\Gamma_{t_0}(\phi)$ at $x_0$, \begin{equation*} \begin{array}{lll} \varphi(x,t):= \phi(x+\epsilon \nu,t) & \text{ where } & \nu = \frac{D\phi}{|D\phi|}(x_0,t_0). \end{array} \end{equation*} Here $\epsilon>0$ is chosen sufficiently small based on the $C^2$ (in space) norm of $\phi$ so that \eqref{eqn: normal condition 2} holds for $\varphi$, $\varphi(x_0,t_0)>0 = u(x_0,t_0)$ and also \begin{equation*} \begin{array}{lll} \varphi(x,t) \prec u(x,t) & \text{ for } & (x,t) \in \partial_pQ.
\end{array} \end{equation*} Then we take $\rho$ small depending on the modulus of continuity of $D\phi$ such that there exists $\delta>0$ with: \begin{equation*} \begin{array}{lll} |D\varphi|>\delta,& \varphi_t <(M-\delta)|D\varphi|,& u-\varphi >\delta \text{ on } \partial_pQ\cap \Omega(\varphi), \end{array} \end{equation*} and \eqref{eqn: normal condition} holds on all of $Q$, that is, \begin{equation}\label{eqn: normal condition 2} \begin{array}{lll} D\varphi (x,t) \cdot x < -r_0 |D\varphi| (x,t) & \text{ for } & (x,t) \in Q\cap \overline{\Omega(\varphi)}. \end{array} \end{equation} In order to make use of Lemma \ref{lem: barriers for restricted JKO} we need to take this information back to the approximating sequence of discrete solutions. Let $h_j \to 0$ be a sequence along which the discrete gradient flow approximations converge, $$ d_H(\omega_{h_j}(t), \omega(t))\to 0$$ uniformly for $t_0-\rho\leq t \leq t_0$. From Lemma \ref{lem: hausdorff estimates} the $u_j(\cdot,t) := u[\omega_{h_j}(t)]$ converge uniformly in $(x,t)$ to $u$. Let $j$ be sufficiently large ($h_j$ small) so that Lemma \ref{lem: barriers for restricted JKO} will apply and also $h_j \leq \delta^2$, $\varphi(x_0,t_0)> u_j(x_0,t_0)$ and \begin{equation}\label{eqn: strict separation for discrete} \begin{array}{lll} \varphi(x,t) \prec u_j(x,t) & \text{ for } & (x,t) \in \partial_pQ. \end{array} \end{equation} Now there exists $t_0-\rho-h_j< t_k:=kh_j <t_0$ such that \begin{equation*} \begin{array}{lll} \varphi(x,t_k) \leq u_j(x,t_k) & \text{ for all } & x \in B_\rho(x_0) \\ \\ \varphi(x_1,t_{k+1}) > 0=u_j(x_1,t_{k+1}) & \text{ for some } & x_1 \in B_\rho(x_0). \end{array} \end{equation*} To apply Lemma \ref{lem: barriers for restricted JKO} and get a contradiction it is sufficient to show the following: \begin{Claim} Let $\eta\geq -h_j^2$; then, writing $\omega_{h_j,k} = \omega_k$: $$ U_\eta := (\{\varphi(\cdot,t_{k+1})>\eta\}\cap B_{\rho}(x_0))\cup \omega_{k+1} \in \mathcal{A}^{h_j}({\omega_{k}}).
$$ \end{Claim} Let $x \in U_\eta$; we claim that $$ d(x,\omega_k) \leq Mh_j. $$ If $x \in \omega_{k+1}$ then $d(x,\omega_{k})\leq Mh_j$ from the admissibility condition \eqref{eqn: admissible sets} for $\omega_{k+1}$. If $x \in \Omega_{t_{k+1}}(\varphi-\eta) \cap B_\rho(x_0)$ then in the case $\eta\geq0$, from the containment $\Omega_{t_k}(\varphi)\cap B_\rho(x_0) \subset \omega_{k} $ and $\varphi_t <M|D\varphi|$ we get, $$ d(x,\omega_{k}) \leq d(x,\Omega_{t_k}(\varphi)) < Mh_j. $$ In the case $-\delta^2h_j\leq-h_j^2\leq\eta<0$: \begin{align*} d(x,\omega_{k}) \leq d(x,\Omega_{t_k}(\varphi)\cap B_\rho(x_0)) &\leq d_H(\Omega_{t_{k+1}}(\varphi-\eta),\Omega_{t_{k+1}}(\varphi))+(M-\delta)h_j \\ & \leq \frac{-\eta}{\delta}+(M-\delta)h_j \\ & \leq Mh_j. \end{align*} Now let $x \in \omega_{k}$; then $d(x,U_\eta)\leq d(x,\omega_{k+1})\leq Mh_j$. Combining with the above arguments, $$ d_H(U_\eta,\omega_{k}) \leq Mh_j.$$ \medskip That $U_\eta\in \mathcal{S}_{r_0}$ is more subtle and this is where the condition \eqref{eqn: normal condition} comes in. For a given $x \in \partial U_\eta$, we need to show that for every $y \in B_{r_0}(0)$, \begin{equation}\label{eqn: containment} L_{x,y}:=\{z \in \mathbb{R}^N: z = y + s(x-y), \ s \in [0,1]\}\subset \overline{U_\eta}. \end{equation} This is trivially true for all $x \in \partial U_\eta \cap \partial \omega_{k+1}$ from the strong star-shapedness of $\omega_{k+1}$. So we consider $$ x \in \partial U_\eta \cap \partial (\Omega_{t_{k+1}}(\varphi-\eta)\cap B_\rho(x_0)) \subset B_{\rho}(x_0)\cap\partial \Omega_{t_{k+1}}(\varphi-\eta). $$ Let $y \in B_{r_0}(0)$ and consider the line segment from $x$ to $y$ parametrized by $$ z(s) = x + s(y-x).$$ First we show that -- for $s>0$ small depending on the $C^2$ norm of $\varphi$ -- $z(s) \in U_\eta$. Using $|y|<r_0$ and \eqref{eqn: normal condition 2}, $$ \varphi(z(s)) - \eta = sD\varphi(x) \cdot (y-x)+o(s) > s(r_0-|y|)|D\varphi|(x)+o(s)>0.
$$ Suppose towards a contradiction that $z(s)$ exits $\overline{U_\eta}$ before $s=1$; then by continuity it must pass through $\partial U_\eta$ for the first time at $0<s_0<1$ defined by, $$ 1>s_0 = \inf \{s>0: z(s) \in \partial U_\eta\}>0. $$ If $z(s_0) \in \partial \omega_{k+1}$ then we are done due to $\omega_{k+1} \in \mathcal{S}_{r_0}$: $$L_{z(s_0),y} \subset \overline{\omega_{k+1}} \subset \overline{U_\eta}.$$ So it remains to consider the case $z(s_0) \in B_{\rho}(x_0)\cap\partial \Omega_{t_{k+1}}(\varphi-\eta)$ and in this case: $$ 0 \geq \left.\frac{d}{ds}\varphi(z(s))\right|_{s=s_0^{-}} = D\varphi(z(s_0)) \cdot (y-x) = \frac{1}{1-s_0}D\varphi(z(s_0)) \cdot (y-z(s_0)),$$ rearranging this and using $|y| < r_0$ we get $$D\varphi(z(s_0))\cdot z(s_0) \geq D\varphi(z(s_0))\cdot y > -r_0|D\varphi(z(s_0))|$$ which contradicts \eqref{eqn: normal condition 2}. \end{proof} An analogous statement about touching from above by strict classical super-solutions holds as well. Lemma \ref{lem: barrier property for grad flow} almost says that $\omega(t)$ is a viscosity super-solution of \eqref{eqn: CLMV}, except for the restriction \eqref{eqn: normal condition} on the barriers. In fact we can make the following statement: \begin{prop}\label{prop: condition to be vs} If there exists $r'>r_0$ such that $\omega(t)\in\mathcal{S}_{r'}$ for all $t \in [0,T]$ then $u[\omega(t)]$ cannot be touched from below (above) by any strict classical sub-solution (super-solution) of (P-$\lambda[\omega(t)]$) and is therefore a viscosity solution of \eqref{eqn: CLMV}. \end{prop} \begin{proof} This is simply because any $\phi\in C^{2,1}({\mathbb R}^N\times[0,T])$ which touches $u[\omega(t)]$ from below (above) at a free boundary point must necessarily satisfy \eqref{eqn: normal condition}.
To see this fact, let $x \in \partial\omega(t)$; now from the strong star-shapedness of $\omega(t)$, $$\{x+s(x-y): s\geq 0 \text{ and } y \in B_{r'}(0)\}\subset{\mathbb R}^N\setminus\omega(t) \subset{\mathbb R}^N\setminus\Omega_t(\phi)$$ and so for any $y \in B_{r'}(0)$ $$ 0\geq \left.\frac{d}{ds}\phi(x+s(x-y))\right|_{s=0} = D\phi(x) \cdot (x-y). $$ Rearranging the above inequality, we obtain \eqref{eqn: normal condition}: $$D\phi(x) \cdot x \leq \inf_{y \in B_{r'}(0)}D\phi(x)\cdot y = -r'|D\phi|(x)<-r_0|D\phi|(x).$$ \end{proof} Unfortunately this statement is not that useful to us, because we would still need to show that the gradient flow preserves strong star-shapedness. That is, if we had a lemma for $\omega(t)$ like Lemma \ref{lem: GK short time ss}, then from the above proposition we would know that $\omega(t)$ is a viscosity solution of \eqref{eqn: CLMV} at least for a short time, and then we could apply a result about preservation of strong star-shapedness for viscosity solutions, like Lemma \ref{lem: reflection for all time}, to show that $\omega(t)$ is a viscosity solution globally in time. Instead of attempting to prove Lemma \ref{lem: GK short time ss} for solutions of the gradient flow, we will show directly that $\omega(t)$ and the viscosity solution of (P-$\lambda[\omega(t)]$) are the same. The idea comes from the following corollary of the proof of Proposition \ref{prop: condition to be vs}. As usual the corresponding result for supersolutions is also true. \begin{cor}\label{cor: comparison for smooth} (Comparison with star-shaped classical subsolutions) Suppose $\phi\in C^{2,1}({\mathbb R}^N\times[0,T])$ is a strict classical sub-solution of (P-$\lambda[\omega(t)]$) and the initial strict separation $$ \phi(\cdot,0) \prec u[\omega(0)] $$ holds. Moreover suppose that for some $r'>r_0$ the positive phase of $\phi$, $\Omega_t(\phi)$, is in $\mathcal{S}_{r'}$ for all $t \in [0,T]$. Then $$ \phi(\cdot,T) \leq u[\omega(T)].
$$ \end{cor} \subsection{Comparison}\label{sec: JKO Comparison} Let $\omega(t)$ be the continuum limit of the discrete gradient flow scheme defined above. We want to show that $u = u[\omega(t)]$ is a viscosity solution of the problem, \begin{equation}\label{eqn: fixedlambda} \left\{\begin{array}{lll} -\Delta v(x,t) = \lambda[\omega(t)] &\text{ in }& \omega(t) \\ \\ v_t = \min\{|Dv|^2-1,M\}|Dv|& \text{ on }& \partial \omega(t), \end{array}\right. \end{equation} at least as long as the unique viscosity solution of (P-$\lambda[\omega(t)]$) satisfies the appropriate strong star-shapedness condition. In fact $u$ satisfies a comparison principle with respect to sub- and super-solutions of \eqref{eqn: fixedlambda} which are star-shaped with respect to a slightly larger ball than $B_{r_0}(0)$. This is the non-smooth version of the idea of Corollary \ref{cor: comparison for smooth}. Various iterations of the proof can be found in the literature \cite{CaffarelliVazquez99, Kim03, BrandleVazquez05}; we will only sketch most of the details of the proof. The essential idea is, as in Corollary \ref{cor: comparison for smooth}, to demonstrate that the restricted barrier property of the gradient flow solution $\omega(t)$ is sufficient to make the argument work. \begin{prop}\label{prop: comparison for grad flow} \textup{(Comparison with sub-solutions)} Let $u(\cdot,t)=u[\omega(t)]$ as above from the gradient flow and $v$ a continuous sub-solution of \begin{equation*} \left\{\begin{array}{lll} -\Delta v(x,t) \leq \lambda[\omega(t)] &\text{ in }& \Omega_t(v) \\ \\ v_t \leq \min\{|Dv|^2-1,M\}|Dv|& \text{ on }& \Gamma_t(v), \end{array}\right. \end{equation*} with initial data ordered, $$\Omega_0(v) \subseteq \omega(0).$$ Recall the radius $r_0 > 0$ from the restriction on admissible sets in the discrete scheme \eqref{eqn: JKOscheme}, and suppose there exists $r'>r_0$ such that $\Omega_t(v)\in\mathcal{S}_{r'}$ for all $t \in [0,T]$.
Then $$ v(\cdot,t)\leq u(\cdot,t) \ \text{ on }\ [0,T].$$ \end{prop} \begin{proof} 1. First we show it suffices to treat the case when $u$ and $v$ are initially strictly separated, \begin{equation}\label{eqn: strict assumption 1} \Omega_0(v) \subset\subset \omega(0), \end{equation} and there exists $\epsilon>0$ so that $v$ satisfies, \begin{equation}\label{eqn: strict assumption 2} \begin{array}{lll} -\Delta v(x,t) \leq (1-\epsilon)\lambda(t) & \text{ for } & (x,t) \in \Omega(v). \end{array} \end{equation} Let us assume temporarily that the result holds in this case, and show that it then also holds under the non-strict conditions given in the statement of the proposition. This is accomplished by a similar device to the one used in the proof of the strong comparison type results Lemma \ref{lem: strong comparison} and Proposition \ref{prop: reflection comparison}. Let $\epsilon>0$ be small enough that $(1+\epsilon)^{-1}r' >r_0$, let $c$ be the constant from Assumption \ref{hyp: F cond}, and let $0<a<r_0$; then define the perturbation, \begin{equation*} \begin{array}{lll} \widetilde{v}_\epsilon(x,t) = \sup_{|z|\leq a\epsilon-c\epsilon t} (1+\epsilon)^{-3}v((1+\epsilon)(x+z),t) & \text{ for } & 0 \leq t \leq \frac{a}{c}. \end{array} \end{equation*} Now the perturbed subsolution $\widetilde{v}_\epsilon$ satisfies \eqref{eqn: strict assumption 1} and \eqref{eqn: strict assumption 2} and so we get, \begin{equation*} \begin{array}{lll} v(x,t) = \sup _{\epsilon>0} \widetilde{v}_\epsilon(x,t) \leq u(x,t) & \text{ for } & 0 \leq t \leq a/c. \end{array} \end{equation*} Iterating this $\floor{Tc/a}$ times we get the desired result. \medskip 2. Now we work in the case when \eqref{eqn: strict assumption 1} and \eqref{eqn: strict assumption 2} hold. Suppose towards a contradiction that $v$ crosses $u$ from below on ${\mathbb R}^N\times[0,T]$. In order to have some regularity at the touching point we use the space-time convolutions described in Appendix \ref{sec: convolutions}.
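The regularizing effect of such convolutions can be illustrated with a minimal one-dimensional, space-only sketch (a hypothetical discretization for illustration only; the convolutions used in the proof are space-time): the sup-convolution dominates the original profile and enlarges its positivity set by an $\epsilon$-neighborhood.

```python
import numpy as np

# Minimal 1-D, space-only sketch of a sup-convolution,
# V(x) = sup_{|y - x| <= eps} v(y).
# (Hypothetical discretization; the proof uses space-time windows.)
def sup_convolution(v, dx, eps):
    k = int(round(eps / dx))  # half-width of the sup window in grid points
    return np.array([v[max(i - k, 0):i + k + 1].max() for i in range(len(v))])

x = np.linspace(-2.0, 2.0, 401)        # uniform grid with dx = 0.01
v = np.maximum(1.0 - np.abs(x), 0.0)   # profile with positivity set (-1, 1)
V = sup_convolution(v, dx=0.01, eps=0.25)
# V >= v everywhere, and the positivity set of V is (approximately) the
# eps-neighborhood (-1.25, 1.25) of the positivity set of v.
```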
Let $\epsilon, \delta >0$ with $ \delta < \epsilon/T$, and define the sup-convolution of $v$, \begin{equation} V(x,t) = \sup_{|(y,s)-(x,t)|\leq \epsilon} v(y,s), \end{equation} and the inf-convolution of $u$, \begin{equation} U(x,t) = \inf_{|(y,s)-(x,t)|\leq{\epsilon-\delta t}}Z(y,s), \quad Z(y,s):=\inf_{|(w,\tau)-(y,s)| \leq \epsilon} u(w,\tau), \end{equation} both defined in the domain \begin{equation*} \Sigma_\epsilon :=\mathbb{R}^N\times [\epsilon,T-\epsilon]. \end{equation*} We will show that $V$ still has the following properties: \begin{enumerate}[(a)] \item $\Omega_t(V)$ is strongly star-shaped with respect to a ball of radius larger than $r_0$, \item $V$ is a subsolution of (P-$\lambda[\omega(t)]$) and is initially strictly separated from $U$, $$ \Omega_\epsilon(V) \subset\subset \Omega_\epsilon(U).$$ \end{enumerate} First we show (a): let $\epsilon \leq (r'-r_0)/2$; then \begin{equation} \Omega_t(V) = \bigcup_{|(y,s)|\leq \epsilon} \left(y+\Omega_{t+s}(v)\right). \end{equation} Noting that all of the sets in the above union are star-shaped with respect to $B_{(r_0+r')/2}(0)$ implies that $\Omega_t(V)$ is as well. This proves (a). Now we show (b). Choosing $\epsilon$ smaller if necessary, based on $\delta$ and the modulus of continuity of $\lambda(t)$, the subsolution property for $V$ follows from Lemma \ref{lem: spacetimeconvs}. Then, simply from the initial strict separation of $u$ and $v$ along with the Hausdorff distance continuity of the free boundaries of $u$ and $v$, we also have the strict separation of $U$ and $V$ at time $\epsilon$, \begin{equation*} \Omega_\epsilon(V) \subset \subset \Omega_\epsilon(U). \end{equation*} Now by our assumption we know $V$ crosses $U$ from below for the first time at some point $P_0 = (x_0,t_0) \in\Sigma_\epsilon$.
By the strong maximum principle for subharmonic functions $$x_0\in \Gamma_{t_0}(U)\cap \Gamma_{t_0}(V).$$ At the touching point $P_0$ the positivity set of $V$ has an interior space-time ball of radius $\epsilon$ centered at some point $P_1 = (x_1,t_1) \in \Gamma(v)$, i.e. $|P_1-P_0| = \epsilon$ and $$ B_1 :=\{(y,s) \in \mathbb{R}^N\times[0,T]: |(y,s)-P_1| <\epsilon\}\subset \Omega_+(V).$$ This ball has the tangent hyperplane through $P_0$ $$ H_1 = \{(y,s) \in \mathbb{R}^N\times[0,T]: ((y,s)-P_0)\cdot(P_1-P_0)=0\}.$$ Let $(\nu,m)$ be normal to $H_1$ in the direction inward to $\Omega(V)$ with $|\nu| = 1$; then $m$ is the advance speed of the free boundary $\Gamma(V)$ at $P_0$. We will show below that $m$ is finite, but at this point let us include the possibilities that $m=\infty$ or $m=-\infty$. Similarly, there exists $P_2=(x_2,t_2) \in \Gamma(Z)$ with $|P_2-P_0| = \epsilon - \delta t_0$ and $\Omega(U)$ has an exterior space-time ellipse at $P_0$ of the form $$ \widetilde{B}_2 := \{(y,s) \in \mathbb{R}^N\times[0,T]: |(y,s)-P_2| \leq \epsilon-\delta s\}$$ while there exists $(x_3,t_2)\in\Gamma(u)$ with $|x_2-x_3|=\epsilon$ and $\Omega(u)$ contains the following set: \begin{equation}\label{set} E = \{(y,s) \in \mathbb{R}^N\times[0,T]: |y-w| \leq \epsilon \hbox{ for some } w \hbox{ with } |(w,s)-P_2| <\epsilon-\delta t_0\} \subset \Omega(u), \quad (x_3,t_2) \in\bar{E}\cap \Gamma(u) . \end{equation} Let $\widetilde{H}_2$ be the tangent hyperplane to $\widetilde{B}_2$ at $P_0$ and $H_2$ be the tangent hyperplane to $B_2$ at $P_2$. \begin{lem} $H_1$ is not horizontal. \end{lem} \begin{proof} We refer to the proof of Lemma 2.5 in \cite{Kim03}, which rules out the possibility that $m=+\infty$. \medskip Next let us rule out the possibility that $m=-\infty$. Let $x_3$ and $E$ be the point and the set given in \eqref{set}. Due to the star-shapedness and H\"{o}lder continuity of $\Omega(u)$, it follows that $(x_3,t_2)$ lies in the spatial boundary of $E\cap \{t=t_2\}$.
Take $\tau>0$ sufficiently small that $D_0:=E\cap \{t=t_2-\tau\}$ is nonempty. Let $D_1$ be the top portion of the closure of $E$, i.e., $D_1:= \bar{E} \cap \{t=t_2\}$, and let us define the interpolation of $D_0$ and $D_1$, i.e., $$ D(t) := (1-s(t)) D_0 + s(t)D_1, \hbox{ where } s(t)\hbox{ is linear and satisfies } s(t_2-\tau)=0, s(t_2)=1. $$ We choose $\tau$ small enough that the normal velocity of $\partial D(t)$ is less than $-1$. Now let us consider the space-time domain $$ \Sigma:= \bigcup_{-\tau \leq t-t_2 \leq 0} \left[D(t) \setminus \frac{1}{2}D(t)\right]. $$ Then, due to the fact that $\Sigma\subset E$, the positive set $\Omega(u)$ crosses $\Sigma$ for the first time at $(x_3,t_2)$. Let us consider the classical subsolution $\psi$ in the domain $\Sigma$ satisfying \begin{equation} \left\{\begin{array}{lll} -\Delta_x \psi(x,t) = \lambda[\omega(t)] & \text{for} & (x,t) \in \Sigma \\ \\ \psi =0 & \text{for} & (x,t) \in \partial D(t) \\ \\ \psi = \min_{\frac{1}{2}D(t)} u(x,t)>0 & \text{for} & (x,t)\in \frac{1}{2}\partial D(t) \\ \\ \end{array}\right. \end{equation} Then $\psi$ crosses $u$ from below at $(x_3,t_2)$. One can check from the definition of $u$ and the strong star-shapedness of $\Omega(V)$ that the outer normal $\nu$ of $D_1$ at $x_3$ satisfies $\nu\cdot x_3 < -r_0$ and thus it contradicts Lemma~\ref{lem: barrier property for grad flow}. \end{proof} Let $(\nu',m')$ be the normal to the hyperplane $H_2$ rescaled as before. Then as in Lemma \ref{lem: spacetimeconvs} the normal to $\widetilde{H}_2$ is $(\nu',m'+\delta)$. As a consequence of the ordering $V(x,t) \leq U(x,t)$ for all $ t\leq t_0$ we also get the ordering of the advance speeds, $$ m'+\delta \leq m.$$ Since $V\leq U$ for $t \leq t_0$ we get the following inclusions \begin{equation*} B_1 \cap \{t \leq t_0 \} \subseteq \{U>0\}\cap\{V>0\} \quad \text{ and } \quad \widetilde{B}_2 \cap \{t \leq t_0 \} \subseteq \{U=0\}\cap\{V=0\}.
\end{equation*} In particular $\Omega_{t_0}(U)$ and $\Omega_{t_0}(V)$ both have interior and exterior spatial balls at $x_0$, $$ B_1 \cap \{t = t_0 \} \ \ \text{and} \ \ \widetilde{B}_2 \cap \{t = t_0 \} $$ which must both have centers lying along the same axis, or equivalently both free boundaries have spatial inward normal $\nu$ at $x_0$. From the strong star-shapedness of $\Omega_{t_0}(V)$ (and this is a key point, as we saw in the proof of the above lemma) we get that \begin{equation}\label{eqn: key thing} \nu \cdot x_0 \leq - r''<-r_0. \end{equation} Now the idea is to construct a smooth strict sub-solution $\phi$ which touches $\Gamma(V)$ from below at $(x_0,t_0)$. Then a translation of $\phi$ will touch $u$ from below at $P_2$ from the inadmissible direction, $$\nu = \frac{D\phi}{|D\phi|}(x_0,t_0),$$ leading to a contradiction of Lemma~\ref{lem: barrier property for grad flow}. \begin{lem}\label{lem: advance speed bounds} $$ -1< m \leq M. $$ \end{lem} \begin{proof} The fact that $m\leq M$ follows from a relatively simple barrier argument, based on the condition $v_t\leq M|Dv|$ on $\Gamma_t(v)$ as well as the fact that $\Omega(v)$ has an exterior space-time ball at $(x_1,t_1)$. \medskip It remains to show that $m>-1$. Suppose instead that $m \leq -1$; then \begin{equation}\label{fact} m' \leq -1-\delta. \end{equation} Take $\tau>0$ sufficiently small that $D_0:=B_2\cap \{t=t_2-\tau\}$ is nonempty. We can now construct a classical subsolution $\psi$ in the domain $$\tilde{\Sigma}:=\bigcup_{t_2-\tau \leq t \leq t_2} \left(B_2\setminus\frac{1}{2}B_2\right), $$ similarly as in the proof of $m >-\infty$ in the lemma above, and use \eqref{fact} to derive a contradiction. \end{proof} \begin{lem}\label{lem: nontangential estimate} Near the point $P_0$ we have the nontangential estimate \begin{equation} \liminf_{d\to 0^+}\frac{V(x_0+d\nu,t_0)}{d} \geq \sqrt{m+1}.
\end{equation} \end{lem} \begin{proof} The proof is based on the construction of a barrier for $v$ to yield a contradiction in the event that the lemma fails, and it parallels the proof of Lemma 2.6 in \cite{Kim03}. \end{proof} Now as in \cite{Kim03} for any $\eta>0$ we construct a smooth test function $\phi(x,t)=\phi(|x-x_0|,t)$ with the following properties: \begin{equation} \left\{\begin{array}{lll} -\Delta_x \phi(x,t) < 0 & \text{for} & (x,t) \not\in \tfrac{1}{4}B_2 \\ \\ \phi > 0 & \text{for} & (x,t) \in B_2 \\ \\ \phi <0 & \text{for} & (x,t)\not\in \overline{B_2} \\ \\ |D\phi|(x,t) = \sqrt{m+1}(1-\eta) & \text{ on } & \partial B_2 \cap \{ t = t_2 \}. \end{array}\right. \end{equation} Now from the definition of $m'$ we have for $x \in \partial B_2 \cap \{t=t_2\}$, $$ \tfrac{\phi_t}{|D\phi|}(x,t_2) = m' \leq m-\delta$$ and choosing $\tau$ sufficiently small depending on the $C^2$ norm of $\phi$ and $\eta$ small depending on $\delta$ and $m$ we get for $x \in \partial B_2 \cap \{t_2-\tau\leq t\leq t_2\}$, $$ \tfrac{\phi_t}{|D\phi|}(x,t)<m-\delta/2 < |D\phi|^2-1.$$ Therefore $\phi$ is a strict subsolution in the region $$ (B_2 \setminus \tfrac{1}{4}B_2 )\cap \{t_2-\tau \leq t \leq t_2\}.$$ Next we show that $\phi$ touches $u$ from below at $P_2 = (x_2,t_2)$. For $d>0$ sufficiently small depending on the $C^2$ norm of $\phi$ we have \begin{equation}\label{eqn: growth estimate I} \begin{array}{lll} \phi(x,t) \leq \sqrt{m+1}(1-2\eta)d & \text{ on } & \partial (1-d)B_2 \cap \{t_2-\tau \leq t \leq t_2\} .
\end{array} \end{equation} Let $(x,s) \in B_2$ and let $$d:= d(x,\partial B_2\cap\{t=s\});$$ then since $P_0$ is the center of $B_2$ we have that $|P_0-(x,s)| = \epsilon-\delta t_0-d$ and $$ |(x_0+d\nu,t_0)-(x,s)|\leq |P_0-(x,s)|+d = \epsilon-\delta t_0.$$ Therefore from the definition of $U$ as an infimum and from Lemma~\ref{lem: nontangential estimate}, for any $\eta>0$ there exists $d_0$ such that $d<d_0$ implies \begin{equation}\label{eqn: growth estimate II} \sqrt{m+1}(1-2\eta)d\leq U(x_0+d\nu,t_0) \leq u(x,s). \end{equation} Now combining \eqref{eqn: growth estimate I} and \eqref{eqn: growth estimate II} with the fact that $B_2 \subset\Omega(u)$ and $\phi=0$ on $\partial B_2$ we get that \begin{equation} \begin{array}{lll} \phi(x,t) \leq u(x,t) & \text{ for } & (x,t) \in (\partial (1-d)B_2 \cup \partial B_2) \cap \{t_2-\tau \leq t \leq t_2\}. \end{array} \end{equation} Then since $u-\phi$ is superharmonic, by the strong minimum principle we get, \begin{equation} \begin{array}{lll} \phi(x,t) < u(x,t) & \text{ for } & (x,t) \in (B_2\setminus(1-d)B_2) \cap \{t_2-\tau \leq t \leq t_2\}, \end{array} \end{equation} and $u(P_2) = \phi(P_2)=0$, so $\phi$ touches $u$ from below at $P_2$. This is a contradiction of Lemma~\ref{lem: barrier property for grad flow} since from \eqref{eqn: key thing} $$ \frac{D\phi}{|D\phi|}(x_2,t_2)\cdot x_2 = \nu \cdot x_2 <\nu \cdot x_0<-r_0. $$ \end{proof} \section{Convergence for Solutions with Global in Time Star-Shapedness} Now we can combine the results of the previous sections to get our main result. Under the assumption of sufficient roundness of the initial data $\Omega_0$, phrased in terms of $\rho$-reflection, any continuum limit of the restricted gradient flow of the functional $\mathcal{J}$ is also a global in time viscosity solution of the problem \eqref{eqn: CLMV}. To make this precise let us define the class of weak solutions arising from the gradient flow scheme described in Section \ref{sec: Gradient Flow}.
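Before doing so we record, as a purely illustrative aside, a toy numerical mock-up of the minimizing-movement iteration of Definition \ref{def: JKO scheme}. The energy, squared distance, and admissible class below are simplified one-dimensional surrogates chosen for readability (they are not the $\mathcal{J}$, $\widetilde{\text{dist}}$, and $\mathcal{A}^h$ of this paper), but the same one-line argument as for \eqref{eqn: discrete energy decay} shows that the toy energy is nonincreasing along the iteration, since the previous state is always an admissible competitor.

```python
import numpy as np

# Toy minimizing-movement step: a star-shaped set is encoded by its radial
# profile X(theta); toy_energy and toy_dist2 are simplified surrogates (NOT
# the actual J and dist of the paper); admissibility is a sup-norm movement
# restriction |X - X_prev| <= M*h together with a core radius X >= r0.
def toy_energy(X, theta):
    dtheta = theta[1] - theta[0]
    dX = np.gradient(X, dtheta)
    # perimeter-like term plus a bulk term
    return np.sum(np.sqrt(X**2 + dX**2) + 0.5 * X**2) * dtheta

def toy_dist2(Xa, Xb, theta):
    dtheta = theta[1] - theta[0]
    return np.sum((Xa - Xb) ** 2) * dtheta  # modeled on the angular L^2 distance

def jko_step(X_prev, theta, M, h, r0=0.5, n_candidates=300, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    best, best_val = X_prev, toy_energy(X_prev, theta)  # X_prev is admissible
    for _ in range(n_candidates):
        X = np.clip(X_prev + rng.uniform(-M * h, M * h, X_prev.shape), r0, None)
        val = toy_energy(X, theta) + toy_dist2(X_prev, X, theta) / h
        if val < best_val:
            best, best_val = X, val
    return best

theta = np.linspace(0.0, 2.0 * np.pi, 128)
X = 1.0 + 0.3 * np.cos(3.0 * theta)       # a wobbly initial profile
energies = [toy_energy(X, theta)]
for _ in range(10):
    X = jko_step(X, theta, M=1.0, h=0.05)
    energies.append(toy_energy(X, theta))
# Since X_prev is always an admissible competitor, energies is nonincreasing.
```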
\begin{DEF} An evolution $\omega(t): [0,\infty) \to \mathcal{S}_r$ is an {\it energy solution} of \eqref{eqn: CLMV} if there exist $\omega_{M_k}(t)$, $M_k \to \infty$ as $k \to \infty$, which are minimizing movements of the restricted gradient flow scheme from Definition \ref{def: JKO scheme} with initial data $\omega(0)$ such that, $$ d_H(\omega_{M_k}(t), \omega(t) ) \to 0 \ \hbox{ as } \ k \to \infty \ \hbox{ locally uniformly in } [0,\infty).$$ A droplet profile $u : {\mathbb R}^N \times [0,\infty) \to [0,\infty)$ is an energy solution if $u = u[\omega(t)]$ for an energy solution $\omega(t)$. \end{DEF} Now we can say the following about any energy solution arising from initial data with $\rho$-reflection. \begin{thm}\label{thm: convergence thm} Let $V>0$ and let $\Omega_0$ be a domain in ${\mathbb R}^N$ such that $\Omega_0$ has $\rho$-reflection with $\rho$ satisfying, $$ \rho < C_N V^{\frac{1}{N+1}}, $$ where $C_N$ is a dimensional constant. Then there exists an energy solution $u: {\mathbb R}^N\times[0,+\infty)\to[0,+\infty)$. Moreover, for any energy solution $u$, the following holds: \begin{itemize} \item[(a)] $u$ is also a viscosity solution of the free boundary problem, \begin{equation}\tag{P-V} \left \{ \begin{array}{lll} -\Delta u(x,t) = \lambda(t) & \text{ in }& \Omega_t(u)\\ \\ u_t = (|Du|^2-1)|Du| & \text{ on } & \Gamma_t(u) \\ \\ \{u(\cdot,0)>0\} = \Omega_0, \end{array}\right. \end{equation} where $\lambda(t)$ is chosen to enforce the volume constraint for all $t>0$, $$ \int u(x,t)dx = V. $$ \item[(b)]The positivity set $\Omega_t(u)$ has $\rho$-reflection for all $t>0$ with $\inf_{x \in \Gamma_t(u)}(|x|-\rho)$ bounded below. 
The energy $\mathcal{J}$ is decreasing along the flow and additionally $u$ satisfies the energy decay estimate for all $t>s\geq0$, $$ \frac{1}{t-s}\widetilde{\textup{dist}}(\Omega_s(u),\Omega_t(u)) \lesssim_{\rho,N} \mathcal{J}(\Omega_s(u))-\mathcal{J}(\Omega_t(u)).$$ \item[(c)]The flow of the sets $\Omega_t(u)$ converges uniformly modulo translation to the radially symmetric equilibrium solution, $$ \frac{\lambda_* }{2N}(r_*^2-|x|^2)_+,$$ where $r_*$ is given in \eqref{eqn: equilibrium radius} and $\lambda_*$ can be calculated from the volume constraint. More precisely we mean that, $$ \inf \{ d_H(\Omega_t(u),B_{r*}(x)) : x \in \overline{B_{\rho}(0)}\} \to 0 \hbox{ as } t \to \infty. $$ \end{itemize} \end{thm} \begin{proof} 1. The existence proof as well as the energy estimate follow from a fairly straightforward combination of the results we have already proven. We give an outline of the proof. Given the assumption that $\Omega_0$ has $\rho$-reflection, we know, due to the a priori estimate Lemma \ref{lem: reflection for all time}, that any solution of \eqref{eqn: CLMV} will be in $\mathcal{S}_r$ for some $r>0$ as long as it exists. Then we construct the restricted gradient flow solution with initial data $\Omega_0$ as in Section \ref{sec: Gradient Flow}, where the restriction on the radius of the strong star-shapedness is strictly weaker than that from the a priori estimate. Given the Lagrange multiplier $\lambda(t)$ associated with the restricted gradient flow we then solve \eqref{eqn: CLML}. Then the idea is that the viscosity solution of \eqref{eqn: CLML} is strongly star-shaped with a larger radius than the restriction on the gradient flow, and so we will be able to use Proposition \ref{prop: comparison for grad flow} to show that the two solutions agree for all time. \medskip Let $b>0$ be such that \begin{equation*} \begin{array}{lll} B_{(1+b)\rho}(0)\subset \Omega_0& \hbox{ and } &(1+b)\rho <C_NV^{\frac{1}{N+1}}. 
\end{array} \end{equation*} Then we expect, thanks to the a priori estimate Lemma \ref{lem: reflection for all time}, that $B_{(1+b)\rho}(0)$ will be contained in $\Omega_t$ for all time. In particular we expect $\Omega_t$ will be in $\mathcal{S}_{r}$ for $r = b\rho$. Let $R=R(r)>0$ be so large that any set in $\mathcal{S}_{r}$ which touches $B_R$ from the inside must have larger energy than $\Omega_0$, and then define $\gamma = \tfrac{r}{2R}$. That it is possible to choose $R$ in this way is described in Lemma \ref{lem: htozero}. Let $M>0$ and let $\omega_M(t)=\omega_{M, \gamma r}(t)$ be the restricted gradient flow solution with initial data $\Omega_0$ as constructed in the last section. The notation indicates that $\omega_{M,\gamma r}(t)$ is restricted to remain in $\mathcal{S}_{r}$ and its free boundary can move no faster than $M$. We suppress the dependence on $\gamma r$. Define the putative Lagrange multiplier $\lambda_M$ to be $$ \lambda_M(t) := \lambda[\omega_{M}(t)],$$ and let $u_M$ be the possibly discontinuous viscosity solution of $(P-\lambda_M)$ constructed by Perron's method in Theorem \ref{thm: short time existence}. We will show that $u_M$ and $u[\omega_{M}]$ are the same. \medskip Let $I$ be the largest interval containing the origin on which $u_M$ and $u[\omega_M]$ agree. We will show that $I$ is open and closed in $[0,+\infty)$. Since $u_M$ agrees with $u[\omega_M]$ on $I$ it is continuous and has constant volume, and thus it is a volume preserving viscosity solution. Therefore Lemma \ref{lem: reflection for all time} implies that: $$\Omega_t(u_M) \hbox{ has $\rho$-reflection and } B_{ (1+b)\rho} \subset \Omega_t(u_M) \hbox{ for $t \in I$}.$$ In particular $\Omega_t(u_M)$ is in $\mathcal{S}_{r}$ on $I$. Suppose $I= [0,T)$ for some $T>0$; then the short time existence theorem implies that, for some small $t_0(r,R)$, $u_M$ is continuous on $[0,T+t_0)$. Since the set where two continuous functions agree is closed, $I=[0,T]$. 
Suppose $I=[0,T]$ where $T$ may be equal to zero. Then thanks to Lemma \ref{lem: GK short time ss} there exists $t_M>0$ such that: $$\Omega_t(u_M)\in\mathcal{S}_{r'} \hbox{ for some }r'>\gamma r \hbox{ on } [0,T+t_M).$$ Then from Proposition \ref{prop: comparison for grad flow} $u[\omega_M]$ satisfies a strong comparison principle with respect to viscosity solutions which are strongly star-shaped with a larger radius than $\gamma r$, so $u_M = u[\omega_M]$ on $[0,T+t_M)$. Therefore $I= [0,+\infty)$ and there exists a global in time continuous viscosity solution of \eqref{eqn: CLMV}$_M$ which has $\rho$-reflection and is in $\mathcal{S}_{r,R}$ for all $t>0$. \medskip 2. Now we show the existence for \eqref{eqn: CLMV} without the restriction on a maximum speed. The key point in this case is the equicontinuity afforded by the fact that the $\Omega_t(u_M) \in \mathcal{S}_{r,R}$ with $r,R$ independent of $M$. In particular from Corollary \ref{cor: equicontinuity} we get for some $C>0$ and $ \alpha\in(0,1)$ independent of $M$, \begin{equation}\label{eqn: holder for uM} d_H(\Gamma_t(u_M),\Gamma_s(u_M)) \leq C |t-s|^\alpha. \end{equation} Then we also derive, thanks to Lemma \ref{lem: hausdorff estimates}, the equicontinuity of the Lagrange multipliers $\lambda_M(t)$. Taking a subsequence such that the $\lambda_M(t)$ converge uniformly on compact subsets of $[0,+\infty)$ to some $\lambda(t)$ we can apply Lemma~\ref{lem: sending M to infty}. We get that along this subsequence the $u_M$ converge uniformly to a viscosity solution $u$ of \eqref{eqn: CLMV} with free boundary velocity $F(|Du|) = |Du|^2-1$. Due to the uniform convergence of $\Omega_t(u_M)$ to $\Omega_t(u)$ in the Hausdorff distance sense, the energy estimates for the $u_M$ and the H\"{o}lder regularity in time \eqref{eqn: holder for uM} carry over to $u$. Then $u$ is an energy solution by definition. 
Unfortunately, by the compactness method we do not obtain any uniqueness of the limiting Lagrange multiplier $\lambda(t)$. \medskip 3. Now we show that any subsequential limit of the $u(\cdot,t_n)$ must be a viscosity solution of the equilibrium problem \eqref{eqn: EQ}. Note that the same result is true for all the $u_M$. \begin{Claim} Let $t_n\to \infty$ such that $\Omega_{t_n}(u)$ converges in Hausdorff topology to $\Omega_\infty$. Then $u[\Omega_\infty]$ is a stationary solution of \eqref{eqn: CLMV}, that is, it solves the equilibrium problem \eqref{eqn: EQ} in the viscosity sense. \end{Claim} Define $U_n: [0,+\infty) \to \mathcal{S}_{r,R}$ by \begin{equation} U_n(t) := \Omega_{t+t_n}, \end{equation} and we also consider the viscosity solutions of \eqref{eqn: CLMV} which lie above the $U_n(t)$, \begin{equation} \begin{array}{lll} v_n(x,t) := u[U_n(t)](x) & \text{ with Lagrange Multipliers } & \lambda_n(t) := \lambda[U_n(t)]. \end{array} \end{equation} \medskip First we show that, uniformly in $t>0$, $$\mathcal{J}(U_n(t)) \to \mathcal{J}(\Omega_\infty).$$ Since $\mathcal{J}(U_n(t))$ is monotone decreasing for all $n>0$ we have for all $t>0$ $$ \mathcal{J}(\Omega_\infty) = \inf_{s>0} \mathcal{J}(\Omega_s) \leq \mathcal{J}(U_n(t)) \leq \mathcal{J}(U_n(0))= \mathcal{J}(\Omega_{t_n}),$$ but due to Lemma \ref{lem: hausdorff estimates} along with the convergence $\Omega_{t_n}\to \Omega_\infty$ in Hausdorff distance, $$\mathcal{J}(\Omega_{t_n}) \searrow \mathcal{J}(\Omega_\infty).$$ \medskip Now we will show that up to a subsequence the $v_n$ converge uniformly on compact time intervals. 
Recall the uniform H\"{o}lder estimates from \eqref{eqn: holder for uM} for $\Omega_t(u_M)$, which carry over in the limit to $\Omega_t(u)$, \begin{equation} \begin{array}{lll} d_H(U_n(t),U_n(s)) \leq C|t-s|^\alpha, \end{array} \end{equation} and from Lemma \ref{lem: htozero} the energy estimates, \begin{equation}\label{eqn: energy est for Un} \frac{C}{t-s}\widetilde{\text{dist}}(U_n(s),U_n(t))\leq \mathcal{J}(U_n(s))-\mathcal{J}(U_n(t)). \end{equation} The paths $U_n$ are a sequence of equicontinuous maps into $( \mathcal{S}_{r,R},d_H)$ and so from the Compactness Lemma \ref{lem: compactness} there exists $U_\infty: [0,+\infty) \to \mathcal{S}_{r,R}$ with $U_\infty(0) = \Omega_\infty$ such that up to taking a subsequence, $$ U_n \to U_\infty \text{ uniformly on compact subsets of } [0,+\infty).$$ \medskip Now we show that $v_\infty := u[U_\infty(t)]$ is a stationary viscosity solution of \eqref{eqn: CLMV}. From Lemma \ref{lem: hausdorff estimates} we get the following: \begin{enumerate}[(i)] \item $v_n \to v_\infty$ uniformly in $(x,t)$ on compact time intervals, and therefore from the stability of viscosity solutions under uniform convergence -- Lemma \ref{lem: uniform stability} -- $v_\infty$ is a viscosity solution of \eqref{eqn: CLMV}. \item $\widetilde{\text{dist}}(U_n(s),U_n(t)) \to \widetilde{\text{dist}}(U_\infty(s),U_\infty(t)) $ uniformly on compact subsets of $[0,+\infty)\times[0,+\infty)$. \end{enumerate} Combining (ii) with \eqref{eqn: energy est for Un} we derive the energy estimate for $U_\infty(\cdot)$ for all $t>s>0$: \begin{equation} \frac{C}{t-s}\widetilde{\text{dist}}(U_\infty(s),U_\infty(t))\leq \mathcal{J}(U_\infty(s))-\mathcal{J}(U_\infty(t)) = 0. \end{equation} So $v_\infty = u[\Omega_\infty]$ is a stationary viscosity solution of \eqref{eqn: CLMV}. Then due to Theorem~\ref{thm: serrin}, Theorem~\ref{thm: de silva} and Corollary~\ref{smoothness} it follows that $\Omega_{\infty} = B_{r*}(x_0)$ for some point $x_0 \in {\mathbb R}^N$. 
Actually $x_0$ is not completely arbitrary since we know that $\Omega_\infty$ must have $\rho$-reflection. One can easily check that this implies $x_0 \in \overline{B_\rho(0)}$. \medskip 4. Finally we show that the convergence is uniform modulo translation. Suppose that there exists a sequence of times $t_n \to \infty$ and a $\delta>0$ such that $$ \inf_{x \ \in \overline{B_\rho(0)}} d_H(\Omega_{t_n}(u),B_{r*}(x)) > \delta. $$ By taking a subsequence of the $t_n$ we may assume that $\Omega_{t_n}(u)$ converges in Hausdorff distance to some $\Omega_\infty$. By part 3 of the argument $\Omega_\infty$ must be equal to $B_{r*}(x_0)$ for some $x_0 \in B_\rho(0)$. Choosing $n$ sufficiently large so that $$ d_H(\Omega_{t_n}(u),B_{r*}(x_0)) < \delta $$ we derive a contradiction. \end{proof} Note that the above theorem leaves open the possibility that the drop oscillates between a family of round drops, with its speed going to zero but not fast enough to converge to a single profile. Below we show that, if the drops are sufficiently regular at large times, then this does not happen. Such regularity, when the time is sufficiently large so that the profile of $u$ is sufficiently close to a round one, is suspected to be true in light of existing results introduced by Caffarelli et al.\ (see e.g.\ the book \cite{CS}), but verifying this for our setting would be highly nontrivial and thus we do not pursue this question here. \begin{prop}\label{conditional} \textup{(Conditional uniqueness of the limit)} Suppose additionally that the $u(\cdot,t)$ are uniformly $C^{1,\alpha}$; then $\Omega_t \to B_{r*}(x_0)$ for some $x_0$. \end{prop} \begin{proof} Let $t_n \to \infty$ be a sequence of times along which $\Gamma_{t_n}(u)$ converges in Hausdorff distance. The limit is a sphere $\partial B_{r*}(x_0)$ for some $x_0$ which is compatible with the reflection symmetry of $\Omega$. By translating we may assume that $x_0 = 0$. 
Note that under this translation $\Omega_t(u)$ need no longer have $\rho$-reflection or be strongly star-shaped with respect to the origin; this will not affect the proof. From Lemma \ref{lem: hausdorff estimates} we get that $u(\cdot,t_n)$ converge uniformly to $$ u_{EQ} = \frac{\lambda_*}{2N}(r_*^2-|x|^2)_+. $$ Due to the assumption, we have $$ \sup_n||u(\cdot,t_n)||_{C^{1,\alpha}}<C_1<+\infty. $$ In particular the $Du(\cdot,t_n)$ are uniformly bounded and equicontinuous, so they must converge uniformly to $Du_{EQ}$. Let $\delta>0$ be small enough that $C_1 \delta ^\alpha \leq 1/2$ and choose $n$ sufficiently large that \begin{equation*} \begin{array}{lll} d_H(\Gamma_{t_n}(u),\partial B_{r*}) \leq \delta &\hbox{ and }& ||Du(\cdot,t_n)-Du_{EQ}||_\infty \leq \delta. \end{array} \end{equation*} Now let $x \in \Gamma_{t_n}(u)$ and let $y=y(x) \in \partial B_{r_*}$ be such that $|x-y| = d(x,\partial B_{r_*})$. We calculate, $$ |Du(x,t_n)-Du_{EQ}(y)| \leq ||Du(\cdot,t_n)-Du_{EQ}||_\infty+C_1d(x,\partial B_{r_*})^\alpha \leq C\delta^\alpha $$ so, since $|Du_{EQ}|=1$ on $\partial B_{r*}$, we have that $|Du(x,t_n)|\geq 1/2$ and $$ |\tfrac{Du}{|Du|}(x,t_n)-\tfrac{y}{|y|}| \leq C\delta^\alpha. $$ Rephrasing the above in terms of the interior normal field $\nu_n$ to $\Gamma_{t_n}(u)$ we get that \begin{equation} \langle \nu_n(x),x\rangle \geq 1-o(1). \end{equation} In particular this means, thanks to Lemma \ref{lem: lip norm closeness}, that for any $\rho>0$ there exists $n$ sufficiently large so that $\Omega_{t_n}(u)$ has $\rho$-reflection. Since $\rho$-reflection is preserved under the flow for $\rho$ small, this means that for every $\rho>0$ there is a $T$ such that for $t\geq T$, $\Omega_t(u)$ has $\rho$-reflection. Thus any subsequential limit of the $\Omega_t(u)$ has $\rho$-reflection for every $\rho>0$ and must be a ball centered at the origin. \end{proof} There is one nontrivial case where we can say that the limiting equilibrium solution is unique. 
When the initial data is symmetric with respect to $N$ mutually orthogonal hyperplanes through the origin, in addition to satisfying all the conditions given in the statement of Theorem \ref{thm: convergence thm}, the limit is unique. In this case the reflection symmetries are preserved by the equation, and so any Hausdorff distance limit of the $\Omega_t(u)$ will share these symmetries. Then it is elementary to check that the only ball of radius $r_*$ which is symmetric with respect to $N$ orthogonal axes through the origin is in fact centered at the origin. We record this fact in the following corollary: \begin{cor} If $\Omega_0$ satisfies all the conditions of Theorem \ref{thm: convergence thm} and additionally is symmetric with respect to $N$ mutually orthogonal hyperplanes through the origin, then any solution $u$ of \eqref{eqn: CLMV} with initial data $\Omega_0$ constructed from the discrete gradient flow scheme of Section \ref{sec: Gradient Flow} satisfies, $$ d_H(\Omega_t(u),B_{r*}(0)) \to 0 \ \hbox{ as } \ t \to \infty. $$ \end{cor}
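For the reader's convenience we close with a sketch of how the equilibrium parameters $r_*$ and $\lambda_*$ appearing in Theorem \ref{thm: convergence thm}(c) are determined; this is a heuristic computation, assuming only that the stationarity condition reduces to $|Du|=1$ on the free boundary, i.e.\ (P-V) with $u_t=0$. For $u_{EQ}(x)=\frac{\lambda_*}{2N}(r_*^2-|x|^2)_+$ we have $-\Delta u_{EQ}=\lambda_*$ in $B_{r_*}$ and $|Du_{EQ}|=\frac{\lambda_*}{N}|x|$, so $|Du_{EQ}|=1$ on $\partial B_{r_*}$ forces $\lambda_*=N/r_*$. The volume constraint then yields $$ V \;=\; \frac{\lambda_*}{2N}\int_{B_{r_*}}(r_*^2-|x|^2)\,dx \;=\; \frac{\lambda_*\,\omega_N\,r_*^{N+2}}{N(N+2)} \;=\; \frac{\omega_N\,r_*^{N+1}}{N+2}, $$ where $\omega_N=|B_1|$, so that $r_*=\left((N+2)V/\omega_N\right)^{\frac{1}{N+1}}$. Note that this is consistent with the scaling $\rho<C_NV^{\frac{1}{N+1}}$ in the hypothesis of Theorem \ref{thm: convergence thm}.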
\section{Introduction} The derivation of the chiral Lagrangian of the strong interactions \cite{gasser} from QCD is still an open question. This Lagrangian describes the pseudoscalar octet -- the $\pi$'s, $K$'s and the $\eta$ -- in a way that is consistent with the scenario of spontaneously broken chiral symmetry with the pseudoscalars as the Goldstone bosons. Whatever the details of such a derivation may be, it is clear that one of its ingredients must be bosonization, simply because the $\pi$'s, $K$'s and $\eta$ are bosons. There exists a physical and intuitively clear picture of how the chiral Lagrangian arises from QCD [2--4]. In ref.\ \cite{diakonov} Diakonov and Eides gave a formula for the classical action $W$ of a given configuration of pseudoscalar mesons $\pi^a(x)$ in the chiral limit. In our notation\footnote{Compared to refs.\ \cite{diakonov,balog}, the ratio in eq.\ (\ref{DE}) is turned upside down -- see below.}, \begin{eqnarray} \label{DE} e^{iW[\pi]} & = & \frac{\int{\cal D}B{\cal D}q{\cal D}\bar q \exp i\left(S[B]+\int d^4x\bar q\;i\!\!\not\!\! D\; q\right)} {\int{\cal D}B{\cal D}q{\cal D}\bar q \exp i\left(S[B]+\int d^4x\bar qe^{i\Pi\gamma_5}\;i\!\!\not\!\! D\; e^{i\Pi\gamma_5}q\right)}, \end{eqnarray} where $B_\mu$ is the gluon field, $S$ is the gluon action, $q$ are the $u$, $d$, and $s$ quarks and $\Pi\equiv\pi^at^a/f$, with $f=93 \mbox{ MeV}$ the pseudoscalar decay constant. It is understood that the quarks both in the numerator and in the denominator are regularized with a (Wess--Zumino consistent) vector current conserving scheme. This renders the Jacobian $J[\pi]$ of a local chiral rotation non-trivial, i.e.\ $W[\pi]$ is non-zero \cite{diakonov}, and we can make the identification \begin{eqnarray} e^{iW[\pi]} & = & J[\pi]. \end{eqnarray} The physical meaning of eq.\ (\ref{DE}) \cite{balog} is that the classical action $W$ of the pseudoscalars is the difference in ``free energy'' between chirally rotated QCD and QCD without this rotation. 
The pseudoscalars, being the parameters of the chiral rotations, are thus identified as Goldstone bosons of spontaneously broken chiral symmetry. Eq.\ (\ref{DE}) has been taken as the basis for deriving a chiral Lagrangian from a cut-off version of QCD [2--4]. Furthermore, an interesting connection between the underlying idea of this formula and bosonization was pointed out \cite{andrianov,andrianov2}. In particular, Andrianov noted that the Jacobian of a chiral rotation of non-Abelian fermions realizes bosonization in two dimensions (in the chiral limit) \cite{andrianov}. This motivates us to study this particular approach to two dimensional bosonization. Of course, the literature on two dimensional bosonization is extensive, even if one restricts oneself to ``path integral methods''. However, although these methods use some of the same ingredients (Jacobians of chiral transformations, identification of the bosonic variables with parameters of chiral rotations, etc.), they are essentially different when massive fermions are considered. For example, some approaches use path integrals in combination with expansions in Green's functions. These cannot be generalized to four dimensions, since Green's functions cannot in general be calculated explicitly in that case. This mass-term problem justifies a systematic treatment of Andrianov's scheme. In this paper, then, we formulate a complete path integral bosonization procedure for any fermionic theory in two dimensions, based on Jacobians of chiral rotations. It is a generalization of the bosonization procedure in the sense of Andrianov and can be used also for fermionic models with mass terms. That is, fermions coupled to Abelian or non-Abelian $V$, $A$, $S$ and $P$ sources can be bosonized with this procedure. The bosonization is complete in the sense that a tensor is equivalent to a scalar in two dimensions. The paper is organized as follows. 
In sec.\ 2 we formulate the bosonization procedure and give a general bosonization formula. We also discuss the sign of the action $W$. The ratio in eq.\ (\ref{DE}) is turned upside down compared to refs.\ \cite{diakonov,balog} in order to produce the right sign \cite{eides,andrianov}, and we give a physical argument for why the ratio should be turned the way it is in eq.\ (\ref{DE}). We work out some examples in sec.\ 3 to demonstrate the power and correctness of the procedure. In sec.\ 4 we discuss some properties of the procedure and speculate on generalizations to four dimensions. General expressions in Minkowski space for the Jacobians of a chiral rotation are given in an appendix. \section{The bosonization procedure} Before we consider bosonization we give a physical interpretation of the sign of $W$ in eq.\ (\ref{DE}). As we have mentioned, the sign should be reversed \cite{eides,andrianov} compared to the expressions in refs.\ \cite{diakonov,balog}. A Grassmann integral over fermion fields describes a second quantized system of fermions with a filled Dirac sea\footnote{See e.g.\ ref.\ \cite{jackiw} for a related discussion of the Dirac sea in the context of chiral anomalies.}. The Dirac sea constitutes a {\em perturbative vacuum} for the fermions but is not necessarily the ground state. Thus, when the system is chirally rotated -- which disturbs the Dirac sea -- it is perfectly possible that the ``free energy'' is {\em lowered}, rather than increased. This is apparently what happens, at least for QCD and the two-dimensional models we shall consider. Therefore, if we wish to describe the pseudoscalars with a {\em positive} ``free energy'' we must turn the ratio the way it is turned in eq.\ (\ref{DE}). Let us now turn to two dimensional bosonization. 
The basic formula which we must consider is \begin{eqnarray} e^{iW[\theta]} & = & \frac{\int{\cal D}\psi{\cal D}\bar\psi \exp i\int d^2x\bar\psi[i\!\!\not\!\partial-\Gamma]\psi} {\int{\cal D}\psi{\cal D}\bar\psi \exp i\int d^2x\bar\psi e^{i\theta\gamma_5}[i\!\!\not\!\partial-\Gamma] e^{i\theta\gamma_5}\psi} \end{eqnarray} where $\psi$ is a Dirac fermion, $\Gamma=\;\;\!\!\not\! V+\;\!\!\not\!\! A\gamma_5 +S+i\gamma_5P$ are external $V$, $A$, $S$ and $P$ sources and $\theta$ is the pseudoscalar field -- the parameter of chiral rotations. In the non-Abelian case $\psi$ is in the fundamental representation of, say, $SU(N)$ and all quantities are matrices, $\theta\equiv\theta^at^a$, etc. The functional $W[\theta]$ is, as we have noted, the {\em classical} action for the pseudoscalars. Thus, in order to get a {\em quantum} theory, we must path integrate over $e^{iW[\theta]}$, leading to the partition function of the bosonized theory: \begin{eqnarray} Z[V,A,S,P] & = & \int{\cal D}\theta e^{iW[\theta;V,A,S,P]} \end{eqnarray} where we have explicitly displayed the $V$, $A$, $S$ and $P$ dependence of $W$. The integration measure should be properly normalized and have the appropriate invariance properties. For example, in the non-Abelian case we should use the Haar measure. The quantum theory of $\theta$ is not automatically regularized. That part is left to the physicist who wishes to do calculations with the bosonic theory. The bosonization procedure is then as follows. Calculate the Jacobian $J[\theta]=e^{iW[\theta]}$ of the finite local chiral rotation with $\theta(x)$ as the rotation angles. Then the partition function of the bosonized theory is found by path integrating $J[\theta]$ over $\theta$, using the appropriate measure for the integration. It is important that we use a consistent regularization scheme that conserves vector currents in order to produce the correct action for the bosons. 
More precisely, we should use the {\em most general} regularization scheme with this requirement. We should not, for example, choose specific simplifying values for the free parameters of the scheme because we would then lose information. We will see examples of this in the next section. For convenience and completeness we give the full Minkowski space expressions for the chiral Jacobians in an appendix. \section{Examples} We can now demonstrate that this bosonization procedure reproduces known results in the cases of the Schwinger model, the massive Thirring model and massive non-Abelian bosonization. \bigskip \noindent {\em i) The Schwinger model} \cite{schwinger} Due to the structural simplicity of this model, this example could be regarded as a warm-up. The partition function is \begin{eqnarray} Z & = & \int{\cal D}A{\cal D}\psi{\cal D}\bar\psi \exp i\int d^2x\left(-\mbox{$\frac{1}{4}$}F_{\mu\nu}^2 +\bar\psi[i\!\!\not\!\partial-e\!\!\not\!\! A]\psi\right) \end{eqnarray} We choose Lorentz gauge for the photon field, $\partial_\mu A^\mu=0$. Performing a chiral transformation of the fermion gives the Jacobian \begin{eqnarray} J[\theta] & = & \exp i\int d^2x \left(\frac{1}{2\pi}\partial_\mu\theta\partial^\mu\theta -\frac{e}{\pi}A_\mu\epsilon^{\mu\nu}\partial_\nu\theta\right) \end{eqnarray} We bosonize the fermionic part of the partition function by replacing it by this Jacobian, leading to \begin{eqnarray} \nonumber Z & = & \int{\cal D}A\int{\cal D}\theta\exp i\int d^2x \left(-\mbox{$\frac{1}{4}$}F_{\mu\nu}^2 +\frac{1}{2\pi}\partial_\mu\theta\partial^\mu\theta -\frac{e}{\pi}A_\mu\epsilon^{\mu\nu}\partial_\nu\theta\right) \\ & = & \int{\cal D}A\exp i\int d^2x \left(-\mbox{$\frac{1}{4}$}F_{\mu\nu}^2+\frac{e^2}{2\pi}A^2\right), \end{eqnarray} which is appropriate for the description of a vector field with mass $m=e/\sqrt{\pi}$. 
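For completeness we record the Gaussian $\theta$-integration behind the last step; this is a sketch, with signs fixed by the conventions $g_{\mu\nu}=\mathrm{diag}(+,-)$ and $\epsilon^{01}=1$, for which $\epsilon^{\mu\nu}\epsilon^{\alpha\beta}=-(g^{\mu\alpha}g^{\nu\beta}-g^{\mu\beta}g^{\nu\alpha})$. After partial integration the $\theta$-dependent part of the action reads $-\frac{1}{2\pi}\theta\Box\theta+J\theta$ with $J=\frac{e}{\pi}\epsilon^{\mu\nu}\partial_\nu A_\mu$, and evaluating it at the stationary point $\theta_*=\pi\Box^{-1}J$ gives \begin{eqnarray} \frac{1}{2}\int d^2x\,J\theta_* & = & \frac{\pi}{2}\int d^2x\,J\frac{1}{\Box}J \;=\; \frac{e^2}{2\pi}\int d^2x\,A_\mu A^\mu, \nonumber \end{eqnarray} where the last equality uses $\epsilon^{\mu\nu}\epsilon^{\alpha\beta}\partial_\nu\partial_\beta A_\alpha=-\Box A^\mu$ in Lorentz gauge. This reproduces the mass term quoted above.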
\bigskip \noindent {\em ii) The massive Thirring model} [9--12] We will only consider the mass-term bosonization rules for simplicity and because it is the hardest part in the path integral formalism. Current bosonization rules can be obtained in the same way by coupling the Thirring fermion to external vector and axial vector sources. The partition function with external scalar sources $m(x)$ and $m^\dagger(x)$ is \begin{eqnarray} \nonumber Z[m,m^\dagger] & = & \int{\cal D}\psi{\cal D}\bar\psi\exp i\int d^2x \left(\bar\psi[i\!\!\not\!\partial-mP_+-m^\dagger P_-]\psi-\mbox{$\frac{1}{2}$} gj^2\right) \\ & = & \int{\cal D}B{\cal D}\psi{\cal D}\bar\psi\exp i\int d^2x \left(\bar\psi[i\!\!\not\!\partial-\:\!\!\not\!\! B-mP_+-m^\dagger P_-]\psi +\frac{1}{2g}B^2\right) \end{eqnarray} where $P_\pm=\mbox{$\frac{1}{2}$}(1\pm\gamma_5)$ and $j_\mu=\bar\psi\gamma_\mu\psi$. The bosonization of the fermion is given by the chiral Jacobian: \begin{eqnarray} \nonumber J[\theta] & = & \exp i\int d^2x \bigg(\frac{1}{2\pi}\partial_\mu\theta\partial^\mu\theta -\frac{1}{\pi}B_\mu\epsilon^{\mu\nu}\partial_\nu\theta \\ & & \mbox{} \hspace{6em} +\frac{1}{4\pi}\kappa_1m(e^{2i\theta}-1) +\frac{1}{4\pi}\kappa_1m^\dagger(e^{-2i\theta}-1) \\ \nonumber & & \mbox{} \hspace{6em} +\frac{1}{8\pi}m^2(e^{4i\theta}-1) +\frac{1}{8\pi}m^{\dagger 2}(e^{-4i\theta}-1)\bigg). \end{eqnarray} The coefficient $\kappa_1$ is an arbitrary mass which appears, for example, in the Pauli--Villars scheme \cite{damgaard,ball}. 
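Since $B_\mu$ enters quadratically, it can be integrated out exactly before the $\theta$-integration. As a short sketch (using the two-dimensional identity $\epsilon^{\mu\alpha}\epsilon_{\mu}{}^{\nu}=-g^{\nu\alpha}$, valid for $g_{\mu\nu}=\mathrm{diag}(+,-)$), the stationary point and the resulting contribution are \begin{eqnarray} B_*^\mu \;=\; \frac{g}{\pi}\epsilon^{\mu\nu}\partial_\nu\theta, \qquad -\frac{1}{\pi}B_{*\mu}\epsilon^{\mu\nu}\partial_\nu\theta+\frac{1}{2g}B_{*\mu}B_*^\mu \;=\; \frac{g}{2\pi^2}\partial_\mu\theta\partial^\mu\theta, \nonumber \end{eqnarray} so that the total kinetic term becomes $\frac{1}{2\pi}\left(1+\frac{g}{\pi}\right)\partial_\mu\theta\partial^\mu\theta$. This is the origin of the relation between the Thirring coupling $g$ and the sine--Gordon coupling $\beta$ of eq.\ (\ref{coupling}): canonical normalization of this kinetic term fixes the rescaling of $\theta$.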
Bosonization is now completed by taking the path integral over $\theta$: \begin{eqnarray} \nonumber Z[m,m^\dagger] & = & \int{\cal D}B{\cal D}\theta\exp i\int d^2x \bigg(\frac{1}{2\pi}\partial_\mu\theta\partial^\mu\theta -\frac{1}{\pi}B_\mu\epsilon^{\mu\nu}\partial_\nu\theta +\frac{1}{2g}B^2 \\ \nonumber & & \mbox{} \hspace{10em} +\frac{1}{4\pi}\kappa_1m(e^{2i\theta}-1) +\frac{1}{4\pi}\kappa_1m^\dagger(e^{-2i\theta}-1) \\ & & \mbox{} \hspace{10em} +\frac{1}{8\pi}m^2(e^{4i\theta}-1) +\frac{1}{8\pi}m^{\dagger 2}(e^{-4i\theta}-1)\bigg) \\ \nonumber & = & \int{\cal D}\varphi\exp i\int d^2x \bigg(\mbox{$\frac{1}{2}$}\partial_\mu\varphi\partial^\mu\varphi +\frac{1}{4\pi}\kappa_1m(e^{i\beta\varphi}-1) +\frac{1}{4\pi}\kappa_1m^\dagger(e^{-i\beta\varphi}-1) \\ \nonumber & & \mbox{} \hspace{9em} +\frac{1}{8\pi}m^2(e^{2i\beta\varphi}-1) +\frac{1}{8\pi}m^{\dagger 2}(e^{-2i\beta\varphi}-1)\bigg) \end{eqnarray} In the second expression we have integrated out $B_\mu$ and rescaled $\theta$, \begin{eqnarray} \varphi & = & \sqrt{\frac{1}{\pi}\left(1+\frac{g}{\pi}\right)}\;\theta, \end{eqnarray} and introduced the parameter $\beta$ by \begin{eqnarray} \label{coupling} \frac{4\pi}{\beta^2} & = & 1+\frac{g}{\pi}. \end{eqnarray} We are allowed to add suitable polynomial counterterms in $m$ and $m^\dagger$ to the action. If we choose \begin{eqnarray} {\cal L}_{ct} & = & \frac{1}{4\pi}\kappa_1(m+m^\dagger) +\frac{1}{8\pi}(m^2+m^{\dagger 2}) \end{eqnarray} we can read off the bosonization rules: \begin{eqnarray} \label{rules} \nonumber -\sigma_+ & = & \frac{\kappa_1}{4\pi}e^{i\beta\varphi} +\frac{1}{4\pi}me^{2i\beta\varphi}, \\ -\sigma_- & = & \frac{\kappa_1}{4\pi}e^{-i\beta\varphi} +\frac{1}{4\pi}m^\dagger e^{-2i\beta\varphi}, \end{eqnarray} with $\sigma_\pm\equiv\bar\psi P_\pm\psi$. The presence of the $m$- and $m^\dagger$-dependent terms makes this a ``field dependent'' bosonization rule \cite{divecchia}. Modulo these terms -- which generate contact terms in the Green's functions -- eqs. 
(\ref{rules}) are the usual mass bosonization rules \cite{coleman,dorn}. We emphasize that we should not choose any special values for $\kappa_1$. In particular, we should not choose $\kappa_1=0$, since we would then lose the sine in the sine--Gordon equation of the bosonic theory. A similar arbitrary mass is also present in Coleman's expressions \cite{coleman}. \bigskip \noindent {\em iii) The massive non-Abelian case} \cite{dorn,polyakov,witten} Here too we will only consider mass-term bosonization -- bosonization of the currents, as we have already mentioned, is covered by the remark in ref.\ \cite{andrianov}. The partition function is \begin{eqnarray} Z[m,m^\dagger] & = & \int{\cal D}\psi{\cal D}\bar\psi \exp i\int d^2x\left(\bar\psi[i\!\!\not\!\partial-mP_+-m^\dagger P_-]\psi\right), \end{eqnarray} where $\psi$ is now an $SU(N)$ multiplet and $m=m^at^a$, $m^\dagger=m^{\dagger a}t^a$ are in the Lie algebra. From the chiral rotation we get the Jacobian \begin{eqnarray} \nonumber J[U] & = & \exp i\int d^2x\bigg(\frac{1}{8\pi} \mbox{tr}\:\partial_\mu U^\dagger\partial^\mu U +\frac{1}{12\pi}\int_0^1dt\:\epsilon_{\mu\nu\tau} \mbox{tr}\left(\hat U^\dagger\partial^\mu\hat U \hat U^\dagger\partial^\nu\hat U \hat U^\dagger\partial^\tau\hat U\right) \\ & & \mbox{} \hspace{6em} +\frac{1}{4\pi}\kappa_1\mbox{tr}\:m(U-1) +\frac{1}{4\pi}\kappa_1\mbox{tr}\:m^\dagger(U^\dagger-1) \\ \nonumber & & \mbox{} \hspace{6em} +\frac{1}{8\pi}\mbox{tr}\,[mUmU-m^2] +\frac{1}{8\pi}\mbox{tr}\,[m^\dagger U^\dagger m^\dagger U^\dagger-m^{\dagger 2}]\bigg) \end{eqnarray} where $U=e^{2i\theta}$, $\hat U=e^{2it\theta}$, and our three-dimensional conventions are $x^2\equiv t$, $g_{\mu\nu}=(+,-,-)$ and $\epsilon_{012}=1$. Again, this leads to the required form for the bosonized theory \cite{witten} modulo contact terms. 
\section{Discussion and speculations} These examples demonstrate clearly that in two dimensions the procedure of calculating the Jacobian of a chiral rotation in the most general vector current conserving regularization scheme and then path integrating over it realizes bosonization. This bosonization procedure was suggested earlier by Andrianov \cite{andrianov} in the special case of massless non-Abelian fermions. In this paper we have displayed the full generality of the procedure. Of course, only the information in the ``uncharged sector'' of the fermionic theory can be cast in a bosonized form, corresponding to Green's functions obtainable from the partition function by differentiation with respect to $V$, $A$, $S$ and $P$ sources. ``Fermionic information'' -- that which is expressible in terms of Feynman diagrams with external fermion lines -- cannot be obtained from a bosonic theory. Let us also point out that in a non-Abelian model we could bosonize the singlet, or $U(1)$, degree of freedom in addition to the $SU(N)$ ones. The complete Jacobian is then just a product of the singlet and non-singlet Jacobians, and mixed Green's functions in the bosonized theory can be found accordingly. One may ask why the bosonization procedure works, since we are unable to actually {\em derive} the bosonic form of the partition function from its fermionic form. Under a chiral rotation the partition function transforms such that \begin{eqnarray} Z[\Gamma] & = & e^{iW[\theta;\Gamma]}Z[\theta;\Gamma] \end{eqnarray} where $Z[\theta;\Gamma]$ is the rotated partition function without the Jacobian. Since, due to bosonization, \begin{eqnarray} Z[\Gamma] & = & \int{\cal D}\theta\;e^{iW[\theta;\Gamma]} \;=\; \int{\cal D}\theta\frac{Z[\Gamma]}{Z[\theta;\Gamma]} \end{eqnarray} we must have \cite{andrianov} \begin{eqnarray} \int{\cal D}\theta\frac{1}{Z[\theta;\Gamma]} & = & 1. 
\end{eqnarray} It would be necessary to prove this equation in order to ``prove'' the bosonization procedure, but no such proof is known to us. The procedure {\em does} work, however, since it reproduces the bosonization rules that have been proved by other methods, for example by explicitly comparing the Green's functions in the fermionic and bosonic theories, respectively. We can speculate that the chiral degrees of freedom somehow saturate the theory in two dimensions. This would also imply that the same procedure can only be approximately correct in four. If there are other symmetries of a fermionic theory in four dimensions that produce a Jacobian when the partition function is locally transformed, then the degrees of freedom associated with these transformations would also be important for bosonization. In fact, attempts at ``bosonizing'' QCD [2--4] according to eq.\ (\ref{DE}) using {\em only} chiral degrees of freedom give a chiral Lagrangian with a quadratically divergent ``kinetic term'' for the pions. This violates the principle that physics should not depend on the regularization scheme. The inclusion of further degrees of freedom could be part of a cure for this problem. More generally, it appears to us not altogether improbable that if we could identify all such symmetries of a fermionic theory, we would be able to find a completely bosonized version of this theory. \paragraph{Acknowledgments} I would like to thank M. Faber, P.H. Damgaard and A.N. Ivanov for discussions and comments on the manuscript, and M.I. Eides, also for comments on the manuscript. \section*{Appendix} In this appendix we give complete expressions for the Jacobians of chiral rotations in Minkowski space. The partition function is \begin{eqnarray} Z & = & \int{\cal D}\psi{\cal D}\bar\psi\exp i\int d^2x \bar\psi[i\!\!\not\!\partial\;-\!\!\not\! V-\;\!\!\not\!\! A\gamma_5-S-i\gamma_5P]\psi \end{eqnarray} where all fields are either Abelian or non-Abelian.
We are interested in the Jacobian of the change of variables \begin{eqnarray} \psi & = & e^{i\theta\gamma_5}\chi, \hspace{2em} \bar\psi \;=\; \bar\chi e^{i\theta\gamma_5}. \end{eqnarray} In the Abelian case, the Jacobian is \cite{dorn,damgaard} \begin{eqnarray} \nonumber J[\theta] & = & \exp i\int d^2x \bigg(\frac{1}{2\pi}\partial_\mu\theta\partial^\mu\theta -\frac{1}{\pi}V_\mu\epsilon^{\mu\nu}\partial_\nu\theta +\frac{1}{\pi}A_\mu\partial^\mu\theta \\ & & \hspace{6em} \mbox{} +\frac{1}{4\pi}\kappa_1m(e^{2i\theta}-1) +\frac{1}{4\pi}\kappa_1m^\dagger(e^{-2i\theta}-1) \\ \nonumber & & \hspace{6em} \mbox{} +\frac{1}{8\pi}m^2(e^{4i\theta}-1) +\frac{1}{8\pi}m^{\dagger 2}(e^{-4i\theta}-1)\bigg) \end{eqnarray} where $m\equiv S+iP$. In the non-Abelian case we get \cite{dorn,pak} \begin{eqnarray} \nonumber J[U] & = & \exp i\int d^2x \bigg(\frac{1}{8\pi}\mbox{tr}\:\partial_\mu U^\dagger\partial^\mu U +\frac{1}{12\pi}\int_0^1dt\:\epsilon_{\mu\nu\tau} \mbox{tr}\left(\hat U^\dagger\partial^\mu\hat U \hat U^\dagger\partial^\nu\hat U \hat U^\dagger\partial^\tau\hat U\right) \\ \nonumber & & \hspace{4em} \mbox{} -\frac{1}{4\pi}\mbox{tr} \Big[R_\mu i\partial^\mu UU^\dagger -U^\dagger i\partial_\mu UL^\mu -\epsilon_{\mu\nu}(R^\mu i\partial^\nu UU^\dagger -U^\dagger i\partial^\mu UL^\nu) \\ & & \hspace{8em} \mbox{} +U^\dagger R_\mu UL^\mu-R_\mu L^\mu -\epsilon_{\mu\nu}(U^\dagger R^\mu UL^\nu-R^\mu L^\nu)\Big] \\ \nonumber & & \hspace{4em} \mbox{} +\frac{1}{4\pi}\kappa_1\mbox{tr}\: m(U-1) +\frac{1}{4\pi}\kappa_1\mbox{tr}\: m^\dagger(U^\dagger-1) \\ \nonumber & & \hspace{4em} \mbox{} +\frac{1}{8\pi}\mbox{tr}\,[mUmU-m^2] +\frac{1}{8\pi}\mbox{tr}\,[m^\dagger U^\dagger m^\dagger U^\dagger -m^{\dagger 2}]\bigg) \end{eqnarray} where we use standard $LR$-notation: $R_\mu=V_\mu+A_\mu$ and $L_\mu=V_\mu-A_\mu$. For the coefficient $\kappa_1$ and the other conventions, see the text.
\subsection*{Acknowledgments} We would like to thank Ben Poole, Aditya Grover, David Belanger, and Colin Raffel for insightful comments on the draft, Mohammad Norouzi for support and encouragement launching the project, and Jiaming Song for discussions and help running A-NICE-MC. \part*{Appendix} \section{Reverse leapfrog operator}\label{app:rev op} Let $\xi = \{x, v, d\}$ in the extended state space with $d = -1$. Here, we describe the leapfrog updates for a single time step $t$; these consist of inverting the equations presented in the corresponding section. Let $\zeta_1 = \{x, v, t\}$; we have: \begin{equation} v' = \left[v + \frac{\epsilon}{2}\left(\partial_x U(x) \odot \exp(\epsilon Q_v(\zeta_1)) + T_v(\zeta_1) \right)\right] \odot \exp\left(-\frac{\epsilon}{2} S_v(\zeta_1)\right) . \end{equation} With the notation from Section~\ref{sec:l2hmc}, let $\zeta_2 \triangleq \{x_{m^t}, v, t\}$: \begin{equation} x' = x_{m^t} + \bar{m}^t \odot \left[x - \epsilon\left(\exp(\epsilon Q_x(\zeta_2))\odot v' + T_x(\zeta_2)\right)\right]\odot\exp(-\epsilon S_x(\zeta_2)) . \end{equation} Let us denote $\zeta_3 \triangleq \{x'_{\bar{m}^t}, v, t\}$: \begin{equation} x'' = x'_{\bar{m}^t} + m^t \odot \left[x' - \epsilon\left(\exp(\epsilon Q_x(\zeta_3))\odot v' + T_x(\zeta_3)\right)\right]\odot\exp(-\epsilon S_x(\zeta_3)) . \end{equation} Finally, the last update, with $\zeta_4 \triangleq \{x'', \partial_xU(x''), t \}$: \begin{equation} v'' = \left[v' + \frac{\epsilon}{2}\left(\partial_x U(x'') \odot \exp(\epsilon Q_v(\zeta_4)) + T_v(\zeta_4) \right)\right] \odot \exp\left(-\frac{\epsilon}{2} S_v(\zeta_4)\right) . \end{equation} It is important to note that to invert $\mathbf{L}_\theta$, these steps should be run for $t$ from $M$ down to $1$.
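As an illustrative aside, one reverse step can be sketched in numpy. The networks $S, Q, T$ are stand-in callables and the mask handling is our own illustrative choice, not the released implementation; the $\tfrac{\epsilon}{2}$ scaling in the momentum exponent matches the Jacobian of Appendix~\ref{app:jac det}.

```python
import numpy as np

def reverse_leapfrog_step(x, v, t, eps, grad_U, S_v, Q_v, T_v, S_x, Q_x, T_x, m):
    """One reverse L2HMC leapfrog step, transcribing the updates above.

    grad_U computes the gradient of the energy U; S_*, Q_*, T_* stand in for the
    trained networks, called on (masked state, momentum, step) triples; m is the
    binary mask m^t and mb its complement. All names are illustrative.
    """
    mb = 1.0 - m
    # invert the second half-step momentum update
    z1 = (x, v, t)
    v1 = (v + 0.5 * eps * (grad_U(x) * np.exp(eps * Q_v(*z1)) + T_v(*z1))) \
        * np.exp(-0.5 * eps * S_v(*z1))
    # invert the two masked position updates (masks visited in reverse order)
    z2 = (m * x, v1, t)
    x1 = m * x + mb * (x - eps * (np.exp(eps * Q_x(*z2)) * v1 + T_x(*z2))) \
        * np.exp(-eps * S_x(*z2))
    z3 = (mb * x1, v1, t)
    x2 = mb * x1 + m * (x1 - eps * (np.exp(eps * Q_x(*z3)) * v1 + T_x(*z3))) \
        * np.exp(-eps * S_x(*z3))
    # invert the first half-step momentum update at the recovered position
    z4 = (x2, grad_U(x2), t)
    v2 = (v1 + 0.5 * eps * (grad_U(x2) * np.exp(eps * Q_v(*z4)) + T_v(*z4))) \
        * np.exp(-0.5 * eps * S_v(*z4))
    return x2, v2
```

With all networks set to zero, this reduces to the exact inverse of the standard leapfrog integrator.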
\section{Determinant of the Jacobian}\label{app:jac det} Given the derivations (and notations) from Section~\ref{sec:l2hmc}, for the forward operator $\mathbf{L}_\theta$, we can immediately compute the Jacobian: \begin{equation} \log\left| \pd{[\mathbf F \mathbf{L}_\theta \xi]}{\xi^T} \right| = d\sum_{t \leq M}\left[ \frac{\epsilon}{2}\mathbf 1 \cdot S_v(\zeta_1^t) + \epsilon m^t \cdot S_x(\zeta_2^t) + \epsilon \bar{m}^t \cdot S_x(\zeta_3^t) + \frac{\epsilon}{2}\mathbf 1 \cdot S_v(\zeta_4^t)\right] , \end{equation} where $\zeta^t_i$ denotes the intermediary variable $\zeta_i$ at time step $t$ and $d$ is the direction of $\xi$, i.e.\ $\xi = \{x, v, d\}$. \section{Experimental details of Section~\ref{sec:experiments}}\label{app:experimental details} \subsection{Implementation details} First of all, we keep separate parameters for the networks responsible for updating $v$ and those updating $x$. The architectures are the same. Let us take the example of $Q_v, S_v, T_v$. The time step $t$ is given as input to the MLP, encoded as $\tau(t) = \left(\cos(\frac{2\pi t}{M}), \sin(\frac{2\pi t}{M})\right)$. $\sigma(\cdot)$ denotes the ReLU non-linearity. For $n_h$ hidden units per layer: \begin{itemize} \item We first compute $h_1 = \sigma(W_1x + W_2v + W_3\tau(t) + b_1)$ ($h_1\in\mathbb{R}^{n_h}$). \item $h_2 = \sigma(W_4h_1 + b_4) \in \mathbb{R}^{n_h}$ \item $S_v = \lambda_s \mathtt{tanh}(W_sh_2 + b_s), Q_v = \lambda_q \mathtt{tanh}(W_qh_2 + b_q), T_v = W_t h_2 + b_t$. \end{itemize} In Section~\ref{sec:toy}, the $Q, S, T$ are neural networks with $2$ hidden layers with $10$ ($100$ for the $50$-d ICG) units and ReLU non-linearities. We train with Adam \citep{kingma2014adam} and a learning rate $\alpha = 10^{-3}$. We train for $5,000$ iterations with a batch size of $200$. $\lambda_b$ was set to $0$ for ICG and SCG and to $1$ for MoG and Rough Well.
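A minimal numpy sketch of this forward pass follows; the weight names, shapes, and tuple packing of the parameters are our own illustrative assumptions, not the released code.

```python
import numpy as np

def tau(t, M):
    """Cyclic encoding of the time step fed to the MLP."""
    return np.array([np.cos(2 * np.pi * t / M), np.sin(2 * np.pi * t / M)])

def sqt_network(x, v, t, M, params, lam_s=1.0, lam_q=1.0):
    """Forward pass producing (S_v, Q_v, T_v) as in the itemized description.

    params is an illustrative flat tuple of weights; lam_s, lam_q are the
    scale parameters multiplying the tanh outputs.
    """
    W1, W2, W3, b1, W4, b4, Ws, bs, Wq, bq, Wt, bt = params
    relu = lambda z: np.maximum(z, 0.0)
    h1 = relu(W1 @ x + W2 @ v + W3 @ tau(t, M) + b1)   # first hidden layer
    h2 = relu(W4 @ h1 + b4)                            # second hidden layer
    S = lam_s * np.tanh(Ws @ h2 + bs)                  # bounded scale output
    Q = lam_q * np.tanh(Wq @ h2 + bq)                  # bounded scale output
    T = Wt @ h2 + bt                                   # unbounded translation
    return S, Q, T
```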
For the MoG tasks, we train our sampler with a temperature parameter that we continuously anneal; we evaluate the trained sampler without using temperature. \subsection{Auto-correlation and ESS} Let $(x_\tau)_{\tau \leq T}$ be a set of correlated samples converging to the distribution $p$ with mean $\mu$ and covariance $\Sigma$. We define auto-correlation at time $t$ as: \begin{equation} \rho_t \triangleq \frac{1}{\mathrm{Trace}(\Sigma)(T-t)}\sum_{\tau \leq T-t-1}(x_\tau-\mu)^T(x_{\tau+t} - \mu) . \end{equation} We can now define effective sample size (ESS) as: \begin{equation} \mathrm{ESS}\left((x_\tau)_{\tau \leq T}\right) \triangleq \dfrac{1}{1 + 2\sum_{t}\rho_t} . \end{equation} Similar to \citet{hoffman2014no}, we truncate the sum when the auto-correlation goes below $0.05$. \subsection{Comparison with LAHMC}\label{sec:lahmc} We compare our trained sampler with LAHMC \citep{pmlr-v32-sohl-dickstein14}. Results are reported in Table~\ref{table:ess lahmc}. L2HMC largely outperforms LAHMC on all tasks. LAHMC is also \textit{unable to mix between modes} for the MoG task. We also note that L2HMC could be easily combined with LAHMC, by replacing the leapfrog integrator of LAHMC with the learned one of L2HMC. \begin{table}[ht] \centering \begin{tabular}{lcccc} \toprule Distribution & Gradient Evaluations & ESS-L2HMC & ESS-LAHMC & Ratio\\ \midrule $50$-d ICG & $2000$ & $156.6$ & $21.4$ & $7.3$ \\ Rough Well & $200$ & $12.5$ & $8.6$ & $1.5$ \\ $2$-d SCG & $5000$ & $116$ & $16.7$ & $14.9$ \\ MoG & $20,000$ & $65.0$ & $\ll 0.53$ & $\gg 123.5$ \\ \bottomrule \end{tabular} \vspace{0.8cm} \caption{ESS for a fixed number of gradient evaluations.} \label{table:ess lahmc} \end{table} \section{L2HMC-DGLM}\label{app:vae} \subsection{Training algorithm} In this section, we present our training algorithm as well as a diagram explaining L2HMC-DGLM.
For conciseness, given our operator $\mathbf{L}_\theta$, we denote by $\mathbf K_\theta(\cdot|x)$ the distribution over the next state induced by sampling a momentum and a direction and applying the Metropolis-Hastings step. \input{alg_vae} \begin{figure} \centering \includegraphics[scale=1.0]{vae.pdf} \caption{Diagram of our L2HMC-DGLM model. Nodes are functions of their parents. Round nodes are deterministic, diamond nodes are stochastic and the doubly-circled node is observed.} \label{fig:vae} \end{figure} \subsection{Implementation details of L2HMC-DGLM} Similar to our L2HMC training on unconditional sampling, we share weights across $Q, S$ and $T$. In addition, the auxiliary variable $x$ (here the image from MNIST) is first passed through a $2$-layer neural network, with softplus non-linearities and $512$ hidden units. This input is given to both networks $\{\cdot\}_x$ and $\{\cdot\}_v$. The architecture then consists of $2$ hidden layers of $200$ units and ReLU non-linearities. For $\lambda$ (scale parameter of the loss), we use the standard deviation of the approximate posterior. \paragraph{AIS Evaluation} For each data point, we run $20$ Markov Chains in parallel, $10,000$ annealing steps with $10$ leapfrogs per step and choose the step size for an acceptance rate of $0.65$. \subsection{MNIST Samples} We show in Figure~\ref{fig:samples} samples from the three evaluated models: VAE \citep{kingma2013auto}, HMC-DGLM \citep{pmlr-v70-hoffman17a} and L2HMC-DGLM.
\begin{figure} \centering \begin{subfigure}[b]{0.33\linewidth} \centering \includegraphics[width=\linewidth]{samples_l2hmc.png} \caption{L2HMC} \label{fig:l2hmc_samples} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=\linewidth]{samples_hmc.png} \caption{HMC} \label{fig:hmc_samples} \end{subfigure}% \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=\linewidth]{samples_vae.png} \caption{VAE} \label{fig:vae_samples} \end{subfigure} \caption{L2HMC-DGLM decoder produces sharper mean activations } \label{fig:samples} \end{figure} \section{Background} \subsection{MCMC methods and Metropolis-Hastings} Let $p$ be a target distribution, analytically known up to a constant, over a space $\mathcal{X}$. Markov chain Monte Carlo (MCMC) methods \citep{neal1993probabilistic} aim to provide samples from $p$. To that end, MCMC methods construct a Markov Chain whose stationary distribution is the target distribution $p$. Obtaining samples then corresponds to simulating a Markov Chain, i.e., given an initial distribution $\pi_0$ and a transition kernel $K$, constructing the following sequence of random variables: \begin{align} X_0 \sim \pi_0, \quad X_{t+1} \sim K(\cdot | X_t) . \end{align} In order for $p$ to be the stationary distribution of the chain, three conditions must be satisfied: $K$ must be irreducible and aperiodic (these are usually mild technical conditions) and $p$ has to be a fixed point of $K$. This last condition can be expressed as: $p(x') = \int K(x'|x)p(x)\mathrm{d}x$. This condition is most often satisfied by satisfying the stronger {\em detailed balance} condition, which can be written as: $p(x')K(x|x') = p(x)K(x'|x)$. Given any proposal distribution $q$, satisfying mild conditions, we can easily construct a transition kernel that respects detailed balance using Metropolis-Hastings \citep{hastings1970monte} accept/reject rules. 
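As a concrete sketch of the accept/reject rule just described, the following implements random-walk Metropolis-Hastings with a symmetric Gaussian proposal, for which the $q$-ratio in the acceptance probability cancels; names and defaults are illustrative.

```python
import numpy as np

def metropolis_hastings(log_p, x0, n_steps, scale=0.5, rng=None):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.

    log_p is the log of the (unnormalized) target density; since the proposal
    is symmetric, q(x|x')/q(x'|x) = 1 and only the density ratio remains.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    chain = [x.copy()]
    for _ in range(n_steps):
        x_prop = x + scale * rng.standard_normal(x.shape)
        # accept with probability min(1, p(x') / p(x)); compare in log space
        if np.log(rng.uniform()) < log_p(x_prop) - log_p(x):
            x = x_prop
        chain.append(x.copy())  # on rejection, the previous state is retained
    return np.array(chain)
```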
More formally, starting from $x_0 \sim \pi_0$, at each step $t$, we sample $x' \sim q(\cdot|X_t)$, and with probability $A(x'|x_t) = \operatorname{min}\left(1, \frac{p(x')q(x_t|x')}{p(x_t)q(x'|x_t)} \right)$, accept $x'$ as the next sample $x_{t+1}$ in the chain. If we reject $x'$, then we retain the previous state and $x_{t+1}=x_t$. For typical proposals this algorithm has strong asymptotic guarantees. But in practice one must often choose between very low acceptance probabilities and very cautious proposals, both of which lead to slow mixing. For continuous state spaces, Hamiltonian Monte Carlo \citep[HMC; ][]{neal2011mcmc} tackles this problem by proposing updates that move far in state space while staying roughly on iso-probability contours of $p$. \subsection{Hamiltonian Monte Carlo} Without loss of generality, we assume $p\left(x\right)$ to be defined by an energy function $U\left(x\right)$, s.t. $p(x) \propto \exp(-U(x))$, and where the state $x \in \mathbb{R}^n$. HMC extends the state space with an additional momentum vector $v\in\mathbb{R}^n$, where $v$ is distributed independently from $x$, as $p(v) \propto \exp(-\frac{1}{2}v^Tv)$ (i.e., identity-covariance Gaussian). From an augmented state $\xi \triangleq (x, v)$, HMC produces a proposed state $\xi' = (x', v')$ by approximately integrating Hamiltonian dynamics jointly on $x$ and $v$, with $U\left(x\right)$ taken to be the potential energy, and $\frac{1}{2}v^Tv$ the kinetic energy. Since Hamiltonian dynamics conserve the total energy of a system, their approximate integration moves along approximate iso-probability contours of $p(x,v) = p(x)p(v)$. The dynamics are typically simulated using the leapfrog integrator \citep{hairer2003geometric,leimkuhler2004simulating}, which for a single time step consists of \begin{equation} {\textstyle v^{\frac{1}{2}} = v - \frac{\epsilon}{2}\partial_x U(x) ;\quad x' = x + \epsilon v^{\frac{1}{2}} ;\quad v' = v - \frac{\epsilon}{2}\partial_x U(x')} . 
\end{equation} Following \citet{pmlr-v32-sohl-dickstein14}, we write the action of the leapfrog integrator in terms of an operator $\mathbf L$: $\mathbf L \xi \triangleq \mathbf L (x, v) \triangleq (x', v')$, and introduce a momentum flip operator $\mathbf F$: $\mathbf F (x, v) \triangleq (x, -v)$. It is important to note two properties of these operators. First, the transformation $\mathbf F \mathbf L$ is an involution, i.e. $\mathbf F \mathbf L\mathbf F \mathbf L(x, v) = \mathbf F \mathbf L(x', -v') = (x, v)$. Second, the transformations from $(x, v)$ to $(x, v^{\frac{1}{2}})$, from $(x, v^{\frac{1}{2}})$ to $(x', v^{\frac{1}{2}})$, and from $(x', v^{\frac{1}{2}})$ to $(x', v')$ are all volume-preserving \emph{shear} transformations i.e., only one of the variables ($x$ or $v$) changes, by an amount determined by the other one. The determinant of the Jacobian, $\left| \pd{[\mathbf F \mathbf L \xi]}{\xi^T} \right|$, is thus easy to compute. For vanilla HMC $\left| \pd{[\mathbf F \mathbf L \xi]}{\xi^T} \right|=1$, but we will leave it in symbolic form for use in Section \ref{sec:l2hmc}. The Metropolis-Hastings-Green \citep{hastings1970monte,green1995reversible} acceptance probability for the HMC proposal is made simple by these two properties, and is \begin{align} \label{eq accept} A(\mathbf F \mathbf L \xi|\xi) &= {\textstyle \operatorname{min}\left( 1, \frac{p(\mathbf F \mathbf L \xi)}{p(\xi)} \left| \pd{[\mathbf F \mathbf L \xi]}{\xi^T} \right| \right)} . \end{align} \section{Future Work} The loss in Section \ref{sec loss} targets lag-one autocorrelation. It should be possible to extend this to also target lag-two and higher autocorrelations. It should also be possible to extend this loss to reward fast decay in the autocorrelation of other statistics of the samples, for instance the sample energy as well as the sample position. 
These additional statistics could also include learned statistics of the samples, combining benefits of the adversarial approach of \citep{song2017nice} with the current work. Our learned generalization of HMC should prove complementary to several other research directions related to HMC. It would be interesting to explore combining our work with the use of HMC in a minibatch setting \citep{chen2014stochastic}; with shadow Hamiltonians \citep{izaguirre2004shadow}; with gradient pre-conditioning approaches similar to those used in Riemannian HMC \citep{girolami2009riemannian,betancourt2013general}; with the use of alternative HMC accept-reject rules \citep{pmlr-v32-sohl-dickstein14,berger2015markov}; with the use of non-canonical Hamiltonian dynamics \citep{tripuraneni2016magnetic}; with variants of AIS adapted to HMC proposals \citep{sohl2012hamiltonian}; with the extension of HMC to discrete state spaces \citep{zhang2012continuous}; and with the use of alternative Hamiltonian integrators \citep{creutz1989higher,chao2015exponential}. Finally, our work is also complementary to other methods not utilizing gradient information. For example, we could incorporate the intuition behind Multiple Try Metropolis schemes \citep{martino2013flexibility} by having several parametric operators and training each one when used. In addition, one could draw inspiration from the adaptive literature \citep{haario2001adaptive,andrieu2008tutorial} or component-wise strategies \citep{gilks1992adaptive}. \section{Conclusion} In this work, we presented a general method to train expressive MCMC kernels parameterized with deep neural networks. Given a target distribution $p$, analytically known up to a constant, our method provides a fast-mixing sampler, able to efficiently explore the state space. 
Our hope is that our method can be utilized in a ``black-box'' manner, in domains where sampling constitutes a huge bottleneck such as protein folding \citep{schutte1999direct} or physics simulations \citep{olsson1995two}. \section{Experiments}\label{sec:experiments} We present an empirical evaluation of our trained sampler on a diverse set of energy functions. We first present results on a collection of toy distributions capturing common pathologies of energy landscapes, followed by results on a task from machine learning: maximum-likelihood training of deep generative models. For each, we compare against HMC with well-tuned step length and show significant gains in mixing time. Code implementing our algorithm is available online\footnote{\url{https://github.com/brain-research/l2hmc}.}. \subsection{Varied Collection of Energy Functions}\label{sec:toy} We evaluate our L2HMC sampler on a diverse collection of energy functions, each posing different challenges for standard HMC. \textbf{Ill-Conditioned Gaussian} (ICG): Gaussian distribution with diagonal covariance spaced log-linearly between $10^{-2}$ and $10^{2}$. This demonstrates that L2HMC can learn a diagonal inertia tensor. \textbf{Strongly correlated Gaussian} (SCG): We rotate a diagonal Gaussian with variances $[10^2, 10^{-2}]$ by $\frac{\pi}{4}$. This is an extreme version of an example from \citet{neal2011mcmc}. This problem shows that, although our parametric sampler only applies element-wise transformations, it can adapt to structure which is not axis-aligned. \textbf{Mixture of Gaussians} (MoG): Mixture of two isotropic Gaussians with $\sigma^2=0.1$, and centroids separated by distance $4$. The means are thus about $12$ standard deviations apart, making it almost impossible for HMC to mix between modes. \textbf{Rough Well}: Similar to an example from \citet{pmlr-v32-sohl-dickstein14}, for a given $\eta > 0$, $U(x) = \frac{1}{2}x^Tx + \eta\sum_i \cos(\frac{x_i}{\eta})$.
For small $\eta$ the energy itself is altered negligibly, but its gradient is perturbed by a high frequency noise oscillating between $-1$ and $1$. In our experiments, we choose $\eta=10^{-2}$. For each of these distributions, we compare against HMC with the same number of leapfrog steps and a well-tuned step-size. To compare mixing time, we plot auto-correlation for each method and report effective sample size (ESS). We compute those quantities in the same way as \citet{pmlr-v32-sohl-dickstein14}. We observe that samplers trained with L2HMC show greatly improved autocorrelation and ESS on the presented tasks, providing more than $106\times$ improved ESS on the SCG task. In addition, for the MoG, we show that L2HMC can easily mix between modes while standard HMC gets stuck in a mode, unable to traverse the low density zone. Experimental details, as well as a comparison with LAHMC \citep{pmlr-v32-sohl-dickstein14}, are shown in Appendix~\ref{app:experimental details}. \begin{figure}[t!] \begin{subfigure}[b]{\linewidth} \includegraphics[width=\linewidth]{acl_curves_new.pdf} \caption{Autocorrelation vs. 
gradient evaluations of energy function $U(x)$ } \end{subfigure} \centering \begin{subfigure}[b]{.4\linewidth} \centering \includegraphics[width=\linewidth]{mog_samples.pdf} \caption{Samples from single MCMC chain} \label{fig:chains} \end{subfigure}% \begin{subfigure}[b]{.6\linewidth} \centering \begin{tabular}{lccc} \toprule Distribution & ESS-L2HMC & ESS-HMC & Ratio\\ \midrule $50$-d ICG & $7.83\times10^{-1}$ & $1.65\times10^{-2}$ & $36.6$ \\ Rough Well & $6.25\times10^{-1}$ & $1.16\times10^{-1}$ & $5.4$ \\ $2$-d SCG & $4.97\times10^{-1}$ & $4.69\times10^{-3}$ & $106.2$ \\ MoG & $3.24\times10^{-2}$ & $\ll 2.61\times10^{-4}$ & $\gg 124$ \\ \bottomrule \end{tabular} \vspace{0.8cm} \caption{ESS per Metropolis-Hastings step} \label{table:ess} \end{subfigure} \begin{subfigure}[b]{\linewidth} \center \includegraphics[width=\textwidth]{beat_nice.pdf} \caption{\small L2HMC can mix between modes for a MoG with different variances, contrary to A-NICE-MC } \label{fig:nice} \end{subfigure} \caption{\small L2HMC mixes faster than well-tuned HMC, and than A-NICE-MC, on a collection of toy distributions.} \label{fig:plots} \end{figure} \paragraph{Comparison to A-NICE-MC \citep{song2017nice}} In addition to the well known challenges associated with adversarial training \citep{arjovsky2017wasserstein}, we note that parameterization using a volume-preserving operator can dramatically fail on simple energy landscapes. We build off of the \emph{mog2} experiment presented in \citep{song2017nice}, which is a $2$-d mixture of isotropic Gaussians separated by a distance of $10$ with variances $0.5$. We consider that setup but increase the ratio of variances: $\sigma_1^2 = 3, \sigma_2^2 = 0.05$. We show in Figure~\ref{fig:nice} sample chains trained with L2HMC and A-NICE-MC; A-NICE-MC cannot effectively mix between the two modes as only a fraction of the volume of the large mode can be mapped to the small one, making it highly improbable to traverse. This is also an issue for HMC. 
On the other hand, L2HMC can both traverse the low-density region between modes, and map a larger volume in the left mode to a smaller volume in the right mode. It is important to note that the distance between both clusters is less than in the \emph{mog2} case, and it is thus a good diagnostic of the shortcomings of volume-preserving transformations. \subsection{Latent-Variable Generative Model} We apply our learned sampler to the task of training, and sampling from the posterior of, a latent-variable generative model. The model consists of a latent variable $z \sim p(z)$, where we choose $p(z) = \mathcal N\left(z; 0, I \right)$, and a conditional distribution $p(x | z)$ which generates the image $x$. Given a family of parametric `decoders' $\{ z \mapsto p(x|z;\phi), \phi \in \Phi\}$, and a set of samples $\mathcal{D} = \{x^{(i)}\}_{i\leq N}$, training involves finding $\phi^* = \arg\max_{\phi\in\Phi} p(\mathcal{D};\phi)$. However, the log-likelihood is intractable as $p(x;\phi) = \int p(x|z;\phi)p(z)\mathrm{d}z$. To remedy that problem, \cite{kingma2013auto} proposed jointly training an approximate posterior $q_\psi$ that maximizes a tractable lower-bound on the log-likelihood: \begin{equation} \mathcal{L}_{\mathrm{ELBO}}(x, \phi, \psi) = \mathbb{E}_{q_\psi(z|x)}\left[\log p(x|z;\phi)\right] - \mathrm{KL}(q_\psi(z|x)||p(z)) \leq \log p(x), \end{equation} where $q_\psi(z|x)$ is a tractable conditional distribution with parameters $\psi$, typically parameterized by a neural network. Recently, to improve upon well-known pitfalls like over-pruning \citep{burda2015importance} of the VAE, \cite{pmlr-v70-hoffman17a} proposed HMC-DGLM. For a data sample $x^{(i)}$, after obtaining a sample from the approximate posterior $q_\psi(\cdot|x^{(i)})$, \cite{pmlr-v70-hoffman17a} runs an MCMC algorithm with energy function $U(z, x^{(i)}) = -\log p(z) - \log p(x^{(i)}|z;\phi)$ to obtain a more exact posterior sample from $p(z|x^{(i)};\phi)$.
Given that better posterior sample $z'$, the algorithm maximizes $\log p(x^{(i)}|z';\phi)$. To show the benefits of L2HMC, we borrow the method from \cite{pmlr-v70-hoffman17a}, but replace HMC by jointly training an L2HMC sampler to improve the efficiency of the posterior sampling. We call this model \textbf{L2HMC-DGLM}. A diagram of our model and a formal description of our training procedure are presented in Appendix~\ref{app:vae}. We define, for $\xi = \{z, v, d\}$, $r(\xi|x;\psi) \triangleq q_\psi(z|x)\mathcal{N}\left(v;0, I\right)\mathcal{U}\left(d; \{-1, 1\}\right)$. In the subsequent sections, we compare our method to the standard VAE model from \cite{kingma2013auto} and HMC-DGLM from \cite{pmlr-v70-hoffman17a}. It is important to note that, since our sampler is trained jointly with $p_\phi$ and $q_\psi$, \emph{it performs exactly the same number of gradient computations of the energy function as HMC}. We first show that training a latent variable generative model with L2HMC results in better generative models both qualitatively and quantitatively. We then show that our improved sampler enables a more expressive, non-Gaussian, posterior. \textbf{Implementation details:} Our decoder ($p_\phi$) is a neural network with $2$ fully connected layers, with $1024$ units each and softplus non-linearities, and outputs Bernoulli activation probabilities for each pixel. The encoder ($q_\psi$) has the same architecture, returning mean and variance for the approximate posterior. Our model was trained for $300$ epochs with Adam \citep{kingma2014adam} and a learning rate $\alpha=10^{-3}$. All experiments were done on the dynamically binarized MNIST dataset \citep{lecun1998mnist}. \subsubsection{Sample Quality and Data likelihood} We first present samples from decoders trained with L2HMC, HMC and the ELBO (i.e. vanilla VAE).
Although higher log likelihood does not necessarily correspond to better samples \citep{theis2015note}, we can see in Figure~\ref{fig:samples}, shown in the Appendix, that the decoder trained with L2HMC generates sharper samples than the compared methods. We now compare our method to HMC in terms of log-likelihood of the data. As we previously stated, the marginal likelihood of a data point $x\in\mathcal{X}$ is not tractable as it requires integrating $p(x, z)$ over a high-dimensional space. However, we can estimate it using annealed importance sampling (AIS; \cite{neal2001annealed}). Following~\cite{wu2016quantitative}, we evaluate our generative models on both training and held-out data. In Figure~\ref{fig:ll}, we plot the data's log-likelihood against the number of gradient computation steps for both HMC-DGLM and L2HMC-DGLM. We can see that for a similar number of gradient computations, L2HMC-DGLM achieves higher likelihood for both training and held-out data. This is a strong indication that L2HMC provides \emph{significantly better posterior samples}. \begin{figure}[t] \centering \includegraphics[width=0.85\textwidth]{ll_plots.pdf} \caption{\small Training and held-out log-likelihood for models trained with L2HMC, HMC, and the ELBO (VAE).} \label{fig:ll} \end{figure} \subsubsection{Increased Expressivity of the posterior} In the standard VAE framework, approximate posteriors are often parametrized by a Gaussian, thus making a strong assumption of uni-modality. In this section, we show that using L2HMC to sample from the posterior enables learning of a richer posterior landscape. \textbf{Block Gibbs Sampling} To highlight our ability to capture more expressive posteriors, we in-paint the top of an image using Block Gibbs Sampling using the approximate posterior or L2HMC. Formally, let $x_0$ be the starting image. We denote top or bottom-half pixels as $x_0^{\mathrm{top}}$ and $x_0^{\mathrm{bottom}}$. 
At each step $t$, we sample $z^{(t)} \sim p(z|x_t;\theta)$, then sample $\tilde{x} \sim p(x|z^{(t)};\theta)$. We then set $x^{\mathrm{top}}_{t+1} = \tilde{x}^{\mathrm{top}}$ and $x^{\mathrm{bottom}}_{t+1} = x_0^{\mathrm{bottom}}$. We compare the results obtained by sampling from $p(z|x;\theta)$ using $q_\psi$ (i.e. the approximate posterior) vs. our trained sampler. The results are reported in Figure~\ref{fig:gibbs}. We can see that L2HMC easily mixes between modes (3, 5, 8, and plausibly 9 in the figure) while the approximate posterior gets stuck on the same reconstructed digit (3 in the figure). \begin{figure} \centering \begin{subfigure}[b]{0.7\linewidth} \includegraphics[width=\textwidth]{bg_l2hmc_bin_new.png}\\ \includegraphics[width=\textwidth]{bg_q_bin_new.png} \caption{Block Gibbs inpainting of the top half of an MNIST digit, using \textit{(top)} L2HMC as a posterior sampler, and \textit{(bottom)} $q_\psi$ as a posterior sampler. } \label{fig:gibbs} \end{subfigure} \begin{subfigure}[b]{0.25\linewidth} \includegraphics[width=\textwidth]{bimodal_posterior.png} \caption{Non-Gaussian posterior} \label{fig:bimodal} \end{subfigure} \vskip -0.8em \caption{\small Demonstrations of the value of a more expressive posterior approximation. } \end{figure} \textbf{Visualization of the posterior} After training a decoder with L2HMC, we randomly choose an element $x_0 \in \mathcal{D}$ and run $512$ parallel L2HMC chains for $20,000$ Metropolis-Hastings steps. We then find the direction of highest variance, project the samples along that direction and show a histogram in Figure~\ref{fig:bimodal}. This plot shows non-Gaussianity in the latent space for the posterior. Using our improved sampler enables the decoder to make use of a more expressive posterior, and enables the encoder to sample from this non-Gaussian posterior. \section{Introduction} High-dimensional distributions that are only analytically tractable up to a normalizing constant are ubiquitous in many fields.
For instance, they arise in protein folding \citep{schutte1999direct}, physics simulations \citep{olsson1995two}, and machine learning \citep{andrieu2003introduction}. Sampling from such distributions is a critical task for learning and inference \citep{mackay2003information}; however, it is an extremely hard problem in general. Markov Chain Monte Carlo (MCMC) methods promise a solution to this problem. They operate by generating a sequence of correlated samples that converge in distribution to the target. This convergence is most often guaranteed through detailed balance, a sufficient condition for the chain to have the target equilibrium distribution. In practice, for any proposal distribution, one can ensure detailed balance through a Metropolis-Hastings \citep{hastings1970monte} accept/reject step. Despite theoretical guarantees of eventual convergence, in practice convergence and mixing speed depend strongly on choosing a proposal that works well for the task at hand. What's more, it is often more art than science to know when an MCMC chain has converged (``burned-in''), and when the chain has produced a new uncorrelated sample (``mixed''). Additionally, the reliance on detailed balance, which assigns equal probability to the forward and reverse transitions, often encourages random-walk behavior and thus slows exploration of the space \citep{ichiki2013violation}. For densities over continuous spaces, Hamiltonian Monte Carlo \citep[HMC; ][]{duane1987hybrid,neal2011mcmc} introduces independent, auxiliary momentum variables, and computes a new state by integrating Hamiltonian dynamics. This method can traverse long distances in state space with a single Metropolis-Hastings test. This is the state-of-the-art method for sampling in many domains. However, HMC can perform poorly in a number of settings. While HMC mixes quickly spatially, it struggles at mixing across energy levels due to its volume-preserving dynamics.
HMC also does not work well with multi-modal distributions, as the probability of sampling a large enough momentum to traverse a very low-density region is negligibly small. Furthermore, HMC struggles with ill-conditioned energy landscapes \citep{girolami2011riemann} and deals poorly with rapidly changing gradients \citep{pmlr-v32-sohl-dickstein14}. Recently, probabilistic models parameterized by deep neural networks have achieved great success at {\em approximately} sampling from highly complex, multi-modal {\em empirical} distributions \citep{kingma2013auto,rezende2014stochastic,goodfellow2014generative,bengio2014deep,sohl2015deep}. Building on these successes, we present a method that, given an {\em analytically} described distribution, automatically returns an {\em exact} sampler with good convergence and mixing properties, from a class of highly expressive parametric models. The proposed family of samplers is a generalization of HMC; it transforms the HMC trajectory using parametric functions (deep networks in our experiments), while retaining theoretical guarantees with a tractable Metropolis-Hastings accept/reject step. The sampler is trained to minimize a variation on expected squared jumped distance (similar in spirit to \citet{pasarica2010adaptively}). Our parameterization reduces easily to standard HMC. It is further capable of emulating several common extensions of HMC such as within-trajectory tempering \citep{neal1996sampling} and diagonal mass matrices \citep{bennett1975mass}. We evaluate our method on distributions where HMC usually struggles, as well as on the real-world task of training latent-variable generative models. Our contributions are as follows: \begin{itemize} \item We introduce a generic training procedure which takes as input a distribution defined by an energy function, and returns a fast-mixing MCMC kernel. \item We show significant empirical gains on various distributions where HMC performs poorly.
\item We finally evaluate our method on the real-world task of training and sampling from a latent variable generative model, where we show improvement in the model's log-likelihood, and greater complexity in the distribution of posterior samples. \end{itemize} \subsubsection{Intuition} To give intuition into these terms, the scaling applied to the momentum can enable, among other things, acceleration in low-density zones, to facilitate mixing between modes. The scaling term applied to the gradient of the energy may allow better conditioning of the energy landscape (e.g., by learning a diagonal inertia tensor), or partial ignoring of the energy gradient for rapidly oscillating energies. \section{L2HMC: Training MCMC Samplers} \label{sec:l2hmc} In this section, we describe our proposed method L2HMC (for `Learning To Hamiltonian Monte Carlo'). Given access to only an energy function $U$ (and not samples), L2HMC learns a parametric leapfrog operator $\mathbf{L}_\theta$ over an augmented state space. We begin by describing what desiderata we have for $\mathbf{L}_\theta$, then go into detail on how we parameterize our sampler. Finally, we conclude this section by describing our training procedure. \subsection{Augmenting HMC} HMC is a powerful algorithm, but it can still struggle even on very simple problems. For example, a two-dimensional multivariate Gaussian with an ill-conditioned covariance matrix can take arbitrarily long to traverse (even if the covariance is diagonal), whereas it is trivial to sample directly from it. Another problem is that HMC can only move between energy levels via a random walk \citep{neal2011mcmc}, which leads to slow mixing in some models. Finally, HMC cannot easily traverse low-density zones. For example, given a simple Gaussian mixture model, HMC cannot mix between modes without recourse to additional tricks, as illustrated in Figure~\ref{fig:chains}. 
These observations determine the list of desiderata for our learned MCMC kernel: \emph{fast mixing}, \emph{fast burn-in}, \emph{mixing across energy levels}, and \emph{mixing between modes}. While pursuing these goals, we must take care to ensure that our proposal operator retains two key features of the leapfrog operator used in HMC: it must be invertible, and the determinant of its Jacobian must be tractable. The leapfrog operator satisfies these properties by ensuring that each sub-update only affects a subset of the variables, and that no sub-update depends nonlinearly on any of the variables being updated. We are free to generalize the leapfrog operator in any way that preserves these properties. In particular, we are free to translate and rescale each sub-update of the leapfrog operator, so long as we are careful to ensure that these translation and scale terms do not depend on the variables being updated. \subsubsection{State Space} As in HMC, we begin by augmenting the current state $x\in\mathbb{R}^n$ with a continuous momentum variable $v\in\mathbb{R}^n$ drawn from a standard normal. We also introduce a binary direction variable $d\in\{-1,1\}$, drawn from a uniform distribution. We will denote the complete augmented state as $\xi\triangleq (x, v, d)$, with probability density $p(\xi) = p(x)p(v)p(d)$. Finally, to each step $t$ of the operator $\mathbf{L}_\theta$ we assign a fixed random binary mask $m^t\in\{0,1\}^n$ that will determine which variables are affected by each sub-update. We draw $m^t$ uniformly from the set of binary vectors satisfying $\sum_{i=1}^n m^t_i = \lfloor\frac{n}{2}\rfloor$, that is, half of the entries of $m^t$ are 0 and half are 1. For convenience, we write $\bar{m}^t = \mathds{1} - m^t$ and $x_{m^t} = x\odot m^t$ ($\odot$ denotes element-wise multiplication, and $\mathds{1}$ the all ones vector). 
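As a concrete, purely illustrative sketch of this augmentation, the auxiliary variables and per-step masks could be drawn as follows with NumPy; all function names here are ours, not from the paper:

```python
import numpy as np

def sample_momentum_direction(n, rng):
    """Draw the auxiliary parts of the augmented state xi = (x, v, d):
    momentum v ~ N(0, I) and direction d ~ Uniform({-1, 1})."""
    v = rng.standard_normal(n)
    d = rng.choice([-1, 1])
    return v, d

def sample_mask(n, rng):
    """Fixed binary mask m^t with exactly floor(n/2) ones; returns m^t
    and its complement 1 - m^t."""
    m = np.zeros(n, dtype=int)
    m[rng.choice(n, size=n // 2, replace=False)] = 1
    return m, 1 - m
```

In an implementation, one mask per leapfrog step would be drawn once at initialization and then held fixed, matching the paper's description of $m^t$ as fixed per step $t$.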
\subsubsection{Update Steps} \input{leapfrogs_eq_new} The corresponding integrator for $d=-1$ is given in Appendix \ref{app:rev op}; it essentially just inverts the updates in equations \ref{eq:vupdate}, \ref{eq:xupdate} and \ref{eq:vupdate2}. For all experiments, the functions $Q, S, T$ are implemented using multi-layer perceptrons with shared weights, and we encode the current time step in the MLP input. Our leapfrog operator $\mathbf{L}_\theta$ corresponds to running $M$ steps of this modified leapfrog, $\mathbf{L}_\theta\xi = \mathbf{L}_\theta(x, v, d) = (x''^{\times M}, v''^{\times M}, d )$, and our flip operator $\mathbf F$ reverses the direction variable $d$, $\mathbf F \xi = (x,v,-d)$. Written in terms of these modified operators, our proposal and acceptance probability are identical to those for standard HMC. Note, however, that this parameterization enables learning non-volume-preserving transformations, as the determinant of the Jacobian is a function of $S_x$ and $S_v$ that does not necessarily evaluate to $1$. This quantity is derived in Appendix \ref{app:jac det}. \subsubsection{MCMC Transitions} For convenience, we denote by $\mathbf{R}$ an operator that re-samples the momentum and direction. I.e., given $\xi = (x, v, d)$, $\mathbf{R}\xi = (x, v', d')$ where $v' \sim \mathcal{N}(0, I), d' \sim\mathcal{U}\left(\{-1, 1\}\right)$. Sampling thus consists of alternating applications of $\mathbf F \mathbf{L}_\theta$ and $\mathbf R$, via the following two steps, each of which is a Markov transition that satisfies detailed balance with respect to $p$: \begin{enumerate} \item $\xi' = \mathbf F \mathbf{L}_\theta \xi$ with probability $A(\mathbf F \mathbf{L}_\theta \xi|\xi)$ (Equation \ref{eq accept}), otherwise $\xi' = \xi$. \item $\xi' = \mathbf R \xi$ \end{enumerate} This parameterization is effectively a generalization of standard HMC: it is non-volume preserving, has learnable parameters, and easily reduces to standard HMC for $Q, S, T = 0$.
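The two transition steps above can be sketched generically. Here `leapfrog` stands in for $M$ steps of the learned integrator and is assumed to return the proposal together with the log-determinant of its Jacobian; all names are illustrative:

```python
import numpy as np

def l2hmc_transition(x, v, d, leapfrog, log_p, rng):
    """One sampling iteration: an F L_theta proposal with a
    Metropolis-Hastings test, followed by the resampling operator R.

    log_p(x, v) is the log-density of the augmented target p(x)p(v);
    the Jacobian correction accounts for non-volume preservation.
    """
    # Step 1: propose xi' = F L_theta xi and accept/reject.
    x_prop, v_prop, log_jac = leapfrog(x, v, d)
    log_accept = log_p(x_prop, v_prop) - log_p(x, v) + log_jac
    if np.log(rng.uniform()) < min(0.0, log_accept):
        x, v, d = x_prop, v_prop, -d  # the flip operator F negates d
    # Step 2: the operator R resamples momentum and direction.
    v = rng.standard_normal(x.shape)
    d = rng.choice([-1, 1])
    return x, v, d
```

With an identity `leapfrog` (and zero log-Jacobian) this reduces to resampling the auxiliary variables, mirroring how the full scheme reduces to standard HMC when $Q, S, T = 0$.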
\subsection{Loss and Training Procedure}\label{sec loss} We need some criterion to train the parameters $\theta$ that control the functions $Q$, $S$, and $T$. We choose a loss designed to reduce mixing time. Specifically, we aim to minimize lag-one autocorrelation. This is equivalent to maximizing expected squared jumped distance \citep{pasarica2010adaptively}. For $\xi, \xi'$ in the extended state space, we define $\delta(\xi', \xi) = \delta((x', v', d'), (x, v, d)) = ||x-x'||^2_2$. Expected squared jumped distance is thus $\mathbb{E}_{\xi \sim p(\xi)}\left[\delta(\mathbf{F}\mathbf{L}_\theta\xi, \xi)A(\mathbf{F}\mathbf{L}_\theta\xi|\xi)\right]$. However, this objective alone need not encourage mixing across the entire state space. Indeed, maximizing it can lead to regions of state space where almost no mixing occurs, so long as the average squared distance traversed remains high. To optimize both for typical and worst-case behavior, we include a reciprocal term in the loss, \begin{align} \textstyle \ell_\lambda(\xi, \xi', A(\xi'|\xi)) = \frac{\lambda^2}{\delta(\xi, \xi')A(\xi'|\xi)} - \frac{\delta(\xi, \xi')A(\xi'|\xi)}{\lambda^2} , \end{align} where $\lambda$ is a scale parameter, capturing the characteristic length scale of the problem. The second term encourages typical moves to be large, while the first term strongly penalizes the sampler if it is ever in a state where it cannot move effectively: a small $\delta(\xi, \xi')$ results in a large loss value. We train our sampler by minimizing this loss over both the target distribution and initialization distribution.
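A minimal sketch of this per-sample loss; the small epsilon guard against division by zero is our addition, not from the text:

```python
def l2hmc_loss(delta, accept_prob, lam, eps=1e-8):
    """l_lambda(xi, xi', A) = lam^2 / (delta * A) - (delta * A) / lam^2.

    delta: squared jumped distance ||x - x'||^2
    accept_prob: acceptance probability A(xi'|xi)
    lam: characteristic length scale of the problem
    """
    jump = delta * accept_prob
    return lam**2 / (jump + eps) - jump / lam**2
```

The reciprocal term dominates whenever the proposed move is short or unlikely to be accepted, which is exactly the worst-case penalty described above.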
Formally, given an initial distribution $\pi_0$ over $\mathcal{X}$, we define $q(\xi) = \pi_0(x)\mathcal{N}(v;0, I)p(d)$, and minimize \begin{equation} \mathcal{L}(\theta) \triangleq \mathbb{E}_{p(\xi)}\left[ \ell_\lambda(\xi, \mathbf{F}\mathbf{L}_\theta\xi, A(\mathbf{F}\mathbf{L}_\theta\xi|\xi)) \right] + \lambda_b \mathbb{E}_{q(\xi)}\left[ \ell_\lambda(\xi, \mathbf{F}\mathbf{L}_\theta\xi, A(\mathbf{F}\mathbf{L}_\theta\xi|\xi)) \right] . \end{equation} The first term of this loss encourages mixing as it considers our operator applied to draws from the target distribution; the second term rewards fast burn-in; $\lambda_b$ controls the strength of the `burn-in' regularization. Given this loss, we describe our full training procedure in Algorithm~\ref{alg:training}. It is important to note that each training iteration can be done with only one pass through the network and can be efficiently batched. We further emphasize that this training procedure can be applied to any learnable operator whose Jacobian's determinant is tractable, making it a general framework for training MCMC proposals. \input{alg_training} \section{Related Work} Adaptively modifying proposal distributions to improve convergence and mixing has been explored in the past \citep{andrieu2008tutorial}. In the case of HMC, prior work has reduced the need to choose step size \citep{neal2011mcmc} or number of leapfrog steps \citep{hoffman2014no} by adaptively tuning those parameters. \citet{salimans2015markov} proposed an alternate scheme based on variational inference. We adopt the much simpler approach of \citet{pasarica2010adaptively}, who show that choosing the hyperparameters of a proposal distribution to maximize expected squared jumped distance is both principled and effective in practice. Previous work has also explored applying models from machine learning to MCMC tasks.
Kernel methods have been used both for learning a proposal distribution \citep{sejdinovic2014kernel} and for approximating the gradient of the energy \citep{strathmann2015gradient}. In physics, Restricted and semi-Restricted Boltzmann machines have been used both to build approximations of the energy function which allow more rapid sampling \citep{liu2017self,huang2017accelerated}, and to motivate new hand-designed proposals \citep{wang2017can}. Most similar to our approach is recent work from \citet{song2017nice}, which uses adversarial training of a volume-preserving transformation that is subsequently used as an MCMC proposal distribution. While promising, this technique has several limitations. It does not use gradient information, which is often crucial to maintaining high acceptance rates, especially in high dimensions. It also can only indirectly measure the quality of the generated sample using adversarial training, which is notoriously unstable, suffers from ``mode collapse'' (where only a portion of a target distribution is covered), and often requires objective modification to train in practice \citep{arjovsky2017wasserstein}. Finally, since the proposal transformation preserves volume, it can suffer from the same difficulties in mixing across energy levels as HMC, as we illustrate in Section~\ref{sec:experiments}. To compute the Metropolis-Hastings acceptance probability for a deterministic transition, the operator must be invertible and have a tractable Jacobian. Recent work \citep{dinh2016density} introduces RNVP, an invertible transformation that operates by, at each layer, modifying only a subset of the variables by a function that depends solely on the remaining variables. This is exactly invertible with an efficiently computable Jacobian. Furthermore, by chaining enough of these layers, the model can be made arbitrarily expressive. This parameterization will directly motivate our extension of the leapfrog integrator in HMC.
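To make the masking idea concrete, here is a sketch of a single RNVP-style affine coupling layer in NumPy; the simple closures standing in for the learned networks $s$ and $t$ in the usage below are ours:

```python
import numpy as np

def coupling_layer(x, mask, s_fn, t_fn):
    """One RNVP-style affine coupling layer (forward pass).

    Variables selected by `mask` pass through unchanged; the rest are
    scaled and translated by functions of the unchanged part, making the
    layer exactly invertible with a triangular Jacobian.
    """
    x_fixed = x * mask
    s = s_fn(x_fixed)
    t = t_fn(x_fixed)
    y = x_fixed + (1 - mask) * (x * np.exp(s) + t)
    log_det_jac = np.sum((1 - mask) * s)  # log|det J| is a simple sum
    return y, log_det_jac

def coupling_layer_inverse(y, mask, s_fn, t_fn):
    """Exact inverse: the fixed part is unchanged, so s and t can be
    recomputed from it and undone."""
    y_fixed = y * mask
    s = s_fn(y_fixed)
    t = t_fn(y_fixed)
    return y_fixed + (1 - mask) * ((y - t) * np.exp(-s))
```

Because each layer touches only the unmasked coordinates, both the inverse and the log-determinant come essentially for free, which is the property the modified leapfrog updates exploit.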
\section{Individuals} \label{sec:individual} In the individual scope, no two entities have any initial subjective reputation value of each other and all reputations are based on the work the entities perform. However, once an entity has attained the trust of some other entity, it might propagate this trust value to peers, depending on the underlying mechanism. A physical social phenomenon resembling such a situation is a networking event. During networking events, no two people have any initial measures of trust of each other, but any two people may grow to trust each other through reciprocity. The gradual creation of such trust relations may be used to form a trust graph. The assessed trustworthiness may then be shared throughout one's `network', such that someone can obtain a reputation value of an entity with whom they did not have direct interaction. While direct experience with an entity is the most reliable metric to assess the trustworthiness of an entity \cite{sabater2005review}, sociological research has found that reputational values are often spread through gossip \cite{feinberg2014gossip}. Recipients of such reputational values have been shown to use these to selectively interact with cooperative rather than selfish individuals. An example of a reputation mechanism adopting this social behaviour in an online setting is ARRep \cite{arrep}. \subsection*{ARRep} ARRep (adaptive and robust reputation mechanism) \cite{arrep} is a social reputation mechanism which combines direct experience with reported experiences from other entities. While ARRep is proposed for usage in a peer-to-peer environment, the resemblance with the social phenomenon as depicted previously is evident. Furthermore, ARRep applies a heuristic for improving the accuracy of reported experiences, by giving more weight to entities who have had more experiences.
Given some entity $i$ assessing the trustworthiness of some entity $j$, $j$'s overall reputation value $R_{ij}$ can be calculated according to: $$R_{ij} = \alpha \cdot R_{ij}^D + (1 - \alpha) \cdot R_{ij}^R$$ where $R_{ij}^D$ represents the reputation value extracted from $i$'s direct experience with $j$, $R_{ij}^R$ corresponds to the reputation value extracted from the recommendation of peers, and $\alpha$ represents the confidence factor of $i$'s direct experience. For some threshold $M > 0$, $\alpha$ is equal to the ratio between the number of experiences and $M$ while the number of experiences is lower than $M$, otherwise $\alpha = 1$. The value of $R_{ij}^D$ corresponds to: $$R_{ij}^D = \frac{\sum_{k = 1}^{n_{ij}} (\lambda^{n_{ij} - k} \cdot ex^k_{ij})}{\sum_{k = 1}^{n_{ij}} \lambda^{n_{ij} - k}}$$ where $n_{ij}$ is the total number of interactions between $i$ and $j$, $\lambda$ is some decay value such that $0 < \lambda \leq 1$ and $ex$ is a function returning either 1 (good) or 0 (bad) depending on the experience of interactions between $i$ and $j$ from $i$'s perspective. Moreover, the recommended reputation value $R_{ij}^R$ is calculated, such that: $$R_{ij}^R = \frac{\sum_{i \neq k} (C_{ik} \cdot R_{ik}^D \cdot \eta^{1/n_{kj}})}{\sum_{i \neq k} C_{ik}}$$ where $\eta$ denotes some value $0 < \eta \leq 1$ and $C_{ik}$ corresponds to the recommendation credibility based on the similarity between entity $i$ and the recommender $k$ (see \cite{arrep} for details). During evaluation, it was found that ARRep outperforms existing work \cite{peertrust} in a number of attacks for which peer-to-peer networks are susceptible. 
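As a purely illustrative sketch of the formulas above (the recommended component $R^R_{ij}$ is computed elsewhere and passed in as a value; all names are ours):

```python
def direct_reputation(experiences, lam):
    """R^D_ij: a decayed average of binary outcomes (1 = good, 0 = bad),
    with the most recent experiences weighted most heavily.

    experiences: outcomes ex^1 .. ex^n, oldest first; 0 < lam <= 1.
    """
    n = len(experiences)
    num = sum(lam ** (n - k) * ex for k, ex in enumerate(experiences, 1))
    den = sum(lam ** (n - k) for k in range(1, n + 1))
    return num / den

def overall_reputation(r_direct, r_recommended, n_experiences, M):
    """R_ij = alpha * R^D_ij + (1 - alpha) * R^R_ij, with the confidence
    factor alpha = min(n / M, 1): more direct experience means less
    reliance on recommendations."""
    alpha = min(n_experiences / M, 1.0)
    return alpha * r_direct + (1 - alpha) * r_recommended
```

Note how the decay $\lambda$ makes a recent bad experience hurt more than an old one, which is what lets ARRep react to entities that change behaviour over time.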
More specifically, ARRep has been shown to perform better in \textit{on-off attacks}, \textit{bad mouthing attacks} and \textit{collusive cheat attacks}.\\[1\baselineskip] There exist several reputation mechanisms similar to ARRep, focused on the same principles of combining direct experience with recommendations \cite{peertrust, gauthier2004dealing, certifiedreputation, supportingtrustinvirtualcommunities, keung2008using}. Continuing on the phenomenon in which reputation may be passed on through gossiping, an example of a reputation mechanism which directly applies this is PageRank. PageRank uses the number of references an entity receives to determine its reputation compared to others. This behaviour is again very similar to that during networking events. PageRank has been used for assigning reputation values in social networks \cite{hogg2004enhancing} or to measure academic reputation through citation graphs \cite{massucci2019measuring}. \subsection*{PageRank} In the early ages of the internet, Google was among the first to adopt a reputation mechanism. Larry Page, Google's co-founder, introduced PageRank \cite{page1999pagerank}: an algorithm used to rank search engine results based on relevance. While PageRank might no longer be Google's only reputation mechanism, it is the basis of numerous other reputation mechanisms \cite{zhang2017pagerank, bi2008trust, wang2008research, pujol2002extracting}. PageRank considers the internet as a network of web pages connected through their links. If many pages link to another page, it has a higher reputation and therefore a higher `rank' on the search results page. PageRank's algorithm operates in rounds: initially, every page has the same amount of `rank'. Every subsequent round, the rank is distributed uniformly over all outgoing links to other web pages. An example execution of PageRank is displayed in Figure \ref{fig:pagerank}, in which the edges represent links between web pages.
In round $n$, pages (or entities) have a specific amount of rank. In round $n+1$, the rank is propagated over the outgoing edges. Web page 3 passes its 0.7 rank over its only outgoing link while receiving 0.5 from web page 1 and $\frac{1.2}{2}$ from web page 2. Web page 1's rank is lowered and web page 2's rank is stationary. This process continues until the amount of rank for all pages becomes stationary. Once the network reaches a stationary state, extracting the final amount of rank per web page is trivial. One may note that this algorithm shows high similarity to finding the limiting probabilities of a Markov chain. Let $A$ be a matrix such that $A_{i,j} = \frac{1}{|N_j|}$ if $(j,i,v) \in E$ and $A_{i,j} = 0$ otherwise. Note that the value $v$ is not used by PageRank, as it utilizes the notion of global reputation, i.e. the reputation is equivalent from all perspectives. Let $R_i$ be the reputation value of web page $i$, such that: $$R_i = c\sum_{j \in B_i}\frac{R_j}{|N_j|}$$ where $B_i = \{j \in N\ |\ i \in N_j\}$ is the set of pages linking to $i$ and $c$ is a factor used for normalization, ensuring the total amount of `rank' remains constant. When $R$ reaches a stationary state, i.e. it does not change anymore, it is an eigenvector of matrix $A$, such that $R = cAR$. However, if the trust graph takes the shape of a directed cyclic graph, loops with no outgoing edges may occur, causing the accumulation of rank over time. To tackle this issue, Page introduced a new formula for the reputational value $R'_i$ of web page $i$, such that $R'_i = c\left(\sum_{j \in B_i}\frac{R'_j}{|N_j|} + S_i\right)$, where $||R'||_1 = 1$ and $S$ is a source vector corresponding to the rank assigned directly to each page. As we have that $||R'||_1 = 1$, $c$ must be reduced when $S$ is an all-positive vector, implying that $c$ is a decay factor. The original version of PageRank as described above is prone to Sybil attacks, as has been shown in many studies \cite{cheng2006manipulability, dinh2008sybil, danezis2006network, chang2012survey}.
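The round-based rank propagation can be sketched as a power iteration. The damping value and the tiny example graph in the test are illustrative, and dangling pages are handled by spreading their rank uniformly (a common convention, not spelled out in the text):

```python
import numpy as np

def pagerank(adj, c=0.85, tol=1e-10):
    """Iterate rank propagation until the ranks become stationary.

    adj[i][j] = 1 if page i links to page j. Each round, every page
    splits its rank evenly over its outgoing links; c acts as the decay
    factor, with the remaining (1 - c) spread uniformly as source rank.
    """
    n = len(adj)
    out_deg = adj.sum(axis=1)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = np.full(n, (1.0 - c) / n)
        for i in range(n):
            if out_deg[i] > 0:
                r_new += c * r[i] * adj[i] / out_deg[i]
            else:
                r_new += c * r[i] / n  # dangling page: spread uniformly
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```

Note that nothing in this computation distinguishes genuine links from links created at will by an attacker, which is exactly what a link-farming Sybil attack exploits.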
Such an attack would introduce many new entities who all link to the attacker, thereby increasing its reputation. This process is also known as `link farming' \cite{danezis2006network}. The original PageRank algorithm does not by itself contain any defense mechanisms against Sybil attacks. \\[1\baselineskip] PageRank makes use of the notion that reputation/rank flows through a network over directed edges. However, this is not the only use case of directed edges. Another example is maxflow algorithms, in which the weights of the edges can be exploited to find the maximum \textit{flow rate} between entities. BarterCast is a reputation mechanism which makes use of such a maxflow algorithm to help determine an entity's subjective reputation and trustworthiness. \subsection*{BarterCast} Designed for peer-to-peer settings and deployed in Tribler \cite{pouwelse2008tribler}, BarterCast \cite{meulpolder2008bartercast} integrates up- and download rates of peers into a directed graph. By using a maxflow algorithm (e.g. Ford-Fulkerson \cite{ford_fulkerson_1957}), one can find the net up- and download rates among (in)directly connected peers. These rates are combined via the arctan function, in which -1 corresponds to a lower bound for the amount of reputation, 0 represents a neutral position (i.e. newcomers), and 1 is the upper bound on the reputation one could attain. More specifically, entities employing the BarterCast protocol aggregate their own up- and download speeds grouped by the entity with whom they are currently transferring files. These statistics are shared with known peers periodically, through messages known as the \textit{BarterCast messages}. Note that entities do not propagate messages they receive, implying that any given entity merely accumulates information of entities at most 2 \textit{hops} away, which is in turn used for creating a \textit{local view of the network} in the form of a directed graph.
When any entity \textit{i} wants to transfer files to/from entity \textit{j}, it performs a maxflow algorithm to determine the net up- and download rate between itself and entity \textit{j}, which are represented by $f_{ij}$ and $f_{ji}$, respectively. Using these values, a reputation value $R_{ij}$ is computed, such that: $$R_{ij} = \frac{\arctan(\gamma(f_{ji} - f_{ij}))}{\pi/2}$$ where $\gamma$ is a scaling factor. The usage of the arctan function serves a double purpose. Firstly, it bounds the reputation such that $R_{ij} \in (-1, 1)$. Secondly, it ensures that changes in the net up- and download rate on a lower scale have more impact compared to a larger scale, e.g. the difference between 0KB and 100KB affects the reputation more significantly compared to 800KB and 900KB, which eases the process for newcomers. The resulting trust graph has strong similarities with real social networks, as entities can have a high reputation from entity $i$'s perspective while having a low reputation from entity $j$'s point of view, depending on the network topology. Along with a metric for reputation calculation, BarterCast is designed with a built-in resilience against the purposeful spread of false information. As all entities keep track of both their up- and downloading rate, the outcome of any correctly implemented maxflow algorithm is (upper) bounded by the statistics measured by the entity itself. However, reliance on a maxflow algorithm is accompanied by a risk of Sybil attacks, as maxflow algorithms are prone to Sybil attacks as shown by \cite{nasrulin2022meritrank}. Through parallel attacks, adversaries can trivially exploit this vulnerability and obtain infinite resources. \\[1\baselineskip]All previously discussed reputation mechanisms are part of the family of \textit{symmetric reputation mechanisms}. In such reputation mechanisms, one's reputation only depends on the topology of the trust graph, which makes them generally prone to Sybil attacks \cite{levine2006survey}.
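A minimal sketch of this scoring function; the maxflow values themselves are assumed to be computed elsewhere, e.g. by Ford-Fulkerson:

```python
import math

def bartercast_reputation(f_ji, f_ij, gamma=1.0):
    """Reputation of j from i's perspective, given the maxflow f_ji of
    work j did for i and the reverse flow f_ij; bounded in (-1, 1)."""
    return math.atan(gamma * (f_ji - f_ij)) / (math.pi / 2)
```

Because the score depends only on the topology and weights of the shared flow graph, it has exactly the symmetric character discussed above, which is what a parallel Sybil attack exploits.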
However, such reputation mechanisms can be extended with defense mechanisms to increase their overall Sybil proofness. An example of a defensive mechanism against Sybil attacks is MeritRank, which wraps existing social symmetric reputation mechanisms and adds additional constraints, providing these mechanisms with Sybil attack tolerance. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{graphics/pagerank.jpg} \caption{Two rounds of the PageRank algorithm.} \label{fig:pagerank} \end{figure} \subsection*{MeritRank} MeritRank \cite{nasrulin2022meritrank} is a novel reputation mechanism whose main goal is to bound the gain of Sybil attacks. That is, MeritRank does not attempt to solve Sybil attacks, but merely defines a number of strategies towards tolerating them. Furthermore, MeritRank generically assumes the existence of an underlying implementation for communication and reputation calculation using a `flow-based' network, much like the implementation used by PageRank. Trust graphs satisfying MeritRank's constraints are shown to be Sybil tolerant. That is, for some value $0 < c < \infty$ and Sybil attack $\sigma_S$, the following holds: $$\lim_{|S| \to \infty }\frac{\omega^+(\sigma_S)}{\omega^-(\sigma_S)} < c$$ where $S$ is the set of Sybils, $\omega^+$ is a function returning the gain for a Sybil attack and $\omega^-$ is a function returning the amount of loss for a Sybil attack. By defining certain properties for the trust graph, MeritRank is capable of bounding the amount of gain an attacker can get from attacking the network. Such an attack is also known as a weakly beneficial Sybil attack \cite{stannat2021achieving}, which contrasts with an attack where an adversary can obtain infinite gain, also known as a strongly beneficial Sybil attack. The constraints which MeritRank poses on the trust graph are relative feedback/reputation, connectivity decay, transitivity decay and epoch decay.
The aforementioned constraints are a set of intuitive measures to bound the gain of an adversary. Relative feedback limits the amount of reputation an entity can give to some other entity by its own degree. More specifically, the updated function for assigning reputation is defined as: $$\bar{w}(i, j) = \frac{w(i,j)}{\sum_{k \in N_i} w(i, k)}$$ where $w$ is the original function for assigning reputation. Note that the sum of reputation/feedback an entity assigns to its neighbours always equals 1. Transitivity decay defines a probability $\alpha$ of stopping a random walk (see the Random Surfer model \cite{page1999pagerank}) used for reputation determination for any given entity. Furthermore, connectivity decay defines a constant $0 \leq \beta \leq 1$ and a ratio $t$, such that if some entity $i$ is (transitively) connected to some entity $j$ through some entity $k$ on at least a ratio $t$ of all possible paths, $(1-\beta)$ serves as a punishment factor decreasing the reputation of entity $j$ from $i$'s perspective. The connectivity decay constraint's main purpose is to identify and punish separate components. Lastly, the epoch decay defines a constant $\gamma$, which indicates the reputation decay with each epoch of the graph, incentivizing entities to keep performing work to receive reputation. MeritRank has been evaluated on all constraints separately. It has been shown that ``transitivity decay and connectivity decay can provide a desirable level of Sybil tolerance'' \cite{nasrulin2022meritrank}. On the other hand, it was found that epoch decay, when naively implemented, may prefer new reputation assignments over existing reputation assignments. As aforementioned, MeritRank does not provide resistance against Sybil attacks, but accepts their existence and introduces a number of possible strategies towards bounding the maximum gain an attack may muster.
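As an illustrative sketch, the relative-feedback constraint amounts to a row normalisation of the raw feedback weights:

```python
def relative_feedback(raw_weights):
    """MeritRank's relative-feedback constraint: normalise the feedback
    an entity hands out so that it always sums to 1, regardless of how
    much raw work its neighbours (or its Sybils) report.

    raw_weights: dict mapping neighbour j -> raw feedback w(i, j).
    """
    total = sum(raw_weights.values())
    if total == 0:
        return {j: 0.0 for j in raw_weights}
    return {j: w / total for j, w in raw_weights.items()}
```

Adding extra Sybil neighbours thus dilutes every individual share instead of increasing the total reputation an entity can distribute, which is the bounding effect the constraint is after.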
\\[1\baselineskip] Individually-based social reputation mechanisms are often the most prone to Sybil attacks, as there exists no other external notion on which to base reputation calculations. While MeritRank proposes a number of strategies towards tolerating such attacks, it recognizes their existence and its inability to prevent them. Arguably the most effective defense against Sybil attacks is the usage of fixed identities and disabling the arbitrary creation of new virtual entities. An example of such a fixed identity is the European Digital Identity \cite{eudigitalid}, which will enable EU residents to claim a single online identity. An external party verifying or providing an entity's identity is said to be the only way of preventing Sybil attacks \cite{levine2006survey}, as identities cannot instantly be created without the external party's permission and verification. \section{Neighbourhoods} \label{sec:neighbourhoods} The final scope is focused on the notion of neighbourhoods. In a social context, one's neighbourhood often determines their opportunities and success in later stages of life \cite{neighbourhoodthing}. Moreover, social groups often arise from these neighbourhoods. These groups may determine one's reputation, as it has been shown that social groups are often assigned a single reputational value \cite{masuda2012ingroup}. Similar concepts have been applied in the design of reputation mechanisms. One such reputation mechanism is GroupRep, in which entity $j$'s reputation from entity $i$'s perspective may be determined by their group if no direct interaction has occurred. \subsection*{GroupRep} Based on the assumption that in large peer-to-peer networks, two peers will not often interact more than once, making it hard to profit from direct experiences between peers, GroupRep \cite{tian2006group} adopts the notion of groups to calculate reputational values.
By assuming that users with similar interests in a peer-to-peer environment have constructed virtual groups, GroupRep provides a framework for calculating reputational values between groups, between groups and peers and between peers. In GroupRep, the notion of a trust graph is applied on two scales. On the first scale, every node in the trust graph is a group of entities and the edges represent reputations from the group perspective. The second scale considers all nodes as individual entities, in which the edges represent reputation values based on direct experiences between entities. Moreover, GroupRep defines utility $u$ and costs $c$, which represent the gain and costs from interactions with other entities or entity groups. In general, reputation is calculated by $\frac{c_{ij} - u_{ij}}{c_{ij} + u_{ij}}$, where $c_{ij}$ represents the cumulative cost some entity or group $j$ has brought entity or group $i$ and $u_{ij}$ represents the cumulative utility. However, if $c_{ij} + u_{ij} = 0$, a fall-back policy is applied in which a path (on the group-based trust graph) is searched between $G(i)$ and $G(j)$, where $G$ is a function returning an entity's group. Note that for all groups along this path, including $G(i)$, the most trusted group is selected for each next step. The reputation of this path is equivalent to the minimum reputation edge on the path. However, if no such path exists, a stranger policy is applied, in which the reputation is calculated using the cumulative utility and cost for all previous interactions with strangers. Note that GroupRep will always first attempt to find direct reputation values on the trust graph on entity-level; however, if no such direct edge exists, the group reputation is used for determining a reputation value. After an interaction, entity $i$ updates its local information, creating an edge in the entity trust graph, and sends the rating to its group $G(i)$, which may then send the rating to group $G(j)$.
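The three-tier lookup described above can be sketched as follows; the formula and fall-back order follow the text, while the function and parameter names are ours:

```python
def path_reputation(edge_reputations):
    """The reputation of a group-level path equals its weakest edge."""
    return min(edge_reputations)

def grouprep_reputation(u_ij, c_ij, group_path=None, stranger_rep=0.0):
    """GroupRep score of entity/group j from i's perspective.

    u_ij, c_ij: cumulative utility and cost of direct interactions.
    group_path: edge reputations along the greedy max-trust path between
    G(i) and G(j), or None when no such path exists.
    stranger_rep: value produced by the stranger policy.
    """
    if u_ij + c_ij > 0:
        return (c_ij - u_ij) / (c_ij + u_ij)  # direct experience
    if group_path:
        return path_reputation(group_path)    # group-level fall-back
    return stranger_rep                       # stranger policy
```

The minimum-edge rule mirrors the intuition that a chain of group recommendations is only as trustworthy as its least trusted link.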
Furthermore, GroupRep introduces a methodology for detecting malicious entities by clustering entities within groups. By considering two entities similar when they assign similar reputations to the entities both have interacted with, clustering can take place. It is assumed that a maximal cluster of similar entities will take shape, in which all entities are deemed credible. GroupRep has been compared against two existing reputation mechanisms on performance against malicious collusive attacks. It was shown that GroupRep achieves a higher ratio of successful queries (the ratio of peers satisfied with the result of the interaction) and a higher satisfaction level, where satisfaction represents the average ratio of cumulative authentic file sizes to cumulative inauthentic file sizes. However, the scope of this evaluation was somewhat limited and did not include a comparison against any well-known reputation mechanisms.\\[1\baselineskip] While entities are still somewhat free to choose which group to join when using GroupRep, there also exist more discriminative approaches, which may be associated with origin-based discrimination. Such methodologies are commonly adopted in email spam measures where IP addresses are blacklisted. One such mechanism is IPGroupRep (the name similarity with GroupRep is coincidental), which aggressively groups IP addresses into blocks based on subnets and assigns single reputation values to these groups based on their behaviour. \subsection*{IPGroupRep} IPGroupRep \cite{ipgrouprep} is an aggressive reputation mechanism that calculates a reputation for IP blocks based on existing spam classifiers. It only considers groups of IP addresses, rather than leveraging individual reputations alongside a group reputation. In \cite{ipgrouprep}, it is suggested to cluster IP addresses into blocks of 256 by naively fixing the first 24 bits of all addresses in a block, similar to a 255.255.255.0 subnet mask.
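A minimal sketch of this /24 clustering and the per-block aggregation of spam feedback (our own illustrative code, assuming IPv4 addresses as dotted-quad strings) could look as follows:

```python
from collections import defaultdict

def ip_block(ip):
    """Group an IPv4 address into its /24 block by fixing the first
    24 bits, i.e. the first three octets (255.255.255.0 mask)."""
    return '.'.join(ip.split('.')[:3]) + '.0/24'

def aggregate_feedback(events):
    """events: (ip, is_spam) pairs from existing spam classifiers;
    returns per-block [spam, legitimate] feedback counts."""
    counts = defaultdict(lambda: [0, 0])
    for ip, is_spam in events:
        counts[ip_block(ip)][0 if is_spam else 1] += 1
    return dict(counts)
```

All 256 addresses sharing the first three octets map to the same block, so feedback about any one of them affects the reputation of the whole block.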
An IP block's reputation should be decreased when a spam message originating from this group is detected, while it should be increased upon sending legitimate messages. Note that IPGroupRep is itself neither capable of nor designed for detecting spam; rather, it combines the outputs of several existing spam detection mechanisms into a single reputation value. For each group, sums $r$ and $s$ are defined, representing the aggregation of positive and negative spam feedback, respectively, provided by the various spam detection mechanisms. IPGroupRep applies a beta distribution, where $\alpha = r + 1$ and $\beta = s + 1$, and takes the expected value $E(p)$ as the reputation value, such that: $$E(p) = \frac{r+1}{r+s+2}$$ If this value $E(p)$ is larger than some threshold $T_{threshold}$, the group can be assumed trustworthy. In an evaluation, this reputation mechanism was found to show very high precision and accuracy compared to existing reputation mechanisms used for the protection of mail servers. However, we argue that this method may negatively affect innocent parties within a group by disregarding individual reputations. A possible way to alleviate this is to decrease the group sizes or to automatically detect dynamic IP address blocks which may be used for spam \cite{udmap}.\\[1\baselineskip] While the usage of groups may be effective against spamming and the danger of strangers, it generalizes over individual entities and should be implemented cautiously, such that malicious entities cannot hide in highly reputed groups and enjoy their benefits. \section{Acquaintances} \label{sec:direct_relationships} In the scope of acquaintances, we consider social reputation mechanisms which rely on the existence of real relationships between entities. By leveraging these existing relationships, one may strengthen the defenses of online social reputation mechanisms.
An example of a social phenomenon leveraging existing relationships is \textit{vouching}: \textit{``to be able from your knowledge or experience to say that something is true''} \cite{cambridgedictionary}. In the context of reputation mechanisms, vouching may generally be used as a method of putting one's reputation at stake. More specifically, if some person (the \textit{voucher}) has vouched for someone else (the \textit{vouchee}) and this vouch turns out to be misplaced, the voucher loses their credibility. As a voucher willingly puts their reputation at stake for the vouchee, this suggests that the voucher has had prior external experience with the vouchee. In recent years, the government of the United Kingdom has composed a rigorous guide on how to use vouching in daily-life situations \cite{govuk_2020}. It describes how people can use vouching to verify one another's identity. For instance, a parent has the ability to vouch for their child's identity: they know their child well and are certain of the child's identity, inducing no risk in vouching for them. An example of a social mechanism employing vouching is Souche, which can be deployed on online social networks to protect real users against fake accounts, often created for malicious purposes such as spamming. \subsection*{Souche} Souche \cite{souche} is a vouch-based reputation mechanism developed partially by Microsoft\footnote{\url{https://microsoft.com/}}. Its main goal is to quickly distinguish between legitimate and illegitimate users in the context of online social communities, and to slow down any undetected malicious users. Souche has been evaluated in simulations using large anonymized email and Twitter\footnote{\url{https://twitter.com/}} datasets and has been shown to accurately identify 85\% of legitimate users at an early stage. Furthermore, Souche can relieve users of periodic humanity checks, such as CAPTCHAs, by only requiring a CAPTCHA upon registration.
Souche's main means for creating relationships between entities, i.e. users, is through implicit vouching. This process takes place by considering regular activities as vouches. As such, Souche defines a vouch through email conversations between two users: both users must have written each other at least two emails for a conversation to be considered a vouch. Moreover, when applying this approach to large datasets, it was found that a \textit{Giant Connected Component} (GCC) starts to take shape. Such a GCC is a large trust graph which contains 93\% of all users in the email dataset, with the remaining connected components being orders of magnitude smaller than the GCC. Souche crowdsources the detection of malicious accounts by assuming that malicious accounts are not included in the GCC. Souche defines a quota $q_i$ for each entity $i$ to determine whether an entity is allowed to vouch for some new entity. Every unit of time, this quota grows at rate $r$. An entity is allowed to vouch for some other entity when its quota is larger than 1. Naively, the quota can be defined as: $$q_i = (1+r)^{t-b_i} - c_i - 1$$ where $t$ is the current time, $b_i$ is the time at which entity $i$ joined the network, and $c_i$ indicates the number of entities entity $i$ has already vouched for. However, in order to match the rate at which online social networks grow, the growth rate $r$ should be configured to a small value, such as 0.001 with a time interval of 1 day. This implies that users are unable to vouch for any other users during their first registered year. To tackle this issue, Souche divides the GCC trust graph into subtrees, starting at the leaves, i.e. entities with no outbound vouches. An example of such a subtree can be found in Figure \ref{fig:souche}. In this particular example, A has vouched for both B and C, and C has vouched for E and F.
Note that if C were to be exposed as a malicious entity, it is evident that at least A, B, E, and F should be further investigated, as they share a close relation to C. Souche subtrees have a size of approximately 50 entities and have a single root. Within subtrees, entities can freely use the cumulative quota. More specifically, entity $i$ of subtree $T_i$ can vouch for some other entity when $\sum_{k \in T_i} q_k > 1$. In order to account for the usage of shared quota, the definition of the quota is finalized as: $$q_i = (1+r)^{t-b_i} - c_i - d_i - 1$$ where $d_i$ represents the quota used by other entities, retaining the total balance of quota within the network. Note that, due to the exponential growth of quota, older entities are assumed to be more trusted vouchers. Beyond sharing quota, the subtree data structure serves another purpose, namely assisting in the detection of malicious entities. While Souche itself does not focus on malicious entity detection, given an existing detection implementation, Souche can assist by marking an entity's parent, siblings, or descendants as suspicious. Another defense against malicious entities is the limited quota per time interval, preventing adversaries from vouching for other adversaries. Smaller trees result in less shared quota being available for malicious entities to claim. Finally, Sybil attacks may also be hampered by these features. \\[1\baselineskip] Another example of a study applying a vouching-based mechanism has been employed by the CouchSurfing platform \cite{couchsurfing}. This approach implements a more explicit method of vouching, requires more manual user interaction, and does not protect users from malicious and potentially fake entities, but is instead used as a rating for hosts on the CouchSurfing platform.
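The finalized quota formula and the subtree-level vouching condition above can be sketched as follows (an illustrative translation with our own names; the subtree is simply a list of its members' parameters):

```python
def quota(t, b, c, d, r=0.001):
    """Souche quota: compounds at rate r since joining time b, minus the
    entity's own vouches c, the quota d consumed by subtree members, and 1."""
    return (1 + r) ** (t - b) - c - d - 1

def can_vouch(subtree, t, r=0.001):
    """An entity may vouch when its subtree's cumulative quota exceeds 1.
    subtree: list of (b, c, d) triples, one per member entity."""
    return sum(quota(t, b, c, d, r) for b, c, d in subtree) > 1
```

With $r = 0.001$ and a 1-day interval, a lone fresh entity needs on the order of 700 days before its own quota exceeds 1, which is exactly the bootstrapping problem the shared subtree quota addresses.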
\begin{figure} \centering \includegraphics[width=0.6\linewidth]{graphics/souche.jpg} \caption{Example of a Souche subtree.} \label{fig:souche} \end{figure} On the other hand, there exist other social phenomena leveraging existing relationships which have been translated into social reputation mechanisms. One such example is the usage of invitations. In an offline setting, invitations are often used to invite people to participate in a certain event. This social behaviour has been studied and integrated as a core component in social reputation mechanisms such as \textit{Trust by association}, which combines the usage of invitations with a mechanism similar to vouching. \subsection*{Trust by association} Trust by association (TbyA) \cite{kellett2011trust} has been designed for deployment in a peer-to-peer environment. It utilizes invitations to add new entities to the network and links the reputation values of the \textit{inviter} and \textit{invitee}, similar to vouching. More specifically, inviters may be punished for the bad behaviour of their invitees, while being incentivized through the good reputation of their invitees and rewards for growing the network. Due to these reputational incentives, it is assumed that users will only invite people they already have experience with from another channel, i.e. acquaintances. TbyA assumes the following properties of the network: \begin{itemize} \item \textit{Invitation-only network} $-$ entities can only join the network through invitation. \item \textit{Homogeneous Resource or Service} $-$ entities participate in the network for a common type of resource or service. \item \textit{Bounded Existing Reputation Mechanism} $-$ there exists an underlying reputation mechanism, such that the resulting reputation values are bounded within a fixed interval. Kellett et al. \cite{kellett2011trust} suggest the usage of EigenTrust \cite{kamvar2003eigentrust}.
\item \textit{Central Point of Calculation} $-$ there exists a central machine on which all calculations take place. \end{itemize} In its simplest form, the reputation value $R_i$ for entity $i$ is defined by: $$R_i = (1-\alpha)U_i + \alpha\frac{\sum_{j\in N_i} U_j}{|N_i|}$$ where $U_i$ is the reputation of entity $i$ according to the underlying reputation mechanism, $N_i$ refers to the set of invitees invited by entity $i$, and $\alpha$ is some value $0 \leq \alpha < 1$, assumed to be 0 when $|N_i| = 0$. While \cite{kellett2011trust} only uses this formula as a starting point to introduce enhancements, the general idea remains unchanged. These enhancements include rewarding network growth by varying $\alpha$ depending on the number of entities invited, and support for recursive reputation, i.e. the reputation of the invitees of entity $i$'s invitees also affects entity $i$. In an effort to measure TbyA's efficacy, a simulation was performed. It was found that TbyA performs well when there exists an external party capable of identifying malicious entities and punishing their inviters. TbyA is said to be able to turn lawless peer-to-peer networks into networks of benevolent peers, but requires future work on decentralized methods of identifying malicious entities. \\[1\baselineskip] TbyA uses vouching-like elements we have previously seen in Souche, as it punishes the inviter for any bad behaviour shown by their invitees. However, besides social reputation mechanisms in which one needs a voucher to participate, there also exist less strict mechanisms. One such mechanism is SocialTrust, in which anyone can participate, but where existing social ties are useful. SocialTrust builds on the notion that friends are more trusted than strangers \cite{li2008friends}.
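Returning to TbyA's base formula, a minimal sketch (our own illustrative code; `tbya_reputation` and its parameters are not from \cite{kellett2011trust}) of the blending between an entity's own underlying reputation and the mean reputation of its invitees:

```python
def tbya_reputation(u_i, invitee_reps, alpha=0.5):
    """Trust by association: blend an entity's own underlying
    reputation u_i with the mean reputation of its invitees.
    alpha is taken as 0 when the entity has invited no one."""
    if not invitee_reps:
        return u_i
    return (1 - alpha) * u_i + alpha * sum(invitee_reps) / len(invitee_reps)
```

With $\alpha = 0.5$, poorly behaving invitees pull the inviter's reputation down by up to half of its weight, which is precisely the punishment-by-association incentive described above.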
\subsection*{SocialTrust} SocialTrust \cite{socialtrust} attempts to combine entity reputation values with friendships to provide the best QoS in a decentralized network. SocialTrust's main goal is to have requests served by a friend or, if no friend is available, by the server with the highest global reputation, as provided by a \textit{trusted authority}. First of all, SocialTrust defines two types of relationships, namely `friends' and `partners', both being bidirectional. An entity can choose its own friends based on experiences in the offline physical world and send a `friend request'. Partners, however, are assigned by the \textit{trusted authority} and are defined as entities with whom a certain entity has had many interactions. In order to participate in a partnership, both entities must have a reputation larger than a certain partner threshold. When some entity $i$ requires a certain service or resource, it first composes a list of all entities which may act as a server. In this process, the reputation mechanism takes the current load of entities into account, such that overloaded entities are not included in the list of possible servers. After composing the list, entity $i$ scans for any friends or partners and, if present, selects one of these to request the service or resource from. If no such friend or partner exists, entity $i$ queries the trusted authority for the reputations of the possible servers and chooses the server with the highest reputation. In SocialTrust, each entity is assigned an \textit{impact factor}, which represents both their reputation and the amount of damage they could inflict, and which is used to calculate an entity's new reputation after an interaction, depending on whether the entity cooperates.
The impact factor $T$ is defined such that: $$T(i) = \beta\frac{R(i)}{R_{max}} + (1- \beta)\frac{D(i)}{D_{max}}$$ where $R$ is a function returning an entity's reputation, $R_{max}$ is the maximum achievable reputation, $D(i)$ represents the number of friends and partners, $D_{max}$ is the maximum number of friends and partners, and $\beta$ is some value $0 \leq \beta \leq 1$. After each interaction, the client provides a service rating of the server, which helps the trusted authority calculate the new reputations while taking the impact factor into account. We consider two cases: an interaction in which both the server and client are cooperative, and an interaction in which the server is cooperative but the client is not. In the first case, the client will, following the interaction, send a service rating to the trusted authority, in which it rates the server with some value $Y$, such that $0 < Y \leq 1$. A cooperative server will accept this rating and the server's reputation increases by $\alpha(1 + T_c \cdot Y)$, where $T_c$ is the client's impact factor and $0 \leq \alpha \leq 1$. The client's reputation increases by $\alpha$. In the second case, the client is non-cooperative and provides no feedback or negative feedback, while the server provided honest work (the trusted authority verifies this by checking the signatures on the request and response). In this case, the server is assigned $\alpha$ reputation and the client's reputation decreases by $\alpha(1 + T_c)$. Similar reputation assignments are presented in \cite{socialtrust} for non-cooperative servers, in which the server loses reputation. Note that the more reputation and friends/partners an entity has, the more its reputation is affected by interactions, promoting honest work for all, regardless of reputation.
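The impact factor and the two update cases above can be sketched as follows (an illustrative translation with our own function names and default parameters, covering only the cooperative-server cases discussed):

```python
def impact_factor(r, r_max, d, d_max, beta=0.5):
    """SocialTrust impact factor: a blend of normalized reputation
    and normalized number of friends/partners."""
    return beta * r / r_max + (1 - beta) * d / d_max

def update_after_interaction(server_rep, client_rep, t_client, rating,
                             alpha=0.1, client_cooperative=True):
    """Cooperative client: server gains alpha*(1 + T_c * Y), client
    gains alpha. Non-cooperative client (despite honest server work):
    server gains alpha, client loses alpha*(1 + T_c)."""
    if client_cooperative:
        return (server_rep + alpha * (1 + t_client * rating),
                client_rep + alpha)
    return server_rep + alpha, client_rep - alpha * (1 + t_client)
```

Because $T_c$ grows with reputation and connectivity, a well-connected client's rating moves the server's reputation more, and its own misbehaviour is punished more heavily.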
In a performance evaluation, SocialTrust has shown stronger capabilities in excluding non-cooperative entities from the network than EigenTrust \cite{kamvar2003eigentrust}, as well as a more accurate model mapping an entity's reputation to its benevolence. \\[1\baselineskip] Acquaintance-based social reputation mechanisms using concepts like vouching often offer built-in defenses against attacks. However, bootstrapping such mechanisms is a challenge, as they often require an initial set of trusted entities from which all remaining participants join the network. The concept of \textit{implicit vouching} as introduced by Souche might open up opportunities for deploying vouching-based mechanisms, but may inadvertently punish innocent entities. Reputation mechanisms such as SocialTrust suffer less from the bootstrap problem, but have weaker defenses for filtering out malicious entities. \section{Background} \label{sec:background} Shaping trust in the online world, arguably the telos of all reputation mechanisms, is a hard challenge, which has been studied since as early as 2002 \cite{gil2002trusting, douceur2002sybil, mui2002computational}. As the space of defense mechanisms gradually evolves, so does the space of attack possibilities. For instance, people are becoming more aware of the risks of the internet and are growing sceptical towards (spam) mails, causing scammers to invent more intelligent and sophisticated scams \cite{binks2019art}. Another example of the need for online trust lies in the world of e-commerce, where criminals are actively attempting to swindle innocent users on large e-commerce platforms like eBay \cite{ebayfraud}. Nowadays, the responsibility of creating trust cannot simply be entrusted to private corporations. In recent events, Alphabet Inc. (i.e. Google) has been fined €220 million by French authorities for abusing its dominance in the advertisement industry.
The French government has accused Google of promoting its own advertisements over those of its competitors. Furthermore, in 2019, Google was fined €1.28 billion by the European Union on similar charges \cite{googlelawsuit}. Google's dominance in the advertisement industry and the abuse of its position demonstrate its absolute control over the ranking of advertisements and online resources, giving one reason to dispute its role in creating online trust. This case shows a typical example of the Red Queen hypothesis, which, in the general case, states that biological species must consistently adapt to their ever-evolving ecosystem in the eternal fight for survival \cite{redqueenbio}. In our e-commerce setting, this hypothesis corresponds to the ever-present necessity for companies to adapt in order to stay ahead of their evolving competition \cite{redqueen}. Such online wars only help in creating distrust between different parties, strengthening the need for widely accepted trust mechanisms. Lastly, the United States House of Representatives has composed a report assessing the power wielded by the four largest big-tech corporations: Facebook, Google, Amazon, and Apple. These companies have become monopolies in different spaces of the internet, providing them with absolute power. It has been found that these corporations abuse this power by ``charging exorbitant fees, imposing oppressive contract terms, and extracting valuable data from the people and businesses that rely on them'' \cite{449_page_rep}. Exploiting social phenomena for the purpose of creating trust in online settings has previously been considered with the proposal of a novel peer-to-peer file-sharing system named Tribler \cite{pouwelse2008tribler}, which introduces social ties to incentivize users not to misbehave at the expense of their friends, partners, or community.
Tribler suggests the usage of public and private keys as an authentication method for recognizing previously encountered users in the anonymous peer-to-peer environment, enabling users to keep track of benevolent and malicious interactions. \section{Conclusion} \label{sec:conclusion} In this paper, we have discussed numerous social phenomena at different scales and reviewed social reputation mechanisms directly adopting these social phenomena as a core component. First, we focused on the individual scope, in which every entity is responsible for its own reputation and entities may refer to each other based on past interactions, increasing each other's reputation by performing honest work. Secondly, we reviewed the acquaintances scope, where mechanisms may benefit from existing social ties to create more secure environments through vouching and friendships. In this space, existing trust relations are essential and may heavily influence one's reputation, compared to the individual scope. Finally, we reviewed mechanisms in the neighbourhood scope, in which entities may be part of a group which can greatly affect their reputation. Over the years, many reputation mechanisms have been proposed, evaluated, and criticised. However, the holy grail of a social reputation mechanism creating secure online trust is yet to be invented. \section{Terms and definitions} \label{sec:definitions} This section provides formal definitions of various concepts and data structures we use in the description of existing social reputation mechanisms. \textbf{Entity} $-$ The notion of an entity encapsulates any type of instance which may participate in a network employing the underlying social reputation mechanism. For example, an entity may be a real person, but could also be a computer.
\textbf{Reputation mechanism} $-$ We adopt the definition of a reputation mechanism as formulated by Swamynathan: ``\textit{A reputation mechanism collects, aggregates, and disseminates feedback about a user's behavior, or reputation, based on the user's past interactions with others}'' \cite{swamynathan2010design}. In other words, a reputation mechanism processes feedback received from all entities participating in the network to cumulatively calculate a subjective or global reputation value for each entity. \textbf{Trust} $-$ In the context of computing systems, we may adopt the definition of trust as formalized by Saputra: ``\textit{Trust is a Trustor’s level of confidence in regard to the ability of a Trustee to provide expected result in an interaction between Trustor and Trustee}'' \cite{saputra2020defining}, where the trustor is the party which receives some service and the trustee is the party entrusted with performing or providing a certain service or resource to the trustor. In other words, trust is the certainty with which entity A (the trustor) believes that entity B (the trustee) is able to provide it with some service. More formally, trust is defined as a weighted directional relation $(i, j, v) \in E$ between two entities $i,j \in N$ with $v \in \mathbb{R}$, where $N$ is the set of all entities, $E$ is the set of all directed relations between two entities, and $v$ is the trustworthiness value assigned by some entity $i$ to some entity $j$. \textbf{Trust graph} $-$ Trust relations as defined previously can be aggregated into a directed graph. We call such a graph a \textit{trust graph}, or alternatively a \textit{social graph}. This graph is defined by the tuple $(V, E)$, where $V$ is the set of entities and $E$ is the set of trust relationships, also referred to as edges. Such a trust graph often provides the necessary structural foundation for a reputation mechanism.
More specifically, if some entity $i$ has had sufficient (in)direct interaction with some arbitrary entity $j$, then $j \in N_i$ and $\exists (i,j,v) \in E : v \in \mathbb{R}$, where $N_i$ is called entity $i$'s \textit{trust set}, consisting of the entities with whom entity $i$ has had sufficient interaction to assess their trustworthiness, depending on the underlying reputation mechanism. Furthermore, entities can occur in multiple \textit{trust sets}, but no entity can contain itself in its trust set: $\forall i \in N : i \notin N_i$. Additionally, all entities occur at most once in every trust set, such that $\forall i \in N : \{\forall j, k \in N_i : ID(j) = ID(k) \Leftrightarrow j = k\}$, where $ID$ is a deterministic implementation-specific function capable of identifying individual entities. Note that the prior implies that $\forall (i, j, v) \in E : i \neq j$. Moreover, every directional relation in the graph is unique, such that $\forall (i, j, v), (k, l, w) \in E : \{(i = k \land j = l) \Leftrightarrow (i, j, v) = (k, l, w)\}$. Finally, all entities occur exactly once in a \textit{trust graph}: $\forall i,j \in N : \{ID(i) = ID(j) \Leftrightarrow i = j\}$. An example of a trust graph representing the trust relations between nodes A, B, and C can be found in Figure \ref{fig:example_trustgraph}. In this example, C has a trust/reputation value of 0.8 from B's perspective, implying that $(B,C,0.8)\in E$. The reputation value is calculated and interpreted by the underlying reputation mechanism. Furthermore, B's trust set corresponds to $N_B = \{C\}$ and A's trust set to $N_A = \{B, C\}$.
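A trust graph satisfying these constraints (no self-trust, at most one edge per ordered pair) can be sketched as a small data structure; the edge values for A below are illustrative, while the edge $(B,C,0.8)$ follows the example above:

```python
class TrustGraph:
    """Directed trust graph: at most one weighted edge per ordered
    pair of entities, and no entity may trust itself."""
    def __init__(self):
        self.edges = {}  # (i, j) -> trust value v

    def add_trust(self, i, j, v):
        if i == j:
            raise ValueError("an entity cannot occur in its own trust set")
        self.edges[(i, j)] = v  # overwrites: edges per pair are unique

    def trust_set(self, i):
        """N_i: the entities whose trustworthiness i has assessed."""
        return {j for (src, j) in self.edges if src == i}

g = TrustGraph()
g.add_trust('A', 'B', 0.9)  # illustrative value
g.add_trust('A', 'C', 0.7)  # illustrative value
g.add_trust('B', 'C', 0.8)  # matches the example: (B, C, 0.8) in E
```

Using a dict keyed on ordered pairs enforces edge uniqueness by construction, while the explicit self-trust check enforces $i \neq j$.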
\begin{figure} \centering \includegraphics[width=0.5\linewidth]{graphics/example_trustgraph.jpg} \caption{Example trust graph representing trust relations between entities A, B, and C.} \label{fig:example_trustgraph} \end{figure} \textbf{Sybil attacks} $-$ The Sybil attack \cite{douceur2002sybil} is a well-known attack against reputation mechanisms. Many reputation mechanisms are unable to distinguish original entities from their copies \cite{levine2006survey}; a weakness abused by the Sybil attack. An adversary may employ the Sybil attack to increase its own reputation through the instant creation of virtual entities, such that it may enjoy the benefits of a high reputation. The method used to increase one's reputation using `Sybil entities' depends heavily on the implementation details of the underlying social reputation mechanism. In 2011, Seuken et al. showed that under specific circumstances, there exists a passive strongly beneficial Sybil attack \cite{seuken2011sybil}. In such an attack, a malicious entity can obtain an infinite gain with minimal effort. \section{Introduction} Nowadays, many business and social interactions have moved to the internet, highlighting the relevance of creating online trust. The COVID-19 pandemic has shown that, in times of crisis, online news and social media usage increases \cite{van2021does}, increasing the risk and impact of misinformation. Moreover, governments are known to have attempted to control news media to spread propaganda in the past. Additionally, research shows that individuals getting their news from social media are often more likely to believe conspiracy theories \cite{enders2021relationship}. Such matters raise the relevant and contemporary question: who to trust? In an automated setting, the trust measure is often extracted from one's reputation. This reputation may be calculated from the amount of `good' work one has performed, or from the reputation of one's direct peers.
We call systems performing such calculations \textit{reputation mechanisms}. Many reputation mechanisms have been proposed and evaluated \cite{arrep, page1999pagerank, nasrulin2022meritrank, souche, kellett2011trust, socialtrust, tian2006group, ipgrouprep}. The core components of reputation mechanisms may vary greatly; e.g. a mechanism may assume that entities have a fixed initial identity, or that some entity $i$ sending some entity $j$ a message provides a proof of personhood for entity $j$. However, the common purpose of reputation mechanisms is to provide some measure of benevolence or trustworthiness. The overall scope of this paper is focused on \textit{social reputation mechanisms}. Such reputation mechanisms are a virtual reflection of genuine social phenomena, such as vouching or familial relationships. We provide a survey in which we rigorously review social reputation mechanisms by exploring various social concepts and mapping these to existing reputation mechanisms. Over the course of this paper, we gradually widen our scope and consider social reputation mechanisms based on social phenomena on an increasingly larger scale. First, we discuss the individual level. In this scope, no two persons necessarily know each other initially, and everyone's reputation is based solely on the work they perform or the quality they provide. Secondly, we consider one's acquaintances. Existing social ties and vouching are concepts which may transpire in this space. Lastly, we discuss phenomena occurring in one's direct neighbourhood. For instance, the neighbourhood in which you live may affect your reputation towards members both outside and inside that neighbourhood. First, we provide more background on the importance and relevance of creating trust and reputation mechanisms in section \ref{sec:background}.
Section \ref{sec:definitions} provides formal definitions and data structures, which we use to unify the mathematical foundations of the reviewed mechanisms and to reduce the variation in mathematical models across the different reputation mechanisms. Section \ref{sec:individual} considers entities individually, and rigorously discusses different reputation mechanisms based on social phenomena within this scope. Section \ref{sec:direct_relationships} continues by exploring social concepts in the acquaintances scope and their associated social reputation mechanisms. The last scope, neighbourhoods, is discussed in section \ref{sec:neighbourhoods}. A brief overview of all discussed social reputation mechanisms can be found in Table \ref{table}. \begin{table*}[] \caption{ \label{table} Overview of all social reputation mechanisms reviewed in this paper, as well as associated work.} \begin{tabular}{p{0.6cm}p{7.3cm}p{7cm}p{1.35cm}} \textbf{Year} & \textbf{Mechanism} & \textbf{Reputation model} & \textbf{Related} \\ \hline 1999 & \textbf{PageRank} \cite{page1999pagerank}. \textit{Type: individual}. Assumes important websites are likely linked to from other websites. Links are quantified iteratively until a stationary state is reached. & \begin{minipage}[t!][0.5cm][t]{0.1cm} $$R_i = c\sum_{v \in B_i}\frac{R_v}{|N_v|}$$ \end{minipage} & \cite{zhang2017pagerank, bi2008trust, wang2008research, pujol2002extracting} \\ \rowcolor[HTML]{EFEFEF} 2006 & \textbf{GroupRep} \cite{tian2006group}. \textit{Type: neighbourhood}. Users form natural groups. The group reputation is assumed when there has not been sufficient direct interaction. & Given utility $u_{ij}$ and cost $c_{ij}$ from $i$ to $j$:\newline \begin{minipage}[t!][1.1cm][t]{2cm} $$R_{ij} = \frac{\sum c_{ij} - u_{ij}}{\sum c_{ij} + u_{ij}}$$ \end{minipage}& \cite{sabater2001regret, wu2009group, sun2005adaptive, GARM, he2011social} \\ 2008 & \textbf{BarterCast} \cite{meulpolder2008bartercast}. \textit{Type: individual}.
Peers measure up- and download rates and calculate subjective reputation values using a max-flow algorithm. & Given upload rate $f_{ji}$ and download rate $f_{ij}$:\newline \begin{minipage}[t!][1.1cm][t]{2cm} $$R_{ij} = \frac{\arctan(\gamma(f_{ji} - f_{ij}))}{\pi/2}$$ \end{minipage} & \cite{pouwelse2008tribler} \\ \rowcolor[HTML]{EFEFEF} 2009 & \textbf{IPGroupRep} \cite{ipgrouprep}. \textit{Type: neighbourhood}. Adopts IP-based groups and aggregates spam detection feedback into reputation values. & Given positive and negative feedback $r_i$ and $s_i$:\newline \begin{minipage}[t!][1.1cm][t]{2cm} $$R_i = \frac{r_i + 1}{r_i + s_i + 2}$$ \end{minipage} & \cite{udmap, thomas2010rapid, ontheeffectivenessofusingip} \\ 2010 & \textbf{ARRep} \cite{arrep}. \textit{Type: individual}. Combines direct experiences with recommendations. & \begin{minipage}[t!][0.5cm][t]{3cm} $$R_{ij} = \alpha \cdot R_{ij}^D + (1 - \alpha) \cdot R_{ij}^R$$ \end{minipage} & \cite{peertrust, gauthier2004dealing, certifiedreputation, supportingtrustinvirtualcommunities, keung2008using} \\ \rowcolor[HTML]{EFEFEF} 2011 & \textbf{Trust by Association} \cite{kellett2011trust}. \textit{Type: acquaintances}. Invite-only network; the reputation of an invitee directly affects the inviter. & Given some underlying reputation mechanism $U$:\newline \begin{minipage}[t!][1.1cm][t]{2cm} $$R_i = (1-\alpha)U_i + \alpha\frac{\sum_{j\in N_i} U_j}{|N_i|}$$ \end{minipage} & \cite{rogers2007disappear} \\ 2012 & \textbf{Souche} \cite{souche}. \textit{Type: acquaintances}. Frictionless vouching; assumes all benevolent users are members of a giant connected component. & Given a giant connected component (GCC), whose growth is limited per time interval: \newline \begin{minipage}[t!][1.3cm][t]{2cm} \[ R_i= \begin{cases} \text{Trusted},& \text{if } i \in GCC\\ \text{Not trusted}, & \text{otherwise} \end{cases} \] \end{minipage} & \\ \rowcolor[HTML]{EFEFEF} 2015 & \textbf{SocialTrust} \cite{socialtrust}.
\textit{Type: acquaintances}. Prefers friends over strangers; relies on the reputation of strangers if no friend is available. & An entity's reputation is modified based on the rating of the other party and their impact factor $T_i$:\newline \begin{minipage}[t!][1.1cm][t]{2cm} $$T_i = \beta\frac{R_i}{R_{max}} + (1- \beta)\frac{D_i}{D_{max}}$$ \end{minipage}& \cite{kamvar2003eigentrust} \\ 2022 & \textbf{MeritRank} \cite{nasrulin2022meritrank}. \textit{Type: individual}. Defines a set of strategies to make Sybil-prone reputation mechanisms Sybil-tolerant. & Transitivity decay, connectivity decay and epoch decay applied to existing reputation mechanisms. & \\\hline \end{tabular} \end{table*}
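As an illustration of the iterative quantification in the PageRank row of Table \ref{table}, the recursion $R_i = c\sum_{j \in B_i} R_j/|N_j|$ can be sketched as a power iteration. This is a sketch, not the implementation of \cite{page1999pagerank}; the uniform teleport term $(1-c)$ and the uniform handling of dangling nodes are common conventions assumed here.

```python
import numpy as np

def pagerank(links, c=0.85, tol=1e-10):
    """Power iteration for R_i = c * sum_{j in B_i} R_j / |N_j|.

    `links` maps each node to the set of nodes it links to.  The
    (1 - c) teleport mass and the rank of dangling nodes are spread
    uniformly over all nodes (assumed conventions).
    """
    nodes = sorted(links)
    n = len(nodes)
    idx = {v: k for k, v in enumerate(nodes)}
    R = np.full(n, 1.0 / n)
    while True:
        R_new = np.full(n, (1.0 - c) / n)
        for j, outs in links.items():
            if outs:
                share = c * R[idx[j]] / len(outs)  # R_j / |N_j|, damped by c
                for i in outs:
                    R_new[idx[i]] += share
            else:  # dangling node: spread its mass uniformly
                R_new += c * R[idx[j]] / n
        if np.abs(R_new - R).sum() < tol:
            return dict(zip(nodes, R_new))
        R = R_new

# Toy web graph: a links to b and c, b links to c, c links back to a.
ranks = pagerank({"a": {"b", "c"}, "b": {"c"}, "c": {"a"}})
```

Because the total rank mass is conserved at each step, the iteration converges to a stationary distribution in which nodes with more (and better-ranked) in-links score higher.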
\section*{Abstract} Statistical analysis of alignments of large numbers of protein sequences has revealed ``sectors'' of collectively coevolving amino acids in several protein families. Here, we show that selection acting on any functional property of a protein, represented by an additive trait, can give rise to such a sector. As an illustration of a selected trait, we consider the elastic energy of an important conformational change within an elastic network model, and we show that selection acting on this energy leads to correlations among residues. For this concrete example and more generally, we demonstrate that the main signature of functional sectors lies in the small-eigenvalue modes of the covariance matrix of the selected sequences. However, secondary signatures of these functional sectors also exist in the extensively-studied large-eigenvalue modes. Our simple, general model leads us to propose a principled method to identify functional sectors, along with the magnitudes of mutational effects, from sequence data. We further demonstrate the robustness of these functional sectors to various forms of selection, and the robustness of our approach to the identification of multiple selected traits. \section*{Author summary} Proteins play crucial parts in all cellular processes, and their functions are encoded in their amino-acid sequences. Recently, statistical analyses of protein sequence alignments have demonstrated the existence of ``sectors'' of collectively correlated amino acids. What is the origin of these sectors? Here, we propose a simple underlying origin of protein sectors: they can arise from selection acting on any collective protein property. We find that the main signature of these functional sectors lies in the low-eigenvalue modes of the covariance matrix of the selected sequences.
A better understanding of protein sectors will make it possible to discern collective protein properties directly from sequences, as well as to design new functional sequences, with far-reaching applications in synthetic biology. \section*{Introduction} Proteins play crucial roles in all cellular processes, acting as enzymes, motors, receptors, regulators, and more. The function of a protein is encoded in its amino-acid sequence. In evolution, random mutations affect the sequence, while natural selection acts at the level of function; however, our ability to predict a protein's function directly from its sequence has been very limited. Recently, the explosion of available sequences has inspired new data-driven approaches to uncover the principles of protein operation. At the root of these new approaches is the observation that amino-acid residues which possess related functional roles often evolve in a correlated way. In particular, analyses of large alignments of protein sequences have identified ``sectors'' of collectively correlated amino acids~\cite{Lockless99, Suel03, socolich2005evolutionary, halabi2009protein, Dahirel11, mclaughlin2012spatial}, which has enabled successful design of new functional sequences~\cite{socolich2005evolutionary}. Sectors are spatially contiguous in the protein structure, and in the case of multiple sectors, each one may be associated with a distinct role~\cite{halabi2009protein, Rivoire16}. What is the origin of these sectors, and can we identify them from sequence data in a principled way? To address these questions, we developed a general physical model that naturally gives rise to sectors. Specifically, motivated by the observation that many protein properties reflect additive contributions from individual amino acids~\cite{DePristo05,Starr16,Otwinowski18}, we consider any additive trait subject to natural selection.
As a concrete example, we study a simple elastic-network model that quantifies the energetic cost of protein deformations~\cite{bahar2010global}, which we show to be an additive trait. We then demonstrate that selection acting on any such additive trait automatically yields collective correlation modes in sequence data. We show that the main signature of the selection process lies in the small-eigenvalue modes of the covariance matrix of the selected sequences, but we find that some signatures also exist in the widely-studied large-eigenvalue modes. Finally, we demonstrate a principled method to identify sectors and to quantify mutational effects from sequence data alone. \section*{Results} \subsection*{Selection on an additive trait} We focus on selection on an additive scalar trait \begin{equation} T (\vec{\alpha})=\sum_{l=1}^L\Delta_l(\alpha_l)\,, \label{eq:trait} \end{equation} where $\vec{\alpha}=(\alpha_1,\dots,\alpha_L)$ is the amino-acid sequence considered, $L$ is its length, and $\Delta_l(\alpha_l)$ is the mutational effect on the trait $T$ of a mutation to amino acid $\alpha_l$ at site $l$. Mutational effects can be measured with respect to a reference sequence $\vec{\alpha}^0$, satisfying $\Delta_l(\alpha_l^0)=0$ for all $l$. Eq.~\ref{eq:trait} is very general as it amounts to saying that, to lowest order, mutations have an additive effect on the trait $T$, which can be any relevant physical property of the protein, say its binding affinity, catalytic activity, or thermal stability~\cite{Cunningham17}. System-specific details are encoded by the single-site mutational effects $\Delta_l(\alpha_l)$, which can be measured experimentally. The assumption of additivity is experimentally validated in many cases. For instance, protein thermal stability, measured through folding free energy, is approximately additive~\cite{DePristo05,Wylie11}. Importantly, we allow selection to act on a phenotype that is a nonlinear function of $T$. 
Permitting a phenotypic nonlinearity on top of our additive trait model is motivated by the fact that actual phenotype data from recent high-throughput mutagenesis experiments were accurately modeled via a nonlinear mapping of an underlying additive trait~\cite{Otwinowski18}. Protein sectors are usually defined operationally as collective modes of correlations in amino-acid sequences. However, the general sequence-function relation in Eq.~\ref{eq:trait} suggests an operational definition of a \textit{functional} protein sector, namely as the set of sites with dominant mutational effects on a trait under selection. Selection can take multiple forms. To be concrete, we first consider a simple model of selection, assuming a favored value $T^*$ of the trait $T$, and using a Gaussian selection window. We subsequently show that the conclusions obtained within this simple model are robust to different forms of selection. Our Gaussian selection model amounts to selecting sequences according to the following Boltzmann distribution: \begin{equation} P(\vec{\alpha})=\frac{\exp(w(\vec{\alpha}))}{ \sum_{\vec{\alpha}} \exp(w(\vec{\alpha}))}\,, \label{eq:distr} \end{equation} where the fitness $w(\vec{\alpha})$ of a sequence is given by \begin{equation} w(\vec{\alpha})=-\frac{\kappa}{2}\left(T(\vec{\alpha})-T^*\right)^2=-\frac{\kappa}{2}\left(\sum_{l=1}^L\Delta_l(\alpha_l)-T^*\right)^2\,. \label{eq:quadr} \end{equation} The selection strength $\kappa$ sets the width of the selection window. Such selection for intermediate values of a trait can be realistic, e.g. for protein stability~\cite{DePristo05}. However, the form of selection can vary, for example selection can be for a nonlinear transform of a trait to be above a certain threshold~\cite{Otwinowski18}, and several relevant selection variants are investigated below. 
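As a minimal numerical sketch of Eqs.~\ref{eq:distr}-\ref{eq:quadr}, one can enumerate all binary sequences of a short length, draw synthetic mutational effects (illustrative random values, unrelated to any specific protein), and weight each sequence by its Boltzmann factor; the selection window then visibly narrows the distribution of the trait.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 12                       # short sequences so all 2^L of them can be enumerated
Delta = rng.normal(size=L)   # synthetic mutational effects (an assumption for illustration)

# All binary sequences S, their traits T(S) = S . Delta (Eq. trait), and
# Boltzmann weights exp(w) with w = -(kappa/2)(T - T*)^2 (Eqs. distr, quadr).
S = ((np.arange(2**L)[:, None] >> np.arange(L)) & 1).astype(float)
T = S @ Delta
kappa = 10.0 / np.sum(Delta**2)   # the selection strength used in the text
T_star = 0.0
w = -0.5 * kappa * (T - T_star) ** 2
P = np.exp(w)
P /= P.sum()

# Selection narrows the trait distribution around T*.
var_before = T.var()
var_after = np.sum(P * T**2) - np.sum(P * T) ** 2
```

Comparing `var_before` and `var_after` shows the narrowing of the trait distribution that the Gaussian window produces; the site-site correlations induced by this nonlinear selection are analyzed in the following sections.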
Crucially, while the trait is additive (Eq.~\ref{eq:trait}), the fact that fitness (Eq.~\ref{eq:quadr}) and selection (Eq.~\ref{eq:distr}) are nonlinear functions of the trait leads to coupling between mutations. This phenomenon is known as global~\cite{Kryazhimskiy14,Otwinowski18} or nonspecific~\cite{Starr16} epistasis, and its relevance has been shown in evolution experiments~\cite{Kryazhimskiy14}, over and above contributions from specific epistasis~\cite{Starr16,Posfai18}. The focus of this paper is on global epistasis, and we do not include specific epistasis. Studying the interplay of these two types of epistasis will be an interesting future direction. \subsection*{A toy model yielding a concrete example of an additive trait} \subsubsection*{Elastic-network model} To illustrate how additive traits naturally arise, we consider the elastic energy associated with a functionally important protein deformation. We explicitly derive the additivity of this trait in the regime of small deformations and weak mutational effects. This concrete example is relevant since functional deformation modes are under selection in proteins~\cite{Zheng06,Lukman09,Saldano16}, and dynamical domains possess a signature in sequence data~\cite{granata2017patterns}. Moreover, elastic-network models have elucidated a variety of protein properties~\cite{de2005functional,Delarue02,Zheng03,bahar2010global}, including the emergence of allostery~\cite{yan2017architecture,Tlusty17,Flechsig17,dutta2018green,rocks2017designing,Yan18,BraviPre}. Thus motivated, we begin by building an elastic-network model~\cite{bahar2010global,de2005functional} for a well-studied PDZ protein domain (Fig.~\ref{Fig1}(a,b))~\cite{doyle1996crystal,Hung02} and computing the relationship between its ``sequence'' and the energetic cost of a functionally-relevant conformational change. \begin{figure}[h!] 
\centering \includegraphics[width=0.8\textwidth]{Fig1} \vspace{0.2cm} \caption{{\bf Selection applied to an elastic protein model leads to a statistical signature among sequences.} (a) Cartoon representation of the third PDZ domain of the rat postsynaptic density protein 95 from the RCSB PDB~\cite{Berman00} (gray: ligand free, 1BFE; blue: ligand bound, 1BE9 (ligand not shown)). (b) Elastic network model for 1BFE, where each amino-acid residue is represented by its alpha carbon (C$\alpha$, black node) and beta carbon (C$\beta$, purple node). Nearby nodes interact through a harmonic spring~\cite{de2005functional} (\nameref{S1_Appendix}). (c) Relation between protein sequence $\vec{S}$ and elastic network: 0 denotes the reference state, while 1 denotes a mutated residue, which weakens interactions of the corresponding C$\beta$ with all its neighbors by $\epsilon$. (d) Histogram of the energy $\delta E$ required to deform the domain from its ligand-free to its ligand-bound conformation, for randomly sampled sequences where 0 and 1 are equally likely at each site. Sequences are selectively weighted using a narrow Gaussian window (orange) around $\delta E^*$. (e) Eigenvalues of the covariance matrix $C$ for the selectively weighted protein sequences. (f) Upper panel: last principal component $\nu_l^{(L)}$ of $C$ (red) and average mutant fraction $\langle S_l\rangle_*$ (green) at site $l$ after selection; lower panel: effect $\Delta_l$ of a single mutation at site $l$ on $\delta E$. (g) Schematic representation of the selected ensemble in sequence space, where each dot is a highly-weighted sequence; thus dots are restricted to a narrow region around a plane perpendicular to $\vec{\Delta}$. (h) Recovery of $\vec{\Delta}$ for all principal components $\vec{\nu}^{(j)}$, with maximum Recovery=1 (Eq.~\ref{eq:recovery-measure}).
Gray dashed line: random expectation of Recovery (\nameref{S1_Appendix}).} \label{Fig1} \end{figure} \clearpage To build the elastic-network model of the PDZ domain, we replace each of the $L=76$ amino-acid residues by its corresponding alpha carbon C$\alpha$ and beta carbon C$\beta$, as shown in Fig.~\ref{Fig1}(b). Every pair of carbons within a cutoff distance $d_c$ is then connected with a harmonic spring~\cite{bahar2010global}. Following a previous analysis of the same PDZ domain~\cite{de2005functional}, we set $d_c=7.5\,\text{\normalfont\AA}$ and assign spring constants as follows: a) 2 for C$\alpha$-C$\alpha$ pairs if adjacent along the backbone, 1 otherwise; b) 1 for C$\alpha$-C$\beta$ pairs; c) 0.5 for C$\beta$-C$\beta$ pairs. Within our elastic model, the energetic cost of a small deformation from the equilibrium structure is \begin{equation} E=\frac{1}{2}\sum_{i,j} \left(\boldsymbol{r}_i-\boldsymbol{r}_i^0\right)M_{ij}\left(\boldsymbol{r}_j-\boldsymbol{r}_j^0\right)= \frac{1}{2}\,\delta \boldsymbol{r}^TM \delta \boldsymbol{r}, \label{eq:energy_cost} \end{equation} where $\boldsymbol{r}_i$ is the position of the $i$th carbon atom, $\boldsymbol{r}_i^0$ is its equilibrium position, and the Hessian matrix $M$ contains the second derivatives of the elastic energy with respect to atomic coordinates. Here, we take $\delta \boldsymbol{r} $ to be the conformational change from a ligand-free state (1BFE) to a ligand-bound state (1BE9) of the same PDZ domain (Fig.~\ref{Fig1}(a)). This conformational change is central to PDZ function, so its energetic cost has presumably been under selection during evolution. Any other coherent conformational change would also be suitable for our analysis. Note that our aim is not to analyze conformational changes in all their richness, but to provide a minimal concrete example of a relevant additive trait, and to analyze the impact of selection acting on this trait on the associated family of sequences. 
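Eq.~\ref{eq:energy_cost} is simply a quadratic form in the displacement. A minimal sketch, using an assumed toy one-dimensional chain rather than the three-dimensional PDZ network, builds the Hessian $M$ from spring constants and evaluates the deformation energy; it also checks that a uniform translation is a zero mode of $M$, as it must be for any elastic network.

```python
import numpy as np

def chain_hessian(k):
    """Hessian of a 1D chain with spring constant k[i] between nodes i and i+1."""
    n = len(k) + 1
    M = np.zeros((n, n))
    for i, ki in enumerate(k):
        # each spring contributes (ki/2) * (x_{i+1} - x_i)^2 to the energy
        M[i, i] += ki
        M[i + 1, i + 1] += ki
        M[i, i + 1] -= ki
        M[i + 1, i] -= ki
    return M

k = [2.0, 1.0, 0.5]                    # toy values mimicking the spring-constant classes
M = chain_hessian(k)
dr = np.array([0.0, 0.1, 0.2, 0.4])    # a small deformation from equilibrium
E = 0.5 * dr @ M @ dr                  # Eq. energy_cost: E = (1/2) dr^T M dr

# A uniform translation costs no energy (zero mode of M).
E_translation = 0.5 * np.ones(4) @ M @ np.ones(4)
```

Here the spring-by-spring energies are $\frac{1}{2}\cdot 2\cdot 0.1^2 + \frac{1}{2}\cdot 1\cdot 0.1^2 + \frac{1}{2}\cdot 0.5\cdot 0.2^2 = 0.025$, matching the quadratic form.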
To mimic simply the effect of mutation and selection within our toy model, we introduce ``mutations'' of residues that weaken the spring constants involving their beta carbons by a small fraction $\epsilon$. In practice, we take $\epsilon = 0.2$. We represent mutations using a sequence $\vec{S}$ with $S_l\in \{0,1\}$, where $l$ is the residue index: $S_l=0$ denotes the reference state, while $S_l=1$ implies a mutation (Fig.~\ref{Fig1}(c)). The sequence $\vec{S}$ and the spring network fully determine the Hessian matrix $M$, and thus the energy cost $E$ of a conformational change (Eq.~\ref{eq:energy_cost}). Note that here $\vec{S}$ is a binary sequence, which represents a simplification compared to real protein sequences $\vec{\alpha}$ where each site can feature 21 states (20 amino acids, plus the alignment gap). We start with the binary model for simplicity, and we then extend our results to a more realistic 21-state model. Note that binary representations of actual protein sequences, with a consensus residue state and a ``mutant'' state, have proved useful in sector analysis~\cite{halabi2009protein}, although more recent approaches for diverse protein families have employed full 21-state models~\cite{Rivoire16}. Binary representations are also appropriate to analyze sets of sufficiently close sequences, notably HIV proteins, allowing identification of their sectors~\cite{Dahirel11} and predictions of their fitness landscapes~\cite{Mann14}. \subsubsection*{Deformation energy as an additive trait} Focusing on mutations that weakly perturb the elastic properties of a protein, we perform first-order perturbation analysis: $M=M^{(0)}+\epsilon M^{(1)}+o(\epsilon)$. Using Eq.~\ref{eq:energy_cost} yields $E=E^{(0)}+\epsilon E^{(1)}+o(\epsilon)$, with $E^{(1)}=\delta \boldsymbol{r}^T M^{(1)} \delta \boldsymbol{r}/2$. Both $M^{(1)}$ and $E^{(1)}$ can be expressed as sums of contributions from individual mutations. 
We define $\Delta_l$ as the first-order energy cost $\epsilon E^{(1)}$ of a single mutation at site $l$ of the sequence. To leading order, the effect of mutations on the energy cost of a deformation reads \begin{equation} \delta E=E-E^{(0)}=\sum_{l=1}^L S_l\Delta_l. \label{eq:generic} \end{equation} This equation corresponds to the binary-sequence case of the general additive trait defined in Eq.~\ref{eq:trait}. Hence, the deformation energy in our toy model of a protein as a sequence-dependent elastic network constitutes a practical example of an additive trait. Within our functional definition, a protein sector is the set of sites with dominant mutational effects on the trait under selection. The vector $\vec{\Delta}$ of mutational effects for our elastic-network model of the PDZ domain is shown in Fig.~\ref{Fig1}(f). The magnitudes of mutational effects are strongly heterogeneous (\nameref{S1_Appendix}, Fig.~\ref{fig:PDZ_Delta_histogram}). Here, the amino acids with largest effects, which constitute the sector, correspond to those that move most upon ligand binding. (Note that the ligand-binding deformation of PDZ is well-described by one low-frequency normal mode of the elastic network~\cite{de2005functional}: hence, our sector significantly overlaps with the sites that are most involved in this mode.) How is such a functionally-defined sector reflected in the statistical properties of the sequences that survive evolution? To answer this question, we next analyze sequences obtained by selecting on the trait $\delta E$. While for concreteness, we use the mutational effects obtained from our elastic model, the analysis is general and applies to any additive trait. Indeed, we later present some examples using synthetically-generated random mutational effect vectors, both binary and more realistic 21-state ones (see Figs. 3, 4, and \nameref{S1_Appendix}). 
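The additivity of Eq.~\ref{eq:generic} for weak mutations can be checked numerically. The sketch below again uses an assumed toy 1D chain rather than the PDZ network: a ``mutation'' at node $l$ weakens the springs touching that node by $\epsilon$, the single-site effects $\Delta_l$ are measured, and the energy change of a double mutant is compared with the additive prediction.

```python
import numpy as np

def hessian(k):
    """Hessian of a 1D chain with spring constant k[i] between nodes i and i+1."""
    n = len(k) + 1
    M = np.zeros((n, n))
    for i, ki in enumerate(k):
        M[np.ix_([i, i + 1], [i, i + 1])] += ki * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return M

def energy(k, dr):
    return 0.5 * dr @ hessian(k) @ dr

eps = 0.01                              # weak mutations, the regime of the perturbation analysis
k0 = np.array([2.0, 1.0, 0.5])
dr = np.array([0.0, 0.3, 0.1, 0.5])
E0 = energy(k0, dr)

def mutate(k, sites):
    """A 'mutation' at node l weakens every spring touching node l by a factor (1 - eps)."""
    k = k.copy()
    for l in sites:
        for s in (l - 1, l):
            if 0 <= s < len(k):
                k[s] *= 1.0 - eps
    return k

# Single-site effects Delta_l, then the additivity check of Eq. generic
# for a double mutant at two neighboring nodes (which share a spring).
n = len(k0) + 1
Delta = np.array([energy(mutate(k0, [l]), dr) - E0 for l in range(n)])
dE_exact = energy(mutate(k0, [1, 2]), dr) - E0
dE_additive = Delta[1] + Delta[2]
```

The exact and additive values agree to first order in $\epsilon$; their residual difference is of order $\epsilon^2$, coming from the spring that both mutations weaken.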
\subsection*{Signature of selection in sequences} For our elastic model of the PDZ domain, the distribution of the additive trait $\delta E$ for random sequences is shown in Fig.~\ref{Fig1}(d). We use the selection process introduced in Eqs.~\ref{eq:distr}-\ref{eq:quadr} to limit sequences to a narrower distribution of $\delta E$s, corresponding, e.g., to a preferred ligand-binding affinity. The fitness of a binary sequence $\vec{S}$, a particular case of Eq.~\ref{eq:quadr}, reads: \begin{equation} w(\vec{S})=-\frac{\kappa}{2} \left( \sum_{l=1}^L \Delta_l S_l-\delta E^*\right)^2. \label{eq:fitness} \end{equation} Here, the selection strength $\kappa$ sets the width of the selection window, and $\delta E^*$ is its center. For all selections, we take $\kappa=10/(\sum_l \Delta_l^2)$, so that the width of the selection window scales with that of the unselected distribution. We have confirmed that our conclusions are robust to varying selection strength, provided $\kappa \sum_l\Delta_l^2\gg 1$ (see Fig.~\ref{fig:Supp_effect_kappa}). Although mutations have additive effects on the trait $\delta E$, the nonlinearities involved in fitness and selection give rise to correlations among sites. For instance, if $\delta E^*=0$ and if $\Delta_l<0$ for all $l$, as in Fig.~\ref{Fig1}, a mutation at a site with large $|\Delta_l|$ will decrease the likelihood of additional mutations at all other sites with large $|\Delta_l|$. Previous approaches to identifying sectors from real protein sequences have relied on modified forms of Principal Component Analysis (PCA). So we begin by asking: can PCA identify sectors in our physical model? PCA corresponds to diagonalizing the covariance matrix $C$ of sequences: it identifies the principal components (eigenvectors) $\vec{\nu}^{(j)}$ associated with progressively smaller variances (eigenvalues) $\lambda^{(j)}$. 
We introduce $\langle \cdot\rangle_*$ to denote ensemble averages over the selectively weighted sequences, reserving $\langle \cdot\rangle$ for averages over the unselected ensemble. The mutant fraction at site $l$ in the selected ensemble is $\langle S_l\rangle_*= \sum_{\vec{S}}S_l P(\vec{S})$, and the covariance matrix $C$ reads \begin{equation} C_{ll'}=\Big\langle (S_l-\langle S_l\rangle_*)\cdot (S_{l'}-\langle S_{l'}\rangle_*) \Big\rangle_*. \end{equation} To test the ability of PCA to identify a functional sector, we employed the selection window shown in orange in Fig.~\ref{Fig1}(d). The resulting eigenvalues are shown in Fig.~\ref{Fig1}(e). One sees outliers. In particular, why is the last eigenvalue so low? Due to the narrow selection window, according to Eq.~\ref{eq:fitness} the highly-weighted sequences satisfy $\sum_l S_l \Delta_l=\vec{S} \cdot \vec{\Delta}\approx \delta E^*$. This means that in the $L$-dimensional sequence space, the data points for the highly-weighted sequences lie in a narrow region around a plane perpendicular to $\vec{\Delta}$ (Fig.~\ref{Fig1}(g)). Hence, the data has exceptionally small variance in this direction, leading to a particularly small eigenvalue of $C$. Moreover, the corresponding last principal component $\vec{\nu}^{(L)}$ points in the direction with the smallest variance and is consequently parallel to $\vec{\Delta}$ (Fig.~\ref{Fig1}(f)). Formally, in Eq.~\ref{eq:fitness}, $\vec{\Delta}$ appears in a quadratic coupling term where it plays the part of a repulsive pattern in a generalized Hopfield model~\cite{Cocco11,cocco2013principal}: alone, such a term would penalize sequences aligned with $\vec{\Delta}$. But here, $\vec{\Delta}$ also appears in a term linear in $\vec{S}$ and as a result Eq.~\ref{eq:fitness} penalizes sequences that fail to have the selected projection onto $\vec{\Delta}$. 
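The weighted PCA just described can be reproduced on synthetic data: enumerate a short binary sequence space, weight each sequence by the Gaussian selection window, build the weighted covariance matrix, and check that the smallest-variance direction aligns with $\vec{\Delta}$. The sequence length and random mutational effects below are assumptions for illustration (a real analysis would use the PDZ $\vec{\Delta}$).

```python
import numpy as np

rng = np.random.default_rng(1)
L = 10
Delta = rng.normal(size=L)            # synthetic mutational effects (illustrative)
S = ((np.arange(2**L)[:, None] >> np.arange(L)) & 1).astype(float)

# Boltzmann weights of the Gaussian selection window (Eqs. distr, fitness), delta E* = 0.
kappa = 10.0 / np.sum(Delta**2)
w = -0.5 * kappa * (S @ Delta) ** 2
P = np.exp(w - w.max())
P /= P.sum()

# Weighted mean and covariance matrix C, then PCA via eigendecomposition.
mean = P @ S
C = (S - mean).T @ ((S - mean) * P[:, None])
eigval, eigvec = np.linalg.eigh(C)    # eigenvalues in ascending order
last_pc = eigvec[:, 0]                # direction of smallest variance

# The last principal component should be nearly parallel to Delta.
overlap = abs(last_pc @ Delta) / np.linalg.norm(Delta)
```

On such synthetic ensembles the smallest eigenvalue is an outlier and its eigenvector is nearly parallel to $\vec{\Delta}$, mirroring the behavior described for the PDZ example.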
In this example, the last principal component accurately recovers the functional sector corresponding to the largest elements of the mutational-effect vector $\vec{\Delta}$. More generally, to quantify the recovery of $\vec{\Delta}$ by a given vector $\vec{\nu}$, we introduce \begin{equation} \mathrm{Recovery}=\frac{\sum_l |\nu_l \Delta_l |}{\sqrt{\sum_l \nu_l^2 }\sqrt{\sum_l \Delta_l^2 }}, \label{eq:recovery-measure} \end{equation} which is nonnegative, has a random expectation of $(\sqrt{2/\pi L})\sum_l |\Delta_l|/\sqrt{\sum_l \Delta_l^2}$ for $L\gg1$ (\nameref{S1_Appendix}), and saturates at 1 (including the case of parallel vectors). For our test case, Fig.~\ref{Fig1}(h) shows Recovery for all principal components. The last one features the highest Recovery, almost 1, confirming that it carries substantial information about $\vec{\Delta}$. The second-to-last principal component and the first two also provide a value of Recovery substantially above random expectation. Outlier eigenvalues arise from the sector, and accordingly, we find that the number of modes with high Recovery often corresponds to the number of sites with strong mutational effects. A more formal analysis of this effect will be an interesting topic for further study. In our model, $\vec{\Delta}$ is fundamentally a direction of \textit{small variance}. So why do the first principal components also carry information about $\vec{\Delta}$? Qualitatively, when variance is decreased in one direction due to a repulsive pattern $\vec{\Delta}$, variance tends to increase in orthogonal directions involving the same sites. To illustrate this effect, let $L=3$ and $\vec{\Delta}=(-1,1,0)$, and consider the sequences $\vec{S}$ satisfying $\vec{\Delta}\cdot\vec{S}=0$ (namely $(0,0,0)$; $(1,1,0)$; $(0,0,1)$; $(1,1,1)$). The last principal component is $\vec{\Delta}$, with zero variance, and the first principal component is $(1,1,0)$: Recovery is 1 for both of them. 
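Eq.~\ref{eq:recovery-measure} and this three-site example translate directly into a few lines of code (a sketch; the function name is ours).

```python
import numpy as np

def recovery(nu, Delta):
    """Recovery of Eq. recovery-measure: overlap of |nu| with |Delta|, saturating at 1."""
    nu, Delta = np.asarray(nu, float), np.asarray(Delta, float)
    return np.sum(np.abs(nu * Delta)) / (np.linalg.norm(nu) * np.linalg.norm(Delta))

# The L = 3 example: Delta = (-1, 1, 0) and the four sequences S with Delta . S = 0.
Delta = np.array([-1.0, 1.0, 0.0])
S = np.array([[0, 0, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
C = np.cov(S.T, bias=True)               # covariance of the four equally likely sequences
eigval, eigvec = np.linalg.eigh(C)       # ascending eigenvalues

r_last = recovery(eigvec[:, 0], Delta)   # last principal component, zero variance: Recovery 1
r_first = recovery(eigvec[:, -1], Delta) # first principal component (1,1,0): Recovery 1
r_middle = recovery(eigvec[:, 1], Delta) # middle component (0,0,1): Recovery 0
```

The numerical eigendecomposition confirms the example: both the last and the first principal components have Recovery 1, while the remaining direction, supported on the site with zero mutational effect, has Recovery 0.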
This selection conserves the trace of the covariance matrix (i.e. the total variance), so that decreasing the variance along $\vec{\Delta}=(-1,1,0)$ necessarily increases it along $(1,1,0)$. This simple example provides an intuitive understanding of why the large-eigenvalue modes of the covariance matrix also carry information about $\vec{\Delta}$. It is worth remarking that Eq.~\ref{eq:fitness} is a particular case of a general fitness function with one- and two-body terms (known as fields and couplings in Ising or Potts models in physics). Here, the values of these one- and two-body terms are constrained by their expressions in terms of $\vec{\Delta}$. In practice, several traits might be selected simultaneously (see below), yielding more independent terms among the fields and couplings. More generally, such one- and two-body descriptions have been very successfully employed via Direct Coupling Analysis (DCA) to identify strongly coupled residues that are in contact within a folded protein~\cite{Weigt09,morcos2011direct,Marks11}, to investigate folding~\cite{Morcos14}, and to predict fitness~\cite{Dwyer13,Cheng14,Cheng16,Mann14,Figliuzzi16,Barton16,Hopf17} and conformational changes~\cite{Morcos11,Malinverni15}, as well as protein-protein interactions~\cite{Bitbol16,Gueudre16}. A complete model of protein covariation in nature should necessarily incorporate both the collective modes described here and the strongly coupled residue pairs which are the focus of DCA. \subsection*{ICOD method} An important concern is whether the last principal component is robust to small and/or noisy datasets. Indeed, other directions of small variance can appear in the data. As a second example, we applied a different selection window, centered in the tail of the distribution of $\delta E$s from our elastic model of the PDZ domain (Fig.~\ref{Fig2}(a), inset). 
This biased selection generates strong conservation, $\langle S_l\rangle_*\approx 1$, for some sites with significant mutational effects. Extreme conservation at one site now dictates the last principal component, and disrupts PCA-based recovery of $\vec{\Delta}$ (Fig.~\ref{Fig2}(a,b)). \begin{figure}[h!] \centering \includegraphics[width=8.5cm]{Fig2} \vspace{0.2cm} \caption{{\bf Recovery of mutational-effect vector $\vec{\Delta}$ from sequence analysis in the case of strongly biased selection.} (a-c) Principal Component Analysis (PCA) performs poorly due to strong conservation at some sites of large mutational effect. (a) Eigenvalues of covariance matrix obtained for strongly biased selection around $\delta E_{\rm biased}^*$ (inset, orange window) for same model proteins as in Fig. 1. (b) Recovery of $\vec{\Delta}$ for all principal components. (c) Last principal component $\nu_l^{(L)}$ (red) and average mutant fraction $\langle S_l\rangle_*$ (green) at site $l$. (d-e) The ICOD method performs robustly. (d) Eigenvalues of $\tilde{C}^{-1}_{ll'}$ (Eq.~\ref{eq:ICODmatrix}) (upper) and Recovery of $\vec{\Delta}$ for all eigenvectors (lower). (e) Leading eigenvector $\nu_l^{(1)}$ (upper) and mutational effect $\Delta_l$ at site $l$ (lower, same as in Fig. 1(f)). Gray dashed lines in (b,d): random expectation of Recovery (\nameref{S1_Appendix}). } \label{Fig2} \end{figure} To overcome this difficulty, we developed a more robust approach that relies on inverting the covariance matrix. Previously, the inverse covariance matrix was successfully employed in Direct Coupling Analysis (DCA) to identify strongly coupled residues that are in contact within a folded protein~\cite{Weigt09,morcos2011direct,Marks11}. The fitness in our model (Eq.~\ref{eq:fitness}) involves one and two-body interaction terms, and constitutes a particular case of the DCA Hamiltonian (\nameref{S1_Appendix}). 
A small-coupling approximation~\cite{Plefka82,morcos2011direct,Marks11,bitbol2016inferring} (\nameref{S1_Appendix}) gives \begin{equation} C^{-1}_{ll'}\approx \left(1-\delta_{ll'}\right)\,\kappa \Delta_l\Delta_{l'}+\delta_{ll'}\left(\frac{1}{P_l}+\frac{1}{1-P_l}\right), \label{eq:inverse-PCA-q=2} \end{equation} where $P_l$ denotes the probability that site $l$ is mutated. Since we are interested in extracting $\vec{\Delta}$, we can simply set to zero the diagonal elements of $C^{-1}$, which are dominated by conservation effects, to obtain a new matrix \begin{equation} \tilde{C}^{-1}_{ll'}\approx (1-\delta_{ll'}) \kappa \Delta_l\Delta_{l'}. \label{eq:ICODmatrix} \end{equation} The first eigenvector of $\tilde{C}^{-1}$ (associated with its largest eigenvalue) should accurately report $\vec{\Delta}$ since, except for its zero diagonal, $\tilde{C}^{-1}$ is proportional to the outer product $\vec{\Delta} \otimes \vec{\Delta}$. We call this approach the \emph{Inverse Covariance Off-Diagonal} (ICOD) method. As shown in Fig.~\ref{Fig2}(d-e), ICOD overcomes the difficulty experienced by PCA for biased selection, while performing as well as PCA for unbiased selection (Fig.~\ref{eq:supp-inverse-Fig1}, \nameref{S1_Appendix}). Removing the diagonal elements of $C^{-1}$ before diagonalizing is crucial: otherwise, the first eigenvector of $C^{-1}$ is the same as the last eigenvector of $C$ and suffers from the same shortcomings for strong conservation. Here too, eigenvectors associated with both small and large eigenvalues contain information about $\vec{\Delta}$ (Figs.~\ref{Fig2}(b,d)). \subsection*{Selection on multiple traits} An important challenge in sector analysis is distinguishing multiple, independently evolving sectors~\cite{halabi2009protein,Tesileanu15,Rivoire16}.
We can readily generalize our fitness function (Eqs.~\ref{eq:quadr},~\ref{eq:fitness}) to allow for selection on multiple additive traits: \begin{equation} w(\vec{S})=-\sum_{i=1}^N \frac{\kappa_i}{2} \left( \sum_{l=1}^L \Delta_{i,l} S_l-T^*_{i}\right)^2, \label{eq:newfitness} \end{equation} where $N$ is the number of distinct additive traits $T_i(\vec{S})=\sum_l \Delta_{i,l} S_l$ under selection, $\vec{\Delta}_i$ is the vector of mutational effects on trait $T_i$, $\kappa_i$ is the strength of selection on this trait, and $T^*_{i}$ is the associated selection bias. For example, $\vec{\Delta}_1$ might measure how mutations change a protein's binding affinity, while $\vec{\Delta}_2$ might be related to its thermal stability, etc. In \nameref{S1_Appendix} Fig.~\ref{fig:two-sector-fig3}, we consider selection on two distinct additive traits, using synthetically-generated random mutational-effect vectors $\vec{\Delta}_1$ and $\vec{\Delta}_2$ (\nameref{S1_Appendix}). Note that these mutational effects are thus unrelated to our toy model of protein elastic deformations: as stated above, our approach holds for any additive trait under selection. ICOD then yields \emph{two} large outlier eigenvalues of the modified inverse covariance matrix $\tilde{C}^{-1}$. The associated eigenvectors accurately recover both $\vec{\Delta}_1$ and $\vec{\Delta}_2$, after a final step of Independent Component Analysis (ICA)~\cite{Hyvarinen, Hansen01, Rivoire16} that successfully disentangles the contributions coming from the two constraints (see \nameref{S1_Appendix}). \subsection*{Performance in sector recovery} We further tested the performance of ICOD by systematically varying the selection bias, both for our toy model of PDZ elastic deformations and for more general synthetically-generated random mutational-effect vectors (Fig.~\ref{Fig3}). 
ICOD achieves high Recovery of these various mutational-effect vectors for both single and double selection over a broad range of selection biases $T^*$, although performance falls off in the limit of extreme bias. \begin{figure}[h!] \centering \includegraphics[width=8.5cm]{Fig3} \vspace{0.2cm} \caption{{\bf Average recovery of mutational-effect vectors $\vec{\Delta}$ as a function of relative selection bias $\gamma\equiv (T^*-\langle T\rangle)/\sqrt{\langle (T-\langle T\rangle)^2 \rangle}$ on the selected additive trait $T$.} (a) Selection on a single trait. Different $\vec{\Delta}$s are used to generate sequence ensembles: the elastic-network $\vec{\Delta}$ from Fig. 1 (red); synthetic $\vec{\Delta}$s (\nameref{S1_Appendix}) with number of sites of large mutational effect (sector sites) ranging from 1 to 100, for sequences of length $L=100$ (blue). Recovery is shown for ICOD (solid curves) and for SCA~\cite{halabi2009protein,Rivoire16} (dashed curves). (b) Selection on two distinct traits. Different pairs of synthetic $\vec{\Delta}$s (\nameref{S1_Appendix}) are used to generate sequence ensembles (with $L=100$): ``0$\%$'' indicates two non-overlapping sectors, each with 20 sites; ``100$\%$'' indicates two fully overlapping sectors, each with 100 sites; ``Aver.'' indicates average Recovery over 100 cases of double selection, where the single-sector size increases from 1 to 100, and the overlap correspondingly increases from 0 to 100. ICA was applied to improve Recovery (\nameref{S1_Appendix}). } \label{Fig3} \end{figure} How does ICOD compare with other approaches to identifying sectors? We compared the performance of ICOD with Statistical Coupling Analysis (SCA), the original PCA-based method~\cite{halabi2009protein,Rivoire16}.
In SCA, the covariance matrix $C$ is reweighted by a site-specific conservation factor $\phi_l$, the absolute value is taken, $\tilde{C}_{ll'}^{\mathrm{(SCA)}}=|\phi_l C_{ll'}\phi_{l'}|$, and sectors are identified from the leading eigenvectors of $\tilde{C}^{\mathrm{(SCA)}}$. We therefore tested the ability of the first eigenvector of $\tilde{C}^{\mathrm{(SCA)}}$ to recover $\vec{\Delta}$ for a single selection. We found that the square root of the elements of the first SCA eigenvector can provide high Recovery of $\vec{\Delta}$ (Figs.~\ref{Fig3},~\ref{fig:Covariance_structure},~\ref{fig:reweighting-method}) (\nameref{S1_Appendix}). However, the performance of SCA relies on conservation through $\phi_l$, and it has been shown that residue conservation actually dominates sector identification by SCA in certain proteins~\cite{Tesileanu15}. Consequently, for unbiased selection, SCA breaks down (Fig.~\ref{Fig3}(a), dashed curves) and cannot identify sector sites (\nameref{S1_Appendix} Fig.~\ref{fig:SCA_ICOD}). ICOD does not suffer from such shortcomings, and performs well over a large range of selection biases. Note that in SCA, only the top eigenvectors of $\tilde{C}^{\mathrm{(SCA)}}$ convey information about sectors (Figs.~\ref{fig:Covariance_structure}, \ref{fig:SCAsquareroot}). We also compared ICOD with another PCA-based approach~\cite{Cocco11}, which employs an inference method specific to the generalized Hopfield model, and should thus be well adapted to identifying sectors within our physical model (Eq.~\ref{eq:fitness}). Overall, this specialized approach performs similarly to ICOD, being slightly better for very localized sectors, but less robust than ICOD for strong selective biases and small datasets (\nameref{S1_Appendix}). 
Exactly as for PCA and ICOD, within this method, the top Recovery is obtained for the bottom eigenvector of the (modified) covariance matrix, consistent with $\vec{\Delta}$ in our model being a repulsive pattern~\cite{Cocco11}, but large Recoveries are also obtained for the top eigenvectors (Fig.~\ref{fig:Cocco_performance}). \subsection*{Robustness to different forms of selection} To assess the robustness of functional sectors to forms of selection other than the simple Gaussian selection window of Eqs.~\ref{eq:distr}-\ref{eq:quadr}, we selected sequences with an additive trait $T$ above a threshold $T_t$, and varied this threshold. For instance, a fluorescent protein might be selected to be fluorescent enough, which could be modeled by requiring that (a nonlinear transform of) an additive trait be sufficiently large~\cite{Otwinowski18}. As shown in Fig.~\ref{Fig4}, ICOD identifies the corresponding sectors as accurately as those resulting from our initial Gaussian selection window. In Fig.~\ref{Fig4}(d), we show the performance of both ICOD and SCA at recovering sectors arising from selection with a threshold. Consistent with previous results (see Fig.~\ref{Fig3}), we find that ICOD is more robust than SCA to extreme selections. We also successfully applied ICOD to other forms of selection: Fig.~\ref{fig:quartic_selection} shows the case of a quartic fitness function replacing the initial quadratic one (Eq.~\ref{eq:quadr}) in the Boltzmann distribution (Eq.~\ref{eq:distr}), and Fig.~\ref{fig:square_selection} shows the case of a rectangular selection window (\nameref{S1_Appendix}). These results demonstrate the robustness of functional sectors, and of ICOD, to different plausible forms of selection. \begin{figure}[h!] \centering \includegraphics[width=8.5cm]{Fig4} \vspace{0.2cm} \caption{{\bf Identification of sectors that result from threshold-based selection.
} (a) Histogram of the additive trait $T(\vec{S})=\vec{S}\cdot\vec{\Delta}$ for randomly sampled sequences where 0 and 1 are equally likely at each site. Sequence length is $L=100$, and mutational effects are synthetically generated with 20 sector sites (see \nameref{S1_Appendix}). Sequences are selected if they have a trait value $T(\vec{S})>T_t$ (orange shaded region). Selection is shown for $T_t=\langle T\rangle$, or equivalently $\gamma_t=0$, where the relative threshold is defined as $\gamma_t\equiv (T_t-\langle T\rangle)/\sqrt{\langle (T-\langle T\rangle)^2 \rangle}$. (b) Eigenvalues of the ICOD-modified inverse covariance matrix $\tilde{C}^{-1}$ (Eq.~\ref{eq:ICODmatrix}) of the selected sequences for $\gamma_t=0$. (c) Recovery of $\vec{\Delta}$ for all eigenvectors of $\tilde{C}^{-1}$ for $\gamma_t=0$. Gray dashed line: random expectation of Recovery. (d) Recovery of $\vec{\Delta}$ for ICOD and for SCA as functions of the relative selection threshold $\gamma_t$. The data in (d) are averaged over 100 realizations of $\vec \Delta$. } \label{Fig4} \end{figure} \newpage \subsection*{Extension to 21-state sequences and to natural sequences} So far, we have considered binary sequences, with only one type of mutation with respect to the reference state. In the \nameref{S1_Appendix}, we demonstrate that our formalism, including the ICOD method, extends to mutations among $q$ different states. The case $q=21$, which includes the 20 different amino-acid types plus the alignment gap, is the relevant one for real proteins. The single-site mutational effects $\Delta_l$ are then replaced by state-specific mutational effects $\Delta_l(\alpha_l)$ with $\alpha_l\in\{1,\dots,21\}$ (see Eq.~\ref{eq:trait}). Fig.~\ref{fig:protein-data-no-pseudocount} (\nameref{S1_Appendix}) shows that the generalized version of ICOD performs very well on synthetic data generated for the case $q=21$.
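To make the $q$-state notation concrete, the sketch below evaluates the additive trait $T(\vec{\alpha})=\sum_l \Delta_l(\alpha_l)$ with state $q$ as the reference, so that $\Delta_l(q)=0$ at every site. The data layout (a list of per-site effect lists) is our own illustrative choice, not a prescribed format.

```python
def additive_trait(seq, delta, q=21):
    """Additive trait T = sum_l Delta_l(alpha_l) for a q-state sequence.

    seq   : states alpha_l in {1, ..., q} at each site l
    delta : delta[l][a-1] = Delta_l(a); the reference state q has
            Delta_l(q) = 0 by the gauge choice.
    """
    for effects in delta:
        assert abs(effects[q - 1]) < 1e-12, "reference state must have zero effect"
    return sum(delta[l][a - 1] for l, a in enumerate(seq))
```

A sequence made entirely of the reference state has trait zero; each mutation simply adds its state-specific effect.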
We further demonstrate that sector identification is robust to gauge changes (reference changes) and to the use of pseudocounts (\nameref{S1_Appendix}). While the main purpose of this article is to propose an operational definition of functional protein sectors and to understand how they can arise, an interesting next question will be to investigate what ICOD can teach us about real data. As a first step in this direction, we applied ICOD to a multiple sequence alignment of PDZ domains. In this analysis, we employed a complete description with $q=21$, but we compressed the ICOD-modified inverse matrix using the Frobenius norm to focus on overall (and not residue-specific) mutational effects (see \nameref{S1_Appendix} for details). As shown in Figs.~\ref{Fig5}(a) and (b), both ICOD and SCA identify one strong outlying large eigenvalue, thus confirming that PDZ has only one sector~\cite{mclaughlin2012spatial}. Recall that due to the inversion step, the largest eigenvalue in ICOD is related to the mode with smallest variance, whose importance was demonstrated above. Furthermore, as seen in Figs.~\ref{Fig5}(c) and (d), both methods correctly predict the majority of residues found experimentally to have important mutational effects on ligand binding to the PDZ domain shown in Fig.~\ref{Fig1}(a)~\cite{mclaughlin2012spatial}. For instance, over the 20 top sites identified by ICOD (resp. SCA), we find that 85\% (resp. 75\%) of them are also among the 20 experimentally most important sites. Note that for SCA, we recover the result from Ref.~\cite{mclaughlin2012spatial}. The performance of ICOD is robust to varying the cutoff for removal of sites with a large proportion of gaps (see Fig.~\ref{fig:Supp_Effect_Cut_gap}), but notably less robust than SCA to pseudocount variation (see Fig.~\ref{fig:Supp_PseudoCount_ICOD_SCA}). \begin{figure}[h!] 
\centering \includegraphics[width=8.5cm]{Fig5} \vspace{0.2cm} \caption{ {\bf Performance of ICOD and SCA at predicting the 20 sites with largest experimentally-determined mutational effects in a PDZ domain.} (a) Eigenvalues of the compressed ICOD-modified inverse covariance matrix $\tilde{C}^{-1}$ (\nameref{S1_Appendix}). (b) Eigenvalues of the SCA matrix. (c) True Positive (TP) rates obtained by taking the first eigenvector $\vec{\nu}^{(1)}$ from the compressed ICOD-modified inverse covariance matrix, generating a ranked list of sites of descending magnitudes of the components $||\nu_l^{(1)}||$ of this eigenvector at each site $l$ (\nameref{S1_Appendix}), and computing the fraction of the top sites in this predicted ordering that are also among the 20 experimentally most important sites~\cite{mclaughlin2012spatial}. Results are shown versus the number of top predicted sites (``count''). (d) TP rates from SCA, computed as in panel (c). In panels (c) and (d), the TP rate values obtained for the top 20 predicted sites are indicated by arrows. In all panels, a pseudocount ratio $\Lambda=0.02$ was used, and sites with more than $15\%$ gap state were discarded (see \nameref{S1_Appendix} for details). } \label{Fig5} \end{figure} Importantly, both ICOD and SCA perform much better than random expectation, which is $29\%$. Hence, both of these methods can be useful to identify functionally important sites. The slightly greater robustness of SCA to pseudocounts on this particular dataset (see Fig.~\ref{fig:Supp_PseudoCount_ICOD_SCA}) might come from the fact that many of the experimentally-identified functionally important sites in the PDZ domain are strongly conserved~\cite{Tesileanu15}, which makes the conservation reweighting step in SCA advantageous. 
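For concreteness, the conservation reweighting step discussed above, $\tilde{C}^{(\mathrm{SCA})}_{ll'}=|\phi_l C_{ll'} \phi_{l'}|$, can be sketched for binary sequences as follows. The specific conservation factor $\phi_l$ used below (the gradient of a relative entropy with respect to a background frequency) is one common choice, adopted here purely as an assumption; the exact weight used in SCA differs in detail~\cite{Rivoire16}.

```python
import math

def sca_matrix(C, freqs, background=0.5):
    """Sketch of an SCA-style reweighting: M[l][m] = |phi_l * C[l][m] * phi_m|.

    C     : L x L covariance matrix (list of lists)
    freqs : per-site frequency P_l of the mutant state
    phi_l here is |log(P_l (1-bg) / ((1-P_l) bg))|, an illustrative
    stand-in for the conservation weight used in SCA.
    """
    L = len(C)
    phi = []
    for p in freqs:
        p = min(max(p, 1e-6), 1.0 - 1e-6)  # guard against log(0)
        phi.append(abs(math.log(p * (1.0 - background) / ((1.0 - p) * background))))
    return [[abs(phi[l] * C[l][m] * phi[m]) for m in range(L)] for l in range(L)]
```

Note how a site whose frequency matches the background gets weight zero, which is precisely why SCA emphasizes conserved sites and breaks down when selection is unbiased.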
Since residue conservation alone is able to predict most of the experimentally important PDZ sites~\cite{Tesileanu15}, we also compared conservation to SCA and ICOD: ranking sites by conservation (employing the conservation score of Ref.~\cite{Rivoire16}, see \nameref{S1_Appendix}) indeed identifies 70\% of the top 20 experimentally-determined sites with important mutational effects. Interestingly, ICOD scores are slightly more strongly correlated with conservation than SCA scores are (see Fig.~\ref{fig:Supp_Similarity_Conservation_ICOD_SCA}), despite the fact that conservation is explicitly used in SCA and not in ICOD. Overall, this preliminary application to real data highlights the ability of ICOD to identify functionally related amino acids in a principled way that only relies on covariance. We emphasize that the main goal of this paper is to provide insight into the possible physical origins of sectors, and into the statistical signatures of these physical sectors in sequence data. A more extensive application of ICOD and related methods to real sequence data will be the subject of future work. \section*{Discussion} We have demonstrated how sectors of collectively correlated amino acids can arise from evolutionary constraints on functional properties of proteins. Our model is very general, as it only requires that the functional property under selection, in any of its various forms, be described by an underlying additive trait, an assumption that has proven valid in many relevant situations~\cite{DePristo05,Wylie11,Starr16,Otwinowski18}. We showed that the primary signature of functional selection acting on sequences lies in the small-eigenvalue modes of the covariance matrix. In contrast, sectors are usually identified from the large-eigenvalue modes of the SCA matrix~\cite{halabi2009protein,Rivoire16}.
This is not in contradiction with our results because, as we showed, signatures of our functional sectors are often also found in large-eigenvalue modes of the covariance matrix. Besides, the construction of the SCA matrix from the covariance matrix involves reweighting by conservation and taking an absolute value or a norm~\cite{halabi2009protein,Rivoire16}, which can substantially modify its eigenvectors, eigenvalues, and their order. Conservation is certainly important in real proteins, especially in the presence of phylogeny; indeed, the SCA matrix, which includes both conservation and covariance, was recently found to capture well experimentally-measured epistasis with respect to the free energy of PDZ ligand binding~\cite{Salinas18}. However, the fundamental link we propose between functional sectors and small-eigenvalue modes of the covariance matrix is important, since large-eigenvalue modes of the covariance matrix also contain confounding information about subfamily-specific residues~\cite{Casari95} and phylogeny~\cite{Qin18}, and consistently, some sectors identified by SCA have been found to reflect evolutionary history rather than function~\cite{halabi2009protein}. Interestingly, the small-eigenvalue modes are also the ones that contain most information about structural contacts in real proteins~\cite{cocco2013principal}. Hence, our results help explain previously observed correlations between sectors and contacts, e.g. the fact that contacts are overrepresented within a sector but not across sectors~\cite{rivoire2013elements}. We introduced a principled method to detect functional sectors from sequence data, based on the primary signature of these sectors in the small-eigenvalue modes of the covariance matrix. We further demonstrated the robustness of our approach to the existence of multiple traits simultaneously under selection, to various forms of selection, and to data-specific questions such as reference choices and pseudocounts. 
Importantly, our modeling approach allowed us to focus on functional selection alone, in the absence of historical contingency and of specific structural constraints, thus yielding insights complementary to purely data-driven methods. The collective modes investigated here are just one source of residue-residue correlations. Next, it will be interesting to study the intriguing interplay between functional sectors, phylogeny, and contacts, and to apply our methods to multiple protein families. Our results shed light on an aspect of the protein sequence-function relationship and open new directions in protein sequence analysis, with implications in synthetic biology, building toward function-driven protein design. \section*{Supporting information} \paragraph*{S1~Appendix.} \label{S1_Appendix} {\bf Methodological details and further results.} In \nameref{S1_Appendix}, we present additional details about our model and methods, as well as additional results. \section*{Acknowledgments} S.-W. W. and N. S. W. acknowledge the Center for the Physics of Biological Function under NSF Grant PHY-1734030. A.-F. B. and N. S. W. acknowledge the Aspen Center for Physics, which is supported by NSF Grant PHY-1607611. S.-W. W. was supported by the NSFC under Grants No. U1430237 and 11635002. S.-W. W. also acknowledges Tsinghua University for supporting a half-year visit in Princeton University. N. S. W. was supported by NSF Grant MCB-1344191 and by NIH Grant R01 GM082938. 
\newpage \noindent{\huge \textbf{ S1~Appendix} } \bigskip \renewcommand{\thefigure}{S\arabic{figure}} \setcounter{figure}{0} \renewcommand{\theequation}{S\arabic{equation}} \setcounter{equation}{0} \renewcommand{\thetable}{S\arabic{table}} \setcounter{table}{0} \newcommand{\wh}[1]{\widehat{#1}} \tableofcontents \newpage \section{Supplemental results for elastic network model of PDZ domain} \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{PDZ_Delta_histogram} \vspace{0.2cm} \caption{{\bf Magnitude of single-site mutational effects $\Delta_l$ for the PDZ domain conformational change from Fig.~1.} (a) Magnitudes by rank. (b) Histogram of magnitudes. According to our definition, the sites of large magnitude constitute ``sector'' sites with respect to selection on the energy cost of this conformational change, while all others are ``non-sector'' sites. } \label{fig:PDZ_Delta_histogram} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{Supp_inverse} \vspace{0.2cm} \caption{{\bf Performance of ICOD for the selected sequence ensemble from Fig.~1.} (a) Eigenvalues for the ICOD method (upper) and Recovery of $\vec{\Delta}$ for all eigenvectors (lower). (b) Leading eigenvector $\nu_l^{(1)}$ (upper) and mutational effect $\Delta_l$ at site $l$ (lower, same as in Fig. 1(f)). The excellent performance of ICOD on this unbiased ensemble of sequences supports the general applicability of the ICOD method to both biased and unbiased sequence ensembles. } \label{eq:supp-inverse-Fig1} \end{figure} \section{Recovery by a random vector} Here, we calculate the random expectation of the Recovery of the mutational-effect vector $\vec{\Delta}$ by a generic other vector $\vec{\nu}$, in order to establish a null model for comparison.
For a binary sequence, Recovery, as defined in Eq.~\ref{eq:recovery-measure}, can be expressed as \begin{equation} \mathrm{Recovery}= \vec{\Delta}'\cdot \vec{\nu'}=\sum_{l=1}^L \Delta'_l\,\nu'_l, \label{rec_new} \end{equation} with $\Delta'_l=|\Delta_l|/\sqrt{\sum_l \Delta_l^2}$ and $\nu'_l=|\nu_l|/\sqrt{\sum_l \nu_l^2}$. As before, $L$ denotes the length of the sequence and hence the number of components of $\vec{\Delta}$ and $\vec{\nu}$. As $\vec{\nu'}$ is a normalized $L$-dimensional vector, its components can be expressed in $L$-dimensional spherical coordinates using $L-1$ angles $\theta_i$: \begin{align} \nu'_l&=\left(\prod_{i=1}^{l-1}\sin\theta_i \right)\cos\theta_{l}\,\,\,\,\,\,\,\,\forall l\in\{1,\dots,L-1\}\,,\label{nuprime}\\ \nu'_L&=\prod_{i=1}^{L-1}\sin\theta_i\,, \label{nuprimeL} \end{align} where $\theta_i\in [0,\pi/2]$ for all $i\in\{1,\dots,L-1\}$, because all components of $\vec{\nu'}$ are nonnegative. Note that we employ the usual convention that empty products are equal to one: Eq.~\ref{nuprime} yields $\nu'_1=\cos\theta_1$. The average Recovery for a random vector $\vec{\nu'}$ with an orientation uniformly distributed on the $L$-dimensional sphere reads: \begin{equation} \langle \mathrm{Recovery}\rangle= \frac{\int_\Omega d\Omega\,\, \sum_l \Delta'_l \nu'_l}{\int_\Omega d\Omega}=\frac{ \sum_l \Delta'_l\,I_l}{\int_\Omega d\Omega}\,, \label{recov_expr_int} \end{equation} where the angular element is $d\Omega=\prod_{i=1}^{L-1}d\theta_{i}\,\sin^{L-i-1}(\theta_i)$, the integration domain is $\Omega=\left[0,\pi/2\right]^{L-1}$, and we have introduced $I_l=\int_\Omega d\Omega\,\nu'_l$.
Using Eq.~\ref{nuprime}, we obtain for $1\leq l\leq L-1$ \begin{equation} I_l=\int_\Omega d\Omega\,\nu'_l=\left(\prod_{i=1}^{l-1}\int_0^{\pi/2}d\theta_i\,\sin^{L-i}(\theta_i)\right)\left(\int_0^{\pi/2}d\theta_l\,\sin^{L-l-1}(\theta_l)\cos(\theta_l)\right)\left(\prod_{i=l+1}^{L-1}\int_0^{\pi/2}d\theta_i\,\sin^{L-i-1}(\theta_i)\right)\,, \end{equation} and similarly, Eq.~\ref{nuprimeL} yields \begin{equation} I_L=\int_\Omega d\Omega\,\nu'_L=\prod_{i=1}^{L-1}\int_0^{\pi/2}d\theta_i\,\sin^{L-i}(\theta_i)\,. \end{equation} Using the following results valid for $n>-1$: \begin{equation} \int_0^{\pi/2} d\theta\,\sin^n(\theta)=\frac{\sqrt{\pi}}{2}\,\frac{\Gamma \left(\frac{1+n}{2}\right)}{\Gamma\left(\frac{n+2}{2}\right)}\,;\quad \int_0^{\pi/2}d\theta\, \sin^n(\theta) \cos(\theta) =\frac{1}{n+1}\,, \end{equation} where $\Gamma$ denotes the Euler Gamma function, which satisfies $\Gamma(x+1)=x\,\Gamma(x)$ for all $x$, we obtain for $1\leq l\leq L$: \begin{equation} I_l=\frac{\pi^{(L-1)/2}}{2^{L-1}\,\Gamma\left(\frac{L+1}{2}\right)}, \label{I_l} \end{equation} which is independent of $l$. Besides, \begin{equation} \int_ \Omega d\Omega=\frac{\pi^{L/2}}{2^{L-1}\,\Gamma(L/2)}. \label{Om} \end{equation} Combining Eq.~\ref{recov_expr_int} with Eqs.~\ref{I_l} and~\ref{Om} finally yields \begin{equation} \langle \mathrm{Recovery}\rangle=\frac{ \sum_l \Delta'_l\,I_l}{\int_\Omega d\Omega}= \frac{\Gamma \left(L/2\right)}{\sqrt{\pi}\,\,\Gamma\left(\frac{L+1}{2}\right)}\sum_l \Delta'_l=\frac{\Gamma \left(L/2\right)}{\sqrt{\pi}\,\,\Gamma\left(\frac{L+1}{2}\right)}\,\frac{\sum_l |\Delta_l|}{\sqrt{\sum_l \Delta_l^2}}. \label{eq:exact-average-Recovery} \end{equation} In particular, in the relevant regime $L\gg 1$, an asymptotic expansion of $\Gamma$ yields: \begin{equation} \langle \mathrm{Recovery}\rangle \approx \sqrt{\frac{2}{\pi L}} \,\,\frac{\sum_l |\Delta_l|}{\sqrt{\sum_l \Delta_l^2}}. 
\label{eq:random-Recovery} \end{equation} The maximum expectation of Recovery is obtained when all components of $\vec{\Delta}$, i.e. all mutational effects, are identical: \begin{equation} \langle \mathrm{Recovery}\rangle_\textrm{max}= \sqrt{\frac{2}{\pi}}\approx 0.798. \end{equation} Conversely, the average Recovery becomes minimal when only one component of $\vec{\Delta}$ is nonzero, which constitutes the limit of the case where the mutational effect at one site is dominant: \begin{equation} \langle \mathrm{Recovery}\rangle_\textrm{min}= \sqrt{\frac{2}{\pi L}}, \end{equation} which approaches zero in the limit $L\to \infty$. \section{Inverse covariance matrix of our sequence ensembles} Here, we present a derivation of the small-coupling approximation of the inverse covariance matrix for our artificially-generated sequence ensembles. In this small-coupling limit, the inverse covariance matrix provides an estimate of the energetic couplings used to generate the data. More generally, deducing energetic parameters from observed statistics is a well-known inference problem, also known as an inverse problem. Two-body energetic couplings can be inferred from the one and two-body frequencies observed in the data, using a standard maximum entropy approach. However, the exact calculation of the energetic terms is difficult, and various approximations have been developed. Following Refs~\cite{Marks11, morcos2011direct}, we use the mean-field or small-coupling approximation, which was introduced in Ref.~\cite{Plefka82} for the Ising spin-glass model. For the sake of completeness, we now review the main steps of the calculation, which follow Ref.~\cite{morcos2011direct}. Note that we do not use inference methods specific to low-rank coupling matrices~\cite{Cocco11,cocco2013principal} because we wish to retain generality, with the application to real sequence data in mind. We begin with the case of binary sequences, which is discussed in the main text. 
Following that, we generalize to cases where more than two states are allowed at each site, such as the 21 possible states for real protein sequences (20 amino acids plus gap). \subsection{Binary sequences} We begin by deriving Eq.~\ref{eq:inverse-PCA-q=2} from the main text, which provides an approximation for the inverse covariance matrix of the ensembles of our binary artificial sequences. Each sequence $\vec{S}$ is such that $S_l\in\{0,1\}$ for each site $l$ with $1\leq l\leq L$, where $L$ is the length of the sequence. \subsubsection{From a sector model for binary sequences to an Ising model} \label{Sec_Ising} Recall the fitness $w$ of a binary sequence $\vec{S}$ under selection for trait $T$ to be close to $T^*$ (Eq.~\ref{eq:fitness}): \begin{equation} w(\vec{S})=-\frac{\kappa}{2}\left(T(\vec{S})-T^* \right)^2=-\frac{\kappa}{2}\left(\sum_l \Delta_l S_l-T^* \right)^2\,. \label{Hini} \end{equation} We introduce $s_l=2 S_l-1$: it is an ``Ising spin'' variable ($S_l=0\Leftrightarrow s_l=-1$ and $S_l=1\Leftrightarrow s_l=1$). The fitness in Eq.~\ref{Hini} can be rewritten as \begin{equation} w(\vec{s})=-\frac{\kappa}{2}\left(\sum_l D_l s_l-\alpha \right)^2\,, \label{HIs} \end{equation} with $D_l=\Delta_l/2$ and $\alpha=T^*-\sum_l D_l$. Expanding yields \begin{equation} w(\vec{s})=-\frac{\kappa}{2}\left(\sum_{l\neq p} D_l D_p s_l s_p +\sum_l D_l^2 - 2\alpha\sum_l D_l s_l +\alpha^2 \right)\,, \label{HIs2} \end{equation} where we have used the fact that $s_l^2=1$. The second term and the last term in Eq.~\ref{HIs2} are both constants, and therefore our fitness is equivalent to \begin{equation} w(\vec{s})=-\frac{\kappa}{2}\left(\sum_{l\neq p} D_l D_p s_l s_p - 2\alpha\sum_l D_l s_l \right)\,. \label{SecToIs} \end{equation} This fitness has the form of a standard Ising Hamiltonian with inter-spin couplings and local fields, albeit with the convention difference in overall sign between fitness and energy.
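The equivalence between Eq.~\ref{Hini} and Eq.~\ref{SecToIs}, up to the discarded constant $\frac{\kappa}{2}\left(\sum_l D_l^2+\alpha^2\right)$, is easy to verify numerically; a minimal sketch (function names are ours):

```python
import random

def w_quadratic(S, delta, kappa, t_star):
    """Quadratic fitness: w = -kappa/2 (sum_l Delta_l S_l - T*)^2."""
    t = sum(d * s for d, s in zip(delta, S))
    return -0.5 * kappa * (t - t_star) ** 2

def w_ising(S, delta, kappa, t_star):
    """Ising form in spin variables s_l = 2 S_l - 1, with D_l = Delta_l / 2
    and alpha = T* - sum_l D_l (constant terms dropped)."""
    s = [2 * x - 1 for x in S]
    D = [d / 2.0 for d in delta]
    alpha = t_star - sum(D)
    L = len(S)
    pair = sum(D[l] * D[p] * s[l] * s[p]
               for l in range(L) for p in range(L) if l != p)
    field = sum(D[l] * s[l] for l in range(L))
    return -0.5 * kappa * (pair - 2.0 * alpha * field)
```

For any binary sequence the two functions differ by exactly that sequence-independent constant.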
\subsubsection{First-order small-coupling expansion} We next consider the general Ising Hamiltonian with inter-spin couplings and local fields \begin{equation} H(\vec{s})=-\frac{1}{2}\epsilon \sum_{i\neq j} J_{ij} s_i s_j -\sum_i h_i s_i\,, \label{HIs_SC} \end{equation} where $\epsilon$ is a constant to be employed in a small-coupling expansion. With this Hamiltonian, taking thermal energy $k_{\rm B}T = 1$, the equilibrium probability of finding a particular sequence $\vec{s}$ is \begin{equation} P(\vec{s})=\frac{1}{Z}e^{-H\left(\vec{s}\right)}, \end{equation} where $Z=\sum_{\vec{s}}e^{-H\left(\vec{s}\right)}$. Introducing $F=-\log Z$, we have \begin{align} \frac{\partial F}{\partial h_i}&=-\langle s_i\rangle= -m_i\,,\nonumber\\ \frac{\partial^2 F}{\partial h_i\partial h_j}&=-\frac{\partial m_i}{\partial h_j}=\langle s_i\rangle \langle s_j\rangle-\langle s_i s_j\rangle=-C'_{ij}\,, \end{align} where, following the Ising terminology, $m_i$ denotes the average magnetization at site $i$, while $C'$ denotes the covariance matrix in the Ising convention. Note that, using the identity $m_i=2P_i-1$, where $P_i$ denotes the probability that $s_i=1$, we obtain \begin{equation} C'_{ij}=\langle s_i s_j\rangle-\langle s_i\rangle \langle s_j\rangle=4\left(P_{ij}-P_i P_j\right)=4\, C_{ij}\,, \label{Cconv} \end{equation} where $P_{ij}$ is the probability that $s_i=s_j=1$, and $C$ denotes the covariance matrix in the Potts convention, which is used in the main text because it allows straightforward generalization to the case where more than two states are possible at each site. Performing a Legendre transform, we introduce $G=F+\sum_i m_i h_i$, yielding \begin{align} \frac{\partial G}{\partial m_i}&=h_i\,, \label{eq1}\\ \frac{\partial^2 G}{\partial m_i\partial m_j}&=\frac{\partial h_i}{\partial m_j}={C'_{ij}}^{-1}\,. 
\label{eq2} \end{align} We now perform a small-coupling expansion and express $G$ to first order in $\epsilon$ (see Eq.~\ref{HIs_SC}): $G(\epsilon)\approx G(0)+\epsilon G'(0)$. Since sites are independent for $\epsilon=0$, it is straightforward to express $G(0)$ and $G'(0)$ as a function of the one-body expectations, represented by $m_i$, and of the couplings. We obtain \begin{equation} G(0)=\sum_i \frac{m_i+1}{2}\log\left(\frac{m_i+1}{2}\right)+\frac{1-m_i}{2}\log\left(\frac{1-m_i}{2}\right)\,, \end{equation} and \begin{equation} G'(0)=\frac{\partial G}{\partial \epsilon}(0)=-\frac{1}{2}\sum_{i\neq j} J_{ij} m_i m_j\,. \end{equation} Using these expressions, and taking $\epsilon=1$ in the expansion, we obtain the following approximation for $G$: \begin{equation} G\approx \sum_i \frac{m_i+1}{2}\log\left(\frac{m_i+1}{2}\right)+\frac{1-m_i}{2}\log\left(\frac{1-m_i}{2}\right)-\frac{1}{2}\sum_{i\neq j} J_{ij} m_i m_j\,. \label{Gappx} \end{equation} Using Eqs.~\ref{eq1} and~\ref{eq2}, we obtain the elements of the inverse covariance matrix from Eq.~\ref{Gappx}: \begin{align} {C'_{kl}}^{-1}&=-J_{kl}\,,\,\,\forall l\neq k\,,\nonumber\\ {C'_{ll}}^{-1}&= \frac{1}{2}\left(\frac{1}{1+m_l}+\frac{1}{1-m_l}\right)= \frac{1}{4}\left(\frac{1}{P_l}+\frac{1}{1-P_l}\right)\,, \label{Corr_Ising} \end{align} where $P_l$ denotes the probability that $s_l=1$. Note that Eq.~\ref{Gappx} is a first-order small-coupling (or mean-field) approximation. The expansion can be extended to higher order, and the second-order expansion is known as the Thouless, Anderson, and Palmer (TAP) free energy~\cite{Thouless77,Plefka82}. \subsubsection{Application to our sector model} Comparing Eqs.~\ref{SecToIs} and~\ref{HIs_SC} (with $\epsilon=1$) allows us to identify the couplings in our sector model as \begin{equation} J_{kl}=-\kappa\, D_k D_l=- \kappa\, \Delta_k \Delta_l /4\,,\,\,\forall k\neq l\,. \end{equation} Note that this expression is in the Ising gauge (also known as the zero-sum gauge). 
Recall also that the link to the Potts convention is made through $C'=4\,C$ (Eq.~\ref{Cconv}), which implies ${C'}^{-1}=C^{-1}/4$. Finally, recall that fitness and energy have opposite signs. Hence, in the Potts convention, Eq.~\ref{Corr_Ising} yields for our sector model: \begin{align} C_{kl}^{-1}&=\kappa \Delta_k \Delta_l\,,\,\,\forall l\neq k\,,\nonumber\\ C_{ll}^{-1}&=\frac{1}{P_l}+\frac{1}{1-P_l}\,. \label{eqn2} \end{align} This corresponds to Eq.~\ref{eq:inverse-PCA-q=2} in the main text. \subsection{Sequences with $q$ possible states at each site} \label{Sec:q-state} \subsubsection{From a sector model to a Potts model for sequences} Motivated by the fact that a real protein sequence has 21 possible states at each site (20 different amino acids plus gap), we now generalize the above result to the case where $q$ states are possible at each site. We denote these states by $\alpha$ with $\alpha\in\{1,..,q\}$. Our sector model can then be mapped to a $q$-state Potts model. The length-$L$ vector $\vec{\Delta}$ of single-site mutational effects introduced in the two-state case in the main text is replaced by a $(q-1)\times L$ matrix of mutational effects, each being denoted by $\Delta_l(\alpha_l)$. These mutational effects can be measured with respect to a reference sequence $\vec{\alpha}^0$ satisfying $\Delta_l(\alpha_l^0)=0,\,\,\forall l\in\{1,\dots,L\}$: at each site $l$, the state present in the reference sequence $\vec{\alpha}^0$ serves as the reference with respect to which the mutational effects at that site are measured. For the sake of simplicity, we will take state $q$ as reference state at all sites. This does not lead to any loss of generality, since it is possible to reorder the states for each $l$. 
The generalization of the fitness function Eq.~\ref{eq:fitness} (Eq.~\ref{Hini}) to our $q$-state model can be written as \begin{equation} w(\vec{\alpha})=-\frac{\kappa}{2}\left(T(\vec{\alpha})-T^*\right)^2=-\frac{\kappa}{2}\left(\sum_{l=1}^L\Delta_l(\alpha_l)-T^*\right)^2\,, \label{Ham0} \end{equation} (see Eq.~\ref{eq:quadr} in the main text). Expanding this expression, discarding a constant term, and using the fact that there can only be one state at each site, we find that the fitness of sequences can be expressed as \begin{equation} w(\vec{\alpha})=-\frac{\kappa}{2}\sum_{l\neq k}\Delta_l(\alpha_l)\Delta_k(\alpha_k) -\frac{\kappa}{2}\sum_{l=1}^L\Delta_l(\alpha_l)\left(\Delta_l(\alpha_l)-2\,T^*\right)\,. \label{Ham2} \end{equation} This is a particular case of the more general Potts Hamiltonian \begin{equation} H(\vec{\alpha})=-\frac{1}{2}\sum_{l\neq k} e_{lk}(\alpha_l,\alpha_k)-\sum_{l=1}^{L}h_{l}(\alpha_l)\,, \label{Potts1} \end{equation} which is the one usually considered in Direct Coupling Analysis (DCA)~\cite{morcos2011direct,Marks11}. In order to identify Eq.~\ref{Ham2} with Eq.~\ref{Potts1}, one must deal with the degeneracies present in Eq.~\ref{Potts1}, where the number of independent parameters is $L(q-1)+L(L-1) (q-1)^2/2$~\cite{Ekeberg14}. To lift this degeneracy, we choose the gauge usually taken in mean-field DCA~\cite{morcos2011direct}: $e_{lk}(\alpha_l,q)=e_{lk}(q,\alpha_k)=h_l(q)=0$ for all $l, k, \alpha_l, \alpha_k$. This choice is consistent with taking state $q$ as the reference state for mutational effects (see above), and we will refer to it as the reference-sequence gauge. This gauge choice enables us to identify the couplings between Eq.~\ref{Ham2} and Eq.~\ref{Potts1}: \begin{align} e_{lk}(\alpha_l,\alpha_k)&=-\kappa\Delta_l(\alpha_l)\Delta_k(\alpha_k)\,, \label{identify} \end{align} for all $l \neq k$, and all $\alpha_l, \alpha_k$, with $\Delta_l(q)=0$ for all $l$ (recalling that fitness and energy have opposite signs).
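This identification can likewise be checked numerically: evaluating Eq.~\ref{Ham0} directly, and evaluating the Potts form of Eq.~\ref{Potts1} with the couplings of Eq.~\ref{identify} and the fields read off from the single-site term of Eq.~\ref{Ham2}, must agree up to the discarded constant $\frac{\kappa}{2}T^{*2}$. A short sketch (names illustrative):

```python
import random

def w_from_trait(alphas, delta, kappa, t_star):
    """Quadratic q-state fitness: w = -kappa/2 (sum_l Delta_l(alpha_l) - T*)^2.

    alphas : states alpha_l in {1, ..., q}; delta[l][a-1] = Delta_l(a),
    with Delta_l(q) = 0 in the reference-sequence gauge."""
    t = sum(delta[l][a - 1] for l, a in enumerate(alphas))
    return -0.5 * kappa * (t - t_star) ** 2

def w_from_potts(alphas, delta, kappa, t_star):
    """Potts form with e_lk(a, b) = -kappa Delta_l(a) Delta_k(b) and
    h_l(a) = -kappa/2 Delta_l(a) (Delta_l(a) - 2 T*)."""
    L = len(alphas)
    d = [delta[l][alphas[l] - 1] for l in range(L)]
    pair = sum(-kappa * d[l] * d[k] for l in range(L) for k in range(L) if l != k)
    field = sum(-0.5 * kappa * d[l] * (d[l] - 2.0 * t_star) for l in range(L))
    H = -0.5 * pair - field
    return -H  # fitness and energy have opposite signs
```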
\subsubsection{First-order small-coupling expansion} The derivation of the first-order mean-field or small-coupling approximation for $q$-state models is very similar to the Ising case presented above. Hence, we will simply review the main results (see Ref.~\cite{morcos2011direct}). We start with the Hamiltonian \begin{equation} H(\vec{\alpha})=-\frac{\epsilon}{2}\sum_{l\neq k} e_{lk}(\alpha_l,\alpha_k)-\sum_{l=1}^{L}h_{l}(\alpha_l)\,, \label{Potts1b} \end{equation} where $\epsilon$ has been introduced to perform the small-coupling expansion. Eq.~\ref{Potts1b} coincides with Eq.~\ref{Potts1} for $\epsilon=1$. Considering $F=-\log(Z)$ with $Z=\sum_{\vec{\alpha}}e^{-H\left(\vec{\alpha}\right)}$, where $H\left(\vec{\alpha}\right)$ is the Potts Hamiltonian in Eq.~\ref{Potts1b}, we have for all $k$ and all $\alpha_k<q$: \begin{equation} \frac{\partial F}{\partial h_k(\alpha_k)}=-P_k(\alpha_k)\,, \end{equation} where $P_k(\alpha_k)$ is the one-body probability. Similarly, we have for all $k, l$ and all $\alpha_k<q$ and $\alpha_l<q$: \begin{equation} \frac{\partial^2 F}{\partial h_l(\alpha_l)\partial h_k(\alpha_k)}=-\frac{\partial P_k(\alpha_k)}{\partial h_l(\alpha_l)}=-C_{kl}(\alpha_k,\alpha_l)\,, \label{Fseconde} \end{equation} where we have introduced the covariance $C_{kl}(\alpha_k,\alpha_l)=P_{kl}(\alpha_k,\alpha_l)-P_k(\alpha_k)P_l(\alpha_l)$. We perform a Legendre transform and introduce $G=F+\sum_i\sum_{\alpha_i}h_i(\alpha_i)P_i(\alpha_i)$, yielding \begin{align} \frac{\partial G}{\partial P_k(\alpha_k)}&=h_k(\alpha_k)\,,\label{field}\\ \frac{\partial^2 G}{\partial P_l(\alpha_l)\partial P_k(\alpha_k)}&=\frac{\partial h_l(\alpha_l)}{\partial P_k(\alpha_k)}=C^{-1}_{kl}(\alpha_k,\alpha_l)\,, \label{invC} \end{align} for all $k, l$ and all $\alpha_k<q$ and $\alpha_l<q$.
Note that, in the latter equation, $C^{-1}_{kl}(\alpha,\beta)$ is shorthand for $A^{-1}_{ij}$, where $A$ is the $(q-1)L\times (q-1)L$ covariance matrix from which terms involving the reference state $q$ have been excluded: $A_{ij}=C_{kl}(\alpha,\beta)$, where $i=(q-1)(k-1)+\alpha$ and $j=(q-1)(l-1)+\beta$, with $\alpha\in\{1,\dots,q-1\}$ and $\beta\in\{1,\dots,q-1\}$~\cite{Ekeberg13}. We next perform a first-order expansion of $G$ in $\epsilon$, and take $\epsilon=1$, yielding: \begin{equation} G\approx \sum_l\sum_{\alpha_l}P_l(\alpha_l)\log\left(P_l(\alpha_l)\right)-\frac{1}{2}\sum_{l\neq k}\sum_{\alpha_l,\alpha_k}e_{lk}(\alpha_l,\alpha_k)P_l(\alpha_l)P_k(\alpha_k)\,. \label{G1} \end{equation} Applying Eqs.~\ref{field},~\ref{invC} to Eq.~\ref{G1}, and using $P_l(q)=1-\sum_{\alpha_l<q}P_l(\alpha_l)$, gives \begin{align} C^{-1}_{kl}(\alpha_k,\alpha_l)&=-e_{kl}(\alpha_k,\alpha_l),\,\,\,\forall \,l\neq k\,,\nonumber\\ C^{-1}_{kk}(\alpha_k,\alpha_l)&=\frac{1}{P_k(q)}+\frac{\delta_{\alpha_k\alpha_l}}{P_k(\alpha_k)}\,. \label{mf_coupl} \end{align} This result is the standard one found in DCA~\cite{morcos2011direct}. \subsubsection{Application to our sector model} Combining Eqs.~\ref{identify} and \ref{mf_coupl}, we obtain for our sector model: \begin{align} C^{-1}_{kl}(\alpha_k,\alpha_l)&=\kappa \Delta_{k}(\alpha_k)\Delta_l(\alpha_l),\,\,\,\forall \,l\neq k\,,\nonumber\\ C^{-1}_{kk}(\alpha_k,\alpha_l)&=\frac{1}{P_k(q)}+\frac{\delta_{\alpha_k\alpha_l}}{P_k(\alpha_k)}\,. \label{invcorrPotts} \end{align} For $q=2$, Eq.~\ref{invcorrPotts} reduces to Eq.~\ref{Corr_Ising} (Eq.~\ref{eq:inverse-PCA-q=2} in the main text), using $1-P_l=P_{l}(q)$. \subsubsection{Selection on multiple traits} So far, we have mainly discussed the case where there is selection on only one trait (yielding one sector). However, real proteins face various selection pressures.
The generalization of the fitness in Eq.~\ref{Ham0} to simultaneous selection on $N$ different traits reads \begin{equation} w(\vec{\alpha})=-\sum_{i=1}^N \frac{\kappa_i}{2} \left(T_i-T^*_i\right)^2=-\sum_{i=1}^N \frac{\kappa_i}{2} \left(\sum_{l=1}^L \Delta_{i,l}(\alpha_l)-T^*_i\right)^2\,, \label{eq:landscape-1} \end{equation} which corresponds to Eq.~\ref{eq:newfitness} in the main text. We choose the reference-sequence gauge, assuming again for simplicity that the reference state is $q$ at each site. Identification with the general Potts Hamiltonian of Eq.~\ref{Potts1} (recalling that fitnesses and energies have opposite signs) then yields \begin{equation} e_{lk}(\alpha_l,\alpha_k)=-\sum_{i=1}^N \kappa_i \Delta_{i,l}(\alpha_l)\Delta_{i,k}(\alpha_k)\,, \end{equation} which generalizes Eq.~\ref{identify} to the multiple selection case. Using the small-coupling expansion result in Eq.~\ref{mf_coupl}, we obtain the following approximation for the inverse covariance matrix: \begin{align} C^{-1}_{kl}(\alpha_k,\alpha_l)&=\sum_{i=1}^N \kappa_i \Delta_{i,k}(\alpha_k)\Delta_{i,l}(\alpha_l),\,\,\,\forall \,l\neq k\,,\nonumber\\ C^{-1}_{kk}(\alpha_k,\alpha_l)&=\frac{1}{P_k(q)}+\frac{\delta_{\alpha_k\alpha_l}}{P_k(\alpha_k)}\,. \label{invcorrPottsMulti} \end{align} This generalizes Eq.~\ref{invcorrPotts} to the case of simultaneous selection on multiple traits. \section{Robustness of functional sectors and of ICOD} In the main text, we introduced the Inverse Covariance Off-Diagonal (ICOD) method to identify protein sectors from sequence data. The ICOD method exploits the approximate expression derived above for the inverse covariance matrix (Eq.~\ref{invcorrPotts}); in particular, ICOD makes use of the fact that the off-diagonal elements of $C^{-1}$ are simply related to the elements of the mutational effect vector $\vec\Delta$.
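This mean-field relation can also be checked numerically by exact enumeration of a toy model (a sketch; the system sizes, seed, and the weak selection strength are illustrative assumptions, not values used in the paper):

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
L, q = 4, 3                       # toy system: q**L = 81 sequences
kappa, T_star = 0.01, 0.5         # weak selection, where mean field is accurate
Delta = rng.normal(0, 1, size=(L, q))
Delta[:, q - 1] = 0.0             # state q is the reference state

# Exact Boltzmann distribution over all sequences
seqs = np.array(list(itertools.product(range(q), repeat=L)))
T = Delta[np.arange(L), seqs].sum(axis=1)          # additive trait
p = np.exp(-0.5 * kappa * (T - T_star) ** 2)
p /= p.sum()

# One- and two-site marginals restricted to the q-1 non-reference states
X = (seqs[:, :, None] == np.arange(q - 1)).astype(float).reshape(len(p), -1)
P1 = p @ X
C = X.T @ (p[:, None] * X) - np.outer(P1, P1)
Cinv = np.linalg.inv(C)

# Mean-field prediction for the off-diagonal blocks: kappa * Delta_k * Delta_l
v = Delta[:, :q - 1].ravel()
pred = kappa * np.outer(v, v)
off = ~np.kron(np.eye(L), np.ones((q - 1, q - 1))).astype(bool)
rel_err = np.linalg.norm((Cinv - pred)[off]) / np.linalg.norm(pred[off])
```

For weak selection, the off-diagonal blocks of the exact inverse covariance agree closely with the prediction; as expected for a first-order expansion, the agreement degrades as $\kappa$ grows.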
In this section, we first describe our comparison of ICOD to SCA for single selection, and detail our test of ICOD for double selection, using synthetic binary sequences. Next, we confirm the robustness of the ICOD method to different forms of selection, then show how ICOD can be extended to sequences with more than two states per site, and finally demonstrate its robustness to gauge choice and pseudocounts. \subsection{Robustness of ICOD to selection bias, selection strength, and multiple selections} To quantify the performance of ICOD and to compare it to SCA over a range of selection biases, we focused on binary sequences. To obtain the average curve for single selections in Fig.~3(a), we first generated 100 distinct synthetic $\vec{\Delta}$s, one for each sector size from $n=1$ to 100, where sector sites are defined as those with large mutational effects. To this end, the mutational effects of the sector sites and the non-sector sites were sampled, respectively, from zero-mean Gaussian distributions with standard deviations 20 and 1. For each sector size and each selection bias, we generated a sequence ensemble of 50,000 random sequences and weighted each sequence according to the distribution \begin{equation} P(\vec{S})=\frac{\exp(w(\vec{S}))}{ \sum_{\vec{S}} \exp(w(\vec{S}))}\,, \label{eq:distrb} \end{equation} where $w(\vec{S})$ is the fitness of sequence $\vec{S}$, given by the single selection formula Eq.~\ref{eq:fitness}. In general, we wish to employ a selection window whose width in energy (or any other selected variable) scales with the overall width of the unselected distribution. Hence, as mentioned in the main text, we perform all selections with a strength \begin{equation} \kappa=\frac{10}{\sum_{l}\Delta_l^2}\,. \label{eq:kappa} \end{equation} Then, for each method (ICOD or SCA), performance as measured by Recovery of $\vec{\Delta}$ by the first eigenvector was averaged over the 100 different sector sizes.
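This selective-weighting step can be sketched as follows (a minimal illustration with a reduced ensemble size; all numerical choices, including the seed, are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_sector, M = 100, 20, 20_000       # reduced ensemble size for speed
# Synthetic mutational effects: large in the sector, small elsewhere
Delta = np.concatenate([rng.normal(0, 20, n_sector), rng.normal(0, 1, L - n_sector)])
kappa = 10.0 / np.sum(Delta ** 2)      # selection strength, Eq. (eq:kappa)

S = rng.integers(0, 2, size=(M, L))    # unselected random binary sequences
T = S @ Delta                          # additive trait of each sequence
T_star = T.mean()                      # relative selection bias gamma = 0

w = -0.5 * kappa * (T - T_star) ** 2   # quadratic fitness
weights = np.exp(w)
weights /= weights.sum()               # normalized Boltzmann weights, Eq. (eq:distrb)

S_mean = weights @ S                   # selected mutant fraction <S_l>_* at each site
```

Downstream quantities (covariances, and hence ICOD and SCA matrices) are then computed as weighted averages over this ensemble.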
As an aside, Fig.~\ref{fig:Supp_effect_kappa} demonstrates that the performance of ICOD and SCA is robust to varying selection strength $\kappa$, as long as $\kappa \sum_l\Delta_l^2\gg 1$. (A small value of $\kappa \sum_l\Delta_l^2$ implies weak selection, where most random sequences pass selection and the resulting ensemble does not significantly reflect the constraint.) \begin{figure}[htb] \centering \includegraphics[width=5cm]{Supp_effect_kappa} \vspace{0.2cm} \caption{ {\bf Impact of selection strength $\kappa$ on the performance of ICOD and SCA on synthetic data.} Results obtained on binary synthetic sequences with $L=100$, selected using a synthetic $\vec{\Delta}$ where the first 20 and the other 80 mutational effects are, respectively, sampled from Gaussian distributions with standard deviations of 20 and 1. Selection is performed on ensembles of 50,000 random sequences, and each data point is obtained by averaging over 100 realizations. The relative bias is $\gamma=0.5$. } \label{fig:Supp_effect_kappa} \end{figure} Similarly, to obtain the average curve for double selection in Fig.~3(b), we generated 100 distinct pairs of $\vec{\Delta}_1$s and $\vec{\Delta}_2$s, one pair for each sector size from $n=1$ to 100. Specifically, the sector for $\vec{\Delta}_1$ consisted of the first $n$ sites, while the sector for $\vec{\Delta}_2$ corresponded to the last $n$ sites, so that the two sectors overlap for $n>50$. As for the single selections, the mutational effects of the sector sites and the non-sector sites were sampled, respectively, from Gaussian distributions with standard deviations 20 and 1. As an example, two synthetic $\vec{\Delta}$s for $n=20$ are shown in Fig.~\ref{fig:synthetic_delta_explain}. Again, for each sector size and each selection bias, we generated an ensemble of 50,000 random sequences and weighted them according to Eq.~\ref{eq:distrb} along with the double selection formula Eq.~\ref{eq:landscape-1} (i.e. Eq.~\ref{eq:newfitness} in the main text).
The performance of ICOD as measured by Recovery of $\vec{\Delta}_1$ and $\vec{\Delta}_2$ by the first two eigenvectors was averaged over the 100 different sector sizes. In Fig.~3(b) we also reported the performance of ICOD for two non-overlapping sectors, each with 20 sites, and for two fully overlapping sectors, each with 100 sites. We followed a protocol similar to that described above, but in each of these cases, we averaged Recovery over 100 realizations using distinct pairs of $\vec{\Delta}_1$ and $\vec{\Delta}_2$. \begin{figure}[htb] \centering \includegraphics[width=9cm]{synthetic_delta_explain} \vspace{0.2cm} \caption{{\bf Example of two synthetic $\vec{\Delta}$s generated for the double selection in Fig.~3(b).} (a) Generation of $\vec{\Delta}_1$, where the mutational effects for the first $20$ sites and for the last $80$ sites are sampled, respectively, from zero-mean Gaussian distributions with a standard deviation of 20 and 1. (b) Generation of $\vec{\Delta}_2$, where the mutational effects for the last $20$ sites and for the first $80$ sites are sampled, respectively, from zero-mean Gaussian distributions with a standard deviation of 20 and 1. } \label{fig:synthetic_delta_explain} \end{figure} Unless otherwise stated, data for other plots were generated in the same way, i.e. using 50,000 random sequences, sequence length $L=100$, selection strength $\kappa$ in Eq.~\ref{eq:kappa}, and standard deviation 20/1 of $\Delta_l$ in the sector/non-sector sites. Note that to improve Recovery in the case of double selection, we applied Independent Component Analysis (ICA)~\cite{Hyvarinen, Hansen01, Rivoire16} to the first two eigenvectors in order to disentangle the contributions coming from the two constraints. In general, we expect that the first $N$ eigenvectors of the ICOD matrix $\tilde{C}^{-1}$ will report $N$ constraints. However, each of these $N$ eigenvectors is likely to include a mixture of contributions from different constraints. 
Applying ICA to the first $N$ eigenvectors to recover the individual constraints amounts to assuming that all the constraints are statistically independent. As an example, in Fig.~\ref{fig:two-sector-fig3}, we consider the case of two selections targeting a different set of sites and with different selection windows (one biased, one non-biased). In this case, ICOD plus ICA yields excellent Recovery (Fig.~\ref{fig:two-sector-fig3}). Without ICA, the results are noticeably worse (Fig.~\ref{fig:beforeICA-inverse}). Moreover, Fig.~3(b) demonstrates that ICOD plus ICA can achieve a high Recovery for a broad range of overlaps between two sectors. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{fig_3_two_sectors} \vspace{0.2cm} \caption{{\bf ICOD method for simultaneous selection on two traits.} (a) Upper panels: Components at each site $l$ of two synthetically generated mutational-effect vectors, with insets showing biased selection around $T_1^*$ for $\vec{\Delta}_1$ and neutral selection around $T_2^*$ for $\vec{\Delta}_2$. Lower panel: average mutant fraction $\langle S_l\rangle_*$ at site $l$ after selection on both traits. (b) Performance of ICOD method. Recovery of $\vec{\Delta}_1$ and $\vec{\Delta}_2$ for all eigenvectors (upper) and corresponding eigenvalues (lower). The gray dashed line indicates the random expectation of Recovery (Eq.~\ref{eq:random-Recovery}). } \label{fig:two-sector-fig3} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{Supp_before_ICA_inverse} \vspace{0.2cm} \caption{{\bf Performance of ICOD for the two-sector case in Fig.~\ref{fig:two-sector-fig3}, without applying ICA.} } \label{fig:beforeICA-inverse} \end{figure} In Fig.~3(b), one observes a slight decrease of performance of ICOD plus ICA for double selection with overlapping sectors. Does this arise from increasing sector size or from increasing overlap? 
As expected from Eqs.~\ref{eq:recovery-measure} and~\ref{eq:inverse-PCA-q=2}, Fig.~\ref{fig:Supp_overlap_sites}(a) shows that Recovery does not fall off with increased sector size. Thus, we tested whether larger sector overlaps could reduce Recovery. Fig.~\ref{fig:Supp_overlap_sites}(b) shows that this is indeed the case for sequence ensembles subject to two selections, each with a fixed sector size of 20, but with different numbers of overlapping sites. However, the reduction of Recovery is quite modest, as even for 100\% overlap, Recovery remains above 0.9. It is interesting to note that, independent of sector size and overlap, Recovery decreases faster for double selection than for single selection at large relative biases (see Figs.~3 and~\ref{fig:Supp_overlap_sites}). \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{Supp_overlap_sites} \vspace{0.2cm} \caption{ {\bf Performance of ICOD for different sector sizes and sector overlaps.} (a) Selection on a single trait with varying sector size. Recovery is shown as a function of relative selection bias $\gamma\equiv (T^*-\langle T\rangle)/\sqrt{\langle (T-\langle T\rangle)^2 \rangle}$ for sectors of size 1, 10, 20, 40, 60, 80, and 100 out of 100 sequence sites ({\it cf.} Fig.~3(a)). Recovery is almost perfect for sectors of size larger than 10, but is substantially lower for sector size 1, which violates the criterion $\Delta_l\ll \sqrt{\sum_{l'}\Delta_{l'}^2}$. (b) Simultaneous selection on two traits with different degrees of sector overlap. For each selection, the sector size is 20 out of 100 sequence sites, and the overlap varies from 0 to 20 sites. The average Recovery for $\vec{\Delta}_1$ and $\vec{\Delta}_2$ is shown as a function of relative selection bias. The data in (b) is averaged over 20 realizations of $\vec \Delta$s.
} \label{fig:Supp_overlap_sites} \end{figure} \subsection{Robustness of functional sectors to different forms of selection} To assess the robustness of functional sectors to forms of selection other than the simple Gaussian selection window of Eqs.~\ref{eq:distr}-\ref{eq:quadr}, we generated ensembles of 50,000 random binary sequences as above, and used synthetically generated mutational effects, with 20 sector sites out of $L=100$ total sites. As before, the mutational effects of the sector sites and the non-sector sites were sampled, respectively, from zero-mean Gaussian distributions with standard deviations 20 and 1. We first addressed selection for sequences with an additive trait $T$ above a threshold $T_t$. We thus considered the selected ensembles of sequences such that the value of the trait $T(\vec{S})=\vec{S}\cdot\vec{\Delta}$ is larger than a threshold $T_t$, and we varied this threshold. Fig.~\ref{Fig4} in the main text demonstrates that the corresponding sectors are identified by ICOD just as well as those resulting from our initial Gaussian selection window. We also successfully applied ICOD to various other forms of selection. In Fig.~\ref{fig:quartic_selection}, we used the quartic fitness function: \begin{equation} w(\vec{S})=-\frac{\kappa_1}{4}\left(\sum_{l=1}^L \Delta_lS_l-T^* \right)^4\,, \label{eq:quartic} \end{equation} with $\kappa_1=(10/ \sum_l \Delta_l^2)^{2}$, instead of our initial quadratic fitness function (see Eq.~\ref{Hini} and Eq.~\ref{eq:quadr} in the main text), and we weighted sequences using the Boltzmann distribution in Eq.~\ref{eq:distr}. Finally, in Fig.~\ref{fig:square_selection}, we considered the selected ensembles of sequences such that the value of the trait $T(\vec{S})=\vec{S}\cdot\vec{\Delta}$ is between $T^*-\eta/2$ and $T^*+\eta/2$, where $\eta$ is the width of the selection window. In Fig.~\ref{fig:square_selection}, we used $\eta=0.6\sqrt{\sum_l\Delta_l^2}$. \begin{figure}[h!]
\centering \includegraphics[width=8.5cm]{quartic_selection} \vspace{0.2cm} \caption{{\bf Identification of sectors that result from quartic selection. } (a) Histogram of the additive trait $T(\vec{S})=\vec{S}\cdot\vec{\Delta}$ for randomly sampled sequences where 0 and 1 are equally likely at each site. Sequence length is $L=100$, mutational effects are synthetically generated with 20 sector sites. Sequences are selectively weighted using a quartic window (orange) around $T^*$. Selection is shown for $T^*=\langle T\rangle$, or equivalently $\gamma=0$, in terms of the relative selection bias $\gamma\equiv (T^*-\langle T\rangle)/\sqrt{\langle (T-\langle T\rangle)^2 \rangle}$. (b) Eigenvalues of the ICOD-modified inverse covariance matrix $\tilde{C}^{-1}$ (Eq.~\ref{eq:ICODmatrix}) of the selected sequences for $\gamma=0$. (c) Recovery of $\vec{\Delta}$ for all eigenvectors of $\tilde{C}^{-1}$ for $\gamma=0$. Gray dashed line: random expectation of Recovery. (d) Recovery of $\vec{\Delta}$ for ICOD and for SCA as functions of the relative selection bias $\gamma$. The data in (d) is averaged over 100 realizations of $\vec \Delta$. } \label{fig:quartic_selection} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=8.5cm]{square_selection} \vspace{0.2cm} \caption{{\bf Identification of sectors that result from rectangular-window selection. } (a) Histogram of the additive trait $T(\vec{S})=\vec{S}\cdot\vec{\Delta}$ for randomly sampled sequences where 0 and 1 are equally likely at each site. Sequence length is $L=100$, mutational effects are synthetically generated with 20 sector sites. Sequences are selected if they have a trait value $T^*-\eta/2<T(\vec{S})<T^*+\eta/2$ (orange shaded region). Selection is shown for $T^*=\langle T\rangle$, or equivalently $\gamma=0$, in terms of the relative selection bias $\gamma\equiv (T^*-\langle T\rangle)/\sqrt{\langle (T-\langle T\rangle)^2 \rangle}$. 
(b) Eigenvalues of the ICOD-modified inverse covariance matrix $\tilde{C}^{-1}$ (Eq.~\ref{eq:ICODmatrix}) of the selected sequences for $\gamma=0$. (c) Recovery of $\vec{\Delta}$ for all eigenvectors of $\tilde{C}^{-1}$ for $\gamma=0$. Gray dashed line: random expectation of Recovery. (d) Recovery of $\vec{\Delta}$ for ICOD and for SCA as functions of the relative selection threshold $\gamma$. The data in (d) is averaged over 100 realizations of $\vec \Delta$. } \label{fig:square_selection} \end{figure} These results confirm the robustness of our approach to different plausible forms of selection. \clearpage \subsection{Multiple states per site and alternative gauge choice} \label{secMulti} In Section~\ref{Sec:q-state} above, we described how to generalize from binary sequences to sequences with $q$ possible states at each site. Correspondingly, we now generalize the ICOD method to higher values of $q$. Since we are interested in extracting the single-site mutational effects $\Delta_l(\alpha_l)$ with respect to a reference state at each site, we can simply set to zero the diagonal blocks of $C^{-1}$ in Eq.~\ref{invcorrPottsMulti}, yielding the modified inverse covariance matrix \begin{equation} \tilde{C}^{-1}_{kl}(\alpha_k,\alpha_l)=(1-\delta_{kl})\sum_{i=1}^N \kappa_i \Delta_{i,k}(\alpha_k)\Delta_{i,l}(\alpha_l)\,, \end{equation} for the case of multiple selections, or more simply for a single selection \begin{equation} \tilde{C}^{-1}_{kl}(\alpha_k,\alpha_l)=(1-\delta_{lk})\,\kappa \Delta_{k}(\alpha_k)\Delta_l(\alpha_l). \label{ICOD_Potts} \end{equation} This equation generalizes Eq.~\ref{eq:ICODmatrix} obtained for $q=2$ in the main text. As in that case, the first eigenvector of $\tilde{C}^{-1}$ (associated with the largest eigenvalue) should accurately report the single-site mutational effects $\Delta_k(\alpha_k)$. 
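As a consistency check, the following sketch builds the idealized ICOD matrix of Eq.~\ref{ICOD_Potts} for a toy $q$-state system and verifies that its first eigenvector yields a high Recovery of $\vec\Delta$ (sizes, seed, and $\kappa$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
L, q, n_sector = 40, 4, 15            # toy sizes
kappa = 0.01                          # overall scale (irrelevant for eigenvectors)

# Mutational effects Delta_l(alpha) for the q-1 non-reference states
Delta = rng.normal(0, 1, size=(L, q - 1))
Delta[:n_sector] = rng.normal(0, 20, size=(n_sector, q - 1))
v = Delta.ravel()

# Idealized inverse covariance with off-diagonal blocks kappa*Delta_k*Delta_l,
# followed by the ICOD step: zero out the (q-1)x(q-1) diagonal blocks
M = kappa * np.outer(v, v)
for l in range(L):
    s = slice(l * (q - 1), (l + 1) * (q - 1))
    M[s, s] = 0.0

eigval, eigvec = np.linalg.eigh(M)    # eigenvalues in ascending order
nu = eigvec[:, -1]                    # eigenvector of the largest eigenvalue

# Recovery of the mutational effects
recovery = np.sum(np.abs(nu * v)) / (np.linalg.norm(nu) * np.linalg.norm(v))
```

In this idealized setting the largest eigenvalue is well separated, and its eigenvector reproduces $\vec\Delta$ up to a mild per-site rescaling induced by the removed diagonal blocks, so Recovery is close to 1.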
Indeed, Fig.~\ref{fig:protein-data-no-pseudocount} in the main text shows that this generalized version of ICOD performs very well on synthetic data generated for the case $q=21$ relevant to real protein sequences. Note that in the reference-sequence gauge, Recovery generalizes naturally to the $q$-state model as \begin{equation} \mathrm{Recovery}=\frac{\sum_{l,\alpha_l} |\nu_l(\alpha_l) \Delta_l(\alpha_l) |}{\sqrt{\sum_{l,\alpha_l} \nu_l(\alpha_l)^2 }\sqrt{\sum_{l,\alpha_l} \Delta_l(\alpha_l)^2 }}, \label{eq:qstaterecovery} \end{equation} where the sums over states $\alpha_l$ do not include the reference state at each site. \begin{figure}[htb] \centering \includegraphics[width=8cm]{Supp_protein_data_no_pseudocount_reference_gauge} \vspace{0.2cm} \caption{ {\bf Performance of ICOD on synthetic sequence data with $q=21$ possible states at each site.} (a) Mutational effects $\Delta_l(k)$ with respect to a reference sequence, chosen to be state $21$ at every site. The mutational effect at $q=21$ is not shown. Note that while mutational effects are initially generated from a Gaussian distribution, \emph{relative} mutational effects (calculated with respect to the reference sequence) can have a systematic bias at each site $l$. (b) Eigenvalues of the ICOD-modified inverse covariance matrix $\tilde C^{-1}$ defined in Eq.~\ref{ICOD_Potts}. (c) Recovery of $\vec{\Delta}$ (see Eq.~\ref{eq:qstaterecovery}). The green dashed line indicates the random expectation of Recovery (Eq.~\ref{eq:random-Recovery}). } \label{fig:protein-data-no-pseudocount} \end{figure} While the reference-sequence gauge is convenient and allows a clear interpretation of the mutational effects, other gauge choices are possible. For instance, in the DCA literature, the zero-sum (or Ising) gauge is often employed~\cite{Ekeberg13,Baldassi14}. 
In this gauge, the couplings satisfy \begin{equation} \sum_{\alpha}e_{ij}(\alpha,\beta)=\sum_{\beta}e_{ij}(\alpha,\beta)=0\,. \label{gauge2} \end{equation} Qualitatively, the gauge degree of freedom means that contributions to the Hamiltonian in Eq.~\ref{Potts1} can be shifted between the fields and the couplings~\cite{Weigt09}. In DCA, the focus is on identifying the dominant two-body interactions, so one does not want the couplings to include contributions that can be accounted for by the one-body fields~\cite{Ekeberg14}. The zero-sum gauge satisfies this condition because it minimizes the Frobenius norms of the couplings \begin{equation} \left\Vert e_{ij}\right\Vert=\sqrt{\sum_{\alpha,\beta=1}^q \left[e_{ij}(\alpha,\beta)\right]^2}\,. \label{Frob} \end{equation} Hence, the zero-sum gauge attributes the smallest possible fraction of the energy in Eq.~\ref{Potts1} to the couplings, and the largest possible fraction to the fields~\cite{Weigt09,Ekeberg13}. In order to transform to the zero-sum gauge defined in Eq.~\ref{gauge2}, each coupling $e_{ij}(\alpha,\beta)$ is replaced by \begin{equation} \tilde{e}_{ij}(\alpha,\beta)=e_{ij}(\alpha,\beta)-\langle e_{ij}(\zeta,\beta)\rangle_\zeta-\langle e_{ij}(\alpha,\eta)\rangle_\eta+\langle e_{ij}(\zeta,\eta)\rangle_{\zeta,\eta}\,, \label{gc} \end{equation} where $\langle .\rangle_\zeta$ denotes an average over $\zeta\in\{1,...,q\}$~\cite{Ekeberg13}.
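The gauge transformation of Eq.~\ref{gc} and the Frobenius norm of Eq.~\ref{Frob} can be sketched as follows (a toy coupling block with arbitrary random entries):

```python
import numpy as np

rng = np.random.default_rng(3)
q = 21
# A toy coupling block e_ij(alpha, beta), initially in the reference-sequence gauge
e = rng.normal(size=(q, q))
e[:, q - 1] = 0.0
e[q - 1, :] = 0.0

# Zero-sum gauge, Eq. (gc): subtract row and column means, add back the grand mean
e_zs = e - e.mean(axis=0, keepdims=True) - e.mean(axis=1, keepdims=True) + e.mean()

frob = np.sqrt(np.sum(e_zs ** 2))      # Frobenius norm, Eq. (Frob)
```

Both marginal sums of `e_zs` vanish, and since the transformation is an orthogonal projection onto the zero-sum subspace, the Frobenius norm of `e_zs` never exceeds that of the original block, illustrating the minimization property discussed above.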
Shifting from the reference-sequence gauge, where one state (in our derivations, state $q$) is taken as a reference at each site, to the zero-sum gauge requires the replacement \begin{equation} \tilde{\Delta}_{l}(\alpha)= \Delta_{l}(\alpha)-\frac{1}{q}\sum_{\beta=1}^q \Delta_{l}(\beta)\,. \label{gc_Delta} \end{equation} The new reference-state-free mutational effects satisfy $\sum_{\beta=1}^q \tilde{\Delta}_{l}(\beta)=0$, and the associated couplings $\tilde{e}_{lk}(\alpha_l,\alpha_k)=-\kappa \tilde{\Delta}_l(\alpha_l)\tilde{\Delta}_k(\alpha_k)$ (see Eq.~\ref{identify}) are related to the initial ones $e_{lk}(\alpha_l,\alpha_k)$ through Eq.~\ref{gc}. Importantly, these reference-state-free mutational effects can be used to assess the overall importance of mutations at each particular site in the sequence. To this end, let us introduce the Frobenius norm of the reference-state-free mutational effects: \begin{equation} ||\Delta_l||= \sqrt{\sum_{\beta=1}^q \left(\tilde{\Delta}_{l}(\beta)\right)^2}. \label{eq:fn_Delta} \end{equation} This quantity, which we refer to as the ``site significance'', measures the overall importance of mutational effects at site $l$. In order to assess site significances from an ensemble of sequences, without investigating the impact of each particular mutation at each site, one can apply the zero-sum gauge to the ICOD-modified inverse covariance matrix (see Eq.~\ref{ICOD_Potts}), and compute the Frobenius norm of each $20\times 20$ block associated with each pair of sites $(i,j)$ according to Eq.~\ref{Frob}. The first eigenvector of this compressed $L\times L$ matrix accurately reports the mutational significance of each site, as illustrated in Fig.~\ref{fig:Supp_protein_data_no_pseudocount_zero_sum_gauge}. Specifically, it yields a high Recovery of site significances as defined in Eq.~\ref{eq:fn_Delta} (see Fig.~\ref{fig:Supp_protein_data_no_pseudocount_zero_sum_gauge}(c)), and it successfully predicts the most important sites, i.e.
the sector sites, in our synthetic data (see Fig.~\ref{fig:Supp_protein_data_no_pseudocount_zero_sum_gauge}(d)). \newpage \begin{figure}[htb] \centering \includegraphics[width=9cm]{Supp_protein_data_no_pseudocount_zero_sum_gauge} \vspace{0.2cm} \caption{{\bf Assessing site significance for synthetic sequence data.} The same synthetic data as in Fig.~\ref{fig:protein-data-no-pseudocount} (with $q=21$ possible states at each site) is used. (a) Significance $||\Delta_l||$ of each site $l$, computed directly by applying Eqs.~\ref{gc_Delta} and~\ref{eq:fn_Delta} to the mutational effects $\Delta_l(k)$ shown in Fig.~\ref{fig:protein-data-no-pseudocount}(a). (b) Eigenvalues of the compressed ($L\times L$) ICOD-modified inverse covariance matrix, calculated by applying the zero-sum gauge to the ICOD-modified inverse covariance matrix (see Eq.~\ref{ICOD_Potts}), and by computing the Frobenius norm of each $20\times 20$ block associated with each pair of sites $(i,j)$ according to Eq.~\ref{Frob}. (c) Recovery of site significances $|| \vec{\Delta}||$ from each eigenvector of the compressed ICOD-modified inverse covariance matrix (see panel (a) and Eq.~\ref{eq:qstaterecovery}). (d) Estimated site significances computed from the first eigenvector $\vec{\nu}^{(1)}$ of the compressed ICOD-modified inverse covariance matrix. } \label{fig:Supp_protein_data_no_pseudocount_zero_sum_gauge} \end{figure} \subsection{Pseudocounts} As pseudocounts are often necessary to regularize real sequence data, and as a high fraction of pseudocounts is generally used in DCA, we consider here whether the ICOD method is robust to the addition of pseudocounts. Until now, we used only raw empirical frequencies obtained from sequence data. For instance, one-body frequencies were obtained by counting the number of sequences where a given state occurred at a given site and dividing by the total number $M$ of sequences in the ensemble.
Covariances were computed from the empirical single-site frequencies of occurrence of each state $\alpha$ at each site $i$, denoted by $f^e_i(\alpha)$, and the empirical two-site frequencies of occurrence of each ordered pair of states $(\alpha,\beta)$ at each ordered pair of sites $(i,j)$, denoted by $f^e_{ij}(\alpha,\beta)$. Specifically, we obtained the covariance matrix as $C_{ij}(\alpha,\beta)=f^e_{ij}(\alpha,\beta)-f^e_{i}(\alpha)f^e_{j}(\beta)$~\cite{Weigt09}. To avoid issues arising from limited sample size, such as states that never appear at some sites (which present mathematical difficulties, e.g. a non-invertible covariance matrix~\cite{morcos2011direct}), one can introduce pseudocounts via a parameter $\Lambda$~\cite{Weigt09,Procaccini11,Marks11,morcos2011direct}. The one-site frequencies $f_i$ then become \begin{equation} f_i(\alpha)=\frac{\Lambda}{q}+(1-\Lambda)f^e_i(\alpha)\,, \label{fi} \end{equation} where $q$ is the number of states per site. Similarly, the two-site frequencies $f_{ij}$ become \begin{align} f_{ij}(\alpha,\beta)&=\frac{\Lambda}{q^2}+(1-\Lambda)f^e_{ij}(\alpha,\beta)\textrm{ if }i\neq j\,, \label{fij}\\ f_{ii}(\alpha,\beta)&=\frac{\Lambda}{q} \delta_{\alpha\beta}+(1-\Lambda)f^e_{ii}(\alpha,\beta)= f_i(\alpha)\delta_{\alpha\beta}\,. \label{fii} \end{align} These pseudocount corrections are uniform (i.e. they have the same weight $1/q$ for all states), and their influence relative to the raw empirical frequencies can be tuned through the parameter $\Lambda$. In DCA, a high value of $\Lambda$ has been found to improve contact prediction: typically $\Lambda\approx 0.5$~\cite{morcos2011direct,Marks11,bitbol2016inferring}. Note that the correspondence of $\Lambda$ with the parameter $\lambda$ in Refs.~\cite{Procaccini11,Marks11,morcos2011direct} is obtained by setting $\Lambda=\lambda/(\lambda+M)$.
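These pseudocount corrections can be sketched as follows (toy sizes and a random toy alignment; `C_pc` denotes the resulting pseudocount-corrected covariances):

```python
import numpy as np

rng = np.random.default_rng(4)
L, q, M, Lam = 5, 3, 500, 0.5          # toy sizes and pseudocount weight
seqs = rng.integers(0, q, size=(M, L))
one_hot = (seqs[:, :, None] == np.arange(q)).astype(float)   # (M, L, q)

# Empirical one- and two-site frequencies
f1_e = one_hot.mean(axis=0)                                  # (L, q)
f2_e = np.einsum('mia,mjb->ijab', one_hot, one_hot) / M      # (L, L, q, q)

# Pseudocount corrections, Eqs. (fi), (fij), (fii)
f1 = Lam / q + (1 - Lam) * f1_e
f2 = Lam / q**2 + (1 - Lam) * f2_e
for i in range(L):
    f2[i, i] = np.diag(f1[i])          # same-site blocks stay diagonal

# Pseudocount-corrected covariances
C_pc = f2 - np.einsum('ia,jb->ijab', f1, f1)
```

Note that the corrected frequencies remain properly normalized for any $0\le\Lambda<1$.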
From these quantities, we define the pseudocount-corrected covariances \begin{equation} C'_{ij}(\alpha,\beta)=f_{ij}(\alpha,\beta)-f_i(\alpha)f_j(\beta)\,. \label{Cij} \end{equation} We show in Fig.~\ref{fig:protein-data-with-pseudocount} that adding pseudocounts as high as $\Lambda = 0.3$ still allows for accurate extraction of mutational effects (Recovery 0.96) and provides a reliable prediction of sector sites. \begin{figure}[htb] \centering \includegraphics[width=12cm]{Supp_protein_data_with_pseudocount} \vspace{0.2cm} \caption{ {\bf Effect of pseudocounts on ICOD performance on synthetic sequence data with $q=21$ possible states at each site.} The same synthetic data as in Figs.~\ref{fig:protein-data-no-pseudocount} and~\ref{fig:Supp_protein_data_no_pseudocount_zero_sum_gauge} is used, but here pseudocounts are employed, with weight $\Lambda=0.3$. (a-c) Similar analysis as in Fig.~\ref{fig:protein-data-no-pseudocount}: (a) Eigenvalues of the ICOD-modified inverse covariance matrix. (b) Recovery of $\vec \Delta$ from each eigenvector of the ICOD-modified inverse covariance matrix. (c) First eigenvector of the ICOD-modified inverse covariance matrix. (d-f) Similar analysis as in Fig.~\ref{fig:Supp_protein_data_no_pseudocount_zero_sum_gauge}: (d) Eigenvalues of the compressed ICOD-modified inverse covariance matrix. (e) Recovery of site significances $|| \vec{\Delta}||$ from each eigenvector of the compressed ICOD-modified inverse covariance matrix. (f) Estimated site significances computed from the first eigenvector of the compressed ICOD-modified inverse covariance matrix. } \label{fig:protein-data-with-pseudocount} \end{figure} \section{Performance of SCA} \subsection{Analytical estimates for $\langle S_l\rangle_*$ and $C_{ll'}$ for a single selection with binary sequences} Protein sectors were first discovered from sequence data using a PCA-based method called Statistical Coupling Analysis (SCA)~\cite{halabi2009protein,Rivoire16}.
Interestingly, in SCA, sectors are found from the eigenvectors associated with the largest eigenvalues of the SCA matrix, while in ICOD they are found from the leading eigenvectors of the (modified) inverse covariance matrix, which correspond to the smallest eigenvalues of the covariance matrix. This difference stems from the fact that SCA and ICOD do not start from the same matrix. For binary sequences, SCA uses the absolute value of a conservation-weighted covariance matrix, $\tilde{C}_{ll'}^{\mathrm{(SCA)}}=|\phi_l C_{ll'}\phi_{l'}|$ (see main text and Ref.~\cite{halabi2009protein}). When all amino-acid states are accounted for, SCA compresses each block of the conservation-weighted matrix corresponding to two sites to obtain one positive value, e.g. the Frobenius norm of the block~\cite{Rivoire16}. Conversely, ICOD employs the regular covariance matrix, suppressing the diagonal blocks of its inverse at the last step before diagonalization. To better understand the performance of SCA in recovering the site-dependent mutational effects associated with a selective constraint, it is helpful to have analytical estimates for the average mutant fraction $\langle S_l\rangle_*$ at each site $l$ and the covariance matrix $C_{ll'}$ for an ensemble of binary sequences obtained from a single selection using a vector of mutational effects $\vec{\Delta}$. To this end, we provide the following two ansatzes: \begin{equation} \langle S_l\rangle_*-\langle S_l\rangle\approx (T^*-\langle T\rangle) \frac{\Delta_l}{\sum_m \Delta_m^2}, \label{eq:S_ansatz} \end{equation} \begin{equation} C_{ll'} \approx \begin{cases} -\frac{\Delta_l\Delta_{l'}\sigma_l^2\sigma_{l'}^2}{\sum_m \Delta_m^2\sigma_m^2}, & l\neq l'\,,\\ \sigma_l^2, & l=l'\,, \end{cases} \label{eq:Corr_ansatz} \end{equation} where $\sigma_l^2=\langle S_l^2\rangle_* -\langle S_l\rangle_*^2=\langle S_l\rangle_* \left(1-\langle S_l\rangle_*\right)$ represents the variance of $S_l$.
Recall that $S_l\in\{0,1\}$, where 0 is the reference state and 1 the mutant state, and that $\langle \cdot\rangle_*$ denotes ensemble averages over the selectively weighted subset of sequences, while $\langle \cdot\rangle$ denotes averages over the unselected (unweighted) ensemble. Although we have not proven these two ansatzes, numerical tests (Fig.~\ref{fig:Covariance_structure}) have verified these two relations for ensembles generated from a $\vec{\Delta}$ with multiple sites of comparably large mutational effects so as not to be dominated by a single site, i.e., $\Delta_{l}/\sqrt{\sum_{l'} \Delta_{l'}^2}\ll 1$ for any $l$. As a counterexample, the $\vec{\Delta}$ from our elastic network model does not satisfy this condition. \begin{figure}[htb] \centering \includegraphics[width=10cm]{Covariance_structure} \vspace{0.2cm} \caption{ {\bf Numerical verification of the ansatzes in Eq.~\ref{eq:S_ansatz} and Eq.~\ref{eq:Corr_ansatz}.} We generate a sequence ensemble by considering four values of relative selection bias $\gamma\equiv (T^*-\langle T\rangle)/\sqrt{\langle (T-\langle T\rangle)^2 \rangle}= 0, 0.25, 0.5, 1$, and for each case we use a synthetic $\vec{\Delta}$ with a sector size of 20. (a) Numerically computed average bias of the mutant fractions $\langle S_l\rangle_*-\langle S_l\rangle$. Here, $\langle S_l\rangle=0.5$ for the unselected ensemble. (b) Numerically computed covariances $C_{ll'}$. The results in (a,b) compare well with the analytical predictions (orange lines), provided that $\Delta_{l}/\sqrt{\sum_{l'} \Delta_{l'}^2}\ll 1$ for any $l$. For each case, $10^6$ random sequences were generated to minimize noise from sampling. } \label{fig:Covariance_structure} \end{figure} \subsection{Analysis of the SCA method} \label{SCAsub} Here, we provide a detailed analysis of the SCA method from Refs.~\cite{halabi2009protein,Rivoire16}.
Following these references, the reweighting factor is chosen to be \begin{equation} \phi_l=\frac{\partial D\left(\langle S_l\rangle_* ,\,\langle S_l\rangle\right)}{\partial \langle S_l\rangle_*}, \label{phil} \end{equation} where, for each site $l$, $D\left(\langle S_l\rangle_* ,\,\langle S_l\rangle\right)$ is the Kullback-Leibler divergence between the distribution of mutant fractions for the selected sequences and the background distribution: \begin{equation} D\left(\langle S_l\rangle_*,\,\langle S_l\rangle\right)=\langle S_l\rangle_* \log\frac{\langle S_l\rangle_*}{\langle S_l\rangle}+\left(1-\langle S_l\rangle_*\right) \log\frac{1-\langle S_l\rangle_*}{1-\langle S_l\rangle}. \label{eq:KL} \end{equation} In our case, the background distribution is obtained from the unselected sequence ensemble, for which $\langle S_l\rangle=0.5$. Hence, we have \begin{equation} \phi_l= \log\left[\frac{\langle S_l\rangle_*\left(1-\langle S_l\rangle\right)}{\langle S_l\rangle \left(1-\langle S_l\rangle_*\right)}\right], \end{equation} as illustrated in Fig.~\ref{fig:reweighting-method}(a). In the regime of relatively weak conservation, i.e. when $\langle S_l\rangle$ is not close to 0 or 1, and $|\langle S_l\rangle_*-\langle S_l\rangle|\ll \langle S_l\rangle$, a first-order expansion yields \begin{equation} \phi_l\approx \frac{\langle S_l\rangle_*-\langle S_l\rangle}{\langle S_l\rangle(1-\langle S_l\rangle)}, \end{equation} as shown in Fig.~\ref{fig:reweighting-method}(b). Employing the ansatz~(\ref{eq:S_ansatz}) in this regime, we obtain \begin{equation} \phi_l\propto (T^*-\langle T\rangle)\Delta_l. \label{eq:phi-propto} \end{equation} This relation is verified in Fig.~\ref{fig:reweighting-method}(c) for a sequence ensemble generated with a synthetic $\Delta_l$. Hence, the SCA} reweighting factor carries information about $\Delta_l$ as long as $T^*\neq \langle T\rangle$. 
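The reweighting factor and its relation to the Kullback-Leibler divergence can be sketched numerically as follows (the function names are ours):

```python
import numpy as np

def kl(p_star, p):
    """Kullback-Leibler divergence of Eq. (eq:KL) for binary mutant fractions."""
    return p_star * np.log(p_star / p) + (1 - p_star) * np.log((1 - p_star) / (1 - p))

def phi(p_star, p=0.5):
    """SCA reweighting factor, Eq. (phil): derivative of the KL divergence."""
    return np.log(p_star * (1 - p) / (p * (1 - p_star)))
```

Near $p_* = p$, `phi` reduces to the first-order expansion $(p_*-p)/[p(1-p)]$ quoted above.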
In this regime, information about conservation (namely $\phi_l$) should thus be sufficient to recover mutational effects and sectors. This was indeed found to be the case for some real proteins with a single sector~\cite{Tesileanu15}. However, when the selection bias, $T^*-\langle T\rangle$, is small, random noise due to finite sampling will typically swamp this relationship. \begin{figure}[htb] \centering \includegraphics[width=14cm]{Supp_Rama} \vspace{0.2cm} \caption{ {\bf Underpinnings of Recovery of mutational effect vector $\vec{\Delta}$ by SCA.} (a) Kullback-Leibler divergence versus mutant fraction $\langle S\rangle_*$ for background mutant fraction $\langle S\rangle=0.5$. (b) Reweighting factor $\phi$ as a function of mutant fraction $\langle S\rangle_*$ for background mutant fraction $\langle S\rangle=0.5$. (c) Reweighting factor $\phi_l$ and synthetic $\Delta_l$ for an ensemble of sequences generated with a single selection at relative selection bias $\gamma=1$. $\vec{\Delta}$ was generated with the first 50 sites as sector sites, and 50,000 sequences were employed, as in most of our examples using ICOD (see above). (d-e) Performance of SCA and ICOD for this ensemble, respectively. In computing Recovery using SCA, we use the normalized vector $\sqrt{\nu_l^{(j)}}$ to predict $\vec\Delta$. The gray dashed lines in (d) and (e) indicate the random expectation of Recovery (Eq.~\ref{eq:random-Recovery}). } \label{fig:reweighting-method} \end{figure} \clearpage In Refs.~\cite{halabi2009protein,Rivoire16}, the first eigenvectors of the conservation-reweighted SCA covariance matrix, $\tilde{C}_{ll'}^{\mathrm{(SCA)}}=|\phi_l C_{ll'}\phi_{l'}|$, were used to find sectors from sequence data. How does the first eigenvector of $\tilde{C}^{\mathrm{(SCA)}}$ relate to the mutational effect vector $\vec \Delta$? 
Utilizing both Eq.~\ref{eq:Corr_ansatz} and Eq.~\ref{eq:phi-propto}, and assuming $T^*\neq \langle T\rangle$, we obtain \begin{equation} \tilde{C}_{ll'}^{\mathrm{(SCA)} }\propto \begin{cases} & \Delta_l^2\Delta_{l'}^2\sigma_l^2\sigma_{l'}^2,\quad l\neq l'\\ & \Delta_l^2 \sigma_l^2 ,\quad l=l'. \end{cases} \end{equation} Apart from the diagonal, the matrix is approximately proportional to the tensor product of $\Delta_l^2\sigma_l^2$ with itself. If we neglect the contribution from the diagonal elements of $\tilde{C}^{\mathrm{(SCA)}}$, the first eigenvector $\vec{\nu}^{(1)}$ satisfies \begin{equation} \nu_l^{(1)}\propto \Delta_l^2\sigma_l^2. \label{eq:SCA_eigenvector} \end{equation} Eq.~\ref{eq:SCA_eigenvector} explains why $\sqrt{\nu_l^{(1)}}$ carries information about $\Delta_l$. In Fig.~\ref{fig:reweighting-method}(d), Recovery using SCA (and Eq.~\ref{eq:recovery-measure} with $\sqrt{\nu_l^{(1)}}$ instead of $\nu_l^{(1)}$) is 0.97, which remains lower than Recovery using ICOD, which is 0.999 here. Besides, Fig.~\ref{fig:SCAsquareroot} illustrates that Recovery of $\vec \Delta$ by SCA is much better using $\sqrt{\nu_l^{(1)}}$ than $\nu_l^{(1)}$. \begin{figure}[htb] \centering \includegraphics[width=4.5cm]{SCA_sqrt_or_not} \vspace{0.2cm} \caption{ {\bf Recovery of $\vec{\Delta}$ from the first SCA eigenvector using $\vec{\nu}^{(1)}$ or $\sqrt{\vec{\nu}^{(1)}}$.} The sequence data are the same as used for the blue curves in Fig.~3(a). As suggested by Eq.~\ref{eq:SCA_eigenvector}, use of the square root of $\vec{\nu}^{(1)}$ significantly improves Recovery. } \label{fig:SCAsquareroot} \end{figure} \subsection{Comparison between ICOD and SCA} In the main text, we compared the performance of ICOD and SCA with respect to Recovery of mutational-effect vectors $\vec\Delta$ in synthetic data (see Fig.~3). We found that ICOD performs well over a broader range of relative biases $\gamma$ than SCA. 
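The contrast between the two pipelines can be sketched directly on the ansatz covariance of Eq.~\ref{eq:Corr_ansatz}. This is a simplified binary setting with constant per-site variance and a hypothetical synthetic sector, taking $\phi_l \propto \Delta_l$ as in Eq.~\ref{eq:phi-propto}:

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_sec = 50, 10
delta = np.zeros(L)
delta[:n_sec] = rng.uniform(0.5, 1.5, n_sec)   # one sector of strong-effect sites
sigma2 = np.full(L, 0.25)

# Covariance ansatz, Eq. (eq:Corr_ansatz)
C = -np.outer(delta * sigma2, delta * sigma2) / np.sum(delta**2 * sigma2)
C[np.diag_indices(L)] = sigma2

# SCA: top eigenvector of |phi_l C_ll' phi_l'| with phi_l proportional to Delta_l
phi = delta
sca = np.abs(phi[:, None] * C * phi[None, :])
nu_sca = np.abs(np.linalg.eigh(sca)[1][:, -1])

# ICOD (binary case): zero the diagonal of C^{-1}, then take the top eigenvector
icod = np.linalg.inv(C)
icod[np.diag_indices(L)] = 0.0
nu_icod = np.abs(np.linalg.eigh(icod)[1][:, -1])
```

Both eigenvectors single out the sector sites; consistently with Eq.~\ref{eq:SCA_eigenvector}, `nu_sca` tracks $\Delta_l^2\sigma_l^2$ (so its square root tracks $|\Delta_l|$), whereas `nu_icod` tracks $\Delta_l$ directly.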
The failure of SCA at biases close to zero can be explained by the fact that the conservation weights $\phi_l$ then vanish (see Eq.~\ref{eq:phi-propto}). A further example of the failure of SCA for non-biased selections is given by the case studied in Fig.~\ref{fig:two-sector-fig3}, where we considered two selections, a biased one associated to $\vec{\Delta}_1$ and a non-biased one associated to $\vec{\Delta}_2$. Fig.~\ref{fig:Rama-two-sectors} shows that SCA recovers $\vec{\Delta}_1$ well, but performs badly for $\vec{\Delta}_2$, while ICOD recovers both of them very well (see Fig.~\ref{fig:two-sector-fig3}). \begin{figure}[htb] \centering \includegraphics[width=13cm]{Supp_Rama_two_sectors} \vspace{0.2cm} \caption{{\bf Performance of SCA for the double selection from Fig.~\ref{fig:two-sector-fig3}.} (a) Eigenvalues. (b) Before applying ICA, the first eigenvector has high Recovery of $\vec{\Delta}_1$, but no eigenvector has substantial Recovery of $\vec{\Delta}_2$. This difference matches our observation that SCA performs well for selections of intermediate bias, but not for unbiased selections. (c) Applying ICA on the first two eigenvectors does not improve Recovery. } \label{fig:Rama-two-sectors} \end{figure} While the comparison of Recovery favors ICOD, SCA was originally used to identify sectors (in our model, sites with important mutational effects under a given selection) rather than to recover complete mutational effect vectors $\vec \Delta$. Hence, in Fig.~\ref{fig:SCA_ICOD}, we compare the ability of ICOD and SCA to predict the $n$ sites with the largest mutational effects. Note that this comparison is independent of whether we use $\vec{\nu}^{(1)}$ or $\sqrt{\vec{\nu}^{(1)}}$ as the predictor in SCA, since the square-root function is increasing and preserves order. Using this criterion, we again find that ICOD performs well over a broad range of relative biases $\gamma$, while SCA only works well for sequences selected under moderate biases. 
\begin{figure}[htb] \centering \includegraphics[width=10cm]{SCA_ICOD} \vspace{0.2cm} \caption{{\bf Comparison of sector-site identification by ICOD and SCA (see also Fig.~3).} We use the synthetic $\vec{\Delta}$ in (a) to selectively weight 5,000 random sequences at four relative bias values $\gamma\equiv (T^*-\langle T\rangle)/\sqrt{\langle (T-\langle T\rangle)^2 \rangle} = 0,1,2,3$ and test the ability of ICOD or SCA to correctly predict the sites with the $n$ largest mutational effects. (b) Magnitudes of mutational effects of $\vec{\Delta}$ by rank. (c,d) True Positive (TP) rates obtained by taking the first eigenvector $\vec{\nu}^{(1)}$ from either ICOD or SCA, ranking sites in descending order of $|\nu_l^{(1)}|$, and computing the fraction of the top $n$ sites in this predicted ordering that are also among the top $n$ sites of the actual ordering of mutational effect magnitudes $|\Delta_l|$. The effect of relative bias $\gamma$ on Recovery is shown in Fig.~3. (c) As expected, the prediction of ICOD is very good under all relative biases. (d) On the other hand, SCA does not perform well at the smallest or largest relative biases. } \label{fig:SCA_ICOD} \end{figure} \section{Performance of a method based on the generalized Hopfield model} As mentioned in the main text, we also compared ICOD with another PCA-based approach developed in Ref.~\cite{Cocco11}, which employs an inference method specific to the generalized Hopfield model.
For $L$ Ising spins ($s_l\in\{-1,1\}$ for $1\leq l \leq L$), the Hamiltonian of the generalized Hopfield model reads (see Eq.~\ref{eq:fitness} in Ref.~\cite{Cocco11}) \begin{equation} H(\vec{s})=-\sum_{l=1}^L h_l \,s_l-\frac{1}{2L}\sum_{i=1}^N\left(\sum_{l=1}^L\xi_{i,l}\,s_l\right)^2+\frac{1}{2L}\sum_{i=1}^{N'}\left(\sum_{l=1}^L\xi'_{i,l}\,s_l\right)^2\,, \label{Hop} \end{equation} where $h_l$ is the local field at site $l$, while $\vec{\xi}_i=(\xi_{i,1},\dots,\xi_{i,L})$ is an attractive pattern and $\vec{\xi'}_i=(\xi'_{i,1},\dots,\xi'_{i,L})$ is a repulsive pattern. Here there are $N$ attractive patterns and $N'$ repulsive ones. In our model, in the single-selection case, the fitness of a sequence $\vec{s}$ in the Ising representation reads (see above, Sec.~\ref{Sec_Ising}, Eq.~\ref{HIs}) \begin{equation} w(\vec{s})=-\frac{\kappa}{2}\left(\sum_{l=1}^L D_l s_l-\alpha \right)^2=-\frac{\kappa}{2}\left[\left(\sum_{l=1}^L D_l s_l\right)^2-2\alpha \sum_{l=1}^L D_l s_l +\alpha^2\right]\,, \label{HIs_b} \end{equation} with $D_l=\Delta_l/2$ and $\alpha=T^*-\sum_l D_l$. Recalling that fitnesses and Hamiltonians have opposite signs, a comparison of Eqs.~\ref{Hop} and~\ref{HIs_b} shows that $\vec{\Delta}$ plays the part of a repulsive pattern in the two-body coupling terms, with the exact correspondence given by $\vec{\xi'}=\vec{\Delta}\,\sqrt{\kappa L}/2$. Note that in our model the local fields are proportional to the components of $\vec{\Delta}$. Ref.~\cite{Cocco11} proposed a method to infer attractive and repulsive patterns from data generated using a generalized Hopfield model, Eq.~\ref{Hop}. We introduce the correlation matrix $G$, which is related to the covariance matrix $C$ through \begin{equation} G_{ll'}=\frac{C_{ll'}}{\tilde{\sigma}_l\tilde{\sigma}_{l'}}\,, \label{correlmat} \end{equation} where $\tilde{\sigma}_l^2=\langle s_l^2\rangle_* -\langle s_l\rangle_*^2=1 -\langle s_l\rangle_*^2$.
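Before turning to the inference step, the correspondence $\vec{\xi'}=\vec{\Delta}\,\sqrt{\kappa L}/2$ (i.e., $\xi'_l = D_l\sqrt{\kappa L}$) can be checked numerically: the difference between $-w$ and the repulsive-pattern term of Eq.~\ref{Hop} must reduce to a field term linear in $\vec{s}$ plus a constant. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
L, kappa, alpha = 5, 0.7, 1.3
D = rng.normal(size=L)                    # D_l = Delta_l / 2

def neg_w(s):
    """Minus the fitness of Eq. (HIs_b)."""
    return 0.5 * kappa * (D @ s - alpha) ** 2

xi = D * np.sqrt(kappa * L)               # candidate repulsive pattern xi'_l

def h_rep(s):
    """Repulsive-pattern term of the Hopfield Hamiltonian, Eq. (Hop)."""
    return (xi @ s) ** 2 / (2 * L)

def residual(s):
    # should be linear in s (local fields) plus a constant
    return neg_w(s) - h_rep(s)
```

A function of the form $f(\vec{s}) = \vec{a}\cdot\vec{s} + c$ satisfies $f(\vec{s}_1)+f(\vec{s}_2)=f(\vec{s}_1+\vec{s}_2)+f(\vec{0})$, which is what the check below exploits.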
Ref.~\cite{Cocco11} found, to lowest order, the following approximation for a single repulsive pattern $\vec{\xi}'$ (see Eq.~9 in Ref.~\cite{Cocco11}): \begin{equation} \xi'_l\approx\sqrt{L\left(\frac{1}{\lambda^{(L)}}-1\right)}\,\,\frac{\nu_l^{(L)}}{\tilde{\sigma}_l}, \label{eq:Cocco0} \end{equation} where $\lambda^{(L)}$ is the smallest (last) eigenvalue of the correlation matrix $G$ and $\nu_l^{(L)}$ is the associated eigenvector. This yields \begin{equation} \Delta_l\propto\frac{\nu_l^{(L)}}{\tilde{\sigma}_l}. \label{eq:Cocco} \end{equation}Inference of $\vec{\Delta}$ based on Eq.~\ref{eq:Cocco} is referred to as GHI (for Generalized Hopfield Inference) below. GHI performs very well for the sequence ensembles from the elastic network model used in Fig.~1 and Fig.~2 (Fig.~\ref{fig:Cocco_performance}). Importantly, just as for simple PCA and for ICOD (see main text), the top Recovery is obtained for the (modified) bottom eigenvector of the covariance matrix, consistently with $\vec{\Delta}$ being a repulsive pattern, but the large-eigenvalue modes also contain some information about $\vec{\Delta}$ (Fig.~\ref{fig:Cocco_performance}). \begin{figure}[h!] \centering \includegraphics[width=9cm]{Cocco_performance} \vspace{0.2cm} \caption{ {\bf Performance of GHI on sequence ensembles generated with our elastic-network $\vec{\Delta}$.} (a) Eigenvalues of $G$ and Recovery under mild selection bias, as in Fig.~1 in the main text. (b) Eigenvalues of $G$ and Recovery under extreme selection bias, as in Fig.~2 in the main text. The green dashed lines in (a,b) indicate the random expectation of Recovery (Eq.~\ref{eq:random-Recovery}). } \label{fig:Cocco_performance} \end{figure} In Fig.~\ref{fig:Cocco_sector_size}, we systematically compare all methods discussed in our work to recover $\vec{\Delta}$ from sequence data under various selection biases, using different sector sizes, for selectively weighted ensembles of 50,000 random sequences. 
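The GHI estimate of Eq.~\ref{eq:Cocco} can be sketched on the correlation structure implied by the binary covariance ansatz (a simplified setting with constant per-site variance and a hypothetical synthetic sector):

```python
import numpy as np

rng = np.random.default_rng(3)
L, n_sec = 50, 10
delta = np.zeros(L)
delta[:n_sec] = rng.uniform(0.5, 1.5, n_sec)
sigma = np.full(L, 0.5)                  # constant per-site standard deviation

# Correlation matrix implied by the covariance ansatz: G_ll' = C_ll'/(sigma_l sigma_l')
g = delta * sigma
G = -np.outer(g, g) / np.sum(g**2)
G[np.diag_indices(L)] = 1.0

nu_last = np.linalg.eigh(G)[1][:, 0]     # eigenvector of the smallest eigenvalue
delta_hat = nu_last / sigma              # Eq. (eq:Cocco): Delta_l prop. to nu_l / sigma_l
```

As expected for a repulsive pattern, the sector appears in the eigenvector associated with the smallest eigenvalue of $G$, and the reweighted eigenvector aligns with $\vec{\Delta}$ up to an overall sign.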
We focus on the case of a single selection and compare Recovery of $\vec{\Delta}$ according to: \setlist{nolistsep} \begin{itemize}[noitemsep] \item ICOD, using the first eigenvector of the modified inverse covariance matrix $\tilde{C}^{-1}$ (see main text, Eq.~\ref{eq:ICODmatrix}) \item PCA, using the last principal component of the data (last eigenvector of the covariance matrix, see main text) \item SCA, using the first eigenvector of the absolute value of a conservation-weighted covariance matrix, $\tilde{C}_{ll'}^{\mathrm{(SCA)}}=|\phi_l C_{ll'}\phi_{l'}|$ (see main text and Ref.~\cite{halabi2009protein}) \item GHI, using the reweighted last eigenvector of the correlation matrix (see Eqs.~\ref{correlmat} and~\ref{eq:Cocco}). \end{itemize} \noindent Overall, ICOD and GHI perform best. For small selection biases, all methods perform accurately, except SCA, which fails when the selection bias vanishes, as explained above. When the sector size is small compared to the sequence length $L$ (Fig.~\ref{fig:Cocco_sector_size} (a-d)), GHI performs slightly better than ICOD for relatively small selection biases (though Recovery remains $\gtrsim 95\%$ with ICOD). Conversely, GHI is significantly outperformed by ICOD for relatively large selection biases, and the performance of PCA and SCA falls off quite rapidly in this regime. The performance of ICOD, PCA, and GHI becomes similar when the sector size becomes comparable to the sequence length (Fig.~\ref{fig:Cocco_sector_size} (e, f)). \begin{figure}[htb] \centering \includegraphics[width=12cm]{Compare_Cocco_sector_size_50000} \vspace{0.2cm} \caption{ {\bf Comparing Recovery of different methods for various $\vec{\Delta}$s.} Here, GHI refers to inference based on Eq.~\ref{eq:Cocco}. Curves are obtained by averaging over 100 realizations, each for an ensemble of 50,000 random sequences. For synthetic $\vec{\Delta}$s, each realization corresponds to a new $\vec{\Delta}$.
} \label{fig:Cocco_sector_size} \end{figure} We further find that GHI is more sensitive to the size of the sequence ensemble than ICOD, although it becomes the most accurate for very large dataset sizes (see Fig.~\ref{fig:Cocco_sequence_size}). The performance of ICOD is quite robust to dataset size. Note that PCA outperforms other methods when the data size becomes very small (Fig.~\ref{fig:Cocco_sequence_size}, number of sequences~$=500$). \begin{figure}[htb] \centering \includegraphics[width=13cm]{Compare_Cocco_sequence_size} \vspace{0.2cm} \caption{ {\bf Effect of dataset size on Recovery of $\vec{\Delta}$.} Selectively reweighted ensembles of $5\times 10^2$, $5\times 10^3$, $5\times 10^4$, and $5\times 10^5$ random sequences are generated for the elastic-network $\vec{\Delta}$ and synthetic $\vec{\Delta}$s with sector sizes 1, 10, and 50. All results are averaged over 100 realizations, except those using $5\times 10^5$ sequences, where only 5 realizations were used. For synthetic $\vec{\Delta}$s, each realization employs a different $\vec{\Delta}$ with the same sector size. For the case of 500 sequences, some Recoveries were not computed at high biases due to numerical instabilities. } \label{fig:Cocco_sequence_size} \end{figure} \clearpage Overall, we find that GHI is very well suited to infer $\vec{\Delta}$ from very large synthetic datasets. However, ICOD is more robust to variation of dataset size and to selection bias, which should be an advantage in the application to real protein data. \section{Application of ICOD to a multiple sequence alignment of PDZ domains} Our general physical model for sectors provides insights into the statistical signatures of sectors in sequence data. 
In particular, we have found that the primary signature of physical sectors lies in the modes associated with the smallest eigenvalues of the covariance matrix, even though there is often additional signal from these sectors in the large eigenvalue modes, as studied more conventionally, e.g. in SCA. The success of ICOD on synthetic data demonstrates that information about sectors can indeed be extracted from the small eigenvalue modes of the covariance matrix. How well does ICOD perform on real sequence data? Here, we apply ICOD to an actual alignment of sequences of PDZ domains from the Pfam database (\texttt{https://pfam.xfam.org/}) containing 24,934 sequences of length $L=79$ (corresponding to sites 313-391 in the numbering in Fig.~2 of Ref.~\cite{mclaughlin2012spatial}). In Ref.~\cite{mclaughlin2012spatial}, sites important for the specific binding of PDZ to peptide ligands were identified experimentally via complete single-site mutagenesis. In particular, 20 sites showing particularly high mutational effects were deemed functionally significant~\cite{mclaughlin2012spatial}. It was further shown that 15 among the 20 sector amino acids found by SCA (i.e. 75\%) were also functionally significant sites. In order to compute the empirical covariance matrix of the data, we first removed sites with more than $15\%$ gaps (11 sites out of 79). To eliminate the confounding effects of very rare residues at particular sites, we used a pseudocount weight $\Lambda=0.02$. Next, we performed both SCA and ICOD using this empirical covariance matrix: \begin{itemize} \item For SCA, we computed the conservation reweighting factors as in Refs.~\cite{halabi2009protein,Rivoire16}, using the background frequency values from Ref.~\cite{halabi2009protein}. We compressed the conservation-reweighted covariance matrix using the Frobenius norm, and we focused on the first eigenvector of this reweighted and compressed covariance matrix in order to predict sector sites.
Finally, we took the square root of each component of this eigenvector to predict the mutational effect at each site (see above, Section~\ref{SCAsub}, and Ref.~\cite{Rivoire16}). \item For ICOD, we inverted the covariance matrix and set its diagonal blocks to zero, thus obtaining the ICOD-modified inverse covariance matrix (see Eq.~\ref{ICOD_Potts}). Next, we computed the Frobenius norm of each $20\times 20$ block associated to each pair of sites $(i,j)$ according to Eq.~\ref{Frob}. The magnitude of the $l$-th component of the first eigenvector $\vec{\nu}^{(1)}$ of this compressed $L\times L$ matrix, denoted by $||\nu_l^{(1)}||$, is the ICOD prediction of the overall mutational effect at site $l$ (see above, Section~\ref{secMulti}, especially Fig.~\ref{fig:protein-data-with-pseudocount}). Since mutational effects were experimentally measured with respect to the wild-type residues~\cite{mclaughlin2012spatial}, we used as reference the wild-type sequence of the PDZ domain employed in Fig.~\ref{Fig1} and retained this reference-sequence gauge to perform ICOD, thus allowing direct comparison to experiments. \end{itemize} { We then assessed the ability of SCA and ICOD to predict experimentally-measured mutational effects~\cite{mclaughlin2012spatial}. Specifically, we compared SCA and ICOD predictions to the overall mutational effects corresponding to the Frobenius norm of the experimentally-measured residue-specific mutational effects $\Delta_l(\alpha)$ with $\alpha\in\{1,\dots,20\}$: \begin{equation} ||\Delta_l||= \sqrt{\sum_{\alpha=1}^q \left(\Delta_{l}(\alpha)\right)^2}, \label{eq:fn_Delta_2} \end{equation} which is the counterpart in the reference-sequence gauge of the ``site significance'' introduced in the zero-sum gauge in Eq.~\ref{eq:fn_Delta}. The ability of SCA and ICOD to identify the sites with the experimentally most important mutational effects is shown in Fig.~5 in the main text. Here, we discuss the impact of parameters on these results.
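The block-Frobenius compression used in the ICOD step above can be sketched as follows, where `compress_blocks` is a hypothetical helper name and $q$ is the number of states per site ($q=20$ here):

```python
import numpy as np

def compress_blocks(M, L, q):
    """Frobenius norm of each q x q block of an (L*q) x (L*q) matrix -> L x L."""
    B = M.reshape(L, q, L, q)            # B[i, a, j, b] = M[i*q + a, j*q + b]
    return np.sqrt(np.einsum('iajb,iajb->ij', B, B))
```

For the PDZ analysis, `M` would be the ICOD-modified inverse covariance matrix, and the first eigenvector of the resulting $L\times L$ matrix gives the site-level predictions.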
Fig.~\ref{fig:Supp_Effect_Cut_gap} shows the effect of varying the cutoff for removal of sites with a large proportion of gaps. As illustrated in panel (a), sites with a fraction of gaps larger than a cutoff are discarded. Many of these sites are on the edges of the PDZ domain, and tend to be less conserved. Fig.~\ref{fig:Supp_Effect_Cut_gap}(b) shows that ICOD performance is robust to variations of this cutoff within a reasonable range. We have chosen a cutoff of 15\% in the rest of this analysis. \begin{figure}[htb] \centering \includegraphics[width=8cm]{Supp_Effect_Cut_gap} \vspace{0.2cm} \caption{{\bf Impact of varying the cutoff for removal of sites with a large fraction of gaps.} (a) Fraction of sequences that have a gap at each site. Sites with a fraction of gaps larger than the cutoff shown by the dashed line are discarded in the rest of our analysis. (b) Impact of varying the gap-fraction cutoff on the performance of ICOD. A pseudocount weight of $\Lambda=0.02$ is used. } \label{fig:Supp_Effect_Cut_gap} \end{figure} \newpage Fig.~\ref{fig:Supp_PseudoCount_ICOD_SCA} shows the effect of varying the pseudocount weight, both for ICOD and for SCA. Panels (a) and (b) show the TP rate, defined as the fraction of the top 20 predicted sites that are among the 20 sites with the largest experimentally-determined mutational effects. Panels (c) and (d) show the Pearson correlation between ICOD or SCA predictions of mutational effects and the corresponding experimental measurements. Both ICOD and SCA identify experimentally-important sites significantly better than random expectation over the whole range of pseudocounts shown. However, ICOD performs best with small but nonzero pseudocount weights (panels (a) and (c)), while the performance of SCA is more robust to changing the pseudocount weight (panels (b) and (d)). 
\begin{figure}[htb] \centering \includegraphics[width=8cm]{Supp_PseudoCount_ICOD_SCA} \vspace{0.2cm} \caption{ {\bf Impact of the pseudocount weight $\Lambda$ on the performance of ICOD and SCA}. (a) Fraction of the 20 top sites predicted by ICOD that are among the 20 sites with the largest experimentally-determined mutational effects (``TP rate'') versus pseudocount weight. The TP rate definition is the same as that shown in Fig.~\ref{Fig5}. Gray dashed line: random expectation for the TP rate, namely 20/68=0.29 (68 sites are left after removing those with a gap fraction larger than the cutoff). (b) Counterpart of (a) for SCA. (c) Pearson correlation: between mutational effects predicted by ICOD and those measured experimentally at each site (ICOD-Exp; data from Fig.~\ref{Fig5}(c) for $\Lambda=0.02$); between mutational effects predicted by ICOD and conservation scores $\phi_l$ (ICOD-Conserv; see Ref.~\cite{Rivoire16} and Eq.~\ref{phil} in the binary case); and between experimentally measured mutational effects and conservation scores $\phi_l$ (Exp-Conserv). (d) Counterpart of (c) for SCA. In all panels, a gap-fraction cutoff of $15\%$ is used.} \label{fig:Supp_PseudoCount_ICOD_SCA} \end{figure} Since residue conservation plays a very important part in the PDZ sector~\cite{Tesileanu15}, we compared predictions based simply on conservation to those of SCA and ICOD. We employed the conservation scores $\phi_l$ used in SCA~\cite{Rivoire16}, which are a generalization of Eq.~\ref{phil} to 21 states. Conservation alone identifies 70\% of the 20 sites with the largest experimentally-determined mutational effects, versus 85\% for ICOD (for $\Lambda=0.02$) and 75\% for SCA (see Fig.~\ref{Fig5}).
In addition, the Pearson correlation between conservation scores and experimentally-measured mutational effects is significant (see Fig.~\ref{fig:Supp_Similarity_Conservation_ICOD_SCA}(c)), even though it is smaller than that between ICOD or SCA scores and experimentally-measured mutational effects (see Fig.~\ref{fig:Supp_Similarity_Conservation_ICOD_SCA}(a-b)). In fact, both ICOD and SCA scores are significantly correlated with conservation scores (see Fig.~\ref{fig:Supp_Similarity_Conservation_ICOD_SCA} (d-e)). In the case of SCA, this is not surprising given that conservation scores are explicitly used to weight the covariance matrix. Interestingly, ICOD naturally identifies these conserved sites as being important. This correlation between ICOD and conservation highlights the ability of ICOD to identify functionally important amino acids in a principled way that relies only on covariance. } \begin{figure}[htb] \centering \includegraphics[width=14cm]{Supp_Similarity_Conservation_ICOD_SCA} \vspace{0.2cm} \caption{ {\bf Predicting experimentally-measured mutational effects using ICOD, SCA, or conservation.}\protect\\ (a) Experimentally-measured mutational effect versus mutational effect predicted by ICOD for each site of the PDZ sequence. (b) Experimentally-measured mutational effect versus mutational effect predicted by SCA for each site of the PDZ sequence. (c) Experimentally-measured mutational effect versus conservation score $\phi_l$ for each site of the PDZ sequence. In panels (a, b, c), to highlight the matches between the top 20 predictions and the top 20 experimentally important sites~\cite{mclaughlin2012spatial}, correct hits are shown in red, false negatives in blue, and false positives in green. (d) Conservation score $\phi_l$ versus mutational effect predicted by ICOD for each site of the PDZ sequence. (e) Conservation score $\phi_l$ versus mutational effect predicted by SCA for each site of the PDZ sequence.
In all panels, a pseudocount weight $\Lambda=0.02$ and a gap-fraction cutoff of $15\%$ were used. } \label{fig:Supp_Similarity_Conservation_ICOD_SCA} \end{figure}
\subsection{Notation and Problem Definition} Given the input text sequence $X = [\text{CLS}, x_1, x_2, \cdots, x_L, \text{SEP}]$ with $L$ tokens, a language model $f_{LM}$ produces the contextual word representation $f_{LM}(X) = [h_{\text{CLS}}, h_1, {h}_2, \cdots, {h}_L, h_{\text{SEP}}]$. For a specific downstream task, a header function $f_{H}$ further uses $f_{LM}(X)$ and generates the prediction as $f_H({h}_{\text{CLS}})$ for sequence classification or $f_H([h_{\text{CLS}}, {h}_1, {h}_2, \cdots, {h}_L, h_{\text{SEP}}])$ for token classification. \textit{The goal of our work} is to learn a better contextual word representation $f_{LM}(X)$ by leveraging definitions of the rare words in dictionaries (e.g., Wiktionary). Suppose $S = [s_1, \cdots, s_K]$ and $C = [c^{(1)}, \cdots, c^{(K)}]$ are the sets of rare words in the input text sequence $X$ and their definitions in the dictionary. When a rare word $s_i$ appears in the input text sequence, we fetch its definition from the dictionary as $c^{(i)} = [c_1^{(i)}, \cdots, c_{N_i}^{(i)}]$ with $N_i$ tokens, and append it to the end of the input text sequence. Thus, an input sequence $X$ with appended definitions of $K$ rare words can be written as: $\mathrm{input} = [\text{CLS}, x_1, x_2, ..., x_L, \text{SEP}^{(1)}, c_1^{(1)}, c_2^{(1)}, ..., c_{N_1}^{(1)}; ...; \text{SEP}^{(K)}, c_1^{(K)}, c_2^{(K)}, ..., c_{N_K}^{(K)}, \text{SEP}]$, and the corresponding contextual representation generated by the language model $f_{LM}$ is: $f_{LM}(\mathrm{input}) = [h_{\text{CLS}}, {h}_1, {h}_2, \cdots, {h}_L, h_{\text{SEP}}^{(1)}, {h}_1^{(1)}, \cdots, {h}_{N_1}^{(1)}; \cdots \cdots; h_{\text{SEP}}^{(K)}, {h}_1^{(K)}, \cdots, {h}_{N_K}^{(K)}, h_{\text{SEP}}]$. For a specific downstream task, a header function $f_{H}$ still uses $f_{LM}(\mathrm{input})$ to generate the prediction as $f_H({h}_{\text{CLS}})$ for sequence classification or $f_H([h_{\text{CLS}}, {h}_1, {h}_2, \cdots, {h}_L])$ for token classification.
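The input construction above can be sketched as follows (the token strings and the helper name are illustrative, not the actual tokenizer interface):

```python
def build_input(tokens, rare_defs):
    """Append dictionary definitions of the rare words to the input sequence.

    tokens:    the original input tokens x_1 ... x_L
    rare_defs: list of (rare_word, definition_tokens) pairs, one per rare word
    """
    seq = ["[CLS]"] + list(tokens)
    for _word, definition in rare_defs:
        seq += ["[SEP]"] + list(definition)   # SEP^(i) then c_1^(i) ... c_Ni^(i)
    return seq + ["[SEP]"]                    # final SEP token
```

The header function then reads only the representations of the first $L+1$ positions, so the appended definitions influence predictions solely through the contextualization inside $f_{LM}$.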
\subsection{Choosing the Rare Words} \label{sec:rare} There are different ways to choose the rare word set $S$ in a pre-training corpus. One way is to use a pre-defined absolute frequency value as the threshold. \cite{wu2021taking} used 500 as the threshold to divide frequent words and rare words, and maintained a fixed vocabulary of rare words during pre-training and fine-tuning. However, rare words can vary greatly across corpora. For example, rare words in the medical domain are very different from those in the general domain \citep{lee2020biobert}. Besides, keeping a large threshold for small downstream datasets makes the vocabulary of rare words too large. For example, only 51 words in the RTE dataset have a frequency of more than 500. Therefore, we propose to choose specialized rare words for each pre-training corpus and downstream task. Specifically, we ranked all words by frequency from smallest to largest, and added them to the rare word list one by one until the cumulative frequency of the added words reached 10\% of the total word frequency. Compared with \cite{wu2021taking}, which maintained a fixed vocabulary, our method can dynamically adjust the vocabulary of rare words, and obtain and represent their definitions from the dictionary in a plug-and-play manner. To fetch the definitions of rare words, we leveraged the largest online dictionary, i.e., Wiktionary, and collected a dump of Wiktionary\footnote{https://www.wiktionary.org/} which includes definitions of 999,614 concepts. \subsection{Preliminary: BERT Pre-training} We use the BERT~\citep{devlin2019bert} model as an example to introduce the basics of the model architecture and training objective of PLMs. BERT is developed on a multi-layer bidirectional Transformer~\citep{vaswani2017attention} encoder. The Transformer encoder is a stack of multiple identical layers, where each layer has two sub-layers: a self-attention sub-layer and a position-wise feed-forward sub-layer.
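The rare-word selection rule of Sec.~\ref{sec:rare} can be sketched as follows. We read the 10\% criterion as a cumulative count over the lowest-frequency words (an interpretation on our part), and the helper name is ours:

```python
from collections import Counter

def rare_words(corpus_tokens, budget=0.10):
    """Bottom-frequency vocabulary whose cumulative count stays within `budget`
    (here 10%) of the total number of tokens in the corpus."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    rare, cum = set(), 0
    for word, c in sorted(counts.items(), key=lambda kv: kv[1]):  # smallest first
        cum += c
        if cum > budget * total:
            break
        rare.add(word)
    return rare
```

Because the budget is relative to the corpus, the resulting vocabulary adapts automatically to each pre-training corpus or downstream dataset.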
The self-attention sub-layer produces outputs by calculating the scaled dot products of queries and keys as the coefficients of the values, i.e., \begin{equation} \text {Attention}(Q, K, V) = \text{Softmax}(\frac{QK^T}{\sqrt{d}})V. \end{equation} $Q$ (query), $K$ (key), and $V$ (value) are the hidden representations produced by the previous self-attention layer, and $d$ is the dimension of the hidden representations. Transformer also extends the aforementioned self-attention layer to a multi-head version in order to jointly attend to information from different representation subspaces. BERT uses the Transformer model as its backbone neural network architecture and trains the model parameters with the masked language modeling (MLM) objective on large text corpora. In the masked language modeling task, a random sample of the words in the input text sequence is selected. The selected positions are either replaced by the special token [MASK], replaced by randomly picked tokens, or left unchanged. The objective of masked language modeling is to predict the words at the masked positions correctly given the masked sentences. \begin{figure*}[t] \centering {\includegraphics[width=1.0\textwidth]{figures/framework-1.pdf}} \vspace{-0.25in} \caption{The overall architecture of Dict-BERT. The definitions of rare words are appended to the end of the input text. In addition to training with masked language modeling, Dict-BERT performs two novel self-supervised learning tasks: word-level mutual information maximization ($\S$\ref{sec:mmi}) and sentence-level definition discrimination ($\S$\ref{sec:dhp}). ``SARS'' is a negatively sampled rare word.} \vspace{-0.1in} \label{fig:framework} \end{figure*} \subsection{Dict-BERT: Language Model Pre-training with Dictionary} Dict-BERT is based on the BERT architecture, which can be initialized either randomly or from a pre-trained checkpoint with the same structure.
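The scaled dot-product attention above admits a minimal single-head NumPy sketch (shapes and names are illustrative):

```python
import numpy as np

def attention(Q, K, V):
    """Single-head scaled dot-product attention: Softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for stability
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)             # rows of A sum to 1
    return A @ V
```

Each output row is thus a convex combination of the value rows, with weights given by the softmax-normalized query-key similarities; the multi-head variant simply runs several such maps on projected subspaces and concatenates the results.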
It is worth noting that we slightly modified the type embeddings: the type embedding of the input text is set to $0$, and the type embedding of the dictionary definitions is set to $1$. Besides, we used absolute positional embeddings. We represent each pair of input text sequence and dictionary definitions as a tuple $(x, c)$. The semantics of a word in the input text depends on the current context, while the semantics of a word in the dictionary is standardized by linguistic experts. In order to better align the representations between them, we propose two novel pre-training tasks on word- and sentence-level alignment between the input text sequence and rare word definitions to enhance pre-trained language models with a dictionary. \subsubsection{Word-level Mutual Information Maximization.} \label{sec:mmi} Recently, there has been a revival of approaches inspired by the InfoMax principle~\citep{oord2018representation,tschannen2020mutual}: maximizing the mutual information (MI) between the input and its representation. MI measures the amount of information obtained about one random variable by observing another random variable. As the input text sequence and rare word definitions are obtained from different sources, in order to better align the representations, we propose to maximize the MI between a rare word $x_i$ in the input sequence and its well-defined meaning $c^{(i)}$ in the dictionary. Given the joint density $p(x_i, c^{(i)})$ and the marginal densities $p(x_i)$ and $p(c^{(i)})$, the MI is defined as the Kullback–Leibler (KL) divergence between the joint and the product of the marginals, \begin{equation} I(x_i; c^{(i)}) = D_{KL} (p(x_i, c^{(i)})||p(x_i)p(c^{(i)})) = \mathbbm{E}_{p(x_i, c^{(i)})}[\log \frac{p(x_i, c^{(i)})}{p(x_i)p(c^{(i)})}]. 
\end{equation} The intuition of maximizing mutual information between a rare word appeared in the input text sequence and its definitions in the dictionary is to encode the underlying shared information and align the semantic representation between the contextual meaning and well-defined meaning of a word. Nevertheless, estimating MI in high-dimensional spaces is a notoriously difficult task, and in practice one often maximizes a tractable lower bound on this quantity~\citep{poole2019variational}. Intuitively, if a classifier can accurately distinguish between samples drawn from the joint $p(x_i, c^{(i)})$ and those drawn from the product of marginals $p(x_i)p(c^{(i)})$, then $x_i$ and $c^{(i)}$ have a high mutual information. In order to approximate the mutual information, we adopted InfoNCE~\citep{oord2018representation}, which is one of the most commonly used estimators in the representation learning literature, defined as \begin{equation} I(x_i;c^{(i)}) \geq \mathbbm{E}[\frac{1}{K} \sum_{i=1}^{K} \log \frac{e^{f_{\text{MI}}(h_{i}, h^{(i)})}}{\frac{1}{K}\sum_{j=1}^{K} \mathbbm{1}_{[j \neq i]} e^{f_{\text{MI}}(h_{i}, h^{(j)})}}] \triangleq I_{NCE}(x_i; c^{(i)}), \label{eq:mmi-ob} \end{equation} where the expectation is over $K$ independent samples $\{(h_{i}, h^{(i)})\}^{K}_{i=1}$ from the joint distribution $p(x_i, c^{(i)})$ \citep{poole2019variational}. Intuitively, the critic function $f_{\text{MI}}(\cdot)$ measures the similarity (e.g., inner product) between two word representations. The model should assign high values to the positive pair $(h_{i}, h^{(i)})$, and low values to all negative pairs. We compute InfoNCE using Monte Carlo estimation by averaging over multiple batches of samples~\citep{chen2020simple}. By maximizing the mutual information between the encoded representations, we extract the underlying latent variables that the rare words in the input text sequence and their dictionary definitions have in common. 
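As a concrete illustration, the InfoNCE objective above can be computed in a few lines. The following is a minimal pure-Python sketch in which the critic $f_{\text{MI}}$ is taken to be an inner product over toy embedding vectors; the function names and the choice of inner-product critic are illustrative assumptions, not the model's actual critic.

```python
import math

def dot(u, v):
    """Inner-product critic over two embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def info_nce(pairs, f_mi):
    """Monte Carlo InfoNCE estimate over K aligned (word, definition) pairs.

    pairs: list of (h_i, h^{(i)}) embedding pairs drawn from the joint.
    f_mi:  critic assigning high values to positive pairs, low to negatives.
    """
    K = len(pairs)
    total = 0.0
    for i, (h_i, _) in enumerate(pairs):
        pos = math.exp(f_mi(h_i, pairs[i][1]))
        # Negatives: definitions of the *other* rare words in the batch.
        neg = sum(math.exp(f_mi(h_i, pairs[j][1])) for j in range(K) if j != i)
        total += math.log(pos / (neg / K))
    return total / K
```

When word and definition embeddings are well aligned, the estimate is higher than for mismatched pairs, mirroring the intuition that a good critic separates samples of the joint from samples of the product of marginals.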
\subsubsection{Sentence-level Definition Discrimination} \label{sec:dhp} Instead of only locally aligning the semantic representations, learning to differentiate between correct and polluted word definitions helps the language model capture global information about the input text and dictionary definitions. We denote the set of definitions of rare words in the input text as ${C}$. We then construct a set of ``polluted'' definitions by replacing each rare word in $C$, with probability 50\%, by a different word randomly sampled from the entire vocabulary together with its definition. Since the last-layer representation of the special token [SEP] is the fused representation of a word definition, we apply a multi-layer perceptron (MLP) as a binary classifier $f_{\text{MLP}}(\cdot)$ to predict whether the appended definition belongs to a rare word in the input text sequence ($y=1$) or is a polluted one ($y=0$). Therefore, the discriminative objective can be formally defined as follows, \begin{equation} \mathcal{L}_{\text{DD}} = - \mathbbm{E} \sum_{i=1}^{K} \log p(y|f_{\text{MLP}}(h_{\text{SEP}}^{(i)})). \end{equation} \subsubsection{Overall objective.} Now we present the overall training objective of Dict-BERT. To avoid catastrophic forgetting~\citep{mccloskey1989catastrophic} of general language understanding ability, we train the masked language modeling task together with the word-level mutual information maximization (MIM) and definition discrimination (DD) tasks. We denote by $\mathcal{L}_{\text{MIM}}$ the loss function of the MIM task, which is the negative of the expectation in Equation \ref{eq:mmi-ob}. Hence, the overall learning objective is formulated as: \begin{equation} \mathcal{L} = \mathcal{L}_{\text{MLM}}+ \lambda_1 \mathcal{L}_{\text{MIM}} + \lambda_2 \mathcal{L}_{\text{DD}} \end{equation} where $\lambda_1$, $\lambda_2$ are hyperparameters that control the importance of each task. 
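The definition-discrimination setup described in this subsection can be sketched as follows: with probability 50\% a rare word's definition is replaced by that of a randomly drawn vocabulary word, and a binary cross-entropy loss is computed over the classifier outputs $p(y=1)$ at the per-definition [SEP] positions. The helper names and the averaging over $K$ definitions are illustrative assumptions.

```python
import math
import random

def pollute_definitions(defs, vocab_defs, rng, p=0.5):
    """With probability p, replace a genuine definition with that of a
    randomly sampled vocabulary word; label genuine ones y=1, polluted y=0."""
    out, labels = [], []
    for d in defs:
        if rng.random() < p:
            out.append(rng.choice(vocab_defs))
            labels.append(0)
        else:
            out.append(d)
            labels.append(1)
    return out, labels

def dd_loss(probs, labels):
    """Averaged binary cross-entropy over the per-definition classifier
    outputs p(y=1 | h_SEP)."""
    return -sum(math.log(p if y == 1 else 1.0 - p)
                for p, y in zip(probs, labels)) / len(labels)
```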
\begin{wrapfigure}{R}{0.43\textwidth} \vspace{-0.3in} \begin{center} \includegraphics[width=0.4\textwidth]{figures/framework-3.pdf} \end{center} \vspace{-0.2in} \caption{An illustration of the knowledge-visible attention matrix. ``Def 1'' is the dictionary definition of the second word in the input text, and ``Def 2'' is the definition of the third word in the input text. A colored circle means token $i$ can attend to information from token $j$, while a white circle means no attention from token $i$ to token $j$. } \vspace{-0.2in} \label{fig:kvattn} \end{wrapfigure} \subsection{Dict-BERT: Fine-tuning with Knowledge-visible Attention} \label{sec:fine-tune} Most existing work uses the final hidden state of the first token (i.e., the [CLS] token) as the sequence representation~\citep{devlin2019bert,liu2019roberta,yang2019xlnet}. For a sequence classification task, a multi-layer perceptron $f_{H}$ takes the output of $f_{LM}$ as input and generates the prediction as $f_H({h}_{\text{CLS}})$. Notably, when fine-tuning a language model on downstream tasks, there can be many rare/unseen words in the dataset. Therefore, in the fine-tuning stage, when encountering a rare word in the input text, we append its definition to the end of the input text, just as we did in pre-training. However, the appended dictionary definitions may change the meaning of the original sentence, since the [CLS] token attends to information from both the input text and the dictionary descriptions. As pointed out in \cite{liu2020k} and \cite{xu2021does}, too much knowledge incorporation may divert the sentence from its original meaning by introducing a lot of noise. This is more likely to happen if there are multiple rare words in the input text. To address this issue, we adopt the visibility matrix \citep{liu2020k} to limit the impact of definitions on the original text. In BERT, an attention mask matrix is added to the self-attention weights before the $\mathrm{softmax}$. 
If token $j$ is not supposed to be visible to token $i$, we add a $-\infty$ value at position $(i, j)$ of the attention mask matrix. As shown in Figure \ref{fig:kvattn}, we modify the attention mask matrix such that a token $i$ can attend to another token $j$ only if: (1) both tokens belong to the input text sequence, or (2) both tokens belong to the definition of the same rare word, or (3) $i$ is a rare word in the input text and $j$ is from its definition. \subsection{Overall Setting} To show the wide adaptability of Dict-BERT, we conducted experiments on 16 NLP benchmark datasets. We use BERT~\citep{devlin2019bert} and RoBERTa~\citep{liu2019roberta} as the backbone pre-trained language models. First, we followed \cite{liu2019roberta} and \cite{wu2021taking} in using 8 natural language understanding tasks in GLUE, including CoLA, RTE, MRPC, STS, SST, QNLI, QQP, and MNLI. Second, we followed \cite{gururangan2020don} in using 8 specialized domain tasks, including Chemprot, RCT-20k, ACL-ARC, SciERC, HyperPartisan, AGNews, Helpfulness, and IMDB. \subsection{Rare Word Collection} Here, we briefly introduce the statistics of rare words in the BERT pre-training corpus: English Wikipedia and BookCorpus. By concatenating these two datasets, we obtained a corpus of roughly 16GB in size. The total number of unique words in the pre-training corpus is 504,812, of which 112,750 (22.33\%) words are defined as frequent words. In other words, the sum of the occurrences of these 112,750 words in the corpus accounts for 90\% of the occurrences of all words in the corpus. We look up definitions of the remaining 392,062 (77.67\%) words in Wiktionary, of which 252,581 (50.03\%) can be found. The average definition length is 9.57 words. \begin{table*}[t] \begin{center} \caption{Performance of different models on GLUE tasks. Each configuration is run five times with different random seeds, and the average of these five results on the validation set is reported in the table. 
\textit{We note that} our code is implemented on Huggingface Transformer~\citep{wolf2020transformers}. The performance of our implemented BERT is consistent with the official performance, but it is slightly lower than the performance reported by \cite{wu2021taking} (indicated by $\S$, where BERT $\S$ $>$ BERT (ours)). Since no open-source code is released by BERT-TNF~\citep{wu2021taking}, we reported the relative improvement ($\Delta$) of BERT-TNF and Dict-BERT compared with the original BERT.} \vspace{0.05in} \setlength{\tabcolsep}{1.3mm}{\scalebox{0.89}{\begin{tabular}{l||cc|cccccccc|cc} \toprule {\multirow{2}*{Methods}} & \multicolumn{2}{c|}{{Dict in}} & MNLI & QNLI & QQP & SST & CoLA & MRPC & RTE & STS-B & {\multirow{2}*{Avg}} & {\multirow{2}*{$\Delta$}} \\ \cmidrule{2-11} & PT & FT & Acc. & Acc. & Acc. & Acc. & Matthews & Acc. & Acc. & Pearson & & \\ \midrule BERT $\S$ & $\times$ & $\times$ & 85.00 & 91.50 & 91.20 & 93.30 & 58.30 & 88.30 & 69.00 & 88.50 & 83.10 & - \\ BERT-TNF $\S$ & $\surd$ & $\surd$ & 85.00 & 91.00 & 91.20 & 93.20 & 59.50 & 89.30 & 73.20 & 88.50 & 83.90 & +0.80 \\ \midrule BERT (ours) & $\times$ & $\times$ & 84.12 & 90.69 & 90.75 & 92.52 & 58.89 & 86.17 & 68.67 & 89.39 & 82.65 & - \\ Dict-BERT-F & $\times$ & $\surd$ & 84.19 & 90.94 & 90.68 & 92.59 & 59.16 & 85.75 & 68.10 & 88.72 & 82.51 & -0.14 \\ \midrule Dict-BERT-P & $\surd$ & $\times$ & 84.33 & 91.02 & 90.69 & 92.62 & 60.44 & 86.81 & \textbf{73.86} & \textbf{89.81} & 83.70 & +1.05 \\ ~$\vdash$ w/o MIM & $\surd$ & $\times$ & 84.24 & 90.79 & 90.24 & 92.22 & 60.14 & 87.03 & 73.79 & 89.67 & 83.52 & +0.87 \\ ~$\vdash$ w/o DD & $\surd$ & $\times$ & 84.18 & 90.54 & 90.30 & 92.39 & 61.49 & 86.49 & 71.89 & 89.60 & 83.36 & +0.71 \\ \midrule Dict-BERT-PF & $\surd$ & $\surd$ & \textbf{84.34} & \textbf{91.20} & \textbf{90.81} & \textbf{92.65} & \textbf{61.68} & \textbf{87.21} & 72.89 & 89.68 & \textbf{83.80} & \textbf{+1.15} \\ ~$\vdash$ w/o MIM & $\surd$ & $\surd$ & 84.22 & 90.67 & 90.66 & 92.53 & 
61.58 & 87.20 & 71.58 & 89.37 & 83.47 & +0.82 \\ ~$\vdash$ w/o DD & $\surd$ & $\surd$ & 84.16 & 90.21 & 90.78 & 92.39 & 61.14 & 87.19 & 71.84 & 89.24 & 83.37 & +0.72 \\ \bottomrule \end{tabular}}} \label{tab:dictbert-glue} \end{center} \vspace{-0.1in} \end{table*} \begin{table*}[t] \begin{center} \caption{Performance of different models on eight specialized domain datasets under the domain adaptive pre-training (DAPT) setting. Each configuration is run five times with different random seeds, and the average of these five results on the test set is calculated as the final performance.} \vspace{0.05in} \setlength{\tabcolsep}{1.2mm}{\scalebox{0.89}{\begin{tabular}{l||cccccccc|c} \toprule {\multirow{2}*{Methods}} & ChemProt & RCT & ACL-ARC & SciERC & HP & AGNews & Helpful & IMDB & {\multirow{2}*{Avg}} \\ \cmidrule{2-9} & Mi-F1 & Mi-F1 & Ma-F1 & Ma-F1 & Ma-F1 & Ma-F1 & Ma-F1 & Ma-F1 \\ \midrule BERT & 81.16 & 86.91 & 64.20 & 80.40 & 91.17 & 94.48 & 69.39 & 93.67 & 82.67 \\ BERT-DAPT & 83.10 & 86.85 & 71.45 & 81.62 & 93.52 & \textbf{94.58} & 70.73 & 94.78 & 84.57 \\ Dict-BERT-DAPT-PF & 83.49 & \textbf{87.46} & \textbf{74.18} & \textbf{83.01} & 94.70 & \textbf{94.58} & 70.04 & \textbf{94.80} & \textbf{85.25} \\ ~$\vdash$ w/o MIM & 83.33 & 87.38 & 72.26 & 82.70 & \textbf{94.72} & \textbf{94.58} & 70.33 & 94.73 & 85.06 \\ ~$\vdash$ w/o DD & \textbf{84.09} & 87.23 & 72.78 & 82.54 & 94.69 & 94.57 & \textbf{70.43} & 94.70 & 85.01 \\ \midrule RoBERTa & 82.03 & 87.14 & 66.20 & 79.55 & 90.15 & 94.43 & 68.35 & 95.16 & 83.15 \\ RoBERTa-DAPT & 84.02 & 87.62 & 73.56 & 81.85 & 90.22 & 94.51 & 69.06 & 95.18 & 84.51 \\ Dict-RoBERTa-PF & 84.41 & 87.42 & \textbf{75.33} & \textbf{82.53} & 92.51 & \textbf{94.80} & 70.57 & \textbf{95.51} & \textbf{85.32} \\ ~$\vdash$ w/o MIM & \textbf{84.49} & \textbf{87.51} & 74.83 & 81.58 & \textbf{93.27} & 94.75 & 70.67 & 95.40 & 85.31 \\ ~$\vdash$ w/o DD & 84.09 & 87.39 & 74.04 & 81.18 & 90.91 & 94.64 & \textbf{70.81} & \textbf{95.51} & 84.82 \\ 
\bottomrule \end{tabular}}} \label{tab:dictbert-dsp} \end{center} \vspace{-0.1in} \end{table*} \subsection{Pre-training Corpus and Tasks} \paragraph{Experiments on the GLUE benchmark:} The language model is first pre-trained on the general domain corpus, and then fine-tuned on the training set of each GLUE task. Following BERT~\citep{devlin2019bert}, we used English Wikipedia and BookCorpus as the pre-training corpus. We removed the next sentence prediction (NSP) objective, as suggested in RoBERTa~\citep{liu2019roberta}, and kept masked language modeling (MLM) as the objective for pre-training a vanilla BERT. \vspace{-0.1in} \paragraph{Experiments on specialized domain datasets:} The language model is not only pre-trained on the general domain corpus, but also pre-trained on a domain-specific corpus before being fine-tuned on domain-specific tasks. To realize this, we initialized our model with the checkpoint of pre-trained BERT/RoBERTa and continued pre-training on the domain-specific corpus \citep{gururangan2020don}. The four domains we focus on are biomedical (BIOMED) papers, computer science (CS) papers, news text from REALNEWS, and e-commerce reviews from AMAZON. \subsection{Baseline Methods} \textbf{Vanilla BERT/RoBERTa.} We use the off-the-shelf BERT-base and RoBERTa-base models and perform supervised fine-tuning of their parameters for each downstream task. \textbf{BERT-DAPT/RoBERTa-DAPT}. It continues pre-training BERT/RoBERTa on a large corpus of unlabeled domain-specific text (e.g., BioMed) using masked language modeling (MLM). \textbf{BERT-TNF}. It takes notes for rare words on the fly during pre-training to help the model understand them when they occur the next time. Specifically, it maintains a note dictionary and saves a rare word’s contextual information in it as notes when the rare word occurs in a sentence. \subsection{Ablation Settings} Dict-BERT-F/Dict-BERT-P indicate only using the dictionary in the fine-tuning/pre-training stage, respectively. 
Dict-BERT-PF indicates using the dictionary in both the pre-training and fine-tuning stages. Furthermore, Dict-BERT w/o MIM removes the word-level mutual information maximization task, and Dict-BERT w/o DD removes the sentence-level definition discrimination task during pre-training. \subsection{Evaluation Metrics} For GLUE, we followed RoBERTa~\citep{liu2019roberta} and reported Matthews correlation for CoLA, Pearson correlation for STS-B, and accuracy for the other tasks. For specialized tasks, we followed \cite{gururangan2020don} and reported Micro-F1 for Chemprot and RCT-20k, and Macro-F1 for the other tasks. For WNLaMPro, we followed \cite{schick2020rare} and reported MRR and Precision@K. \begin{figure*}[t] \centering \subfigure[Full attn. (FT) v.s. Knowledge attn. (KT)] {\includegraphics[width=0.45\textwidth]{figures/ex-attention.pdf}\label{fig:ktvsft}} \hspace{0.1in} \subfigure[Rare word ratios (5\% v.s. 10\% v.s. 15\%)] {\includegraphics[width=0.45\textwidth]{figures/ex-wordratio.pdf}\label{fig:ratio}} \vspace{-0.1in} \caption{Model performance on CoLA, RTE, STSB and MRPC with different variant settings.} \end{figure*} \subsection{Experimental Results} \paragraph{Only using the Dictionary during Fine-tuning.} As shown in Table \ref{tab:dictbert-glue}, comparing the vanilla BERT with Dict-BERT-F, we observe that only using the dictionary during fine-tuning cannot improve model performance on the GLUE benchmark. This indicates that the pre-trained language model cannot quickly learn rare word definitions from the dictionary to help improve downstream task performance. Furthermore, the pre-trained language model might even be misled by noisy explanations in the dictionary. Therefore, it is important to integrate the dictionary into the language model during pre-training so that the dictionary definitions can be better utilized. \paragraph{Dict-BERT v.s. 
Baseline Methods.} As shown in Table \ref{tab:dictbert-glue}, Dict-BERT-PF outperforms the vanilla BERT on the GLUE benchmark by +1.15\% accuracy on average. The BERT performance reported by \cite{wu2021taking} is higher than that of our implemented BERT; however, they do not provide open-source code for reproducing their experimental results. Though Dict-BERT-PF and BERT-TNF achieved very close performance on the GLUE benchmark, i.e., 83.80\% and 83.90\%, our Dict-BERT-PF achieved a greater relative improvement over its baseline than BERT-TNF did, i.e., +1.15\% v.s. +0.80\%. In addition, BERT-TNF keeps a fixed note dictionary, so it cannot add any unseen words to the note dictionary during fine-tuning. On the contrary, Dict-BERT can dynamically adjust the vocabulary of rare words, and obtain and represent their dictionary definitions in a plug-and-play manner. On RTE, Dict-BERT-P obtains the biggest performance improvement over the vanilla BERT. On another small-data sub-task, CoLA, Dict-BERT-PF also outperforms the baseline by a considerable margin. This indicates that when Dict-BERT is fine-tuned on a small downstream dataset, the improvement is particularly significant. Besides, as shown in Table \ref{tab:dictbert-dsp}, Dict-BERT-DAPT outperforms BERT-DAPT on the specialized domain datasets by +0.68\% F1 on average. The same observation can be made in the RoBERTa setting. \paragraph{Fine-tuning with Dictionary v.s. without Dictionary.} As shown in Table \ref{tab:dictbert-glue}, we compared model performance between using and not using the dictionary in fine-tuning. First, after pre-training the language model with the dictionary, even without using the dictionary in fine-tuning, the performance is greatly improved. 
This indicates that pre-training the language model with the dictionary can generally improve the language representation and provide a better initialization before fine-tuning the language model on the downstream tasks. Besides, we also observe that using the dictionary during fine-tuning performs slightly better on the GLUE benchmark. We hypothesize that the reason is the distribution discrepancy between the pre-training and fine-tuning data. \paragraph{Ablation Study.} As shown in Table \ref{tab:dictbert-glue} and Table \ref{tab:dictbert-dsp}, we conducted an ablation study on both the GLUE benchmark and the specialized domain datasets. First, both MIM and DD help the model learn knowledge from the dictionary and improve language model pre-training. Specifically, DD demonstrates a larger average improvement than MIM on the two benchmarks. The average improvements on the GLUE benchmark brought by DD and MIM are +0.63\% and +0.52\%, respectively. Second, combining MIM and DD achieves the highest performance on the GLUE benchmark, where the average gain enlarges to +1.15\%. For the specialized domain datasets, we have the same observations as above. \paragraph{Knowledge Attention v.s. Full Attention.} As mentioned in Section \ref{sec:fine-tune}, too much knowledge incorporation may divert the sentence from its original meaning by introducing some noise. This is more likely to happen if multiple rare words appear in the input text. Therefore, we compared model performance between knowledge attention and full attention. As shown in Figure \ref{fig:ktvsft}, we observed that knowledge attention consistently performs better than the full attention mechanism during the fine-tuning stage on the CoLA, RTE, STSB and MRPC datasets. Besides, Dict-BERT with full attention even underperformed the vanilla BERT without any dictionary definitions, which indicates that the appended descriptions from the dictionary may change the meaning of the original sentence. 
For example, STSB compares the similarity between two sentences. Using full attention includes the semantic meanings of definitions in the sentence representation, which might reduce the sentence similarity score and hurt model performance. \paragraph{Learning with Different Rare Word Proportions.} As mentioned in Section \ref{sec:rare}, we select rare words for each downstream task by truncating the tail of the word frequency distribution. In order to verify the impact of using different tail proportions of rare words on the downstream tasks, we selected three different ratios (i.e., 5\%, 10\%, and 15\%) and experimented on the CoLA, RTE, STSB and MRPC datasets. As shown in Figure \ref{fig:ratio}, on the CoLA and STSB datasets, the model achieves the best performance when using the 10\% of words at the tail as rare words. On the MRPC data, there is no significant difference in model performance when using different proportions of rare words. However, the performance on the RTE data demonstrates a trend: the more rare words selected, the worse the performance of the model. This is consistent with the conclusion on whether the dictionary is used in fine-tuning in Table \ref{tab:dictbert-glue}, i.e., not using the dictionary performs better than using it on the RTE dataset. Overall, the choice of tail proportion for rare words shows no consistent correlation with model performance on downstream tasks. 
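The tail-truncation selection of rare words described in Section \ref{sec:rare} (and varied here between 5\%, 10\%, and 15\%) can be sketched as follows; the tie-breaking at the cumulative threshold is an assumption for illustration.

```python
from collections import Counter

def rare_words(tokens, tail_fraction=0.10):
    """Rank words from least to most frequent and collect them until their
    cumulative occurrence count reaches `tail_fraction` of all tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    budget = tail_fraction * total
    rare, cum = set(), 0
    # Sort by (count, word) so ties are broken deterministically.
    for word, c in sorted(counts.items(), key=lambda kv: (kv[1], kv[0])):
        if cum + c > budget:
            break
        rare.add(word)
        cum += c
    return rare
```

With a 10\% tail fraction this reproduces the selection rule from Section \ref{sec:rare}; passing 0.05 or 0.15 gives the other two settings compared in Figure \ref{fig:ratio}.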
\begin{table*}[t] \caption{Performance of different models on the WNLaMPro test set, subdivided by word frequency.} \vspace{0.05in} \centering \scalebox{0.90}{\begin{tabular}{l||ccc|ccc|ccc} \toprule {\multirow{2}*{Methods}} & \multicolumn{3}{c|}{\textsc{Rare} (0, 10)} & \multicolumn{3}{c|}{\textsc{Frequent} (100, +$\infty$)} & \multicolumn{3}{c}{\textsc{Overall} (0, +$\infty$)} \\ & MRR & P@3 & P@10 & MRR & P@3 & P@10 & MRR & P@3 & P@10 \\ \midrule BERT (base) & 0.117 & 0.053 & 0.036 & 0.356 & 0.179 & 0.116 & 0.266 & 0.130 & 0.084 \\ Dict-BERT & \textbf{0.145} & \textbf{0.068} & \textbf{0.041} & \textbf{0.359} & \textbf{0.181} & \textbf{0.117} & \textbf{0.274} & \textbf{0.137} & \textbf{0.088} \\ ~$\vdash$ w/o MIM & 0.144 & 0.067 & \textbf{0.041} & 0.357 & 0.180 & 0.115 & 0.272 & 0.135 & 0.087 \\ ~$\vdash$ w/o DD & 0.141 & 0.065 & 0.040 & 0.355 & 0.179 & 0.116 & 0.269 & 0.133 & 0.086 \\ \bottomrule \end{tabular}} \label{tab:wnlapro} \end{table*} \paragraph{Unsupervised Language Model Probing.} In order to assess the ability of language models to understand words as a function of their frequency, we used the WordNet Language Model Probing (WNLaMPro) dataset~\citep{schick2020rare} to test how well a language model understands a given word: we can ask it for properties of that word using natural language. For example, a language model that understands the concept of ``guilt'' should be able to correctly complete the sentence ``Guilt is the opposite of \_\_\_'' with the word ``innocence''. WNLaMPro contains four different kinds of relations: antonym, hypernym, cohyponym+, and corruption. Based on word frequency in English Wikipedia, WNLaMPro defines three subsets based on keyword counts: WNLaMPro-rare $(0, 10)$, WNLaMPro-medium $(10, 100)$, and WNLaMPro-frequent $(100, +\infty)$. As shown in Table \ref{tab:wnlapro}, Dict-BERT greatly improves word representations compared with the vanilla BERT, which does not use a dictionary during pre-training. 
Based on the word frequency, we observe that Dict-BERT significantly helps learn rare word representations. Compared to the vanilla BERT, Dict-BERT improves MRR and P@3 relatively by +23.93\% and +28.30\%, respectively. In addition, Dict-BERT is also able to learn better frequent word representations. Although we did not directly take frequent word definitions as part of the input, Dict-BERT spends less model capacity on rare words, since they are easier to predict than for the vanilla BERT, so the saved capacity can be used to memorize facts involving popular words and interactions between popular words. \subsection{Pre-training Details} \label{sec:fine-tuning-details} We conduct experiments on pre-training BERT-Base with 110M parameters~\citep{devlin2019bert}. BERT-base consists of 12 Transformer layers. For each layer, the hidden size is set to 768 and the number of attention heads is set to 12. All models are pre-trained for 300k steps with batch size 2,000 and maximum sequence length 512. We use Adam~\cite{kingma2015adam} as the optimizer, and set its hyperparameter $\epsilon$ to $1e$-$6$ and $(\beta_1, \beta_2)$ to $(0.9, 0.98)$. The peak learning rate is set to $7e$-$4$. We set the dropout probability to $0.1$ and weight decay to $0.01$. All configurations are reported in Table \ref{tab:hyper}. 
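For concreteness, the learning-rate schedule implied by these settings (linear warmup over a 0.06 proportion of the 300k steps to the $7e$-$4$ peak, then linear decay, per Table \ref{tab:hyper}) can be sketched as follows; the exact decay-to-zero endpoint is an assumption.

```python
def lr_at(step, total_steps=300_000, peak=7e-4, warmup_prop=0.06):
    """Linear warmup to the peak learning rate, then linear decay to zero."""
    warmup = int(total_steps * warmup_prop)  # 18,000 steps for these settings
    if step < warmup:
        return peak * step / warmup
    return peak * max(0.0, (total_steps - step) / (total_steps - warmup))
```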
\begin{table*}[h] \begin{center} \caption{Hyperparameters for BERT pre-training and domain-adaptive pre-training (DAPT).} \vspace{0.05in} \begin{tabular}{ccc} \toprule \textbf{Hyperparameter} & \multicolumn{2}{c}{{\textbf{Assignments}}} \\ \midrule Pre-training setting & BERT pre-training & Domain adaptive pre-training \\ \midrule number of steps & 300K & 12.5K \\ batch size & 2,000 & 2,000 \\ maximum learning rate & 7e-4 & 1e-4 \\ learning rate optimizer & Adam & Adam \\ Adam epsilon & 1e-6 & 1e-6 \\ Adam beta weights & 0.9, 0.98 & 0.9, 0.98 \\ weight decay & 0.01 & 0.01 \\ warmup proportion & 0.06 & 0.06 \\ learning rate decay & linear & linear \\ \bottomrule \end{tabular} \label{tab:hyper} \end{center} \end{table*} \section{Introduction} \label{sec:introduction} \input{1-introduction} \section{Related Work} \label{sec:related} \input{2-realtedwork} \section{Proposed Method} \label{sec:method} \input{3-method} \section{Experiments} \label{sec:Experiments} \input{4-experiments} \section{Conclusions} \label{sec:conclusions} \input{5-conclusion}
\section{Introduction} Detection of spatial patterning is important in many domains, including molecular biology \cite{lee_13}, ecology \cite{stoyan_00} and epidemiology \cite{gartrell_96}. Spatial patterning can be identified by testing whether observed data depart from a model of spatial randomness: for instance, the homogeneous Poisson process may serve as a model of spatial randomness for point process data, and deviations from Poisson statistics may be used to detect spatial structure such as clustering. The Ripley $K$ function is widely employed, along with the associated $L$ and $H$ functions, to analyse deviations from homogeneous Poisson statistics \cite{ripley_77}, since it permits tests for clustering and dispersion at multiple scales. A number of related statistics have been introduced based on the $K$ function to summarize such deviations (employing either simulations or analytic approaches to evaluate the critical quantiles under Poisson statistics), including the clustering index and degree of clustering \cite{lee_13}, and a variance-normalized alternative to the $L$ function \cite{lagache_13}. The above spatial statistics have been defined and applied in the context of point processes, where the data to be analysed consist of a collection of points in (typically) Euclidean 2- or 3-space: for instance, the degree of clustering has been applied to the study of the spatial distribution of individual mRNA transcripts from a single gene, treated as point particles at positions inferred from Fluorescence In Situ Hybridization (FISH) microscopy data, whose clustered organization was shown to be dependent on the spatial aggregation of an associated RNA binding protein, and necessary for asynchronous cell-cycle timing in multinucleate fungal cells \cite{lee_13}. Point processes can be defined as a special class of random measures, the {\em counting measures}, which assign non-negative integer values to all measurable subsets of a space. 
Spatial statistics such as the Ripley $K$ function can be generalized to the framework of random measures; the $K$ function generalizes directly to the {\em reduced second moment measure} \cite{chui_13}, which can be defined for stationary random measures taking either discrete values (counting measures) or continuous values. Further, the concept of spatial randomness can be generalized, leading to the class of {\em completely spatially random} (CSR) {\em measures}, which includes the homogeneous Poisson processes as a subclass (those which are simultaneously counting measures and CSR). However, while such generalizations appear to enable the treatment of more general kinds of data, for instance continuous measurements which can be modeled as samples from a random measure, spatial statistics such as those above are rarely applied outside the point process context. Problems which arise in straightforwardly applying similar techniques to other kinds of data include choosing a general estimator for the $K$ function, and determining a method to evaluate the necessary critical quantiles, either by simulation or analytically, for a general class of CSR null hypotheses. Unlike the homogeneous Poisson processes, which can be parameterized by a single intensity parameter, the class of all CSR random measures has a more complex structure, as characterized in \cite{kingman_67,jordan_10}. In addition to the homogeneous Poisson processes, further subclasses of CSR random measures include Gamma processes, and sum measures associated with Marked Poisson processes (referred to as Mark Sum Poisson processes below). We propose here a general approach to $K$ function-based statistical tests in the context of arbitrary random measures. 
We provide a consistent convolution estimator for the $K$ function based on the approach of \cite{stoyan_84}, and investigate a number of ways in which the critical quantiles of the clustering index and degree of clustering estimators can be estimated for various classes of null model. First, we consider null hypotheses in the classes of Gamma processes and Mark Sum Poisson processes, and show how to fit these models to data and draw samples to simulate CSR in each case, providing an expectation-maximization (EM) algorithm to fit the Marked Poisson process. Further, we derive an exact permutation-based estimator for the clustering index, which provides a general test against the null hypothesis class of all CSR measures. We show that our permutation test using the convolution-based estimator reduces to the clustering index estimator used by Lee et al. for the point process case \cite{lee_13}, and hence provides a further rationale for the {\em conditionality principle} discussed in \cite{ripley_77}, which circumvents model fitting in the homogeneous Poisson case by fixing the number of points across simulations. An advantage of adopting a general random measure based approach to the identification of spatial patterning is that it provides a unifying framework in which statistics and indicators can be compared when analysing diverse data types. It also has the potential to provide a unifying framework for the modeling and inference of spatially distributed regulatory networks (at both inter- and intra-cellular levels) as diverse kinds of spatial omics data become available \cite{crosetto_15}. 
Random measures have emerged in a variety of areas of machine learning as a robust general framework for modeling diverse kinds of data, while avoiding the need to make arbitrary assumptions about the parametrization of distributions, particularly in the context of Bayesian non-parametric approaches (see \cite{jordan_10} for a general summary, and \cite{blei_10,lin_10,rao_09,sudderth_09} for applications in text and image processing). We discuss in further detail below the potential relevance of our approach and the random measure framework within the broader context of modeling spatial omics data. We begin by introducing formally the concepts of complete spatial randomness and random measures, and outline existing statistical tests for Ripley's {\it K}, {\it L} and {\it H} functions, the clustering index, and degree of clustering in the point process context (Sec. \ref{sec:prelim}). We then outline our generalization of these tests to the context of arbitrary random measures, including a convolution-based estimator for the $K$ function, and tests against various null hypothesis classes as described above (Sec. \ref{sec:results1}). We assess the ability of these tests to identify spatial randomness and patterning (clustering) first in synthetic data (Sec. \ref{sec:results2}), and then apply the method to probe for patterns of clustering over time in fluorescence microscopy data from pairs of corresponding mRNAs and proteins in a polarizing mouse fibroblast system (Sec. \ref{sec:results3}). The strong relationship between mRNA and protein clustering profiles suggests that mRNA localization and local translation provides a mechanism for protein localization in a number of cases, providing a small-scale demonstration of a spatial omics application. We conclude with a discussion (Sec. \ref{sec:discuss}).
\section{Preliminaries}\label{sec:prelim} \subsection{Complete Spatial Randomness and Random Measures} A {\em random measure} can be defined on any measurable space $\mathcal{S}$, that is, a set equipped with a $\sigma$-algebra. For convenience, we will assume below that $\mathcal{S}$ is a Euclidean space of dimension $d$, ($\mathcal{S}=\mathbb{R}^d$), and that the $\sigma$-algebra is $\mathcal{B}$, the collection of Borel sets. A {\em Borel set} is any set that can be formed by the operations of countable union, countable intersection and relative complement from the open sets in the standard topology. A {\em measure} on $\mathbb{R}^d$ is a mapping $\phi$ from $\mathcal{B}$ to the non-negative reals with infinity, such that $\phi(\emptyset) = 0$, and $\phi(\cup_i B_i) = \sum_i \phi(B_i)$ for all countable collections of disjoint sets in $\mathcal{B}$, $\{B_i\}_{i\in \mathbb{N}}$. A measure is called {\em locally finite} if $\phi(B)$ is finite whenever $B$ is a bounded set, and we denote the collection of all locally finite measures as $\mathbb{M}$. A {\em random measure} is then defined to be a random variable taking values in $\mathbb{M}$, and we will write $\Phi$ for the random variable itself, and $\phi$ for a specific value (measure) taken by $\Phi$. A random measure is necessarily defined with respect to a $\sigma$-algebra over $\mathbb{M}$, and all examples below will assume the $\sigma$-algebra $\mathcal{M}$, which is the smallest $\sigma$-algebra of subsets of $\mathbb{M}$ such that all functions $\phi \mapsto \phi(B)$ are measurable for arbitrary Borel set $B$. Further, we will use the notation $P(\phi(B)\in R)$ to denote the probability that a random measure assigns a value in $R$ to set $B$, where $R$ is an open interval in $\mathbb{R}$. A random measure is {\em completely random} if $P(\phi(B_1)\in R_1)$ is independent of $P(\phi(B_2)\in R_2)$ whenever $B_1\cap B_2 = \emptyset$. 
{\em Complete Spatial Randomness} (CSR) is a stronger property of a random measure which implies both (a) complete randomness, and (b) {\em stationarity}, $P(\phi(B)\in R) = P(\phi(B+z)\in R)$ for any displacement $z\in\mathbb{R}^d$. A number of properties follow from complete spatial randomness. First, a CSR measure is necessarily isotropic, and there exists a fixed {\em intensity parameter} $\lambda$ such that $\mathbb{E}[\phi(B)] = \lambda\nu(B)$, where $\nu(B)$ is the {\em Lebesgue} measure on $\mathbb{R}^d$, which returns the volume of $B$ \cite{chui_13}, and $\mathbb{E}[.]$ denotes expectation. Further, any CSR measure over $\mathbb{R}^d$ can be represented as a Poisson process $\phi^*$ over $\mathbb{R}^{d+1}$, whose intensity measure has the form $\lambda^*(B\times R) = \lambda_0\nu(B)\gamma(R)$, where $\gamma(R)$ is a measure over $\mathbb{R}$ (with $\gamma(\mathbb{R})$ finite), $\lambda_0$ is a non-negative real constant, and $\phi(B)=\sum_{\mathbf{x}\in \phi^*\cap(B\times\mathbb{R})}x_{d+1}$ (see below for notational conventions for point processes). This follows from the general characterization of CSR measures given in \cite{kingman_67} (see also \cite{kingman_02,jordan_10}). A consequence of this representation is that $P(\phi(B_1)\in R) = P(\phi(B_2)\in R)$ whenever $B_1$ and $B_2$ have equal volume, $\nu(B_1)=\nu(B_2)$, so that the distribution of $\phi(B)$ is determined only by the volume of $B$. A {\em point process} can be defined as a special type of random measure for which $\phi(B)\in\mathbb{N}\cup\{0,\infty\}$ with probability 1, along with the technical condition that $\phi(\{\mathbf{x}\}) \in \{0,1\}$ for all $\mathbf{x}\in\mathbb{R}^d$, which ensures that no two points coincide (also called {\em simplicity}, \cite{chui_13}). Since point processes take only non-negative integer values on bounded subsets, they are also called {\em counting measures}.
Further, since a sample $\phi$ from a point process is (with probability 1) a countable subset of $\mathbb{R}^d$ \cite{chui_13}, we can use set notation and replace integrals by infinite sums in defining quantities for point processes, writing for example $\phi(B) = \sum_{\mathbf{x}\in\phi\cap B} 1 = |\phi\cap B|$. The class of CSR point processes is equivalent to the class of homogeneous Poisson processes. The homogeneous Poisson processes are parameterized by a single intensity parameter, $\lambda$, such that $P(\phi(B)=n) = \Poisson(n;\lambda\nu(B))$, where $\Poisson(a;b)=(b^a/a!)\exp(-b)$ is the Poisson probability mass function. The more general Poisson processes (as used in the general characterization of CSR above) are completely random measures (without stationarity), parameterized by an {\em intensity measure} $\lambda$ such that $P(\phi(B)=n) = \Poisson(n;\lambda(B))$ \cite{chui_13}. \subsection{Statistical Tests for Spatial Patterning in Point Processes} For a stationary point process, the Ripley $K$ function can be defined in terms of the {\em reduced second moment measure} $\mathcal{K}$ \cite{chui_13,ripley_77}: \begin{eqnarray}\label{eq:ripK1} K(r) &=& \mathcal{K}(B(o,r)) \nonumber \\ \mathcal{K}(B) &=& (1/\lambda)\mathbb{E}_{P_o}[\phi(B\backslash\{o\})], \end{eqnarray} where $o$ is the origin, $B(o,r)$ is an open ball at the origin of radius $r$, and $\mathbb{E}_{P_o}[.]$ is the expectation under the {\em Palm distribution} at the origin, which for a stationary point process can be thought of as the original process conditioned on observing a point at $o$. Hence, $P_o$ includes a point at $o$ with probability 1, and this point is removed by taking the set difference $\phi(B\backslash\{o\})$ when evaluating the reduced second moment measure. $K(r)$ is therefore the expected number of further points observed within a radius $r$ of an arbitrary point (due to stationarity) in $\phi$.
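Since the homogeneous Poisson process is the reference CSR model for point processes, it is useful to see how a sample is drawn in practice: first draw the point count from a Poisson distribution with mean $\lambda\nu(W)$, then scatter that many points uniformly. The following is an illustrative sketch (assuming NumPy; the window dimensions and intensity are arbitrary example values, not values from the study):

```python
import numpy as np

def sample_homogeneous_poisson(lam, width, height, rng):
    """Draw one sample from a homogeneous Poisson process on a
    rectangular window: N ~ Poisson(lam * area), then N points
    placed independently and uniformly over the window."""
    n = rng.poisson(lam * width * height)
    xs = rng.uniform(0.0, width, size=n)
    ys = rng.uniform(0.0, height, size=n)
    return np.column_stack([xs, ys])

rng = np.random.default_rng(0)
pts = sample_homogeneous_poisson(0.1, 50, 50, rng)
# The expected point count is lam * nu(W) = 0.1 * 2500 = 250.
```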
Ripley's $L$ and $H$ functions can be defined in terms of $K$ as: \begin{eqnarray}\label{eq:ripLH} L(r) &=& \sqrt[d]{K(r)/\nu(B(o,r))} \nonumber \\ H(r) &=& L(r) - r. \end{eqnarray} These have the effect of normalizing $K$ so that for a homogeneous Poisson process they take the form $L(r)=r$ and $H(r)=0$. In \cite{osher_81}, an estimator for $K$ is proposed: \begin{eqnarray}\label{eq:Kestimator} \hat{K}(r) = \frac{1}{\lambda^2 w(r)} \sum_{\substack{\mathbf{x},\mathbf{y}\in\phi\cap W,\\\mathbf{y}\neq \mathbf{x}}}[d(\mathbf{x},\mathbf{y})\leq r], \end{eqnarray} where $[A]$ is the Iverson bracket, which is 1 when $A$ is true and 0 otherwise, $d(.,.)$ is the Euclidean distance, $W\in\mathcal{B}$ is the window region in which the sample $\phi$ is observed, and $w(.)$ is an {\em edge correction}: \begin{eqnarray}\label{eq:w} w(r) = \mathbb{E}_{t_r}[\nu(W\cap(W+t_r))], \end{eqnarray} where $t_r$ is a random vector sampled from a uniform distribution over the sphere centered at the origin of radius $r$. Eq. \ref{eq:Kestimator} is shown to be unbiased for all $r$ less than the diameter of $W$ for any convex $W$ \cite{osher_81}. A simpler (but biased) estimator for $K$ is also commonly used \cite{chui_13,ripley_77}, which replaces the edge correction function with the volume/area of the observed region: \begin{eqnarray}\label{eq:Kbiased} \tilde{K}(r) = \frac{1}{\lambda^2 \nu(W)} \sum_{\substack{\mathbf{x},\mathbf{y}\in\phi\cap W,\\\mathbf{y}\neq \mathbf{x}}}[d(\mathbf{x},\mathbf{y})\leq r]. \end{eqnarray} The associated statistical tests introduced below are unaffected by the choice between $\hat{K}$ and $\tilde{K}$, and estimators for $L$ and $H$ can be straightforwardly derived from $\hat{K}$ and $\tilde{K}$ by substituting these estimators for true values in Eq. \ref{eq:ripLH}. In \cite{lee_13}, the {\em clustering index} statistic is introduced, which is denoted $H^*$.
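The biased estimator $\tilde{K}$ of Eq. \ref{eq:Kbiased} amounts to counting ordered pairs of distinct points within distance $r$ and normalizing by $\lambda^2\nu(W)$. A minimal NumPy sketch (the test points and window below are illustrative):

```python
import numpy as np

def k_biased(points, area, r):
    """Biased Ripley K estimator (no edge correction): count ordered
    pairs of distinct points within distance r, normalized by
    lambda^2 * nu(W), with lambda estimated as n / area."""
    n = len(points)
    lam = n / area
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = (d <= r).sum() - n            # drop the n self-pairs at d = 0
    return pairs / (lam ** 2 * area)

# Three collinear points with unit spacing, in a 10 x 10 window.
pts = np.array([[5.0, 4.0], [5.0, 5.0], [5.0, 6.0]])
k1 = k_biased(pts, 100.0, 1.0)            # 4 ordered pairs at distance 1
```

Under CSR in the plane $K(r)\approx\pi r^2$; strongly clustered configurations such as the one above give much larger values.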
We provide a general expression for $H^*$ below, which provides a test for clustering or dispersion at significance level $\omega\in(0\;0.5)$: \begin{eqnarray}\label{eq:Hstar} H^*(r) &=& \begin{cases} \frac{\hat{H}(r)-\hat{H}_{0.5}(r)}{\hat{H}_{(1-\omega)}(r)-\hat{H}_{0.5}(r)} &\mbox{if } (\hat{H}(r)\geq \hat{H}_{0.5}(r)) \wedge (\hat{H}_{(1-\omega)}(r)>\hat{H}_{0.5}(r)) \\ -\frac{\hat{H}_{0.5}(r)-\hat{H}(r)}{\hat{H}_{0.5}(r)-\hat{H}_{\omega}(r)} & \mbox{if } (\hat{H}(r)\leq \hat{H}_{0.5}(r)) \wedge (\hat{H}_{0.5}(r)>\hat{H}_{\omega}(r)) \\ 0 & \mbox{otherwise}. \end{cases} \end{eqnarray} where $\hat{H}_{\omega}(r)$ denotes the $\omega$'th quantile ($(100\omega)$'th percentile) of $\hat{H}(r)$ under an appropriate simulation of CSR (unlike \cite{lee_13}, we use a median instead of a mean simulation-based estimator to center $H^*(r)$, so that $H^*(r)=0$ when $\hat{H}(r)=\hat{H}_{0.5}(r)$, to avoid complications arising if the mean estimator is greater than $\hat{H}_{(1-\omega)}(r)$ or less than $\hat{H}_{\omega}(r)$). $H^*$ is thus a further normalization of $\hat{H}$ such that, for a given value of $r$, $H^*(r)>1$ iff $H(r)$ (and hence $K(r)$) is significantly above the range expected under CSR on a 1-sided test at level $\omega$, providing evidence of clustering (respectively, $-H^*(r)>1$ for dispersion) at length-scale $r$. By inspecting Eq. \ref{eq:Hstar}, we see that the edge correction terms from Eq. \ref{eq:Kestimator} will cancel in calculating $H^*$ from $\hat{K}$, and hence it is sufficient to use the simpler estimator $\tilde{K}$. To calculate $\hat{H}_{(1-\omega)}(r)$ and $\hat{H}_{\omega}(r)$ it is necessary to fix a distribution for simulations appropriate for the CSR null hypothesis. 
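The normalization of Eq. \ref{eq:Hstar} only requires the observed $\hat{H}(r)$ and the simulated values under the null; a sketch for a single value of $r$ (assuming NumPy, with a toy ensemble standing in for the CSR simulations):

```python
import numpy as np

def clustering_index(h_obs, h_sims, omega=0.05):
    """Clustering index H*(r) at one value of r: normalize the
    observed H estimate by the omega, 0.5 and (1 - omega) quantiles
    of the simulated H values under the CSR null."""
    lo, med, hi = np.quantile(h_sims, [omega, 0.5, 1.0 - omega])
    if h_obs >= med and hi > med:
        return (h_obs - med) / (hi - med)
    if h_obs <= med and med > lo:
        return -(med - h_obs) / (med - lo)
    return 0.0

sims = np.linspace(-1.0, 1.0, 101)        # toy CSR ensemble, median 0
# clustering_index(2.0, sims) > 1 signals clustering at this r.
```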
One possibility is to estimate the intensity parameter $\lambda$ directly from $\phi$ ($\lambda = \phi(W)/\nu(W)$), and simulate a homogeneous Poisson process with this $\lambda$ parameter by drawing first a Poisson distributed value $N$ for the number of points in $W$ from $\Poisson(N;\lambda\nu(W))$ for each simulation, and then distributing $N$ points across $W$ (independently and uniformly). This method is termed {\em parametric bootstrapping}, as discussed in \cite{chui_13,davison_97,ripley_77}, and provides an asymptotically consistent statistical test (as $\nu(W)\rightarrow\infty$). Alternatively, we may condition all simulations on the number of points observed in $\phi$. Hence, we can take advantage of the {\em conditionality principle} discussed in \cite{ripley_77}, whereby the distribution of points in region $W$ for any homogeneous Poisson process is independent of $\lambda$ when conditioned on $N$. The points must be independently and uniformly distributed in $W$ regardless of $\lambda$, forming a {\em binomial process} over $W$ (see \cite{chui_13}). By conditioning on $N$, we therefore derive a consistent statistical test independent of the size of $W$ against all CSR point processes (homogeneous Poisson processes), which is the approach taken in \cite{chui_13,ripley_77}. We note however that the simulations for the conditional test are no longer strictly CSR, since they are simulations of a binomial process. This distinction will be important in generalizing $H^*$. In particular, if an observation $\phi$ is quantized across $W$ into voxels which are small enough that the probability of two points occupying the same voxel is negligible, it is possible to view simulations of a binomial process as permutations of the voxels in $W$, and derive the binomial process test as a Monte-Carlo approximation to an exact permutation test, as will be proposed for the general case. 
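Under the conditional approach, each simulation reduces to a binomial process: the observed point count is fixed and the points are scattered uniformly, with no intensity parameter to estimate. A minimal sketch (NumPy assumed; the count and window are illustrative):

```python
import numpy as np

def simulate_binomial_process(n, width, height, rng):
    """Conditional CSR simulation: fix the observed point count n and
    place the n points independently and uniformly in the window,
    which is valid for any homogeneous Poisson process regardless
    of its intensity."""
    return rng.uniform([0.0, 0.0], [width, height], size=(n, 2))

rng = np.random.default_rng(1)
sims = [simulate_binomial_process(120, 50, 50, rng) for _ in range(5)]
```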
The options discussed above for calculating $H^*$ are summarized in Algorithm \ref{alg1}, which also serves as a template for generalization below (where $\mathbf{X}$ denotes a spatially quantized observation of $\phi$; here, a binary indicator vector across voxels lying in an observation window $W$ which is 1 iff a voxel contains a point in $\phi$). In \cite{chui_13}, $H^*$ is further used to define the {\em degree of clustering} $\hat{\delta}(r)=\int_{t\in(0,r)}\max(H^*(t)-1,0)\text{d}t$, which is the area of the curve $H^*(.)$ above $1$ from $0$ to $r$, and hence serves as an indicator for the degree of departure from CSR in this range. \begin{algorithm} \caption{Generalized Estimator for Clustering Index, $H^*$} \label{alg1} \begin{algorithmic}[1] \REQUIRE $\mathbf{X}$ (Vectorized sample from point process / random measure), $T$ (number of simulation / permutation trials), $\omega$ (significance level) \STATE Calculate estimators for $K_X$, $L_X$ and $H_X$ using $\mathbf{X}$ (Eqs. \ref{eq:ripLH} and \ref{eq:Kbiased}). \STATE Draw vectorized samples $\mathbf{Y}_1 \; ... \; \mathbf{Y}_T$ using one of the following methods: \STATEx (a) (parametric bootstrapping) Find the best fitting CSR model $M$ for $\mathbf{X}$ in chosen null hypothesis class and run $T$ simulations of $M$, \STATEx (b) (conditioning) Simulate the null model $T$ times conditioned on the measure or point count of the whole observed region in $\mathbf{X}$, \STATEx (c) (permutation) Draw $T$ permutations of $\mathbf{X}$. \STATE Calculate $K_{Y_t}$, $L_{Y_t}$ and $H_{Y_t}$ estimators on $\mathbf{Y}_t$ for $t = 1\; ... \; T$ (Eqs. \ref{eq:ripLH} and \ref{eq:Kbiased}). \STATE Calculate $\omega$'th, 0.5'th and $(1-\omega)$'th quantiles of $H_{Y_1}(r)...H_{Y_T}(r)$ for each value of $r$, and use to normalize $H_X$ to calculate the clustering index, $H^*$ (Eq. \ref{eq:Hstar}). 
\RETURN $H^*$ \end{algorithmic} \end{algorithm} \section{Results} \subsection{Generalized Statistical Tests for Spatial Patterning in Arbitrary Random Measures}\label{sec:results1} We now consider the generalization of the statistical tests and indices above from the point process case to the general random measure case. For a stationary random measure, the {\em reduced second moment measure} $\mathcal{K}$ and Ripley's $K$ function are defined exactly as in Eq. \ref{eq:ripK1} (see \cite{benes_07}, Eq. 2.19). The relevant {\em Palm distribution}, $P_o$, in the random measure case takes the form $P_o(Y) = \int g(x) \mathbf{1}_{Y}(\phi+x) \phi(\text{d}x) P(\text{d}\phi)$, with $g(.):\mathbb{R}^d\rightarrow\mathbb{R}^+$ an arbitrary non-negative measurable function integrating to $1$ and $Y\in \mathcal{M}$ (see \cite{stoyan_84}). This definition can be seen to reduce to the distribution of further points conditioned on a point at the origin for the point process case, since $g(x)\mathbf{1}_{Y}(\phi+x)$ will be non-zero only for $x\in\phi$, regardless of $g(.)$. Ripley's $L$ and $H$ functions follow directly, as in Eq. \ref{eq:ripLH}. To provide a general estimator for the Ripley $K$ function, we must first specify how samples from the random measure $\phi$ are observed. We assume that we have an observation window $W$, which can be partitioned into a collection of $N$ regular cubical voxels with sides of length $l$, denoted $v_1,v_2,...,v_N\subset W$, whose centres lie at $\mathcal{C} = \{c_1,c_2,...,c_N\}$. Our observation of $\phi$ is limited to the value it takes on each voxel, hence we observe the quantities $\phi(v_1),...,\phi(v_N)$. We can thus alternatively represent a sample as a measure $\bar{\phi}$ with atoms at $c_1,...,c_N$ having weights $\phi(v_1),...,\phi(v_N)$ respectively.
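For point data, the voxelized representation $\bar{\phi}$ is simply a histogram of the points over the voxel grid; for continuous measurements, the grid of values $\phi(v_1),...,\phi(v_N)$ is typically what is observed directly. A sketch of the former (NumPy assumed, with $l=1$ by default):

```python
import numpy as np

def voxelize(points, width, height, l=1.0):
    """Quantize a point sample into the voxel representation: a grid
    whose entry (i, j) holds phi(v_n) for the cubical voxel of side l."""
    nx, ny = int(width / l), int(height / l)
    grid, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=[nx, ny],
                                range=[[0.0, width], [0.0, height]])
    return grid

pts = np.array([[0.5, 0.5], [0.4, 0.6], [3.2, 2.8]])
grid = voxelize(pts, 5.0, 5.0)
# grid[0, 0] == 2: the first two points share a voxel; grid[3, 2] == 1.
```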
We now consider the estimator: \begin{eqnarray}\label{eq:KestimatorRM2} \bar{K}(r) &=& \frac{1}{\lambda^2\nu(W)}\sum_{n_1,n_2\in\{1...N\}} [|c_{n_1}-c_{n_2}|\leq r] \phi(v_{n_1})\phi(v_{n_2}) - \bar{C} \nonumber \\ &=& \frac{1}{\lambda^2\nu(W)}\int\int [|x-y|\leq r] \bar{\phi}(\text{d}x)\bar{\phi}(\text{d}y) - \bar{C} \nonumber \\ \bar{C} &=& \frac{\sum_{n=1...N} (\bar{\phi}(v_n))^2}{\lambda^2\nu(W)}. \end{eqnarray} $\bar{K}(.)$ can be efficiently calculated using a discrete convolution, since we have: \begin{eqnarray}\label{eq:convolution} \int\int [|x-y|\leq r] \bar{\phi}(\text{d}x)\bar{\phi}(\text{d}y) = \int [|x|\leq r] (\bar{\phi} * \bar{\phi}^{\prime})(x) \text{d}x, \end{eqnarray} where $\bar{\phi}^{\prime}(\{x\}) = \bar{\phi}(\{-x\})$, and $(\bar{\phi}*\bar{\phi}^{\prime})$ is the convolution of $\bar{\phi}$ and $\bar{\phi}^{\prime}$ when treated as functions from $\mathbb{R}^d$ to $\mathbb{R}$, hence $\bar{\phi}(x)=\bar{\phi}(\{x\})$. $\bar{K}(.)$ is an estimator for $K(.)$ in the following sense: \vspace{0.3cm} \noindent\textbf{Proposition 1.} \textit{For all values of $r$, $\bar{K}(\max(r-l\sqrt{d},0))\leq \tilde{K}(r) \leq \bar{K}(r+l\sqrt{d}) + \bar{C}$, where} \begin{eqnarray}\label{eq:KestimatorRM} \tilde{K}(r) &=& \frac{1}{\lambda^2\nu(W)}\int\int \mathbf{1}_W(x)\mathbf{1}_W(y) [|x-y|\leq r] \phi(\text{d}x)\phi(\text{d}y) - C \nonumber \\ C &=& \frac{\int_W \phi(\{x\}) \phi(\text{d}x)}{\lambda^2\nu(W)}. \end{eqnarray} \vspace{0.3cm} \noindent\textbf{Proof.} We begin by defining a function $V:W\rightarrow \mathcal{C}$ such that $x\in v_n$ implies $V(x)=c_n$ (hence $V$ sends $x$ to the centre of the voxel to which it belongs). Then, we can rearrange Eq. \ref{eq:KestimatorRM2} as follows: \begin{eqnarray}\label{eq:KestimatorRM3} \bar{K}(r) &=& \frac{1}{\lambda^2\nu(W)}\int\int [|x-y|\leq r] \bar{\phi}(\text{d}x)\bar{\phi}(\text{d}y) - \bar{C} \nonumber \\ &=& \frac{1}{\lambda^2\nu(W)}\int\int \mathbf{1}_W(x)\mathbf{1}_W(y) [|V(x)-V(y)|\leq r] \phi(\text{d}x)\phi(\text{d}y) - \bar{C}. \end{eqnarray} By inspection, the form of Eq. \ref{eq:KestimatorRM3} is identical to Eq. \ref{eq:KestimatorRM} with the term $[|V(x)-V(y)|\leq r]$ substituted for $[|x-y|\leq r]$, and $\bar{C}$ substituted for $C$. Since each voxel is a $d$ dimensional cube with sides of length $l$, we have $\max_{x\in W}|x-V(x)| = l\sqrt{d}/2$ (half the voxel diagonal). Hence, by the triangle inequality: \begin{eqnarray}\label{eq:triangle} |V(x)-V(y)| - l\sqrt{d} \leq |x-y| \leq |V(x)-V(y)| + l\sqrt{d}. \end{eqnarray} Writing $S(|x-y|\leq r)$ for the subset of $W\times W$ for which $[|x-y|\leq r]=1$ (and similarly for $S(|V(x)-V(y)|\leq r-l\sqrt{d})$ and $S(|V(x)-V(y)|\leq r+l\sqrt{d})$), this implies: \begin{eqnarray}\label{eq:sset} S(|V(x)-V(y)|\leq r-l\sqrt{d}) \subseteq S(|x-y|\leq r) \subseteq S(|V(x)-V(y)|\leq r+l\sqrt{d}). \end{eqnarray} The three subsets in Eq. \ref{eq:sset} correspond directly to the regions of integration for which the Iverson bracket conditions in Eqs. \ref{eq:KestimatorRM3} and \ref{eq:KestimatorRM} are true ($[.]=1$) when the three quantities in the proposition are substituted (noting that $r\geq 0$). The proposition follows from the nested relationship between these regions of integration, the fact that $\phi$ is non-negative, and the fact that $\bar{C}\geq C$ (since for any voxel $v_n$, $(\bar{\phi}(v_n))^2 = (\int_{v_n} \phi(\text{d}x))^2 \geq \int_{v_n} \phi(\{x\}) \phi(\text{d}x)$). \begin{flushright} $\square$ \end{flushright} $\tilde{K}(r)$ is related to a further estimator $\hat{K}$, which substitutes $(\mathbf{1}_W(x)\mathbf{1}_W(y))/(w'(x-y))$ for $\mathbf{1}_W(x)\mathbf{1}_W(y)/\nu(W)$ in Eq.
\ref{eq:KestimatorRM}, where $w'(x-y)=\nu(W\cap (W+x-y))$ is an edge-correction term. We can derive $\hat{K}$ as an unbiased estimator of $K$ from fully observed (not spatially quantized) samples $\phi$ using a result in \cite{stoyan_84} (using their Eq. 10, following Theorem 1, see Appendix A for the derivation). For values of $r$ which are small compared to the diameter of $W$, $w'(x-y)\approx \nu(W)$ whenever $[|x-y|\leq r]=1$, and hence $\tilde{K}(r)\approx\hat{K}(r)$; hence $\bar{K}(r)$ provides an approximation to $\hat{K}(r)$ for small $r$ up to the bounds in Prop. 1. We note also that $\bar{K}(r)$ reduces (up to spatial quantization) to the previous estimator Eq. \ref{eq:Kbiased} in the point process case: \vspace{0.3cm} \noindent\textbf{Proposition 2.} \textit{For a point process sample $\phi$, for $\bar{K}$ as in Eq. \ref{eq:KestimatorRM2} we have:} \begin{eqnarray}\label{eq:KestimatorRMprop2} \bar{K}(r) = \frac{1}{\lambda^2 \nu(W)} \sum_{\substack{\mathbf{x},\mathbf{y}\in\bar{\phi}\cap W,\\\mathbf{y}\neq \mathbf{x}}} [d(\mathbf{x},\mathbf{y})\leq r], \end{eqnarray} \noindent\textit{where $x'\in\bar{\phi}$ iff $x'=c_n$ for a voxel $v_n$ for which there exists $x\in\phi$ such that $x\in v_n$, and $\lambda = \phi(W)/\nu(W)$ (where $\phi(W)$ is the number of observed points). 
Additionally, we assume that the voxel width is chosen so that for no voxel $\phi(v_n)>1$.} \vspace{0.3cm} \noindent\textbf{Proof.} For the point process sample $\phi$ as in the proposition (such that the voxel width is chosen so that for no voxel $\phi(v_n)>1$), we have: \begin{eqnarray}\label{eq:appC1} \bar{K}(r) &=& \frac{1}{\lambda^2\nu(W)}\int\int [|x-y|\leq r] \bar{\phi}(\text{d}x)\bar{\phi}(\text{d}y) - \frac{\phi(W)}{\lambda^2\nu(W)} \nonumber \\ &=& \frac{1}{\lambda^2\nu(W)}\sum_{\substack{\mathbf{x},\mathbf{y}\in\bar{\phi}\cap W}} [d(\mathbf{x},\mathbf{y})\leq r] - \frac{1}{\lambda}, \end{eqnarray} using the fact that $\phi(v_n)\in\{0,1\}$ implies $\sum_{n=1...N}(\bar{\phi}(v_n))^2=\phi(W)$, and $\lambda=\phi(W)/\nu(W)$. Further, \begin{eqnarray}\label{eq:appC2} \frac{1}{\lambda^2\nu(W)}\sum_{\substack{\mathbf{x},\mathbf{y}\in\bar{\phi}\cap W,\\\mathbf{y}= \mathbf{x}}} [d(\mathbf{x},\mathbf{y})\leq r] &=& \frac{\phi(W)}{\lambda^2\nu(W)}\nonumber\\ &=& \frac{1}{\lambda}, \end{eqnarray} and the proposition follows. \begin{flushright} $\square$ \end{flushright} The estimator $\bar{K}$ can be used in Algorithm \ref{alg1} as above to calculate the $H^*$ clustering index statistic for a general random measure, where $\mathbf{X}$ here is identical to $\bar{\phi}$. All three options (parametric bootstrapping, conditioning and permutation) can be used in step 2 of the algorithm, and we consider below these options in connection with various null hypothesis classes. We begin by considering two restricted classes of CSR random measures as null hypotheses, stationary {\em Gamma} and {\em Mark Sum Poisson} processes, before considering options for the class of all CSR random measures. \vspace{0.3cm} \noindent\textbf{Gamma process.} A stationary {\em Gamma} process is defined as a random measure whose marginals are Gamma distributed as follows: \begin{eqnarray}\label{eq:gammProc} P(\phi(B)) = \GamDist(.;a\nu(B),b), \end{eqnarray} which is a CSR measure \cite{jordan_10}.
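As an illustration of the Gamma process null on unit voxels ($l=1$), the following sketch fits $(a,b)$ by moment matching (a simple stand-in for the maximum-likelihood fit used in the text, treating $b$ as a scale parameter) and then simulates CSR by drawing i.i.d. Gamma values per voxel. NumPy is assumed, and the synthetic "observation" is illustrative:

```python
import numpy as np

def fit_gamma_moments(values):
    """Moment-matching fit of Gamma(a, scale=b) to per-voxel values:
    mean = a*b and var = a*b^2 give b = var/mean and a = mean/b."""
    m, v = values.mean(), values.var()
    b = v / m
    return m / b, b

def simulate_gamma_process(a, b, shape, rng):
    """CSR simulation under the Gamma process null: independent
    Gamma(a, scale=b) draws on each unit voxel (nu(v_n) = 1)."""
    return rng.gamma(a, b, size=shape)

rng = np.random.default_rng(2)
obs = rng.gamma(3.0, 0.5, size=(50, 50))       # synthetic "observation"
a_hat, b_hat = fit_gamma_moments(obs.ravel())
sim = simulate_gamma_process(a_hat, b_hat, obs.shape, rng)
```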
On the null hypothesis that $\phi$ is a sample from a Gamma process, $\phi(v_1),...,\phi(v_N)$ will be distributed according to $\GamDist(.;a,b)$, assuming for simplicity $l=1$ (the voxel sides are unit length) and hence $\nu(v_n)=1$ for all voxels. The parameters $a$ and $b$ can thus be set directly by fitting a Gamma distribution to $\phi(v_1),...,\phi(v_N)$ by maximum likelihood \cite{bishop_06}, and the resulting Gamma process simulated by drawing independent identically distributed values from $\GamDist(.;a,b)$ at each voxel. We note that it is also possible to simulate a Gamma process using a {\em stick-breaking} algorithm, as in \cite{rao_09}, which may be more efficient if the number of voxels is large, with many taking values close to zero. Further, if it is assumed that $b=1$, it is possible to choose the conditional version of step 2 in Algorithm \ref{alg1}, by first drawing $N$ values from $\GamDist(.;a,1)$, and normalizing to sum to $\phi(W)$, the observed sample total. This is equivalent to simulating a {\em Dirichlet process} with intensity parameter $a$, and scaling by $\phi(W)$, for which it is possible also to use a stick-breaking algorithm \cite{jordan_10}. \vspace{0.3cm} \noindent\textbf{Mark Sum Poisson process.} A {\em Mark Sum Poisson} process can be defined as a random measure whose marginals are distributed as: \begin{eqnarray}\label{eq:MSPProc} P(\phi(B)\in R) = \sum_{\mathbf{n}\in(\mathbb{N}\cup\{0\})^M}[(\sum_m w_m n_m)\in R]\cdot \prod_m\Poisson(n_m|\alpha_m\nu(B)), \end{eqnarray} where $m=1...M$ are the {\em marks} of the process, each associated with a weight $w_m\geq0$ and intensity $\alpha_m>0$, and $[A]$ is the Iverson bracket, which is 1 when $A$ is true and 0 otherwise.
The process is so-called, since it is equivalent to attaching marks to the points in a homogeneous Poisson process with intensity $\lambda=\sum_m \alpha_m$, where mark $m$ appears with a probability proportional to $\alpha_m$ (forming a {\em Marked Poisson process}), and the value $\phi(B)$ is calculated by summing across the weights $w_m$ of the points in $B$ (forming its associated {\em sum measure}) \cite{chui_13}. In this equivalent representation, each mark independently follows a homogeneous Poisson process with intensity $\alpha_m$, and hence it follows that Eq. \ref{eq:MSPProc} is CSR \cite{chui_13}. On the null hypothesis that $\phi$ is a sample from a Mark Sum Poisson process, $\phi(v_1),...,\phi(v_N)$ will be distributed according to Eq. \ref{eq:MSPProc}, assuming $l=1$, which we call a {\em Weighted Sum of Poisson distributions}. By fixing the weights $w_1...w_M$, it is possible to derive an expectation-maximization (EM) algorithm to fit $\alpha_1...\alpha_M$ by maximum-likelihood (see Appendix B). Having fitted the model, CSR samples can be drawn by generating values $p_1,...p_M$, distributed as $\Poisson(.|\alpha_1)...\Poisson(.|\alpha_M)$ respectively, and calculating $\sum_m p_m w_m$ at each voxel. Aside from forming a broad CSR measure class, Mark Sum Poisson processes are interesting in that, in the limit of infinite marks, they form a universal representation for CSR measures. This is because, as noted earlier, any CSR measure $\phi$ over $\mathbb{R}^d$ can be represented as a (non-homogeneous) Poisson process $\phi^*$ over $\mathbb{R}^{d+1}$ with intensity measure $\lambda^*(B\times R) = \lambda_0\nu(B)\gamma(R)$ such that $\phi(B)=\sum_{\mathbf{x}\in \phi^*\cap(B\times\mathbb{R})}x_{d+1}$ \cite{kingman_67}. As the number of marks increases, the $\alpha_m$'s are better able to approximate the measure $\gamma$, and hence any CSR measure.
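Given fitted intensities (the EM fit itself is deferred to the appendix), drawing a CSR sample from a Mark Sum Poisson process on unit voxels is direct; an illustrative NumPy sketch, where the weights and intensities are arbitrary example values:

```python
import numpy as np

def simulate_mark_sum_poisson(alphas, weights, shape, rng):
    """CSR simulation under the Mark Sum Poisson null (l = 1): each
    voxel value is sum_m w_m * p_m, with independent p_m ~ Poisson(alpha_m)."""
    alphas = np.asarray(alphas, dtype=float)
    weights = np.asarray(weights, dtype=float)
    counts = rng.poisson(alphas, size=tuple(shape) + (len(alphas),))
    return counts @ weights

rng = np.random.default_rng(3)
weights = [0.25, 0.5, 1.0, 2.0, 4.0]
alphas = [0.5] * 5
sim = simulate_mark_sum_poisson(alphas, weights, (50, 50), rng)
# E[phi(v_n)] = sum_m alpha_m * w_m = 0.5 * 7.75 = 3.875 per unit voxel.
```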
Although we considered above only the case of fitting a distribution with finite marks and fixed weights, by using a large number of marks with densely and evenly sampled weights it is possible to approximate any CSR measure. \vspace{0.3cm} \noindent\textbf{General case.} We know that, under the CSR null hypothesis, since all voxels have identical volume $\nu(v_n)=l^d$, $\phi(v_1),...,\phi(v_N)$ are independent samples from the same distribution ($P(\phi(B_1)\in R) = P(\phi(B_2)\in R)$ whenever $\nu(B_1)=\nu(B_2)$). Hence, we can use the empirical distribution of voxel values as an estimate for $P(\phi(B))$, $\nu(B)=l^d$, which is completely general in that the only assumption we have made is that $\phi$ is CSR. We can thus approximate a simulation of CSR from the `best fitting' CSR measure (whose marginals approach $\phi$ asymptotically), by generating values for new voxels in the simulation using sampling with replacement of the values $\phi(v_1),...,\phi(v_N)$ already seen (equivalently, sampling from the empirical distribution). We note that this only approximates CSR, since, with probability 1, $\phi(v_n)$ takes a value in the empirical distribution, and hence the values taken by $\phi$ on any disjoint set of sub-voxels which cover a given voxel must be dependent. If instead of sampling with replacement from the empirical distribution to generate new samples, we permute the voxel values $\phi(v_1),...,\phi(v_N)$ (sampling without replacement), $\phi(W)$ must remain unchanged, and we can regard this as approximate sampling from the best fitting CSR measure conditioned on $\phi(W)$. However, rather than viewing permutation as an approximate simulation of CSR, it is also possible to view it in terms of an exact test against the general CSR null hypothesis, based on the exchangeability of the voxels under CSR.
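A single permutation draw for the general test simply shuffles the observed voxel values over voxel positions, preserving $\phi(W)$ exactly; a minimal sketch (NumPy assumed):

```python
import numpy as np

def permute_voxels(grid, rng):
    """One permutation draw for the general CSR test: shuffle the
    observed voxel values phi(v_1),...,phi(v_N) across positions
    (sampling without replacement), so phi(W) is preserved."""
    return rng.permutation(grid.ravel()).reshape(grid.shape)

rng = np.random.default_rng(4)
grid = np.arange(16, dtype=float).reshape(4, 4)
perm = permute_voxels(grid, rng)
# perm holds the same multiset of values as grid, rearranged.
```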
We summarize this as: \vspace{0.3cm} \noindent\textbf{Proposition 3.} \textit{Algorithm \ref{alg1} with exhaustive permutation at step 2 is an exact test for CSR of an arbitrary random measure at significance level $\omega$, in the sense that $P(H^*(r)>1)<\omega$ for an arbitrary distribution over the class of all CSR measures.} \vspace{0.3cm} \noindent\textbf{Proof.} Given random measure $\phi$ over $\mathbb{R}^d$, observation window $W$ and $N$ cubical voxels with sides of length $l$ partitioning $W$, $v_1, v_2, ... v_N$, we can construct a related probability measure $\phi'$ over $\mathbb{R}^N$ such that: \begin{eqnarray}\label{eq:appD1} \phi'(B') = P([\phi(v_1),\phi(v_2),...,\phi(v_N)]\in B'), \end{eqnarray} where $[a_1,a_2,...,a_N]$ denotes a vector in $\mathbb{R}^N$. Hence, $\phi'(B')$ is the probability that $\phi$ gives a combination of values to voxels $1...N$ lying in $B'$, where $B'$ is a Borel set over $\mathbb{R}^N$. Further, we introduce the {\em rejection function}, $f:\mathbb{R}^N\rightarrow \{0,1\}$, which takes the value $1$ when $H^*(r)>1$ (for a fixed $r$) using an exhaustive permutation test at step 2 of Algorithm \ref{alg1} and significance level $\omega$ at step 4, and $0$ otherwise. Then, for any random measure: \begin{eqnarray}\label{eq:appD2} P(H^*(r)>1) = \int f(\mathbf{x})\phi'(\text{d}\mathbf{x}). \end{eqnarray} Considering now a CSR measure, by exchangeability of voxel regions, for any $B'$ we have $\phi'(B')=\phi'(\pi(B'))$ for all $\pi\in\mathbb{P}$; where $\mathbb{P}$ is the set of all permutations on $N$ elements, and we let $\pi([x_1,x_2,...,x_N])=[x_{\pi(1)},x_{\pi(2)},...,x_{\pi(N)}]$ and $\pi(B') = \{\mathbf{y}|\exists\mathbf{x}\in B'\;s.t.\; \pi(\mathbf{x})=\mathbf{y}\}$. Hence, now considering the region $R = \{\mathbf{x}\in \mathbb{R}^N|x_1\leq x_2\leq ... \leq x_N\}$, under the assumption that $\phi$ is CSR, we can rewrite Eq.
\ref{eq:appD2} as: \begin{eqnarray}\label{eq:appD3} P(H^*(r)>1) &=& \int [\mathbf{x}\in R] f^*(\mathbf{x})\phi'(\text{d}\mathbf{x}) \nonumber \\ &=& \int [\mathbf{x}\in R] \frac{f^*(\mathbf{x})}{g(\mathbf{x})}\phi''(\text{d}\mathbf{x}), \end{eqnarray} where, \begin{eqnarray}\label{eq:appD4} f^*(\mathbf{x}) &=& \sum_{\mathbf{y}\in\mathbb{P}(\mathbf{x})} f(\mathbf{y}) \nonumber \\ g(\mathbf{x}) &=& |\mathbb{P}(\mathbf{x})| \nonumber \\ \phi''(B') &=& \int_{B'} g(\mathbf{x}) \phi'(\text{d}\mathbf{x}), \end{eqnarray} and we write $\mathbb{P}(\mathbf{x})$ for the set $\{\mathbf{y}|\exists\pi\in\mathbb{P}\;s.t.\; \pi(\mathbf{x})=\mathbf{y}\}$. By definition of $f$ we have that for all $\mathbf{x}$: \begin{eqnarray}\label{eq:appD5} \frac{f^*(\mathbf{x})}{g(\mathbf{x})} &<& \omega. \end{eqnarray} Hence (using $\phi''(R) = 1$, by definition of $g$): \begin{eqnarray}\label{eq:appD6} P(H^*(r)>1) &=& \int [\mathbf{x}\in R] \frac{f^*(\mathbf{x})}{g(\mathbf{x})}\phi''(\text{d}\mathbf{x}) \nonumber \\ &<& \int [\mathbf{x}\in R] (\omega) \phi''(\text{d}\mathbf{x}) \nonumber\\ &=& \omega. \end{eqnarray} \begin{flushright} $\square$ \end{flushright} As mentioned, Prop. 3 sheds further light on the practice of fixing the number of points $N$ during point process simulations as in \cite{ripley_77}. Further, we note that in practice, Monte Carlo sampling is typically required in place of exhaustive permutation in evaluating $H^*$ using Algorithm \ref{alg1}. The main advantage of Prop. 3 is that it sidesteps the issues of choosing a particular CSR measure or null hypothesis class, and provides a justification for methods which do not simulate CSR exactly in the general case.
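Putting the pieces together, $\bar{K}$ of Eq. \ref{eq:KestimatorRM2} can be computed from the voxel grid via the convolution of Eq. \ref{eq:convolution}; the following is an illustrative two-dimensional sketch (assuming NumPy and SciPy's \texttt{fftconvolve}, with unit voxels by default):

```python
import numpy as np
from scipy.signal import fftconvolve

def k_bar(grid, r, l=1.0):
    """Convolution-based estimator of Ripley's K for a voxelized
    sample (d = 2): autocorrelate the grid, sum the result over all
    lags of length <= r, and subtract the zero-lag (self-pair) term."""
    ny, nx = grid.shape
    vol_w = grid.size * l ** 2                   # nu(W)
    lam = grid.sum() / vol_w
    # corr at lag (dy, dx) holds sum_n phi(v_n) * phi(v_{n + (dy, dx)}).
    corr = fftconvolve(grid, grid[::-1, ::-1], mode="full")
    dy = (np.arange(corr.shape[0]) - (ny - 1)) * l
    dx = (np.arange(corr.shape[1]) - (nx - 1)) * l
    dist = np.hypot(dy[:, None], dx[None, :])
    pair_sum = corr[dist <= r].sum()
    c_bar = (grid ** 2).sum() / (lam ** 2 * vol_w)
    return pair_sum / (lam ** 2 * vol_w) - c_bar

g = np.array([[1.0, 0.0], [0.0, 2.0]])           # toy 2 x 2 grid
# k_bar(g, 1.0) is 0: the only occupied voxels are sqrt(2) apart.
```

The FFT makes the pair sum scale as $O(N\log N)$ in the number of voxels, rather than $O(N^2)$ for the explicit double sum.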
\subsection{Results on Synthetic Data: Gamma process, Mark Sum Poisson process, and Poisson process with Gaussian Kernel}\label{sec:results2} We first test our approach on synthetically generated data, where we are interested in determining if the various forms of Algorithm \ref{alg1} can distinguish between data which is known to be completely spatially random and data which is known to contain spatial structure in the form of clustering. For synthetic CSR data, we consider the Gamma process and Mark Sum Poisson process as discussed above, where the techniques used to draw samples from these processes, discussed in the context of Algorithm \ref{alg1}, can likewise be used to generate data for a synthetic test set. We sample $a$ and $b$ uniformly in the intervals $(0\;10]$ and $(0\;2]$ respectively for the Gamma process (see Eq. \ref{eq:gammProc}), and use five marks with the fixed weights $0.25, 0.5, 1, 2, 4$ for the Mark Sum Poisson process (which for convenience we also fix during testing) while sampling $\alpha_m$'s uniformly in the interval $[0.37\;2.7]$ (see Eq. \ref{eq:MSPProc}). We also generate CSR data from a Poisson process with $\lambda=0.1$ (equivalently, a Mark Sum Poisson process with $M=1$, $\alpha=0.1$, $w=1$). For a simple synthetic model with spatial structure, we use a model we describe as a {\em Poisson process with Gaussian Kernels}. This model has two parameters: $\lambda$, the intensity of an underlying Poisson process $\phi_{\Poisson}$, and $\sigma$, the standard deviation of a Gaussian kernel attached to each point in $\phi_{\Poisson}$. As a random measure, the model can be defined as follows: \begin{eqnarray}\label{eq:PPGK} \phi(B) = \sum_{\mathbf{x}\in\phi_{\Poisson}}\int_B \mathcal{N}(\mathbf{y};\mathbf{x},\sigma^2) \text{d}\mathbf{y}, \end{eqnarray} where $\mathcal{N}(.;\mu,\sigma^2)$ is the Gaussian probability density function with mean $\mu$ and diagonal covariance $\sigma^2 I_d$ ($I_d$ being the $d$ dimensional identity matrix).
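As a concrete illustration, Eq. \ref{eq:PPGK} can be simulated on a voxel grid by drawing parent points from a homogeneous Poisson process on a padded region and summing the Gaussian density at each unit-voxel centre. This is a minimal pure-Python sketch; the $3\sigma$ padding and the centre-evaluation approximation for unit voxels are our assumptions:

```python
import math
import random

def poisson_draw(rng, mean):
    """Poisson sample via summed exponential waiting times
    (robust for the moderate means used here)."""
    count, s = 0, 0.0
    while True:
        s += -math.log(1.0 - rng.random())
        if s > mean:
            return count
        count += 1

def sample_ppgk(width=50, height=50, lam=0.1, sigma=2.0, seed=0):
    """Sample the Poisson process with Gaussian kernels on a 2D voxel grid:
    parent points come from a homogeneous Poisson process on a padded
    region, and each voxel centre accumulates the isotropic Gaussian
    density of every parent point."""
    rng = random.Random(seed)
    pad = 3.0 * sigma  # enclosing region, so edge voxels see nearby parents
    n_pts = poisson_draw(rng, lam * (width + 2 * pad) * (height + 2 * pad))
    pts = [(rng.uniform(-pad, width + pad), rng.uniform(-pad, height + pad))
           for _ in range(n_pts)]
    norm = 1.0 / (2.0 * math.pi * sigma ** 2)
    return [[sum(norm * math.exp(-((j + 0.5 - x) ** 2 + (i + 0.5 - y) ** 2)
                                 / (2.0 * sigma ** 2)) for x, y in pts)
             for j in range(width)] for i in range(height)]

grid = sample_ppgk(width=20, height=20, lam=0.1, sigma=2.0)
print(sum(v for row in grid for v in row))  # total measure over the window
```

Larger $\sigma$ spreads each parent's mass over more voxels, producing the smoother, larger-scale clustering used in the experiments below.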
Samples are drawn from this process by first drawing a sample from a Poisson process with intensity $\lambda$ in a large region surrounding the simulation region, and for the $n$'th voxel in the observed area summing across the values $\mathcal{N}(c_n;\mathbf{x},\sigma^2)$, where $c_n$ is the centre of voxel $n$, and $\mathbf{x}$ ranges across the points from the Poisson process in the enclosing region. We fix $\lambda=0.1$, and let $\sigma$ take the values $1,2,3,4,5,10$ to simulate clustering at various length-scales. All processes are simulated on a 2D window of $50\times 50$ pixels, with five examples generated under each setting. Examples from each process are shown in Fig. \ref{fig1}A. Algorithm \ref{alg1} is run in three variations on all synthetic samples, the first two using the restricted Gamma and Mark Sum Poisson process classes respectively to fit and test against in step 2 (option (a)), and the third using the general permutation test (option (c)). Ripley $K$ and clustering index functions are calculated for $r=0...10$ pixels, hence restricting $r$ to relatively small values under which edge correction effects are expected to be minimized, as discussed above. In fitting the Mark Sum Poisson process, 5 iterations of the EM algorithm outlined in the Methods section are used, and 20 simulations/permutations are used to estimate the 5-95th percentile range of $H(r)$ under CSR for each test sample and all algorithm variations ($\omega=0.05$). \begin{figure*}[!t] \begin{center} \includegraphics[width = 0.85\columnwidth]{fig1.png} \end{center} \caption{{\bf Detecting spatial patterning in synthetic data.} {\bf (A)} Simulations of various random processes. 
(i) Sample from a Gamma process (scale is from blue to red for low to high values); (ii) sample from a marked Poisson process (red/green/blue represent marks with different weights); (iii-v) samples from a Poisson process with Gaussian kernels, where the kernel parameter $\sigma=1,2,4$ for images (iii)-(v) respectively (scale as in (i)). (i-ii) are CSR processes, while (iii-v) exhibit clustering at increasing length-scales. {\bf (B)} Clustering index calculated on synthetic data from processes in (A). Horizontal lines at -1 and 1 show 5th and 95th percentiles of CSR simulations after normalization. These are derived from Gamma process simulations (left), Mark Sum Poisson process simulations (centre), and permutations of the test sample (right).} \label{fig1} \end{figure*} Results on synthetic data as described are shown in Fig. \ref{fig1}B. In general, all three versions of the algorithm are able to discriminate correctly between CSR and clustered data. For the Gamma and Mark Sum Poisson process restricted null models, as expected, when the same class of models is used for testing and as null hypothesis, the clustering index stays well within the CSR region (Fig. \ref{fig1}B, left and central graphs). However, the results also show these models to be robust as CSR null hypotheses, and in each case test data from the other model and the simple Poisson process also generally stay within the CSR region. Samples from the Gamma process are marginally more dispersed when tested against the Mark Sum Poisson null model (Fig. \ref{fig1}B, central graph), which may be due to limitations in fitting the Mark Sum Poisson model with only five marks and an EM algorithm which only achieves a local optimum. The permutation test version of the algorithm appears to be the most robust, with all CSR test samples generating clustering index functions tightly located around zero (Fig. \ref{fig1}B, right graph).
With respect to the clustered data generated from the Poisson process with Gaussian Kernels, all three algorithmic variations are generally able to detect that the data is not CSR, with the clustering index breaching the +1 line (the median clustering index is shown for the five replications of each process, Fig. \ref{fig1}B all graphs). The only exception is the Mark Sum Poisson null model, where the clustering index for the $\sigma=10$ process remains within the CSR region, possibly indicating again difficulty in fitting the marginal distributions sufficiently well to distinguish clustering over large distances from CSR (notably the $\sigma=5$ line is also close to CSR for the Mark Sum Poisson null model, Fig. \ref{fig1}B central graph). Both the Gamma process and permutation-based variations detect clustering at all $\sigma$ values. However, we note that the permutation test variation achieves a better separation of the length-scales associated with the different $\sigma$ values: the lower the $\sigma$ value, the earlier the clustering index generally peaks and returns to the CSR region, while in the Gamma process case all functions have similar profiles except for $\sigma=5$ and $10$ (Fig. \ref{fig1}B left and right graphs; similarly, the separation pattern is not strongly observed for the Mark Sum Poisson variation, central graph). This could be due to difficulties in fitting Gamma distributions to the marginals of these processes closely enough to distinguish fine differences in spatial patterning.
In general then, the permutation test variation of Algorithm \ref{alg1} is shown to be robust in its ability to distinguish CSR from spatial patterning over various length-scales, and the variations using restricted classes of CSR null hypotheses are not shown to offer substantial advantages even when the test data come from the matching CSR class (indeed, the broadly similar performance of all algorithmic variations is perhaps surprising, given the restrictions on the null model imposed by the Gamma and Mark Sum Poisson process variations). \subsection{Results on Fluorescent Microscopy Data: Identifying Patterns of mRNA and Protein Localization over Time in a Polarizing Mouse Fibroblast System}\label{sec:results3} We further tested our approach by using it to probe for spatial patterning in the subcellular distributions of mRNAs and proteins in a polarizing mouse fibroblast system. We used high resolution confocal microscopy to generate 3D data specifying individual mRNA positions using Fluorescence In Situ Hybridization (FISH) followed by spot detection, and protein abundance across a grid of voxel positions using Immunofluorescence (IF). Such data thus allows us to test the ability of our algorithm to detect spatial patterning in both point data (mRNAs) and continuous valued data (protein intensities), which can be modeled similarly as random measures. Cells were grown on cross-bow shaped micropatterns in order to standardize cell morphology and internal organelle arrangement (see \cite{thery_06,schauer_10}). The cells were serum-starved for 16 hours prior to micropatterning. Micropatterns were plated with Fibronectin, which causes the cells to begin to polarize following adhesion.
Cells were then fixed in formaldehyde at various times post adhesion, and FISH or IF probes introduced to generate data for the distributions of four mRNAs, {\em Arhgdia}, {\em Pard3}, {\em $\beta$-Actin} and {\em Gapdh}, and their corresponding protein products. {\em Arhgdia} and {\em Pard3} were chosen since they have been shown previously to exhibit significant spatial patterning in fibroblasts at the mRNA and protein levels \cite{mili_08} and at the protein level only \cite{schmoranzer_09}, respectively; {\em $\beta$-Actin} is known to exhibit spatial patterning at mRNA and protein levels in a variety of cell types \cite{buxbaum_14}; and the house-keeping gene {\em Gapdh} is not expected to exhibit strong spatial patterning. We were particularly interested in the case of {\em Pard3}, to investigate whether spatial patterning could be detected at the mRNA level, as had been shown at the protein level previously. FISH data for mRNAs was collected 2, 3, 4 and 5 hours after micropatterning, and IF protein data at 2, 3, 5 and 7 hours, with $\sim$40 cells imaged per mRNA/protein at each of these time-points. Individual micropatterned cells were imaged as separate $z$-stacks (512$\times$512 pixels, 15-25 $z$-levels, with approximately 0.1$\mu$m pixel width and $0.3\mu$m separation between $z$-levels). In addition, IF Tubulin staining was applied to all cells to identify the microtubule cytoskeleton, which enabled a simple cell-volume model to be constructed by identifying a 2D cell boundary and height map, and DAPI staining was applied to identify the nuclear region. Examples of data from each mRNA and protein are shown in Fig. \ref{fig2}A, which have been 2D projected and warped to an average micropattern shape for visualization. Further details on the experimental protocol are provided in Appendix C.
\begin{figure*}[!t] \begin{center} \includegraphics[width = 0.9\columnwidth]{fig2.png} \end{center} \caption{{\bf Detecting spatial patterning in FISH and IF data from polarizing micropatterned mouse fibroblast cells.} {\bf (A)} Left column shows representative mRNA detections from FISH probes targeting transcripts of four genes. Right column shows representative IF distributions for protein products of these transcripts (IF intensity corresponding to protein abundance). Dotted lines show the average micropatterned cell shape and nucleus boundary, and cytoplasmic transcript locations and IF intensities are warped to this average shape for visualization. {\bf (B)} The degree of clustering statistic is calculated for four mRNAs (at 2, 3, 4 and 5 hour time-points), and their protein products (at 2,3,5 and 7 hour time-points). CSR is simulated using a permutation test (left column) or the empirical distribution of {\em Gapdh} (right column). Median values are calculated at each time-point, and cubic splines are fitted after subtracting the mean and normalizing by the standard deviation across time for ease of visualization (each mRNA/protein normalized independently). {\bf (C)} The Pearson correlation coefficient is calculated between all mRNA and protein pairings (including corresponding and non-corresponding pairs) based on the median degree of clustering profiles from (B), matching mRNA 2, 3, 4 and 5 hour time-points to protein 2, 3, 5 and 7 hour time-points respectively. The {\em Gapdh}-based CSR test leads to a significant separation of corresponding pairs (involving an mRNA and its protein product) versus others (one-tailed Mann-Whitney test). {\bf (D)} Peripheral region, as described in text, shown in white. {\bf (E)} Z-scores are calculated for the Pearson Correlation coefficient between the clustering profiles of each pair of corresponding mRNAs and proteins (using the permutation test based degree of clustering). 
The Z-score is calculated by comparing the correlation of a given corresponding pair to the distribution of correlations for all other pairs containing one member (mRNAs or proteins) of the given pair. Shown are the Z-scores calculated using the degree of clustering values for cytoplasmic and peripheral only populations.} \label{fig2} \end{figure*} We applied Algorithm \ref{alg1} to the 3D mRNA point data and to the protein IF intensities, to calculate clustering index functions and the degree of clustering (the area of the clustering index graph above +1, indicating significant clustering) for each cell individually. We applied two forms of the algorithm: in the first, we used the 3D Ripley $K$ estimator, and applied the permutation method at step 2 to generate simulation samples, where permutations are applied to the voxels falling within the cell-volume model; in the second, we projected the points/summed intensities into 2D and applied the Ripley $K$ estimator in 2D, while using the empirical distribution of {\em Gapdh} mRNAs and proteins to simulate CSR in step 2 (independently sampling, at each pixel, a binary value or intensity for mRNAs and proteins respectively from the same pixel in a {\em Gapdh} distribution, and normalizing the resulting sample to sum to the same total as the input sample; this is an empirical variation on option (b) of step 2). 100 simulations/permutations are used to estimate the 5-95th percentile range of $H(r)$ under CSR for each test sample and both algorithm variations ($\omega=0.05$). The profiles for the degree of clustering across time are shown for mRNAs and proteins using both versions of the algorithm in Fig. \ref{fig2}B. Cubic splines are fitted to the profiles for ease of visualization. Similarity in the profiles from the two algorithmic variations is readily apparent.
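The degree of clustering (the area of the clustering index graph above the +1 line) reduces to a simple sum over the evaluated radii; a minimal sketch with hypothetical index values, assuming unit spacing between successive $r$ values:

```python
def degree_of_clustering(h_star, dr=1.0):
    """Area of the clustering-index curve above +1 (rectangle rule);
    zero when the index never breaches the +1 line."""
    return sum(max(0.0, h - 1.0) * dr for h in h_star)

# hypothetical clustering-index values for r = 0..5
print(degree_of_clustering([0.0, 0.5, 2.0, 3.0, 1.5, 0.8]))  # 3.5
```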
Also visually apparent is a similarity between several of the mRNA profiles and the profiles of their protein products, particularly {\em Arhgdia}, {\em Gapdh} and {\em Pard3}. To further investigate the relationship between mRNA and protein profiles, we calculated the Pearson correlation coefficient between pairs of mRNA-protein degree of clustering profiles for each algorithm version (matching mRNA time-points 2, 3, 4 and 5 to protein time-points 2, 3, 5 and 7 respectively), and tested whether the correlations between mRNAs and their corresponding proteins were significantly higher than between randomly matched pairs. We found this was the case for both algorithm variations, although the difference was more pronounced for the second variant ($p=0.039$ versus $p=0.052$, see Fig. \ref{fig2}C). The above suggests that our approach is able to identify significant aspects of spatial patterning in this system. In particular, since the proteins are observed over a longer time period than the mRNAs, the similarities in profile reflect a stretching of this profile in the proteins with respect to the mRNAs. Plausibly, the spatial patterning at the mRNA level acts as a determinant of the patterning at the protein level through processes such as local translation, although this cannot be established directly from our approach. We note also that the more significant relationships observed for the second algorithmic variation may reflect the difficulties in estimating an accurate 3D cell volume model in the first version, which is required to select the voxels to be permuted in the simulations. To probe the spatial patterning of the mRNA-protein pairs further, we compared the clustering profiles of each corresponding pair individually. We were interested also in gaining information about where in the cell clustering was occurring for each pair.
To this end, in addition to the correlations between clustering profiles across the whole cytoplasm as above, we also calculated the correlations between clustering profiles using the mRNA and protein data restricted to a small peripheral region of the cytoplasm (see Fig. \ref{fig2}D), formed from a strip around the boundary of the cell which was 10\% of the radial distance to the nucleus centroid in width (projected across all z-slices), hence reflecting peripheral clustering only. We chose this region since {\em Arhgdia}, {\em Pard3}, and {\em $\beta$-Actin} proteins are known to localize peripherally (as is visually apparent in Fig. \ref{fig2}A), while localized translation at the periphery is known to occur in the cases of {\em Arhgdia} and {\em $\beta$-Actin} \cite{mili_08,buxbaum_14}. In both cases, we used the permutation test version of the algorithm. Fig. \ref{fig2}E compares the individual profile correlations in the cytoplasmic and peripheral populations for each corresponding mRNA-protein pair, by Z-scoring the correlation of each corresponding pair against the correlations of non-corresponding mRNA-protein pairings (see figure legend). We observe that the {\em Gapdh} correlations are only strong when calculated across the whole cytoplasm, but disappear at the periphery. Although we did not expect strong spatial patterning in either case, the correlations observed in the cytoplasm may be due to the {\em Gapdh} mRNAs and proteins forming diffuse large-scale regions of higher concentration in certain directions in response to polarization, which is suggested visually in the protein case (Fig. \ref{fig2}A; hence using {\em Gapdh} as a model of spatial randomness as above can be expected to mask clustering at such scales). In contrast, for both {\em Arhgdia} and {\em Pard3} we observe correlated mRNA-protein clustering dynamics in both the cytoplasmic and peripheral populations.
This is expected in the case of {\em Arhgdia}; however, in the case of {\em Pard3}, clustering at the periphery has previously been demonstrated only at the protein level in a fibroblast system \cite{schmoranzer_09}. Our results suggest that peripheral {\em Pard3} mRNA localization is also important in this system, although as above we cannot directly conclude from our data that local translation establishes the protein localization (since the peripheral mRNA and protein clusters could form independently). Unexpectedly, {\em $\beta$-Actin} does not show significant correlation in the cytoplasmic or peripheral populations. We suggest that, while peripheral local translation may be occurring leading to peripheral protein clustering (as can be seen in Fig. \ref{fig2}A), since the {\em $\beta$-Actin} mRNA is highly dispersed we do not observe direct correlations between mRNA and protein clustering (the Z-score in Fig. \ref{fig2}E is high only if clustering occurs in both mRNAs and proteins and is correlated over time, and does not indicate the strength of independent clustering in mRNAs or proteins). \section{Discussion}\label{sec:discuss} We have described and analysed a general algorithm for calculating the Ripley-$K$ derived clustering index and degree of clustering statistics in the context of arbitrary random measures. Our approach generalizes the point-process-based approach in \cite{lee_13}, while shedding further light on this approach and the conditionality principle noted in \cite{ripley_77} by analysing these statistics as permutation tests in the random measure context, using the exchangeability of elements with identical volume in the context of completely spatially random measures. 
Through studies on synthetic data, we compared variations of the algorithm which explicitly simulate CSR using Gamma process and Mark Sum Poisson processes against the permutation approach, and found all variations were able to discriminate CSR from spatial patterning (clustering) at various length-scales in the data used, with the permutation-based algorithm offering a marginally more robust approach. Tests on fluorescence microscopy data from polarizing fibroblasts showed that the random-measure-based approach was able to identify spatial patterns in subcellular mRNA and protein distributions which were significantly correlated for corresponding mRNAs and protein products, hence suggesting that the patterns uncovered are biologically significant in the system and possibly reflective of mRNA/protein localization mechanisms tied to local translation. We note that the random measure framework provides a useful theoretical context in which to frame problems in the spatial domain which require integration of diverse data-types, as is necessary for instance in spatially resolved omics problems \cite{crosetto_15}. Although we specifically analysed statistics based on the Ripley-$K$ function (providing a convolution-based estimator appropriate for the random measure context, Eq. \ref{eq:KestimatorRM2}), similar permutation-based tests can be applied to arbitrary statistical estimators to detect other kinds of patterning in a random measure context, using the exchangeability properties of CSR measures \cite{jordan_10}. A possible use of such tests is in identifying functionally related genes from their spatial patterning in a given system, as for instance in the studies of \cite{lecuyer_07} and \cite{junker_14}. The application of our approach to a polarizing fibroblast system above provides an example of how random measures may be used in such a context. 
We have concentrated here on deriving statistics from individual distributions which may be correlated to identify functional relationships; however, the problem of identifying dependencies between spatial patterning across multiple distributions may be framed more generally in a random measure context, using for instance cross-correlation measures \cite{stoyan_84}. A further possible application of random measures and related models is in the inference of spatially distributed regulatory networks. Gaussian process (GP) models have already been applied in the context of modeling transcription factor networks, using a theoretical framework which allows models based on deterministic differential equations to be embedded in the GP covariance function for probabilistic inference \cite{lawrence_06}, and are possibly also a suitable model for spatially distributed networks (we note though that GPs are not strictly random measures, since a function drawn from a GP may take negative values with non-zero probability, and hence cannot be treated directly as a density function). Alternatively, dynamic Dirichlet and Gamma process models \cite{rao_09} potentially offer other attractive ways of formulating the general spatial network inference problem. Within this context, random measure based statistics such as those investigated in this study may serve as evidence for or against causal relationships when combined with perturbations (as in \cite{sachs_05}) or single-cell level multiplexed data (as in \cite{snijder_09}). The above problems may therefore benefit directly from the techniques and analysis outlined in the present study, as well as offering a broad set of challenges for future work drawing on similar theoretical approaches. \section*{Appendix A. Derivation of $\hat{K}$ from Stoyan and Ohser's Estimator for the Cross Correlation Measure} Stoyan and Ohser \cite{stoyan_84} consider the case of a stationary {\em weighted random measure}.
A special case of this is a random measure over the space $\mathbb{R}^d\times\mathbb{R}$ for which $P(\phi(B\times U)) = P(\phi((B+t)\times U))$ for any $B\in\mathcal{B}$ and $U\in\mathcal{U}$ (with $\mathcal{U}$ any $\sigma$-algebra over $\mathbb{R}$). In their Eq. 4, they introduce the {\em reduced correlation measure}, $\mathcal{K}_{12}$, which can be used to express the second moment measure of $\phi$: \begin{eqnarray}\label{eq:appB1} \mu^{(2)}(B_1\times U_1 \times B_2 \times U_2) = \lambda(U_1)\lambda(U_2)\int_{B_1}\int \mathbf{1}_{B_2}(x+h)\mathcal{K}_{12}(\text{d}h)\text{d}x, \end{eqnarray} where $\lambda(U_1)$ is the intensity of the stationary random measure $\phi_{U_1}(B)=\phi(B\times U_1)$. Stoyan and Ohser provide the following estimator for $\mathcal{K}_{12}$ in their Eq. 10, which, following their Theorem 1, is shown to be unbiased: \begin{eqnarray}\label{eq:appB2} \hat{\mathcal{K}}_{12}(B) = \frac{1}{\lambda(U_1)\lambda(U_2)}\int\int \frac{\mathbf{1}_{B}(y-x)\mathbf{1}_{W_1}(x)\mathbf{1}_{W_2}(y)}{\nu(W_1\cap (W_2 + x - y))} \mathbf{1}_{U_1}(w_1)\mathbf{1}_{U_2}(w_2) \phi(\text{d}(x,w_1))\phi(\text{d}(y,w_2)). \end{eqnarray} Any random measure over $\mathbb{R}^d$ can be considered a weighted random measure where $\mathcal{U}$ is taken to be the trivial $\sigma$-algebra, $\mathcal{U}=\{\emptyset,\mathbb{R}\}$. Hence, considering the case that $W_1=W_2=W$ and writing $\phi(B)$ for $\phi(B\times \mathbb{R})$, $\lambda$ for $\lambda(\mathbb{R})$, and $\mathcal{K}_{11}$ for what we shall call the {\em reduced autocorrelation measure} (i.e. the special case of $\mathcal{K}_{12}$ where $U_1=U_2=\mathbb{R}$), Eq. \ref{eq:appB2} reduces to: \begin{eqnarray}\label{eq:appB3} \hat{\mathcal{K}}_{11}(B) = \frac{1}{\lambda^2}\int\int \frac{\mathbf{1}_{B}(y-x)\mathbf{1}_{W}(x)\mathbf{1}_{W}(y)}{\nu(W\cap (W + x - y))} \phi(\text{d}x)\phi(\text{d}y).
\end{eqnarray} Following \cite{stoyan_84}, we can express $\mathcal{K}_{11}$ in terms of the Palm distribution $P_o$: \begin{eqnarray}\label{eq:appB4} \mathcal{K}_{11}(B) = (1/\lambda)\mathbb{E}_{P_o}[\phi(B)], \end{eqnarray} where $P_o(Y) = \int g(x) \mathbf{1}_{Y}(\phi+x) \phi(\text{d}x) P(\text{d}\phi)$, with $g(.):\mathbb{R}^d\rightarrow\mathbb{R}^+$ an arbitrary non-negative measurable function integrating to $1$ and $Y\in \mathcal{M}$. Similarly, following \cite{benes_07} (Eq. 2.19) we can express the reduced second moment measure $\mathcal{K}$ as: \begin{eqnarray}\label{eq:appB5} \mathcal{K}(B) &=& (1/\lambda)\mathbb{E}_{P_o}[\phi(B\backslash\{o\})] \nonumber \\ &=& (1/\lambda)\mathbb{E}_{P_o}[\phi(B)] - (1/\lambda)\mathbb{E}_{P_o}[\phi(\{o\})] \nonumber \\ &=& \mathcal{K}_{11}(B) - \mathcal{K}_{11}(\{o\}). \end{eqnarray} The Ripley $K$ function is defined in terms of the reduced second moment measure, giving: \begin{eqnarray}\label{eq:appB6} K(r) &=& \mathcal{K}(B(o,r)) \nonumber\\ &=& \mathcal{K}_{11}(B(o,r)) - \mathcal{K}_{11}(\{o\}). \end{eqnarray} Hence, by applying Eq. \ref{eq:appB3} to each term in Eq. \ref{eq:appB6}, we can form an unbiased estimator for $K$ as discussed in the text: \begin{eqnarray}\label{eq:appB7} \hat{K}(r) &=& \hat{\mathcal{K}}_{11}(B(o,r)) - \hat{\mathcal{K}}_{11}(\{o\}) \nonumber\\ &=& \frac{1}{\lambda^2}\int\int \frac{\mathbf{1}_W(x)\mathbf{1}_W(y)}{\nu(W\cap(W+x-y))} [|x-y|\leq r] \phi(\text{d}x)\phi(\text{d}y) - C, \end{eqnarray} where $C = (\int_W \phi(\{x\}) \phi(\text{d}x))/(\lambda^2\nu(W))$. \section*{Appendix B. EM Algorithm to fit Weighted Sum of Poisson Distributions} Following Eq. \ref{eq:MSPProc}, we can define a weighted sum of Poisson distributions with components $m=1...M$ having weights $w_1 ...
w_M$ and means $\alpha_1...\alpha_M$ by the distribution: \begin{eqnarray}\label{eq:MSPProcEM} P(x\in R) = \sum_{\mathbf{n}\in(\mathbb{N}\cup\{0\})^M}[(\sum_m w_m n_m)\in R]\cdot \prod_m\Poisson(n_m|\alpha_m), \end{eqnarray} where $[A]$ is the Iverson bracket, which is 1 when $A$ is true and 0 otherwise. We can reexpress this distribution in a form involving latent variables $Z_1...Z_M$: \begin{eqnarray}\label{eq:latvar} Z_m &\sim& \Poisson(.;\alpha_m) \nonumber \\ X &=& \sum_m w_m Z_m, \end{eqnarray} or equivalently: \begin{eqnarray}\label{eq:latvar2} P(x) &=& \sum_{\mathbf{z}} P(\mathbf{z})[x = \sum_m w_m z_m] \nonumber \\ P(\mathbf{z}) &=& \prod_m \Poisson(z_m;\alpha_m) \end{eqnarray} where $\mathbf{z}=[z_1,z_2,...,z_M]$. The EM algorithm can be applied to fit distributions involving latent variables, and maximizes the log-likelihood (to a local optimum) by introducing an auxiliary distribution $q$ over the latent variables, and alternately minimizing the KL-divergence between $q(\mathbf{Z})$ and the true posterior distribution over the latent variables $p(\mathbf{Z}|\mathbf{X})$ (E-step), and maximizing the following lower-bound on the log-likelihood (M-step) (see \cite{bishop_06,dempster_77}): \begin{eqnarray}\label{eq:EM} \mathcal{L}(q,\theta) = \sum_{\mathbf{Z}} q(\mathbf{Z}) \log \left(\frac{p(\mathbf{X},\mathbf{Z}|\theta)}{q(\mathbf{Z})} \right), \end{eqnarray} where $\theta$ are the distribution parameters, and $\mathbf{X}$, $\mathbf{Z}$ are the observed data and a fixed setting of the latent variables respectively. We will write $x_i$ for the $i$'th data point, and $z_{i,m}$ for the $m$'th latent variable associated with the $i$'th data point, for data points $i=1...N$. For the weighted sum of Poisson distributions, we assume that the weights $w_1...w_M$ are fixed in advance, and hence we optimize $\theta = \{\alpha_1...\alpha_M\}$.
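The generative form of Eq. \ref{eq:latvar} can be sampled directly, which is how simulation samples are drawn for this CSR class; a minimal sketch using the fixed weights from the synthetic experiments (the component means here are hypothetical, and the Knuth-style Poisson sampler is adequate for the small means involved):

```python
import math
import random

def poisson_knuth(rng, lam):
    """Poisson draw by Knuth's product-of-uniforms method
    (fine for small means such as those used here)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sample_mark_sum(weights, alphas, rng):
    """Draw X = sum_m w_m * Z_m with Z_m ~ Poisson(alpha_m)."""
    return sum(w * poisson_knuth(rng, a) for w, a in zip(weights, alphas))

rng = random.Random(0)
weights = [0.25, 0.5, 1.0, 2.0, 4.0]  # fixed weights from the experiments
alphas = [1.0] * 5                    # hypothetical component means
draws = [sample_mark_sum(weights, alphas, rng) for _ in range(5000)]
print(sum(draws) / len(draws))  # close to sum_m w_m * alpha_m = 7.75
```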
Further, since the distribution only places positive probability on values which can be expressed in the form $\sum_m w_m n_m$ for $n_m\in\mathbb{N}\cup\{0\}$, we assume that the values $x_i$ have been rounded to the nearest such value. \vspace{0.3cm} \noindent\textbf{E-step.} The KL-divergence between $q(\mathbf{Z})$ and $p(\mathbf{Z})$ can be minimized by calculating the posterior distribution of $\mathbf{z}_i$ for each data point given the current parameter settings. Writing $\gamma(i,\mathbf{n})$ for the posterior $P(\mathbf{z}_i=\mathbf{n}|\alpha'_1...\alpha'_M)$, where $\alpha'_m$ is the current value of $\alpha_m$, these can be calculated as: \begin{eqnarray}\label{eq:Estep} \gamma(i,\mathbf{n}) = \frac{[x_i = \sum_m w_m n_m]\prod_m \Poisson(n_m;\alpha'_m)}{Z}, \end{eqnarray} where $Z$ is a normalizing constant. Since for each data point there are only finitely many latent-variable settings for which $\gamma(i,\mathbf{n})$ is non-zero (since any setting for which $w_m n_m > x_i$ for some $m$ will be zero), the values $\gamma(i,\mathbf{n})$ can be found for all $\mathbf{n}$ by explicit calculation. For initialization, we set $\gamma(i,\mathbf{n})=1$ for an arbitrary $\mathbf{n}$ such that $x_i = \sum_m w_m n_m$. $q$ is found by setting $q(\mathbf{Z})=\prod_i\gamma(i,\mathbf{z}_i)$. \vspace{0.3cm} \noindent\textbf{M-step.} Substituting Eqs. \ref{eq:latvar2} and \ref{eq:Estep} into Eq. \ref{eq:EM} yields: \begin{eqnarray}\label{eq:Mstep1} \mathcal{L}(q,\mathbf{\alpha}') &=& K + \sum_{i,m,\mathbf{n}} \gamma(i,\mathbf{n}) \log(\Poisson(n_m|\alpha'_m)) \nonumber \\ &=& K + \sum_{i,m,\mathbf{n}} \gamma(i,\mathbf{n}) \log\left(\frac{(\alpha'_m)^{n_m}}{n_m!}\exp(-\alpha'_m)\right), \end{eqnarray} where $K$ is the entropy of $q$. Eq. \ref{eq:Mstep1} can be seen to break into separate collections of terms involving each of the $\alpha'_m$'s.
Differentiating with respect to $\alpha'_m$ and setting to zero yields the update: \begin{eqnarray}\label{eq:Mstep2} \alpha'_m = \frac{\sum_{i,\mathbf{n}}\gamma(i,\mathbf{n})n_m}{N}. \end{eqnarray} \section*{Appendix C. Experimental Methods for Fluorescent Microscopy Data} NIH/3T3 mouse fibroblast cells were serum-starved for 16 hours prior to seeding on Fibronectin crossbow micropatterned surfaces (individual micropatterns approximately 25$\mu$m in height and width). The cells were allowed to grow for various lengths of time (2, 3, 4, 5 and 7 hours) before fixing in formaldehyde, permeabilization, and hybridization of probes. RNA FISH probes were designed and applied using the method of \cite{raj_08}, which targets multiple 20-mer oligonucleotide probes to each mRNA. Rabbit polyclonal anti-Arhgdia, anti-Gapdh, anti-$\beta$-Actin and anti-Par3 antibodies were used for the IF staining (Santa Cruz and Abcam). Rat monoclonal anti-tubulin antibody (Abcam) was used for tubulin staining in all cells, along with DAPI for nuclear staining. Images were captured on a spinning disk confocal Revolution XD system (Andor). Each cell was imaged as an individual $z$-stack, with each image comprising 512$\times$512 pixels, 15-25 $z$-levels, and approximately 0.1$\mu$m pixel width and $0.3\mu$m separation between $z$-levels. Background subtraction was applied to all images using ImageJ (IF and FISH), and spot detection was performed to determine mRNA positions from the FISH $z$-stacks using the method of \cite{olivo_02}. 2D segmentation of the nucleus region was performed by max-projecting the DAPI $z$-stacks, thresholding the resulting images, and applying image dilation to the binary masks. 2D segmentation of cellular regions was performed similarly by max-projection, thresholding and dilating the tubulin IF $z$-stacks.
To estimate a height map across the cellular region (to construct a 3D cellular model), we first estimated the base $z$-level of the cell to be the level with the maximum total tubulin intensity (cells adhere to micropatterned regions on a 2D surface, and thus achieve greatest spread at their base). We then searched at each 2D location for the $z$-level with the maximum tubulin intensity above the base level, which we observed empirically to provide a reliable indicator of the cell boundary. The final height map was formed by smoothing the resulting surface with a 3$\times$3 box filter.
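This height-map estimation can be sketched compactly with NumPy. The sketch is our own minimal reconstruction (the function name is hypothetical, and we search from the base level upwards), not the authors' implementation:

```python
import numpy as np

def estimate_height_map(tubulin_stack, smooth=True):
    """Height map from a tubulin IF z-stack of shape (z, y, x).

    Base z-level: the level with maximum total intensity. At each (y, x)
    the height is the z-level of maximal intensity at or above the base.
    """
    base = int(np.argmax(tubulin_stack.sum(axis=(1, 2))))
    above = tubulin_stack[base:]                 # restrict to levels >= base
    height = base + np.argmax(above, axis=0)     # per-pixel argmax z-level
    if smooth:
        # 3x3 box filter via a padded neighborhood average
        padded = np.pad(height.astype(float), 1, mode="edge")
        height = sum(padded[dy:dy + height.shape[0], dx:dx + height.shape[1]]
                     for dy in range(3) for dx in range(3)) / 9.0
    return height
```

On a toy $3\times2\times2$ stack whose base level carries the most intensity, the unsmoothed map simply reports each pixel's brightest level.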
\section{Introduction} The notion of quasiperiodicity we consider in this paper is the one introduced in the area of Text Algorithms by Apostolico and Ehrenfeucht \cite{ApostolicoEhrenfeucht1993TCS} in the following way: ``a string $w$ is quasiperiodic if there is a second string $u \neq w$ such that every position of $w$ falls within some occurrence of $u$ in $w$''. In 1994, Marcus extended this notion to right infinite words and raised six questions. Four of them were answered in \cite{LeveRichomme2004BEATCS} (see also \cite{MarcusMonteil2004arxiv}). In particular, we proved the existence of a Sturmian word which is not quasiperiodic. In \cite{LeveRichomme2007TCS}, we proved that a Sturmian word is not quasiperiodic if and only if it is an infinite Lyndon word. The proof of this result was based on the $S$-adicity of Sturmian words (Sturmian words form a family of non-periodic words that can be infinitely decomposed over four basic morphisms -- see \cite{BerstelSeebold2002Lothaire} for more properties of Sturmian words) and on a characterization of morphisms that preserve Lyndon words \cite{Richomme2003BBMS}. In \cite{LeveRichomme2007TCS}, we introduced strongly quasiperiodic morphisms as those morphisms that map all infinite words to quasiperiodic ones, and weakly quasiperiodic morphisms as those that map at least one non-quasiperiodic word to a quasiperiodic one. We characterized Sturmian morphisms that are strongly quasiperiodic and those that are not weakly quasiperiodic. With Glen \cite{GlenLeveRichomme2008TCS}, these results were extended to the class of episturmian words. All quasiperiodic episturmian words were characterized (unlike in the Sturmian case, they do not correspond to infinite episturmian Lyndon words). Two proofs were provided for this result.
The first one used connections between quasiperiodicity and return words, the second one used $S$-adic decompositions of episturmian words, and a characterization of episturmian morphisms that are strongly quasiperiodic on infinite words. Observe that strongly and weakly quasiperiodic morphisms were considered in the context of infinite words. In this paper we also consider these morphisms with respect to finite words. After basic definitions (Sect.~\ref{sec:def}), in Sect.~\ref{sec:relations}, we study existing relations between the four so-defined families of morphisms. Algorithms to check if a morphism is strongly quasiperiodic are provided in Sects.~\ref{sec:decision_wqf} and \ref{sec:decision_wqi}. In Sect.~\ref{sec:weakly}, we provide sufficient conditions for a morphism to be weakly quasiperiodic on infinite words. \section{Quasiperiodic Words and Morphisms\label{sec:def}} We assume readers are familiar with combinatorics on words, morphisms and automata (see for instance \cite{Lothaire2002book}). We let $\varepsilon$ denote the empty word, $|w|$ denote the length of a word $w$, and $|w|_a$ denote the number of occurrences of a letter $a$ in $w$. Let us recall that, if some words $w$, $u$, $p$ and $s$ verify $w = ups$, then $p$ is called a \textit{prefix} of $w$, $s$ a \textit{suffix} of $w$ and $u$ a \textit{factor} of $w$. A factor, prefix or suffix is said to be \textit{proper} if it differs from the whole word. An \textit{internal} factor of a word is an occurrence of a factor that is neither a prefix nor a suffix occurrence. For a word $u$ and an integer $k$, $u^k$ denotes the word obtained by concatenating $k$ copies of $u$ and $u^\omega$ denotes the infinite periodic word obtained by concatenating infinitely many copies of $u$. Given a non-empty word $q$, $q$-quasiperiodic words (or strings) are defined in the introduction.
Equivalently, a finite word $w$ is $q$-\textit{quasiperiodic} if $w \neq q$ and there exist words $p$, $s$ and $u$ such that $w = qu$, $q = ps$, $p \neq \varepsilon$, and $su = q$ or $su$ is a $q$-quasiperiodic word. The word $q$ is called a \textit{quasiperiod} of $w$. It is called \textit{the quasiperiod} of $w$ if $w$ has no smaller quasiperiod. For instance, the word $w=ababaabababaabababa$ is $aba$-quasiperiodic and $ababa$-quasiperiodic. The word $aba$ is the quasiperiod of $w$. A word $w$ is said to be \textit{quasiperiodic} if it is $q$-quasiperiodic for some word $q$. Otherwise $w$ is called \textit{superprimitive}. The quasiperiod of any quasiperiodic word $w$ is superprimitive. The definition of quasiperiodicity extends naturally to infinite words. Let us recall that a morphism $f$ is a mapping on words such that for all words $u$ and $v$, $f(uv) = f(u)f(v)$. Such a morphism is defined by the images of the letters. A well-known morphism is the Fibonacci morphism $\varphi$ defined by $\varphi(a) = ab$, $\varphi(b)=a$. In~\cite{LeveRichomme2004BEATCS}, we proved that the infinite Fibonacci word, the fixed point of $\varphi$, has infinitely many quasiperiods that are superprimitive. The first ones are $aba$, $abaab$, $abaababaa$. Notice that from now on, we will only consider non-erasing morphisms (images of non-empty words differ from the empty word). As mentioned in the introduction, \textit{strongly quasiperiodic on infinite words morphisms} were introduced as a tool to study quasiperiodicity of some infinite words. They are the morphisms that map any infinite word to a quasiperiodic infinite word. Also introduced were \textit{weakly quasiperiodic on infinite words morphisms}, which map at least one non-quasiperiodic infinite word to a quasiperiodic one. Examples are provided in the next section.
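To make the definition concrete on finite words, here is a small sketch (our own illustration; the function names are ours) that checks $q$-quasiperiodicity by marking the positions covered by occurrences of $q$, and lists all quasiperiods of a word. Since any quasiperiod of $w$ is both a prefix and a suffix of $w$, it suffices to test the proper prefixes of $w$:

```python
def is_q_quasiperiodic(w, q):
    """True if w != q and every position of w lies in an occurrence of q."""
    if w == q or not q:
        return False
    covered = [False] * len(w)
    start = w.find(q)
    while start != -1:               # mark all (possibly overlapping) occurrences
        for i in range(start, start + len(q)):
            covered[i] = True
        start = w.find(q, start + 1)
    return all(covered)

def quasiperiods(w):
    # All quasiperiods of w, shortest first (quasiperiods are prefixes of w)
    return [w[:k] for k in range(1, len(w)) if is_q_quasiperiodic(w, w[:k])]
```

Applied to the example word above, the shortest quasiperiod found is $aba$, matching the text.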
It is interesting to observe that a morphism that is not weakly quasiperiodic on infinite words could be called a quasiperiodic-free morphism as it maps any non-quasiperiodic infinite word to another non-quasiperiodic word. This allows us to relate the current study to the stream of works around power-free morphisms. In this context, it is natural to consider the previous notions on finite words. Thus in this paper, we will also consider \textit{strongly quasiperiodic on finite words morphisms} that map any finite word to a quasiperiodic word, and \textit{weakly quasiperiodic on finite words morphisms} that map at least one finite non-quasiperiodic word to a quasiperiodic word. \section{\label{sec:relations}Relations} In this section, we show that the basic relations between the different families of morphisms are the ones described in Fig.~\ref{fig:basic_relations}. \begin{figure} $$\xymatrix{ & \fbox{\begin{minipage}{3cm}\center Strongly QP\\on finite words\end{minipage}} \ar[dl]\ar[dr]& \\ \fbox{\begin{minipage}{3cm}\center Weakly QP\\on finite words\end{minipage}} & & \fbox{\begin{minipage}{3cm}\center Strongly QP\\on infinite words\end{minipage}}\ar[d]\\ && \fbox{\begin{minipage}{3cm}\center Weakly QP\\on infinite words\end{minipage}}\\ }$$ \caption{Basic relations\label{fig:basic_relations}} \end{figure} Let us first observe that it follows from the definitions that any strongly quasiperiodic on finite (resp. infinite) words morphism is also a weakly quasiperiodic on finite (resp. infinite) words morphism. The next result proves the last relation of Fig.~\mbox{\ref{fig:basic_relations}}. Its proof uses Lemma~\mbox{\ref{L:2}}. \begin{proposition} \label{P:1} Any strongly quasiperiodic on finite words morphism is strong\-ly quasiperiodic on infinite words. \end{proposition} \begin{lemma} \label{L:2} Let $f$ be a morphism. Assume the existence of two words $u$ and $v$ and of an integer $k$ such that $|f(u)^k| \geq |f(v)|$.
If $f(u)$ and $f(u^k v u^k)$ are quasiperiodic, then their quasiperiods are equal. \end{lemma} \begin{proof} Let $q_u$ be the quasiperiod of $f(u)$ and let $q$ be the quasiperiod of the word $f(u^kvu^k)$. If $|q|< |q_u|$, then $q$ is a prefix and a suffix of $q_u$ and as $f(u)$ is a factor of a $q$-quasiperiodic word, it is also $q$-quasiperiodic (we have $f(u) \neq q$ for length reasons). This contradicts the fact that, by definition, $q_u$ is the smallest quasiperiod of $f(u)$. So $|q_u| \leq |q|$. Assume $|q| \geq 2|f(u^k)|$. So by choice of $k$, $|q| \geq |f(u^k)|+|f(v)|$. This implies that the prefix occurrence of $q$ in $f(u^kvu^k)$ overlaps the suffix occurrence. More precisely $q = q_1 q_2 = q_2 q_3$ with $|q_1q_2| \geq 2|f(u^k)|$ and $|q_1| = |q_3| \leq |f(u^k)|$: we have $|q_2| \geq |q_1|$. By a classical result (see \cite[Lem. 1.3.4]{Lothaire1983book}), there exist words $x$ and $y$ with $xy \neq \varepsilon$ and an integer $\ell$ such that $q_1 = xy$, $q_2 = (xy)^\ell x$ and $q_3 = yx$. For length reasons, $\ell \neq 0$, so that $q$ is $xyx$-quasiperiodic. This contradicts the fact that $q$ is superprimitive. Thus $|q| < 2|f(u^k)|$. As $q$ is both prefix- and suffix-comparable with $f(u^k)$ which is $q_u$-quasiperiodic, as $|q_u| \leq |q|$, and as $q$ is superprimitive, $q = q_u$. \end{proof} \begin{proof2}{Proposition~\ref{P:1}} Assume $f$ is strongly quasiperiodic on finite words. Let $\alpha$ be a letter and let $q_\alpha$ be the quasiperiod of $f(\alpha)$. By Lemma~\ref{L:2}, for any word $u$, there exists an integer $k$ such that $f(\alpha^ku\alpha^k)$ is $q_\alpha$-quasiperiodic. This implies that, for any word $u$, $f(\alpha u)$ is a prefix of a $q_\alpha$-quasiperiodic word.
Equivalently, for any infinite word $\mathbf{w}$, $f(\alpha \mathbf{w})$ is a $q_\alpha$-quasiperiodic word.\qed \end{proof2} \medskip The converse of Proposition~\mbox{\ref{P:1}} does not hold: it is easy to find a morphism that is strongly quasiperiodic on infinite words but not on finite words. Just look at the morphism that maps $a$ to $aa$ and $b$ to $a$, or at the next example of a morphism that is strongly quasiperiodic on infinite words but not weakly quasiperiodic on finite words. \begin{example}\rm Let $f$ be the morphism defined on $\{a, b\}^*$ by \begin{quote} $f(a) = abaababaababababaab$\\ $f(b) = abaabaabababababaab$. \end{quote} It is straightforward that $f(\mathbf{w})$ is $aba$-quasiperiodic for any infinite word $\mathbf{w}$. Let us prove that $f$ is not weakly quasiperiodic on finite words. Assume by contradiction the existence of a non-quasiperiodic word $u$ such that $f(u)$ is quasiperiodic. Observe $u \neq a$, $u \neq b$ and the quasiperiod of $u$ ends with $ab$. An exhaustive verification allows one to see that no proper prefix of $f(a)$ nor of $f(b)$ could be a quasiperiod of $f(u)$. Hence $f(a)$ or $f(b)$ is a prefix of the quasiperiod $q$ of $f(u)$. Observing that this implies $|q| \geq |f(a)| = |f(b)|$, we deduce that $f(a)$ or $f(b)$ is a suffix of $q$. As $f(a)$ and $f(b)$ are not internal factors of $f(aa)$, $f(ab)$, $f(ba)$, $f(bb)$, $q = f(q')$ for some word $q'$. Moreover $u$ is $q'$-quasiperiodic, a contradiction. \end{example} \medskip The next examples show that the other converses of the relations presented in Fig.~\ref{fig:basic_relations} are false. \begin{example} The morphism that maps $a$ to $aa$ and $b$ to $bb$ is weakly quasiperiodic on finite words (as $f(a)$ is quasiperiodic), but we let readers verify that it is not weakly quasiperiodic on infinite words. Thus $f$ is not strongly quasiperiodic on infinite words and, as a consequence of Proposition~\ref{P:1}, it is not strongly quasiperiodic on finite words.
\end{example} \begin{example} \label{ex:3.5} The morphism $f$ defined by $f(a) = ba$ and $f(b) = bba$ is weakly quasiperiodic on infinite words since, for every word $w \in a\{a,b\}^\omega$, $f(w)$ is $bab$-quasiperiodic. But $f(ba^\omega)=bb(ab)^\omega$ is not quasiperiodic, and so $f$ is not strongly quasiperiodic on infinite words. By Proposition~\ref{P:1}, $f$ is not strongly quasiperiodic on finite words. \end{example} \section{\label{sec:decision_wqf}Deciding Strong Quasiperiodicity on Finite Words} The next lemma, which is a direct consequence of Lemma~\ref{L:2}, is the key observation to decide whether a morphism is strongly quasiperiodic on finite words. \begin{lemma} \label{L:borne_sqf} If $f$ is a strongly quasiperiodic on finite words morphism, then for any word $u$ and any letter $\alpha$, the quasiperiod of $f(u)$ is a factor of $f(\alpha^3)$ of length less than $2|f(\alpha)|$. \end{lemma} \begin{proof} Assume $f$ is strongly quasiperiodic on finite words. Let $u$ be a word and let $q_u$ be the quasiperiod of $f(u)$. Let $i$ be an integer such that $|f(\alpha^i)| \geq 2|q_u|$ ($|f(\alpha)| \neq 0$ as $f(\alpha)$ is quasiperiodic). Let $k$ be an integer such that $|f(u^k)| \geq |f(\alpha^i)|$. By Lemma~\ref{L:2}, the quasiperiod of $f(u^k\alpha^i u^k)$ is $q_u$. As $|f(\alpha)^i| \geq 2|q_u|$, $q_u$ must be a factor of $f(\alpha)^i$. As $q_u$ is superprimitive, $|q_u| < 2|f(\alpha)|$. Consequently $q_u$ is a factor of $f(\alpha)^3$. \end{proof} Observe now that, given two words $u$ and $q$, it follows from the definition of quasiperiodicity that the $q$-quasiperiodicity of $f(u)$ implies that, for each non-empty proper prefix $\pi$ of $f(u)$, $\pi = xps$ with $xp = \varepsilon$, $xp = q$ or $xp$ is the longest $q$-quasiperiodic prefix of $\pi$ if $|\pi|>|q|$, and $ps$ a prefix of $q$.
Based on this remark, we introduce an automaton that recognizes the words $u$ such that $f(u)$ is $q$-quasiperiodic (or equals $q$ or the empty word $\varepsilon$), for a given word $q$ and a given morphism $f$. Note that a quasiperiod may have several borders, that is, proper suffixes that are also prefixes. For instance, the word $q = abacaba$ has $\varepsilon$, $a$ and $aba$ as borders. Thus, while processing the automaton, one cannot determine precisely which word $p$ of the previous observation applies until the next letters are read. Therefore the constructed automaton just remembers (instead of the initial $p$) the longest suffix $p$ of $\pi$ such that $ps$ is a prefix of $q$. \begin{definition} \label{D:automate1} Let $f$ be a morphism over $A^*$ and $q$ be a non-empty word. We denote ${\cal A}_q(f)$, or simply ${\cal A}_q$, the automaton $(A, Q, i, F, \Delta)$ where: \begin{itemize} \item the states, the elements of $Q$, are the couples $(p, s)$ such that $ps$ is a proper prefix of $q$; \item the initial state $i$ is the couple $(\varepsilon, \varepsilon)$; \item the final states, the elements of $F$, are the couples of the form $(p, \varepsilon)$, with $p$ a prefix of $q$; \item the transitions, the elements of $\Delta$, are triples $((p_1, s_1), a, (p_2, s_2))$ where $(p_1, s_1) \in Q$, $(p_2, s_2) \in Q$ and one of the two following situations holds: \begin{enumerate} \item If $q$ does not occur in $p_1s_1f(a)$ and $|q| > |s_1f(a)|$, then \begin{itemize} \item $s_1f(a)=s_2$, \item $p_2$ is the longest suffix of $p_1$ such that $p_2s_1f(a)$ is a proper prefix of $q$. \end{itemize} \item If $q$ occurs in $p_1s_1f(a)$ \begin{itemize} \item there exist a suffix $x$ of $p_1$ and a word $y$ such that $xs_1f(a) = ys_2$ with $y = q$ or $y$ is $q$-quasiperiodic, \item $p_2$ is the longest suffix of $y$ such that $p_2s_2$ is a proper prefix of $q$.
\end{itemize} \end{enumerate} \end{itemize} \end{definition} The automaton defined in the previous definition is deterministic. It should be emphasized that given a state $(p, s)$ and a letter $a$, there may not exist a state $(p', s')$ such that a transition $((p,s), a, (p',s'))$ exists. We let readers verify the next observation and its corollary. \begin{fact} \label{F:fonctionnement_automate} A state $(p,s)$ in ${\cal A}_q$ is reached by reading a word $u$ if and only if there exists a word $\pi$ such that $f(u) = \pi p s$ with $\pi p = \varepsilon$, $\pi p = q$ or $\pi p$ a $q$-quasiperiodic word, and $ps$ is the longest prefix of $q$ that is a suffix of $f(u)$. \end{fact} \begin{lemma} \label{L:automate} A word $u$ is recognized by ${\cal A}_q$ if and only if $f(u) = \varepsilon$ or $f(u) = q$ or $f(u)$ is $q$-quasiperiodic. \end{lemma} Let us give some examples of automata following the previous definition. Notice that we just construct the states that are accessible from $(\varepsilon, \varepsilon)$. \begin{example} \label{ex:aut1} Let $f$ be the morphism defined by $f(a) = ab$, $f(b) = aba$. The automaton ${\cal A}_{aba}$ is the following one. $$\xymatrix{ & & (\varepsilon, ab)\ar[dr]^a\ar@/_/[dd]_b & \\ \ar[r]&\ar[d](\varepsilon, \varepsilon)\ar[ur]^a\ar[dr]^b && (a, b)\ar[dl]_b\ar@(ru,r)[]^a\\ && (a, \varepsilon)\ar@/_/[uu]_a\ar[d]\ar@(rd,r)[]_b \\ &&&\\ }$$ \end{example} \begin{example} \label{ex:aut2} Let $f$ be the morphism defined by $f(a) = abaaba$, $f(b) = baabaaba$. Here follow automata ${\cal A}_{aba}$ and ${\cal A}_{baaba}$.
\hfill\begin{minipage}{5cm} $$\xymatrix{ &&&\\ \ar[r]&\ar[u](\varepsilon, \varepsilon)\ar[r]^a & (a, \varepsilon)\ar[u]\ar@(ru,r)[]^{a,b}\\ }$$ \end{minipage}\hfill \begin{minipage}{5cm} $$\xymatrix{ &&&\\ \ar[r]&\ar[u](\varepsilon, \varepsilon)\ar[r]^b & (ba, \varepsilon)\ar[u]\ar@(ru,r)[]^{a,b}\\ }$$ \end{minipage}\hfill \end{example} \begin{example} \label{ex:aut3} Let $f$ be the morphism defined by $f(a) = aabaab$, $f(b) = aabaaaba$ and $f(c) = aabaababaabaa$. Here follows automaton ${\cal A}_{aabaa}$. $$\xymatrix{ & & (aa, b)\ar@/_/[dd]_b\ar@(ru,r)[]^a \\ \ar[r]&\ar[d](\varepsilon, \varepsilon)\ar[ur]^a\ar[dr]^b \\ && (a, aba)\ar@/_/[uu]_a\ar[d]\ar@(ru,r)[]^b \\ &&&\\ }$$ \end{example} Let ${\cal Q}(f)$ be the set of all words $q$ such that, for all letters $\alpha$ in $A$, $|q| \leq 2|f(\alpha)|$ and $q$ is a factor of $f(\alpha)^3$. By Lemma~\ref{L:borne_sqf}, ${\cal Q}(f)$ is the set of all possible quasiperiods of a word of the form $f(u)$. Thus Lemma~\ref{L:automate} implies the next characterization of strongly quasiperiodic morphisms on finite words. \begin{proposition} \label{P:strong_qp_on_finite} A morphism $f$ is strongly quasiperiodic on finite words if and only if, for each letter $\alpha$, the word $f(\alpha)$ is quasiperiodic, and \[ A^* = \bigcup_{q \in {\cal Q}(f)} {\cal L}({\cal A}_q) \] where ${\cal L}({\cal A}_q)$ is the language recognized by the automaton ${\cal A}_q$. \end{proposition} As ${\cal Q}(f)$ is finite, and as it is decidable whether a finite word is quasiperiodic~\cite{ApostolicoEhrenfeucht1993TCS,BrodalPedersen2000,IliopoulosMouchard1999a} (see also \cite{GroultRichomme2010TCS} for optimality of the complexity of these algorithms), we can conclude. \begin{corollary} It is decidable whether a morphism is strongly quasiperiodic on finite words. \end{corollary} To end this section, let us illustrate Proposition~\ref{P:strong_qp_on_finite}.
If $f$ is the morphism considered in Example~\ref{ex:aut2} ($f(a) = abaaba$, $f(b) = baabaaba$), as $aba$ and $baaba$ belong to ${\cal Q}(f)$, as ${\cal L}({\cal A}_{aba}) = \{\varepsilon\} \cup a \{a, b\}^*$ and ${\cal L}({\cal A}_{baaba}) = \{\varepsilon\} \cup b \{a, b\}^*$, and as $f(a)$ and $f(b)$ are quasiperiodic, we can conclude by Proposition~\ref{P:strong_qp_on_finite} that $f$ is strongly quasiperiodic on finite words. Now consider the morphism defined by $f(a) = ab$, $f(b) = aba$. We have ${\cal Q}(f) = \{a, b, ab, ba, aba\}$. By Example~\ref{ex:aut1}, ${\cal L}({\cal A}_{aba}) = \{\varepsilon\} \cup \{a, b\}^*b$. We let readers verify that ${\cal L}({\cal A}_{a}) = {\cal L}({\cal A}_{b}) = {\cal L}({\cal A}_{ba}) = \emptyset$ and ${\cal L}({\cal A}_{ab}) = a^*$. Thus $f$ is not strongly quasiperiodic on finite words. As the set ${\cal L}({\cal A}_{aba})$ contains non-quasiperiodic words, this morphism $f$ is weakly quasiperiodic on finite words. \section{\label{sec:decision_wqi}Deciding Strong Quasiperiodicity on Infinite Words} We now show how to adapt the ideas of the previous section to the study of strongly quasiperiodic on infinite words morphisms. First we adapt Lemma~\ref{L:borne_sqf}. \begin{lemma} \label{L:borne_sqi} If $f$ is a strongly quasiperiodic on infinite words morphism, then for any infinite word $\mathbf{w}$ and any letter $\alpha$, the quasiperiod of $f(\mathbf{w})$ is a factor of $f(\alpha^3)$ of length less than $2|f(\alpha)|$; in particular, it belongs to ${\cal Q}(f)$. \end{lemma} This result is a consequence of the next one, whose proof is similar to that of Lemma~\ref{L:borne_sqf} (without the need of Lemma~\ref{L:2}). \begin{lemma} \label{L:borne_sqi_etape} If $f$ is a strongly quasiperiodic on infinite words morphism, then for any word $u$ and any letter $\alpha$, the quasiperiod of $f(u\alpha^\omega)$ is a factor of $f(\alpha^3)$ of length less than $2|f(\alpha)|$. \end{lemma} \begin{proof2}{Lemma~\ref{L:borne_sqi}} Let $f$ be a strongly quasiperiodic on infinite words morphism.
Let $\mathbf{w}$ be an infinite word and let $\alpha$ be a letter. With each prefix $p$ of $\mathbf{w}$, by Lemma~\ref{L:borne_sqi_etape}, one can associate a factor $q_p$ of $f(\alpha^3)$ such that $f(p\alpha^\omega)$ is $q_p$-quasiperiodic. As the set of factors of $f(\alpha^3)$ is finite, there exists one, say $q$, which is associated with infinitely many prefixes of $\mathbf{w}$. This implies that $f(\mathbf{w})$ is $q$-quasiperiodic. \qed \end{proof2} Now we adapt the automaton used in the previous section in order to have a tool to determine if the image of an infinite word is $q$-quasiperiodic for a given morphism and a given word $q$. \begin{definition} Let $f$ be a morphism over $A^*$ and $q$ be a non-empty word. Let ${\cal A}_q'(f)$, or simply ${\cal A}_q'$, denote the automaton $(A, Q, i, F', \Delta)$ where $Q$, $i$, $\Delta$ are defined as in Definition~\ref{D:automate1}, and $F' = Q$. \end{definition} \begin{lemma} \label{L:automate2} The image $f(\mathbf{w})$ of an infinite word $\mathbf{w}$ is $q$-quasiperiodic if and only if all prefixes of $\mathbf{w}$ are recognized by ${\cal A}_q'$. \end{lemma} As a consequence of Lemmas~\ref{L:borne_sqi} and \ref{L:automate2}, we get the next characterization of strongly quasiperiodic morphisms on infinite words. \begin{proposition} \label{P:carac_strong_qp_on_infinite} A morphism $f$ is strongly quasiperiodic on infinite words if and only if \[ A^* = \bigcup_{q \in {\cal Q}(f)} {\cal L}({\cal A}_q') \] where ${\cal L}({\cal A}_q')$ is the language recognized by the automaton ${\cal A}_q'$. \end{proposition} The proof of Proposition~\ref{P:carac_strong_qp_on_infinite} is a consequence of the previous definition and lemmas. To make this clearer, just observe that, if a word $u$ is recognized by ${\cal A}_q'$ then all its prefixes are also recognized. As an example to illustrate Proposition~\ref{P:carac_strong_qp_on_infinite}, one can consider the morphism $f$ defined by $f(a) = ab$, $f(b) = aba$.
Example~\ref{ex:aut1} shows that ${\cal L}({\cal A}_{aba}') = \{a, b\}^*$ and so $f$ is strongly quasiperiodic on infinite words. In the same way, one can verify that the morphism $f$ defined by $f(a) = abaaba$ and $f(b) = aabaaba$ is strongly quasiperiodic on infinite words. More precisely, the image of any infinite word beginning with $a$ is $abaa$-quasiperiodic and the image of any word beginning with $b$ is $aaba$-quasiperiodic. As a consequence of Proposition~\ref{P:carac_strong_qp_on_infinite}, we have the next result. \begin{corollary} It is decidable whether a morphism is strongly quasiperiodic on infinite words. \end{corollary} \section{\label{sec:weakly}On Weakly Quasiperiodic Morphisms} We now consider the decidability of the questions: given a morphism $f$, is $f$ weakly quasiperiodic on finite words? Is it weakly quasiperiodic on infinite words? Note that this is equivalent to asking for the decidability of the question: given a morphism, are all images of non-quasiperiodic words also non-quasiperiodic? We provide some partial answers. Let us recall that a morphism $f$ is said to be \textit{prefix} (resp. \textit{suffix}) if for all distinct letters $a$ and $b$, $f(a)$ is not a prefix (resp. a suffix) of $f(b)$. \begin{lemma} \label{L:prefixsuffix} Any non-prefix or non-suffix non-erasing morphism defined on an alphabet of cardinality at least two is weakly quasiperiodic on finite and infinite words. \end{lemma} \begin{proof}If $f(a)$ is a prefix of $f(b)$ then, for all $k \geq 1$, the finite word $f(b^ka)$ is $f(ba)$-quasiperiodic. The infinite word $f(bab^\omega)$ is also $f(ba)$-quasiperiodic. The morphism $f$ is weakly quasiperiodic both on finite words and on infinite words. If $f(a)$ is a suffix of $f(b)$ then, for all $k \geq 1$, the finite word $f(ab^k)$ is $f(ab)$-quasiperiodic. The infinite word $f(ab^\omega)$ is $f(ab)$-quasiperiodic (it is even periodic). The morphism $f$ is weakly quasiperiodic both on finite words and on infinite words.
\end{proof} \begin{corollary} Any non-injective non-erasing morphism defined on an alphabet of cardinality at least two is weakly quasiperiodic on finite and infinite words. \end{corollary} \begin{proof} If $f$ is not injective, there exist two different words $u$ and $v$ such that $f(u) = f(v)$. If $u$ and $v$ are powers of the same word then $f$ is erasing: a contradiction. Otherwise, we can assume that $u$ and $v$ begin with different letters. Thus $f$ is not prefix and so, by Lemma~\ref{L:prefixsuffix}, it is weakly quasiperiodic on finite and infinite words. \end{proof} \begin{proposition} \label{P:primitive} Let $f$ be a non-erasing morphism and let $u$ be a primitive word over $\{a, b\}$. If $f(u)$ is not primitive then $f$ is weakly quasiperiodic on finite words. Moreover, if $|u|_a \geq 1$ and $|u|_b \geq 1$, then $f$ is weakly quasiperiodic on infinite words. \end{proposition} We first need an intermediate result. \begin{lemma} \label{L:primitive} If $f(a^ib^j)$ is not primitive for some integers $i \geq 1$, $j \geq 1$, then one of the words $f(ab^\omega)$, $f(aba^\omega)$, $f(ba^\omega)$, $f(bab^\omega)$ is quasiperiodic. \end{lemma} \begin{proof} Assume first $i \geq 2$, $j \geq 2$. By Lyndon-Sch\"utzenberger's characterization of solutions of the equation $x^i y^j = z^k$ when $i \geq 2$, $j \geq 2$, $k \geq 2$ \cite{LyndonSchutz}, we deduce that $f(a)$ and $f(b)$ are powers of the same word: $f(ab^\omega)$ is quasiperiodic, as is the image of any finite word of length at least 2 or of any infinite word. Now consider the case $j = 1$. Let $u$ be the primitive word such that $f(a^ib) = u^k$ ($k \geq 2$). If $|f(a)^{i-1}| \geq |u|$, the words $f(a)^i$ and $u^k$ share a common prefix of length at least $|f(a)| + |u|$. By Fine and Wilf's theorem \cite{FineWilf1965}, $f(a)$ and $u$ are powers of the same word. It follows that $f(a)$ and $f(b)$ are also powers of the same word. We conclude as in the case $i, j \geq 2$. Now consider the case $|u| \geq |f(a)^i|$.
From $f(a)^if(b) = u^k$, we get $u = f(a)^ix$, $f(b) = xu^{k-1}$ for some word $x$. Hence $f(b) = x(f(a)^ix)^{k-1}$ and the word $f(bab^\omega)$ is $x(f(a)^ix)$-quasiperiodic. It remains to consider the case $|f(a)^{i-1}| < |u| < |f(a)^i|$. In this case, for some words $x$ and $y$, $u = f(a)^{i-1}x$, $f(a) = xy$ and $y$ is a prefix of $u$. In particular, for some word $z$, $f(a) = xy = yz$. By a classical result in Combinatorics on Words (see \cite[Lem. 1.3.4]{Lothaire1983book}), $x = \alpha\beta$, $y = (\alpha \beta)^\ell \alpha$, $z = \beta\alpha$: $f(a) = (\alpha \beta)^{\ell+1}\alpha$, $u = [(\alpha\beta)^{\ell+1}\alpha]^{i-1}\alpha\beta$. Now observe that $yf(b) = u^{k-1} = [[(\alpha\beta)^{\ell+1}\alpha]^{i-1}\alpha\beta]^{k-1}$. When $ i\geq 2$, $f(b) = \beta\alpha [(\alpha\beta)^{\ell+1}\alpha]^{i-2}\alpha\beta[[(\alpha\beta)^{\ell+1}\alpha]^{i-1}\alpha\beta]^{k-2}$, and when $i = 1$, $f(b) = \beta(\alpha\beta)^{k-\ell-2}$. In both cases, $f(aba^\omega)$ is $\alpha\beta\alpha$-quasiperiodic. When $i = 1$, the non-primitivity of $f(ab^j)$ is equivalent to the non-primitivity of $f(b^ja)$. Thus exchanging the roles of $a$ and $b$, we end the proof of the lemma. \end{proof} \begin{proof2}{Proposition~\ref{P:primitive}} First if $u$ contains only the letter $a$ or only the letter $b$, we have $u = a$ or $u =b$ and $f$ is weakly quasiperiodic on finite words. Assume from now on that $|u|_a \geq 1$ and $|u|_b \geq 1$. If $|u|_a = 1$, then there exist integers $i, j$ such that $u = b^iab^j$ with $i+j \geq 1$. As $f(u)$ is not primitive, also $f(ab^{i+j})$ is not primitive: $f$ is weakly quasiperiodic on finite words. By Lemma~\ref{L:primitive}, $f$ is also weakly quasiperiodic on infinite words. The result follows similarly when $|u|_b = 1$. Now consider the case $|u|_a \geq 2$ and $|u|_b \geq 2$. 
A seminal result by Lentin and Sch\"utzenberger states that if $f$ is a morphism defined on the alphabet $\{a, b\}$ and if $f(u)$ is not primitive for some non-empty word $u$, then there exists a word $v$ in $a^*b \cap ab^*$ such that $f(v)$ is not primitive \cite[Th. 5]{LentinSchutzenberger1969}. This brings us back to the previous cases.\qed \end{proof2} \medskip The converse of Proposition~\mbox{\ref{P:primitive}} is false. Indeed, as shown by the morphism $f$ defined by $f(a) = ababa$, $f(b) = ab$, a morphism can be weakly quasiperiodic on finite words or on infinite words while preserving primitivity (the image of any primitive word is primitive). Nevertheless, observe that when we consider the problem of deciding if a morphism is weakly quasiperiodic on infinite words, we can assume that all images of letters are primitive. Indeed any morphism $f$ such that $f(a)$ is a non-empty power of $a$ for each letter $a$ is not weakly quasiperiodic: for any word (finite of length at least 2, or infinite) $w$, $f(w)$ is quasiperiodic if and only if $w$ is quasiperiodic. In consequence, to determine whether a morphism $f$ is weakly quasiperiodic or not, one can replace $f$ with the morphism $r_f$ defined, for each letter $a$, by letting $r_f(a)$ be the primitive root of $f(a)$. Note that the images of letters under $r_f$ are primitive words. \medskip For all weakly quasiperiodic on infinite words morphisms encountered so far, there exist non-empty words $u$ and $v$ such that the infinite word $uv^\omega$ is not quasiperiodic while $f(uv^\omega)$ is quasiperiodic. This situation also holds in the next lemma (when $\mathbf{w}$ in the hypothesis is not quasiperiodic), whose proof is omitted. We conjecture that this holds in all cases. Bounding the length of $u$ and $v$ could lead to a procedure to check whether a morphism is weakly quasiperiodic on infinite words.
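The search for such a pair $(u, v)$ can be sketched as a brute-force heuristic. This is our own illustration, not a decision procedure from the paper: finite repetitions of $v$ only approximate $uv^\omega$, and all function names are ours.

```python
import itertools

def apply_morphism(f, w):
    # f maps each letter to a word; extend to words by concatenation
    return "".join(f[c] for c in w)

def covers(w, q):
    # True if every position of w lies inside an occurrence of q
    covered = [False] * len(w)
    i = w.find(q)
    while i != -1:
        for j in range(i, i + len(q)):
            covered[j] = True
        i = w.find(q, i + 1)
    return all(covered)

def weak_qp_witness(f, max_len=2, reps=8):
    """Search for short u, v such that u v^reps is not quasiperiodic while
    f(u v^reps) and f(u v^(reps+1)) are q-quasiperiodic for a short prefix q."""
    alphabet = sorted(f)
    words = ["".join(p) for n in range(1, max_len + 1)
             for p in itertools.product(alphabet, repeat=n)]
    for u, v in itertools.product(words, repeat=2):
        pre = u + v * reps
        if any(covers(pre, pre[:k]) for k in range(1, len(pre))):
            continue                      # preimage already quasiperiodic
        img = apply_morphism(f, pre)
        for k in range(1, len(img) // 2):
            q = img[:k]
            if covers(img, q) and covers(apply_morphism(f, u + v * (reps + 1)), q):
                return u, v, q
    return None
```

On the morphism of Example~3.5 ($f(a) = ba$, $f(b) = bba$), the search finds the witness $u = a$, $v = b$: the word $ab^\omega$ is not quasiperiodic, while its image $ba(bba)^\omega$ has quasiperiodic finite approximations.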
\begin{lemma} \label{L:demiqp} Let $f$ be a morphism, and let $\mathbf{w}$ be an infinite word such that $f(\mathbf{w})$ is $q$-quasiperiodic for some word $q$ such that $2|q| \leq |f(\alpha)|$ for each letter $\alpha$. Then: \begin{enumerate} \item $\mathbf{w} = (a_1\ldots a_k)^\omega$ with $a_1$, \ldots, $a_k$ pairwise different letters, or, \item there exist words $x$, $y$, $z$ and letters $a$ and $b$ such that $|xyz|_a = 0$, $|z|_b = 0$, $xay(bz)^\omega$ is not quasiperiodic and $f(xay(bz)^\omega)$ is $q$-quasiperiodic. Moreover in this case, we can find $x$, $y$ and $z$ such that any letter occurs at most once in each of these words. \end{enumerate} \end{lemma} \section{\label{sec:conclusion}Conclusion} To conclude this paper on links between quasiperiodicity and morphisms, we point out another question. Given a morphism $f$ prolongable on a letter $a$, can we decide whether the word $f^\omega(a) = \lim_{n \to \infty} f^n(a)$ is quasiperiodic? We are convinced that a better knowledge of weakly and strongly quasiperiodic on infinite words morphisms could bring answers to the previous question. We suspect in particular that if $f$ is a strongly quasiperiodic on infinite words morphism and if it is prolongable on $a$, then $f^\omega(a)$ is quasiperiodic. Conversely, it should be true that if $f^\omega(a)$ is quasiperiodic and $f(a)$ is not a power of $a$ then $f$ is weakly quasiperiodic on infinite words. The next result partially establishes this. \begin{proposition} \label{P:fixedpoint} Let $f$ be a non-erasing morphism and $a$ be a letter such that $f^\omega(a)$ is a quasiperiodic infinite word but not a periodic word. If all letters are growing with respect to $f$ (that is, $\lim_{n \to \infty} |f^n(b)| = \infty$ for each letter $b$), then $f$ is weakly quasiperiodic on infinite words. \end{proposition} Observe that the converse of the previous proposition does not hold.
The morphism $f$ defined by $f(a) = a$, $f(b) = ba$ does not generate an infinite quasiperiodic word ($f$ does not generate its fixed point $a^\omega$, and $ba^\omega$ is not quasiperiodic), but it is weakly quasiperiodic on infinite words as $f(ab^\omega)$ is $aba$-quasiperiodic. It is an open problem to extend Proposition~\ref{P:fixedpoint} to arbitrary morphisms generating a quasiperiodic infinite word. The proof of Proposition~\ref{P:fixedpoint} is a consequence of Lemma~\ref{L:demiqp} and the following one. \begin{lemma} \label{L:reducing} Let $f$ be a non-erasing morphism. If, for some integer $k \geq 1$, the morphism $f^k$ is weakly quasiperiodic, then $f$ is weakly quasiperiodic. \end{lemma} \begin{proof} Assume $f^k(\mathbf{w})$ is quasiperiodic for some integer $k \geq 1$ and for some non-quasiperiodic infinite word $\mathbf{w}$. Let $i$ be the smallest integer such that $f^i(\mathbf{w})$ is quasiperiodic. Observe that $i \geq 1$ and that $f^{i-1}(\mathbf{w})$ is not quasiperiodic. As $f^i(\mathbf{w}) = f(f^{i-1}(\mathbf{w}))$, $f$ is weakly quasiperiodic on infinite words. \end{proof} \begin{proof2}{Proposition~\ref{P:fixedpoint}} Let $f$ be a morphism and let $a$ be a letter such that $f^\omega(a)$ is a quasiperiodic infinite word, and let $q$ be the quasiperiod of $f^\omega(a)$. Assume that all letters are growing with respect to $f$. Then there exists $k \geq 1$ such that $2|q| \leq |f^k(\alpha)|$ for each letter $\alpha$, so $f^k$ verifies the hypothesis of Lemma~\mbox{\ref{L:demiqp}} (applied to the word $f^\omega(a)$, which is a fixed point of $f^k$): $f^k$ is weakly quasiperiodic on infinite words. By Lemma~\ref{L:reducing}, $f$ is also weakly quasiperiodic on infinite words. \qed \end{proof2} \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} In this paper, we are concerned with the numerical pricing of financial derivatives. A financial derivative is a contract whose value depends on an underlying asset such as a stock, an interest rate, or a commodity. The trading in financial derivatives has increased tremendously during the last decades, mostly due to the possibility to hedge positions in the underlying asset. Another important feature of financial derivatives is the potential for leverage, since a small movement in the underlying asset can cause a large movement in the value of the financial derivative. Due to the large traded volume of financial derivatives, efficient and accurate pricing of such contracts is of utmost importance. In most cases, there is no analytical formula available, and it becomes necessary to use a numerical method to compute the prices of the contracts. When a financial derivative depends on several underlying assets, the problem becomes multi-dimensional. Traditionally, the only way to price such financial derivatives has been to use Monte Carlo methods for a stochastic differential equation (SDE) formulation of the problem. However, due to their arguably slow convergence, considerable efforts in the research community have been devoted to deriving efficient methods for a partial differential equation (PDE) formulation of the pricing problem. The main problem with these methods is the so-called curse of dimensionality --- the number of degrees of freedom in the problem grows exponentially with the number of dimensions.
Numerical methods for the PDE formulation include adaptive Finite Differences (FD) \cite{persson2007pricing, lotstedt2007space, linde2009highly, persson2010pricing, von2015adaptive}, high-order compact schemes \cite{during2012high, during2015high}, Alternating Direction Implicit (ADI) schemes \cite{hout2010, ADI2}, Radial Basis Function (RBF) approximation \cite{pettersson2008improved, larsson2008multi}, the Radial Basis Function Partition of Unity (RBF-PU) method \cite{safdari2015radial, shcherbakov2016radial,shcherbakov2016radialb}, and Radial Basis Function generated Finite Differences (RBF-FD) \cite{milovanovic2018radial,Slobodan}. In \cite{von2015benchop} and \cite{benchop2}, several methods for pricing of options are implemented and evaluated. As a numerical example, we consider pricing of a European two-dimensional option. An option is a financial derivative which gives the holder the right, but not the obligation, to buy (for call options) or sell (for put options) an underlying asset at a specified strike price $K$ at or before the time of maturity $T$. The method that we employ is RBF-FD. The main idea behind it is to combine the desirable features of FD (sparsity of the discretization matrices --- as opposed to RBF) and RBF (meshfree --- as opposed to FD). Such methods have the potential to be of high order, depending on the number of nodes used in the discretization stencil. However, for many option pricing problems, the payoff function has a discontinuity in the function itself or its derivatives, which limits the order of convergence obtained in numerical simulations. For this reason, we smooth the payoff function according to \cite{Kreiss} before employing the numerical method. This smoothing restores the order of convergence expected from the discretization that is used.
In Section \ref{sec:nummet} we define the discretization in space and time, while Section \ref{sec:M} is devoted to the model problems that we solve, as well as node layouts, stencils, boundary conditions, and smoothing of the initial data. Finally, the results are presented in Section \ref{sec:numres}, and conclusions are drawn in Section \ref{sec:concl}. \section{Numerical method} \label{sec:nummet} We consider pricing of financial derivatives where the problem can be formulated as a PDE in $D$ spatial dimensions and time \begin{align} \frac{\partial u}{\partial t}&={\mathcal{L}}u, \nonumber \\ u(s_1,\ldots,s_D,0)&=g(s_1,\ldots,s_D), \label{eq:L2} \\ s_i\ge 0,\ i=1,\ldots,D;& \quad 0\le t\le T. \nonumber \end{align} Here the solution $u(s_1,\ldots,s_D,t)$ denotes the price of the financial derivative, $t$ time, $s_i$ the value of the underlying asset with index $i$, and $g$ the payoff function of the financial derivative. In many pricing problems the original PDE is a final value problem solved backward in time. We consider problems in forward time as in (\ref{eq:L2}), i.e., when necessary the problem is transformed into an initial value problem. In Sections \ref{sec:RBF-FD} and \ref{sec:temp} we define the spatial and temporal discretization of (\ref{eq:L2}), respectively. \subsection{Radial Basis Function generated Finite Differences} \label{sec:RBF-FD} In RBF-FD the spatial operator $\mathcal{L}u$ in (\ref{eq:L2}) at a location $\mathbf{s}^c=(s_1^c,s_2^c,\ldots,s_D^c)$ is approximated as a linear combination of the solution at the $m$ closest nodes $\mathbf{s}^k$ (possibly including $\mathbf{s}^c$), $k=1,\ldots,m$ \begin{equation} \mathcal{L}u|_{\mathbf{s}^c}\approx\sum_{k=1}^{m}{w}_{k}u|_{\mathbf{s}^k}.
\label{eq:Lu} \end{equation} The weights ${w}_{k}$ are calculated by enforcing (\ref{eq:Lu}) to be exact for an RBF $\phi(r)$ \begin{equation}\label{eq:D} {{ \begin{bmatrix} \phi(\|\mathbf{s}^1-\mathbf{s}^1\|) & \ldots & \phi(\|\mathbf{s}^1-\mathbf{s}^m\|)\\ \vdots & \ddots & \vdots\\ \phi(\|\mathbf{s}^m-\mathbf{s}^1\|) & \ldots & \phi(\|\mathbf{s}^m-\mathbf{s}^m\|) \end{bmatrix} \begin{bmatrix} {w}_1\\ \vdots\\ {w}_m \end{bmatrix}= \begin{bmatrix} \mathcal{L}\phi(\|\mathbf{s}-\mathbf{s}^1\|)|_{{\mathbf{s}}^c}\\ \vdots \\ \mathcal{L}\phi(\|\mathbf{s}-\mathbf{s}^m\|)|_{{\mathbf{s}}^c} \end{bmatrix}.}} \end{equation} Typical choices of RBFs are listed in Table \ref{tab:rbf}. For positive definite RBFs, such as the first four examples in the table, RBF interpolation theory guarantees that (\ref{eq:D}) is a nonsingular system, and hence a unique set of weights $w_k$, $k=1,\ldots,m$, can be computed. For the first four examples, the parameter $\varepsilon\in \mathbb{R}$ is the shape parameter of the RBF. For polyharmonic splines (PHSs), the parameter is $q\in{\mathbb{N}}$. In this paper, we follow \cite{flyer2016role}, \cite{bayona2017role}, and \cite{Slobodan} and use PHSs as basis functions together with polynomials of degree $p$ in the interpolation. With that approach, the polynomial degree (instead of the RBF) controls the rate of convergence, while the RBFs contribute to reduction of the approximation errors and are necessary in order to have a stable approximation.
\begin{table}[H] \centering \begin{tabular}{|l|l|} \hline \multicolumn{2}{|c|}{$\phi(r)$} \\ \hline Gaussian&${\displaystyle{e^{-(\varepsilon r)^2}}}$\\ Inverse quadratic& ${\displaystyle{{1}/({1+(\varepsilon r)^2})}}$ \\ Multiquadric& ${\displaystyle{\sqrt{1+(\varepsilon r)^2}}}$ \\ Inverse multiquadric& ${\displaystyle{1/\sqrt{1+(\varepsilon r)^2}}}$ \\ \hline Polyharmonic splines&$r^{2q-1}$ \\ \hline \end{tabular} \caption{A list of commonly used RBFs $\phi(r)$.} \label{tab:rbf} \end{table} \noindent In (\ref{eq:Dc}), we augment (\ref{eq:D}) with monomials of degree one \begin{equation}\label{eq:Dc} {\small{\begin{bmatrix} & & &1&s^1_1&\ldots&s^1_D\\ & B&& 1&\vdots &&\vdots\\ & & &1&s^m_1&\ldots&s^m_D \\ 1&\ldots&1&0&0&\ldots&0\\ s^1_1&\ldots&s^m_1&0&0&\ldots&0\\ \vdots & \vdots & \vdots&\vdots &\vdots & \ddots&\vdots \\ s^1_D&\ldots&s^m_D&0&0&\ldots&0\\ \end{bmatrix} \begin{bmatrix} {w}_1\\ \vdots\\ {w}_m\\ \gamma_0\\ \gamma_1\\ \vdots\\ \gamma_D \end{bmatrix}= \begin{bmatrix} \mathcal{L}\phi(\|\mathbf{s}-\mathbf{s}^1\|)|_{{\mathbf{s}}^c}\\ \vdots \\ \mathcal{L}\phi(\|\mathbf{s}-\mathbf{s}^m\|)|_{{\mathbf{s}}^c}\\ \mathcal{L}1|_{{\mathbf{s}}^c}\\ \mathcal{L}{s}_1|_{{\mathbf{s}}^c}\\ \vdots\\ \mathcal{L}{s}_D|_{{\mathbf{s}}^c}\\ \end{bmatrix},}} \end{equation} where $B$ is the coefficient-matrix in (\ref{eq:D}). Now, we place $N$ computational nodes $\mathbf{s}^c_i$, $i=1,\ldots,N$ at the locations where we want to approximate the solution. The weights for each computational node from solving (\ref{eq:Dc}) are assembled row-wise into the sparse differentiation matrix $W\in\mathbb{R}^{N\times N}$, with $m$ nonzero elements per row. 
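As an illustration of (\ref{eq:D})--(\ref{eq:Dc}), the sketch below solves the augmented system for a three-node 1D stencil with the cubic PHS $\phi(r)=r^3$, degree-one monomials, and $\mathcal{L}=\mathrm{d}^2/\mathrm{d}s^2$; the 1D setting and the node choice are ours, for illustration only.

```python
import numpy as np

def rbf_fd_weights_1d(nodes, center):
    """RBF-FD weights for d^2/ds^2 at `center`, using the cubic PHS
    phi(r) = r^3 augmented with the monomials {1, s} as in (eq:Dc)."""
    m = len(nodes)
    phi = lambda r: r**3
    Lphi = lambda r: 6.0 * r            # d^2/ds^2 |s - s_k|^3 = 6 |s - s_k|
    r = np.abs(nodes[:, None] - nodes[None, :])
    B = phi(r)                          # pairwise RBF block
    P = np.column_stack([np.ones(m), nodes])
    A = np.block([[B, P], [P.T, np.zeros((2, 2))]])
    rhs = np.concatenate([Lphi(np.abs(center - nodes)),
                          [0.0, 0.0]])  # L1 = 0 and L s = 0 at the center
    return np.linalg.solve(A, rhs)[:m]  # discard the polynomial multipliers

nodes = np.array([-1.0, 0.0, 1.0])
w = rbf_fd_weights_1d(nodes, 0.0)
print(np.dot(w, np.ones(3)), np.dot(w, nodes))  # both ~0: exact on 1 and s
print(w)  # [1.5, -3.0, 1.5] up to rounding
```

The constraint rows enforce exactness on constants and linear functions; the paper's fourth-order results correspond to degree $p=4$ augmentation and correspondingly larger stencils.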
This leads to the following semi-discretization of (\ref{eq:L2}) \begin{equation} \label{eq:semi} \begin{array}{rcl} {\displaystyle{\frac{\mathop{}\!\mathrm{d} }{\mathop{}\!\mathrm{d} t}\bar{u}(t)}}&=&W \bar{u}(t),\\ \bar{u}(0)&=&\bar{g}, \end{array} \end{equation} where $\bar{u}(t)\in\mathbb{R}^{N\times 1}$ is the vector of unknowns at time $t$, with approximations of $u$ in the computational nodes $\mathbf{s}^c_i$, $i=1,\ldots,N$, and $\bar{g}\in\mathbb{R}^{N\times 1}$ is the vector with the function $g$ evaluated in the computational nodes $\mathbf{s}^c_i$, $i=1,\ldots,N$. Equation (\ref{eq:semi}) forms a system of linear ordinary differential equations (ODEs) in time. In the next section, we describe how to solve it. \subsection{Temporal discretization} \label{sec:temp} For the time discretization of (\ref{eq:semi}), we use the Backward Differentiation Formula of order two (BDF2) \cite{Hairer2008}. This time-stepping scheme requires the solution at two previous time steps, and we therefore employ Backward Euler (BDF1) for the first time step. It is convenient to have the same coefficient matrix in all time steps, so we use non-equidistant time steps as described in \cite{larsson2008multi} and later used in, e.g., \cite{milovanovic2018radial,Slobodan}. This is accomplished by discretizing the time interval with $M$ steps of length $\Delta t^{\ell} = t^{\ell} - t^{\ell-1}$, where $\ell = 1,\ldots,M$.
We define $\omega_{\ell}=\Delta t^{\ell}/\Delta t^{\ell-1}$ for $\ell = 2,\ldots,M$ and arrive at \begin{align} \label{eq:BDF1} \bar{u}^1 - \bar{u}^0 &= \Delta t^1W\bar{u}^1,\\ \label{eq:BDF2} \bar{u}^{\ell} - \beta_1^\ell \bar{u}^{\ell-1} + \beta_2^\ell \bar{u}^{\ell-2} &= \beta_0^{\ell}W\bar{u}^{\ell},\quad \ell = 2,\ldots,M, \end{align} where \begin{equation} \beta_0^{\ell} = \Delta t^{\ell}\frac{1 + \omega_{\ell}}{1+ 2\omega_{\ell}},\quad \beta_1^{\ell} = \frac{(1+\omega_{\ell})^2}{1+2\omega_{\ell}},\quad \beta_2^{\ell} = \frac{\omega_{\ell}^2}{1+2\omega_{\ell}}. \label{eq:betas} \end{equation} We compute the values for $\omega_{\ell}$ using the recursive condition $\beta_0^{\ell}=\beta_0^{\ell-1}$, which keeps the coefficient matrix constant throughout all time steps. Since our time interval has the length $T$, we choose the initial time step length $\Delta t^1$ from \begin{equation} \label{eq:Dt} \sum_{\ell=1}^{M}\Delta t^{\ell} = T = \Delta t^1\left(1+\sum_{\ell=2}^{M} \prod_{\ell'=2}^{\ell}\omega_{\ell'}\right). \end{equation} Finally, we start the time integration by setting $\bar{u}^0=\bar{g}$. From the temporal discretization we get the following linear system of equations to solve in each time step \begin{equation} \label{eq:lineq} A\bar{u}^{\ell}=\bar{b}^{\ell}, \end{equation} where $A=I-\Delta t^1 W$, $\Delta t^1$ is given by (\ref{eq:Dt}), $\bar{b}^{\ell}=\beta_1^{\ell}\bar{u}^{\ell-1}-\beta_2^{\ell}\bar{u}^{\ell-2}$, $\ell=2,\ldots,M$, and $\bar{b}^1=\bar{u}^0=\bar{g}$.
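The recursion for the $\omega_{\ell}$ can be sketched as follows. The convention $\beta_0^1=\Delta t^1$ (so that the BDF1 step shares the matrix $A$) and the quadratic-formula form of the update are our reading of the construction in \cite{larsson2008multi}, written out for illustration.

```python
import math

def bdf2_time_grid(T, M):
    """Non-equidistant time steps keeping beta_0^l constant, so that the
    coefficient matrix A = I - dt^1 W is the same in every step.
    Convention: beta_0^1 = dt^1, matching the BDF1 start-up step."""
    ratios = [1.0]          # dt^l / dt^1, built up from the omegas
    c = 1.0                 # c = beta_0 / dt^{l-1}; c = 1 when l = 2
    for _ in range(2, M + 1):
        # beta_0^l = beta_0^{l-1}  <=>  omega^2 + (1 - 2c) omega - c = 0
        omega = 0.5 * ((2 * c - 1) + math.sqrt((2 * c - 1) ** 2 + 4 * c))
        ratios.append(ratios[-1] * omega)
        c /= omega          # update beta_0 / dt^l for the next step
    dt1 = T / sum(ratios)   # (eq:Dt) fixes the overall scale
    return [dt1 * r for r in ratios]

steps = bdf2_time_grid(T=0.2, M=10)
print(abs(sum(steps) - 0.2) < 1e-14)  # True: the steps sum to T
print(steps[1] / steps[0])            # first omega is the golden ratio ~1.618
```

Note that the first ratio $\omega_2$ solves $\omega^2-\omega-1=0$, i.e., the golden ratio, and the later ratios decrease toward a limit; only $\Delta t^1$ depends on $T$.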
\section{Model problem} \label{sec:M} As a model problem we consider a European call option issued on two underlying assets $s_1$ and $s_2$ \begin{align} \frac{\partial u}{\partial t}+\mathcal{L}u&=0, \label{eq:Eur} \\ s_1 \ge 0,\ s_2 \ge 0,\ & \quad 0\le t \le T, \nonumber \end{align} with \begin{align} \mathcal{L}u= \frac{1}{2}\Big(\sigma_1^2 s_1^2\frac{\partial^2u}{\partial s_1^2}&+\sigma_2^2 s_2^2\frac{\partial^2u}{\partial s_2^2}\Big)+ \rho\sigma_1\sigma_2 s_1s_2\frac{\partial^2u}{\partial s_1 \partial s_2} \nonumber \\ &+ r\Big(s_1\frac{\partial u}{\partial{s_1}}+s_2\frac{\partial u}{\partial{s_2}}\Big) -ru, \label{eq:L} \end{align} and \begin{equation} u(s_1,s_2,T)=g(s_1,s_2)=\left(\frac{1}{2}(s_1+s_2)-K\right)^+. \label{eq:init} \end{equation} Here $(f(x))^+=\max(f(x),0)$, $r$ denotes the risk-free interest rate in the market, $\sigma_i$ the volatility of asset $i$, and $\rho$ the correlation between the assets. As a close-field boundary condition in $s_1=s_2=0$ we set \begin{equation} u(0,0,t)=0,\quad 0\le t\le T, \label{eq:close} \end{equation} and as a far-field boundary condition we set \begin{equation} u(s_1,s_2,t)=\left(\frac{1}{2}(s_1+s_2)-K e^{-rt}\right),\quad 0\le t\le T, \label{eq:far} \end{equation} for $s_1$ and $s_2$ large enough. The parameters used are given in Table \ref{tab:2}. \begin{table}[htb] \centering \begin{tabular}{|c|c|} \hline $r$&0.03 \\ $\sigma_1$&0.15 \\ $\sigma_2$& 0.15 \\ $\rho$& 0.5 \\ $K$& 1 \\ $T$& 0.2 \\ \hline \end{tabular} \caption{Parameters used in the model problem.} \label{tab:2} \end{table} Equation (\ref{eq:Eur}) is a PDE that should be solved backward in time. To apply the time-stepping scheme in Section \ref{sec:temp}, we therefore transform (\ref{eq:Eur}) into a problem that is solved forward in time.
\subsection{Node layout, stencils, and boundary conditions} \label{sec:nodes} We consider both a uniform and a nonuniform node layout, presented in Figure \ref{fig:221}. Unlike classical grid-based methods (e.g., standard FD methods), we do not need to use a rectangular domain. Instead, we only use the lower-triangular half of the rectangle, which reduces the number of computational nodes by a factor of two, and hence the computational complexity significantly. The reason for introducing a nonuniform node layout is that we can cluster nodes where we are most interested in having an accurate solution. In general, we are most interested in having an accurate solution in the neighborhood of $s_1+s_2=2K$, which is also where the truncation error is largest due to large derivatives in the solution from the discontinuity in the first derivative of the payoff function. We first present a nonuniform node distribution in 1D, generated as introduced in \cite{hout2010} and later used for RBF-FD and option pricing in \cite{milovanovic2018radial}. Consider $N_1$ equidistant nodes $x_1^{(1)}<\ldots<x_1^{(i)}<\ldots<x_1^{(N_1)}$ constructed by \begin{equation} x_1^{(i)}=\text{arcsinh}\left(-\frac{K}{c}\right)+(i-1) \Delta x\text{,}\quad i=1,\ldots,N_1\text{,} \label{eq:adap1} \end{equation} where $c$ is a positive real constant which specifies how dense the node distribution becomes around the strike price $K$, $$\Delta x=\frac{1}{N_1}\left[\text{arcsinh}\left(\frac{s_{\max}-K}{c} \right) - \text{arcsinh}\left( -\frac{K}{c}\right)\right]\text{,}$$ and $s_{\max}$ denotes the far-field boundary.
Then, the nonuniform node distribution $s_1$ is generated pointwise as \begin{equation} s_1^{(i)}=K+c\cdot \text{sinh}(x_1^{(i)})\text{,}\quad i=1,\ldots,N_1\text{.} \label{eq:adap2} \end{equation} The nonuniform node layout is generated by using the one-dimensional node layouts from (\ref{eq:adap1}) and (\ref{eq:adap2}), along the axes $s_1$ and $s_2$, and then uniformly placing the internal points in the diagonal direction. The number of nodes along each diagonal is increased by one for each diagonal. The far-field boundary is located at $s_1+s_2=s_{\max}=8K$. The density tuning parameter used in Figure \ref{fig:221} for the nonuniform node layout, and in the numerical experiments presented in Section \ref{sec:numres}, is $c=0.8$. It should be noted that too small a value of $c$ eventually leads to an ill-conditioned problem. \begin{figure}[H] \centering \makebox[\linewidth][c]{% \centering \begin{subfigure}[H]{0.8\textwidth} \centering \input{tikz/grid2dreg.tikz} \caption{Uniform node layout.} \end{subfigure} \hspace{-3.5cm} \begin{subfigure}[H]{0.8\textwidth} \centering \input{tikz/grid2dadap.tikz} \caption{Nonuniform node layout.} \end{subfigure} } \caption{Uniform and nonuniform computational node layouts in 2D. The boundary conditions are employed in the blue triangle node (the close-field boundary condition) and in the red square nodes (the far-field boundary condition).} \label{fig:221} \end{figure} We also introduce the notation $N_s$ for the number of nodes along one of the axes, i.e., \begin{equation} \frac{N_s(N_s+1)}{2}=N. \label{eq:Ns} \end{equation} The nearest neighbors for constructing the stencils are efficiently determined using the $k$-D tree algorithm \cite{bentley1975multidimensional}. In Figure \ref{fig:222} we show examples of stencils at different locations in the domain.
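The one-dimensional construction (\ref{eq:adap1})--(\ref{eq:adap2}) above can be written out directly; the parameter values mirror this section ($K=1$, $s_{\max}=8K$, $c=0.8$), and the spacing check at the end is our own.

```python
import numpy as np

def sinh_clustered_nodes(N1, K, s_max, c):
    """1D node distribution clustered around the strike K,
    following (eq:adap1)-(eq:adap2)."""
    lo = np.arcsinh(-K / c)
    hi = np.arcsinh((s_max - K) / c)
    dx = (hi - lo) / N1
    x = lo + dx * np.arange(N1)   # x^(i), i = 1, ..., N1
    return K + c * np.sinh(x)

s = sinh_clustered_nodes(N1=50, K=1.0, s_max=8.0, c=0.8)
gaps = np.diff(s)
print(abs(s[0]) < 1e-12)          # True: the first node sits at s = 0
print(s[np.argmin(gaps)])         # the smallest spacing occurs near K = 1
```

Since $s = K + c\sinh(x)$ has slope $c\cosh(x)$, minimized at $x=0$, the node spacing is smallest at the strike and grows toward the far field, as intended.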
The polynomial space is of size $$\nu=\left(\begin{array}{c} p+D\\ p\end{array} \right),$$ which we use to set the size of the stencils to $m=5\nu$, following \cite{flyer2016role,bayona2017role,Slobodan}. We are aiming for a fourth order scheme and use $p=4$ and $q=5$, which gives $$\nu=\left(\begin{array}{c} 6\\ 4\end{array} \right)=15.$$ Hence, we use a stencil size of $m=5\cdot 15=75$. \begin{figure}[H] \centering \input{tikz/gridsten.tikz} \caption{Examples of nearest neighbor based stencils used for approximating the differential operator on a nonuniform node layout. The central node of each displayed stencil is denoted by a white cross mark. All stencils are of the same size $m=75$.} \label{fig:222} \end{figure} For the boundary nodes we use different treatments depending on where the node is located. For the node $s_1=s_2=0$ (the blue triangle in Figure \ref{fig:221}), we set the close-field boundary condition from (\ref{eq:close}). For the nodes $s_1+s_2=8K$ (the red squares in Figure \ref{fig:221}), we set the far-field boundary conditions from (\ref{eq:far}). For the boundary nodes along the axes, i.e., $s_1=0$, $s_2>0$ and $s_2=0$, $s_1>0$, we solve (\ref{eq:Eur}) using the discretization scheme defined in Section \ref{sec:nummet}. The $k$-D tree algorithm generates one-sided stencils for those nodes. \subsection{Smoothing of initial data} \label{sec:smooth} Since the initial data $g(s_1,s_2)$ in (\ref{eq:init}) has a discontinuity in the first derivative, the obtained spatial order of convergence for a finite difference scheme is limited to two, regardless of the formal order of the scheme. The formal spatial order of the scheme in (\ref{eq:Dc}) is $p$, i.e., for $p>2$ the obtained convergence order is limited by the lack of smoothness in the final condition. In \cite{Kreiss}, a smoothing of the initial data that recovers the order of convergence to the formal order of the scheme is introduced.
This approach has been successfully used for option pricing problems in, e.g., \cite{Pooley} and \cite{during2015high}. Since we are aiming for a fourth order scheme, we use a fourth order smoothing operator $\Phi_4$ defined by its Fourier transform \begin{equation} \hat{\Phi}_4(\omega)=\left(\frac{\sin(\omega/2)}{\omega/2}\right)^4\left(1+\frac{2}{3}\sin^2(\omega/2)\right). \label{eq:phihat} \end{equation} Using {\textsc{Wolfram Mathematica}} to compute the inverse Fourier transform of (\ref{eq:phihat}) gives \begin{align} {\Phi}_4(s) = \frac{1}{72}\Big( & -(s-3)^3 \cdot \mathrm{sgn}(s-3) -(s+3)^3\cdot \mathrm{sgn}(s+3) \nonumber \\ & +12(s-2)^3 \cdot \mathrm{sgn}(s-2) +12(s+2)^3\cdot \mathrm{sgn}(s+2) \nonumber \\ & -39 (s-1)^3 \cdot \mathrm{sgn}(s-1) -39(s+1)^3\cdot \mathrm{sgn}(s+1) \nonumber \\ & +56s^3 \cdot \mathrm{sgn}(s) \Big), \label{eq:phi} \end{align} where $$\mathrm{sgn}(x)=\frac{|x|}{x}\ \text{for}\ x\neq 0,\quad \mathrm{sgn}(0)=0.$$ Following \cite{Kreiss}, \cite{Pooley}, and \cite{during2015high}, we get the smoothed final condition on a uniform node layout as \begin{equation} {\displaystyle{ \tilde{g}(s_1,s_2)=\frac{1}{\Delta s^2}\int\limits_{-3\Delta s}^{3\Delta s}\int\limits_{-3\Delta s}^{3\Delta s}\Phi_4\left(\frac{\tilde{s}_1}{\Delta s}\right)\Phi_4\left(\frac{\tilde{s}_2}{\Delta s}\right)g(s_1-\tilde{s}_1,s_2-\tilde{s}_2)\mathop{}\!\mathrm{d}\tilde{s}_1\mathop{}\!\mathrm{d}\tilde{s}_2. }} \label{eq:smooth} \end{equation} Since $g(s_1,s_2)$ is smooth in a large part of the computational domain, we only need to compute (\ref{eq:smooth}) in the nodes that are close enough to $s_1+s_2=2K$ to be affected by the smoothing. Also, since the nodes along a diagonal all have the same distance to $s_1+s_2=2K$, we only need to compute one value of $\tilde{g}(s_1,s_2)$ for each diagonal and use that value for all nodes on that diagonal.
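The kernel (\ref{eq:phi}) can be verified numerically: it vanishes outside $[-3,3]$, has unit mass, and has a vanishing second moment, which is what makes the smoothing fourth order. The quadrature check below is our own sanity test, not part of the paper; note that $(s-a)^3\,\mathrm{sgn}(s-a)=|s-a|^3$.

```python
import numpy as np

def phi4(s):
    """Fourth-order smoothing kernel Phi_4 from (eq:phi)."""
    s = np.asarray(s, dtype=float)
    total = np.zeros_like(s)
    for coef, a in [(-1, 3), (-1, -3), (12, 2), (12, -2),
                    (-39, 1), (-39, -1), (56, 0)]:
        total += coef * np.abs(s - a) ** 3
    return total / 72.0

# Midpoint-rule checks on [-3, 3], outside which the kernel vanishes:
n = 6000
grid = -3.0 + 6.0 * (np.arange(n) + 0.5) / n
h = 6.0 / n
print(h * np.sum(phi4(grid)))            # ~1: unit mass
print(h * np.sum(grid**2 * phi4(grid)))  # ~0: vanishing second moment
print(phi4(3.0), phi4(-4.0))             # 0.0 0.0: compact support
```

Unit mass and the vanishing low-order moments are exactly the conditions under which convolving $g$ with $\Phi_4$ leaves smooth data unchanged to fourth order.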
The theory in \cite{Kreiss} shows that replacing the final condition $g(s_1,s_2)$ with $\tilde{g}(s_1,s_2)$ defined in (\ref{eq:smooth}) gives a fourth order scheme for Cartesian grids, i.e., the node layout that we here refer to as uniform. Here, we want to use this smoothing also for our nonuniform node layout defined in Section \ref{sec:nodes}. This layout can be seen as a slightly skewed Cartesian grid, and the nodes are equidistantly distributed along the diagonals. For this node layout we replace $\Delta s$ in (\ref{eq:smooth}) with $$\Delta s_i=\min_{\substack{k = 1,\ldots,m\\ k \ne c}} \|{\mathbf{s}}^c_i-{\mathbf{s}}^{k}_i\|,\quad i=1,\ldots,N.$$ \section{Numerical results} \label{sec:numres} The numerical method described in Section \ref{sec:nummet} applied to the model problems described in Section \ref{sec:M} is implemented in {\textsc{Matlab}}. In all experiments, we start by scaling the original problem such that $s_{\max}=1$ and time runs forward in the PDE. After the integration, the solution is transformed back to the original problem. The linear system defined in (\ref{eq:lineq}) is solved using GMRES \cite{saad1986gmres}, with an incomplete LU factorization as the preconditioner using {\tt{nofill}}. The convergence tolerance for the iterations is set to $10^{-8}$, and as the initial guess for the iterations in each time step we use the computed solution from the previous time step. The numerical experiments are performed on a laptop equipped with a 2.3 GHz Intel Core i7 CPU and 16 GB of RAM. The computation of the RBF-FD weights is performed in parallel using the parallel toolbox command {\tt{parfor}} with four workers. In Figure \ref{fig:res1} we plot the error $\Delta u_{\max}$ as a function of $\hat{h}\equiv1/\sqrt{N}$ as well as of CPU-time for the model problem.
The error is defined as \begin{equation} \Delta u(s_1,s_2)=|u^c(s_1,s_2,0)-u^*(s_1,s_2,0)|, \label{eq:error} \end{equation} where $u^c$ is the computed solution and $u^*$ is a reference solution computed with a second order finite difference method on a very fine grid. We use (\ref{eq:error}) to define \begin{equation} \Delta u_{\max}=\max_{[s_1,s_2]\in\hat{\Omega}}\Delta u(s_1,s_2), \label{eq:max} \end{equation} where $\hat{\Omega}=\left[\frac{1}{3}K,\frac{5}{3}K\right]\times \left[\frac{1}{3}K,\frac{5}{3}K\right]$. We denote standard second order finite differences \cite{tavella2000pricing} by FD. RBF-FD-GS is an RBF-FD method with Gaussian RBFs, stencil size $m=25$, and a node-density-dependent shape parameter, presented in detail in \cite{milovanovic2018radial}. The abbreviation RBF-FD-PHS is used for the method presented in this paper. Moreover, we use the designation \texttt{smoothed} in the superscript for the computations performed with the smoothing of the initial data, and \texttt{uniform} and \texttt{nonuniform} to specify the node layouts. Independently of the spatial discretization used, we employ BDF2 with $M=N_s$ time steps in all experiments. For the RBF-FD methods $N_s$ is defined in (\ref{eq:Ns}), and for FD it is defined by $N_s=\sqrt{N}$. With this number of time steps, the temporal discretization error is not visible in the plots. \begin{figure}[H] \centering \input{tikz/BSeuCallconv.tikz} \input{tikz/BSeuCalltime.tikz} \caption{$\Delta u_{\max}$ as a function of $\hat{h}$ and CPU-time in seconds for the European call option.} \label{fig:res1} \end{figure} In Figure \ref{fig:res1} we see that all methods except the two using a smoothed final condition exhibit second order convergence. Among those five second order methods, RBF-FD with PHS exhibits the smallest error for a given $N$, and a nonuniform node layout gives a smaller error than the uniform one using the same $N$.
RBF-FD-PHS with a smoothed final condition exhibits fourth order spatial convergence whether a uniform or a nonuniform node layout is used (apart from a small deviation for the nonuniform node layout). When it comes to computational time to reach a certain $\Delta u_{\max}$, FD is competitive for the larger errors displayed. This makes sense since the RBF-FD methods all have to compute the weights $w_k$, $k=1,\ldots,m$, before the time-stepping. Moreover, our model problem has a fairly short time to maturity $T=0.2$. For longer times to maturity, FD does not perform as well compared to the RBF methods, see \cite{milovanovic2018radial, Slobodan}. We also establish that the fourth order methods quickly become superior when it comes to CPU-time to reach a certain $\Delta u_{\max}$. That is especially true for RBF-FD-PHS$_{\mathrm{nonuniform}}^{\mathrm{smoothed}}$. Even though this method has a precomputation phase that includes both the computation of the weights $w_k$, $k=1,\ldots,m$, and the smoothing of the final condition, the method requires a much smaller CPU-time than the other methods for $\Delta u_{\max}<10^{-4}$. \begin{figure}[H] \centering \input{tikz/BSeuCall_phs.tikz} \hspace{-0.1cm} \input{tikz/BSeuCall_phs_smoothed.tikz} \hspace{-0.1cm} \input{tikz/BSeuCall_phs_colorbar.tikz} \caption{Heat maps of $\Delta u$ for the European call basket option on uniform node layouts. The boundary of $\hat{\Omega}$ is marked with a white dash-dotted line.} \label{fig:res3} \end{figure} \begin{figure}[H] \centering \input{tikz/BSeuCall_phs_adap.tikz} \hspace{-0.1cm} \input{tikz/BSeuCall_phs_adap_smoothed.tikz} \hspace{-0.1cm} \input{tikz/BSeuCall_phs_colorbar.tikz} \caption{Heat maps of $\Delta u$ for the European call basket option on nonuniform node layouts.
The boundary of $\hat{\Omega}$ is marked with a white dash-dotted line.} \label{fig:res4} \end{figure} In Figures \ref{fig:res3}--\ref{fig:res4} we show a heat map of the error $\Delta u$ defined in (\ref{eq:error}) for the model problem using the method RBF-FD-PHS. In Figure \ref{fig:res3} we present the error on the uniform node layout, and in Figure \ref{fig:res4} the error on the nonuniform node layout, with $N=6105$ for both node layouts. To the right in both figures, we have used the smoothed final condition, while the original one is used in the plots on the left. The errors in Figures \ref{fig:res3}--\ref{fig:res4} are presented for $0\le s_j\le 4$, $j=1,2$, in order to have a better view of the error profile around the smoothed area. The color scale is the same in all four plots. From Figures \ref{fig:res3}--\ref{fig:res4} we conclude that the smoothing of the final condition renders a $\Delta u$ that has a smaller magnitude compared to the original final condition. Moreover, $\Delta u$ obtained from the smoothed final condition has three maxima along a line $s_2=s_1+\mathrm{const}$, while the corresponding number of maxima is one for the nonsmoothed final condition. We also note that the magnitude of $\Delta u$ is smaller for the nonuniform node layout compared to the uniform one. That is due to the fact that for the nonuniform node layout, the number of nodes is larger in the area where the solution has large derivatives, i.e., around the strike price. We end this section by concluding that in this particular example, $\Delta u_{\max}$ is more than one order of magnitude smaller using smoothing of the final condition on a nonuniform grid than without the smoothing on a uniform grid for the same number of nodes. \section{Conclusions} \label{sec:concl} In this paper, we have implemented a solver to price financial derivatives based on RBF-FD discretization in space and BDF2 in time. As RBFs we use PHSs, augmented with monomials of up to degree $p$.
The formal order of this spatial discretization is $p$; however, for many pricing problems the lack of smoothness of the initial data limits the actual order obtained in numerical simulations. By applying a smoothing technique to the initial data, the formal order of the discretization is retained. The RBF-FD discretizations have the advantage over standard FD that the nodes do not have to be organized in a Cartesian grid, and the advantage over global RBF approximations, which lead to full matrices, that they render sparse differentiation matrices. Thus, RBF-FD can give accurate solutions on nonuniform node layouts while still yielding sparse matrices. As a model problem, we consider pricing of a European type basket option issued on two underlying assets, resulting in a PDE in two spatial dimensions and time. By employing a nonuniform node layout that has a denser node distribution where we are most interested in having an accurate solution, together with smoothing of the final condition, the numerical experiments demonstrate that our developed method gives a very accurate solution in a short time, using fewer nodes than the methods that we compare with, for this model problem. The fact that we can solve the problem accurately with fewer nodes becomes extremely important when we want to solve problems in higher dimensions, e.g., for pricing of financial derivatives issued on several underlying assets. Since the number of degrees of freedom grows exponentially with the number of dimensions (the number of underlying assets), the ability to use fewer nodes per dimension to reach a certain accuracy may make it possible to solve problems that cannot be handled with traditional techniques. \bibliographystyle{abbrvnat}
\section{Introduction} The interaction of two nuclei consists of a repulsive Coulomb potential and an attractive nuclear potential. The overlap of the two potentials produces a Coulomb barrier. Fusion occurs when the interacting nuclei have sufficient kinetic energy to overcome the repulsive barrier and are subsequently trapped inside the potential pocket to form a compound nucleus. The kinetic energy determines whether the process takes place by passing over the barrier or by quantum tunneling through it. The nucleons in the reactants are rearranged in the compound nucleus. Nuclear fusion is responsible for energy generation in stars. It is also a process for synthesizing new elements in laboratories. The study of fusion has been carried out for several decades. In the 1980s, the discovery of unexpectedly large fusion cross sections at sub-barrier energies in some heavy-ion systems generated tremendous interest in fusion studies.\cite{be85,st86,be88} The sub-barrier fusion enhancement can be understood in the coupled-channel formalism.\cite{ba98,da98} The coupling of the entrance channel to the intrinsic degrees of freedom of the projectile and target splits the uncoupled single barrier into a distribution of barriers. The enhanced fusion cross sections at sub-barrier energies arise from passage over the low-energy barriers. The advent of radioactive ion beams (RIBs) has generated new excitement in this field. The r.m.s. radius of nuclei far from the $\beta$-stability valley is, in many cases, significantly larger. Fusion is expected to be enhanced due to lower barriers. However, these nuclei are often loosely bound, {\it i.e.}, the valence nucleon(s) have very small binding energy. Breakup of the loosely bound nuclei in the Coulomb and/or nuclear field of the target can take place and thus remove the available flux for fusion.
As a result, fusion is predicted to be suppressed.\cite{hu92,ta93} On the other hand, if breakup is sufficiently strong, fusion can be enhanced through the coupling.\cite{da94} How fusion is influenced by loosely bound nuclei is still an open question, as will be seen later in this review. Several stable beam experiments have shown that neutron transfer with positive Q-values can enhance sub-barrier fusion.\cite{mo94,st95,st97,ti97,so98} With neutron-rich radioactive beams, the number of transfer channels with positive Q-values can be very large. One would expect an enhancement of sub-barrier fusion by the coupling to these transfer reactions.\cite{de00,za03} The compound nucleus formed in reactions induced by neutron-rich radioactive nuclei should have higher stability against fission. Using short-lived neutron-rich radioactive beams may be a viable way for producing new heavy elements.\cite{hu91} Results of fusion measurements performed with some low-intensity neutron-rich radioactive beams will be discussed in this paper. The emphasis of this review is on experimental work. General discussions of recent theoretical developments in fusion can be found in the review articles by Balantekin and Takigawa,\cite{ba98} Dasgupta {\it et al.},\cite{da98} and Signorini.\cite{si01} However, specific theoretical treatments of some of the measurements will be presented where the corresponding experiments are discussed. The challenges in conducting fusion measurements with radioactive beams will be shown in Sect. {\bf 2}, followed by a brief description of techniques for producing radioactive beams. Measurements using light-mass and medium-mass radioactive beams will be discussed in Sect. {\bf 4} and Sect. {\bf 5}, respectively. Concluding remarks will be given in Sect. {\bf 6}. \section{Experimental Considerations} Fusion is commonly studied by measuring the cross section as a function of reaction energy.
The cross section can be determined by detecting the evaporation residues or fission fragments directly, or by detecting the $\gamma$ rays or light particles emitted from the evaporation residues deposited in a catcher foil. The techniques used in radioactive beam measurements are similar to those used in stable beam experiments.\cite{be85,st86} It is very important to have high quality beams for fusion measurements. Good energy resolution, {\it i.e.}, a small energy width, is crucial for measuring excitation functions, particularly at energies below the barrier where the cross sections change exponentially. To determine whether sub-barrier fusion is enhanced, the reaction energies have to be known to better than $\sim$1\%. Low emittance beams, which result in a beam spot of a few mm on target, are desirable. This reduces the angular broadening and energy spread of the reaction products. Beam purity is often an issue with RIBs, and it is associated with the production processes. The unwanted contaminants can be removed or reduced by high resolution mass separators or chemical methods. But sometimes they cannot be completely eliminated because the beam of interest is orders of magnitude less intense than the contaminants. Therefore, measurements with the main contaminant beam species have to be performed and subtracted from measurements with the mixed beam. The intensity of RIBs is, presently, several orders of magnitude lower than that of stable beams. In stable beam experiments, the beam intensity can be measured by a Faraday cup. One particle nA is $6.24\times 10^{9}$ particles per second (pps). In RIB experiments, an intensity of $10^{6}$ to $10^{7}$ pps would be considered high today. Fusion measurements can be performed with a beam of 10$^{4}$ pps. In order to compensate for the low beam intensity, high efficiency detectors with large solid angle coverage or multiple targets for obtaining measurements at several energies in one run have been employed.
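As a quick check of the beam-intensity unit quoted above, the conversion from particle current to particles per second follows directly from the elementary charge; a minimal sketch (our own illustration, function name hypothetical):

```python
# Convert a particle current to particles per second (pps).
# One "particle nA" corresponds to 1e-9 particle-charges per second,
# independent of the ions' actual charge state.

ELEMENTARY_CHARGE = 1.602176634e-19  # coulomb

def particle_na_to_pps(current_na):
    """Particles per second carried by `current_na` particle nA."""
    return current_na * 1e-9 / ELEMENTARY_CHARGE

print(f"{particle_na_to_pps(1.0):.3e}")  # ~6.24e9 pps, as quoted in the text
```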
The availability of large area silicon strip detectors (SSDs) in a variety of shapes, and of compact multichannel electronics for handling such detectors, has made many measurements feasible. On the other hand, because of the low beam intensity, event-by-event beam tracking is very useful for rejecting events originating from contaminant-induced reactions. Incorporating time-of-flight measurements into experiments has many advantages and is very common. It can be used to measure beam energies. When beam contaminants are present, valid events can be selected using timing gates corresponding to the correct beam particles. The decay of beam particles sometimes contributes to the background in detectors. With the use of time-of-flight, this background can be suppressed effectively. \section{Types of Radioactive Ion Beams} The production of RIBs has been discussed extensively in many reports\cite{rnb6} and is beyond the scope of this review. We will give a brief description of the methods used to produce beams for the experiments discussed in this paper. \subsection{Isotope separator on-line} The isotope separator on-line (ISOL) method uses a driver accelerator to accelerate light charged particles, such as protons, deuterons, and $\alpha$ particles, into a thick target.\cite{st00,vi01,ry02} The radioactive atoms diffuse out of the target and are ionized in an ion source. The secondary ions are then selected by mass separators, accelerated by a post accelerator, and sent to an experimental area, as shown in Fig.~\ref{fg:ribprod}(a). The beam quality is very good because the post accelerator is usually an electrostatic tandem accelerator or a linac. Beams which can be produced with this method are limited to those with lifetimes of the order of seconds or longer. Species with shorter lifetimes may not be able to get out of the target fast enough for post acceleration.
How fast a radioactive atom gets out of the target depends strongly on the chemical and structural properties of the target. \begin{figure}[th] \centerline{\psfig{file=fig1_liang.eps,width=8cm}} \caption{A schematic illustration of RIB production: (a) ISOL, (b) in-flight with direct reactions, and (c) in-flight with fragmentation reactions. \label{fg:ribprod}} \end{figure} \subsection{In-flight} In this method, radioactive beams can be produced with the use of one accelerator. The accelerated heavy-ion beam is incident on a thin target in inverse kinematics. The reaction products emitted in a narrow cone in the forward direction are focused and transported through mass separators. At energies near and below 10 MeV/nucleon, direct reactions are often used. The reaction products can be selected and focused by solenoids, such as at the TWINSOL facility at the University of Notre Dame,\cite{be97,le99} or by a combination of solenoid, bunching resonator, and bending magnet, such as at ATLAS (Argonne National Laboratory).\cite{ha00} Figure~\ref{fg:ribprod}(b) depicts the beam selection by the latter method. At energies of a few tens to several hundreds of MeV/nucleon, the production reaction is essentially fragmentation. Very sophisticated fragment mass separators, such as the A1900 at NSCL (Michigan State University)\cite{mo03} and RIPS at RIKEN,\cite{ku92} are used for filtering the beam, as shown in Fig.~\ref{fg:ribprod}(c). The advantage of the in-flight production method is that very short-lived species can be produced because diffusion and target chemistry are not involved in the processes. However, the reaction mechanisms and target thickness introduce significant energy and angular spreads, resulting in poor energy resolution and large emittance of the secondary beam.
\section{Light-ion Reactions} This section will discuss the fusion reactions induced by light-mass radioactive ion beams, namely $^{6,8}$He, $^{11}$Be, $^{11}$C, and $^{17}$F, which are, up to now, the only projectiles utilized for this type of study. The $^{6,8}$He, $^{11}$Be, and $^{17}$F projectiles are particularly interesting for the following reasons: \begin{itemize} \item[a)] $^{6}$He is weakly bound, with a two-neutron separation energy S$_{2n}$ = 0.972 MeV, and has a neutron skin like structure with a large neutron spatial distribution around the $^{4}$He core.\cite{ta92,al97} In the present literature $^{6}$He is considered a neutron halo nucleus. The isotope $^{8}$He is more tightly bound, with S$_{2n}$ = 2.468 MeV, but still has a structure similar to $^{6}$He.\cite{ta92,al97} \item[b)] $^{11}$Be is weakly bound, with S$_{n}$ = 0.504 MeV, and has a well established neutron halo\cite{ta88,fu91} which produces an r.m.s. radius $\sim$10\% larger than what is expected from the $r_{0}A^{1/3}$ systematics with $r_{0} \sim$ 1.18 fm. \item[c)] $^{17}$F is a proton drip-line nucleus with a proton separation energy S$_{p}$ = 0.600 MeV. Its first excited state is bound by only 0.105 MeV and has an extended r.m.s. radius $r_{rms}$ = 5.3 fm, which is considered a halo state.\cite{mo97} \end{itemize} In addition to the fusion process, the main topic of this review, the discussion also includes the breakup (BU) process, which is expected to be strong because of the small binding energy of these short-lived nuclei. Such a process could hinder or enhance the fusion process around the barrier, as extensively debated by many theoreticians, and/or increase the total reaction cross section. In the following subsections the experimental results obtained with the light RIBs mentioned above will be critically reviewed, as well as the theories developed to explain the measurements.
Concerning fusion with loosely bound projectiles, which break up easily, we have to distinguish between the following processes: a) {\em complete fusion} (CF), when all projectile nucleons are trapped inside the target, b) {\em incomplete fusion} (ICF), when only part of the nucleons are trapped, and c) {\em total fusion} (TF), the combination of the complete and incomplete fusion cross sections. In the absence of incomplete fusion, total fusion and complete fusion are identical. We should bear in mind that theories usually calculate total fusion cross sections, while the fusion cross sections measured vary with the particular experiment. In order to compare different systems, the fusion cross sections should be corrected for geometric factors originating from the different nuclear radii involved, and the energies of the colliding systems (usually the center of mass energies) should be divided by the Coulomb barrier energies, V$_{B}$. In the following, V$_{B}$ will be evaluated with $r_{0} = 1.56$ fm. This value was deduced from the barrier distribution for the system $^{9}$Be+$^{209}$Bi ($^{208}$Pb) in Refs.~\cite{si99,da99}, which gives, with the standard formula V$_{B}$~=~Z$_{p}$Z$_{t}$e$^{2}$/$r_{0}$(A$_{p}^{1/3}$+A$_{t}^{1/3}$), V$_{B}$ = 38.50 (37.75) MeV. In connection with loosely bound nucleus induced fusion, there is a relevant process where the valence nucleon of the projectile, such as the proton in $^{17}$F, is captured by the target. Such a process can be a conventional transfer/stripping, and will be called ``transfer'', or a strong capture/transfer/stripping breakup, and will be called ``transfer/stripping-breakup''. \subsection{$^{6,8}$He beams} The $^{6}$He beam is becoming fairly popular. It has been produced in several laboratories by both ISOL and in-flight techniques for fusion studies.
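The barrier formula above is straightforward to evaluate numerically. The following sketch (our own illustration, taking $e^{2} \approx 1.44$ MeV fm; the exact convention used in the review is not stated) reproduces the quoted $^{9}$Be+$^{209}$Bi ($^{208}$Pb) barriers to within a few hundred keV:

```python
E2 = 1.44  # e^2 in MeV*fm (approximate convention, our assumption)

def coulomb_barrier(zp, ap, zt, at, r0=1.56):
    """V_B = Zp*Zt*e^2 / (r0*(Ap^(1/3) + At^(1/3))), in MeV."""
    return zp * zt * E2 / (r0 * (ap ** (1 / 3) + at ** (1 / 3)))

print(f"9Be+209Bi: {coulomb_barrier(4, 9, 83, 209):.2f} MeV")  # ~38.2, quoted 38.50
print(f"9Be+208Pb: {coulomb_barrier(4, 9, 82, 208):.2f} MeV")  # ~37.8, quoted 37.75
```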
For the production of these beams, the ISOL technique has been utilized at the pioneering facility of the Cyclotron Research Centre in Louvain la Neuve (Belgium)\cite{ry02} and at the new facility SPIRAL at GANIL (France).\cite{vi01} The in-flight technique has been adopted at the TWINSOL facility at the University of Notre Dame (USA).\cite{be97,le99} \subsubsection{The $^{6}$He+$^{209}$Bi system} This system, with V$_{B}$ = 19.76 (20.33) MeV in the center of mass (laboratory), has been studied extensively at Notre Dame. The $^{6}$He beam is produced by the $^{9}$Be($^{7}$Li,$^{6}$He) proton transfer reaction with the following characteristics: intensity $\sim 10^{5}$ pps, energy ranging from 14 to 22 MeV, energy resolution $\sim$1.5 MeV, and beam size on target from 5 to 8 mm in diameter. The fusion cross section was evaluated as the sum of the ($^{6}$He,3n)$^{212}$At and ($^{6}$He,4n)$^{211}$At evaporation channels.\cite{ko98} These data are essentially for complete fusion. This cross section could be somewhat underestimated at the lowest energies since the 2n channel, expected to be small, was not studied. The possible incomplete fusion of $^{4}$He, produced by $^{6}$He breakup, with subsequent 1n emission leading to $^{212}$At, was excluded by the authors by reaction Q-value arguments. This should eventually be verified experimentally. Within the present results, since there is no incomplete fusion, the complete fusion measured has to be considered as total fusion. This system, compared to $^{4}$He+$^{209}$Bi, shows moderate enhancement only in the sub-barrier region, as shown in Fig.~\ref{fg:he6bi209}. \begin{figure}[th] \centerline{\psfig{file=fig2_liang.eps,width=8cm}} \caption{Fusion cross sections for the systems $^{4,6}$He+$^{209}$Bi. The $^{4}$He cross sections have been corrected for the different radius and Coulomb barrier with respect to the $^{6}$He projectile.
\label{fg:he6bi209}} \end{figure} Above the barrier, the two cross sections, once corrected for the different projectile radii and Coulomb barrier energies, are essentially equal. This suggests that, in this case, the breakup effects have negligible or no influence on the fusion process above the barrier. The breakup process is quite relevant in this system. Indeed, the so-called inclusive $\alpha$ production has been found to be very strong\cite{ag00} compared to fusion, particularly below the barrier. This $\alpha$ production most likely originates from the $^{6}$He breakup. This seems to be the only relevant process in addition to fusion. As a matter of fact, breakup and fusion exhaust the total reaction cross section inferred from the elastic scattering data,\cite{ag01} as shown in Fig.~\ref{fg:he6elas}. \begin{figure}[th] \centerline{\psfig{file=fig3_liang.eps,width=8cm}} \caption{Cross sections for the following processes in the system $^{6}$He+$^{209}$Bi: total reaction (solid curve), total fusion (solid circles), inclusive $\alpha$ production or inclusive breakup (open squares), and total fusion+inclusive breakup (crosses). The total fusion+inclusive breakup cross section exhausts the total reaction cross section in the region where common data exist. \label{fg:he6elas}} \end{figure} The detailed mechanism of the inclusive $\alpha$ production is not yet very clear. These inclusive $\alpha$'s can originate from several processes, namely: $\alpha$+n+n+$^{209}$Bi, $\alpha$+n+(n+$^{209}$Bi), and $\alpha$+(2n+$^{209}$Bi). With parentheses we indicate the nucleus in which one or two neutrons are trapped by the target nuclear potential. The latter two processes are of the transfer/stripping-breakup type, since they have a cross section much larger than that of a conventional transfer process, which is usually well predicted by direct reaction formalisms like the distorted-wave Born approximation (DWBA) or coupled-channels.
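Barrier energies in this review are quoted both in the center-of-mass and laboratory frames, e.g. V$_{B}$ = 19.76 (20.33) MeV for this system. In the nonrelativistic limit the two are related by a simple mass-ratio factor; a minimal sketch (our own helper, using mass numbers in place of exact masses, which accounts for small differences in the last digit):

```python
def cm_to_lab(e_cm, a_proj, a_targ):
    """Nonrelativistic conversion: E_lab = E_cm * (A_p + A_t) / A_t."""
    return e_cm * (a_proj + a_targ) / a_targ

# 6He + 209Bi barrier: 19.76 MeV (c.m.) -> ~20.33 MeV (lab)
print(f"{cm_to_lab(19.76, 6, 209):.2f}")
# 6He + 238U barrier: 21.19 MeV (c.m.) -> ~21.72 MeV (lab), quoted as 21.73
print(f"{cm_to_lab(21.19, 6, 238):.2f}")
```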
Very recently, $\alpha$-n coincidence measurements were undertaken by the same Notre Dame group at 23 MeV, slightly above the Coulomb barrier. In their first experiment,\cite{by04} they studied the $^{209}$Bi($^{6}$He,$^{5}$He)$^{210}$Bi reaction followed by $^{5}$He $\rightarrow$ $^{4}$He+n by measuring the $\alpha$ particle and neutron in coincidence. The neutrons were detected in liquid scintillators with a relatively high threshold ($>$ 1 MeV). The experimental results give evidence that the coincidence events, assigned to one-neutron transfer/stripping-breakup, account for approximately 20\% of the inclusive $\alpha$-particle yield, which has a cross section of 800 mb. In their second experiment,\cite{de04} they studied $^{209}$Bi($^{6}$He,$^{4}$He)$^{211}$Bi by detecting $\alpha$ particles at and beyond the grazing angle in coincidence with neutrons evaporated from $^{211}$Bi. The neutrons were detected in solid plastic scintillators with a lower threshold ($>$0.3 MeV), but unfortunately with a high background of around 45\% of the total neutron yield. In this case, such coincidence events account for about 55\% of the inclusive $\alpha$-particle yield. The remaining 25\% of the inclusive $\alpha$-particle yield, around 200 mb, is assumed to originate from a breakup process with all $^{6}$He fragments, the $\alpha$ and two neutrons, in the exit channel. These results are similar to those for the $^{6}$Li+$^{208}$Pb reaction.\cite{si03} Lithium-6, with S$_{\alpha}$ = 1.47 MeV, is the least bound stable nucleus. In this system, strong inclusive $\alpha$ production was observed. The breakup process, $\alpha$-d as well as $\alpha$-p coincidences, is approximately one order of magnitude smaller, with cross sections ranging from 65 mb to 110 mb around the barrier.
These results make the theoretical description of the $^{6}$He+$^{209}$Bi reaction dynamics quite intriguing, since in addition to the usual coupling to the bound state excitations of the target or projectile and their consequent barrier distributions, one has to consider also the coupling to the breakup channels, which proceeds in most cases via continuum excitations, since they lie above the particle emission threshold. Several theoretical approaches have been undertaken. A first attempt was made in Ref.~\cite{al02}. These calculations (TH1) were done via the coupled-channel formalism using the {\tt ECIS} code.\cite{ecis} The real potential was calculated with a double folding model using the BMD3Y1 interaction.\cite{bmd3y} The imaginary potential had a small radius in order to absorb all the flux penetrating the barrier, simulating incoming wave boundary conditions. However, the real potential had to be reduced by a factor of 0.4 to reproduce the data. This renormalization most likely accounts for the breakup process in a bulk way. In Ref.~\cite{ru04}, two other approaches were followed. In the first one, the calculations (TH2) were done with the coupled discretized continuum channel (CDCC) formalism and two slightly different ways of handling the potentials: a) empirical cluster-target optical potentials, and b) short-ranged imaginary parts. In these cases, the code {\tt FRESCO}\cite{fresco} was utilized. The results of the CDCC b) approach, a slightly better one in our opinion, are shown in Fig.~\ref{fg:he6theor}. In the second one, the calculation (TH3) was done with the barrier penetration model. In this case the potential was parameterized as V$_{nucl}$~=~V$_{bare}$+V$_{pol}$, and the polarization potential was derived from the previous CDCC approach. In TH2 the breakup process is automatically included in the CDCC formalism. In TH3 the breakup enters via the polarization potential V$_{pol}$. See Ref.~\cite{ru04} for more details.
The results of these three different approaches are shown in Fig.~\ref{fg:he6theor}. \begin{figure}[th] \centerline{\psfig{file=fig4_liang.eps,width=8cm}} \caption{Theoretical calculations of the $^{6}$He+$^{209}$Bi fusion excitation function. See text for details. \label{fg:he6theor}} \end{figure} From the inspection of Fig.~\ref{fg:he6theor} we see that TH3 and TH1 seem to reproduce the experimental data better, although not all the points. These two approaches are more phenomenological. TH2 is more fundamental, since it is based on the CDCC formalism, and it reproduces the elastic scattering well, as discussed in Ref.~\cite{ru04}, but fails to give a good reproduction of the fusion. This last problem seems to be, for the moment, always present in this type of approach, as we will discuss later in connection with the $^{6}$He+$^{64}$Zn system. However, it should be emphasized that the spread of these various theoretical predictions is not necessarily bad. It simply demonstrates the evolution and present status of the theoretical efforts and indicates that, most likely, additional work is needed to evaluate the interaction of loosely bound nuclei at the barrier. \subsubsection{The $^{6}$He+$^{238}$U system} This system, with V$_{B}$ = 21.19 (21.73) MeV in the center of mass (laboratory), has been measured twice at Louvain la Neuve. The $^{6}$He beam was produced via the ISOL\cite{ry02} method with an energy resolution of $\sim$0.5\% and an intensity $<5\times 10^{7}$ pps. The first series of measurements was performed at eight energies from 14.6 to 28.7 MeV, with four primary energies and four degraded energies using mylar absorbers.\cite{tr00} The fusion process was identified by the fission channel. The fission fragments were measured with a large solid angle detection system consisting of large area silicon surface barrier detectors arranged around the target in a box-like structure.
In this first measurement, the fission fragments originating from $^{6}$He induced fusion, $^{4}$He induced fusion, and 1n or 2n transfer/stripping-breakup could not be distinguished, so these results overestimate the total fusion cross section. As discussed in several papers, the total fusion cross section for $^{6}$He+$^{238}$U, identified by all the fission events, is much larger, especially below the barrier, than that for $^{4}$He+$^{238}$U, by up to two orders of magnitude. The second experiment\cite{ra04} was set up in order to also measure the $\alpha$-fission fragment coincidences. In this way it was possible to distinguish the fission events originating from total fusion (not in coincidence with $\alpha$ particles) from fission events originating from 1n or 2n transfer/stripping-breakup. Figure~\ref{fg:he6u238} presents the results of the two experiments as well as the cross sections obtained with the $^{4}$He beam for comparison. First of all, it should be noted that at some energies the cross sections measured in the two runs differ well beyond the statistical errors; see, for example, the lowest energy point. In our opinion, this could originate from systematic errors due to the low beam intensity and the related problems of correctly identifying the reaction channel. The most interesting result is, however, that fission below the barrier is only in coincidence with $\alpha$ particles and consequently originates from transfer/stripping-breakup. Such a process should not be included in incomplete fusion. This result is very similar to that of the $^{6}$He+$^{209}$Bi reaction. It is rather surprising that, at this level of accuracy, total fusion (only three points) is not at all reproduced by the CDCC calculations based on the work in Ref.~\cite{ru03}, and that the $^{6}$He induced total fusion is smaller than the $^{4}$He induced fusion, {\it i.e.}, fusion is hindered, not enhanced.
However, the $^{4}$He induced fusion data at the highest energies originate from other measurements, as stated also in Ref.~\cite{al02}. These points should be investigated further, experimentally and theoretically. \begin{figure}[th] \centerline{\psfig{file=fig5_liang.eps,width=8cm}} \caption{Cross sections measured in the system $^{6}$He+$^{238}$U. Total fission from the first experiment (open triangles), from the second experiment (filled triangles), yields of the $\alpha$-fission fragment coincidences, {\it i.e.}, 1n or 2n transfer induced fission (crosses), and total fusion deduced from the second experiment (filled squares). The relative theoretical fusion cross section calculated using CDCC is shown (bold-dotted curve). The cross sections for $^{4}$He induced fusion are also reported (open squares). These last data include a small correction due to the geometrical difference between the two projectile radii. The curves connecting the various points are drawn to guide the eye. \label{fg:he6u238}} \end{figure} \subsubsection{The $^{6}$He+$^{64}$Zn system} This experiment was done at the Cyclotron Research Centre in Louvain la Neuve (Belgium), using the two coupled cyclotrons via the ISOL method,\cite{ry02} with a beam intensity of $\sim 3\times 10^{6}$ pps. Four beam energies were explored in the range of 10 to 13.6 MeV (V$_{B}$ = 9.5 (10.4) MeV in the center of mass (laboratory) system). The $^{6}$He primary beam had its energy fixed at the maximum value, and the lower energy points were obtained with a suitable number of niobium absorbers, each $\sim$ 3 mg/cm$^{2}$ thick. The experimental setup was arranged in order to measure a) the X-rays emitted, following the electron capture decay of various evaporation residues, offline, and b) the $^{6}$He scattering events and the inclusive $\alpha$-particle channels, originating most likely from transfer/stripping-breakup.
For this second part, the large area segmented detector LEDA was utilized.\cite{da00} For comparison, the $\alpha$+$^{64}$Zn system was measured with the same setup at three energies overlapping those of the $^{6}$He+$^{64}$Zn system. The main results\cite{di04} are summarized here. A strong population of the $^{65}$Zn (= n+target) nucleus was observed. The cross section is much larger than the $\alpha$n evaporation yield predicted by the statistical model code {\tt CASCADE}.\cite{cascade} The cross sections for the 2n (producing $^{68}$Ge), pn ($^{68}$Ga), p2n ($^{67}$Ga), and $\alpha$n ($^{65}$Zn) channels were measured. Their sum is essentially the total fusion cross section if the $\alpha$n contribution is excluded, since the $^{4}$He+target incomplete fusion produces, most likely, the same X-ray emitters, $^{67}$Ge (1n evaporation) and $^{67}$Ga (1p evaporation), as the $^{6}$He+target complete fusion. This total fusion could be slightly underestimated since there could be some missing channels, like the 3n (1n) emission from $^{6}$He ($^{4}$He) capture, which, however, are expected to contribute a negligible amount. Still, the comparison with the $^{4}$He+$^{64}$Zn system measured in the same way is quite meaningful. Once the measured $\alpha$n contribution is replaced by that calculated with {\tt CASCADE}, and trivial geometric corrections are included, there is no difference between the $^{6}$He and $^{4}$He+$^{64}$Zn cross sections at the few points measured, {\it i.e.}, there is neither enhancement nor hindrance. The reaction cross sections extracted from the elastic scattering data confirm the existence of strong reaction channel(s) in addition to fusion. The potential extracted from the optical model fit to the elastic scattering data, with $r_{0}$ = 1.2 fm, does not reproduce the fusion cross sections in the {\tt CCFULL}\cite{ccfull} calculations. For a good agreement, a smaller value of $r_{0}$ = 1.0 fm has to be used in the calculations.
But such a choice is difficult to justify on the basis of measured nuclear radii and related systematics, unless $r_{0}$ is treated simply as an adjustable parameter to reproduce the experimental data. The fact that the potential extracted from the elastic scattering data does not reproduce the fusion data is not unexpected. A similar conclusion was already pointed out; see in particular Ref.~\cite{ne04}. The interpretation is that elastic scattering samples a different portion of the potential, more peripheral than the one involved in fusion. Therefore, this potential does not necessarily apply to the fusion process. \subsubsection{The $^{6,8}$He+$^{63,65}$Cu and $^{6}$He+$^{188,190,192}$Os systems} These measurements\cite{na04} were performed at GANIL/SPIRAL, where the radioactive beams are produced via the ISOL method.\cite{vi01} The $^{6}$He ($^{8}$He) beam had an intensity of 10$^{7}$ ($7\times 10^{4}$) pps, a beam spot on the target of 5 (8) mm in diameter, and a good energy resolution of 0.1\% in both cases. The following energies were utilized: $^{6}$He+$^{65}$Cu at 19.5 and 30 MeV (V$_{B}$ = 9.7 MeV), $^{6}$He+$^{63}$Cu at 30 MeV, $^{8}$He+$^{63}$Cu at 27 MeV, and $^{6}$He+Os targets at 30 MeV (V$_{B}$ = 21 MeV). The detector array consisted of 8 $\gamma$-ray clover detectors from EXOGAM, located 10.5 cm from the target, and an annular segmented SSD (16 rings $\times$ 16 segments) for charged particle detection, positioned around 0 degrees at $\sim$3.5 cm from the target. In this way the system could detect in-beam $\gamma$ rays, forward emitted charged particles (CP), and CP-$\gamma$ coincidences. With the Cu targets the following data were obtained: the cross sections for the production of various evaporation residues and/or transfer products (from the characteristic $\gamma$ rays of the nuclei populated), Q-value spectra, and elastic scattering angular distributions, which yielded total reaction cross sections.
It should be pointed out that this is the first time that fusion cross sections induced by RIBs could be measured via in-beam $\gamma$-ray techniques. In the $^{6}$He+$^{65}$Cu system, measured at two energies, it was observed that the population of $^{66}$Cu (=1$n$+$^{65}$Cu) is $\sim$10 times stronger than the $\alpha$n evaporation channel predicted by statistical model calculations. The analysis of the Q-value spectra indicates that $^{66}$Cu is populated mainly by 1$n$ evaporation from $^{67}$Cu produced by 2n+$^{65}$Cu. The fusion cross section, most likely complete fusion, plus this strong transfer/stripping-breakup cross section constitute the largest part, 85\%, of the total reaction cross section deduced from the elastic scattering data. The remaining 15\% is, most likely, the exclusive breakup cross section with all fragments in the exit channel. The $^{4}$He+$^{63,65}$Cu systems were investigated in parallel at the Bombay 14UD BARC-TIFR tandem accelerator (India) via a similar in-beam $\gamma$-ray technique. In this case the strong transfer/stripping-breakup channel was absent. The data collected with the Os targets have, in some cases, lower statistics, and for the $^{190}$Os target the absolute cross sections could not be determined. Nonetheless, the Os target results are consistent with the Cu target results. \subsubsection{Comments on the $^{6}$He induced reactions} Of the total fusion measured with the $^{209}$Bi, $^{238}$U, and $^{64}$Zn targets, only the $^{6}$He+$^{209}$Bi system shows enhancement with respect to the $^{4}$He total fusion, and only for the two lowest energy points. This is, for the moment, attributed to the coupling to breakup excitations, as predicted by theories. A remeasurement of these cross sections with higher statistics would be desirable. The really new and strong effects originating from the breakup process are the inclusive and exclusive $\alpha$-particle yields.
The strongest channels are assigned to the formation of the systems (n+target) and (2n+target). These are most likely not conventional transfer processes, which usually have moderate cross sections, but processes where unbound, highly excited states are formed in a sort of compound nucleus. In order to search for possible common features in the $^{6}$He induced reactions, we have plotted in Fig.~\ref{fg:he6comm} the following data: the total reaction cross section, the fusion+1n transfer/stripping-breakup cross section, the total fusion cross section, and the residual cross section, as a function of E$_{cm}$/V$_{B}$, with V$_{B}$ computed with $r_{0}$ = 1.56 fm. The residual cross section is the difference between the total reaction and the fusion+1n transfer/stripping-breakup cross sections. It should account for all the processes not included in the fusion and in the 1n transfer/stripping-breakup, which are essentially the 2n transfer/stripping-breakup and the exclusive breakup with all $^{6}$He fragments in the exit channel. The fusion and fusion+1n transfer/stripping-breakup cross sections increase continuously and smoothly with E$_{cm}$/V$_{B}$. This is essentially the increase of the fusion cross section with rising bombarding energy. Thus all three systems seem to behave in a similar way in this respect. The scenario is somewhat different for the total reaction cross sections and the related residual cross sections. The residual cross sections, according to the Zn and Bi data, have a maximum around the barrier. The actual height of this maximum, with respect to the Cu target data, is in part questionable, since the authors state that the 1n transfer/stripping-breakup events could also originate from 1n evaporation from the system 2n+target.\cite{na04} A possible interpretation of this maximum is that the breakup phenomena are strongest around the barrier.
This behavior is peculiar because it is different from that of the other two cross sections, and it deserves further investigation since this is the first time that it has been observed. In any case, since these various cross sections were measured with different experimental methods, they should be confirmed by independent check experiments. \begin{figure}[th] \centerline{\psfig{file=fig6_liang.eps,width=8cm}} \caption{Experimental cross sections for the following processes induced by $^{6}$He on $^{65}$Cu, $^{64}$Zn and $^{209}$Bi targets: total fusion, total fusion + 1n transfer/stripping-breakup, total reaction, and residual cross sections (labelled by ``rest'' in the figure). The residual cross section is the difference between the total reaction and the total fusion + 1n transfer/stripping-breakup cross sections. \label{fg:he6comm}} \end{figure} \subsection{$^{11,10}$Be beams} These experiments were done at RIKEN in the RIBS beam line.\cite{ku92} The $^{11,10}$Be beams were produced in-flight by fragmentation of a 100 MeV/nucleon $^{13}$C primary beam on a thick beryllium production target. The Be beams had energies $\sim$10 times larger than needed for fusion measurements at Coulomb barrier energies. The Be beams were, therefore, heavily degraded, producing final beams with a large energy spread, 35 to 55 MeV, and poor emittance, $\sim$5 cm diameter at a $^{209}$Bi target. The Be beam intensities finally achieved were $>$10$^{5}$ pps. The energy of the beam particles producing fusion events was tagged event-by-event via the time-of-flight over a flight path of $\sim$6 m. The detection system consisted of large-area SSDs, each with an active area of 5$\times$5 cm$^{2}$, arranged in a compact box-like structure as close as possible to a multitarget setup.
The fusion events were identified by $\alpha$ particles, with characteristic lifetimes and energies, emitted in the decay of the various evaporation residues populated after neutron evaporation from the compound nucleus, and by fission fragments detected in coincidence in two opposite SSDs of the box.\cite{si98,si04} The fusion cross sections were identified as the sum of the fission and 4n+5n (3n+4n) evaporation channels for $^{11}$Be ($^{10}$Be). These fusion cross sections could be slightly underestimated below the barrier since the 3n (2n) channel for $^{11}$Be ($^{10}$Be) could not be measured due to its long lifetime, T$_{1/2}$ = 16 $\mu$s, which resulted in high random rates. Moreover, the evaporation channels with at least one charged particle, like p$x$n, were not identified, but they are expected to be negligible. Even with these limitations, the comparison of $^{11}$Be and $^{10}$Be fusion is quite meaningful since the data were measured and analyzed in the same way. For $^{10}$Be, the complete fusion cross section is taken as the total fusion because no breakup processes are realistically expected, since $^{10}$Be with S$_{n}$ = 6.8 MeV is tightly bound. For $^{11}$Be, mainly total fusion was measured since the incomplete fusion of $^{10}$Be, from $^{11}$Be breakup, was estimated in a previous experiment\cite{yo96} to be $<$30\% of the $^{11}$Be complete fusion. These cross sections came from two independent measurements done some years apart. The statistics of the data are essentially limited by the low intensity of the radioactive beams.
This causes a scattering of the various results beyond the statistical errors, as discussed in detail in Refs.~\cite{si98,si04}. The results are compared in the top panel of Fig.~\ref{fg:be11bi209}, as well as with the stable $^{9}$Be beam.\cite{si99} \begin{figure}[th] \centerline{\psfig{file=fig7a_liang.eps,width=8cm}} \centerline{\psfig{file=fig7b_liang.eps,width=8cm}} \caption{Total fusion cross sections for the systems $^{9,10,11}$Be+$^{209}$Bi (top) and comparison between theories and experimental cross sections for the $^{11}$Be+$^{209}$Bi system (bottom). \label{fg:be11bi209}} \end{figure} Within the statistical accuracy of the data, the three cross sections look similar. This is astonishing since $^{10}$Be is tightly bound while both $^{9}$Be and $^{11}$Be are loosely bound. In addition, the latter nucleus has a well-established neutron halo structure while $^{10}$Be is well bound like most stable nuclei. Three theoretical approaches have been followed to describe these results: \begin{itemize} \item[a)] {\tt CCFULL}:\cite{si98,si04} The main coupling considered was the excitation of a collective, rotational-like structure, with no breakup coupling. \item[b)] CDCC using the code {\tt FRESCO}\cite{fresco}:\cite{si98,si04} Only the two-body breakup coupling to discretized continuum states was calculated. These calculations need several days of CPU time on modern fast PCs. \item[c)] Coupled-channel calculations using the {\tt ECIS} code,\cite{al02} as already mentioned in the discussion of the $^{6}$He+$^{209}$Bi system. \end{itemize} The bottom panel of Fig.~\ref{fg:be11bi209} shows the results of the first two approaches, which are similar. However, if we also consider the other two beams, $^{10}$Be and $^{9}$Be, the CDCC approach, particularly in the $^{9}$Be case\cite{si98,si04} not shown in Fig.~\ref{fg:be11bi209}, underpredicts the cross sections.
Apparently, the collective excitations, better handled by the {\tt CCFULL} code, have more strength than the breakup mode. The third approach looks, on average, worse, since it overestimates the cross section at most of the energies. But, as already pointed out, the breakup coupling is included there in an indirect way via renormalization of the nuclear potential. The large collective excitation strength versus the breakup strength seems to appear also in the scattering of loosely bound $^{17}$F (S$_{p}$ = 0.600 MeV) compared with the well bound $^{19}$F (with possible collective structure), as discussed later in this review (Sect. {\bf 4.4}). In the case of $^{19}$F, at energies around the Coulomb barrier, the reaction cross section is larger than for $^{17}$F, most likely due to a rotational-like level structure which may be excited more easily than $^{17}$F undergoes breakup. For completeness, we should mention that the first attempt at GANIL to measure the $^{11}$Be+$^{238}$U system\cite{fe99} led to the successful measurement of $^{6}$He+$^{238}$U with the same technique.\cite{tr00} The statistics were very limited, so no conclusion can be drawn from the data. \subsection{$^{11}$C beam} This experiment was performed at the 88-inch cyclotron at Lawrence Berkeley National Laboratory. The system studied was $^{11}$C+$^{197}$Au.\cite{jo00} This experiment is worth mentioning mainly for the technique used for the production of this RIB. The system utilized was BEARS (Berkeley Experiment with Accelerated Radioactive Species), which was based on two coupled cyclotrons. Carbon-11, $t_{1/2}$ = 20 min., was produced via the $^{14}$N(p,$\alpha$) reaction in a 20 atm nitrogen gas target using the cyclotron of the Biomedical Isotope Facility, located 350 m away from the 88-inch cyclotron, and then transported by a dedicated transfer line to the latter cyclotron for further acceleration.
This system provided a continuous $^{11}$C beam with a remarkable intensity of 1--2$\times 10^{8}$ pps on target. This experiment measured the excitation functions for the population of the evaporation residues, following $x$n evaporation from the compound nucleus. These cross sections were deduced from the yield of the $\alpha$ particles emitted in the ground state decay of the various At nuclei produced. The excitation functions were measured in the range of 66 to 110 MeV. The results, compared with the stable $^{12}$C beam, are in agreement with standard evaporation model predictions by the {\tt HIVAP} code\cite{hivap}. \subsection{$^{17}$F beam} In contrast to most of the experiments discussed in the previous subsections, where the interplay of neutron breakup and fusion is investigated, this subsection explores the influence of proton breakup on fusion. Fluorine-17 is a proton drip-line nucleus with a ground state binding energy of 0.600 MeV. Its first excited state is bound by only 0.105 MeV with an extended r.m.s. radius and is considered a halo state.\cite{mo97} If the $^{17}$F nucleus is excited to its first excited state before fusion, the fusion cross section is expected to increase because of the barrier lowering associated with the larger radius. In addition, if $^{17}$F breaks up prior to fusion, the core nucleus $^{16}$O has a lower Z and a lower Coulomb barrier. The incomplete fusion is expected to be enhanced, too. On the other hand, as discussed in the previous section, breakup removes $^{17}$F flux available for fusion, resulting in fusion suppression. Measurements of fusion of $^{17}$F with $^{208}$Pb were performed with the radioactive $^{17}$F produced by an in-flight method using the $p$($^{17}$O,$^{17}$F)$n$ reaction at ATLAS.\cite{re98} The $^{17}$F beam intensity was between 1 and 2$\times 10^{5}$ pps with an energy resolution of about 2.5\%. There was a significant $^{17}$O isobar contamination in the beam with the same magnetic rigidity.
The ratio of $^{17}$F to $^{17}$O varied with beam energy but was usually around 1. Since the energy of the $^{17}$O contaminant was approximately 20\% lower than that of $^{17}$F, the fusion of $^{17}$O with the target was estimated to contribute less than 3\% to the total fusion cross section. The compound nucleus decays by fission; therefore, the fusion cross section was identified with the fission cross section. The fission fragments were detected in coincidence by four large-area Si detectors. It is noted that incomplete fusion, capture of $^{16}$O by the target following $^{17}$F $\rightarrow$ $^{16}$O+$p$ breakup, was not excluded from this measurement, so the data give the total fusion yield (CF+ICF). The measured fusion-fission excitation function is compared to those of $^{19}$F+$^{208}$Pb and $^{16}$O+$^{208}$Pb. No fusion enhancement was observed for $^{17}$F induced fusion with respect to the stable $^{19}$F and $^{16}$O induced fusion on the same target. At the lowest energy of the measurement, the fusion-fission cross section is suppressed by a factor of 4. The breakup of $^{17}$F may be a factor in influencing fusion. Measurements of $^{17}$F breakup by scattering on a $^{208}$Pb target were performed at energies above the Coulomb barrier (98, 120, and 170 MeV) at HRIBF (Oak Ridge National Laboratory)\cite{li00,li02,li03a} and below the barrier (90 MeV) at ATLAS\cite{ro04}. The HRIBF produced $^{17}$F by the ISOL technique. The intensity was 2$\times 10^{5}$ pps for the first measurement and increased to $10^{7}$ pps for subsequent measurements a year later. Since the secondary beams were accelerated by a tandem accelerator, the beam quality (energy resolution and beam spot) was as good as for stable beams. The $^{17}$O isobar contaminants were removed by selecting the 9$^{+}$ charge state using an analyzing magnet. Two modes of breakup, diffraction (exclusive, two-body) and stripping (inclusive, one-body), were measured at 170 MeV.
The diffraction breakup, which has $^{16}$O and $p$ in the exit channel, was found to be a factor of four smaller than the stripping breakup (only $^{16}$O in the exit channel). This observation agrees with predictions by Esbensen.\cite{li02} At energies near the barrier, only the stripping breakup was measured. The energy dependence of the breakup is presented along with that of fusion in Fig.~\ref{fg:f17pb}. The stripping breakup cross sections are smaller than the fusion cross sections by factors of 4 to 10, and the diffraction breakup is four times smaller than the stripping breakup. At energies below the barrier, the diffraction breakup was studied at backward scattering angles.\cite{ro04} The angle-integrated breakup cross section agrees with the theoretical prediction and is very small. It is concluded that the breakup of $^{17}$F is weak and has no noticeable influence on fusion. \begin{figure}[th] \centerline{\psfig{file=fig8_liang.eps,width=8cm}} \caption{The excitation function of $^{17}$F+$^{208}$Pb fusion\protect\cite{re98} is shown by the open circles, inclusive breakup\protect\cite{li03a} by the crosses, and exclusive breakup\protect\cite{ro04} by the filled squares. The reaction cross sections obtained from the optical model analysis of elastic scattering data are shown by the thin dash-dotted curve. The dashed and dash-dotted curves are for stripping and diffraction breakup, respectively.\protect\cite{li03a} \label{fg:f17pb}} \end{figure} The elastic scattering of $^{17}$F+$^{208}$Pb near the barrier was analyzed by an optical model.\cite{li03a,ro04} Due to the thick targets used in these experiments, the energy resolution was poor. The excitation of $^{17}$F to its first excited state, as well as other quasielastic scattering resulting in a small energy and/or mass change, could not be separated from the pure elastic scattering.
The shape of the angular distribution for $^{17}$F+$^{208}$Pb elastic scattering is similar to that of $^{16,17}$O+$^{208}$Pb, particularly at backward angles. These differential cross sections are much larger than in the $^{19}$F+$^{208}$Pb system, which indicates a larger absorption cross section in $^{19}$F+$^{208}$Pb. The total reaction cross sections obtained from the optical model analysis are slightly larger than the fusion cross sections, as shown in Fig.~\ref{fg:f17pb}. The small difference between the total reaction and fusion cross sections suggests that other reaction channels such as breakup are weak and fusion enhancement is not expected. It was speculated by Romoli {\it et al.} that the differences in the elastic scattering between $^{17}$F and $^{19}$F may arise from the large collectivity of $^{19}$F, where the ground state is strongly coupled to its first excited state.\cite{ro04} As a result, the excitation probability of $^{19}$F could be significantly larger than the breakup probability of $^{17}$F. Furthermore, calculations show that the excitation of $^{17}$F to the first excited state is about 30\% larger than the diffraction breakup.\cite{re98} Therefore, fusion following excitation to the halo state could be too small to be seen. A large suppression of complete fusion was observed in $^{9}$Be+$^{208}$Pb at energies above the barrier.\cite{da99} The sum of evaporation residue and fission cross sections accounts for only 68\% of the expected fusion cross section. In those measurements, the capture of a charged fragment from the breakup of $^{9}$Be by the target (incomplete fusion) was observed. A similar situation occurs in $^{9}$Be+$^{209}$Bi.\cite{si99} Together, the complete and incomplete fusion make up the expected fusion cross sections. In the $^{17}$F experiments, elastic scattering and breakup reaction data already exist.
Further measurements of complete and incomplete fusion separately in $^{17}$F+$^{208}$Pb would be useful for a complete theoretical description of breakup and fusion. It should be pointed out that dynamical polarization is predicted to be present when a loosely bound nucleus approaches a heavy nucleus with a large Coulomb field.\cite{es96,es02} Moreover, the dynamical polarization of a nucleus with a loosely bound proton pushes the valence proton to the far side, where it is shielded by the core nucleus. This reduces the breakup probability. In contrast, a nucleus with a loosely bound neutron tends to be polarized in such a way that the valence neutron lies between the core nucleus and the target, which leads to large breakup cross sections. This may be the reason for the low breakup cross sections of $^{17}$F but the large breakup cross sections of $^{6}$He. \section{Heavy-ion reactions} In the following subsections, fusion induced by medium-mass neutron-rich radioactive nuclei is presented. This topic is of particular interest for the potential use of neutron-rich RIBs for producing heavy elements. It is noted that for the following systems involving neutron-rich RIBs the excitation functions will be compared in reduced coordinates using the Bass model fusion barrier\cite{ba74}, and $\pi$R$^{2}$ with R=1.2(A$_{p}^{1/3}$+A$_{t}^{1/3}$) fm, where A$_{p}$ and A$_{t}$ are the masses of the projectile and target, respectively, to normalize the center-of-mass reaction energy and the cross section. This is different from the procedures described in the light-ion section because the medium-mass radioactive nuclei used in experiments so far are not very extended or loosely bound. \subsection{$^{29,31}$Al beams} The radioactive Al beams were produced by fragmenting 90 MeV/nucleon $^{40}$Ar at RIKEN. Thick Al degraders were used to decrease the secondary Al beams to the appropriate energy region. This resulted in a very large energy spread in the beams.
It was necessary to use event-by-event time-of-flight measurements to define the energy of each beam particle. These were also used in the data analysis for setting gates to remove events originating from contaminant-induced reactions. The beam intensity was 1$\times 10^{5}$ and 3$\times 10^{4}$ pps for $^{29}$Al and $^{31}$Al, respectively. To avoid changing beam energies, a stack of ten mylar-backed $^{197}$Au targets was placed in series along the beam direction. The excitation function was thus obtained in one run.\cite{wa01} The compound nuclei formed in these reactions decay by fission. Therefore, the measured fission excitation function was taken as the fusion excitation function. The fission fragments were identified by two pairs of multiwire proportional counters (MWPC) placed on each side of the target stack. Valid fission events required the simultaneous detection of both fragments with the MWPCs. The target from which the fission fragments originated was identified by reconstructing the tracks of the fragments. The cross sections were scaled and checked with separate measurements using stable $^{27}$Al beams. All three excitation functions, $^{27,29,31}$Al+$^{197}$Au, exhibit large enhancement near and below the barrier with respect to barrier penetration model predictions.\cite{wa01} The enhancement cannot be accounted for by coupled-channel calculations including excitation of the projectile and target, and the static deformation of the target. Nevertheless, when the excitation functions are plotted in reduced coordinates, which factor out the differences in nuclear sizes and barrier heights, they overlap each other, as shown in Fig.~\ref{fg:al197au}. In order to avoid the influence of coupling to the intrinsic degrees of freedom at energies near the barrier, the cross sections from high-energy measurements were fitted with $\sigma=\pi R_{b}^{2}(1-V_{b}/E)$ to extract the barrier height $V_{b}$ and barrier radius $R_{b}$.
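The classical-limit fit just described is linear in $1/E$, since $\sigma=\pi R_{b}^{2}-\pi R_{b}^{2}V_{b}/E$, so $V_{b}$ and $R_{b}$ follow from a straight-line fit. A minimal sketch with synthetic, purely illustrative numbers (not the measured Al+Au cross sections; the helper names are ours):

```python
import math

def classical_sigma(e, rb, vb):
    """Classical fusion cross section sigma = pi * Rb^2 * (1 - Vb/E) for E > Vb.
    Rb in fm, Vb and E in MeV; 1 fm^2 = 10 mb, so the result is in mb."""
    return math.pi * rb ** 2 * (1.0 - vb / e) * 10.0

def fit_barrier(energies, sigmas):
    """Least-squares line sigma = a + b/E; then Vb = -b/a, Rb = sqrt(a/(10 pi))."""
    xs = [1.0 / e for e in energies]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(sigmas) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, sigmas)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return -b / a, math.sqrt(a / (10.0 * math.pi))  # (Vb in MeV, Rb in fm)

# Illustrative input: Rb = 10.5 fm, Vb = 110 MeV, sampled well above the barrier.
energies = [130.0, 150.0, 170.0, 190.0, 210.0]
sigmas = [classical_sigma(e, 10.5, 110.0) for e in energies]
vb, rb = fit_barrier(energies, sigmas)
print(f"Vb ~ {vb:.1f} MeV, Rb ~ {rb:.2f} fm")
```

Restricting the fit to well-above-barrier points is what justifies the classical formula, since there the couplings that distort the sub-barrier excitation function are unimportant.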
It was found that the barrier heights for $^{29}$Al and $^{31}$Al induced fusion were reduced by 3.4 and 4.5 MeV with respect to $^{27}$Al, and the barrier radii were decreased by 0.1 and 0.2 fm, respectively. In contrast, the Bass model\cite{ba74} predicts that the reduction of the barrier height and the decrease of the barrier radius are 1.2 MeV and 0.2 fm, respectively, for $^{29}$Al induced fusion as compared to $^{27}$Al induced fusion. The larger reduction in the measured barrier height compared to the Bass model prediction may be due to the large deformation of $^{27}$Al. Watanabe {\it et al.} fitted the excitation functions with Stelson's model assuming a flat distribution of barriers.\cite{st90} The threshold barrier for neutron flow correlates with the binding energy of the participants. The neutron binding energies are 13.1, 9.4, 7.2, and 8.1 MeV for $^{27,29,31}$Al and $^{197}$Au, respectively. The neutrons are likely to flow from Au to $^{27,29}$Al and from $^{31}$Al to Au. Although the threshold barrier height and the direction of flow are different in the three systems, the barrier thickness and the sub-barrier enhancement are very similar, which is surprising and also puzzling.\cite{wa01} It is noted that the Q-values for two- and four-neutron pickup in $^{27}$Al and two-neutron stripping in $^{31}$Al are positive. All the other neutron transfer reactions (up to four neutrons) have negative Q-values. For this reason, the fusion is not likely to be enhanced by sequential neutron transfer. This may distinguish neutron flow from neutron transfer as a mechanism that can enhance the sub-barrier fusion rate. On the other hand, the Q-values for proton stripping reactions become more positive as the Al isotopes become more neutron-rich. For instance, the Q-value is as large as 16.3 MeV for $^{197}$Au($^{31}$Al,$^{34}$S)$^{194}$Os.
The coupling of proton transfer reactions, which was not considered in the previous calculations, may be important in neutron-rich radioactive nucleus induced fusion.\cite{he87} \begin{figure}[th] \centerline{\psfig{file=fig9_liang.ps,width=8cm}} \caption{Comparison of fusion excitation functions for $^{27,29,31}$Al+$^{197}$Au in reduced coordinates. The open circles are for $^{27}$Al induced fusion, the closed circles are for $^{29}$Al, and the open triangles are for $^{31}$Al. \label{fg:al197au}} \end{figure} \subsection{$^{38}$S beam} This experiment compares the fusion excitation function of radioactive $^{38}$S on $^{181}$Ta to that of stable $^{32}$S on the same target.\cite{zy97,zy01} Since neither the sulfur isotopes nor the $^{181}$Ta nucleus is magic, any effects due to shell closure can be realistically excluded. The $^{38}$S beams were produced by projectile fragmentation of a 40 MeV/nucleon $^{40}$Ar beam on a $^{9}$Be target at NSCL. The energy of the $^{38}$S beam delivered to the experiment was 8 MeV/nucleon; therefore, beam degradation was necessary. This was achieved by using Al foils of various thicknesses mounted at the entrance of the scattering chamber to degrade the beam to the desired energies, 161.2 to 254.0 MeV. An energy spread of 2.5 MeV FWHM was observed as a result of the degradation process. The beam intensities on target ranged from 2$\times 10^{3}$ to 1$\times 10^{4}$ pps with 85\% to 90\% $^{38}$S. Two sets of timing channel plates and parallel plate avalanche counters (PPAC) were used for event-by-event beam tracking and time-of-flight measurement. According to a statistical model prediction, more than 99\% of the compound nuclei formed in this reaction decay by fission. The measured fission excitation function was taken as the fusion excitation function. The fission fragments were detected in an array of PPACs and Si detectors.
A valid event was defined by a fragment-fragment coincidence gated by the time-of-flight associated with the $^{38}$S beam. The fission cross section was normalized to the Rutherford scattering measured by a forward-angle Si detector. Quasifission usually complicates fusion-fission measurements in heavy systems. In quasifission, the interacting nuclei are captured inside the fusion barrier but fail to evolve inside the fission saddle point and reseparate. Thus the process differs from fission of a fully equilibrated compound nucleus. To distinguish these two processes in a low-statistics RIB experiment is very difficult. Nevertheless, one-dimensional barrier penetration models and coupled-channel calculations give results for capture inside the fusion barrier, which includes quasifission for heavy systems. Here, the fusion cross section actually refers to the capture cross section. The excitation functions for $^{32}$S and $^{38}$S induced fusion are compared in Fig.~\ref{fg:s38ta} by reduced cross section and reduced energy as described in the previous section. The fusion cross sections for $^{38}$S+$^{181}$Ta are generally larger than those for $^{32}$S+$^{181}$Ta. It is noted that the comparison is presented differently in Refs.~\cite{zy97,zy01}, where the reduced cross section and reduced energy are calculated with the experimentally extracted barrier radius and barrier height, respectively. In that case, the two excitation functions coincide. The barrier height $V_{b}$ and barrier radius $R_{b}$ extracted by fitting the high-energy cross sections to the classical limit $\sigma=\pi R_{b}^{2}(1-V_{b}/E)$ show that the barrier height was reduced significantly, by 5.9 MeV, and the barrier position shifted by 1.8 fm for $^{38}$S with respect to $^{32}$S.\cite{zy01} According to the Bass model, the barrier height decreases by 3.4 MeV and the barrier radius increases by 0.4 fm.
The larger barrier shift may be attributed to the deformation of $^{38}$S ($\beta_{2}$=0.246)\cite{sc96}. This deformation, not accounted for in the Bass model, may explain the difference between Fig.~\ref{fg:s38ta} (enhancement) in this paper and Fig.~4 in Ref.~\cite{zy97} (no enhancement). The Q-value for $^{38}$S induced fusion is 6.3 MeV lower than that for $^{32}$S induced fusion. The excitation energy of the compound nucleus is, therefore, 12.2 MeV lower at the barrier. As the excitation energy of the compound nucleus decreases, the total number of neutrons evaporated decreases; however, the strength of the lower-multiplicity neutron evaporation channels actually increases. This is a very valuable piece of information: a colder compound nucleus formed in neutron-rich radioactive nucleus induced fusion will increase the production yield of neutron-rich heavy elements. \begin{figure}[th] \centerline{\psfig{file=fig10_liang.ps,width=8cm}} \caption{Reduced fusion-fission excitation functions for $^{38}$S+$^{181}$Ta (closed circles) and $^{32}$S+$^{181}$Ta (open circles). \label{fg:s38ta}} \end{figure} \subsection{$^{132}$Sn beam} The advent of accelerated $^{132}$Sn beams for experiments is important for nuclear physics since $^{132}$Sn is a doubly magic nucleus (Z=50, N=82). It has a large N/Z ratio (1.64) and eight extra neutrons compared to the heaviest stable Sn isotope, $^{124}$Sn. The fusion-evaporation excitation functions of $^{64}$Ni on even Sn isotopes from A=112 to 124 have been measured.\cite{fr83} Being able to extend the Ni+Sn systems to the very neutron-rich $^{132}$Sn+$^{64}$Ni system is therefore very exciting. The $^{132}$Sn beams were produced with the ISOL technique at the HRIBF. The short-lived $^{132}$Sn ($t_{1/2} = 39.7$ s) is a product of proton-induced $^{238}$U fission.
Since the dominant mass-132 isobar produced is $^{132}$Te, sulfur was introduced into the ion source to purify the beam.\cite{st03} This relies on the fact that sulfur and tin form chemical compounds at a much higher rate than the other mass-132 isobars produced. The SnS$^{+}$ molecular ions were subsequently broken up in a charge exchange cell. In this way, the composition of the beam was 96\% $^{132}$Sn. The small amount of $^{132}$Te had a negligible effect on the measurement because its higher Z gives a higher Coulomb barrier and fusion is, therefore, suppressed. The Sn$^{-}$ ions were accelerated in the 25 MV tandem electrostatic accelerator. The average beam intensity for the experiment was 2$\times 10^{4}$ pps with a maximum of 5$\times 10^{4}$ pps. Since at energies near and below the Coulomb barrier the predominant decay mode of the compound nucleus is particle evaporation, residue cross sections were measured. This is an inverse kinematics reaction; therefore, the residues are forward focused. A combination of time-of-flight detectors and an ionization chamber placed at zero degrees provided high efficiency for residue detection. Cross sections of the order of millibarns can be measured. The evaporation residue excitation function for $^{132}$Sn+$^{64}$Ni is shown in Fig.~\ref{fg:sn132ni} in reduced coordinates.\cite{li03b} The cross sections for stable even Sn isotopes are shown in the same figure for comparison. As can be seen, the $^{132}$Sn induced fusion is strongly enhanced at sub-barrier energies. Coupled-channel calculations were performed to compare with the measured data. Since statistical model calculations predicted that fission is negligible at E$_{cm} \leq$ 160 MeV, the evaporation residue cross sections are taken as fusion cross sections.
Coupled-channel calculations including projectile and target excitation to their first 2$^{+}$ and 3$^{-}$ states are in fair agreement with the $^{64}$Ni+$^{124}$Sn data but fail to reproduce the $^{64}$Ni+$^{132}$Sn data, as shown in Fig.~\ref{fg:sn132cc}. It is noted that $^{64}$Ni+$^{124}$Sn has only one transfer channel, ($^{64}$Ni,$^{66}$Ni), with a positive Q-value. For $^{64}$Ni+$^{132}$Sn, the Q-values for $^{64}$Ni picking up two to six neutrons are positive. Coupled-channel calculations including inelastic excitation and transfer reactions reproduce the $^{64}$Ni+$^{124}$Sn data well. However, a large discrepancy remains between the calculation, shown by the solid curve in Fig.~\ref{fg:sn132cc}, and the measurement for $^{64}$Ni+$^{132}$Sn. It should be pointed out that the form factors implemented for the multinucleon transfer\cite{li04} were extrapolated from the $^{58}$Ni+$^{124}$Sn measurements.\cite{ji98} Further development of the calculations and measurements of nucleon transfer are necessary for a better understanding of the enhancement observed here. The $^{64}$Ni+$^{124,132}$Sn comparison is very similar to that of $^{40}$Ca+$^{90,96}$Zr.\cite{ti98} There is no transfer channel with a positive Q-value in $^{40}$Ca+$^{90}$Zr, but there are several neutron transfer channels with positive Q-values in $^{40}$Ca+$^{96}$Zr. Coupled-channel calculations considering inelastic excitation reproduce the $^{40}$Ca+$^{90}$Zr data well. When inelastic excitation and transfer were coupled, the calculations underpredicted the $^{40}$Ca+$^{96}$Zr data. However, only simultaneous nucleon transfer was considered in those calculations, and sequential nucleon transfer may be more important.\cite{ti98} Moreover, Stelson's neutron flow mechanism may play a role in $^{40}$Ca+$^{96}$Zr because the barrier distribution is broader and flatter compared to that of $^{40}$Ca+$^{90}$Zr.
It is conceivable that the neutron flow mechanism can become more dominant than the coupling to surface modes in very neutron-rich nucleus induced fusion. When the $^{132}$Sn beam intensity is greater in the future, it will be very useful to perform high-precision excitation function measurements to study the barrier distributions. \begin{figure}[th] \centerline{\psfig{file=fig11_liang.ps,width=8cm}} \caption{Reduced evaporation residue excitation function for $^{64}$Ni+$^{A}$Sn. The filled circles are for reactions induced by $^{132}$Sn. \label{fg:sn132ni}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=fig12_liang.ps,width=10cm}} \caption{Comparison of measured ER excitation functions with coupled-channel calculations. The left panel is for $^{132}$Sn+$^{64}$Ni and the right panel is for $^{64}$Ni+$^{124}$Sn\protect\cite{fr83}. The measured ER cross sections are shown by the filled circles and open triangles for $^{132}$Sn+$^{64}$Ni and $^{64}$Ni+$^{124}$Sn, respectively. The one-dimensional barrier penetration model prediction, coupled-channel calculations including inelastic excitation, and calculations including both inelastic excitation and transfer are shown by the dotted, dashed, and solid curves, respectively. \label{fg:sn132cc}} \end{figure} In the $^{38}$S and $^{29,31}$Al experiments, no further fusion enhancement relative to the stable-isotope induced reactions was found, but a large enhancement was observed in the $^{132}$Sn experiment. It is noted that in the former two experiments the neutron-rich nucleus is the light reactant, whereas in the latter experiment the neutron-rich nucleus is the heavy reactant. Moreover, $^{132}$Sn has eight extra neutrons compared to the heaviest stable Sn isotope, whereas $^{38}$S and $^{31}$Al have two and four extra neutrons compared to the heaviest stable S and Al isotopes, respectively.
In fusion induced by neutron-rich nuclei such as $^{38}$S, the lowering of the excitation energy can increase the cross sections for lower-multiplicity neutron evaporation channels. If the fusion is further enhanced, as seen in the $^{132}$Sn induced fusion, the cross sections for lower-multiplicity neutron evaporation channels can be increased further. There could thus be an advantage to using neutron-rich RIBs to produce heavy elements if the intensity is sufficiently high. Experiments using more neutron-rich radioactive nuclei are underway to look for systematic trends. These new experiments will provide more clues for understanding the mechanisms of fusion enhancement. As the nuclei become more neutron-rich, the neutron binding energy decreases and neutron breakup will start to play a role. Whether neutron breakup in heavy-ion reactions influences fusion in the same way as in light-ion reactions, and the implications for heavy-element production, will be interesting to study. \section{Concluding Remarks} \subsection{Present situation} From the analysis of the various results presented in this review, it is clear that they presently suffer from a lack of good statistics as compared with those obtained with stable beams, which are usually at least three orders of magnitude more intense. The situation will remain like this for some time, until new dedicated RIB facilities are in operation.\cite{ri00,eu03,riken} These first-generation experimental data still give valuable results, but one should only expect to get hints from them rather than clear indications as with stable beams. For the moment, the main indications, keeping in mind the $^{6}$He results, are that for light nuclei with loosely bound neutrons there is not such a large sub-barrier fusion enhancement as predicted by many theories.
However, there is a new effect which appears in a strong channel(s) where the light projectile fragment(s) are trapped into the target by the nuclear field in a nonconventional transfer process that could be called transfer/stripping-breakup. Such a process seems to be weaker with light projectiles when they have loosely bound protons, as in $^{17}$F. This different behavior could be tentatively explained by invoking different effects of the polarization potentials for the two projectiles. The details of such a process, which seems to be strongest at the barrier (see Fig.~\ref{fg:he6comm}), have to be further investigated theoretically and experimentally. With medium mass RIBs, no fusion enhancement is expected from the halo structure since the beams that are available today are not near the drip-lines. The relevant point could be the large neutron excess of some specific RIBs like $^{132}$Sn. The data are, for the moment, quite scarce and scattered throughout the nuclear chart, so it is nearly impossible to see any trend. Only for the heaviest system discussed in this paper is sub-barrier fusion enhanced. This may be attributed to the large neutron excess of the projectile, $^{132}$Sn, which has eight more neutrons than the heaviest stable $^{124}$Sn. It is conceivable that neutron transfer plays an important role in enhancing fusion yields. But the simplified treatment of transfer channels in the theory fails to predict such an enhancement. More systematic data focused on specific systems will be necessary for heavy systems in order to identify a trend in fusion. \subsection{Perspectives} Nuclear reactions are often used as tools for probing nuclear structure. As more species of RIBs become available and further away from stability, fusion induced by neutron-rich radioactive nuclei could be used to explore the properties of neutron skin and neutron halo in heavier nuclei.
In very heavy reaction systems, the extra-push energy is required for complete fusion.\cite{sw81} The extra-push energy depends on the effective fissility parameter, a measure of the Coulomb repulsion against the nuclear surface tension, of the system. How the extra-push energy affects fusion induced by very neutron-rich radioactive nuclei is an important research subject. It can provide information on whether it is practical to use such beams for producing heavy elements. Another quantity related to heavy element production is the survival probability, which correlates with the fission barrier. The systematics of fission barrier heights is obtained from stable and proton-rich nuclei\cite{si86}. With shell corrections, fission barriers for neutron-rich heavy elements were predicted. The location of shell closure in very heavy nuclei may be probed by fusion studies using neutron-rich radioactive beams. To better understand the reaction dynamics, measuring evaporated particles from neutron-rich radioactive nucleus induced fusion will be useful. One can also learn about the level density of a neutron-rich compound nucleus. It has been demonstrated that barrier distributions can be extracted from high precision fusion excitation function measurements. The distribution of barriers can reveal the signature of channel couplings. It identifies the important channels which contribute to the sub-barrier fusion enhancement. With the RIBs available today, it is not practical to perform such measurements because the intensity is orders of magnitude too low. Some proposed new RIB facilities are designed to deliver beams of intensity comparable to stable beams. These new facilities will allow the barrier distribution to be measured in a reasonable period of beam time. While we are devoting a great deal of effort to RIB experiments, we should not neglect stable beam experiments. 
We have seen that the important results obtained with $^{6,7}$Li and $^{9}$Be shed light on the influence of breakup on fusion. As we move further away from stability, the beam intensity is expected to be much lower. Experimental apparatus will be more complicated and experiments will take longer. Stable beams will have to be used to set up and calibrate equipment. Stable beam experiments may also provide valuable information to help us understand the results of RIB experiments, which frequently suffer from poor statistics. In the next few years, many facilities will provide more varieties of RIBs with energies above the Coulomb barrier. The projected intensity for some of the beams will reach above 10$^{8}$ pps. More fusion experiments will be performed and we expect to see some measurements with very good statistics. Reaction channels that are important to sub-barrier fusion enhancement, such as nucleon transfer and inelastic excitation, can be measured with higher intensity beams and dedicated apparatus. At the same time, experiments will push towards using beams further away from stability. Progress in the theoretical treatment of fusion involving RIBs will be made with new experimental results. Furthermore, there are several new and powerful RIB facilities\cite{ri00,eu03} that are under consideration. When they become a reality, more exciting fusion experiments and results can be expected. \section*{Acknowledgements} The authors wish to thank A. di Pietro, M. Trotta, N. Alamanos, N. Keeley, J. J. Kolata, P. E. Mueller, R. Raabe, and D. Shapira for kindly providing some of their data relevant for the preparation of this review and/or for critical reading of this manuscript. Research at the Oak Ridge National Laboratory is supported by the U.S. Department of Energy under contract DE-AC05-00OR22725 with UT-Battelle, LLC.
\section{Introduction} \setcounter{equation}{0} The purpose of these notes is to review one of the branches of modern string theory: the theory of matrix models, with emphasis on their intrinsic integrable structure. We begin with a brief description of the field and its place in the closest environment within the entire string theory. The main content of string theory \footnote{See \cite{UFN} for a general review} is the study of symmetries in the broadest possible sense of the word by methods of quantum field theory. The usual scheme is to start from some symmetry and construct a field-theoretical model (usually 2-dimensional, for reasons that we do not discuss here), which possesses this symmetry in some simple sense (e.g. as a Noether symmetry or as a chiral algebra). The main idea at this stage is to find a model which is exactly solvable (if nothing but the symmetry is given, this is a nice principle to restrict the dynamics). The next step is to study the hidden symmetries of the model, which are somehow responsible for its exact solvability and are usually much larger than the original symmetry. This "inverse" step: {\it model} $\longrightarrow$ {\it symmetry} can be made with at least three different ideas in mind. One can look for some hidden local (gauge) symmetry of the model, which is fixed or spontaneously broken, i.e. identify it with some other model which has more fields - $auxiliary$ from the point of view of the smaller model and $gauge$ - from that of the larger one. (Examples: gauged Wess-Zumi\-no-No\-vikov-Witten (WZNW) model, topological theories in BRST formalism etc.) One can take for a new (full) symmetry of the model just its operator algebra (algebra of observables) (see \cite{KleP}, \cite{W(KleP)} and also \cite{Klepo} for the first results in this direction). It deserves mentioning that $gauging$ of the entire algebra of observables gives rise to a "string field theory", associated with the original model (considered as a string model).
One can construct the effective action of the theory by exact evaluation of the functional integral. As to the direct step {\it symmetry} $\longrightarrow$ {\it model}, one can take as an example the best understood case, when the original symmetry is just a Lie algebra. Then the quantum mechanical model can be constructed by the geometrical quantization technique (see \cite{AS} for the most important example of Kac-Moody algebra and the WZNW model). From the mathematical point of view the two elements of the above scheme look like {\it Algebra} (theory of symmetries) and {\it Analysis} \& {\it Geometry} (field-theoretical models). The idea of constructing models with a given symmetry (and nothing else relevant for the dynamics) can be identified with the mathematical concept of "universal objects". \bigskip \bigskip \centerline{$|\ \ \ SYMMETRY \ \ \ | \ \ \ \ \ = \ \ \ \ \ |\ \ \ {\bf ALGEBRA}\ \ \ |$} \bigskip \centerline{$\ \ \ \uparrow \ \ \ \downarrow\ \ \ \ \ \ \ \ \phantom{.................}$} \bigskip \centerline{$|\ \ \ \ \ \ \ \ \ MODEL \ \ \ \ \ \ \ \ \ | \ \ \ \ \ = \ \ \ \ \ | \ \ \ {\bf ANALYSIS} \ \ \ \ \ \ |$} \centerline{$|\ \ with\ this\ symmetry\ \ | \ \ \ \ \ \phantom. \ \ \ \ \ \ \ \ | \ \ \ {\bf GEOMETRY} \ \ \ |$} \bigskip \centerline{\it Theory of everything} \bigskip The sequence of iterations of the two arrows in the picture leads to a deeper understanding, enlargement and generalization of all the notions involved: symmetry, exact solvability, field theory, geometrical structures, quantization etc, thus stimulating considerable progress both in physics and mathematics. If this iterative process somehow converges, the limit point will deserve the name of the {\it theory of everything}, which will indeed unify all the possible field-theoretical models by embedding them into a huge, but well-structured theory, which will also be exactly solvable in some yet unspecified sense of the word.
We refer to \cite{UFN} for more details about this semiphilosophical $program$, known under the name of (modern) $string$ $theory$, and now turn to a more narrow subject: the theory of matrix models. \bigskip At the moment it is mainly associated with the theory of effective actions; at least, so far this is where the main results of the modern theory of matrix models find their applications. This technique is especially suited for the study of effective actions obtained after integration over 2-dimensional geometries (including the sum over genera), and it produces non-perturbative (exact) partition functions of particular string models. The main result of these studies is that these partition functions exhibit two remarkable (though expected \cite{GLM}) properties: First, the effective action for a given model is essentially the $same$ as for any other model. In fact the effective action is a function of coupling constants ("sources" in the old-fashioned terms), which are nothing but coordinates in the $space\ of$ various $models$ (the configuration space of the entire string theory): variations of the couplings change one model into another. Second, the effective action possesses a huge additional symmetry, which is somewhat similar to general covariance in the space of all models (the above mentioned configuration space) and in the simplest examples, which have been studied so far, can be expressed in terms of integrable hierarchies. (This "general covariance" in the configuration space can after all turn into the main dynamical principle of string theory.) Both these features seem to be very general, arising whenever the largest possible Lagrangian with a given symmetry is considered (without restrictions on the possible counterterms, imposed by requirements of renormalizability or by locality-minimality "principles" - this is why this phenomenon is not widely known to field theorists).
An example of highly-nontrivial calculations leading to similar conclusions can be found in ref.\cite{KoS}. We hope that these remarks will become clearer after some specific examples are considered below. Still they deserve being formulated in full generality, not only to intrigue the reader, but also because they can serve for a better understanding of the ideas and outcomes of generic string theory. \bigskip The "corner" of string theory associated with matrix models can be described by the following picture: see Fig.1. \bigskip The big blocks within the body of string theory which are directly related to matrix models are: the theory of conformal models, that of $N=2$ supersymmetry and the (loop-equation version of) Yang-Mills theory (in any dimension). Also Einstein gravity should be related to the subject in a way similar to Yang-Mills theory, but these links are not yet clarified. Both conformal theory and $N=2$ supersymmetry are sources of the concept of "topological models" \cite{Wito}-\cite{BBRT}. These arise after gauging of all continuous symmetries of the WZNW models and/or as models with BRST-exact stress tensors, naturally appearing in the context of $N=2$ supersymmetry. If formulated in a self-consistent way in the "universal moduli space" (the unification of moduli spaces of all finite-genus Riemann surfaces and bundles over them) these models turn into those of "topological gravity". Generating functionals of topological gravity models in fact generate infinite sequences of topological invariants of certain spaces (the inverse definition is also possible in some cases \cite{Wito}, though the universal (generic) algorithm for the operation {\it topology of some space} $\longrightarrow$ {\it topological gravity} is not yet formulated). Alternative models of $2d$ quantum gravity arise straightforwardly from conformal models through a procedure of "summation over geometries".
There are two essentially different approaches to the problem. One (the "Polyakov approach") is to make use of the complex structure, intrinsic for conformal theory \cite{BPZ}, and sum over $Riemann$ $surfaces$, which involves integration over moduli spaces and a sum over genera. The main technique used in this approach is the theory of free fields on Riemann surfaces \cite{KnUFN}, \cite{MP} and the bosonization formalism for conformal field theories \cite{BosF}, \cite{GMMOS}. This approach requires the solution of Liouville theory, which still remains a problem under intensive investigation (in turn related to conformal field theory). Further progress in this direction should be related to (or can be expressed in terms of) an adequate theory of the {\it universal moduli space}, {\it handle-gluing operators} etc. Similar objects arise in the field-theoretical approach to topological gravity (see \cite{LP} for a recent review). The alternative approach to summation over geometries does not refer at all to the complex structure and instead involves a sum over random $equilateral$ triangulations \cite{Migrev}-\cite{mamo}. \footnote{Its relation to the Polyakov approach is a separate very interesting, important and badly understood problem, which allows a non-trivial reformulation in terms of number theory (see \cite{LM}). The main puzzle here is that equilateral triangulations are in fact $arithmetic$ Riemann surfaces - a dense discrete subset in the entire moduli space, with interesting and deep algebraic properties. Equivalence of the two approaches to $2d$ quantum gravity should imply the existence of some number-theoretical background behind the scene, which would be very nice to discover in full purity.} This is the place where matrix models first appear in the context of string theory.
The random triangulation approach is by no means specific to conformal models (since it ignores the complex structure) and can be applied in many other situations - for example, to Yang-Mills (YM) theories in any dimension (where instead of summation over geometries one needs "simply" to sum over ordinary Feynman diagrams). Applications of the matrix-model method usually involve two steps: formulation and study of a "discrete" model, and then taking its "continuum limit", giving rise to a new "continuous matrix model", which can sometimes again be represented in the form of some matrix integral. One of the main discoveries in the field of matrix models is that the $continuous$ models arising finally from the random-equilateral-triangulation description of the simplest (minimal with $c<1$) string models coincide with the simplest ($CP^1$ Landau-Ginzburg) models of topological gravity \cite{WitTG},\cite{Ko}-\cite{MMM}: the two (classes of) theories are identical (this is not yet proved in full detail, but is more than plausible). So far $continuous$ models are actually found and somehow understood only for string models based on the $c<1$ minimal conformal theories (moreover, only for $q=1$ in the $(p,q)$-series). Conformal models with $c \geq 1$, which are relevant for the description of gauge theories in space-time dimension $d \geq 2$ (which possess particles, rather than only topological degrees of freedom), should give rise to discrete matrix models with "non-factorizable" integration over "angular variables", of which the simplest (solvable) example is the Kazakov-Migdal model \cite{KazMi}. The issue of the continuum limit for such models is not yet understood (at least in terms of integrable structures, which should probably generalize the familiar theory of Toda hierarchies). The goal of the study of matrix models is three-fold. First of all, one can look for non-perturbative (exact) answers for the physical amplitudes in a given model.
This is the subject which attracts most attention in the literature (for several obvious reasons). However, it is equally (and, perhaps, even more) important to understand the mathematical structure behind the matrix models (which involves topics like the general theory of integrable hierarchies, geometrical quantization, the Duistermaat-Heckman theorem ("localization theory") etc). Also important for the purposes of string theory is to use the results of the study of matrix models in order to unify $a\ priori$ different models (according to the above mentioned principle: non-perturbative partition functions for different models differ by a change of variables in the space of coupling constants). Matrix models have already played an important role in making this principle clearer and more acceptable for many string theorists. \bigskip Let us make the next step and look even closer at the field of matrix models, especially at its most studied domain, associated with the $d<2$ string models. Then the following structure will be seen: Fig.2. \bigskip The basic example of a matrix model is the 1-matrix integral \begin{eqnarray} Z_N\{t\} \equiv c_N\int_{N\times N} dH e^{\sum_{k=0}^{\infty} t_k {\rm Tr} H^k}, \label{1mamo} \end{eqnarray} where the integral is over an $N\times N$ Hermitian matrix $H$ and $dH = \prod_{i,j} dH_{ij}$. There are three directions in which one can proceed starting from (\ref{1mamo}). The first one \cite{GMMMO} is to look for an invariant formulation of the properties of the functional $Z_N\{t\}$.
It appears to satisfy an infinite set of differential equations (in fact these are just Ward identities for the functional integral (\ref{1mamo}) \cite{MM}): \begin{eqnarray} &L_n Z_N\{t\} = 0, \ \ \ n\geq -1, \nonumber \\ &L_n \equiv \sum_{k=0}^{\infty} kt_k\frac{\partial}{\partial t_{k+n}} + \sum_{k=0}^n \frac{\partial^2}{\partial t_k\partial t_{n-k}}, \label{vird} \\ &\frac{\partial}{\partial t_0}Z_N = NZ_N, \nonumber \end{eqnarray} which is known under the name of "discrete Virasoro constraints". $Z_N\{t\}$ can be represented as a correlator of screening operators in some auxiliary conformal model (of one free field on the "spectral surface"), and the Virasoro constraints (\ref{vird}) are of course related to the Virasoro algebra in that conformal model. Also $Z_N\{t\}$ is some $\tau$-function of the integrable "Toda-chain" hierarchy (in fact this statement should be a corollary of the Virasoro constraints, but this relation is still not very well understood). The most straightforward further development \cite{GMMMO},\cite{MMMM} is to take the continuum limit of the Toda-chain hierarchy. In the specially adjusted ("double-scaling") limit \cite{mamo} it gives rise to the KdV-hierarchy, and the corresponding $\tau$-function turns out to be subject to slightly different constraints \cite{FKN},\cite{MMMM} (which again form a Borel subalgebra of some other "continuous Virasoro algebra"): \begin{eqnarray} &{\cal L}_{2n} {\cal Z}^{cont}\{T\} = 0, \ \ \ n\geq -1, \nonumber \\ &{\cal L}_{2n} \equiv \frac{1}{2}\sum_{{\rm odd}\ k=1}^{\infty} k(T_k + r_k)\frac{\partial}{\partial T_{k+2n}} + \frac{1}{4} \sum_{{\rm odd}\ k = 1}^{2n-1} \frac{\partial^2}{\partial T_k\partial T_{2n-k}} + \nonumber \\ &\quad + \frac{1}{16}\delta_{n,0} + \frac{1}{4} (T_1+r_1)^2\delta_{n,-1}, \label{virc} \end{eqnarray} where $r_k = -\frac{2}{3}\delta_{k,3}$.
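As a quick cross-check of the discrete Virasoro constraints (\ref{vird}) (a sketch added here, not part of the original derivation), one can verify symbolically that the operators $L_n$, $n\geq -1$, close into the algebra $[L_n,L_m]=(n-m)L_{n+m}$. Acting on a polynomial in finitely many couplings $t_0,\dots,t_K$, the infinite sums truncate harmlessly, since derivatives with respect to higher times vanish:

```python
# Sketch: check [L_n, L_m] = (n - m) L_{n+m} for the discrete Virasoro
# operators of eq. (vird), acting on a test polynomial in t_0..t_K.
import sympy as sp

K = 8
t = sp.symbols(f"t0:{K + 1}")              # couplings t_0 .. t_K

def L(n, f):
    """L_n f = sum_k k t_k df/dt_{k+n} + sum_{k=0}^n d^2 f/(dt_k dt_{n-k})."""
    out = sum(k * t[k] * sp.diff(f, t[k + n])
              for k in range(max(1, -n), K - max(0, n) + 1))
    if n >= 0:
        out += sum(sp.diff(f, t[k], t[n - k]) for k in range(n + 1))
    return sp.expand(out)

f = t[1]**2 * t[3] + t[0]*t[2] + t[4]      # arbitrary test polynomial
for n, m in [(-1, 2), (0, 3), (1, 2)]:
    comm = L(n, L(m, f)) - L(m, L(n, f))
    assert sp.expand(comm - (n - m) * L(n + m, f)) == 0
```

The truncation at $K=8$ is exact here because the test polynomial involves no $t_k$ with $k>4$, so no derivative with respect to a discarded higher time can contribute.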
In fact \begin{eqnarray} {\cal Z}^{cont}\{T\} \sim \left.\lim_{d.s.\{N \rightarrow\infty\}} \sqrt{Z_N\{t\}} \right|_{t_{2k+1}=0}, \label{coverdi} \end{eqnarray} and $T$ are related to $t$ by a linear transformation \cite{Kmamo},\cite{MMMM}: \begin{eqnarray} T_k = \frac{1}{2}\sum_{m\geq \frac{k-1}{2}} \frac{g_m}{(m -\frac{k-1}{2})!} \frac{\Gamma(m+\frac{1}{2})}{\Gamma(\frac{k}{2}+1)}, \ \ \ k\ {\rm odd}; \nonumber \\ g_m = mt_{2m},\ m\geq 1;\ \ g_0 = 2N. \label{Tt1mamo} \end{eqnarray} This ${\cal Z}^{cont}\{T\}$ can again be represented in the form of a matrix integral (over an $n\times n$ Hermitian matrix) \cite{Ko},\cite{GKM}-\cite{AvM}: \begin{eqnarray} {\cal Z}^{cont}\{T\} = {\cal Z}_V\{T\} \label{T=Q} \end{eqnarray} with $V(X) = \frac{X^3}{3}$, where \begin{eqnarray} {\cal Z}_V\{T\} \sim {\cal F}_{V,n}\{ L\} \equiv \int_{n\times n} dX e^{- {\rm tr} V(X) + {\rm tr} L X} \label{GKM} \end{eqnarray} and \begin{eqnarray} T_k = \frac{1}{k} {\rm tr} L^{-k/2},\ \ \ k\ {\rm odd}. \label{TtK} \end{eqnarray} The function ${\cal Z}_V\{T\}$ (but not ${\cal F}_{V,n}\{ L\}$) is in fact independent of $n$: the only thing that happens for finite values of $n$ is that the r.h.s. of (\ref{GKM}) cannot describe ${\cal Z}_V\{T\}$ at $arbitrary$ points in the $T$-space, in accordance with (\ref{TtK}). The continuous Virasoro constraints (\ref{virc}) are in fact equivalent to the trivial matrix-valued Ward identity \begin{eqnarray} \left( V'\left(\frac{\partial}{\partial L_{\rm tr}} \right) - L\right) {\cal F}_{V,n}\{ L\} = 0. \label{WiK} \end{eqnarray} Another direction to proceed from the discrete 1-matrix model is to rewrite it identically in the form of a Kontsevich model: this time with $V(X) = X^2$ and an additional factor of $(det\ X)^N$ under the integral in ${\cal F}_{V,n}\{ L\}$ \cite{ChMa}. Then the double scaling limit can be studied in internal terms of Kontsevich models \cite{Toda}. The third direction is towards multimatrix models.
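To see the content of the matrix-valued identity (\ref{WiK}) in the simplest situation (an illustrative sketch added here): for $n=1$ and $V(X)=X^3/3$ it reduces to the scalar Airy equation ${\cal F}''(l)=l\,{\cal F}(l)$, and the convergent (contour-rotated) version of the integral (\ref{GKM}) is proportional to the Airy function, so the identity can be checked numerically:

```python
# Sketch: for n = 1, V(X) = X^3/3, the Ward identity (WiK) is the Airy
# equation F''(l) = l F(l); the rotated-contour Kontsevich integral is
# proportional to Ai(l), so we test the ODE on mpmath's Airy function.
import mpmath as mp

mp.mp.dps = 30
for l in [mp.mpf("0.3"), mp.mpf("1.5"), mp.mpf("-0.8")]:
    lhs = mp.diff(mp.airyai, l, 2)     # F''(l), high-precision differentiation
    assert abs(lhs - l * mp.airyai(l)) < mp.mpf("1e-15")
```

This is, of course, only the one-dimensional shadow of (\ref{WiK}); for $n>1$ the identity is a genuinely matrix-valued statement.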
In the $continuous$ version they should provide $\tau$-functions of reduced KP-hierarchies \cite{Doug} (KdV is the $p=2$ reduction), which are subject to the "continuous $W$-constraints" \cite{FKN}. Matrix models of such $\tau$-functions are Kontsevich models with $V(X) \sim X^{p+1}$ \cite{GKM}-\cite{AvM}. At the $discrete$ level, however, things are not so simple. The most popular discrete multimatrix models \cite{mumamo} are defined as the multiple matrix integrals of the form: \begin{eqnarray} \oldfalse \begin{array}{c} Z_N\{t^{(\alpha)}\} \equiv \\ c_N^{p-1}\int_{N\times N} dH^{(1)}...dH^{(p-1)} \prod_{\alpha = 1}^{p-1} e^{\sum_{k=0}^{\infty}t_k^{(\alpha)}{\rm Tr} H_{(\alpha)}^k} \prod_{\alpha = 1}^{p-2} e^{{\rm Tr}H^{(\alpha)}H^{(\alpha+1)}} \label{mumamo} \end{array} \end{eqnarray} (the form of the "interaction term" ${\rm Tr}H^{(\alpha)}H^{(\alpha+1)}$ is restricted by the "solvability" principle, but not unambiguously). In fact these models are particular examples of the "scalar-product eigenvalue models" and are not really distinguished, except for the 1-matrix ($p=2$) and 2-matrix ($p=3$) cases. This is reflected in the absence of any reasonable Ward identities and integrable structures for these models which would somehow involve their dependence on the variables $t^{(\alpha)}$ with $2\leq \alpha \leq p-2$. Therefore the "multi-scaling continuum limit" of these models can hardly be investigated with any degree of rigor. (It is not so important for "physical" applications to have any discrete models associated with the continuum ones, but this is an interesting problem for "science for science's sake".)
For the 2-matrix ($p=3$) case the Ward identities can be expressed in the form of "$\tilde W$-constraints" \cite{tildeW} and look like \cite{GKM} \begin{eqnarray} \tilde W_{n-m}^{(m+1)}\{t\} Z_N\{t,\bar t\} = (-)^{m+n} \tilde W_{m-n}^{(n+1)}\{\bar t\} Z_N\{t,\bar t\} \label{Wtilde} \end{eqnarray} (here $t$ and $\bar t$ stand for $t^{(1)}$ and $t^{(2)}$, and $m,n$ are any non-negative integers). The really interesting set of discrete multimatrix models does exist, but it is somewhat different from (\ref{mumamo}). These theories will be referred to as "conformal matrix models", since they arise straightforwardly as a generalization of the "CFT-formulation" of the 1-matrix model \cite{comamo}: it is enough to replace the discrete Virasoro constraints in the theory of one free field by the $W_p$-constraints in the theory of $p-1$ free fields. The matrix integral formulation then involves an "interaction term" $Det \left(H^{(\alpha)}\otimes I - I \otimes H^{(\alpha+1)}\right)$ instead of $e^{{\rm Tr}H^{(\alpha)}H^{(\alpha+1)}}$, which is not very easy to guess $a\ priori$, but the models so defined and their continuum limits can be examined in a manner quite parallel to the one-matrix case (though not everything has yet been done in this direction). Also this approach opens the possibility of formulating discrete models for any set of constraints, e.g. associated with the more exotic $W$-algebras and with quantum groups (i.e. it can help to solve the inverse problem: $constraints\ \longrightarrow\ discrete\ matrix\ model$). This is an option which also deserves further investigation. Another natural name for this set of theories is "multi-component eigenvalue models". Kontsevich models should also be related to topological models of Landau-Ginzburg gravity (LGG), though this relation is not yet clarified in full detail (see, however, \cite{Krich},\cite{LP}). Among the main unresolved puzzles in this whole field is the description of generic $(p,q)$-models.
Formally, the Generalized Kontsevich model (\ref{GKM}) provides this description, but in fact the partition function ($\tau$-function) becomes singular when the "phase transition" point where $q$ changes is approached, and the Kontsevich model with $V(X) = polynomial\ of \ degree\ p+1$ provides a nice description only of the $(p,1)$-models. Generically, the Kontsevich integral describes a duality transformation between $(p,q)$ and $(q,p)$ models: $(p,q) \longrightarrow (q,p)$ \cite{KhMa}, but not any of these models separately. (The only exception is the $(p,1)$-models, because they are related by the Kontsevich transformation to the $(1,p)$ models, which are completely trivial.) In fact continuous models have two $different$ sets of "time-variables". Thus far we introduced $T$, which are essentially expansion parameters of the generating functional for correlation functions. More precisely, these parameters $\hat T$ depend on the particular model (vacuum) around which the perturbation expansion is performed, and they differ slightly from the model-independent $T$. Another set of "times", $r_k = \frac{p}{k(p-k)}{\rm Res}\left(V'(\mu)\right)^{1-\frac{k}{p}}d\mu$, parametrizes the shape of the polynomial "potential" $V_p(X)$ (of degree $p+1$) and describes the coordinates in the space of (matrix) models. These two types of variables - parameters of the generating functional and those labeling the shape of the Lagrangian - are almost the same (in fact they would be $just$ the same if there were no loop (quantum) effects). This similarity between $T$'s and $r$'s is reflected in the remarkable property of the partition function of the $(p,1)$ model - it essentially depends only on the {\it sum} of the "times" $\hat T$ and $r$ \cite{Krich}: \begin{eqnarray} {\cal Z}_{V_p}\{T\} = f_p(r\mid \hat T_k + r_k)\tau_p\{\hat T_k + r_k\} \label{Krich} \end{eqnarray} with some simple (and explicitly known) function $f_p$.
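As a small sanity check of the formula for $r_k$ (a sketch added here; we read ${\rm Res}$ as the formal residue, i.e. the coefficient of $d\mu/\mu$): for the cubic potential one has $V'(\mu)=\mu^2$ and $p=2$, and the formula reproduces the value $r_k=-\frac{2}{3}\delta_{k,3}$ quoted after (\ref{virc}):

```python
# Sketch: r_k = p/(k(p-k)) * Res[(V'(mu))^(1 - k/p) dmu] for V = X^3/3,
# i.e. V'(mu) = mu^2 and p = 2; "Res" = coefficient of 1/mu (k odd, k != p).
import sympy as sp

mu = sp.symbols("mu", positive=True)
p = 2
Vprime = mu**2

def r(k):
    integrand = sp.powsimp(Vprime**sp.Rational(p - k, p), force=True)
    return sp.Rational(p, k * (p - k)) * sp.residue(integrand, mu, 0)

assert r(3) == sp.Rational(-2, 3)   # matches r_3 = -2/3 below eq. (virc)
assert r(1) == 0                    # (mu^2)^{1/2} = mu has no 1/mu term
```

Only odd $k\neq p$ are meaningful here; for $k=3$ the integrand is $\mu^{-1}$, with formal residue $1$, giving $r_3 = \frac{2}{3\cdot(2-3)} = -\frac{2}{3}$.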
(In eq.(\ref{TtK}) above, for the {\it monomial} cubic potential $V_3(x) = \frac{x^3}{3}$, $\ \hat T_k = T_k = \frac{1}{k}{\rm tr}L^{-k/2}$, while $r_k = -\frac{2}{3}\delta_{k,3}$.) The last thing to be mentioned in this general description of the field of matrix model theory is their relation to group theory. The Generalized Kontsevich model (\ref{GKM}) is intimately connected to the "integrable nature" of group characters and coadjoint orbit integrals (characters of all the irreducible representations of $U(N)$ are usually KP $\tau$-functions \cite{chartau}). In fact some "discrete (or quantum) version" of the Kontsevich integral is a sum over all unitary irreps of $U(n)$ (an "integral" over a $model$ of $U(n)$, or over the set of all coadjoint orbits): \begin{eqnarray} {\cal F}_V^{qu}\{G\} \equiv \sum_R d_R\chi_R(G) e^{-\sum_{k=0}^{\infty} v_kC_k(R)}, \label{quGKM} \end{eqnarray} where $d_R$, $\chi_R$ and $C_k(R)$ stand for the dimension, character and the $k$-th Casimir of the irreducible representation $R$ of $U(n)$. Time variables $T_k \sim \frac{1}{k}{\rm tr}G^k$, while the potential $V(X) = \sum_{k=0}^{\infty}s_kX^k$. This expression can be further generalized to \begin{eqnarray} {\cal F}_V^{qu}\{G\} \equiv \sum_R \chi_R(\bar G)\chi_R(G) e^{-\sum_{k=0}^{\infty}v_kC_k(R)}. \label{quGKM'} \end{eqnarray} Properties of these "quantum" Kontsevich models deserve further investigation (objects like (\ref{quGKM}) are also known to arise in localization theory, in particular, in the study of the $d=2$ YM theory, see, for example, \cite{Wit2YM} and \cite{NT}). \bigskip These notes are essentially a review of the views and results of the group working in Moscow (and Kiev). Since references will not be given every time, I present here the list of people involved in these investigations: L.Chekhov, A.Gerasimov, A.Losev, S.Kharchev, Yu.Makeenko, A.Mar\-sha\-kov, A.Mikhailov, A.Mironov, A.Orlov, S.Pa\-kuliak, I.Polyubin, A.Zab\-rodin.
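Returning for a moment to the "quantum" Kontsevich sum (\ref{quGKM}), here is a toy abelian reduction (our illustration, not from the original text): for $U(1)$ every irrep is one-dimensional, labeled by $m\in{\bf Z}$, with $\chi_m(e^{i\theta})=e^{im\theta}$, $d_m=1$ and quadratic Casimir $m^2$; keeping only the quadratic term of the potential, $v_2=v$, the character sum collapses to a Jacobi theta function:

```python
# Sketch (abelian toy version of eq. (quGKM)): the U(1) character sum
#   F = sum_{m in Z} e^{i m theta} e^{-v m^2}
# equals the Jacobi theta function theta_3(theta/2, e^{-v}).
import mpmath as mp

mp.mp.dps = 25
v, theta = mp.mpf("0.4"), mp.mpf("1.1")
direct = mp.nsum(lambda m: mp.exp(1j * m * theta - v * m**2),
                 [-mp.inf, mp.inf])
jacobi = mp.jtheta(3, theta / 2, mp.exp(-v))   # sum_m q^{m^2} e^{2 i m z}
assert abs(direct - jacobi) < mp.mpf("1e-18")
```

The nonabelian sums (\ref{quGKM}), (\ref{quGKM'}) are, of course, far richer; the $U(1)$ case only illustrates how such character sums produce classical special functions.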
I also apologize for the somewhat sporadic references to the works of other groups. \ \bigskip \section{Ward identities for the simplest matrix models} \setcounter{equation}{0} \subsection{Ward identities versus equations of motion} We begin the systematic consideration of matrix models with their simplest and at the same time most basic property: the Ward identities (WI) for partition functions. The partition function is by definition a functional of the coupling constants in the Lagrangian, and the WI will be understood here as (differential or finite-difference) equations imposed on this functional. If the partition function is represented in the form of a matrix integral, \footnote{To avoid confusion we emphasize that such a representation does $not$ need to exist, at least in any simple form. The more the theory of matrix models develops, the less it has to do with $matrices$ and matrix integrals. However (as in the case of the entire $string\ theory$) the original name has a tendency to survive. Anyhow, the main content of the theory of matrix models (at least of its branch analyzed in these notes) is the search for $invariant$ formulations of the properties of partition functions, while matrix integrals (if at all existing) are considered as their particular realizations (representations). Moreover, there can exist very different matrix integral representations of the same partition function, the simplest example being just the basic discrete 1-matrix model, which can also be represented in the form of a Kontsevich integral (see below). } the WI are usually implied by its invariance (or, better to say, covariance) under changes of the integration variables (thus the name "WI"). In ordinary field theory we are usually dealing with models where the WI either do not exist at all, or at most there is a finite number of them - then they are interpreted as reflecting the $symmetry$ of the theory.
However, by no means does the finite set of these WI provide a $complete$ description of the dynamics of the theory: the number of (quantum) equations of motion (EqM) is usually infinite and their solutions are never fixed by the WI. In fact this difference between WI and EqM arises because the Lagrangians considered in ordinary field theory are not of the most general form: they are usually severely restricted by "principles" like renormalizability or minimality. Because of this there are simply not enough coupling constants in the Lagrangian to describe the result of $any$ variation of integration variables as that of a variation of coupling constants, and thus not every equation of motion can be represented as a (differential) equation for the partition function. In other words, by restricting the shape of the Lagrangian for "non-symmetric" reasons one breaks the original huge "symmetry" (covariance) of the model, which was enough to describe all the dynamics (all EqM) as dictated by symmetry, and a broader view is necessary in order to recognize the EqM as the WI associated with that original high symmetry. This symmetry (it is not of Noether type, of course) is a peculiar property of all $quantum\ mechanical$ partition functions, since these usually arise from the procedure of functional integration. It so happened that matrix models were the first class of quantum-mechanical systems (functional integrals) for which this identity: \begin{eqnarray} {\rm all\ EqM}\ \equiv\ {\rm all\ WI} \nonumber \end{eqnarray} was not simply observed as a curious phenomenon, but became a subject of intensive investigation and was identified as the source of the exact solvability (integrability) of the theory. Of course, the significance of this observation (and its implications) is quite universal, by no means restricted to the field of matrix models themselves; however, it is not yet sufficiently appreciated by experts in other fields.
In any case, we are going to deal only with matrix models in these notes. We proceed to the consideration of the WI according to the following plan (not all the arrows will actually be discussed): see Fig.3. \subsection{Virasoro constraints for the discrete 1-matrix model} The basic example \cite{GMMMO},\cite{MM} which illustrates the arguments of the previous subsection is provided by the 1-matrix model \begin{eqnarray} Z_N\{t\} \equiv c_N\int_{N\times N} dH e^{\sum_{k=0}^{\infty} t_k {\rm Tr} H^k}. \label{1mamo'} \end{eqnarray} This integral is invariant under any change of variables $H \rightarrow f(H)$. It is convenient to choose a special basis in the space of such transformations: \begin{eqnarray} \delta H = \epsilon_nH^{n+1}. \end{eqnarray} Here $\epsilon_n$ is some infinitesimal matrix and, of course, $n \geq -1$. The value of the integral cannot change under a change of the integration variable, and we obtain the identity: \begin{eqnarray} \int_{N\times N} dH e^{\sum_{k=0}^{\infty} t_k {\rm Tr}H^k} = \int d(H + \epsilon_nH^{n+1})e^{\sum_{k=0}^{\infty} t_k {\rm Tr} (H + \epsilon_nH^{n+1})^k}, \nonumber \end{eqnarray} i.e. \begin{eqnarray} \int dH e^{\sum_{k=0}^{\infty} t_k {\rm Tr}H^k} \left( \sum_{k=0}^{\infty} kt_k {\rm Tr}H^{k+n} + {\rm Tr} \frac{\delta H^{n+1}}{\delta H} \right) \equiv 0. \label{vird1} \end{eqnarray} In order to evaluate the Jacobian ${\rm Tr}\frac{\delta H^{n+1}}{\delta H}$ let us restore the matrix indices: \begin{eqnarray} (\delta H^{n+1})_{ij} = \sum_{k=0}^n (H^k \delta H H^{n-k})_{ij} = \sum_{k=0}^n (H^k)_{il}( \delta H)_{lm}( H^{n-k})_{mj}. \nonumber \end{eqnarray} In ${\rm Tr}\frac{\delta H^{n+1}}{\delta H}$ one should take $l=i$ and $m=j$, so that \begin{eqnarray} {\rm Tr}\frac{\delta H^{n+1}}{\delta H} = \sum_{k=0}^n {\rm Tr}H^k {\rm Tr} H^{n-k}.
\label{vird2} \end{eqnarray} Now we can note that since we started from a Lagrangian of the most general form (consistent with the symmetry $H \rightarrow UHU^{\dagger}$), any correlation function can be obtained as a variation of the coupling constants (all possible $sources$ are included as counterterms). In our particular example this is just a trivial remark: \begin{eqnarray} <{\rm Tr} H^{a_1} ... {\rm Tr} H^{a_n}> &= \int dH e^{\sum_{k=0}^{\infty} t_k {\rm Tr}H^k} {\rm Tr} H^{a_1} ... {\rm Tr} H^{a_n} = \nonumber \\ &= \frac{\partial^n}{\partial t_{a_1}...\partial t_{a_n}} Z_N\{t\}. \label{vird3} \end{eqnarray} We can use this relation together with (\ref{vird2}) in order to rewrite (\ref{vird1}) as: \begin{eqnarray} L_n Z_N\{t\} = 0, \ \ \ n\geq -1 \label{virdid} \end{eqnarray} with \begin{eqnarray} L_n \equiv \sum_{k=0}^{\infty} kt_k\frac{\partial}{\partial t_{k+n}} + \sum_{k=0}^n \frac{\partial^2}{\partial t_k\partial t_{n-k}}. \label{virdop} \end{eqnarray} Note that according to the definition (\ref{1mamo'}) \begin{eqnarray} \frac{\partial}{\partial t_0}Z_N = NZ_N. \nonumber \end{eqnarray} \bigskip Several remarks are now in order. First of all, the expression in brackets in (\ref{vird1}) represents just $all$ the equations of motion for the model (\ref{1mamo'}), and (\ref{virdid}) is nothing but another way to represent the same set of equations. This is an example of the above-mentioned identification of EqM and WI. Second, the commutator of any two operators $L_n$ appearing in (\ref{virdid}) should also annihilate $Z_N\{t\}$. The fact that the $L_n$'s form a closed (Virasoro) algebra is therefore another indication (though not a conclusive one) that we have already obtained a $complete$ set of constraints: \begin{eqnarray} \phantom. [ L_n, L_m] = (n-m) L_{n+m}, \ \ \ n,m\geq -1. \label{virdal} \end{eqnarray} Third, (\ref{virdid}) can be considered as an invariant formulation of what $Z_N$ is: it is a solution of this set of compatible differential equations.
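The commutation relation (\ref{virdal}) can be verified directly with a computer-algebra system, applying truncated versions of the operators (\ref{virdop}) to a sample polynomial in the time-variables (a sketch of our own; the truncation order $K$ is chosen large enough that no terms are lost when acting on the test polynomial):

```python
import sympy as sp

K = 12                                  # truncation order in the time-variables
t = sp.symbols(f"t0:{K + 1}")           # t[0], ..., t[K]

def L(n, f):
    """Virasoro operator L_n = sum_k k t_k d/dt_{k+n} + sum_{a+b=n} d^2/dt_a dt_b,
    acting on a polynomial f in t_0..t_K (terms beyond the truncation would
    annihilate such an f anyway, so the action below is exact)."""
    out = sum(k * t[k] * sp.diff(f, t[k + n])
              for k in range(max(0, -n), K + 1) if 0 <= k + n <= K)
    out += sum(sp.diff(f, t[a], t[n - a]) for a in range(0, n + 1))
    return sp.expand(out)

# test polynomial with indices small enough that the truncation is exact
f = t[1]**3 * t[2] + t[0] * t[3]**2 + t[4] * t[5]

for n, m in [(-1, 2), (0, 3), (1, 2), (2, 3)]:
    lhs = L(n, L(m, f)) - L(m, L(n, f))
    rhs = (n - m) * L(n + m, f)
    assert sp.simplify(lhs - rhs) == 0
print("[L_n, L_m] = (n-m) L_{n+m} verified on a sample polynomial")
```

Note that for $n,m \geq -1$ the central term of the Virasoro algebra never contributes ($n+m=0$ forces $n=\pm 1$ or $n=0$, where $n(n^2-1)$ vanishes), so the check above probes the full content of (\ref{virdal}).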
From this point of view eq.(\ref{1mamo'}) is rather a particular representation of $Z_N$, and it is sensible to look for other representations as well (we shall later discuss two of them: one in terms of CFT, another in terms of Kontsevich integrals). Fourth, one can try to analyze the uniqueness of the solutions to (\ref{virdid}). If there are not too many of them, the set of constraints can be considered complete. A natural approach to the classification of solutions to the algebra of constraints is in terms of the orbits of the corresponding group \cite{GMMMMO}. Let us consider an oversimplified example, which can still be useful for understanding the implications of a complete set of WI, as well as for clarifying the meaning of classes of universality and of integrability. Imagine that instead of (\ref{virdid}) with $L_n$'s defined in (\ref{virdop}) we had obtained somewhat simpler equations: \footnote{One can call them the "classical" approximation to (\ref{virdid}), since they would arise if the variation of the measure (i.e. a "quantum effect") were not taken into account in the derivation of (\ref{virdid}). Though this concept is often used in physics, it does not make much sense in the present context, where we are analyzing $exact$ properties of functional (matrix) integrals. } \begin{eqnarray} l_n Z = 0, \ \ n\geq 0 \ \ {\rm with} \ \ l_n = \sum_{k=1}^{\infty} kt_k\frac{\partial}{\partial t_{k+n}}. \nonumber \end{eqnarray} Then the operator $l_1$ can be interpreted as generating the shifts \begin{eqnarray} t_2 \longrightarrow t_2 + \epsilon_1 t_1, \nonumber \\ t_3 \longrightarrow t_3 + 2\epsilon_1 t_2, \nonumber \\ \ldots \nonumber \end{eqnarray} We can use it to shift $t_2$ to zero, and eq. $l_1Z = 0$ then implies that \begin{eqnarray} Z(t_1,t_2,t_3,...) = Z(t_1,0,\tilde t_3,...) \nonumber \end{eqnarray} $(\tilde t_k = t_k - \frac{(k-1)t_2t_{k-1}}{t_1}, \ k\geq 3$, to first order in $t_2)$.
Next, the operator $l_2$ generates the shifts \begin{eqnarray} t_3 \longrightarrow t_3 + \epsilon_2 t_1, \nonumber \\ t_4 \longrightarrow t_4 + 2\epsilon_2 t_2, \nonumber \\ \ldots \nonumber \end{eqnarray} and does $not$ affect $t_2$. We can now use eq. $l_2Z = 0$ to argue that \begin{eqnarray} Z(t_1,t_2,t_3,t_4,...) = Z(t_1,0,\tilde t_3,\tilde t_4,...) = Z(t_1,0,0,\tilde{\tilde t}_4,...) \nonumber \end{eqnarray} etc. Assuming that $Z$ does not depend too strongly on $t_k$ as $k \rightarrow \infty$, \footnote{This, by the way, is hardly correct in this particular example, where the group has no compact orbits.} we can conclude that \begin{eqnarray} Z(t_1,t_2,t_3,...) = Z(t_1,0,0,...) = Z(1,0,0,...) \nonumber \end{eqnarray} (at the last step we also used the equation $l_0Z = 0$ to rescale $t_1$ to unity). All this reasoning was correct provided $t_1 \neq 0$. Otherwise we would get $Z(0,1,0,0,...)$, if $t_1 = 0,\ t_2\neq 0$, or $Z(0,0,1,0,...)$, if $t_1 = t_2 = 0,\ t_3\neq 0$, etc. In other words, we obtain classes of universality (such that the value of the partition function is just the same in the whole class), which in this oversimplified example are labeled just by the first non-vanishing time-variable. The analysis of the orbit structure for the actually important realizations of groups, like the one connected to eq.(\ref{virdop}), has never been performed in the context of matrix-model theory. It may deserve emphasizing that, as we saw, the constraints can actually allow one to eliminate (solve exactly) all the dependence on the time-variables; in less trivial examples they imply the integrability structure, which is just a slightly more complicated version of the same solvability phenomenon.
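The elimination argument can also be illustrated symbolically. For instance, the combination $t_3 - t_2^2/t_1$ (our own illustrative choice; it coincides with $\tilde t_3$ only to first order in $t_2$) is annihilated by $l_1$ exactly, so any $l_1$-invariant $Z$ can depend on $t_2, t_3$ only through such invariant combinations:

```python
import sympy as sp

t1, t2, t3, t4, t5 = sp.symbols("t1 t2 t3 t4 t5")
TVARS = (t1, t2, t3, t4, t5)

def l(n, f):
    """'Classical' operator l_n = sum_{k>=1} k t_k d/dt_{k+n},
    truncated to the five time-variables used in this sketch."""
    out = 0
    for k, tk in enumerate(TVARS, start=1):
        j = k + n                        # index of the variable differentiated
        if 1 <= j <= len(TVARS):
            out += k * tk * sp.diff(f, TVARS[j - 1])
    return sp.simplify(out)

# an exact invariant of the l_1 flow (for Z depending on t_1, t_2, t_3 only):
I3 = t3 - t2**2 / t1
assert l(1, I3) == 0

# tilde-t_3 = t_3 - 2 t_2^2/t_1 from the infinitesimal shift is invariant
# only to first order in t_2:
assert l(1, t3 - 2 * t2**2 / t1) != 0
print("l_1 annihilates t3 - t2**2/t1 exactly")
```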
\subsection{CFT formulation of matrix models} Given a complete set of constraints on a partition function of infinitely many variables, forming some closed algebra, we can now ask the inverse question: how can these equations be solved, or what is the integral representation of the partition function? One approach to this problem is the analysis of orbits, briefly mentioned at the end of the previous subsection. Now we turn to another technique \cite{comamo}, which makes use of knowledge from conformal field theory. This construction can have some meaning from the "physical" point of view, which implies a certain duality between the 2-dimensional world surfaces and the spectral surfaces associated with the configuration space of the string theory. However, our goal now is more formal: to use the means of CFT for the solution of the constraint equations. This is very natural in the case when the algebra of constraints is the Virasoro algebra, as for the 1-matrix model, or some other algebra known to arise naturally as a chiral algebra in some simple conformal models. In fact the approach which will now be discussed is rather general and can be applied to the construction of matrix models associated with many different algebraic structures. We begin with the set of equations (\ref{virdid}), which we shall further refer to as "discrete Virasoro constraints". The CFT formulation of interest should provide the solution to these equations in the form of some correlation function in some conformal field theory. This becomes natural if we somehow identify the operators $L_n$, which form a Virasoro algebra, with the harmonics of the stress-tensor $T_n$, which satisfy the same algebra, and manage to relate the constraint that the $L_n$ annihilate the correlator to the statement that the $T_n$ annihilate the vacuum state.
Thus the procedure naturally splits into two steps: First, we should find a $t$-dependent operator ("Hamiltonian") $H(t)$, such that \begin{eqnarray} L_n(t) \langle e^{H(t)} \ldots = \langle e^{H(t)}T_n \ldots \label{lcft1} \end{eqnarray} This relates the differential operators $L_n$ to the $T_n$'s expressed through the fields of the conformal model. Second, we need to enumerate the states that are annihilated by the operators $T_n$ with $n \geq -1$, i.e. solve the equation \begin{eqnarray} T_n \mid G \rangle = 0 \end{eqnarray} for the ket-states, which is an internal problem of conformal field theory. If both ingredients $H(t)$ and $\mid G \rangle$ are found, the solution to the problem is given by \begin{eqnarray} \langle e^{H(t)}\mid G\rangle. \end{eqnarray} To be more explicit, for the case of the discrete Virasoro constraints we can just look for solutions in terms of the simplest possible conformal model: that of a single holomorphic scalar field \begin{eqnarray} \phi (z) = \hat q + \hat p \log z + \sum _{k\neq 0} {J_{-k}\over k} z^{k}\nonumber\\ \ [J_n,J_m] = n\delta _{n+m,0}, \ \ \ [\hat q,\hat p] = 1. \end{eqnarray} Then the procedure is as follows: Define the vacuum states \begin{eqnarray} J_k|0\rangle &= 0, \ \ \ \langle N|J_{-k} = 0, \ \ \ k > 0\nonumber\\ \hat p|0\rangle &= 0, \ \ \ \langle N|\hat p = N\langle N|, \end{eqnarray} the stress-tensor \begin{eqnarray} T(z) = {1\over 2}[\partial \phi (z)]^2 = \sum T_nz^{-n-2},\quad T_n = \sum _{k>0}J_{-k}J_{k+n} + {1\over 2}\sum _{{a+b=n}\atop{a,b\geq 0}}J_aJ_b, \end{eqnarray} and the Hamiltonian \begin{eqnarray} H(t) &= {1\over \sqrt{2}} \sum _{k>0}t_kJ_k = {1\over \sqrt{2}}\oint_{C_0}U(z)J(z)\nonumber\\ U(z) &= \sum _{k>0}t_kz^k, \ \ \ J(z) = \partial \phi (z). \end{eqnarray} It can now be easily checked that \begin{eqnarray} L_n\langle N|e^{H(t)}\ldots = \langle N|e^{H(t)}T_n\ldots \end{eqnarray} and \begin{eqnarray} T_n|0\rangle = 0, \ \ \ n \geq -1 .
\end{eqnarray} As an immediate consequence, any correlator of the form \begin{eqnarray} Z_N\{t\mid G\} = \langle N|e^{H(t)}G|0\rangle \label{confsol} \end{eqnarray} gives a solution to (\ref{virdid}) provided \begin{eqnarray} [T_n,G] = 0, \ \ \ n \geq -1. \label{crGop} \end{eqnarray} In fact, the operators $G$ that commute with the stress tensor are well known: these are just arbitrary functions of the "screening charges" \footnote{For notational simplicity we omit the normal-ordering signs; in fact the operators involved are $:e^H:$ and $:e^{\pm \sqrt{2}\phi}:$} \begin{eqnarray} Q_\pm = \oint J_\pm = \oint e^{\pm \sqrt{2}\phi }. \end{eqnarray} The correlator (\ref{confsol}) is non-vanishing only if the matching condition for the zero-modes of $\phi$ is satisfied. If we demand that the operator depend only on $Q_{+}$, this implies that only one term of the expansion in powers of $Q_{+}$ contributes to (\ref{confsol}), so that the result is essentially independent of the choice of the function $G(Q_+)$; we can for example take $G(Q_+) = e^{Q_+}$ and obtain: \begin{eqnarray} Z_N\{t\} \sim \frac{1}{N!}\langle N|e^{H(t)}(Q_+)^N|0\rangle . \label{comamo1mm"} \end{eqnarray} This correlator is easy to evaluate using the Wick theorem and the propagator $\phi(z)\phi(z')\sim \log(z-z')$, and finally we get \begin{eqnarray} Z_N\{t\} &= \frac{1}{N!} \langle N \mid :e^{{1\over \sqrt{2}}\oint_{C_0}U(z)\partial\phi(z)}: \prod_{i=1}^N \oint_{C_i} dz_i :e^{\sqrt{2}\phi(z_i) }: \mid 0 \rangle = \nonumber \\ &= \frac{1}{N!}\prod_{i=1}^N\oint_{C_i} dz_i e^{U(z_i)} \prod_{i<j}^N (z_i-z_j)^2 \label{comamo1mm} \end{eqnarray} in the form of a multiple integral, which can in fact be directly related to the matrix integral in (\ref{1mamo'}), see \cite{BIPZ} and the next section. Thus in the simplest case we have resolved the inverse problem: we reconstructed the integral representation from the set of discrete Virasoro constraints.
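The equivalence between the eigenvalue form (\ref{comamo1mm}) and the matrix integral (\ref{1mamo'}) can be cross-checked numerically for small matrices. The sketch below (our own illustration, not part of the original derivation) takes $N=2$, $t_2=-1$ and all other $t_k=0$, so that both integrals are convergent Gaussians on real contours, and compares $\langle{\rm Tr}\,H^4\rangle$ computed as a $2\times 2$ Hermitean matrix integral and in the eigenvalue representation with the squared Vandermonde measure; Gauss-Hermite quadrature is exact for these polynomial integrands:

```python
import numpy as np

# Gauss-Hermite rule: exact for polynomials of degree <= 2*npts-1 against e^{-x^2}
xg, wg = np.polynomial.hermite.hermgauss(8)

def avg_eigen():
    """<Tr H^4> in the eigenvalue representation for N=2:
    measure (x-y)^2 e^{-x^2-y^2} dx dy  (t_2 = -1, all other t_k = 0)."""
    X, Y = np.meshgrid(xg, xg, indexing="ij")
    W = np.outer(wg, wg)
    vdm2 = (X - Y) ** 2
    return np.sum(W * vdm2 * (X**4 + Y**4)) / np.sum(W * vdm2)

def avg_matrix():
    """<Tr H^4> in the 2x2 Hermitean matrix integral with weight e^{-Tr H^2};
    H = [[a, (u+iv)/sqrt(2)], [(u-iv)/sqrt(2), b]] gives Tr H^2 = a^2+b^2+u^2+v^2,
    matching the Gauss-Hermite weight in each of the four real variables."""
    a = xg[:, None, None, None]; b = xg[None, :, None, None]
    u = xg[None, None, :, None]; v = xg[None, None, None, :]
    W = (wg[:, None, None, None] * wg[None, :, None, None]
         * wg[None, None, :, None] * wg[None, None, None, :])
    s = (a + b) / 2
    r2 = ((a - b) / 2) ** 2 + (u**2 + v**2) / 2   # eigenvalues are s +- sqrt(r2)
    trH4 = 2 * s**4 + 12 * s**2 * r2 + 2 * r2**2  # (s+r)^4 + (s-r)^4
    return np.sum(W * trH4) / np.sum(W)

assert abs(avg_eigen() - avg_matrix()) < 1e-10
print("matrix and eigenvalue forms agree: <Tr H^4> =", avg_eigen())  # both give 4.5
```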
However, the answer we have obtained seems a little more general than (\ref{1mamo'}): the r.h.s. of eq.(\ref{comamo1mm}) still depends on the contours of integration. Moreover, we can also recall that the operator $G$ above could depend not only on $Q_+$, but also on $Q_-$. The most general formula is a little more complicated than (\ref{comamo1mm}): \begin{eqnarray} \oldfalse \begin{array}{c} Z_{N}\{t\mid C_i, C_r\} \sim \frac{1}{(N+M)!M!}\langle N|e^{H(t)}(Q_+)^{N+M} (Q_-)^M|0\rangle = \\ = \frac{1}{(N+M)!M!}\prod_{i=1}^{N+M}\oint_{C_i} dz_i e^{U(z_i)} \prod_{r=1}^M\oint_{C'_r} dz'_r e^{-U(z'_r)} \cdot \\ \cdot \frac{\prod_{i<j}^{N+M} (z_i-z_j)^2 \prod_{r<s}^M (z'_r-z'_s)^2 } {\prod_{i}^{N+M}\prod_r^M (z_i-z'_r)^2}. \end{array} \label{comamo1mm'} \end{eqnarray} We refer to the papers \cite{comamo} for a discussion of the issue of contour-dependence. In a certain sense all these different integrals can be considered as branches of the same analytical function $Z_N\{t\}$. The dependence on $M$ is essentially eliminated by Cauchy integration around the poles in the denominator of (\ref{comamo1mm'}). The above construction can be straightforwardly applied to other algebras of constraints, provided: (i) The free-field representation of the algebra is known in the CFT framework, such that the generators are $polynomials$ in the fields $\phi$ (only in such a case is it straightforward to construct a Hamiltonian $H$ which relates the CFT realization of the algebra to that in terms of differential operators w.r.t. the $t$-variables; in fact under this condition $H$ is usually linear in the $t$'s and $\phi$'s). There are examples (like the Frenkel-Kac representation of level $k=1$ simply-laced Kac-Moody algebras \cite{FK} or generic reductions of the WZNW model \cite{GMMOS},\cite{BO}-\cite{GereF}) in which the generators are $exponentials$ of free fields; then this construction should be slightly modified.
(ii) It is easy to find the vacuum annihilated by the relevant generators (here, for example, lies the problem with applying this approach to the case of the "continuous" Virasoro and $W$-constraints). The resolution of this problem involves the consideration of correlators on Riemann surfaces with non-trivial topologies, often of infinite genus. (iii) The free-field representation of the "screening charges", i.e. the operators that commute with the generators of the group within the conformal model, is explicitly known. These conditions are fulfilled in many cases in CFT, including conventional {\bf W}-algebras \cite{Zam} and ${\cal N} = 1$ \footnote{In the case of ${\cal N} = 2$ supersymmetry a problem arises because of the lack of reasonable screening charges. At the most naive level the relevant operator to be integrated over superspace (over $dzd^{\cal N}\theta$) in order to produce a screening charge has dimension $1-\frac{1}{2}{\cal N}$, which $vanishes$ when ${\cal N} = 2$. } supersymmetric models \cite{AGS}. For illustrative purposes we present here several formulas from the last paper of ref.\cite{comamo} for the case of the ${\bf W}_{r+1}$-constraints associated with the simply-laced algebras ${\cal A}$ of rank $r$. The partition function of such a "conformal multimatrix model" is a function of the "time-variables" $t_k^{(\lambda)},\ k = 0...\infty,\ \lambda = 1...r={\rm rank}{\cal A}$, and also depends on the integer-valued $r$-vector ${\Bf N} = \{N_1...N_r\}$. The ${\bf W}_{r+1}$-constraints imposed on the partition function are: \begin{eqnarray} W_n^{(a)}(t)Z_{{\Bf N}}^{\cal A}\{t\} = 0, \ \ n\geq 1-a, \ \ a= 2...r+1. \end{eqnarray} The form of the $W$-operators is somewhat complicated; for example, in the case of $r+1=3$ (i.e.
for ${\cal A} = A_2$ ($SL(3)$)) \begin{eqnarray} \oldfalse \begin{array}{c} W^{(2)}_n = \sum ^\infty _{k=0}(kt_k\frac{\partial}{\partial t_{k+n}} + k\bar t_k\frac{\partial}{\partial \bar t_{k+n}}) + \\ + \sum _{a+b=n}(\frac{\partial ^2}{\partial t_a\partial t_b} + \frac{\partial ^2}{\partial \bar t_a\partial \bar t_b}) \end{array} \end{eqnarray} \begin{eqnarray} \oldfalse \begin{array}{c} W^{(3)}_n = \sum _{k,l>0}(kt_klt_l\frac{\partial}{\partial t_{k+n+l}} - k\bar t_kl\bar t_l\frac{\partial }{\partial t_{k+n+l}} -2kt_kl\bar t_l\frac{\partial }{\partial \bar t_{k+n+l}})+ \\ + 2\sum _{k>0}\left[ \sum _{a+b=n+k}(kt_k\frac{\partial ^2}{\partial t_a \partial t_b} - kt_k\frac{\partial ^2}{\partial \bar t_a\partial \bar t_b} - 2k\bar t_k\frac{\partial ^2}{\partial t_a\partial \bar t_b})\right] + \\ + {4\over 3}\sum _{a+b+c=n}(\frac{\partial ^3} {\partial t_a\partial t_b\partial t_c} - \frac{\partial ^3}{\partial t_a\partial \bar t_b\partial \bar t_c}), \end{array} \label{wopex} \end{eqnarray} and the two types of time-variables, denoted by $t_k$ and $\bar t_k$, are associated with two $orthogonal$ directions in the Cartan plane of $A_2$: ${\bf e} = {{\Bf\alpha} _1\over\sqrt{2}}$, $\bar{\bf e} = {\sqrt{3}{\Bf\nu} _2\over\sqrt{2}}$. \footnote{Such an orthogonal basis is especially convenient for the discussion of the integrability properties of the model; these $t$ and $\bar t$ are linear combinations of the time-variables $t_k^{\lambda}$ appearing in eqs. (\ref{hamAr}) and (\ref{comamoAr}).} All other formulas, however, are very simple: The conformal model is usually that of $r$ free fields, $S \sim \int\bar\partial{\Bf\phi}\partial{\Bf\phi} d^2z$, which is used to describe the representation of the level-one Kac-Moody algebra associated with ${\cal A}$.
The Hamiltonian is \begin{eqnarray} H(t^{(1)}\ldots t^{(r+1)}) = \sum_{\lambda = 1}^{r+1}\sum _{k>0}t^{(\lambda )}_k{\Bf\mu} _\lambda {\Bf J}_k, \label{hamAr} \end{eqnarray} where the $\{{\Bf\mu}_{\lambda}\}$ are associated with the "fundamental weight" vectors ${\Bf\nu}_{\lambda}$ in the Cartan hyperplane and in the simplest case of ${\cal A} = A_r$ ($SL(r+1)$) satisfy $$ {\Bf\mu}_{\lambda}\cdot {\Bf\mu}_{\lambda'}=\delta_{\lambda\lambda'}-{1\over{r+1} }, \ \ \ \sum_{\lambda=1}^{r+1} {\Bf\mu}_{\lambda}=0, $$ thus only $r$ of the time variables $t^{(1)}\ldots t^{(r+1)}$ are linearly independent. The relation between the differential operators $W_n^{(a)}(t)$ and the operators ${\rm W}_n^{(a)}$ in the CFT is now defined by \begin{eqnarray} W^{(a)}_i\langle {\Bf N}|e^{H(t)}\ldots = \langle {\Bf N}|e^{H(t)}{\rm W}^{(a)}_i\ldots\ , \nonumber \\ a=2,\ldots,p; \ \ \ i\geq 1-a, \end{eqnarray} where \begin{eqnarray} {\rm W}^{(a)}_n = \oint z^{a+n-1}{\rm W}^{(a)}(z)\nonumber\\ {\rm W}^{(a)}(z) = \sum _\lambda [{\Bf\mu} _\lambda \partial {\Bf\phi} (z)]^a + \ldots \end{eqnarray} are the spin-$a$ generators of the ${\bf W}^{\cal A}_{r+1}$ algebra. The screening charges, which commute with all the ${\rm W}^{(a)}(z)$, are given by \begin{eqnarray} Q^{(\alpha)} = \oint J^{(\alpha)} = \oint e^{{\Bf\alpha} {\Bf\phi} } \end{eqnarray} $\{{\Bf\alpha} \}$ being the roots of the finite-dimensional simply-laced Lie algebra ${\cal A}$. Thus the partition function arises in the form: \begin{eqnarray} Z^{\cal A}_{{\Bf N}}\{t\} = \langle {\Bf N}|e^{H(t)}G\{Q ^{(\alpha)} \}|0\rangle \end{eqnarray} where $G$ is an exponential function of the screening charges.
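The stated properties of the vectors ${\Bf\mu}_\lambda$ are easy to verify numerically in the standard explicit realization ${\Bf\mu}_\lambda = e_\lambda - \frac{1}{r+1}\sum_\sigma e_\sigma$ in ${\mathbb R}^{r+1}$ (a sketch; this particular embedding is our choice of realization):

```python
import numpy as np

def weight_vectors(r):
    """mu_lambda = e_lambda - (1/(r+1)) sum_sigma e_sigma, lambda = 1..r+1,
    realized in R^{r+1}; row lambda of the returned matrix is mu_lambda."""
    n = r + 1
    return np.eye(n) - np.ones((n, n)) / n

for r in range(1, 6):
    mu = weight_vectors(r)
    n = r + 1
    G = mu @ mu.T                          # Gram matrix mu_l . mu_l'
    assert np.allclose(G, np.eye(n) - np.ones((n, n)) / n)
    assert np.allclose(mu.sum(axis=0), 0)  # sum_lambda mu_lambda = 0
    # differences mu_l - mu_m realize the roots, with length^2 = 2:
    assert np.isclose((mu[0] - mu[1]) @ (mu[0] - mu[1]), 2)
print("mu_l . mu_l' = delta - 1/(r+1) and sum mu = 0 verified for r = 1..5")
```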
Evaluation of the free-field correlator gives: \begin{eqnarray} \oldfalse \begin{array}{c} Z^{\cal A}_{{\Bf N}}\{t\} \sim \int \prod _\alpha \left[ \prod ^{N_\alpha }_{i=1}dz^{(\alpha )}_i \exp \left( \sum _{\lambda ;k>0}t^{(\lambda )}_k({\Bf\mu} _\lambda {\Bf\alpha} )(z^{(\alpha )}_i)^k \right) \right] \times \\ \times \prod _{(\alpha ,\beta )}\prod ^{N_\alpha }_{i=1} \prod ^{N_\beta }_{j=1}(z^{(\alpha )}_i- z^{(\beta )}_j)^{{\Bf\alpha} {\Bf\beta} } \end{array} \label{comamoAr} \end{eqnarray} In fact this expression can be rewritten in terms of an $r$-matrix integral - a "conformal multimatrix model": \begin{eqnarray} Z^{\cal A}_{{\Bf N}}\{t^{(\alpha)}\} = c_N^{p-1}\int_{N\times N} dH^{(1)}...dH^{(p-1)} \prod_{\alpha = 1}^{p-1} e^{\sum_{k=0}^{\infty}t_k^{(\alpha)}{\rm Tr} H_{(\alpha)}^k}\cdot \nonumber \\ \cdot\prod_{(\alpha ,\beta )} {\rm Det} \left(H^{(\alpha)}\otimes I - I\otimes H^{(\alpha+1)}\right)^{{\Bf\alpha} {\Bf\beta} } \label{comamo"} \end{eqnarray} In the simplest case of the ${\bf W}_3$ algebra, eq.(\ref{comamoAr}) with the insertion of only two (of the six) screenings $Q_{\alpha _1}$ and $Q_{\alpha _2}$ turns into \begin{eqnarray} \oldfalse \begin{array}{c} Z^{A_2}_{N_1,N_2}(t,\bar t) = {1\over N_1!N_2!} \langle N_1,N_2|e^{H(t,\bar t)}(Q^{(\alpha _1)})^{N_1} (Q^{(\alpha _2)})^{N_2}|0\rangle = \\ = {1\over N_1!N_2!} \prod_i \int dx_i e^{U(x_i)} \prod_j\int dy_j e^{\bar U(y_i)} \Delta (x)\Delta (x,y) \Delta (y), \end{array} \label{comamo"3} \end{eqnarray} where $\Delta(x,y) \equiv \Delta(x)\Delta(y)\prod _{i,j}(x_i - y_j)$. This model is associated with the algebra ${\cal A} = A_2\ (SL(3))$, while the original 1-matrix model (\ref{comamo1mm"})-(\ref{comamo1mm'}) is associated with ${\cal A} = A_1\ (SL(2))$. The whole series of models (\ref{comamoAr}-\ref{comamo"}) for ${\cal A} = A_r\ (SL(r+1))$ is distinguished by its relation to the level $k=1$ simply-laced Kac-Moody algebras.
In this particular situation the underlying conformal model has integer central charge $ c = r = {\rm rank} {\cal A}$ and can be "fermionized".\footnote{ This is possible only for very special Kac-Moody algebras, and such a formulation is important in order to deal with the $conventional$ formulation of integrability, which usually involves $commuting$ Hamiltonian flows (not just a closed algebra of flows) and a fermionic realization of the universal moduli space (universal Grassmannian). In fact these restrictions are quite arbitrary and can be removed (though this has not yet been done in full detail); see section 4 below for a more detailed discussion. } The main feature of this formulation is that the Kac-Moody currents (which after integration turn into the "screening charges" of the above construction) are quadratic in the fermionic fields, while they are represented by exponentials in the free-boson formulation. In fact the fermionic (spinor) model naturally possesses $GL(r+1)$ rather than $SL(r+1)$ symmetry (other simply-laced algebras can be embedded into larger $GL$-algebras, and this provides a fermionic description for them in the case of $k=1$).
The model contains $r+1$ spin-1/2 fields $\psi_i$ and their conjugates $\tilde\psi_i$ ($b,c$-systems); \begin{eqnarray} S = \sum_{j=1}^{r+1} \int \tilde\psi_j\bar\partial\psi_j d^2z, \nonumber \end{eqnarray} the central charge is $c=r+1$, and the operator algebra is \begin{eqnarray} \tilde\psi_j(z)\psi_k(z') &= \frac{\delta_{jk}}{z-z'}\ + :\tilde\psi_j(z)\psi_k(z'): \nonumber \\ \psi_j(z)\psi_k(z') &= (z-z')\delta_{jk}:\psi_j(z)\psi_k(z'): + \ (1-\delta_{jk}):\psi_j(z)\psi_k(z'): \nonumber\\ \tilde\psi_j(z)\tilde\psi_k(z') &= (z-z')\delta_{jk}:\tilde\psi_j(z)\tilde\psi_k(z'): + \ (1-\delta_{jk}):\tilde\psi_j(z)\tilde\psi_k(z'): \nonumber \end{eqnarray} The level $k=1$ $GL(r+1)$ Kac-Moody currents are just $J_{jk} = :\tilde\psi_j\psi_k:\ \ j,k = 1\ldots r+1$, and the screening charges are $Q^{(\alpha)} = iE_{jk}^{(\alpha)}\oint :\tilde\psi_j\psi_k:$, where the $E_{jk}^{(\alpha)}$ are representatives of the roots ${\Bf\alpha}$ in the matrix representation of $GL(r+1)$. The Cartan subalgebra is represented by $J_{jj}$, while the positive and negative Borel subalgebras are represented by $J_{jk}$ with $j<k$ and $j>k$ respectively. In eq.(\ref{comamo1mm'}) $Q_+ = i\oint\tilde\psi_1\psi_2,\ \ Q_- = i\oint\tilde\psi_2\psi_1\ $ while in eq.(\ref{comamo"3}) $Q^{(\alpha_1)} = i\oint\tilde\psi_1\psi_2,\ \ Q^{(\alpha_2)} = i\oint\tilde\psi_1\psi_3\ $ (and $Q^{(\alpha_3)} = i\oint\tilde\psi_2\psi_3,\ \ Q^{(\alpha_4)} = i\oint\tilde\psi_2\psi_1,\ \ Q^{(\alpha_5)} = i\oint\tilde\psi_3\psi_1,\ \ Q^{(\alpha_6)} = i\oint\tilde\psi_3\psi_2$). $Q^{(\alpha_6)}$ can be substituted for $Q^{(\alpha_2)}$ in (\ref{comamo"3}) without changing the answer. For generic $r$ a similar choice of "adjacent" (not simple!) roots (such that their scalar products are $+1$ or $0$) leads to the selection of the following $r$ screening operators $Q^{(1)} = i\oint\tilde\psi_1\psi_2,\ \ Q^{(2)} = -i\oint\psi_2\tilde\psi_3,\ \ Q^{(3)} = i\oint\tilde\psi_3\psi_4,\ldots$, i.e.
$Q^{(j)} = i\oint\tilde\psi_j\psi_{j+1}$ for odd $j$ and $Q^{(j)} = -i\oint\psi_j\tilde\psi_{j+1}$ for even $j$. \subsection{Gross-Newman equation} We now turn to the consideration of the WI for another sort of matrix model. This subject concerns at least two important classes: the conventional discrete $two$-matrix models and Kontsevich models. As explained in the Introduction, the theories of the second type arise in the consideration of the $(p,1)$ continuous matrix models, as well as in the study of topological Landau-Ginzburg theories, while the two-matrix model is believed to exhibit a rich pattern of continuous limits and is capable of providing representatives of all the $(p,q)$ universality classes (this line of reasoning, however, has never been fully developed and we shall not discuss it in these notes). The starting point and the basic example is provided by the integral \begin{eqnarray} {\cal F}_{V,n}\{ L\} \equiv \int_{n\times n} dX e^{- {\rm tr} V(X) + {\rm tr} L X} \label{KI} \end{eqnarray} over an $n\times n$ Hermitean matrix, which we shall further refer to as the "Kontsevich integral", keeping in mind its most important application (though this obvious quantity has of course been considered by many other people). It may seem that the action in this integral is $not$ of the most general type and that we can no longer perform an $arbitrary$ change of variables $X \longrightarrow f(X)$ without changing the functional form of the integral. In fact this is incorrect, because the "external field" $ L$ is matrix-valued and coupled to $X$ linearly, and therefore $any$ correlator of $X$-fields can be represented through $ L$-derivatives. Consider again the shift $X \rightarrow X + \epsilon_nX^{n+1},\ n \geq -1$.
Invariance of the integral implies: \begin{eqnarray} \int dX e^{- {\rm tr} V(X) + {\rm tr} L X} {\rm tr}\ \epsilon_n \left(-X^{n+1} V'(X) + L X^{n+1} + \sum_{k=0}^n X^k {\rm tr}X^{n-k} \right) = 0, \nonumber \end{eqnarray} which can be rewritten as \footnote{The obvious relation is used here: $X_{\gamma\delta} e^{{\rm tr} L X} = \frac{\partial}{\partial L_{\delta\gamma}} e^{{\rm tr} L X}$. Note that the order of the matrix indices $\gamma\delta$ is reversed on the r.h.s. as compared to the l.h.s., i.e. the derivatives are in fact w.r.t. the $transposed$ matrix $ L$: $\ f(X)e^{{\rm tr} L X} = f(\frac{\partial}{\partial L_{tr}}) e^{{\rm tr} L X}$ (at least for any function $f(X)$ which can be represented as a formal series in integer powers of $X$). } \begin{eqnarray} &{\rm tr}\ \epsilon_n \left( \left(-\frac{\partial}{\partial L_{\rm tr}}\right)^{n+1} V'\left(\frac{\partial}{\partial L_{\rm tr}}\right) + L \left(-\frac{\partial}{\partial L_{\rm tr}}\right)^{n+1} + \right. \nonumber \\ &\left. \sum_{k=0}^n \left(-\frac{\partial}{\partial L_{\rm tr}}\right)^k {\rm tr}\left(-\frac{\partial}{\partial L_{\rm tr}}\right)^{n-k} \right) {\cal F}_V\{ L\} = \nonumber \\ &= {\rm tr}\ \epsilon_n \left(-\frac{\partial}{\partial L_{\rm tr}}\right)^{n+1} \left(V'\left(\frac{\partial}{\partial L_{\rm tr}}\right) - L\right){\cal F}_V\{ L\} = 0 \label{preGNe} \end{eqnarray} This system is in fact equivalent to a single matrix-valued equation \begin{eqnarray} \left(V'\left(\frac{\partial}{\partial L_{\rm tr}}\right) - L\right) {\cal F}_V\{ L\} = 0. \label{GNe} \end{eqnarray} As far as I know, this equation was first written down in \cite{GN}; therefore it will be referred to as the Gross-Newman (GN) equation. It was rediscovered, and its implications for the theory of matrix models were investigated, in \cite{MMM},\cite{GKM},\cite{tildeW}. There are essentially two types of corollaries, which will be discussed in the next two subsections.
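Before turning to those corollaries, note that in the scalar case $n=1$ with the Gaussian potential $V(x)=x^2/2$ the GN equation can be checked in closed form: ${\cal F}(l)=\int dx\, e^{-x^2/2+lx}=\sqrt{2\pi}\,e^{l^2/2}$, and $V'(d/dl)-l$ reduces to $d/dl - l$, which annihilates this function. A sympy sketch (our own illustration, not from the text):

```python
import sympy as sp

x, l = sp.symbols("x l", real=True)

# scalar "Kontsevich integral" with V(x) = x^2/2 (convergent Gaussian case)
F = sp.simplify(sp.integrate(sp.exp(-x**2 / 2 + l * x), (x, -sp.oo, sp.oo)))
# F = sqrt(2*pi) * exp(l**2/2)

# Gross-Newman equation for n=1: (V'(d/dl) - l) F = F' - l F = 0
gn = sp.simplify(sp.diff(F, l) - l * F)
assert gn == 0
print("scalar GN equation satisfied by F(l) =", F)
```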
First, the GN equation can be used to characterize the function ${\cal F}_V\{ L\}$ itself. This will lead us to the consideration of Kontsevich models. Second, it can be used to derive equations for the 2-matrix model, which arises after ${\cal F}_V\{ L\}$ is further integrated with some weight over $ L$. \subsection{Ward identities for the Generalized Kontsevich Model} Being just the complete set of equations of motion, the GN equation (\ref{GNe}) provides complete information about the function ${\cal F}_V\{ L\}$. However, this statement needs to be formulated more carefully. The need for care comes, for example, from the observation that the operators \begin{eqnarray} {\rm tr} L^m \left(V'\left(\frac{\partial}{\partial L_{\rm tr}}\right) - L\right) \end{eqnarray} do not form a closed algebra: their commutators have a different functional form. One of the reasons for these complications is that eq.(\ref{GNe}) does not account explicitly for a very important property of ${\cal F}_V\{ L\}$: this function in fact depends only on the eigenvalues of $ L$. This information should still somehow be added to the GN equation. We shall analyze this issue of eigenvalue-dependence in more detail in the next sections. For our current purposes this argument implies that one should try to express equation (\ref{GNe}) in terms of eigenvalues. Here, however, one should be careful again. Clearly, ${\cal F}_V\{ L\}$ not only depends on the eigenvalues, it depends on their "symmetric" (Weyl-group invariant) combinations, i.e. it depends on quantities like ${\rm tr} L^a$ rather than on particular eigenvalues. Moreover, the powers $a$ here should be negative and fractional. Indeed, integrals like (\ref{KI}) are usually understood as analytical continuations from some values of the parameters in the potential $V$ for which the integral is convergent. They can also be related to the formal (perturbation) series arising when the integrand is expanded around a stationary point. To begin with, it is reasonable to take $n=1$, i.e.
consider just an ordinary integral. For the sake of simplicity also take the particular potential $V(x) = \frac{x^{p+1}}{p+1}$. Then the stationary point is at $x = l^{\frac{1}{p}}$ and \begin{eqnarray} \int dx e^{-\frac{x^{p+1}}{p+1}+ l x} \sim l^{-\frac{p-1}{2p}} e^{\frac{p}{p+1}l^{\frac{p+1}{p}}} \sum_{k \ge 0} c_kl^{-\frac{k}{p}}. \label{KIper} \end{eqnarray} It is now easy to understand what should be done in the general situation with matrices and arbitrary potentials. First of all, one needs to solve the equation for the stationary point, $V'(X) = L$. For this purpose it is most convenient to introduce a new matrix variable $\Lambda$ instead of $ L$, which by definition satisfies $V'(\Lambda) = L$. Then the stationary point is just $X=\Lambda$. Second, one should separate the analogue of the complicated prefactor (the quasiclassical contribution): \begin{eqnarray} {\cal C}_V\{\Lambda\} = (2\pi)^{n^2/2} \frac{e^{{\rm tr}\left(\Lambda V'(\Lambda) - V(\Lambda)\right)}} {\sqrt{{\rm det} V''(\Lambda)}}. \label{cvfactor} \end{eqnarray} Then the function that describes the pure "quantum" contribution \footnote{The "classical action" in (\ref{cvfactor}) can also be represented as ${\rm tr}\left(\Lambda V'(\Lambda) - V(\Lambda)\right) = {\rm tr}\int \Lambda dV'(\Lambda)$. The determinant of quadratic fluctuations is defined as $$(2\pi)^{n^2/2} \left({\rm det} V''(\Lambda)\right)^{-1/2} \sim \int dY e^{-{\rm tr}V_2(\Lambda,Y)}, \nonumber $$ where $V_2(\Lambda,Y) \equiv {\rm lim}_{\epsilon \rightarrow 0}\frac{1}{\epsilon^2} \left(V(\Lambda+\epsilon Y) - V(\Lambda) - \epsilon V'(\Lambda) Y\right)$. For $V(\Lambda) = \frac{\Lambda^{p+1}}{p+1}\ $ we have $V''(\Lambda) = ( \sum_{k=0}^{p-1} \Lambda^k \otimes \Lambda^{p-k-1} )$. One could easily choose an "opposite" parametrization in eq.(\ref{Miwatimes}): $T_k = -\frac{1}{k}{\rm tr}\Lambda^{-k}$. Though not quite obvious, this never influences any results (see section 2.10 for an example).
Our choice of signs is motivated by the simplification of formulas for the GKM, including the relation between $ L$ and $\Lambda$. Instead, some sign factors appear in formulas related to Toda-like representations of partition functions and those involving $\tilde W$-operators. } \begin{eqnarray} {\cal Z}_V\{T\} \equiv {\cal C}_V\{\Lambda\}^{-1} {\cal F}_V\{V'(\Lambda)\} \label{GKMdef} \end{eqnarray} to be referred to as the partition function of the Generalized Kontsevich Model (GKM) \cite{GKM}, can be represented as a formal (perturbation) series expansion in the variables \begin{eqnarray} T_k = \frac{1}{k}{\rm tr}\Lambda^{-k}. \label{Miwatimes} \end{eqnarray} The GN equation (\ref{GNe}) can now be rewritten as a set of differential equations for ${\cal Z}_V\{T\}$. Indeed, we already have: \begin{eqnarray} {\cal C}_V^{-1} \left(V'\left(\frac{\partial}{\partial L_{\rm tr}}\right) - L\right) {\cal C}_V {\cal Z}_V\{T\} = 0, \label{riv1} \end{eqnarray} but it is still necessary to express the operator on the l.h.s. in terms of $T$. This is in fact possible to do, using the relation: \begin{eqnarray} \frac{\partial}{\partial L_{tr}}{\cal Z}_V\{T\} = \sum_k \frac{\partial T_k}{\partial L_{tr}} \frac{\partial {\cal Z}_V}{\partial T_k} \end{eqnarray} and substituting the $traces$ of $\Lambda$-matrices, which can arise in the process of calculation, by $T$'s. It is important only that the $\Lambda$'s usually appear in negative powers: this is already achieved by the choice of the proper normalization factor ${\cal C}_V\{\Lambda\}$. For the monomial potential $V_p(X) = \frac{X^{p+1}}{p+1}$ this is especially simple: $ L = \Lambda^p$ and $\frac{\partial T_k}{\partial L_{tr}} = -\frac{1}{p}\Lambda^{-p-k}$.
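This last chain-rule ingredient is immediate to check symbolically at the level of a single eigenvalue; a minimal sympy sketch (sample values of $p$ and $k$; the variable names are ours):

```python
import sympy as sp

lam = sp.Symbol('lam', positive=True)   # an eigenvalue of Lambda
p, k = 3, 5                             # sample monomial degree and time index

# Miwa time T_k = (1/k) Lambda^{-k} and L = V'(Lambda) = Lambda^p, scalar case
T_k = sp.Rational(1, k) * lam**(-k)
L = lam**p

# chain rule through the eigenvalue: dT_k/dL = (dT_k/dlam) / (dL/dlam)
dTk_dL = sp.diff(T_k, lam) / sp.diff(L, lam)

# expected answer: dT_k/dL = -(1/p) Lambda^{-p-k}
check = sp.simplify(dTk_dL + sp.Rational(1, p) * lam**(-p - k))
```

The same computation goes through for any integers $p\geq 1$, $k\geq 1$.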
This reasoning allows one to rewrite eq.(\ref{riv1}) identically in the form \begin{eqnarray} \sum_{l} \Lambda^{-l} {\cal O}_l(T) {\cal Z}_V\{T\} = 0, \end{eqnarray} where ${\cal O}_l$ are some differential operators, depending on the shape of $V$, but independent of the size $n$ of the matrix (as all the above reasoning never referred to particular values of $n$, except for the sample example at the very beginning). It remains to use the fact that the matrix $ L$ can be arbitrarily large and have arbitrarily many independent entries, in order to conclude that we have derived a set of constraints on ${\cal Z}_V$ in the form \begin{eqnarray} {\cal O}_l(T) {\cal Z}_V\{T\} = 0. \end{eqnarray} For a potential $V$ of degree $p+1$ these appear to be exactly the "continuous Virasoro constraints". See refs.\cite{MMM} and \cite{GKM} for a detailed analysis of the Virasoro case $p=2$ (associated with pure topological gravity and with the double-scaling limit of the 1-matrix model), and \cite{Mikh} for the exhaustive presentation of the case $p=3$. \subsection{Discrete Virasoro constraints for the Gaussian Kontsevich model} As the simplest illustration of the technique described in the previous subsection, we now derive the constraints for the Gaussian Kontsevich model \cite{ChMa} with potential $V(X) = \frac{1}{2}X^2$: \begin{eqnarray} {\cal Z}_{\frac{X^2}{2}}\{N,T\} = \frac{e^{-{\rm tr}\frac{ L^2}{2}}}{({\rm det} L)^N} \int dX ({\rm det}X)^N e^{-{\rm tr}\frac{X^2}{2} + L X}. \label{gako} \end{eqnarray} In this case $ L = V'(\Lambda) = \Lambda$, and the time-variables are just \begin{eqnarray} T_k = \frac{1}{k}{\rm tr} \Lambda^{-k} = \frac{1}{k}{\rm tr} L^{-k}. \label{gakoT} \end{eqnarray} To make the model non-trivial, an extra "zero-time" variable $N$ \cite{Toda} is introduced, which was not included in the previous definition (\ref{GKMdef}).
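A quick numerical sanity check of the definition (\ref{gako}) in the simplest case $n=1$ (a sketch; the function name and the $\sqrt{2\pi}$ normalization of the measure are our conventions): completing the square shows that ${\cal Z}=1$ for $N=0$ and $N=1$, while for $N=2$ one gets $1+2T_2$ with $T_2 = \frac{1}{2}l^{-2}$, i.e. the $l$-dependence enters only through the Miwa times.

```python
import numpy as np

def Z_gauss(N, ell, xmax=30.0, npts=400001):
    # n=1 Gaussian Kontsevich model of (gako), measure normalized by sqrt(2 pi):
    # Z = e^{-l^2/2} l^{-N} \int x^N e^{-x^2/2 + l x} dx / sqrt(2 pi)
    x = np.linspace(-xmax, xmax, npts)
    dx = x[1] - x[0]
    f = x**N * np.exp(-x**2 / 2.0 + ell * x - ell**2 / 2.0)
    integral = dx * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule
    return integral / (ell**N * np.sqrt(2.0 * np.pi))
```

For example, `Z_gauss(2, 3.0)` agrees with $1 + 2\cdot\frac{1}{2}\cdot 3^{-2} = \frac{10}{9}$ to the accuracy of the quadrature.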
Now note that the $N$-dependence of the Kontsevich integral (\ref{KI}) can be described simply as an extra term in the potential: $V(X) \rightarrow \hat V(X) = V(X) - N\log X$ (though this can $not$ be done either in the quasiclassical factor ${\cal C}_V$ or in the definition of the time-variables $T$). Since the GN equation relies only on the form of the Kontsevich integral, we can use it with $V$ substituted by $\hat V$. Then we have instead of (\ref{riv1}): \begin{eqnarray} \oldfalse \begin{array}{c} \frac{e^{-{\rm tr}\frac{ L^2}{2}}}{({\rm det} L)^N} \left(\frac{\partial}{\partial L_{tr}}\right)^{n+1}\cdot \left( \frac{\partial}{\partial L_{tr}} - N\left(\frac{\partial}{\partial L_{tr}}\right)^{-1} - L \right) \cdot \\ \cdot ({\rm det} L)^Ne^{+{\rm tr}\frac{ L^2}{2}} {\cal Z}_{\frac{X^2}{2}}\{N,T\} = 0. \end{array} \label{gako2} \end{eqnarray} In order to get rid of the integral operator $(\frac{\partial}{\partial L})^{-1}$ one should take here $n \geq 0$ rather than $n \geq -1$. In fact all the equations with $n > 0$ follow from the one with $n=0$, and we restrict our consideration to the latter.
For $n=0$ we obtain from (\ref{gako2}): \begin{eqnarray} \left(\left( \frac{\partial}{\partial L_{tr}} + \frac{N}{ L} + { L}\right)^2 - 2N - L \left( \frac{\partial}{\partial L_{tr}} + \frac{N}{ L} + { L}\right) \right) {\cal Z} = 0 \nonumber \end{eqnarray} or \begin{eqnarray} \left(\left( \frac{\partial}{\partial L_{tr}}\right)^2 + \left( L + \frac{2N}{ L}\right)\frac{\partial}{\partial L_{tr}} + \frac{N^2}{ L^2} - \frac{N}{ L}{\rm tr}\frac{1}{ L} \right) {\cal Z} = 0, \label{gako3} \end{eqnarray} and it remains to substitute: \begin{eqnarray} \frac{\partial{\cal Z}}{\partial L_{tr}} &= -\sum_{k=0}^{\infty} \frac{1}{ L^{k+1}} \frac{\partial{\cal Z}}{\partial T_k}; \nonumber \\ \frac{\partial^2{\cal Z}}{\partial L_{tr}^2} &= \sum_{k=1}^{\infty} \left( \sum_{a=1}^{k+1}\frac{1}{ L^{k+2-a}} {\rm tr}\frac{1}{ L^a} \right) \frac{\partial{\cal Z}}{\partial T_k} + \sum_{k,l=1}^{\infty} \frac{1}{ L^{k+l+2}} \frac{\partial^2{\cal Z}}{\partial T_k\partial T_l} = \nonumber \\ &= \sum_{m=-1}^{\infty} \frac{1}{ L^{m+2}} \left( \sum_{k>{\rm max}(m,0)} \left({\rm tr}\frac{1}{ L^{k-m}}\right) \frac{\partial{\cal Z}}{\partial T_k} + \sum_{k=1}^{m-1} \frac{\partial^2{\cal Z}}{\partial T_k\partial T_{m-k}} \right) \nonumber \end{eqnarray} and finally obtain: \begin{eqnarray} \sum_{m=-1}^{\infty} &\frac{1}{ L^{m+2}} \left( \sum_{k=1+\delta_{m,-1}}^{\infty} \left({\rm tr}\frac{1}{ L^{k}}\right) \frac{\partial}{\partial T_{k+m}} + \sum_{k=1}^{m-1} \frac{\partial^2}{\partial T_k\partial T_{m-k}} - \right. \nonumber \\ &\left. - \frac{\partial}{\partial T_{m+2}} - 2N\frac{\partial}{\partial T_{m}} + N^2\delta_{m,0} - N \left({\rm tr}\frac{1}{ L}\right) \delta_{m,-1} \right) {\cal Z} = \nonumber \\ &= \sum_{m=-1}^{\infty} \frac{1}{ L^{m+2}} e^{NT_0} L_m(T+r) e^{-NT_0} {\cal Z} = 0.
\label{gako4} \end{eqnarray} Here $L_m(t)$ are just the generators (\ref{virdop}) of the discrete Virasoro algebra (\ref{virdid}): \begin{eqnarray} e^{Nt_0} L_m(t) e^{-Nt_0} = e^{Nt_0} \left( \sum_{k=1}^{\infty} kt_k\frac{\partial}{\partial t_{k+m}} + \sum_{k=0}^m \frac{\partial^2}{\partial t_k\partial t_{m-k}} \right) e^{-Nt_0}, \end{eqnarray} and on the r.h.s. of (\ref{gako4}) $r_k = -\frac{1}{2}\delta_{k,2}$.\footnote{ This small correction is a manifestation of a very general phenomenon: from the point of view of symmetries (Ward identities) it is more natural to consider $Z_V$ not as a function of the $T$-variables, but of some more complicated combinations $\hat T_k + r_k$, depending on the shape of the potential $V$. If $V$ is a polynomial of degree $p+1$, $\hat T_k = \frac{1}{k}{\rm tr} (V'(\lambda))^{-k/p},$ while $r_k = \frac{p}{k(p-k)}{\rm Res}\left(V'(\mu)\right)^{1-\frac{k}{p}}d\mu$. For monomial potentials these expressions become very simple: $\hat T_k = T_k$ and $r_k = -\frac{p}{p+1}\delta_{k,p+1}$. See \cite{comamo} and section 4.9 below for more details. In most places in these notes we prefer to use the invariant potential-independent times $T_k$, instead of $\hat T_k$, but then the Ward identities acquire some extra terms with $r_k$ (which will in fact be very simple in our examples, which are all given for monomial potentials). } Thus we have found that the WI of the Gaussian Kontsevich model (\ref{gako}) coincide with those of the ordinary 1-matrix model; moreover, the size of the matrix $N$ in the latter model is associated with the "zero-time" in the former one. This result \cite{ChMa} of course implies that the two models are identical: \begin{eqnarray} e^{-NT_0}{\cal Z}_{\frac{X^2}{2}}\{N,T_1,T_2,\ldots\} \sim Z_N\{T_0,T_1,T_2,\ldots\}.
\end{eqnarray} We shall discuss the direct connection between these two matrix integrals (\ref{1mamo'}) and (\ref{gako}) in the next section, after some more details have been presented about the structure of "eigenvalue" matrix models. \subsection{Continuous Virasoro constraints for the $V = \frac{X^3}{3}$ Kontsevich model} This example is a little more complicated than that of the previous subsection, and we do not present the calculations in full detail (see \cite{MMM} and \cite{GKM}). Our goal is to demonstrate that the constraints which arise in this model, though they still form (a Borel subalgebra of) some Virasoro algebra, are $different$ from (\ref{virdid}). From the point of view of the CFT-formulation the relevant model is that of the $twisted$ (in this particular case, antiperiodic) free fields. These so-called "continuous Virasoro constraints" give the simplest illustration of the difference between discrete and continuous matrix models: this is essentially the difference between the "homogeneous" (Kac-Frenkel) and "principal" (soliton vertex operator) representations of the level $k=1$ Kac-Moody algebra. From the point of view of integrable hierarchies this is the difference between Toda-chain-like and KP-like hierarchies. We shall come back to a more detailed discussion of this difference later, when the "multi-scaling continuum limit" is considered. Another (historical) aspect of the same relation also deserves mentioning, since it also illustrates the interrelation between different models. The discrete 1-matrix model arises naturally in the description of quantum $2d$ gravity as a sum over 2-geometries in the formalism of random equilateral triangulations. The model, however, describes only a lattice approximation to $2d$ gravity, and the (double-scaling) continuum limit should be taken in order to obtain the real (continuous) theory of $2d$ gravity.
This limit was originally formulated in terms of the constraint algebra (equations of motion, or "loop", or "Schwinger-Dyson" equations - the terminology is taste-dependent), leaving open the problem of what is the form of the partition function ${\cal Z}^{cont}\{T\}$ of the continuous theory. Since the relevant algebra appeared to be just the WI for the Kontsevich model (with $V(X) = \frac{X^3}{3}$), this proves that the latter is exactly the continuous theory of pure $2d$ gravity. At the same time, the Kontsevich model itself can be naturally introduced as a theory of $topological$ gravity (in fact this is how the model was originally discovered in \cite{Ko}). From this point of view the constraint algebra, to be discussed below in this subsection, plays a central role in the proof of equivalence between pure $2d$ quantum gravity and pure topological gravity (in both cases "pure" means that "matter" fields are not introduced). After these introductory remarks we proceed to calculations. Actually they just repeat those for the Gaussian model, performed in the previous subsection, though the formulas get somewhat more complicated. This time we do not include the zero-time $N$ and just use eq.(\ref{GNe}) with $V(X) = \frac{X^3}{3}$. Now it is also much more tricky (though possible) to work in matrix notation (because fractional powers of $ L$ will be involved), and we rewrite everything in terms of the eigenvalues of $ L$.
We substitute \begin{eqnarray} \oldfalse \begin{array}{c} {\cal C}_{\frac{X^3}{3}} = \frac{\prod_\delta e^{\frac{2}{3}\lambda_\delta^{3/2}}} {\sqrt{\prod_{\gamma,\delta} (\sqrt{\lambda_\delta} + \sqrt{\lambda_\gamma})}}, \\ \left( \frac{\partial^2}{\partial L_{tr}^2}\right)_{\gamma\gamma} = \frac{\partial^2}{\partial\lambda_\gamma^2} + \sum_{\delta\neq\gamma}\frac{1}{\lambda_\gamma-\lambda_\delta} \left(\frac{\partial}{\partial\lambda_\gamma} - \frac{\partial}{\partial\lambda_\delta}\right) \end{array} \nonumber \end{eqnarray} and introduce a special notation for \begin{eqnarray} \frac{{\cal D}}{{\cal D}\lambda_\gamma} \equiv {\cal C}_{\frac{X^3}{3}}^{-1} \frac{\partial}{\partial \lambda_\gamma} {\cal C}_{\frac{X^3}{3}} = \frac{\partial}{\partial \lambda_\gamma} + \sqrt{\lambda_\gamma} - \frac{1}{4\lambda_\gamma} - \frac{1}{2} \sum_{\delta\neq\gamma}\frac{1}{\sqrt{\lambda_\gamma} (\sqrt{\lambda_\delta} + \sqrt{\lambda_\gamma})}. \nonumber \end{eqnarray} Then (\ref{GNe}) turns into \begin{eqnarray} \left( \left(\frac{{\cal D}}{{\cal D}\lambda_\gamma}\right)^2 + \sum_{\delta\neq\gamma}\frac{1}{\lambda_\gamma-\lambda_\delta} \left(\frac{{\cal D}}{{\cal D}\lambda_\gamma} - \frac{{\cal D}}{{\cal D}\lambda_\delta}\right) \right) {\cal Z}_{\frac{X^3}{3}}\{T\} = 0. \label{GNX3} \end{eqnarray} Now we need the explicit expression for $T$: \begin{eqnarray} T_k = \frac{1}{k}{\rm tr} L^{-k/2}, \label{KMX3T} \end{eqnarray} and as we already know from the previous subsection we also need \begin{eqnarray} r_k = - \frac{2}{3}\delta_{k,3}. \end{eqnarray} It will not be explained until we turn to the consideration of the integrable structure of the Kontsevich model in the following sections, but ${\cal Z}_{\frac{X^3}{3}}\{T\}$ is in fact independent of all time-variables with $even$ numbers (subscripts).
Therefore we can take only $k=2a+1$ in (\ref{KMX3T}), \begin{eqnarray} T_{2a+1} &= \frac{1}{2a+1} \sum_\delta \lambda_\delta^{-a-\frac{1}{2}}, \nonumber \\ r_{2a+1} &= - \frac{2}{3}\delta_{a,1} \end{eqnarray} and \begin{eqnarray} \frac{\partial}{\partial \lambda_\gamma} {\cal Z}_{\frac{X^3}{3}}\{T\} &= \sum_{a=0}^{\infty} \frac{\partial T_{2a+1}}{\partial \lambda_\gamma} \frac{\partial{\cal Z}}{\partial T_{2a+1}} = -\frac{1}{2} \sum_{a=0}^{\infty} \lambda_\gamma^{-a-\frac{3}{2}} \frac{\partial{\cal Z}}{\partial T_{2a+1}}; \nonumber \\ \frac{\partial^2}{\partial \lambda_\gamma^2} {\cal Z}_{\frac{X^3}{3}}\{T\} &= \frac{1}{4}\sum_{a,b=0}^{\infty} \lambda_\gamma^{-a-b-3} \frac{\partial^2{\cal Z}}{\partial T_{2a+1}\partial T_{2b+1}} + \frac{1}{2} \sum_{a=0}^{\infty} (a + \frac{3}{2}) \lambda_\gamma^{-a-\frac{5}{2}} \frac{\partial{\cal Z}}{\partial T_{2a+1}}. \nonumber \end{eqnarray} These expressions should now be substituted into (\ref{GNX3}), and we obtain: \begin{eqnarray} \oldfalse \begin{array}{c} \frac{1}{4}\sum_{a,b=0}^{\infty} \lambda_\gamma^{-a-b-3} \frac{\partial^2{\cal Z}}{\partial T_{2a+1}\partial T_{2b+1}} + \\ + \sum_{a=0}^{\infty} \left[ \frac{1}{2} (a + \frac{3}{2})\lambda_\gamma^{-a-\frac{5}{2}} - \frac{1}{2} \sum_{\delta\neq\gamma}\frac{1}{\lambda_\gamma-\lambda_\delta} \left(\lambda_\gamma^{-a-\frac{3}{2}} - \lambda_\delta^{-a-\frac{3}{2}} \right) - \right. \\ \left.
- \left( \sqrt{\lambda_\gamma} - \frac{1}{4\lambda_\gamma} - \frac{1}{2} \sum_{\delta\neq\gamma} \frac{1}{\sqrt{\lambda_\gamma}(\sqrt{\lambda_\delta} + \sqrt{\lambda_\gamma})} \right)\lambda_\gamma^{-a-\frac{3}{2}} \right]\frac{\partial{\cal Z}}{\partial T_{2a+1}} + \\ + \left[\ldots\right] {\cal Z} \ \ \ = \ \ \ \sum_{n=-1}^{\infty} \frac{1}{\lambda_\gamma^{n+2}}{\cal L}_n{\cal Z} \end{array} \label{vircder} \end{eqnarray} with \begin{eqnarray} \oldfalse \begin{array}{c} {\cal L}_{2n} = \sum_{a=0}^{\infty} \left(a+\frac{1}{2}\right)\left(T_{2a+1}+r_{2a+1}\right) \times \\ \times \frac{\partial}{\partial T_{2a+2n+1}} + \frac{1}{4} \sum_{\stackrel{a+b=n-1}{a,b\geq 0}} \frac{\partial^2}{\partial T_{2a+1}\partial T_{2b+1}} + \frac{1}{16}\delta_{n,0} + \frac{1}{4} T_1^2\delta_{n,-1} = \\ = \frac{1}{2}\sum_{{\rm odd}\ k=1}^{\infty} k(T_k+r_k)\frac{\partial}{\partial T_{k+2n}} + \frac{1}{4} \sum_{{\rm odd}\ k = 1}^{2n-1} \frac{\partial^2}{\partial T_k\partial T_{2n-k}} + \frac{1}{16}\delta_{n,0} + \frac{1}{4} T_1^2\delta_{n,-1}. \end{array} \label{vircidder} \end{eqnarray} The factor $\frac{1}{2}$ in front of the first term on the r.h.s. of (\ref{vircidder}) is important for the ${\cal L}_{2n}$ to satisfy the properly normalized Virasoro algebra:\footnote{ Therefore it could be reasonable to use a different notation: ${\cal L}_n$ instead of ${\cal L}_{2n}$. We prefer ${\cal L}_{2n}$, because it emphasises the property of the model of being a 2-reduction of the KP hierarchy (to KdV), see section 4 below. } \begin{eqnarray} \phantom. [{\cal L}_{2n}, {\cal L}_{2m}] = (n-m){\cal L}_{2n+2m}. \nonumber \end{eqnarray} The coefficient $\frac{1}{4}$ in front of the second term can be eliminated by a rescaling of the time-variables: $T \rightarrow \frac{1}{2}T$; then the last term turns into $\frac{1}{16}T_1^2\delta_{n,-1}$.
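The stated normalization of the algebra is straightforward to verify symbolically; a sympy sketch (the truncation of the set of odd time-variables and the sample function are ours), checking $[{\cal L}_{2n},{\cal L}_{2m}] = (n-m){\cal L}_{2n+2m}$ with the generators (\ref{vircidder}), including the shifts $r_k$ and the $\delta_{n,0}$, $\delta_{n,-1}$ terms:

```python
import sympy as sp

K = 21  # keep the odd times T_1, T_3, ..., T_K (large enough to avoid truncation effects)
T = {k: sp.Symbol('T%d' % k) for k in range(1, K + 1, 2)}
r = {k: (-sp.Rational(2, 3) if k == 3 else sp.Integer(0)) for k in T}

def L2(n, f):
    # generator {\cal L}_{2n} of (vircidder) acting on a function f of the odd times
    out = sp.Rational(1, 2) * sum(k * (T[k] + r[k]) * sp.diff(f, T[k + 2 * n])
                                  for k in T if 1 <= k + 2 * n <= K)
    out += sp.Rational(1, 4) * sum(sp.diff(f, T[k], T[2 * n - k])
                                   for k in range(1, 2 * n, 2))
    if n == 0:
        out += sp.Rational(1, 16) * f
    if n == -1:
        out += sp.Rational(1, 4) * T[1]**2 * f
    return sp.expand(out)

f = T[1]**2 * T[3] + T[5] * T[7]  # a sample polynomial of the times
pairs = [(1, 0), (2, 0), (2, 1), (0, -1), (1, -1)]
checks = [sp.expand(L2(n, L2(m, f)) - L2(m, L2(n, f)) - (n - m) * L2(n + m, f))
          for n, m in pairs]
```

All five commutators evaluate to zero identically; note that the terms $\frac{1}{16}\delta_{n,0}$ and $\frac{1}{4}T_1^2\delta_{n,-1}$ are exactly what is needed for the pairs involving ${\cal L}_{-2}$.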
We shall not actually discuss the evaluation of the coefficient in front of ${\cal Z}$ (with no derivatives), which is denoted by $[\ldots]$ in (\ref{vircder}) (see \cite{MMM} and \cite{GKM}). In fact almost all the terms in the original complicated expression cancel, giving finally \begin{eqnarray} \left[ \ldots \right] = \frac{1}{16\lambda_\gamma^2} + \frac{T_1^2}{4\lambda_\gamma}, \nonumber \end{eqnarray} and this is represented by the terms with $\delta_{n,0}$ and $\delta_{n,-1}$ in the expressions (\ref{vircidder}) for the Virasoro generators ${\cal L}_{2n}$. The term with the double $T$-derivative in (\ref{vircder}) is already of the necessary form. Of intermediate complexity is the evaluation of the coefficient in front of $\frac{\partial{\cal Z}}{\partial T_{2a+1}}$ in (\ref{vircder}), which we shall briefly describe now. First of all, rewrite this coefficient, reordering the items: \begin{eqnarray} \frac{1}{2}\left[ (a + \frac{3}{2})\lambda_\gamma^{-a-\frac{5}{2}} - \sum_{\delta\neq\gamma}\frac{1}{\lambda_\gamma-\lambda_\delta} \left(\lambda_\gamma^{-a-\frac{3}{2}} - \lambda_\delta^{-a-\frac{3}{2}} \right)\right] + \nonumber \\ + \left[ \frac{1}{4}\lambda_\gamma^{-a-\frac{5}{2}} + \frac{1}{2} \sum_{\delta\neq\gamma} \frac{\lambda_\gamma^{-a-2}}{\sqrt{\lambda_\delta} + \sqrt{\lambda_\gamma}} \right] - \lambda_\gamma^{-a-1}.
\label{vircder2} \end{eqnarray} The first two terms together are equal to the sum over $all$ $\delta$ (including $\delta=\gamma$): \begin{eqnarray} - \frac{1}{2} \sum_\delta \frac{1}{\lambda_\gamma-\lambda_\delta} \left(\lambda_\gamma^{-a-\frac{3}{2}} - \lambda_\delta^{-a-\frac{3}{2}} \right) = \frac{1}{2} \sum_\delta \frac{\lambda_\gamma^{a+\frac{3}{2}} - \lambda_\delta^{a+\frac{3}{2}}}{\lambda_\gamma-\lambda_\delta} \cdot\frac{1}{\lambda_\gamma^{a+\frac{3}{2}}\lambda_\delta^{a+\frac{3}{2}}} = \nonumber \\ = \frac{1}{2\lambda_\gamma^{a+2}} \sum_\delta \frac{\lambda_\gamma^{a+2} - \lambda_\gamma^{\frac{1}{2}} \lambda_\delta^{a+\frac{3}{2}}} {\lambda_\gamma-\lambda_\delta}\cdot\frac{1}{\lambda_\delta^{a+\frac{3}{2}}}. \nonumber \end{eqnarray} Similarly, the next two terms can be rewritten as \begin{eqnarray} \frac{1}{2} \sum_\delta \frac{\lambda_\gamma^{-a-2}}{\sqrt{\lambda_\gamma} + \sqrt{\lambda_\delta}} = \frac{1}{2\lambda_\gamma^{a+2}} \sum_\delta \frac{\sqrt{\lambda_\gamma} - \sqrt{\lambda_\delta}} {\lambda_\gamma-\lambda_\delta} = \nonumber \\ = \frac{1}{2\lambda_\gamma^{a+2}} \sum_\delta \frac{\lambda_\gamma^{\frac{1}{2}} \lambda_\delta^{a+\frac{3}{2}} - \lambda_\delta^{a+2}} {\lambda_\gamma-\lambda_\delta}\cdot\frac{1}{\lambda_\delta^{a+\frac{3}{2}}}. \nonumber \end{eqnarray} The sum of these two expressions is equal to \begin{eqnarray} \frac{1}{2\lambda_\gamma^{a+2}} \sum_\delta \frac{\lambda_\gamma^{a+2} - \lambda_\delta^{a+2}} {\lambda_\gamma-\lambda_\delta} \cdot\frac{1}{\lambda_\delta^{a+\frac{3}{2}}}. \nonumber \end{eqnarray} Note that the powers $a+2$ are already integer, and the remaining ratio can be represented as a sum of $a+2$ terms. Adding also the last term from the l.h.s.
of (\ref{vircder2}), we finally obtain: \begin{eqnarray} -\frac{1}{\lambda_\gamma^{a+1}} + \frac{1}{2} \sum_{n=-1}^a \frac{1}{\lambda_\gamma^{n+2}}\sum_\delta \frac{1}{\lambda_\delta^{a-n+\frac{1}{2}}} = \nonumber \\ = \frac{1}{2}\sum_{n=-1}^a \frac{1}{\lambda_\gamma^{n+2}} (2a-2n+1)(T+r)_{2a-2n+1} \nonumber \end{eqnarray} in accordance with (\ref{vircder}) and (\ref{vircidder}). \subsection{$\tilde W$-constraints for the asymmetric 2-matrix model} We turn now to a very different application \cite{tildeW} of the GN equation (\ref{GNe}). Namely, we shall now consider ${\cal F}_{V,n}\{ L\}$ as a building block in the construction of the conventional discrete two-matrix model \begin{eqnarray} Z_N\{t,\bar t\} \equiv c_N^2 \int dHd\bar H e^{\sum_k (t_k {\rm Tr}H^k + \bar t_k {\rm Tr} \bar H^k) + {\rm Tr}H\bar H} = \nonumber \\ = \int d L e^{\sum_k t_k {\rm Tr} L^k} {\cal F}_{\bar U,N}\{ L\}. \label{2mamo} \end{eqnarray} Now $ L$ plays the role of $H$, and $\bar U(\bar H) = \sum_k\bar t_k \bar H^k$. We can now use the GN equation to derive a relation for $Z_N\{t,\bar t\}$. Take (\ref{GNe}), \begin{eqnarray} \left(\bar U' \left(\frac{\partial}{\partial L_{tr}}\right) + L\right) {\cal F}_{\bar U,N}\{ L\} = 0, \label{GNe'} \end{eqnarray} multiply it by $e^{{\rm Tr}U( L)} = e^{\sum_k t_k {\rm Tr} L^k}$ and integrate over $ L$. In order to express this relation in terms of $t$-derivatives of $Z_N$ it is necessary to have some "scalar" rather than $matrix$ equations; therefore we'll actually need to take the trace of (\ref{GNe'}). However, in order not to lose any information, we first multiply (\ref{GNe'}) by $ L^n$ and $then$ take the trace. In this way we obtain: \begin{eqnarray} \int d L e^{\sum_k t_k {\rm Tr} L^k} {\rm Tr} L^n \left(\bar U' \left(\frac{\partial}{\partial L_{tr}}\right) + L\right) {\cal F}_{\bar U}\{ L\} = 0.
\nonumber \end{eqnarray} Integration by parts gives: \begin{eqnarray} \int d L {\cal F}_{\bar U}\{ L\} {\rm Tr}\left(\bar U' \left(-\frac{\partial}{\partial L_{tr}}\right) + L\right) L^n e^{\sum_k t_k {\rm Tr} L^k}. \label{gana1} \end{eqnarray} Now we need to introduce a new class of operators \cite{tildeW}. Consider the action of ${\rm Tr} \frac{\partial^m}{\partial L_{tr}^m} L^n$ on $e^{{\rm Tr}U( L)} = e^{\sum_k t_k {\rm Tr} L^k}$. It gives some linear combination of terms like \begin{eqnarray} {\rm tr} L^{a_1} ...{\rm tr} L^{a_l} e^{{\rm tr} U( L)} = \frac{\partial^l}{\partial t_{a_1}...\partial t_{a_l}} e^{{\rm tr} U( L)} \nonumber \end{eqnarray} i.e. we obtain a combination of differential operators with $t$-derivatives, to be denoted $\tilde W(t)$: \begin{eqnarray} \tilde W_{n-m}^{(m+1)}(t) e^{{\rm tr} U( L)} \equiv {\rm Tr} \frac{\partial^m}{\partial L_{tr}^m} L^n e^{{\rm tr} U( L)}, \ \ \ m,n \geq 0. \label{twop} \end{eqnarray} For example, \begin{eqnarray} \oldfalse \begin{array}{c} \tilde W_n^{(1)} = \frac{\partial}{\partial t_n}, \ \ n\geq 0; \\ \tilde W_n^{(2)} = \sum_{k=0}^{\infty} kt_k\frac{\partial}{\partial t_{k+n}} + \sum_{k=0}^n \frac{\partial^2}{\partial t_k\partial t_{n-k}}, \ \ n\geq -1; \\ \tilde W_n^{(3)} = \sum_{k,l = 1}^{\infty} kt_klt_l \frac{\partial}{\partial t_{k+l+n}} + \sum_{k=1}^{\infty} kt_k \sum_{a+b=k+n}\frac{\partial^2}{\partial t_a\partial t_b} + \\ + \sum_{k=1}^{\infty} kt_k \sum_{a+b=n+1}\frac{\partial^2}{\partial t_a\partial t_{b+k-1}} + \sum_{a+b+c=n} \frac{\partial^3}{\partial t_a\partial t_b\partial t_c} + \frac{(n+1)(n+2)}{2}\frac{\partial}{\partial t_n}; \\ \ldots \end{array} \label {twopex} \end{eqnarray} Note that while $\tilde W_n^{(1)}$ and $\tilde W_n^{(2)}$ are just the ordinary $U(1)$ Kac-Moody and Virasoro operators respectively, the higher $\tilde W^{(m)}$-operators do $not$ coincide with the generators of the {\bf W}-algebras: already \begin{eqnarray} \tilde W_n^{(3)} \neq W^{(3)} = \sum_{k,l = 1}^{\infty} kt_klt_l
\frac{\partial}{\partial t_{k+l+n}} + 2\sum_{k=1}^{\infty} kt_k \sum_{a+b=k+n}\frac{\partial^2}{\partial t_a\partial t_b} \nonumber \\ + \frac{4}{3} \sum_{a+b+c=n} \frac{\partial^3}{\partial t_a\partial t_b\partial t_c}. \nonumber \end{eqnarray} The $\tilde W$-operators (at variance with the ordinary $W$-operators) satisfy the recurrent relation: \begin{eqnarray} \tilde W_n^{(m+1)} = \sum_{k=1}^{\infty} kt_k\tilde W_{n+k}^{(m)} + \sum_{k=0}^{m+n-1} \frac{\partial}{\partial t_k} \cdot \tilde W_{n-k}^{(m)}, \ \ \ n\geq -m. \end{eqnarray} Actually not too much is known yet about the ${\tilde W}$ operators and the structure of the $\tilde{\bf W}$-algebras (in particular, it remains unclear whether the negative harmonics $\tilde W_n^{(m+1)}$ with $n < -m$ can be introduced in any reasonable way); see \cite{tildeW} for some preliminary results. Equation (\ref{gana1}) can now be represented in terms of the $\tilde W$-operators: \begin{eqnarray} \int d L {\cal F}_{\bar U}\{ L\} \left(\sum_{k\geq 1} k\bar t_k \left(-\frac{\partial}{\partial L_{tr}}\right)^{k-1} + L\right) L^n e^{{\rm Tr}U( L)} = \nonumber \\ = \left(\sum_{k\geq 1} (-)^{k-1}k\bar t_k \tilde W_{n+1-k}^{(k)} + \tilde W_{n+1}^{(1)} \right) Z_N\{t,\bar t\} = 0. \label{gana2} \end{eqnarray} This relation is highly asymmetric in $t$ and $\bar t$, and in fact it provides a suitable description of the WI only in the somewhat peculiar case when the potential $\bar U(\bar H)$ is a polynomial of $finite$ degree. See refs.\cite{GaNa} and \cite{tildeW} for a discussion of such asymmetric models. \subsection{$\tilde W$-constraints for generic 2-matrix model} When both potentials $U$ and $\bar U$ in (\ref{2mamo}) are generic formal series, eqs.(\ref{gana2}) represent only a one-parametric subset of the 2-parametric family of WI. Before we describe the whole set, let us emphasize that the two-matrix model (\ref{2mamo}) is the one where the action is $not$ of the most general form consistent with some symmetry.
Therefore it is not covariant under arbitrary changes of variables $H,\bar H \longrightarrow f(H,\bar H), \bar f(H,\bar H)$, and our usual method of derivation of Ward identities does not work. The reason why the generic 2-matrix model, with action containing all the possible combinations ${\rm Tr} (H^{a_1}\bar H^{b_1} H^{a_2}\bar H^{b_2} ...)$, is never considered seriously is essentially our poor understanding of the unitary-matrix integrals for "non-eigenvalue" theories, to which class such generic models belong. For reasons to be explained in the next section such problems do not arise for the models of the form (\ref{2mamo}) or (\ref{comamo"}), and this is why they have attracted most attention so far. Hopefully the problems with the unitary-matrix integrals are temporary and this restricted class of multimatrix models will be enlarged; this should be especially easy to do in the part of the theory dealing with constraint algebras, but this subject is beyond the scope of the present notes. In order to derive the complete set of WI for the model (\ref{2mamo}), we apply the following semi-artificial trick. Note that the exponential $e^{{\rm Tr}H\bar H}$ satisfies: \begin{eqnarray} \left( {\rm Tr} H^n\frac{\partial^m}{\partial H^m_{tr}} - {\rm Tr} \bar H^m \frac{\partial^n}{\partial \bar H^n_{tr}}\right) e^{{\rm Tr} H\bar H} = 0. \label{wt2m1} \end{eqnarray} Let us integrate this identity over $H$ and $\bar H$ with the weight $e^{{\rm Tr}U(H) + {\rm Tr}\bar U(\bar H)}$ and then integrate by parts.
We obtain an identity: \begin{eqnarray} \oldfalse \begin{array}{c} \int dHd\bar H e^{{\rm Tr} H\bar H} \cdot \\ \cdot \left( {\rm Tr} \left(-\frac{\partial}{\partial H_{tr}}\right)^mH^n - {\rm Tr} \left(-\frac{\partial}{\partial \bar H_{tr}}\right)^n\bar H^m\right) e^{{\rm Tr}U(H) + {\rm Tr}\bar U(\bar H)} = 0, \end{array} \label{wt2m2} \end{eqnarray} which can be represented in terms of the $\tilde W$ operators \cite{GKM}: \footnote{Relations (\ref{wt2m1}) and thus (\ref{twid}) are in the obvious sense associated with $\ {\rm Tr} H^n\bar H^m$. Of course there are similar relations, in the same sense associated with any object like ${\rm Tr} (H^{a_1}\bar H^{b_1}H^{a_2}\bar H^{b_2} ...)$ and with products of such traces: it is enough to substitute all $\bar H \rightarrow \frac{\partial}{\partial H_{tr}}$ to obtain the l.h.s. of the equation, and to substitute all $H \rightarrow \frac{\partial}{\partial \bar H_{tr}}$ to obtain its r.h.s. (one should only remember that such a substitution is possible, say in the l.h.s., if all the $\bar H$ are put to the right of all the $H$; in order to restore the matrix-product form of the relation, one should carefully take into account all the commutators arising when $\frac{\partial}{\partial H_{tr}}$ is carried back to the original position of the corresponding $\bar H$). All such relations appear to be just implications of eq.(\ref{twid}). } \begin{eqnarray} \tilde W^{(m+1)}_{n-m}(t) Z\{t,\bar t\} = (-)^{m-n}\tilde W^{(n+1)}_{m-n}(\bar t) Z\{t,\bar t\}, \ \ {\rm for\ all}\ m,n\geq 0. \label{twid} \end{eqnarray} This is the full(?) set of WI for the 2-matrix model. When one of the potentials (say, $U(t)$) is a polynomial of finite degree, most of this symmetry is "spontaneously broken", the surviving part being described by eqs.(\ref{gana2}).
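For $1\times 1$ matrices the definition (\ref{twop}) of the $\tilde W$-operators can be verified directly; a sympy sketch for $m=1$ (the truncation of the set of times is ours): we take $U(h)=\sum_{j=0}^{M}t_jh^j$, compare $\partial_h\big(h^ne^{U}\big)$ with $\tilde W^{(2)}_{n-1}(t)\,e^{U}$, and set the auxiliary top times to zero at the end, so that no truncation terms survive.

```python
import sympy as sp

h = sp.Symbol('h')
M, Kt = 8, 4  # auxiliary times t_0..t_M; only t_0..t_Kt are kept in the end
t = [sp.Symbol('t%d' % j) for j in range(M + 1)]
eU = sp.exp(sum(t[j] * h**j for j in range(M + 1)))

def W2(n, f):
    # \tilde W^{(2)}_n = sum_k k t_k d/dt_{k+n} + sum_{k=0}^{n} d^2/dt_k dt_{n-k}
    out = sum(k * t[k] * sp.diff(f, t[k + n])
              for k in range(1, M + 1) if 0 <= k + n <= M)
    out += sum(sp.diff(f, t[k], t[n - k]) for k in range(0, n + 1))
    return out

top = {t[j]: 0 for j in range(Kt + 1, M + 1)}  # drop the auxiliary top times
checks = [sp.expand((sp.diff(h**n * eU, h) - W2(n - 1, eU)).subs(top))
          for n in range(0, 4)]
```

The second sum in $\tilde W^{(2)}_{n-1}$ produces exactly the $n\,h^{n-1}e^{U}$ term coming from differentiating $h^n$, which is the scalar shadow of the matrix reordering terms.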
Among other things eq.(\ref{twid}) reveals an amusing automorphism of the $\tilde{\bf W}_{\infty}$ algebra: \begin{eqnarray} \tilde W^{(m+1)}_{n-m} \longleftrightarrow \tilde W^{(n+1)}_{m-n}, \ \ \ m,n\geq 0, \end{eqnarray} for example, the Virasoro Borel subalgebra is formed not only by the operators $\tilde W^{(2)}_n$, but also by $\tilde W^{(n+2)}_{-n}, \ n\geq -1$ (while the $U(1)$ Borel subalgebra is formed not only by $\tilde W^{(1)}_n = \frac{\partial}{\partial t_n}$, but also by $\tilde W^{(n+1)}_{-n}, \ n\geq 0$). One can attempt to apply the same procedure and derive $\tilde W$-identities for the conventional $(p-1)$-matrix models with $p-1 > 2$. In principle this is possible, but unfortunately the arising equations neither have a nice form, nor are there enough of them. However, for illustrative purposes we shall sketch some relevant formulas in the rest of this subsection. Consider the multimatrix integral \begin{eqnarray} \oldfalse \begin{array}{c} Z = \int dH_1...dH_{p-1} \cdot \\ \cdot e^{{\rm Tr}U_1(H_1) + \ldots + {\rm Tr}U_{p-1}(H_{p-1})}\ldots e^{{\rm Tr}(H_1H_2 + H_2H_3 + \ldots +H_{p-2}H_{p-1})} \end{array} \label{mumamosc} \end{eqnarray} Acting on $Z$, the operator $\ \displaystyle{\tilde W^{(m+1)}_{n-m}(t^{(1)})}\ $ produces an insertion of $\ \displaystyle{{\rm Tr} H_1^n \stackrel{\longleftarrow} {\left(\frac{\partial}{\partial H_{1,tr}}\right)^m}}$ at the position of $\ldots$ in (\ref{mumamosc}).
Integration by parts gives: \begin{eqnarray} {\rm Tr} H_1^n \stackrel{\longrightarrow} {\left(-\frac{\partial}{\partial H_{1,tr}}\right)^m} \longrightarrow (-)^m{\rm Tr} H_1^nH_2^m = (-)^m{\rm Tr} H_2^mH_1^n \nonumber \end{eqnarray} In the case of $p-1=2$, which we discussed above, this can be rewritten as $\displaystyle{(-)^m{\rm Tr} H_2^m \stackrel{\longrightarrow} {\left(\frac{\partial}{\partial H_{2,tr}}\right)^n}}$ and integration by parts gives $\displaystyle{(-)^m{\rm Tr} H_2^m \stackrel{\longleftarrow} {\left(-\frac{\partial}{\partial H_{2,tr}}\right)^n}}$, which is equivalent to the action of $(-)^{m+n}\tilde W^{(n+1)}_{m-n}(t^{(2)})$ on $Z$: we reproduce equation (\ref{twid}). However, for $p-1>2$ things are more complicated. The insertion of ${\rm Tr} H_2^mH_1^n$ is equivalent to that of $\displaystyle{{\rm Tr} H_2^m\left(\stackrel{\longrightarrow} {\frac{\partial}{\partial H_{2,tr}}} - H_3\right)^n}$, which after integration by parts acts on $e^{U_2(H_2)}$ and gives: \begin{eqnarray} \oldfalse \begin{array}{c} {\rm Tr} H_2^m\left(-\stackrel{\longleftarrow} {\frac{\partial}{\partial H_{2,tr}}} - H_3\right)^n \sim \\ \sim {\rm Tr} H_2^m\left(\sum_k kt_k^{(2)}H_2^{k-1} - H_3\right) \left(-\stackrel{\longleftarrow} {\frac{\partial}{\partial H_{2,tr}}} - H_3\right)^{n-1} \sim \ldots \end{array} \end{eqnarray} The derivatives remaining on the r.h.s. should be carried through the first bracket and then act on $e^{U_2(H_2)}$, etc. In the end we get some linear combination of terms like ${\rm Tr} H_2^{b_1}H_3^{c_1}H_2^{b_2}H_3^{c_2}\ldots $ with $t^{(2)}$-dependent coefficients. Now, if we are dealing with the $p-1=3$-matrix model, every $H_2$ standing to the right of all the $H_3$'s can be substituted by $\frac{\partial}{\partial H_{3,tr}}$; otherwise one should also include the terms with commutators arising when this $\frac{\partial}{\partial H_{3,tr}}$ is carried back to the place where $H_2$ was standing.
In this way we obtain a combination of insertions of the form \begin{eqnarray} \oldfalse \begin{array}{c} {\rm Tr} \stackrel{\longrightarrow} {\left(\frac{\partial}{\partial H_{3,tr}}\right)^{b_1}}H_3^{c_1} \stackrel{\longrightarrow} {\left(\frac{\partial}{\partial H_{3,tr}}\right)^{b_2}}H_3^{c_2}\ldots \sim \\ \sim {\rm Tr} \stackrel{\longleftarrow} {\left(-\frac{\partial}{\partial H_{3,tr}}\right)^{b_1}}H_3^{c_1} \stackrel{\longleftarrow} {\left(-\frac{\partial}{\partial H_{3,tr}}\right)^{b_2}}H_3^{c_2}\ldots \end{array} \end{eqnarray} The resulting operator can be expressed through $\tilde W(t^{(3)})$, and we obtain an identity saying that some algebraic combination of $\tilde W(t^{(1)})$ and $\tilde W(t^{(3)})$ with $t^{(2)}$-dependent coefficients annihilates the partition function. For $p-1>3$ the insertion of $H_2$ is equivalent to that of $\frac{\partial}{\partial H_{3,tr}} - H_4$ rather than $\frac{\partial}{\partial H_{3,tr}}$, and the procedure should be repeated again and again. Finally one arrives at constraints where the operators are algebraic combinations of $\tilde W(t^{(1)})$ and $\tilde W(t^{(p-1)})$ with coefficients which depend on $t^{(2)},\ldots,t^{(p-2)}$ (moreover, these are $infinite$ series in the $\tilde W$ operators, unless all the intermediate potentials $U_2,\ldots,U_{p-2}$ are polynomials of $finite$ degree). This is of course not a very illuminating procedure, and in fact it was never worked through to get concrete identities in any nice form. Instead it can serve to illustrate the problems peculiar to the class of conventional multimatrix models (at least for $p-1>2$). It can also emphasize the beauty of $conformal$ multimatrix models, which have clear advantages already at the level of Ward identities. \subsection{$\tilde W$-operators in Kontsevich model} After the $\tilde W$-operators are introduced, we can also rewrite the GN equation (\ref{riv1}) for Kontsevich models in terms of $\tilde W$'s.
Namely, we shall prove the following identity \cite{tildeW}: \begin{eqnarray} \left(\frac{\partial}{\partial \Lambda_{tr}}\right)^{m+1} {\cal Z}\{ T_k\} = (\pm)^{m+1} \sum_{l\geq 0} \Lambda^{-l-1} \tilde W_{l-m}^{(m+1)}( T) {\cal Z}\{ T_k\}, \label{tiwko} \end{eqnarray} valid for $any$ function ${\cal Z}$ which depends on $ T_k = \mp\frac{1}{k}{\rm tr}\Lambda^{-k},\ \ k\geq 1\ $ and $ T_0 = \pm{\rm tr}\log \Lambda$ with an $n\times n$ matrix $\Lambda$. Application of the identity (\ref{tiwko}) is most straightforward in the Gaussian model (\ref{gako}), e.g. for the transformation of eq.(\ref{gako3}) into eq.(\ref{gako4}) (we recall that $ L = \Lambda$ in this case). In other cases, calculations using the identity (\ref{tiwko}), which must account for the quasiclassical factor ${\cal C}_V\{ L\}$ and the difference between $ L = V'(\Lambda)$ and $\Lambda$, become somewhat more involved, though still seem straightforward enough. Also, for particular potentials $V(X)$ the partition function ${\cal Z}_V\{T\}$ is actually independent of certain (combinations of) time-variables (for example, if $V(X) = \frac{X^{p+1}}{p+1}$ it is independent of all $T_{pk},\ k\in Z_+$), and this is important for the appearance of the constraints in the standard form like eqs.(\ref{vircder}), (\ref{vircidder}), i.e. for a certain $reduction$ of $\tilde W$-constraints to the ordinary $W$-constraints. This relation between $\tilde W$- and $W$-operators deserves further investigation. The proof of eq.(\ref{tiwko}) is provided by the following trick. Let us make a sort of Fourier transformation: \begin{eqnarray} {\cal Z}\{ T\} = \int dH\ {\cal G}\{H\} e^{\sum_{k=0}^{\infty} T_k{\rm Tr}H^k}, \label{mafoutr} \end{eqnarray} where the integral is over an $N\times N$ Hermitean matrix $H$.\footnote{ Here for the first time we encounter an important idea: matrix models - the ordinary 1-matrix model (\ref{1mamo'}) in this case - can be considered as defining integral transformations.
This view on matrix models can to a large extent define their role in the future development of string theory. } Then it is clear that once the identity (\ref{tiwko}) is established for ${\cal Z}\{ T\}$ substituted by $ e^{{\rm Tr} U(H)},\ \ U(H) = \sum_{k=0}^{\infty} T_kH^k$, with any matrix $H$, it is valid for $any$ function ${\cal Z}\{ T\}$. The advantage of such a substitution is that we can now make use of the definition (\ref{twop}) of the $\tilde W$ operators in order to rewrite (\ref{tiwko}) in a very explicit form: \begin{eqnarray} \left(\frac{\partial}{\partial \Lambda_{tr}}\right)^{m+1} e^{{\rm Tr} U(H)} = (\pm)^{m+1}\sum_{l \geq 0}^{\infty} \Lambda^{-l-1} \tilde W_{l-m}^{(m+1)}( T) e^{{\rm Tr} U(H)} = \nonumber \\ = (\pm)^{m+1} \sum_{l \geq 0}^{\infty} \Lambda^{-l-1} {\rm Tr} \left(\frac{\partial}{\partial H_{tr}}\right)^m H^l e^{{\rm Tr} U(H)} = \nonumber \\ = (\pm)^{m+1} {\rm Tr}\left(\frac{\partial}{\partial H_{tr}}\right)^m \frac{1}{\Lambda\otimes I - I\otimes H} e^{{\rm Tr} U(H)}. \label{tiwkoder1} \end{eqnarray} Now the expression for the $ T$'s in terms of $\Lambda$ should be used. Then \begin{eqnarray} e^{{\rm Tr} U(H)} = Det^{\pm 1} (\Lambda\otimes I - I\otimes H) \nonumber \end{eqnarray} and substituting this into (\ref{tiwkoder1}) we see that (\ref{tiwko}) is equivalent to \begin{eqnarray} \left( \left(\frac{\partial}{\partial \Lambda_{tr}}\right)^{m+1} - (\pm)^{m+1} I\cdot{\rm Tr} \left(\frac{\partial}{\partial H_{tr}}\right)^m \cdot \frac{1}{\Lambda\otimes I - I\otimes H} \right)\cdot \nonumber \\ \cdot Det^{\pm 1} (\Lambda\otimes I - I\otimes H) = 0 \label{tiwkoder2} \nonumber \end{eqnarray} Here "${\rm Tr}$" stands for the trace in the $H$-space only, while $Det = {\rm Det}\otimes {\rm det}$ stands for the determinant in both $H$ and $\Lambda$ spaces.
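The scalar ($n=N=1$) content of (\ref{tiwkoder2}) can be checked directly by computer algebra. The following is a minimal sketch (not part of the original argument; Python with sympy assumed), where $F = (\lambda-h)^{\pm 1}$ plays the role of $Det^{\pm 1}(\Lambda\otimes I - I\otimes H)$ and $Q = 1/(\lambda-h)$:

```python
import sympy as sp

lam, h = sp.symbols('lam h')
Q = 1 / (lam - h)  # scalar version of 1/(Lambda x I - I x H)

# Scalar version of (tiwkoder2):
#   (d/dlam)^(m+1) F - (sign)^(m+1) (d/dh)^m [Q*F] = 0,  F = (lam - h)^(sign)
residuals = []
for sign in (+1, -1):
    F = (lam - h) ** sign
    for m in range(4):
        lhs = sp.diff(F, lam, m + 1) - sign ** (m + 1) * sp.diff(Q * F, h, m)
        residuals.append(sp.simplify(lhs))

all_zero = all(r == 0 for r in residuals)
```

All eight residuals vanish identically, for both choices of signs, in agreement with the triviality of the scalar case discussed below.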
After one $\Lambda$-derivative is taken explicitly, we get: \begin{eqnarray} \left(I\otimes{\rm Tr}\right) \left( \left(\frac{\partial}{\partial \Lambda_{tr}}\right)^m\otimes I - I\otimes \left(\pm\frac{\partial}{\partial H_{tr}}\right)^m \right) \cdot \nonumber \\ \cdot \frac{Det^{\pm 1} (\Lambda\otimes I - I\otimes H)} {\Lambda\otimes I - I\otimes H} = 0. \label{tiwkoder3} \end{eqnarray} This is already a matrix identity, valid for any $\Lambda$ and $H$ of sizes $n\times n$ and $N\times N$ respectively. For example, if $m=0$ ($\tilde W^{(1)}$-case), it is obviously satisfied. If both $n=N=1$, it is also trivially true, though for different reasons for the two choices of signs: for the upper signs, the ratio on the l.h.s. is just unity and all derivatives vanish; for the lower signs we have: \begin{eqnarray} \left(\frac{\partial}{\partial \lambda }\right)^m - \left(-\frac{\partial}{\partial h}\right)^m = \left(\sum_{\stackrel{a+b=m-1}{a,b\geq 0}} \left(\frac{\partial}{\partial \lambda }\right)^a \left(-\frac{\partial}{\partial h}\right)^b \right) \left(\frac{\partial}{\partial \lambda } + \frac{\partial}{\partial h}\right), \nonumber \end{eqnarray} and this obviously vanishes since $(\frac{\partial}{\partial \lambda } + \frac{\partial}{\partial h})f(\lambda -h) \equiv 0$ for any $f(x)$. If $m>0$ and $\Lambda,\ H$ are indeed {\it matrices}, direct evaluation becomes much more sophisticated. We present the first two nontrivial examples: $m=1$ and $m=2$. The following relations will be useful. Let $Q \equiv \frac{1}{\Lambda\otimes I - I\otimes H}$.
Then \begin{eqnarray} Det^{\pm 1}Q \frac{\partial}{\partial \Lambda_{tr}} Det^{\mp 1} Q = \pm\left[(I\otimes {\rm Tr})Q\right]; \nonumber \\ Det^{\pm 1}Q \frac{\partial}{\partial H_{tr}} Det^{\mp 1} Q = \mp\left[({\rm tr}\otimes I)Q\right]; \nonumber \\ \left(\frac{\partial}{\partial \Lambda_{tr}}\otimes I\right) Q = -\left[({\rm tr}\otimes I)Q\right]Q; \nonumber \\ \left(I\otimes\frac{\partial}{\partial H_{tr}}\right) Q = \left[(I\otimes {\rm Tr})Q\right]Q. \label{tiwkoder5} \end{eqnarray} This is already enough for the proof in the case of $m=1$. Indeed: \begin{eqnarray} Det^{\pm 1}Q \left(\frac{\partial}{\partial \Lambda_{tr}}\otimes I \mp I\otimes \frac{\partial}{\partial H_{tr}}\right)Q Det^{\mp 1} Q = \nonumber \\ = \{-\left[({\rm tr}\otimes I)Q\right]Q \pm \left[(I\otimes {\rm Tr})Q\right]Q \}\mp \nonumber \\ \mp\{ \left[(I\otimes {\rm Tr})Q\right]Q \mp \left[({\rm tr}\otimes I)Q\right]Q \} = 0. \nonumber \end{eqnarray} The first two terms on the r.h.s. arise from the $\Lambda$-derivatives, while the last two from the $H$-derivatives. In the case of $m=2$ one should take derivatives once again. This is a little more tricky, and the same compact notation is no longer sufficient. In addition to (\ref{tiwkoder5}) we now need: \begin{eqnarray} \left(\frac{\partial}{\partial \Lambda_{tr}}\otimes I\right) \left[({\rm tr}\otimes I)Q\right]Q = - \left[({\rm tr}\otimes I)Q\right]^2 Q - {\cal B}. \label{tiwkoder7} \end{eqnarray} Here \begin{eqnarray} \left[({\rm tr}\otimes I)Q\right]^2 = \left[({\rm tr}\otimes I)\left[({\rm tr}\otimes I)Q\right]Q\right], \label{tiwkoder8} \end{eqnarray} while in order to write ${\cal B}$ explicitly we need to restore matrix indices (Greek for the $\Lambda$-sector and Latin for the $H$ one).
The $(\alpha i,\gamma k)$-component of (\ref{tiwkoder7}) looks like: \begin{eqnarray} \left(\frac{\partial}{\partial \Lambda_{\beta\alpha}}\delta^{im}\right) Q_{\delta\delta}^{mj}Q_{\beta\gamma}^{jk} = - Q_{\delta\delta}^{ij}Q_{\beta\beta}^{jl}Q_{\alpha\gamma}^{lk} - Q_{\delta\beta}^{il}Q_{\alpha\delta}^{lj}Q_{\beta\gamma}^{jk} \end{eqnarray} and the appearance of the second term on the r.h.s. implies that ${\cal B}_{\alpha\gamma}^{ik} = Q_{\delta\beta}^{il}Q_{\alpha\delta}^{lj}Q_{\beta\gamma}^{jk}$. Further, \begin{eqnarray} &\left(\frac{\partial}{\partial \Lambda_{tr}}\otimes I\right) \left[(I\otimes {\rm Tr})Q\right]Q = \nonumber \\ &-\left[(I\otimes {\rm Tr})\left[({\rm tr}\otimes I)Q\right]Q\right]Q -\left[(I\otimes {\rm Tr})\left[(I\otimes {\rm Tr})Q\right]Q\right]Q; \nonumber \\ &\left( I\otimes \frac{\partial}{\partial H_{tr}}\right) \left[({\rm tr}\otimes I)Q\right]Q = \nonumber \\ &+ \left[({\rm tr}\otimes I)\left[(I\otimes {\rm Tr})Q\right]Q\right]Q + \left[(I\otimes {\rm Tr})\left[({\rm tr}\otimes I)Q\right]Q\right]Q; \nonumber \\ &\left( I\otimes \frac{\partial}{\partial H_{tr}}\right) \left[(I\otimes {\rm Tr})Q\right]Q = + \left[(I\otimes {\rm Tr})\left[(I\otimes {\rm Tr})Q\right]Q\right]Q +{\cal B}. \end{eqnarray} It is important that the ${\cal B}$ that appears in the last relation in the form ${\cal B}_{\alpha\gamma}^{ik} = Q_{\alpha\delta}^{lj}Q_{\delta\beta}^{il}Q_{\beta\gamma}^{jk}$ is exactly the same ${\cal B}$ as in eq.(\ref{tiwkoder7}). Now we can prove (\ref{tiwkoder3}) for $m=2$: \begin{eqnarray} &Det^{\pm 1}Q \left(\left(\frac{\partial}{\partial \Lambda_{tr}}\right)^2\otimes I - I\otimes \left(\frac{\partial}{\partial H_{tr}}\right)^2\right) Q Det^{\mp 1} Q = \nonumber \\ &= \left\{ \pm\left[(I\otimes{\rm Tr})Q\right]\left( - \left[({\rm tr}\otimes I)Q\right]Q \pm \left[(I\otimes{\rm Tr})Q\right]Q \right) - \right. \nonumber \\ &\left.
- \left( - \left[({\rm tr}\otimes I)\left[({\rm tr}\otimes I)Q\right]Q\right]Q - {\cal B} \right) \pm \right. \nonumber \\ &\left. \pm \left( - \left[(I\otimes{\rm Tr})\left[({\rm tr}\otimes I)Q\right]Q\right]Q - \left[({\rm tr}\otimes I)\left[(I\otimes{\rm Tr})Q\right]Q\right]Q \right)\right\} - \nonumber \\ &- \left\{ \mp\left[({\rm tr}\otimes I)Q\right]\left( \left[(I\otimes{\rm Tr})Q\right]Q \mp \left[({\rm tr}\otimes I)Q\right]Q \right) + \right. \nonumber \\ &\left. + \left( \left[(I\otimes{\rm Tr})\left[(I\otimes{\rm Tr})Q\right]Q\right]Q + {\cal B} \right) \mp \right. \nonumber \\ &\left. \mp \left( \left[({\rm tr}\otimes I)\left[(I\otimes{\rm Tr})Q\right]Q\right]Q + \left[(I\otimes{\rm Tr})\left[({\rm tr}\otimes I)Q\right]Q\right]Q \right)\right\} \end{eqnarray} where the terms 1,2,3,4,5,6 in the first braces cancel the terms 1,3,2,4,6,5 in the second braces; here the identity (\ref{tiwkoder8}) and its counterpart with $({\rm tr}\otimes I) \rightarrow (I\otimes {\rm Tr})$ are used. An explicit proof of eq.(\ref{tiwkoder3}) for generic $m$ is unknown. \ \bigskip \section{Eigenvalue models} \setcounter{equation}{0} \subsection{What are eigenvalue models} \bigskip Given the present state of knowledge, in most cases we need to consider only the narrow class of "eigenvalue" models. These models have the property of being associated with conventional integrable hierarchies (of (multicomponent) KP and Toda type), where the integrable flows just commute (instead of forming less trivial closed algebras), and thus with the level-1 Kac-Moody algebras (by artificial tricks familiar from the bosonization formalism in conformal field theory \cite{Turb} these can sometimes be generalized to particular other levels like $k=2$). This means that the models are essentially associated with abelian Cartan subalgebras rather than with full matrix algebras.
\footnote{The groups arising in the theory of matrix models and integrable hierarchies are not just those of the matrices appearing in the integral representations: the latter are at best related to the zero-modes of the former. Moreover, even this relation is usually not simple to reveal. This remark is important to avoid confusion in the next paragraphs.} In the CFT formulation (see below) this means that the eigenvalue models can be represented in terms of the free fields which bosonize the Cartan subalgebra of the whole group in the WZNW model (the remaining $(\beta,\gamma)$-fields \cite{GMMOS} being (almost) neglected - their remnants are observed in the form of "cocycle" factors in the Frenkel-Kac formulas \cite{FK}, see \cite{Turb}). In the matrix-integral representations the integrals for the eigenvalue models are in fact reduced to those over diagonal matrices (consisting of the eigenvalues of the original matrices, hence the name "eigenvalue models"). Most importantly, from the physical point of view eigenvalue models describe only $topological$ (discrete) degrees of freedom, not propagating $particles$. \footnote{ Particles are always related to the "angular (unitary-matrix)" integrals (as everybody knows from the example of Wilson's lattice QCD), which are far less trivial to deal with, though these are also integrable in some broader sense of the word - within the (as yet non-existent) generalization of integrable hierarchies from the fields in the Cartan subalgebra to the entire WZNW model.} This can be understood if one notes that matrix models usually possess a gauge symmetry associated with the unitary rotation of matrices, $M_{\alpha} \longrightarrow U_{\alpha}^{\dagger}M_{\alpha}U_{\alpha}$, i.e. matrix models are usually $gauge$ theories.
In the case of eigenvalue models this symmetry is realized without "gauge fields" $V_{\alpha\beta}$, which would depend on pairs of indices $\alpha$, $\beta$ and transform like $V_{\alpha\beta} \longrightarrow U_{\alpha}^{\dagger}V_{\alpha\beta}U_{\beta}$. In other words, eigenvalue models are gauge theories without gauge fields, i.e. are purely topological. Thus it is not a surprise that they usually live in space-time of dimension $d<2$, \footnote{Let us recall that in the Polyakov formulation, which is the least counterintuitive for the interpretation of what happens in the space-time (target space), string models usually involve the Liouville field, identified as a time-variable in the target-space formalism. (Note that for this reason there is usually (at least one) $time$ in string theory, while space can be of any dimension (at least between 0 and 25), not necessarily integer.) Because of this extra Liouville field the space-time dimension $d$ usually differs by $1$ from the central charge of the CFT model which is coupled to $2d$-gravity to form a string model: $d = c+1$, and $d<2$ is the same as $c<1$. } since for $d>2$ there $should\ be$ particles, associated with the gauge fields. At the "boundary" lies the model of the "$d=2\ (c=1)$ string", which has one particle-like degree of freedom (the dilaton, which becomes a tachyon in the $d>2$ models). This very interesting model is much less well understood than the $d<2$ models; at least, its properties are already somewhat different from those of other eigenvalue models (especially in the most interesting "compactified" case), and it will not be discussed in these notes. Later we shall return to the subject of non-eigenvalue ($d>2$) theories, though not much is yet known about them; for now we are going to concentrate on the eigenvalue models. \subsection{1-matrix model} Hermitean matrix integrals are usually transformed to the eigenvalue form by separation of angular and eigenvalue variables.
As usual, the simplest case is that of the 1-matrix model \begin{eqnarray} Z_N\{t\} \equiv c_N\int_{N\times N} dH e^{\sum_{k=0}^{\infty} t_k {\rm Tr} H^k}, \label{1mamo"} \end{eqnarray} where this separation does not involve any information about unitary-matrix integrals. Take \begin{eqnarray} H = U^{\dagger}DU, \label{diag} \end{eqnarray} where $U$ is a unitary matrix and the diagonal matrix $D = {\rm diag}(h_1...h_N)$ has the eigenvalues of $H$ as its entries. Then the integration measure is \begin{eqnarray} dH = \prod_{i,j=1}^N dH_{ij} = \frac{[dU]}{[dU_{Cartan}]} \prod_{i=1}^N dh_i \Delta^2(h), \label{Dyson} \end{eqnarray} where the "Van-der-Monde determinant" is $\Delta (h) \equiv det_{(ij)} h_i^{j-1} = \prod_{i>j}^N (h_i - h_j)$ and $[dU]$ is the Haar measure of integration over unitary matrices. The way to derive eq.(\ref{Dyson}) is to consider the norm of an infinitesimal variation \begin{eqnarray} \mid\mid \delta H\mid\mid^2 &\equiv \sum_{i,j=1}^N \mid \delta H_{ij}\mid^2 = \sum_{i,j=1}^N \delta H_{ij}\delta H_{ji} = {\rm Tr} (\delta H)^2 = \nonumber \\ &= {\rm Tr}\left(-U^{\dagger}\delta UU^{\dagger}DU + U^{\dagger}D\delta U + U^{\dagger}\delta DU \right)^2 = \nonumber \\ &= {\rm Tr} (\delta D)^2 + 2i{\rm Tr}{\delta u}[\delta D,D] + 2{\rm Tr}\left(-{\delta u} D {\delta u} D + (\delta u)^2D^2 \right), \nonumber \end{eqnarray} where $\delta u \equiv \frac{1}{i}{\delta U}U^{\dagger} = \delta u^{\dagger}$ and $\delta D = {\rm diag}(\delta h_1\ldots \delta h_N)$. The second term on the r.h.s. vanishes because $D$ and $\delta D$ are both diagonal and commute. Therefore \begin{eqnarray} \mid\mid \delta H\mid\mid^2 = \sum_{i=1}^N (\delta h_i)^2 + \sum_{i,j=1}^N (\delta u)_{ij}(\delta u)_{ji}(h_i-h_j)^2.
\nonumber \end{eqnarray} Now it remains to recall the basic relation between the infinitesimal norm and the measure: ${\rm if}\ \ \mid\mid \delta l \mid\mid^2 = G_{ab}\delta l^a\delta l^b \ \ {\rm then}\ \ [dl] = \sqrt{{\rm det}_{ab}G_{ab}} \prod_a dl^a, $ to obtain eq.(\ref{Dyson}) with the Haar measure $[dU] = \prod_{ij}^N du_{ij}$ being associated with the infinitesimal norm \begin{eqnarray} \mid\mid \delta u\mid\mid^2 = {\rm Tr}(\delta u)^2 = \sum_{i,j=1}^N \delta u_{ij}\delta u_{ji} = \sum_{i,j=1}^N \mid \delta u_{ij}\mid^2 \nonumber \end{eqnarray} and $[dU_{Cartan}] \equiv \prod_{i=1}^N du_{ii}$. Coming back to the 1-matrix model, it remains to note that the "action" ${\rm Tr} U(H) \equiv \sum_{k=0}^{\infty} t_k {\rm Tr}H^k$ with $H$ substituted in the form (\ref{diag}) is independent of $U$: \begin{eqnarray} {\rm Tr} U(H) = \sum_{i=1}^N U(h_i). \nonumber \end{eqnarray} Thus \begin{eqnarray} Z_N\{t\} = \frac{1}{N!} \prod_{i=1}^N \int dh_i e^{U(h_i)} \prod_{i>j}^N (h_i-h_j)^2 = \nonumber \\ = \frac{1}{N!} \prod_{i=1}^N \int dh_i e^{U(h_i)} \Delta^2(h), \label{1mamoev} \end{eqnarray} provided $c_N$ is chosen to be \begin{eqnarray} c_N^{-1} = N!\frac{{\rm Vol}_{U(N)}}{({\rm Vol}_{U(1)})^N}, \label{cN} \end{eqnarray} where the volume of the unitary group in the Haar measure is equal to \begin{eqnarray} {\rm Vol}_{U(N)} = \frac{(2\pi)^{N(N+1)/2}}{\prod_{k=1}^N k!}. \label{volun} \end{eqnarray} A simple way to derive eq.(\ref{volun}) will be described at the end of this section, as an example of the application of the orthogonal polynomials technique. \subsection{Itzykson-Zuber and Kontsevich integrals} Let us now proceed to the Kontsevich integral, \begin{eqnarray} {\cal F}_{V,n}\{ L\} = \int_{n\times n} dX e^{- {\rm tr} V(X) + {\rm tr} L X}.
\label{KI'} \end{eqnarray} We shall see shortly that it in fact depends only on the eigenvalues of the matrix $ L$ (this fact was already used in the previous section); however, this time somewhat more sophisticated unitary-matrix integrals will be involved. Substitute $X = U_X^{\dagger}D_XU_X; \ \ L = U_{ L}^{\dagger} D_{ L}U_{ L}$ in (\ref{KI'}) and denote $U \equiv U_XU_{ L}^{\dagger}$. Then \begin{eqnarray} &{\cal F}_{V,n}\{ L\} = \nonumber \\ &= \prod_{i=1}^n \int dx_i e^{-V(x_i)} \Delta^2(x) \int_{n\times n} \frac{[dU]}{[dU_{Cartan}]} \exp\left({\sum_{\gamma,\delta =1}^n x_\gamma l_\delta\mid U_{\gamma\delta}\mid^2} \right). \label{KI'2} \end{eqnarray} In order to proceed further we need to evaluate the integral over unitary matrices which appears on the r.h.s. This integral can actually be represented in two different ways: \begin{eqnarray} I_n\{X, L\} \equiv \int_{n\times n} \frac{[dU]}{[dU_{Cartan}]} e^{{\rm tr} XU L U^{\dagger}} = \label{IZa} \\ = \int_{n\times n} \frac{[dU]}{[dU_{Cartan}]} e^{\sum_{\gamma,\delta =1}^n x_\gamma l_\delta\mid U_{\gamma\delta}\mid^2} \label{IZb} \end{eqnarray} (the $U$'s in the two integrals are related by the transformation $U \longrightarrow U_XUU_{ L}^{\dagger}$, and the Haar measure is both left- and right-invariant). Formula (\ref{IZa}) implies that $I_n\{X, L\}$ satisfies a set of simple equations \cite{Migeq}: \begin{eqnarray} \left( {\rm tr} \left(\frac{\partial}{\partial X_{tr}}\right)^k - {\rm tr} L^k \right) I_n\{X, L\} = 0, \ \ k\geq 0, \nonumber\\ \left( {\rm tr} \left(\frac{\partial}{\partial L_{tr}}\right)^k - {\rm tr} X^k \right) I_n\{X, L\} = 0, \ \ k\geq 0, \label{Migeq} \end{eqnarray} which by themselves are not very restrictive.
However, another formula, (\ref{IZb}), implies that $I_n\{X, L\}$ in fact depends only on the eigenvalues of $X$ and $ L$, and for $such$ $I_n\{X, L\} = \hat I\{x_\gamma, l_\delta\}$ eqs.(\ref{Migeq}) become very restrictive \footnote{When acting on $\hat I$, which depends only on eigenvalues, matrix derivatives turn into: \begin{eqnarray} {\rm tr} \frac{\partial}{\partial X_{tr}} \hat I &= \sum_\gamma \frac{\partial}{\partial x_\gamma} \hat I; \nonumber \\ {\rm tr} \frac{\partial^2}{\partial X_{tr}^2} \hat I &= \sum_\gamma \frac{\partial^2}{\partial x_\gamma^2} \hat I + \sum_{\gamma\neq\delta} \frac{1}{x_\gamma-x_\delta} \left(\frac{\partial}{\partial x_\gamma} - \frac{\partial}{\partial x_\delta}\right) \hat I; \nonumber \end{eqnarray} etc. } and allow one to determine $\hat I\{x_\gamma, l_\delta\}$ unambiguously (at least if $\hat I\{x_\gamma, l_\delta\}$ is expandable in a formal power series in $x_\gamma$ and $ l_\delta$). The final solution is \begin{eqnarray} I_n\{X, L\} = \frac{(2\pi)^{\frac{n(n-1)}{2}}}{n!} \frac{{\rm det}_{\gamma\delta} e^{x_\gamma l_\delta}}{\Delta(x)\Delta( l)}. \label{IZ} \end{eqnarray} The normalization constant can be fixed by taking $ L = 0$, when \begin{eqnarray} I_n\{X,L=0\} = \frac{{\rm Vol}_{U(n)}}{({\rm Vol}_{U(1)})^n} = \frac{(2\pi)^{\frac{n(n-1)}{2}}}{\prod_{k=1}^n k!}, \nonumber \end{eqnarray} and using the fact that \begin{eqnarray} \left.\frac{{\rm det}_{\gamma\delta} f_\gamma(l_\delta)}{\Delta(l)}\right| _{\{l_\delta = 0\}} = \left(\prod_{k=0}^{n-1}\frac{1}{k!} \right) {\rm det}_{\gamma\delta} \partial^{\delta-1}f_\gamma(0). \nonumber \end{eqnarray} Eq.(\ref{IZ}) is usually referred to as the Itzykson-Zuber formula \cite{IZ}.
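Eq.(\ref{IZ}) is also easy to test numerically for $n=2$. The following Monte Carlo sketch (not part of the original argument; Python with numpy/scipy assumed, and the values of $x_\gamma$, $l_\delta$ are arbitrary illustrative choices) samples Haar-distributed unitary matrices and compares the normalized Haar average of $\exp(\sum x_\gamma l_\delta |U_{\gamma\delta}|^2)$, i.e. (\ref{IZb}) divided by its value at $ L=0$, with ${\rm det}_{\gamma\delta} e^{x_\gamma l_\delta}/\Delta(x)\Delta( l)$ (the constant $(2\pi)^{n(n-1)/2}/n!$ cancels in this ratio):

```python
import numpy as np
from scipy.stats import unitary_group

x = np.array([0.3, -0.2])   # eigenvalues of X (illustrative)
l = np.array([0.5, 0.1])    # eigenvalues of L (illustrative)

# Haar average of exp( sum_{gamma,delta} x_gamma l_delta |U_{gamma delta}|^2 )
samples = 100_000
U = unitary_group.rvs(2, size=samples, random_state=0)   # Haar on U(2)
S = np.einsum('kab,ab->k', np.abs(U) ** 2, np.outer(x, l))
mc = np.exp(S).mean()

# Itzykson-Zuber prediction for the same normalized average:
#   det_{gamma delta} e^{x_gamma l_delta} / ( Delta(x) Delta(l) )
iz = np.linalg.det(np.exp(np.outer(x, l))) / ((x[1] - x[0]) * (l[1] - l[0]))
```

With $10^5$ samples the Monte Carlo estimate agrees with the Itzykson-Zuber value to a fraction of a percent.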
In the mathematical literature it was derived earlier by Harish-Chandra \cite{KhCh}, and in fact the integral (\ref{IZa}) is the basic example of the coadjoint orbit integrals \cite{STS}-\cite{AS}, which can be evaluated exactly with the help of the Duistermaat-Heckman theorem \cite{DH}, \cite{NiDH},\cite{Wit2YM},\cite{NT}. This calculation is the simplest example of the very important technique of $exact$ evaluation of $non-Gaussian$ unitary-matrix integrals, which is now taking its first steps (see \cite{KMSW1}-\cite{Shata}) and will be discussed at the end of these notes. Now we turn back to the eigenvalue formulation of the GKM. Substitution of (\ref{IZ}) into (\ref{KI'2}) gives: \begin{eqnarray} {\cal F}_{V,n}\{ L\} = \frac{(2\pi)^{\frac{n(n-1)}{2}}}{\Delta( l)} \prod_{\delta=1}^n \int dx_\delta e^{-V(x_\delta)} \Delta(x) \frac{1}{n!} {\rm det}_{\gamma\delta} e^{x_\gamma l_\delta} = \nonumber \\ = \frac{(2\pi)^{\frac{n(n-1)}{2}}}{\Delta( l)} \prod_{\delta =1}^n \int dx_\delta e^{-V(x_\delta) + x_\delta l_\delta} \Delta(x), \label{KIev} \end{eqnarray} where we used the antisymmetry of $\Delta(x)$ under permutations of the $x_\gamma$'s in order to trade $\frac{1}{n!} {\rm det}_{\gamma\delta} e^{x_\gamma l_\delta}$ for $e^{\sum_\delta x_\delta l_\delta}$ under the sign of the $x_\delta$ integration. We can now use the fact that $\Delta(x) = {\rm det}_{\gamma\delta}x_\delta^{\gamma -1}$ in order to rewrite the r.h.s. of (\ref{KIev}): \begin{eqnarray} {\cal F}_{V,n}\{ L\} = (2\pi)^{\frac{n(n-1)}{2}}\frac{{\rm det}_{\gamma\delta} \hat\varphi_\gamma( l_\delta)}{\Delta( l)}, \label{KIev'} \end{eqnarray} where \begin{eqnarray} \hat\varphi_\gamma( l) \equiv \int dx x^{\gamma -1}e^{-V(x)+ l x}, \ \ \ \gamma\geq 1.
\label{hatvarphi} \end{eqnarray} These functions $\hat\varphi( l)$ satisfy a simple recurrent relation: \begin{eqnarray} \hat\varphi_\gamma = \frac{\partial\hat\varphi_{\gamma -1}}{\partial l} = \left(\frac{\partial}{\partial l}\right)^{\gamma -1}\hat\Phi \label{preKasch} \end{eqnarray} with \begin{eqnarray} \hat\Phi( l)\equiv \hat\varphi_1( l) = \int dx e^{-V(x)+ l x}. \label{preKasch'} \end{eqnarray} Note also that if the "zero-time" $N$ is introduced (see subsection 2.6 above and \cite{Toda}), then \begin{eqnarray} {\cal F}_{V,n}\{N\mid L\} \equiv {\cal F}_{V(X)-N\log X,n}\{ L\} = (2\pi)^{\frac{n(n-1)}{2}}\frac{{\rm det}_{\gamma\delta} \hat\varphi_{\gamma +N}( l_\delta)}{\Delta( l)} \label{KIev'N} \end{eqnarray} with just the $same$ $\hat\varphi_\gamma( l)$ and $\gamma,\delta = 1\ldots n$. If we divide by the quasiclassical factor ${\cal C}_V\{\Lambda \}({\rm det} \Lambda )^N,\ L = V'(\Lambda )$, in order to transform the Kontsevich integral into the Kontsevich model (see section 2.5), we get: \begin{eqnarray} {\cal Z}_V\{N,T\} = \frac{1}{({\rm det} \Lambda )^N}\cdot\frac{{\rm det}_{\gamma\delta} \varphi_{\gamma +N}(\lambda _\delta)}{\Delta(\lambda )}. \label{KIev"N} \end{eqnarray} The role of ${\cal C}_V\{\Lambda \}$ is to convert the $\hat\varphi( l)$ into properly normalized expansions in the negative integer powers of $\lambda $: \begin{eqnarray} \varphi_\gamma(\lambda ) = \frac{e^{-\lambda V'(\lambda )+V(\lambda )}\sqrt{V''(\lambda )}}{\sqrt{2\pi}} \hat\varphi_\gamma(V'(\lambda )) = \lambda ^{\gamma -1}(1 + {\cal O}(\lambda ^{-1})), \label{varphiko} \end{eqnarray} and to change $\Delta( l) = \Delta(V'(\lambda ))$ in the denominator of (\ref{KIev'N}) into $\Delta(\lambda )$ in (\ref{KIev"N}).
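The recurrence (\ref{preKasch}) is straightforward to confirm numerically. A sketch for the Gaussian potential $V(x) = x^2/2$ (an illustrative choice, not tied to any particular model above; Python with scipy assumed), comparing $\hat\varphi_2$ and $\hat\varphi_3$ computed by direct quadrature with finite-difference $ l$-derivatives of $\hat\Phi$:

```python
import numpy as np
from scipy.integrate import quad

def phi_hat(gamma, l):
    """phi_hat_gamma(l) = int dx x^(gamma-1) e^{-V(x) + l x},  V(x) = x^2/2."""
    val, _ = quad(lambda x: x ** (gamma - 1) * np.exp(-x ** 2 / 2 + l * x),
                  -np.inf, np.inf)
    return val

l0 = 0.4

# phi_hat_2 = d(Phi_hat)/dl : central difference of phi_hat_1
h1 = 3e-3
d1 = (phi_hat(1, l0 + h1) - phi_hat(1, l0 - h1)) / (2 * h1)

# phi_hat_3 = d^2(Phi_hat)/dl^2 : second central difference of phi_hat_1
h2 = 1e-2
d2 = (phi_hat(1, l0 + h2) - 2 * phi_hat(1, l0) + phi_hat(1, l0 - h2)) / h2 ** 2

phi2 = phi_hat(2, l0)
phi3 = phi_hat(3, l0)
```

For the Gaussian potential one also has the closed form $\hat\Phi( l) = \sqrt{2\pi}\,e^{ l^2/2}$, which the quadrature reproduces at $ l=0$.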
Instead of the simple recurrent relations (\ref{preKasch}) for $\hat\varphi$, the normalized functions $\varphi$ satisfy: \begin{eqnarray} \varphi_\gamma(\lambda ) = {\cal A}\varphi_{\gamma -1}(\lambda ) = {\cal A}^{\gamma -1}\Phi(\lambda ), \label{Kasch} \end{eqnarray} where $\Phi(\lambda ) = \varphi_1(\lambda )$ and the operator \begin{eqnarray} {\cal A} = \frac{1}{V''(\lambda )}\cdot\frac{\partial}{\partial \lambda } -\frac{1}{2}\frac{V'''(\lambda )}{(V''(\lambda ))^2} + \lambda \label{Kaschop} \end{eqnarray} now depends on the potential $V(x)$. \subsection{Conventional Multimatrix models} The multimatrix integrals of the form \begin{eqnarray} Z_N\{t^{(\alpha)}\} \equiv \nonumber \\ \equiv c_N^{p-1}\int_{N\times N} dH^{(1)}...dH^{(p-1)} \prod_{\alpha = 1}^{p-1} e^{\sum_{k=0}^{\infty}t_k^{(\alpha)}{\rm Tr} H_{(\alpha)}^k} \prod_{\alpha = 1}^{p-2} e^{{\rm Tr}H^{(\alpha)}H^{(\alpha+1)}} \label{mumamo'} \end{eqnarray} can be rewritten in the eigenvalue form using the same Itzykson-Zuber formula (\ref{IZ}).
Indeed, substituting $H^{(\alpha)} = \left.U^{(\alpha)}\right.^{\dagger}D^{(\alpha)} U^{(\alpha)}$ and then defining $U^{(\alpha)}\left.U^{(\alpha+1)}\right.^{\dagger} \equiv \tilde U^{(\alpha)}$, we obtain: \begin{eqnarray} \oldfalse \begin{array}{c} Z_N\{t^{(\alpha)}\} =\\ = \frac{1}{N!} \prod_{\alpha = 1}^{p-1} \prod_{i=1}^N \int dh_i^{(\alpha)} e^{U_\alpha(h_i^{(\alpha)})} \Delta^2(h^{(\alpha)}) \prod_{\alpha = 1}^{p-2} I_N\{H^{(\alpha)}, H^{(\alpha+1)}\} = \\ = \frac{1}{N!} \prod_{\alpha = 1}^{p-1} \prod_{i=1}^N \int dh_i^{(\alpha)} e^{U_\alpha(h_i^{(\alpha)})} \prod_{\alpha = 1}^{p-2} e^{h_i^{(\alpha)}h_i^{(\alpha+1)}} \Delta(h^{(1)})\Delta(h^{(p-1)}), \end{array} \label{mumamoev} \end{eqnarray} where the same trick of trading $\frac{1}{N!} {\rm det}_{ij} e^{h_i^{(\alpha)}h_j^{(\alpha+1)}}$ for $e^{\sum_{i=1}^N h_i^{(\alpha)}h_i^{(\alpha+1)}}$ under the sign of the $h_i^{(\alpha)}$-integration is performed step by step: first for $\alpha = 1$, then for $\alpha = 2$, and so on. Note that all the Van-der-Monde determinants have disappeared from the final formula on the r.h.s. of eq.(\ref{mumamoev}), except for those at the ends of the matrix chain (at $\alpha = 1$ and $\alpha = p-1$). If the chain were closed rather than open, i.e.
if there were an additional factor of $e^{{\rm Tr}H^{(p-1)}H^{(1)}}$ under the integral in (\ref{mumamo'}), then the trick with the separation of all angular-variable (unitary-matrix) integrations would not work so simply: in addition to the Itzykson-Zuber integral, much more involved quantities would be required, such as \begin{eqnarray} \oldfalse \begin{array}{c} I_n\{X_1,X_2; L\} \equiv \\ \equiv c_n \int_{n\times n} \frac{[dU_1]}{[dU_{1,Cartan}]} \frac{[dU_2]}{[dU_{2,Cartan}]} \\ \cdot \exp\left({{\rm tr} X_1U_1 L U_1^{\dagger} + {\rm tr} X_2U_2 L U_2^{\dagger} + {\rm tr}X_1(U_1U_2^{\dagger})X_2(U_2U_1^{\dagger})}\right) \end{array} \end{eqnarray} This (so far unresolved) closed-chain model (lattice Potts model) is an example of a non-eigenvalue model; in the $p=\infty$ case it turns into the "compactified" $c=1$ model. This theory is $more$ complicated than the simplest class of non-eigenvalue models known so far, that of "induced Yang-Mills theories", known as Kazakov-Migdal models. \subsection{Determinant formulas for eigenvalue models} We are now prepared to make the crucial step towards understanding the mathematical structure behind eigenvalue models, which distinguishes their partition functions within the entire variety of arbitrary $N$-fold integrals. This structure expresses itself in the form of determinantal formulas, which we are now going to discuss. In the next section 4 these formulas will be identified as examples of $\tau$-functions of the KP and Toda hierarchies. Looking at the relevant integrals (\ref{1mamoev}), (\ref{mumamoev}) one can notice that the integrals over different eigenvalues, with non-trivial measures which depend on the shape of the potentials $U$ or $V$, are almost separated, the only "interaction" between different eigenvalues being given by $universal$ (potential-independent) quantities built from the Van-der-Monde determinants.
This feature is intimately related both to its origin (decoupling of the angular variables in the original matrix integral) and to its most important implication (integrability). The main property of the Van-der-Monde determinant is that it is at the same time a $Pfaffian$ (and it is in this quality that it arises from matrix integrals) and a $determinant$ (and this is the feature that implies integrability): \begin{eqnarray} \prod_{i>j}(h_i-h_j) = \Delta(h) = {\rm det}_{ij}h_i^{j-1}. \label{vdmd} \end{eqnarray} We already used this property above, when going from eq.(\ref{KIev}) to eq.(\ref{KIev'}), which as we shall see later is the crucial step in the proof of integrability of the Kontsevich model. In that case the determinantal formula (\ref{KIev'}) for the partition function was trivial to derive, because the integrand was $linear$ in Van-der-Monde determinants. Now we turn to slightly more complicated situations, involving products of Van-der-Monde determinants. Consider an eigenvalue model of the form: \begin{eqnarray} Z_N = \frac{1}{N!} \prod_{k=1}^N \int d\mu_{h_k,\bar h_k} \Delta(h)\Delta(\bar h), \label{meas1} \end{eqnarray} to be referred to as the "scalar-product" model. All conventional multimatrix models (\ref{mumamo'}) belong to this class. In the case of the 1-matrix model (\ref{1mamoev}) \begin{eqnarray} d\mu_{h,\bar h} = dhd\bar h e^{U(h)}\delta(h-\bar h), \end{eqnarray} while for conventional multimatrix models (\ref{mumamoev}) \begin{eqnarray} d\mu_{h^{(1)}, h^{(p-1)}} = dh^{(1)}dh^{(p-1)}\prod_{\alpha=2}^{p-2} \int dh^{(\alpha)} \prod_{\alpha =1}^{p-1} e^{U_{\alpha}(h^{(\alpha)})} \prod_{\alpha = 1}^{p-2} e^{h^{(\alpha)}h^{(\alpha+1)}} \label{measmumamo} \end{eqnarray} If $d\mu_{h,\bar h} =\delta(h-\bar h)d\bar h d\mu_h$ we call this measure local. The main feature of a local measure is that the operator of multiplication by $h$ (or by any function of $h$) is Hermitean. Thus the measure is local in the 1-matrix model, but non-local for all $p-1>1$.
In the latter case the measure is defined to depend only on $h = h^{(1)}$ and $\bar h = h^{(p-1)}$, all other $h^{(\alpha)},\ \alpha = 2\ldots p-2$ being integrated out, which makes the "interaction" between $h$ and $\bar h$ more complicated than just $\delta(h-\bar h)$ in the one-matrix ($p=2$) and $e^{h\bar h}$ in the two-matrix ($p=3$) cases. The set of particular formulas (\ref{measmumamo}) for $p>3$ is in no sense distinguished among other scalar-product models, and from now on we shall not consider conventional multimatrix models with $p-1 > 2$ as a separate class of theories. Together, eqs.(\ref{meas1}) and (\ref{vdmd}) imply that \begin{eqnarray} Z_N = \frac{1}{N!} \prod_{k=1}^N \int d\mu_{h_k,\bar h_k} {\rm Det}_{ik} h_k^{i-1} {\rm Det}_{jk} \bar h_k^{j-1} = \nonumber \\ = {\rm Det}_{ij} \int d\mu_{h,\bar h} h^{i-1}\bar h^{j-1} = {\rm Det}_{ij} \langle h^{i-1} \mid \bar h^{j-1} \rangle, \label{dimamodet} \end{eqnarray} where an obvious notation is introduced for the scalar product \begin{eqnarray} \langle f(h)\mid g(\bar h)\rangle \equiv \int d\mu_{h,\bar h} f(h)g(\bar h). \nonumber \end{eqnarray} We can now be a little more specific and introduce time-variables $t_k$ and $\bar t_k$ so that \begin{eqnarray} d\mu_{h,\bar h} = e^{U(h)+\bar U(\bar h)}{d\hat\mu}_{h,\bar h}, \nonumber \\ U(h) = \sum_{k=-\infty}^{\infty} t_kh^k, \ \ \bar U(\bar h) = \sum_{k=-\infty}^{\infty} \bar t_k\bar h^k, \label{tbartmeas} \end{eqnarray} and ${d\hat\mu}_{h,\bar h}$ is already independent of the time-variables $t_k$ and $\bar t_k$.
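The determinantal representation (\ref{dimamodet}) is easy to test explicitly for the Gaussian 1-matrix model. A numerical sketch (not part of the original argument; Python with numpy assumed, the weight $e^{U(h)} = e^{-h^2}$ and $N=3$ being illustrative choices), where Gauss-Hermite quadrature makes both sides exact up to machine precision; the Van-der-Monde identity (\ref{vdmd}) is verified along the way:

```python
import itertools
import math
import numpy as np

N = 3
nodes, weights = np.polynomial.hermite.hermgauss(10)  # exact up to degree 19

# Van-der-Monde identity (vdmd): prod_{i>j}(h_i - h_j) = det_{ij} h_i^{j-1}
h = np.array([0.7, -1.2, 0.4])
vdm_prod = np.prod([h[i] - h[j] for i in range(N) for j in range(i)])
vdm_det = np.linalg.det(np.vander(h, increasing=True))

# l.h.s. of (dimamodet): (1/N!) int prod_k dh_k e^{-h_k^2} Delta(h)^2
lhs = 0.0
for idx in itertools.product(range(len(nodes)), repeat=N):
    hs, w = nodes[list(idx)], np.prod(weights[list(idx)])
    delta = np.prod([hs[i] - hs[j] for i in range(N) for j in range(i)])
    lhs += w * delta ** 2
lhs /= math.factorial(N)

# r.h.s.: Det_{ij} <h^{i-1} | h^{j-1}>, with moments <h^k> = int dh e^{-h^2} h^k
mom = [np.sum(weights * nodes ** k) for k in range(2 * N - 1)]
rhs = np.linalg.det(np.array([[mom[i + j] for j in range(N)] for i in range(N)]))
```

For this weight the common value can also be computed by hand: the Hankel determinant of Gaussian moments equals $\sqrt{\pi}\,\pi/4$ for $N=3$.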
If we now denote ${\cal H}^f(t,\bar t) \equiv \langle 1\mid 1\rangle$, then \begin{eqnarray} {\cal H}^f_{ij} \equiv \langle h^{i}\mid \bar h^{j} \rangle = \frac{\partial^2}{\partial t_{i}\partial \bar t_{j}}{\cal H}^f(t,\bar t) = \nonumber \\ \stackrel{{\rm if}\ i,j\geq 0}{=} \left(\frac{\partial}{\partial t_1}\right)^{i} \left(\frac{\partial}{\partial\bar t_1}\right)^{j}{\cal H}^f(t,\bar t), \label{todamatr} \end{eqnarray} and \begin{eqnarray} Z_N = {\rm Det}_N {\cal H}^f_{ij}, \label{todarepr} \end{eqnarray} where ${\rm Det}_N$ stands for the determinant of the $N\times N$ matrix ${\cal H}^f_{ij}$ (which is itself defined for {\it any} integers $i,j$) with $i,j = 0,\ldots, N-1$. A characteristic property of ${\cal H}^f_{ij}$ is its peculiar time-dependence: \begin{eqnarray} \frac{\partial{\cal H}^f_{ij}}{\partial t_k} = {\cal H}^f_{i+k,j}; \ \ \ \frac{\partial{\cal H}^f_{ij}}{\partial \bar t_k} = {\cal H}^f_{i,j+k}. \end{eqnarray} Eq.(\ref{todarepr}) provides the determinantal formula for all scalar-product models. The case of the local measure, i.e. the 1-matrix model, is a little special. In this case $U(h)$ contains the whole information about the measure: $d\mu_{h,\bar h} = \delta(h-\bar h)d\mu_h,\ d\mu_h = e^{U(h)}dh$, and there is no $\bar U(\bar h)$ (or, rather, the $\bar t$ simply coincide with the $t$). Then (\ref{todarepr}) is still valid, but \begin{eqnarray} {\cal H}^f_{ij} = \left.\langle h^{i}\mid \bar h^{j} \rangle\right| _{d\mu_{h,\bar h}} = \left.\langle h^{i+j} \rangle\right|_{d\mu_h} = \frac{\partial}{\partial t_{i+j}}{\cal H}^f(t) = \nonumber \\ \stackrel{{\rm if}\ i,j\geq 0}{=} \left(\frac{\partial}{\partial t_1}\right)^{i+j}{\cal H}^f(t). \label{todachainmatr} \end{eqnarray} The same formula (\ref{todachainmatr}) can also be derived as a limit of eq.(\ref{KIev'}) for the Kontsevich integral.
Indeed, \begin{eqnarray} Z_N\{t\} = c_N\int_{N\times N}dH e^{{\rm Tr}U(H)} = \lim_{ L \rightarrow 0} {\cal F}_{U,N}\{ L\} = \nonumber \\ = \lim_{\{ l_j\}\rightarrow 0} \frac{{\rm Det}_{ij}\hat\varphi_i^{\{U\}}( l_j)}{\Delta( l)} = {\rm Det}_{ij}\left.\frac{\partial^{j-1}\hat\varphi_i^{\{U\}}( l)} {\partial l^{j-1}}\right|_{ l=0} = {\rm Det}_{ij} {\cal H}^f_{i-1,j-1}, \end{eqnarray} where this time \begin{eqnarray} {\cal H}^f_{i-1,j-1} \stackrel{i,j>0}{=} \left.\frac{\partial^{j-1}\hat\varphi_i^{\{U\}}( l)} {\partial l^{j-1}}\right|_{ l =0} \stackrel{(\ref{KIev'})}{=} \left(\frac{\partial}{\partial l}\right)^{i+j-2} \hat\Phi^{\{U\}} \mid_{ l = 0}. \end{eqnarray} Now we note that the action of $\frac{\partial}{\partial l}$ on $\hat\Phi^{\{U\}}( l) = \int dx e^{U(x) + l x}$ is equivalent to that of $\frac{\partial}{\partial t_1}$, since this is no longer a matrix integral, and thus \begin{eqnarray} {\cal H}^f_{ij} = \left( \frac{\partial}{\partial t_1}\right)^{i+j} \hat\Phi^{\{U\}}(0), \label{todachainmatr1} \end{eqnarray} i.e. ${\cal H}^f(t) = \hat\Phi^{\{U\}}(0)$. $Conformal$ multimatrix models were introduced in section 2.3 above just as eigenvalue models. For the $A_{p-1}$ series the partition functions are defined to be \begin{eqnarray} \oldfalse \begin{array}{c} Z_{N_1\ldots N_{p-1}}^{A_{p-1}}\{t^{(1)} \ldots t^{(p-1)}\} = \\ = \prod_{\alpha =1}^{p-1} c_{N_\alpha}\int_{N_\alpha\times N_\alpha} dH^{(\alpha)} e^{{\rm Tr}U_\alpha(H^{(\alpha)})} \cdot \\ \cdot \prod_{\alpha =1}^{p-2} {\rm Det} \left( H^{(\alpha)}\otimes I - I\otimes H^{(\alpha +1)}\right) = \\ = \prod_{\alpha = 1}^{p-1} \frac{1}{N_\alpha !}\prod_{i=1}^{N_\alpha} \int dh_i^{(\alpha)} e^{U_\alpha (h_i^{(\alpha)})} \Delta^2(h^{(\alpha)}) \prod_{\alpha = 1}^{p-2} \prod_{i,k} (h_i^{(\alpha)} - h_k^{(\alpha +1)}). \label{comamoap} \end{array} \end{eqnarray} This expression does not have the form of eq.(\ref{meas1}); thus conformal matrix models with $p-1>1$ are not of the "scalar-product" type.
We shall sometimes call them $(p-1)$-component models, because they are related to the multi-component integrable hierarchies. The simplest way to proceed with their investigation is to use the same trick with the Kontsevich integral which was just applied in the 1-matrix case. Let us start from a very general $(p-1)$-component model: \begin{eqnarray} Z = \prod_{\alpha = 1}^{p-1} \int_{N_\alpha\times N_\alpha} dH^{(\alpha)} e^{{\rm Tr} U_\alpha (H^{(\alpha)})} K(H^{(1)}\ldots H^{(p-1)}). \label{mucomammo} \end{eqnarray} It can also be represented in terms of Kontsevich integrals: \begin{eqnarray} Z = \left.K\left(\frac{\partial}{\partial L^{(1)}_{tr}},\ldots, \frac{\partial}{\partial L^{(p-1)}_{tr}}\right) \prod_{\alpha = 1}^{p-1} {\cal F}_{U_\alpha, N_\alpha}\{ L^{(\alpha)}\}\right|_{ L^{(\alpha)} = 0}. \label{mucomamo1} \end{eqnarray} This representation is not very useful, since the limit $ L \rightarrow 0$ is not easy to take, unless $K$ is a polynomial in the eigenvalues of all its arguments. However, this is exactly the case for our conformal models (\ref{comamoap}). Indeed, \begin{eqnarray} K^{A_{p-1}} = \prod_{\alpha =1}^{p-2} {\rm Det} \left(\frac{\partial}{\partial L^{(\alpha)}_{tr}}\otimes I - I \otimes \frac{\partial}{\partial L^{(\alpha+1)}_{tr}}\right). \end{eqnarray} Still this is not very convenient, because the representation (\ref{KIev'}) for ${\cal F}$ contains $\Delta( L)$ in the denominator, which is not very pleasant to differentiate. A simplification can be achieved if we instead rewrite the original expression on the r.h.s. of (\ref{comamoap}) as follows: \begin{eqnarray} &Z_{N_1\ldots N_{p-1}}^{A_{p-1}}\{t^{(1)} \ldots t^{(p-1)}\} = \nonumber \\ &= \Delta\left(\frac{\partial}{\partial l^{(1)}} \right) \prod_{\alpha =1}^{p-2} \Delta\left(\frac{\partial}{\partial l^{(\alpha)}}, \frac{\partial}{\partial l^{(\alpha+1)}} \right) \Delta\left(\frac{\partial}{\partial l^{(p-1)}} \right) \times \nonumber \\ &\times \left.
\prod_{\alpha =1}^{p-1}\left( \frac{1}{N_\alpha !}\prod_{i=1}^{N_\alpha}\int dh_i^{(\alpha)} e^{U_\alpha (h_i^{(\alpha)})+ l_i^{(\alpha)}h_i^{(\alpha)}} \right)\right|_{ l^{(\alpha)} = 0} \label{mucomamo2} \end{eqnarray} where $\Delta(h,h') \equiv \prod_{i>j}^N(h_i-h_j)\prod_{k>l}^{N'}(h'_k-h'_l) \prod_{i=1}^N\prod_{k=1}^{N'}(h'_k-h_i)$. This formula already takes the specific form of $K$ into account. The product of integrals in brackets on the r.h.s. of (\ref{mucomamo2}) is equal (for every fixed $\alpha$) to \begin{eqnarray} \frac{1}{N_\alpha !} \prod_{j=1}^{N_\alpha}\hat\Phi^{\{U_\alpha\}}( l_j^{(\alpha)}) \end{eqnarray} (compare with eq.(\ref{todachainmatr1})). In order to simplify the notation we shall further denote $\hat\Phi^{\{U_\alpha\}}( l) \equiv \int dx e^{U_\alpha(x)+ l x}$ by $\hat\Phi_\alpha( l)$, and $\left(\frac{\partial}{\partial t_1^{(\alpha)}}\right)^k \hat\Phi^{\{U_\alpha\}}( l^{(\alpha)}) = \left(\frac{\partial}{\partial l^{(\alpha)}}\right)^k \hat\Phi^{\{U_\alpha\}}( l^{(\alpha)})$ by $\partial^k \hat\Phi_\alpha( l)$. Thus \begin{eqnarray} &Z_{N_1\ldots N_{p-1}}^{A_{p-1}}\{t^{(1)} \ldots t^{(p-1)}\} = \nonumber \\ &= \Delta\left(\frac{\partial}{\partial l^{(1)}} \right) \prod_{\alpha =1}^{p-2} \Delta\left(\frac{\partial}{\partial l^{(\alpha)}}, \frac{\partial}{\partial l^{(\alpha+1)}} \right) \Delta\left(\frac{\partial}{\partial l^{(p-1)}} \right) \times \nonumber \\ &\times \left. \prod_{\alpha =1}^{p-1}\left( \frac{1}{N_\alpha !} \prod_{j=1}^{N_\alpha} \hat\Phi_\alpha( l_j^{(\alpha)})\right) \right|_{ l^{(\alpha)} = 0}.
\label{mucomamo4} \end{eqnarray} If $p-1=1$, the differential operator is just the square of the determinant $\Delta(\partial/\partial l)$, and we can use the relation \begin{eqnarray} \Delta^2(h) &= \sum_P {\rm Det}_{ij} h_{P(j)}^{i+j-2} = \nonumber \\ &= \sum_P {\rm Det} \left[ {\begin{array}{ccrc} 1 &h_{P(2)} &h^2_{P(3)}\ldots &h^{N_1-1}_{P(N_1)} \\ h_{P(1)} &h^2_{P(2)} &h^3_{P(3)}\ldots &h^{N_1}_{P(N_1)} \\ h_{P(1)}^2 &h^3_{P(2)} &h^4_{P(3)}\ldots &h^{N_1+1}_{P(N_1)} \\ & &\ldots & \\ h_{P(1)}^{N_1-1} &h_{P(2)}^{N_1} &h_{P(3)}^{N_1+1}\ldots &h^{2N_1-2}_{P(N_1)} \end{array}} \right] \label{detform1} \end{eqnarray} where the sum is over all the $N_1!$ permutations $P$ of the elements $1\ldots N_1$, in order to conclude that (\ref{mucomamo4}) reproduces our old formulas (\ref{todarepr}), (\ref{todachainmatr1}): $Z_N = {\rm Det}_{ij} \partial^{i+j-2}\hat\Phi$. For $p-1=2$ we need to use a more complicated analogue of (\ref{detform1}): \begin{eqnarray} &\Delta(h)\Delta(h,h')\Delta(h') = \label{detform2} \\ &= \sum_P\sum_{\bar P}{\rm Det}\left[ {\begin{array}{crccrc} 1 &h_{P(2)} \ldots &h^{N_1-1}_{P(N_1)} &1 &\bar h_{\bar P(2)} \ldots &\bar h^{N_2-1}_{\bar P(N_2)} \\ h_{P(1)} &h^2_{P(2)} \ldots &h^{N_1}_{P(N_1)} &\bar h_{\bar P(1)} & \bar h_{\bar P(2)}^2 \ldots &\bar h^{N_2}_{\bar P(N_2)} \\ h_{P(1)}^2 &h^3_{P(2)} \ldots &h^{N_1+1}_{P(N_1)} &\bar h_{\bar P(1)}^2 &\bar h_{\bar P(2)}^3 \ldots &\bar h^{N_2+1}_{\bar P(N_2)} \\ & & \ldots & & & \\ h_{P(1)}^{{\cal N}_1-1} &h_{P(2)}^{{\cal N}_1} \ldots &h^{{\cal N}+N_1-2}_{P(N_1)} &\bar h_{\bar P(1)}^{{\cal N}_2-1} &\bar h_{\bar P(2)}^{{\cal N}_2} \ldots &\bar h^{{\cal N}+N_2-2}_{\bar P(N_2)} \end{array}} \right] \nonumber \end{eqnarray} where ${\cal N} = \sum_{\alpha =1}^{p-1} N_\alpha$. Making use of this formula, we conclude that the r.h.s.
of (\ref{mucomamo4}) for $p-1=2$ is also representable in the form of a determinant: \begin{eqnarray} {\rm Det} \left[ \begin{array}{crccrc} \hat\Phi &\partial\hat\Phi \ldots &\partial^{N_1-1}\hat\Phi &\hat{\bar\Phi} &\partial\hat{\bar\Phi} \ldots &\partial^{N_2-1}\hat{\bar\Phi} \\ \partial\hat\Phi &\partial^2\hat\Phi \ldots &\partial^{N_1}\hat\Phi &\partial\hat{\bar\Phi} &\partial^2\hat{\bar\Phi} \ldots &\partial^{N_2}\hat{\bar\Phi} \\ & & \ldots & & & \\ \partial^{{\cal N}-1}\hat\Phi &\partial^{\cal N}\hat\Phi \ldots &\partial^{{\cal N}+N_1-2}\hat\Phi &\partial^{{\cal N}-1}\hat{\bar\Phi} &\partial^{\cal N}\hat{\bar\Phi} \ldots &\partial^{{\cal N}+N_2-2}\hat{\bar\Phi} \end{array} \right] \nonumber \end{eqnarray} where $\hat\Phi = \hat\Phi_1,\ \hat{\bar\Phi} = \hat\Phi_2$ and all the arguments are set equal to $ l^{(\alpha)} = 0$. It is especially easy to check formula (\ref{detform2}) in the simplest case $N_1=N_2=1$. Then it just says that $\bar h - h = {\rm Det}\left[\begin{array}{cc} 1&1 \\ h&\bar h \end{array}\right]$. Analogous expressions for $p-1>2$ are more involved: they are no longer just determinants. This is obvious already from the simplest case $N_1=\ldots=N_{p-1}=1$: the product $\prod_{\alpha=1}^{p-2}(h^{(\alpha)} - h^{(\alpha+1)})$ is no longer the determinant of any nice matrix. \subsection{Orthogonal polynomials} The formalism of orthogonal polynomials was intensively used in the early days of the theory of matrix models. It is applicable to scalar-product eigenvalue models and allows one to further transform (diagonalize) the remaining determinants into products.
In contrast both to the reduction from the original $N^2$-fold matrix integrals to the eigenvalue problem, which (when possible) reflects a physical phenomenon, the decoupling of angular (unitary-matrix) degrees of freedom (associated with $d$-dimensional gauge bosons), and to the occurrence of determinant formulas, which reflects integrability of the model, orthogonal polynomials appear more as a technical device. Essentially, orthogonal polynomials are needed if one wants to explicitly separate the dependence on the size $N$ of the matrix in the matrix integral (the "zero-time") from the dependence on all other time-variables, and to explicitly construct variables which satisfy Toda-like equations. However, the modern description of integrable hierarchies in terms of $\tau$-functions does not require explicit separation of the zero-time and treats it more or less on an equal footing with all other variables, thus making the use of orthogonal polynomials unnecessary. Still, this technique remains in the arsenal of matrix-model theory\footnote{ Of course, one can also use this link just with the aim of putting the rich and beautiful mathematical theory of orthogonal polynomials into the general context of string theory. Among the interesting problems here is the matrix-model description of $q$-orthogonal polynomials. } and we now briefly explain what it is about. At the end of this section two simple applications will also be described: one to the evaluation of the volume of the unitary group, the other to a direct proof of the equivalence of the ordinary 1-matrix model and the Gaussian Kontsevich model. Both examples make use of the explicitly known orthogonal Hermite polynomials and in this sense are not quite representative: usually orthogonal polynomials are $not$ known explicitly. Some applications of such an "abstract" theory of orthogonal polynomials to the study of matrix models will be mentioned in the following sections.
In the context of the theory of scalar-product matrix models, orthogonal polynomials naturally arise when one notes that, once the partition function appears in the simple determinantal form of eq.(\ref{dimamodet}), any linear change of bases $h^i \rightarrow Q_i(h) = \sum_k A_{ik}h^k, \ \bar h^j \rightarrow \bar Q_j(\bar h) = \sum_l B_{jl}\bar h^l$ can easily be performed, and $Z \longrightarrow Z\cdot {\rm det} A\cdot {\rm det} B$. In particular, if $A$ and $B$ are triangular with units on the diagonals, their determinants are just unity and $Z$ does not change at all. This freedom is, however, enough to diagonalize the scalar product and choose polynomials $Q_i$ and $\bar Q_j$ so that \begin{eqnarray} \langle Q_i(h) \mid \bar Q_j(\bar h) \rangle = e^{\phi_i}\delta_{ij}. \label{orthopol} \end{eqnarray} $Q_i$ and $\bar Q_j$, defined in this way up to normalization, are called orthogonal polynomials. (Note that $\bar Q$ need not be a {\it complex} conjugate of $Q$: the "bar" does not mean complex conjugation.) Because of the above restriction on the form of the matrices $A$ and $B$, these polynomials are normalized so that \begin{eqnarray} Q_i(h) = h^i + \ldots;\ \ \ \bar Q_j(\bar h) = \bar h^j + \ldots \nonumber \end{eqnarray} i.e. the leading power enters with $unit$ coefficient. From (\ref{dimamodet}) and (\ref{orthopol}) it follows that \begin{eqnarray} Z_N = \prod_{i=1}^N e^{\phi_{i-1}}. \label{ZNprod} \end{eqnarray} This formula is essentially the main outcome of orthogonal-polynomial theory for matrix models: it provides a complete separation of the dependence of $Z$ on $N$ (the size of the matrix) from that on all other parameters (which specify the shape of the potential, i.e. the measure $d\mu_{h,\bar h}$); the latter information is encoded in a rather complicated fashion in the $\phi_i$.
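Eqs.(\ref{orthopol}) and (\ref{ZNprod}) can be illustrated numerically: a monic Gram-Schmidt orthogonalization with respect to the Gaussian local measure $e^{-h^2/2}dh$ reproduces $Z_N = \prod_{k=0}^{N-1}\sqrt{2\pi}\,k!$, known independently from the Hermite norms discussed below. A sketch (not in the original text; the measure is discretized on a grid with ad hoc parameters):

```python
import math

# Discretized local measure dμ_h = e^{-h^2/2} dh
STEP, CUT = 0.001, 10.0
grid = [-CUT + k * STEP for k in range(int(2 * CUT / STEP) + 1)]
w = [math.exp(-x * x / 2) * STEP for x in grid]

def dot(f, g):
    # <f | g> with respect to the discretized measure
    return sum(wi * fi * gi for wi, fi, gi in zip(w, f, g))

N = 5
mono = [[x ** i for x in grid] for i in range(N)]     # the monomials h^i
Q, norms = [], []                                     # monic Q_i and e^{phi_i}
for i in range(N):
    q = mono[i][:]
    for j in range(i):                                # subtract projections on Q_j
        c = dot(mono[i], Q[j]) / norms[j]
        q = [a - c * b for a, b in zip(q, Q[j])]
    Q.append(q)
    norms.append(dot(q, q))

Z_N = math.prod(norms)                                # eq. (ZNprod)
exact = math.prod(math.sqrt(2 * math.pi) * math.factorial(k) for k in range(N))
```

The triangular change of basis leaves $Z$ intact, so the product of the diagonal norms reproduces the determinant of the moment matrix.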
As was already mentioned, any feature of a matrix model can be examined already at the level of eq.(\ref{dimamodet}), which does not refer to orthogonal polynomials, so they are not really essential for the subject. We can, however, reverse the problem and ask what matrix models can provide $for$ the theory of orthogonal polynomials.\footnote{Of course, we can hardly get anything $new$ for that theory, but the purpose is to see which features are immediate consequences of the "physically-inspired" approach. As usual, this can help to organize the existing knowledge into an appropriate system. This is, however, not our goal in these notes: only a very simple example will be mentioned, which will also be of use in our further considerations.} The first question to ask in the theory of orthogonal polynomials is: Given the measure $d\mu_{h,\bar h}$, what are the corresponding orthogonal polynomials? Usually the answer to this type of question is not at all straightforward. Its complexity, however, depends on what one agrees to accept as a suitable answer. Of particular interest for our purposes below are integral representations. It would be very helpful to have just an integral transformation converting the set of orthogonal polynomials for a given $d\mu_{h,\bar h}$ into some standard set, like $Q_i^{(0)} = x^i$. Unfortunately, such a transformation is rarely available, though there are important examples: classical orthogonal polynomials and their $q$-analogues (expressed through the ($q$-)hypergeometric functions, which usually possess integral representations of a simple form; see \cite{movi} for an introductory review of such integral formulas, which are in fact well known in CFT).
The simplest example of this kind, which will be used below, is the set of Hermite polynomials: \begin{eqnarray} {\rm He}_k(h) = \frac{1}{\sqrt{2\pi}} e^{\frac{h^2}{2}} \int (ix)^ke^{-\frac{x^2}{2}-ixh} dx = (h-\frac{d}{dh})^k\cdot 1 = \nonumber \\ = e^{\frac{h^2}{2}}(-\frac{d}{dh})^k e^{-\frac{h^2}{2}} = \frac{1}{2^k}e^{\frac{h^2}{4}}(h - 2\frac{d}{dh})^ke^{-\frac{h^2}{4}} = h^k + \ldots, \label{hepo} \end{eqnarray} orthogonal with respect to the $local$ measure $d\mu_h = e^{-\frac{h^2}{2}}dh$. For a generic measure an answer of this type does not exist in any universal form. However, matrix models still provide a somewhat peculiar integral representation for $any$ measure, with the number of integrations depending on the index of the polynomial. In order to obtain this expression, let us consider a slight generalization of formula (\ref{meas1}): \begin{eqnarray} Z_N\{\lambda _\gamma\} \equiv \frac{1}{N!} \prod_{k=1}^N \int d\mu_{h_k,\bar h_k} \Delta(h)\Delta(\bar h)\prod_{k,\gamma} (\lambda _\gamma - h_k). \label{meas2} \end{eqnarray} Then $\Delta(h)\prod_{k,\gamma} (\lambda _\gamma - h_k) = \Delta(h,\lambda )/\Delta(\lambda )$, and the $\lambda _\gamma$ can just be considered as $h_{N+\gamma}$, which are $not$ integrated over in (\ref{meas2}). Then it is clear that \begin{eqnarray} \Delta(h,\lambda ) = {\rm Det} \left( \begin{array}{cl} Q_{i-1}(h_k) & Q_{N+\gamma-1}(h_k) \\ Q_{i-1}(\lambda _\delta) & Q_{N+\gamma-1} (\lambda _\delta) \end{array} \right) \end{eqnarray} while $\Delta(\bar h) = {\rm Det}_{jk} \bar Q_{j-1}(\bar h_k)$. Since all the $Q_{N+\gamma-1}(h_k)$ are orthogonal to all the $\bar Q_{j-1}(\bar h_k)$ (because $N+\gamma-1 \neq j-1$), we obtain: \begin{eqnarray} Z_N\{\lambda _\delta\} = \frac{{\rm det}_{\gamma\delta}Q_{N+\gamma-1}(\lambda _\delta)} {\Delta(\lambda )} Z_N.
\label{meas3} \end{eqnarray} In particular, \begin{eqnarray} Q_N(\lambda ) = \frac{Z_N\{\lambda \}}{Z_N}, \end{eqnarray} where both the numerator and the denominator can be represented by $N\times N$-matrix integrals. The inverse "main question" of the theory of orthogonal polynomials is: Given a set of polynomials \begin{eqnarray} Q_i(h) = h^i + \ldots,\nonumber \\ \bar Q_j(\bar h) = \bar h^j + \ldots, \nonumber \end{eqnarray} what is the measure $d\mu_{h,\bar h}$ with respect to which they form an orthogonal system? We shall not discuss the complete answer to this question, and consider only the case of the {\it local} measure, when $\bar Q_i = Q_i$. Then usually the answer does not exist at all: not {\it every} system of polynomials is orthogonal with respect to some local measure. It is easy to find the necessary (and in fact sufficient) condition. As was mentioned above, the local measure is distinguished by the property that multiplication by (any function of) $h$ is a Hermitian operator: \begin{eqnarray} \langle hf(h)\mid g(\bar h)\rangle = \langle f(h) \mid \bar h g(\bar h)\rangle, \ \ {\rm if} \ d\mu_{h,\bar h} \sim \delta(h-\bar h). \end{eqnarray} This property implies that the coefficients $c_{ij}$ in the recurrence relation \begin{eqnarray} hQ_i(h) = Q_{i+1}(h) + \sum_{j=0}^i c_{ij}Q_j(h) \end{eqnarray} almost all vanish. Indeed, for $j<i$ \begin{eqnarray} \oldfalse \begin{array}{c} c_{ij} = \frac{\langle hQ_i(h) \mid Q_j(\bar h)\rangle}{\langle Q_j(h) \mid Q_j(\bar h)\rangle} = \frac{\langle Q_i(h) \mid \bar hQ_j(\bar h)\rangle}{\langle Q_j(h) \mid Q_j(\bar h)\rangle} = \\ = \delta_{i,j+1}\frac{\langle Q_i(h) \mid Q_i(\bar h)\rangle}{\langle Q_j(h) \mid Q_j(\bar h)\rangle} = \delta_{j,i-1} e^{\phi_i -\phi_{i-1}}.
\end{array} \end{eqnarray} In other words, polynomials orthogonal with respect to a local measure are obliged to satisfy the "3-term recurrence relation": \begin{eqnarray} hQ_i(h) = Q_{i+1}(h) + C_iQ_i(h) + R_iQ_{i-1}(h) \label{3termrel} \end{eqnarray} (the coefficient in front of $Q_{i+1}$ can of course be changed by a change of normalization). The parameter $C_i$ vanishes if the measure is even (symmetric under $h \rightarrow -h$); then the polynomials split into two orthogonal subsets, even and odd in $h$. The partition function (\ref{ZNprod}) of the $one$-component model can be expressed through the parameters $R_i = e^{\phi_i -\phi_{i-1}}$ of the 3-term relation: \begin{eqnarray} Z_N = Z_1^N \prod_{i=1}^{N-1} R_i^{N-i}, \end{eqnarray} thus defining a one-component matrix model (i.e. a particular shape of the potential) associated with any system of orthogonal polynomials. Our "inverse main question" in the case of the local measure should now be formulated as follows: Given a set of orthogonal polynomials $Q_i(h) = h^i +\ldots$ {\it which satisfy the 3-term relation} (\ref{3termrel}), what is the measure $d\mu_h$? Like every complete orthogonal system of functions, orthogonal polynomials satisfy the completeness relation \begin{eqnarray} \sum_{i=0}^{\infty} e^{-\phi_i} \bar Q_i(\bar h)Q_i(h) = \delta^{\{d\mu\}}(\bar h,h), \end{eqnarray} where the $\delta$-function associated with the measure $d\mu_{h,\bar h}$ is defined so that \begin{eqnarray} \int\int f(h)\delta^{\{d\mu\}}(\bar h,h') d\mu_{h,\bar h} = f(h') \end{eqnarray} for any function $f(h)$. Since for the {\it local} measure $d\mu_h = e^{U(h)}dh$ the $\delta$-function is just $\delta^{\{d\mu\}}(\bar h,h) = e^{-U(h)}\delta(\bar h - h)$, as an answer to our question we can take the representation of $U(h)$ in terms of the corresponding orthogonal polynomials: \begin{eqnarray} e^{-U(h)}\delta(\bar h - h) = \sum_{k=0}^{\infty} \frac{Q_k(\bar h)Q_k(h)} {\langle Q_k \mid Q_k\rangle}.
\label{orthopoltomeas} \end{eqnarray} As usual, this relation should be understood in the sense of analytical continuation. The squared norms $\mid\mid Q_k \mid\mid^2$ in the denominator are expressed through the coefficients $R_i$ of the 3-term relation up to an overall constant: $\mid\mid Q_k \mid\mid^2 = \prod_{i=1}^k R_i\mid\mid Q_0 \mid\mid^2$. For example, in the case of the Hermite polynomials (\ref{hepo}) we have: \begin{eqnarray} {\rm He}_{k+1}(h) = (h - \frac{d}{dh}){\rm He}_k(h) = h{\rm He}_k(h) - \frac{d}{dh}{\rm He}_k(h) = \nonumber \\ = h{\rm He}_k(h) - k{\rm He}_{k-1}(h) \end{eqnarray} (the last equality holds because $\frac{d}{dh}$ and $h-\frac{d}{dh}$ play the role of annihilation and creation operators, respectively). This means that the 3-term relation is satisfied with $R_k = k$, and thus $\mid \mid {\rm He}_k\mid\mid^2 = \mid \mid {\rm He}_0\mid\mid^2 k!$ We shall use the normalization condition $\mid \mid {\rm He}_0\mid\mid^2 = \sqrt{2\pi}$. Then for $e^{-U(h)}$ we get: \begin{eqnarray} \oldfalse \begin{array}{c} e^{-U(h)}\delta(\bar h-h) = \\ \sum_{k=0}^{\infty} \frac{{\rm He}_k(\bar h){\rm He}_k(h)} {\mid\mid {\rm He}_k\mid\mid^2} = \frac{1}{\sqrt{2\pi}}\sum_{k=0}^{\infty}\frac{1}{k!} (h-\frac{d}{dh})^k(\bar h-\frac{d}{d\bar h})^k \cdot 1 = \\ = \frac{1}{\sqrt{2\pi}} e^{\frac{h^2}{2}+\frac{\bar h^2}{2}} \sum_{k=0}^{\infty}\frac{1}{k!}(\frac{d^2}{dhd\bar h})^k e^{-\frac{h^2}{2}-\frac{\bar h^2}{2}} = \frac{1}{\sqrt{2\pi}} e^{\frac{h^2}{2}+\frac{\bar h^2}{2}} e^{\frac{d^2}{dhd\bar h}} e^{-\frac{h^2}{2}-\frac{\bar h^2}{2}} = \\ = \frac{1}{\sqrt{2\pi}} {\rm Im} \int\int \frac{d\alpha d\bar\alpha}{2\pi} e^{-\alpha\bar\alpha} e^{\frac{h^2}{2}+\frac{\bar h^2}{2}} e^{\alpha\frac{d}{dh}+\bar\alpha\frac{d}{d\bar h}} e^{-\frac{h^2}{2}-\frac{\bar h^2}{2}} = \\ = \frac{1}{\sqrt{2\pi}} {\rm Im} \int\int \frac{d\alpha d\bar\alpha}{2\pi} e^{-\frac{1}{2}(\alpha +\bar\alpha)^2} e^{-\frac{1}{2}(\alpha +\bar\alpha)(h + \bar h)} e^{-\frac{1}{2}(\alpha -\bar\alpha)(h - \bar h)} = \nonumber \\ =
e^{\frac{h^2}{2}}\delta(h-\bar h). \end{array} \end{eqnarray} \subsection{Scalar-product models in Miwa parametrization} We shall now make the first step towards clarifying the interrelation between the scalar-product and Kontsevich models. We already know that in the latter case an important role is played by the representation of the time-variables in the form \begin{eqnarray} T_k = \frac{1}{k} {\rm tr} \Lambda ^{-k}, \label{MiwaT} \end{eqnarray} with an $n\times n$ matrix $\Lambda $, which will further be referred to as the Miwa parametrization (expressions of a similar form were first introduced in \cite{Miwa}). Let us now perform such a transformation in the case of the scalar-product model. Let us use eq.(\ref{tbartmeas}) to define the time-dependence of the measure, only ignoring the $\bar t$-variables. Namely, introduce $d\mu_{h,\bar h} = e^{U(h)} d\hat\nu_{h,\bar h}$ (i.e. $d\hat\nu_{h,\bar h} = e^{\bar U(\bar h)}d\hat\mu_{h,\bar h}$). Substitute \begin{eqnarray} t_k = \mp\left(\frac{1}{k}{\rm tr}\Lambda ^{-k} + r_k\right) \end{eqnarray} and obtain: \begin{eqnarray} \oldfalse \begin{array}{c} e^{U(h)} = e^{-\hat V(h)} e^{\mp{\rm tr} \sum_{k=1}^{\infty}\frac{1}{k} \left(\frac{h}{\Lambda }\right)^k} = e^{-\hat V(h)}\left(\frac{{\rm det}(\Lambda -h\cdot I)}{{\rm det} \Lambda }\right)^{\pm 1} = \\ = e^{-\hat V(h)}\left(\frac{1}{{\rm det} \Lambda } \prod_{\gamma =1}^n (\lambda _\gamma-h)\right)^{\pm 1}, \end{array} \end{eqnarray} where $\hat V(h) \equiv \pm \sum_k r_kh^k$. Let us choose the {\it upper} signs in these formulas.
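The substitution above is just the expansion $-\sum_{k\geq 1}\frac{1}{k}(h/\lambda)^k = \ln(1-h/\lambda)$ applied to each eigenvalue of $\Lambda$. A quick sketch of the check for the upper sign (a diagonal $\Lambda$ and the numerical values are ad hoc assumptions, not from the text):

```python
import math

# upper-sign Miwa times with r_k = 0: t_k = -(1/k) tr Λ^{-k}
lam = [2.0, 3.0, 5.0]        # hypothetical eigenvalues of a diagonal Λ
h = 0.7                       # |h| < min(lam), so the series converges
K = 200                       # truncation order of the series

# exp( Σ_k t_k h^k ) versus det(Λ - h·I)/det Λ = Π_γ (λ_γ - h) / Π_γ λ_γ
series = math.exp(sum(-(1.0 / k) * sum(l ** (-k) for l in lam) * h ** k
                      for k in range(1, K + 1)))
product = math.prod(l - h for l in lam) / math.prod(lam)
```

The truncated exponential of the Miwa times converges rapidly to the characteristic-polynomial ratio, as the determinant formula requires.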
Then we can use eqs.(\ref{meas2}) and (\ref{meas3}) to conclude that in the Miwa parametrization \begin{eqnarray} Z_N^{\{d\mu\}} = \frac{1}{({\rm det} \Lambda )^N} Z_N^{\{d\hat\nu\}}\{ \lambda _\delta\} = Z_N^{\{d\hat\nu\}} \frac{{\rm det}_{\gamma\delta}\hat Q_{N+\gamma-1}(\lambda _\delta)} {\Delta(\lambda )({\rm det} \Lambda )^N}, \nonumber \end{eqnarray} where $d\hat\nu_{h,\bar h} \equiv e^{-\hat V(\bar h)}d\nu_{h,\bar h}$ and the $\hat Q_k$ are the corresponding orthogonal polynomials. In other words, we have reduced the model with potential $U(h)$ to another model, with potential $-\hat V(h)$, and expressed the difference in terms of the orthogonal polynomials $\hat Q_k$: \begin{eqnarray} \frac{Z_N^{\{d\mu\}}}{Z_N^{\{d\hat\nu\}}} = \frac{1}{({\rm det} \Lambda )^N}\cdot \frac{{\rm det}_{\gamma\delta}\hat Q_{N+\gamma-1} (\lambda _\delta)}{\Delta(\lambda )}. \label{reduhv} \end{eqnarray} If $\hat V(h)$ is adjusted to give rise to some simple orthogonal polynomials (i.e. if the new model $Z_N^{\{d\hat\nu\}}$ is easy to solve), this representation can considerably simplify the original model. Another interpretation of this formula is that we have obtained a GKM-like representation of the form of (\ref{KIev"N}) for the {\it discrete} scalar-product model. The only difference is that the $\varphi_\gamma^{\{V\}}$ in (\ref{KIev"N}) are changed for $\hat Q_{\gamma-1}$ in (\ref{reduhv}). This is an important difference, because the $\varphi_\gamma^{\{V\}}$ in GKM are defined by integral formulas like (\ref{hatvarphi}), $\varphi_\gamma^{\{V\}} =\ \langle\langle x^{\gamma -1} \rangle\rangle$, or, alternatively, satisfy recursion relations like (\ref{Kasch}). Moreover, generic $\varphi_\gamma^{\{V\}}$ are infinite formal series in $\lambda^{-1}$, while the $Q_{\gamma-1}$ are orthogonal {\it polynomials}.
This discrepancy is one of the important stimuli for the further development of the concept of the Generalized Kontsevich model, as well as for the search for convenient integral representations of orthogonal polynomials. There is, however, at least one interesting situation when the two formulas indeed coincide. This is the case of Gaussian potentials $V$ and $\hat V$, when both $\varphi_\gamma^{\{V\}}$ and $Q_{\gamma-1}$ are represented by orthogonal Hermite polynomials, which possess an integral representation exactly adequate to the context of GKM. This is the subject of our consideration in the next subsection. \subsection{Equivalence of the discrete 1-matrix and Gaussian Kontsevich models} Let us take the ordinary 1-matrix model with the $local$ measure $d\mu_h = e^{U(h)}dh$ to be the scalar-product model considered in the previous subsection, and take the Miwa parametrization with the upper signs and with $r_k = -\frac{1}{2}\delta_{k,2}$ (as we did in section 2.6). Then $\hat V(h) = \sum_k r_kh^k = -\frac{h^2}{2} = \frac{(ih)^2}{2}$. The relevant orthogonal polynomials $\hat Q$ are just Hermite polynomials of $imaginary$ argument:\footnote{ Note that this system of functions $\varphi_k = i^{-k}{\rm He}_k(ih)$ looks like $\varphi_0 = 1,\ \varphi_1 = h,\ \varphi_2 = h^2+1, \ldots$, and does not resemble any set of orthogonal polynomials with a local measure (for example, the product $\varphi_0\cdot\varphi_2 = h^2+1$ may seem positive definite, which would be inconsistent with the orthogonality requirement $\langle \varphi_0\mid\varphi_2\rangle = 0$). The point is that the integration on the l.h.s. of eq.(\ref{1mamoidgako}) is well defined only along the imaginary axis, while integrals along the real axis are understood as analytical continuation. } $Q_k^{\{-\frac{h^2}{2}dh\}} = i^{-k}{\rm He}_k(ih) = h^k + \ldots$.
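Continuing the Hermite recursion to imaginary argument flips one sign: the polynomials $Q_k(h) = i^{-k}{\rm He}_k(ih)$ obey $Q_{k+1}(h) = h\,Q_k(h) + k\,Q_{k-1}(h)$, i.e. the 3-term relation (\ref{3termrel}) with $C_k=0$, $R_k = -k$. A sketch (not in the original text) building the first few $Q_k$ as coefficient lists, lowest power first:

```python
# Polynomials as coefficient lists [c_0, c_1, ...]
def shift(p):
    # multiply a polynomial by h
    return [0] + p

def add(p, q, s=1):
    # p + s*q, padding with zeros
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + s * b for a, b in zip(p, q)]

Q = [[1], [0, 1]]                    # Q_0 = 1, Q_1 = h
for k in range(1, 6):
    # Q_{k+1} = h Q_k + k Q_{k-1}
    Q.append(add(shift(Q[k]), Q[k - 1], s=k))
```

In particular $Q_2 = h^2 + 1$ and $Q_3 = h^3 + 3h$, and all $Q_k$ are monic, in accord with the footnote.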
These polynomials possess the integral representation (\ref{hepo}): \begin{eqnarray} i^{1-k}{\rm He}_{k-1}(ih) = \frac{1}{\sqrt{2\pi}} e^{-\frac{h^2}{2}}\int x^{k-1} e^{-\frac{x^2}{2} + xh} dx \stackrel{(\ref{varphiko})}{=} \varphi_k^{\{\frac{x^2}{2}\}}(h). \end{eqnarray} Using (\ref{reduhv}) and (\ref{KIev"N}) we obtain a remarkable relation between the two matrix models: \begin{eqnarray} \frac{Z_N\{t_0 = 0; t_k = -\frac{1}{k}{\rm tr}\Lambda ^{-k} + \frac{1}{2}\delta_{k,2}\}} {Z_N\{t_k = \frac{1}{2}\delta_{k,2}\}} = \frac{\int_{N\times N} dH e^{\sum_{k=0}^{\infty} t_k {\rm Tr} H^k}}{\int_{N\times N} dH e^{\frac{1}{2}{\rm Tr}H^2}} = \nonumber \\ = \frac{e^{-{\rm tr}\frac{\Lambda ^2}{2}}} {(2\pi)^{\frac{n^2}{2}}({\rm det}\Lambda )^N} \int_{n\times n} dX ({\rm det}X)^N e^{-{\rm tr}\frac{X^2}{2} + \Lambda X} = {\cal Z}_{\frac{X^2}{2}}\{N,t\}, \label{1mamoidgako} \end{eqnarray} where ${Z_N\{t_k = \frac{1}{2}\delta_{k,2}\}} = (-2\pi)^{\frac{N^2}{2}}c_N$. This relation can also be regarded as an identity \begin{eqnarray} \frac{\int_{N\times N}dH e^{\frac{1}{2}{\rm Tr}H^2} Det(\Lambda \otimes I- I\otimes H)}{ \int_{N\times N}dH e^{\frac{1}{2}{\rm Tr}H^2}} = \nonumber \\ = \frac{\int_{n\times n}dX e^{-\frac{1}{2}{\rm tr}X^2} {\rm det}^N(X+\Lambda )}{\int_{n\times n}dX e^{-\frac{1}{2}{\rm tr}X^2}}, \end{eqnarray} valid for any $\Lambda$. Note that the integrals are of different sizes: $N\times N$ on the l.h.s. and $n\times n$ on the r.h.s. While the $N$-dependence is explicit on both sides of the equation, the $n$-dependence on the l.h.s. enters only implicitly: through the allowed domain of variation of the variables $t_k = -\frac{1}{k}{\rm tr}\Lambda ^{-k} + \frac{1}{2}\delta_{k,2}$. (This can serve as an illustration of the general statement that the shape of the Kontsevich partition function ${\cal Z}_V$, considered as a function of the $T$'s rather than of $ L$ or $\Lambda $, is independent of the matrix size $n$.)
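The integral representation used above can be tested directly: its r.h.s. is just the $(k-1)$-th moment of a unit-variance Gaussian centred at $h$, so e.g. $k=3$ must return $Q_2(h) = h^2+1$. A sketch (grid discretization and the sample point are ad hoc assumptions):

```python
import math

STEP, CUT = 0.01, 10.0

def phi(k, h):
    # (1/sqrt(2π)) e^{-h^2/2} ∫ x^{k-1} e^{-x^2/2 + x h} dx, on a grid
    xs = (-CUT + n * STEP for n in range(int(2 * CUT / STEP) + 1))
    integral = sum(x ** (k - 1) * math.exp(-x * x / 2 + x * h) for x in xs) * STEP
    return math.exp(-h * h / 2) * integral / math.sqrt(2 * math.pi)

h = 0.8
vals = [phi(k, h) for k in (1, 2, 3, 4)]
expected = [1.0, h, h * h + 1, h ** 3 + 3 * h]   # Q_0, Q_1, Q_2, Q_3
```

The numerically integrated $\varphi_k$ reproduce the monic polynomials $i^{1-k}{\rm He}_{k-1}(ih)$, the defining property needed for the GKM comparison.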
Identity (\ref{1mamoidgako}) was anticipated from the study of Ward identities for the Gaussian Kontsevich model in \cite{ChMa} (see eq.(2.53) in section 2.6 above), and it was derived in the present form in ref.\cite{Toda}. Eq.(\ref{1mamoidgako}) can be used to perform analytical continuation in $N$ and to define $Z_N$ for $N$ which are not positive integers. Since $c_N = 0$ for all {\it negative} integers (see eq.(\ref{cnneg}) below), the same is true for $Z_N$. In section 4 we shall see that this is a characteristic property of $\tau$-functions of {\it forced} hierarchies. \subsection{Volume of the unitary group} The formalism of orthogonal polynomials also provides a simple derivation of eq.(\ref{volun}) for the volume of the unitary group. Consider eq.(\ref{1mamoev}) with $U(H) = -\frac{1}{2}H^2$. Then the Gaussian matrix integral is easily evaluated: \begin{eqnarray} c_N\int_{N\times N} dH e^{-\frac{1}{2}{\rm Tr}H^2} = c_N\prod_{i=1}^N \int dH_{ii}e^{-\frac{1}{2}H_{ii}^2} \prod_{i<j}^N \int d^2H_{ij} e^{-\mid H_{ij}\mid^2} = c_N(2\pi)^{N^2/2}, \nonumber \end{eqnarray} while according to eqs.(\ref{orthopol}) and (\ref{ZNprod}) the same integral is equal to \begin{eqnarray} \frac{1}{N!}\prod_{i=1}^N\int_{-\infty}^{+\infty}e^{-\frac{1}{2}h_i^2} \prod_{i>j}^N (h_i-h_j)^2 = \prod_{j=1}^N \mid\mid {\rm He}_{j-1} \mid\mid^2. \nonumber \end{eqnarray} Here $\mid\mid {\rm He}_{j-1} \mid\mid$ stand for the norms of the orthogonal Hermite polynomials (\ref{hepo}), $\mid\mid {\rm He}_k \mid\mid^2 = \sqrt{2\pi} k!$. Comparing the two expressions for the same integral we get: \begin{eqnarray} c_N^{-1} = (2\pi)^{\frac{N^2}{2}}\prod_{k=0}^{N-1} \frac{1}{\sqrt{2\pi}k!} = \frac{(2\pi)^{\frac{N(N-1)}{2}}}{\prod_{k=0}^{N-1} k!}. \label{cnform} \end{eqnarray} According to (\ref{cN}) \begin{eqnarray} c_N^{-1} = N!\frac{{\rm Vol}_{U(N)}}{({\rm Vol}_{U(1)})^N} \nonumber \end{eqnarray} and ${\rm Vol}_{U(1)} = 2\pi$.
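The comparison of the two evaluations of the Gaussian integral is easy to reproduce for $N=2$: the eigenvalue integral $\frac{1}{2}\int\int (h_1-h_2)^2 e^{-(h_1^2+h_2^2)/2}\,dh_1 dh_2$ must equal $\mid\mid {\rm He}_0\mid\mid^2 \mid\mid {\rm He}_1\mid\mid^2 = 2\pi$. A sketch (grid parameters are ad hoc):

```python
import math

STEP, CUT = 0.02, 8.0
pts = [-CUT + k * STEP for k in range(int(2 * CUT / STEP) + 1)]
ws = [math.exp(-x * x / 2) for x in pts]

# (1/2!) ∫∫ Δ(h)^2 e^{-(h1^2+h2^2)/2} dh1 dh2, on a 2-dimensional grid
total = sum(ws[a] * ws[b] * (pts[a] - pts[b]) ** 2
            for a in range(len(pts)) for b in range(len(pts)))
integral = total * STEP * STEP / 2       # the 1/N! prefactor, N = 2
```

The grid sum converges to $2\pi = \sqrt{2\pi}\cdot\sqrt{2\pi}\,1!$, the product of the first two Hermite norms.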
Thus we obtain eq.(\ref{volun}):
\begin{eqnarray}
{\rm Vol}_{U(N)} = \frac{(2\pi)^{\frac{N(N+1)}{2}}}{\prod_{k=0}^N k!}. \nonumber
\end{eqnarray}
An example of a somewhat more sophisticated (quantum) group-theoretical quantity arising from Gaussian matrix models is provided by the following formula for the $q$-factorial \cite{CAD} (see also \cite{KMSW2}):
\begin{eqnarray}
\frac{1}{(q,q)_N} \equiv \prod_{n=1}^N \frac{1}{1-q^n} = \frac{\int\int_{N\times N} dH[dU] e^{-m^2{\rm Tr}H^2 + {\rm Tr}HUHU^{\dagger}}}{{\rm Vol}_{U(N)} \int_{N\times N} dH e^{-m^2{\rm Tr}H^2}}.
\end{eqnarray}
The integral in the numerator is over Hermitian ($H$) and unitary ($U$) $N\times N$ matrices, and $q \equiv m^2 - \sqrt{m^4-1}$.

The explicit expression (\ref{cnform}) can be used to prove that $c_N = 0$ for all {\it negative} integers $N$ \cite{Toda}. Eq.(\ref{cnform}) defines $c_N$ only for positive integer $N$ as a finite product. There is an obvious prescription for analytical continuation of such products, provided the continuation of the individual terms is known (it can be considered as implied by the similar formula for integrals with varying upper limit). Let
\begin{eqnarray}
F(N) = \sum_{k = -\infty}^N f(k).
\end{eqnarray}
Then
\begin{eqnarray}
S(N) \equiv \sum_{k=1}^N f(k) = F(N) - F(0)
\end{eqnarray}
and, obviously, $F(0) - F(-N) = \sum_{k = 1-N}^0 f(k)$, so that
\begin{eqnarray}
S(-N) \equiv F(-N) - F(0) = -\sum_{k=0}^{N-1} f(-k).
\end{eqnarray}
Exponentiation of this formula gives the rule for products. In the case of $c_N$ one can treat the factorials in (\ref{cnform}) as Gamma-functions,
\begin{eqnarray}
(2\pi)^{\frac{N(N-1)}{2}} c_N = \prod_{k=1}^N \Gamma(k),
\end{eqnarray}
and obtain:
\begin{eqnarray}
(2\pi)^{-\frac{N(N+1)}{2}} c_{-N} = \left( \prod_{k=0}^{N-1} \Gamma(-k)\right)^{-1} = 0, \label{cnneg}
\end{eqnarray}
because of the poles of $\Gamma$-functions.
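The continuation rule $S(-N) = -\sum_{k=0}^{N-1} f(-k)$ and the vanishing of $c_{-N}$ can be sketched symbolically (the test function $f(k)=k^2$ is an arbitrary choice for illustration):

```python
# Continuation rule for finite sums checked on f(k) = k^2, and the vanishing
# of 1/prod Gamma(-k) caused by the poles of the Gamma-function.
import sympy as sp

N = sp.symbols('N', positive=True, integer=True)
k = sp.symbols('k')
F = sp.summation(k**2, (k, 1, N))            # closed form N(N+1)(2N+1)/6

for n in range(1, 6):
    lhs = F.subs(N, sp.Integer(-n))          # naive continuation N -> -n
    rhs = -sum((-j)**2 for j in range(n))    # the rule S(-n) = -sum_{k=0}^{n-1} f(-k)
    assert sp.simplify(lhs - rhs) == 0

# Gamma(-k) has a pole for every non-negative integer k, so the reciprocal
# of the product in eq. (cnneg) vanishes identically:
assert 1 / sp.gamma(-3) == 0
```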
\ \bigskip

\section{Integrable structure of eigenvalue models}
\setcounter{equation}{0}

\subsection{The concept of integrability}

The integrable structure of a dynamical system implies that all the dynamical characteristics - solutions of equations of motion for a classical system and functional integrals for a quantum one - can be found exactly. According to this description the notion of integrability is not very concrete, and in fact it evolves with time, including more and more classes of theories into the class of integrable systems. Nowadays we consider the following types of theories as clearly belonging to this class:

- Free motion (classical or quantum) on group manifolds and homogeneous spaces;

- 2-dimensional conformal theories and their "integrable massive deformations";

- Integrable hierarchies of the (multicomponent) KP and Toda type and their reductions;

- Functional integrals, subjected to the conditions of the (generalized) Duistermaat-Heckman theorem;

- (Eigenvalue) matrix models;

- Topological theories;

- Many supersymmetric models (at least those allowing for a Nicolai transformation and/or a Duistermaat-Heckman-like description);

- Systems with (infinitely) many local integrals of motion.

This list (to say nothing of the order of its items) is rather arbitrary. Also, the different items are not really different and (as it should be) can be considered as different descriptions of the same reality. Now we discuss very briefly at least some of the most important views on the concept of integrability. Often the notion of integrability is related to the occurrence of "many enough" integrals of motion ("many enough" means equal to the number of degrees of freedom). This is, however, not such a rigid definition as one might think. In fact, in classical mechanics there is usually a complete set of integrals of motion available: just the initial conditions in the phase space (or, to be more sophisticated, the action-angle variables).
The problem is, however, that a) these obvious integrals are very complicated (non-local and multi-valued) functionals of the $current$ coordinates, and b) in a generic situation they are very "unstable" under a small change of the current coordinates ("divergence of trajectories"). In order to avoid these problems one usually imposes a "locality" condition on the equations of motion. While this is a reasonable thing to do for particular classes of theories (e.g. those possessing a well defined kinetic term, which is quadratic in momenta), this is $not$ a nice description of the general situation, since "locality" is not invariant under arbitrary (including non-local) changes of variables. In practice, when approached from this side, integrability implies a kind of "regular" behaviour of trajectories and some more or less nicely defined transformation from the "natural" (or, better to say, "original") coordinates to the action-angle variables. The situation becomes even less clear when quantum theory is considered, since "chaotic behaviour" no longer implies anything really "chaotic" for the quantum system. Again, very much depends on what kind of observables one wants to consider, and any notion of "regularity" is not preserved under an arbitrary change of variables. This can be made even more transparent if one recalls the idea of universality classes, so important in the modern theory. The idea is that even in the cases when the behaviour of the system seems absolutely chaotic from any naive point of view (like in the cases of turbulence or quantum gravity), one can and should introduce new variables (which can be very complicated functions of the original ones), which have smooth and well defined correlation functions. In most cases one is not attempting to find a $complete$ set of such variables (and thus some information is lost), but this reflects nothing but the current state of knowledge, and in fact in studies of $2d$ quantum gravity the $goal$ of a $complete$ description is already clearly formulated.
Despite these comments, the "definition" of integrability in terms of "many enough" $local$ integrals of motion should be put in the first place in our discussion, because most of the systems which were so far considered as integrable more or less naturally get into this class, allowing for some preferred choice of dynamical variables ("more or less" appears because some "minor" non-locality is usually present in all interesting examples, where the action-angle variables are not obvious from the very beginning). This "definition" is so unclear because we attempted to look for a generic description of integrability. The most interesting approaches, however, go from another direction. One starts from some simple system and then performs a change of variables, which makes it look much more complicated (while it is still simple in essence). This appears to be a much more fruitful view of the problem, and in fact all the other items of our list above are describable in this kind of terms. A trivial, but surprisingly representative example of this approach is provided by a free particle moving in flat $D$-dimensional space. The eigenfunctions of the Laplace operator are just plane waves or, equivalently, spherical harmonics. The radial part of the $j$-th harmonic is already not a very simple function, satisfying the equation
\begin{eqnarray}
\left( -\frac{d^2}{dr^2} - \frac{D-1}{r}\frac{d}{dr} + \frac{C_2(j)}{r^2}\right) \psi(r) = E\psi(r). \label{ralap}
\end{eqnarray}
This equation is of course less trivial than the original Laplace equation, but their solutions are related in a simple way. In order to find a solution of (\ref{ralap}), say, for $j=0$, one should just take the angular average of a plane wave:
\begin{eqnarray}
\phi_k(r) = \int e^{ik\vec r\vec\nu}d^{D-1}\vec\nu; \ \ \ \mid\vec\nu\mid = 1.
\end{eqnarray}
This integral representation expresses the solutions of (\ref{ralap}) through Bessel functions, and this is in fact the proper way to derive the well-known formula:
\begin{eqnarray}
\phi_k(r) = 2^{\frac{D}{2}-1}\Gamma\left(\frac{D}{2}\right) (kr)^{1-\frac{D}{2}} J_{\frac{D}{2}-1}(kr).
\end{eqnarray}
If one expands the exponential in the integral in a series, the standard expansion for the Bessel function arises. A slightly more involved example is the quantum mechanical model of a particle in the potential $e^{-q}$, i.e. the theory of the equation
\begin{eqnarray}
\left(-\frac{d^2}{dq^2} + e^{-q}\right)\psi(q) = 0 \label{exppot}
\end{eqnarray}
(one of course recognizes a simplified version of Toda models). It can be solved by projection of the simple Schr\"odinger equation for a particle moving on the upper part of the hyperboloid $x_0^2-x_1^2-x_2^2 = 1;\ \ x_0 > 0$ \cite{OlPe}. If
\begin{eqnarray}
x_0 = \cosh\frac{q}{2} + \frac{1}{2}z^2e^{q/2}; \ \ x_1 = \sinh \frac{q}{2} - \frac{1}{2}z^2e^{q/2}; \ \ x_2 = ze^{q/2}, \nonumber
\end{eqnarray}
then $q = \log(x_0+x_1)$, the Laplace operator on the hyperboloid is
\begin{eqnarray}
L = \frac{\partial^2}{\partial q^2} - \frac{1}{2}\frac{\partial}{\partial q} +\frac{1}{4}e^{-q}\frac{\partial^2}{\partial z^2}
\end{eqnarray}
and the average of the wave function $\psi_ \lambda(q,z)$ provides the following expression for the solutions of (\ref{exppot}):
\begin{eqnarray}
\psi_ \lambda (q) = e^{i \lambda q}\int_0^{\infty} t^{2i \lambda -1}e^{-(t+e^q/t)}dt.
\end{eqnarray}
This idea, which is sometimes referred to as the "projection method" (see \cite{OlPe} for a broad review), reveals hidden symmetries of some complicated systems (which do not possess any symmetry at all in the usual, Noether-like, sense of the word), by considering them as embedded into wider theories with more degrees of freedom.
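Both projection formulas can be checked numerically. Two small sketches (not part of the derivation): for $D=3$ the normalized angular average of a plane wave indeed equals $2^{1/2}\Gamma(3/2)(kr)^{-1/2}J_{1/2}(kr) = \sin(kr)/(kr)$; and $\psi(q)=K_0(2e^{-q/2})$, which is what the $t$-integral produces for the zero-energy case up to the sign convention for $q$, satisfies (\ref{exppot}). The specific normalizations and the sign convention here are assumptions of the check.

```python
# D = 3 angular average of a plane wave, and a zero-energy solution of
# (-d^2/dq^2 + e^{-q}) psi = 0 in terms of the modified Bessel function K_0.
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, jv, kv

kr = 1.3
avg, _ = quad(lambda t: math.cos(kr * math.cos(t)) * math.sin(t) / 2, 0, math.pi)
bessel = 2**0.5 * gamma(1.5) * kr**-0.5 * jv(0.5, kr)
assert abs(avg - math.sin(kr) / kr) < 1e-10
assert abs(bessel - math.sin(kr) / kr) < 1e-10

def psi(q):
    return kv(0, 2 * np.exp(-q / 2))

for q in [-1.0, 0.0, 1.5]:
    eps = 1e-4
    d2 = (psi(q + eps) - 2 * psi(q) + psi(q - eps)) / eps**2   # psi''(q)
    assert abs(-d2 + np.exp(-q) * psi(q)) < 1e-5
```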
Quantum mechanical examples of the applicability of the method are by no means exhausted by the two systems above: one can consider various projections, starting from the (exactly solvable) problem of free motion on any group manifold, and in general this gives rise to the very important theory of "zonal spherical functions", which nowadays is increasingly attracting attention because of its obvious links to integrability theory and quantum geometry (see \cite{Zab} for a discussion of the latter relation and \cite{ZabFKe}, where relations with orthogonal polynomials and the Generalized Kontsevich model are also partly revealed). An extremely important example of free motion on a group manifold (in the infinite-dimensional - Kac-Moody - case) is provided by the 2-dimensional WZNW (Wess-\-Zumino-\-Novikov-\-Witten) model, and the corresponding version of the projection method is known as Hamiltonian reduction in conformal field theory. Again the resulting theories (like minimal conformal models) do not possess any kind of symmetry in the usual sense of the word, but still they are very simple and exactly solvable, remembering their origin in the theory of free fields. In principle the theory which is reduced, i.e. complemented by constraints (initial conditions), does not need to be absolutely symmetric, i.e. to have a Casimir operator or even zero (as in the WZNW case) as its Hamiltonian. It is in fact possible to use the projection method to gain a lot of information about reductions of theories with more sophisticated Hamiltonians, which are non-trivial functions of the group generators. The simplest example is provided by the theory of quantum-mechanical "quasi-exactly-solvable models" \cite{QES},\cite{Turb1} and its CFT-generalizations \cite{Turb1},\cite{Halp}.
A more elaborate technique goes under the name of "localization theory"\footnote{ For various views and approaches to this theory see \cite{KhCh},\cite{STS}, \cite{AFS},\cite{AS}, \cite{DH},\cite{NiDH},\cite{Wit2YM}, \cite{NT},\cite{NP}. (So far there are no connections with Anderson localization in solid state physics.) } (known also as geometrical quantization, Fourier analysis on group manifolds and Duistermaat-Heckman theory); it provides a very wide generalization of the above averaging procedure, which mapped plane waves into Bessel functions. The classical sample system to illustrate all the aspects of integrability, starting from free motion and ending with anyonic statistics, $W_\infty$-algebras and $2d$ Yang-Mills theory, is the Calogero-Sutherland system, which can be associated in a uniform way with any simple Lie algebra and in an "intermediately involved" form looks like a multiparticle theory in 1+1 dimensions with interaction potential $g^2\sin^{-2}\epsilon (x_i-x_j)$. (See \cite{OlPe} for an introduction to the theory of Calogero-type models and \cite{Vas},\cite{GoNe} for the new developments.) All this discussion was necessary just to illustrate a very simple idea: the theory of free particles, though trivial, is in fact inexhaustibly deep. It is enough to impose sophisticated initial conditions or perform a sophisticated change of variables in order to obtain very complicated dynamical systems, which, after they are studied $per\ se$, appear to be surprisingly simple, the reason for this simplicity being that the real underlying dynamics is just trivial - that of free particles - though it may be a very hard problem to reveal this simplicity when only the final system is given. It is an advantage of the $general$ theory that one can begin from the proper side: just from the theory of free particles, and start making it more and more complicated by introducing different kinds of variables, considering correlators of sophisticated operators and so on.
Everything that can be obtained in this way is by definition trivially integrable, though it may not be so simple to guess this for somebody who does not know where the particular system at the end of this procedure came from. We now proceed to the discussion of a particularly important realization of this idea: the theory of $\bar\partial$-operators in $1_C$ dimension (i.e. the theory of free holomorphic fields in 2 real dimensions). When considered as functions of the moduli of the bundles over Riemann surfaces (i.e. of the boundary conditions imposed on $2d$ free fields), these simple objects (known as "$\tau$-functions") start looking a little involved and in the end appear related to sophisticated (but of course integrable) non-linear equations in 2 and 3 dimensions (like the KdV or Kadomtsev-Petviashvili (KP) equations). We do not attempt to present an exhaustive theory of $\tau$-functions and integrable hierarchies (besides being still incomplete, this is a very big field), but instead concentrate on the very core of it, which is just the simple determinant formulas for the simplest $\tau$-functions (namely, those associated with the free-fermion theory and level $k=1$ Kac-Moody algebras). This issue will be discussed in some detail, because besides being the basis of the theory of integrable hierarchies, it is also exactly the place where the links with matrix models are found.

\subsection{The notion of $\tau$-function}

There are several different definitions of $\tau$-functions, but all of them are particular realizations of the following idea: the $\tau$-function is a generating functional of all the correlation functions in the theory of free particles in 1+1 dimensions. This basic quantity is a kind of "$det \ D$", where "$D$" is a time-evolution operator (continuous or discrete) and "$det$" is a sort of product over the eigenvalues of "$D$", which is usually expressed in the form of a functional integral, associated with free particles (it is not $a\ priori$ Gaussian in the original variables).
This is the most general definition of the $\tau$-function. In practice one is usually more specific. The best studied version of the $\tau$-function arises if one thinks about free particles of a peculiar type: free fermions with a quadratic Hamiltonian and continuous time evolution, i.e. one considers the theory of a spin-1/2 $b,c$-system (fermions) $\tilde \psi(\bar z, z), \psi(\bar z, z)$, described by the functional integral
\begin{eqnarray}
\oldfalse \begin{array}{c} \tau\{A\} \sim \ Det(\bar\partial + {\cal A}) \sim \nonumber \\ \sim \int D\tilde\psi D\psi \exp\left({ \int_{d^2 z} \tilde\psi\bar\partial\psi}\right) \exp\left({\int_{d^2 z}\int_{d^2\tilde z} A( z,\tilde z) \delta(\bar{\tilde z}-\bar z)\psi( z)\tilde\psi(\tilde z)} \right) \end{array} \label{fftf}
\end{eqnarray}
where $\bar z$ plays the role of time and ${\cal A} = A( z,\tilde z)\delta(\bar{\tilde z}-\bar z) d\bar{\tilde z}d\bar z$ is some $(\frac{1}{2},1; \frac{1}{2}, 1)$-bidifferential (i.e. contains a factor of $d\tilde z^{1/2}d\bar{\tilde z}d z^{1/2}d\bar z$). Of course, one can think about more general $\tau$-functions, involving many fermions (this is often done), and more general $b,c$- and $\beta, \gamma$-systems, in particular those arising in the context of the WZNW model, associated with any Kac-Moody algebra of any level.\footnote{The main technical difference between the generic and "free-fermion" cases is that the Lagrangian of a generic free field theory is not just quadratic in the scalar fields $\phi$, but can also contain particular combinations of exponents $e^\phi$. It also deserves noting that the most general expression, {\it quadratic} in {\it scalar} fields, if rewritten in terms of fermions, is in fact {\it quartic} (but of course $not$ every generic quartic interaction arises in this way). The integrable nature of certain quartic fermion interactions is well known from the theory of Thirring models (in this class of models the interactions are usually local).
} Also of interest is the consideration of $discrete$ time evolution (described by difference rather than differential equations), though, as usual in the $2d$ theories, this is not a really independent problem. In the language of matrix models the restriction to $free-fermion$ $\tau$-functions is essentially equivalent to the restriction to $eigenvalue$ models. Serious consideration of non-eigenvalue models, aimed at revealing their integrable (solvable) structure, will certainly involve the theory of generic $\tau$-functions, but both these things are matters of future research, and we'll not go into details about them in these notes.

\subsection{$\tau$-function, associated with the free fermions}

Because of the specific form of the Lagrangian in (\ref{fftf}) the functional integral can be easily represented in Hamiltonian form, provided the topology of the 2-surface, on which $\bar z, z$ are coordinates, is trivial (genus 0: sphere or annulus). Namely, consider $\tilde\psi$ and $\psi$ as operator-valued functions of $ z$ only (not of the time $\bar z$). Then the only remnant of the kinetic term $\int_{d^2 z} \tilde\psi\bar\partial\psi$ is the canonical commutation relation:
\begin{eqnarray}
\phantom. [\tilde\psi(\tilde z),\psi( z)]_+ = \delta(\tilde z - z) d\tilde z^{1/2}d z^{1/2}.
\end{eqnarray}
Then
\begin{eqnarray}
\tau\{A\} \sim \langle 0 \mid \exp\left({\oint_{d\tilde z}\oint_{d z} A( z,\tilde z)\psi( z)\tilde\psi(\tilde z)}\right) \mid 0 \rangle. \label{hrtf}
\end{eqnarray}
Now it is usual to expand around $ z = 0$:
\begin{eqnarray}
&\psi( z) = \sum_{n\in Z}\psi_n z^n d z^{1/2}; \ \ \ \tilde\psi( z) = \sum_{n\in Z}\tilde\psi_n z^{-n-1}d z^{1/2}; \nonumber \\ &\phantom.
[\tilde\psi_m,\psi_n]_+ = \delta_{m,n}; \nonumber \\ &\psi_m\mid 0 \rangle = 0 \ \ {\rm for}\ m<0; \ \ \ \tilde\psi_m \mid 0 \rangle = 0 \ \ {\rm for}\ m\geq 0; \nonumber \\ &A( z,\tilde z) = \sum_{m,n\in Z} z^{-m-1}\tilde z^n A_{mn}d z^{1/2}d\tilde z^{1/2}; \nonumber
\end{eqnarray}
so that
\begin{eqnarray}
\oint_{d\tilde z}\oint_{d z} A( z,\tilde z)\psi( z)\tilde\psi(\tilde z) = \sum_{m,n\in Z} A_{mn}\psi_m\tilde\psi_n. \nonumber
\end{eqnarray}
In fact this expansion could be performed around $any$ point $ z_0$ and on a 2-surface of any topology: topological effects can be easily included as specific shifts of the functional $A( z,\tilde z)$ - by combinations of the "handle-gluing operators". Analogous shifts can imitate the change of the basis functions $ z^n$ for $ z^{n+\alpha}$ and more complicated expressions (holomorphic 1/2-differentials with various boundary conditions on surfaces of various topologies). One can now wonder whether $local$ functionals $A( z,\tilde z) = U( z)\delta(\tilde z- z)d z^{1/2}d\tilde z^{1/2}$ play any special role. The corresponding contribution to the Hamiltonian looks like\footnote{ Note that the normalization factor here differs by a factor of $\frac{1}{\sqrt{2}}$ from that in the discussion of discrete models in sections 2.3, 2.7 and 2.8. This is not just a change of {\it notation}, since the Miwa transformation can lead to different results when this normalization is changed. See a footnote in section 4.6 below for a more detailed discussion. }
\begin{eqnarray}
H_{Cartan} = \oint_{d z}U( z)\psi( z)\tilde\psi( z) = \oint_{d z} U( z)J( z),
\end{eqnarray}
where
\begin{eqnarray}
J( z) = \psi( z)\tilde\psi( z) = \sum_{n\in Z}J_n z^{-n-1}d z
\end{eqnarray}
is the $U(1)_{k=1}$ Kac-Moody current;
\begin{eqnarray}
J_n = \sum_{m\in Z}\psi_m\tilde\psi_{m+n}; \ \ \ [J_m,J_n] = m\delta_{m+n,0}.
\end{eqnarray}
If the scalar function (potential) $U( z)$ is expanded as $U( z) = \sum_{k\in Z} t_k z^k$, then
\begin{eqnarray}
H_{Cartan} = \sum_{k\in Z} t_kJ_k.
\end{eqnarray}
This contribution to the whole Hamiltonian can be considered distinguished for the following reason. Let us return to the original expression (\ref{hrtf}) and try to consider it as a generating functional for all the correlation functions of $\tilde\psi$ and $\psi$. Naively, variation w.r.t. $A( z,\tilde z)$ should produce the bilinear combination $\psi( z)\tilde\psi(\tilde z)$, and this would solve the problem. However, things are not so trivial, because the operators involved do not commute (in particular, the exponential operator in (\ref{hrtf}) should still be defined less symbolically, see the next subsection). Things would be much simpler if we could consider a $commuting$ set of operators: this is where the abelian $\hat{U(1)}_{k=1}$ subgroup of the entire $GL(\infty)_{k=1}$ (and even its purely commuting Borel subalgebra) enters the game. Remarkably, it is sufficient to deal with this abelian subgroup in order to reproduce all the correlation functions.\footnote{ We once again emphasize that this trick is specific to the free fermions and to the level $k=1$ Kac-Moody algebras, which can be expressed entirely in terms of free fields, associated with the Cartan generators (modulo some unpleasant details, related to the "cocycle factors" in the Frenkel-Kac representations \cite{FK}, which are in fact remnants of the free fields associated with the non-Cartan generators (parafermions) \cite{Turb}, - but can, however, be put under the carpet and/or taken into account "by hand" as "unpleasant but non-essential(?)" sophistications). } The crucial point is the identity for free fermions (generalizable to any $b, c$-systems):
\begin{eqnarray}
:\psi( \lambda)\tilde\psi(\tilde \lambda):\ = \ :\exp\left({ \int_{ \lambda}^{\tilde \lambda} J}\right): \label{bosid}
\end{eqnarray}
which is widely known in the form of bosonization formulas:\footnote{ The formulas in brackets are indeed correct; before them the usual symbolic relations are written.
Using these formulas we get:
\begin{eqnarray}
:\psi(\lambda)\tilde\psi(\tilde \lambda):\ = \ :e^{ \phi(\tilde \lambda)-\phi( \lambda)}:\ = \ :e^{ \int_{ \lambda}^{\tilde \lambda} \partial\phi}: \ = \ :e^{ \int_{ \lambda}^{\tilde \lambda} J}: \nonumber
\end{eqnarray}
This identity can of course be obtained within the fermionic theory; one should only take into account that the $\psi$-operators are nilpotent, so that the exponential of a single $\psi$-operator is just a sum of two terms (a polynomial). } if $J( z) = \partial\phi( z)$,
\begin{eqnarray}
\tilde\psi(\tilde \lambda) \sim \ :e^{ \phi(\tilde \lambda)}: \ \ \ \left(\ :\psi(\infty)\tilde\psi(\tilde \lambda):\ =\ :e^{ (\phi(\tilde \lambda) - \phi(\infty))}:\ \right); \nonumber \\ \psi( \lambda) \sim \ :e^{- \phi( \lambda)}: \ \ \ \left(\ :\psi( \lambda)\tilde\psi(\infty):\ =\ :e^{ (\phi(\infty) - \phi( \lambda))}:\ \right). \nonumber
\end{eqnarray}
This identity implies that one can generate any bilinear combination of $\psi$-operators by variation of the potential $U( z)$ alone; moreover, this variation should be of a specific form:
\begin{eqnarray}
\Delta\oint UJ = \Delta\left( \sum_{k\in Z} t_kJ_k \right) = \int_{ z}^{\tilde z} J = \sum_{k \in Z} J_k \int_{ z}^{\tilde z} \zeta^{-k-1} d\zeta = \nonumber \\ = \sum_{k \in Z} \frac{1}{k}J_k \left(\frac{1}{ z^k} - \frac{1}{\tilde z^k}\right), \nonumber
\end{eqnarray}
i.e.
\begin{eqnarray}
\Delta t_k = \frac{1}{k}\left(\frac{1}{ z^k} - \frac{1}{\tilde z^k}\right)
\end{eqnarray}
Note that this is {\it not} an infinitesimal variation and that it has exactly the form consistent with the Miwa parametrization used in section 3. Since any bilinear combination can be generated in this way from $U( z)$, it is clear that the entire Hamiltonian $\sum A_{mn}\tilde\psi_m\psi_n$ can also be considered as resulting from some transformation of $U$ (i.e. of the "time-variables" $t_k$). In other words,
\begin{eqnarray}
\tau\{A\} = {\cal O}_A[t] \tau\{A = U\}.
\nonumber
\end{eqnarray}
These operators ${\cal O}_A$ are naturally interpreted as elements of the group $GL(\infty)$, acting on the "Universal Grassmannian" \cite{SeWi}, \cite{Kac},\cite{Orlov}, parametrized by the matrices $A_{mn}$ modulo changes of coordinates $ z \rightarrow f( z)$. This representation for $\tau\{A\}$ is, however, not very convenient, and usually one considers the {\it infinitesimal} version of the transformation, which just shifts $A$:
\begin{eqnarray}
\tau\{t\mid A+\delta A\} = \hat{\cal O}_{\delta A}[t] \tau\{t\mid A\}. \label{trtf}
\end{eqnarray}
Note that this transformation clearly distinguishes between the dependencies of $\tau$ on $t$ and on all the other components of $A$. The possibility of such a representation, with the privileged role of the Cartan generators, is the origin of all the simplifications arising in the case of free-fermion $\tau$-functions.\footnote{ It is also the reason why it is free-fermion $\tau$-functions that appear in the study of ordinary integrable hierarchies: the Hamiltonian flows, which describe evolution in different $t$-directions, just commute, because the $t$'s are associated with the commuting Cartan generators of $GL(\infty)$. In the more general situation the flows would form a closed but $non-abelian$ algebra. } Relation (\ref{trtf}) is the basis of the orbit interpretation of $\tau$-functions \cite{Kac}. It is also important to understand the role of the "string equation" and other constraints, imposed on $\tau$-functions in the theory of matrix models. These arise as some particular subalgebras in the set of $\hat{\cal O}$-operators, and their role is to specify particular points $A$ in the Grassmannian, of which this subalgebra is a stabilizer.\footnote{ This relation is straightforward in the case of Virasoro constraints, since the Virasoro algebra is just a subalgebra of the $GL(\infty)$ acting on $\tau$-functions, and thus is a symmetry (covariance) of the associated integrable hierarchies \cite{Orlov}.
$W$-constraints do not form a Lie subalgebra of this $GL(\infty)$; they arise after a certain reduction, which in turn exists in a simple form $not$ everywhere on the Grassmannian (in particular $W$ is {\it not} a symmetry of the entire KP hierarchy \cite{KodeVo}): here we deal with a more sophisticated self-consistency relation, which remains to be understood in full detail (e.g., it is unknown whether the reduction exists at least at {\it any} Virasoro-stable point, which would significantly simplify this kind of consideration). In fact, the entire relation between the constraints and $\tau$-functions is not exhaustively worked out: for example, there is still no clear and satisfactory proof that the full set of Virasoro and/or $W$-constraints implies that the partition function is a $\tau$-function, a proof which would be purely algebraic and would not refer to the uniqueness of solutions to the constraints. A result widely discussed in the literature (see \cite{FKN}) is that the string equation (the lowest Virasoro constraint $L_{-1}Z = 0$), if imposed on a $Z$ which is somehow known to be the properly reduced $\tau$-function, implies the entire set of Virasoro and $W$-constraints (though even this proof can still have some loopholes). } The simplest examples are in fact provided by formulas from section 2.3 above, where combinations of the screening charges describe $A$'s which are stable points of the {\it discrete} Virasoro- and $W$-constraints (in the latter case the {\it multi}-fermion system is used). The fact that the $\tau$-function at all points $A$ of the Grassmannian can be obtained by the group action from $\tau\{0\}$ has an implication known as the Hirota equation. The idea \cite{Kac} is just that there are Casimir operators in the group, which commute with the group action, and thus the eigenvalue of the Casimir operator is the same for $\tau\{A\}$ at all $A$.
In the free-fermion case the simplest example of a Casimir operator is given by
\begin{eqnarray}
J_0 = \oint J = \oint\psi\tilde\psi = \sum_{n\in Z} \psi_n\tilde\psi_n.
\end{eqnarray}
The eigenvalue of this operator for the vacuum state $\mid 0 \rangle$ is an infinite subtraction constant, and this makes the equation $J_0 {\cal O}_A\mid 0 \rangle = {\cal O}_A J_0 \mid 0 \rangle = {\rm const} \cdot{\cal O}_A\mid 0 \rangle$, or $J_0\tau\{A\} = {\rm const}\cdot\tau\{A\}$, not very interesting. However, this operator is represented in bilinear form, and in such cases the following trick is usually useful. If an operator which is bilinear in the generators of the algebra, $T^aT^a$, commutes with the action of the group, then so does $T^a\otimes T^a$, if the group action on the tensor product of representations is defined as $\mid\ \rangle\otimes\mid\ \rangle \rightarrow {\cal O}_A\mid\ \rangle\otimes {\cal O}_A\mid\ \rangle$. (Indeed, then $(T^a\otimes I + I\otimes T^a)^2$ commutes with the group action and so does $T^a\otimes T^a = \frac{1}{2}\left( (T^a\otimes I + I\otimes T^a)^2 - T^aT^a\otimes I - I\otimes T^aT^a\right)$.) If, further, $T^a\otimes T^a$ annihilates the product of two vacuum states:
\begin{eqnarray}
\left(T^a\otimes T^a\right)\ \mid 0 \rangle\otimes\mid 0 \rangle = 0, \label{annvac}
\end{eqnarray}
then the same equation holds for all $A$:
\begin{eqnarray}
\left(T^a\otimes T^a\right)\ \mid {\cal O}_A \rangle\otimes\mid {\cal O}_A \rangle = 0.
\end{eqnarray}
Condition (\ref{annvac}) is trivially valid in our case:
\begin{eqnarray}
\sum_{n\in Z} \psi_n \mid 0 \rangle\otimes\tilde\psi_n\mid 0 \rangle = 0,
\end{eqnarray}
since in every term in the sum one of the vacuum states is annihilated: the first one if $n<0$ and the second one if $n\geq 0$.\footnote{ It is easy to verify directly that $\sum_n \psi_n \otimes \tilde\psi_n$ is indeed a Casimir operator in the tensor product:
\begin{eqnarray}
&\phantom.
[\sum_n \psi_n \otimes \tilde\psi_n, \ I\otimes \sum_{l,m} A_{lm}\psi_l\tilde\psi_m + \sum_{l,m} A_{lm}\psi_l\tilde\psi_m \otimes I] = \nonumber \\ &= \sum_n\left( \psi_n \otimes \sum_m A_{nm}\tilde\psi_m - \sum_l A_{ln}\psi_l\otimes \tilde\psi_n \right) = \sum_l\sum_m A_{lm} (\psi_l\otimes \tilde\psi_m - \psi_l\otimes\tilde\psi_m) = 0. \nonumber
\end{eqnarray}
} Thus we obtain the relation
\begin{eqnarray}
\sum_{n\in Z} \psi_n \mid {\cal O}_A \rangle \otimes \tilde\psi_n \mid {\cal O}_A \rangle = 0,
\end{eqnarray}
which can now be multiplied from the left by
\begin{eqnarray}
\langle 0 \mid \psi(\infty) e^{H_{Cartan}(t)} \otimes \langle 0 \mid\tilde\psi(\infty) e^{H_{Cartan}(t')} \nonumber
\end{eqnarray}
($t'_k$ do not need to coincide with $t_k$), and after the insertions of $\psi$-operators are expressed as shifts of the times, we obtain:
\begin{eqnarray}
\sum_{n\in Z} D_n^- \tau\{t\mid A\} \otimes D_n^+ \tau\{t\mid A\} = 0, \label{prehirota}
\end{eqnarray}
where $\sum_{n\geq 0}D^{\pm}_n z^{-n} = \exp\left(\pm \sum_{k>0} \frac{1}{k z^k} \frac{\partial}{\partial t_k}\right)$. This is a particular form of the Hirota equation \cite{Hirota}, which is often used to {\it define} $\tau$-functions associated with integrable hierarchies. If one takes (\ref{hrtf}) for the definition, as it is more natural to do in the general "theory of everything" and as we did above, eq.(\ref{prehirota}) is the starting point of the path leading to the hierarchies in the conventional form of differential equations, with the Lax and pseudodifferential representations {\it naturally} appearing along the way. We do not go along this path in these notes. The last remark to be made, before we proceed to more detailed formulas, is that $\tau$-functions can be considered as determinants $Det \bar\partial$ of the $\bar\partial$-operators acting on fields with some complicated boundary conditions (like $\psi( z) \sim \exp\left(\sum_{k>0} t_k z^{-k}\right)$ in the simplest case of $t$-dependencies).
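The algebraic statements above (the anticommutators, the Dirac-sea vacuum conditions, the current commutator and the annihilation of $\mid 0\rangle\otimes\mid 0\rangle$ by $\sum_n\psi_n\otimes\tilde\psi_n$) can be sketched on a finite truncation of the mode algebra via Jordan-Wigner matrices. The mode cutoff $L$ is an assumption of this toy model; in the text the sums run over all of $Z$.

```python
# Finite-mode sketch: psi_m, tilde-psi_m = psi_m^dagger for m = -L..L-1,
# Dirac-sea vacuum (all modes m >= 0 filled), [J_1, J_{-1}]|0> = |0>,
# and sum_n psi_n|0> (x) tilde-psi_n|0> = 0.
import numpy as np

L = 3
M = 2 * L
I2 = np.eye(2); Z = np.diag([1., -1.])
s = np.array([[0., 1.], [0., 0.]])        # local annihilator |1> -> |0>

def psi(m):                               # psi_m as a 2^M x 2^M matrix
    j = m + L
    ops = [Z] * j + [s] + [I2] * (M - j - 1)
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

psit = {m: psi(m).T for m in range(-L, L)}   # tilde-psi_m (real matrices)

# anticommutators [tilde-psi_m, psi_n]_+ = delta_{mn}
for m in range(-L, L):
    for n in range(-L, L):
        ac = psit[m] @ psi(n) + psi(n) @ psit[m]
        assert np.allclose(ac, np.eye(2**M) if m == n else 0)

# Dirac-sea vacuum: modes m >= 0 occupied, m < 0 empty
e0, e1 = np.array([1., 0.]), np.array([0., 1.])
vac = np.array([1.])
for m in range(-L, L):
    vac = np.kron(vac, e1 if m >= 0 else e0)

J1  = sum(psi(m) @ psit[m + 1] for m in range(-L, L - 1))
Jm1 = sum(psi(m) @ psit[m - 1] for m in range(-L + 1, L))
comm = J1 @ Jm1 - Jm1 @ J1
assert np.allclose(comm @ vac, vac)       # [J_1, J_{-1}]|0> = 1 * |0>

# sum_n psi_n|0> (x) tilde-psi_n|0> = 0: each term kills one factor
casimir = sum(np.kron(psi(n) @ vac, psit[n] @ vac) for n in range(-L, L))
assert np.allclose(casimir, 0)
```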
The entire $A$-dependence is usually described in this context as the dependence on a point of the "universal module space", which first appeared in the study of string models on Riemann surfaces of arbitrary genus \cite{UMS}. From this point of view more general $\tau$-functions are sections of bundles over the universal module space, associated with conformal models more sophisticated than just the theory of free fermions (and $b,c$-systems). The WZNW model is, of course, the most important example to be studied in this context. The crucial feature of all the quantities associated in this way with conformal models is the applicability of the Wick theorem, which reduces multipoint correlation functions to pair correlators. In the free-fermion case this is just a consequence of the quadratic form of the Lagrangian; in the generic situation it follows from the existence of a holomorphic operator algebra, which allows one to define the correlators by fixing the monodromy properties dictated by pairwise collisions of points. The Wick theorem is the concrete source of the determinant formulas for $\tau$-functions, which are used in order to establish their relations with matrix models and other branches of string theory. After this discussion of the context where free-fermion $\tau$-functions can and do appear, we turn now to more detailed and exact formulas, relevant in this particular free-fermion case. They are mostly due to the Japanese school \cite{DJKM}, though many other people contributed to this field after it was established. The only sophisticated part of the work with these formulas is the accurate accounting for the normal-ordering routine, which will be mostly unnecessary for our purposes. We shall mostly follow the presentation of the papers \cite{KMMOZ},\cite{GKM},\cite{Toda}.
\subsection{Basic determinant formula for the free-fermion correlator} Let us consider the following matrix element: \begin{eqnarray} \tau_N\{t,\bar t\mid G\} = \langle N \mid e^H \ G \ e^{\bar H} \mid N \rangle \label{pretaf} \end{eqnarray} where \begin{eqnarray} \oldfalse \begin{array}{c} \psi( z) = \sum_{n\in Z}\psi_n z^n d z^{1/2}; \ \ \ \tilde\psi( z) = \sum_{n\in Z}\tilde\psi_n z^{-n-1}d z^{1/2};\\ G = \exp\left( \sum_{m,n\in Z}{A}_{mn}\psi_m\tilde\psi_n \right); \\ H = \sum_{k>0} t_kJ_k, \ \ \ \bar H = \sum_{k>0} \bar t_kJ_{-k} \\ J( z) = \psi( z)\tilde\psi( z) = \sum_{n\in Z}J_n z^{-n-1}d z; \ \ \ J_n = \sum_k\psi_k\tilde\psi_{k+n}; \\ \phantom. [\tilde\psi_m,\psi_n]_+ = \delta_{m,n}; \ \ \ [J_m,J_n] = m\delta_{m+n,0}; \\ \psi_m\mid N \rangle = 0, \ \ m<N; \ \ \ \ \langle N \mid \psi_m = 0, \ \ m\geq N; \\ \tilde\psi_m\mid N \rangle = 0, \ \ m\geq N; \ \ \ \ \langle N \mid \tilde\psi_m = 0, \ \ m< N; \\ J_m\mid N \rangle = 0,\ \ m>0; \ \ \ \ \langle N \mid J_m = 0, \ \ m<0. \end{array} \label{notfermcor} \end{eqnarray} The "$N$-th vacuum" $\mid N \rangle$ is defined as the Dirac sea, filled up to the level $N$: \begin{eqnarray} \mid N \rangle = \prod_{i=N}^\infty \tilde\psi_i\mid\infty\rangle = \prod_{i=-\infty}^{N-1} \psi_i \mid-\infty\rangle \ ; \nonumber \\ \langle N \mid = \langle \infty\mid \prod_{i=N}^\infty \psi_i = \langle -\infty\mid \prod_{i=-\infty}^{N-1} \tilde\psi_i, \end{eqnarray} where the "empty" (bare) and "completely filled" vacua are defined so that: \begin{eqnarray} &\tilde\psi_m \mid -\infty \rangle = 0, \ \ \ \langle -\infty \mid \psi_m = 0, \nonumber \\ &\psi_m \mid \infty \rangle = 0, \ \ \ \langle \infty \mid \tilde\psi_m = 0 \end{eqnarray} for $any$ $m\in Z$. 
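The mode algebra $[\tilde\psi_m,\psi_n]_+ = \delta_{m,n}$ can be realized explicitly on finitely many modes by the standard Jordan-Wigner construction. A small numpy sketch with four modes; here $\tilde\psi_n$ is realized, for concreteness, as the matrix adjoint of $\psi_n$, which is one particular representation (the abstract algebra only requires the anticommutators):

```python
import numpy as np

M = 4  # number of fermionic modes in the toy truncation

# Pauli matrices and the Jordan-Wigner building blocks
I2 = np.eye(2)
Z  = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^-

def mode(n):
    """Annihilation operator psi_n = Z x ... x Z x sigma^- x 1 x ... x 1."""
    factors = [Z]*n + [sm] + [I2]*(M - n - 1)
    out = np.array([[1.0]])
    for f in factors:
        out = np.kron(out, f)
    return out

psi  = [mode(n) for n in range(M)]       # psi_n
tpsi = [p.conj().T for p in psi]         # tilde-psi_n, realized here as the adjoint

def anti(a, b):
    return a @ b + b @ a

for m in range(M):
    for n in range(M):
        # [tilde-psi_m, psi_n]_+ = delta_{m,n}
        assert np.allclose(anti(tpsi[m], psi[n]), (m == n)*np.eye(2**M))
        # [psi_m, psi_n]_+ = 0
        assert np.allclose(anti(psi[m], psi[n]), 0.0)
```

The $Z$-strings in front of $\sigma^-$ are what converts commuting spin operators on different sites into anticommuting fermionic modes.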
Since the operators $J$, $H$, $\bar H$ and $G$ are defined so that they usually have $\tilde\psi$ at the very right and $\psi$ at the very left, we also have: \begin{eqnarray} &J_m \mid -\infty \rangle = 0, \ \ \ \langle -\infty\mid J_m = 0, \nonumber \\ &G^{\pm 1} \mid-\infty \rangle = \mid-\infty \rangle; \ \ \ \langle -\infty \mid G^{\pm 1} = \langle -\infty \mid; \nonumber \\ &e^{\pm \bar H} \mid-\infty \rangle = \mid-\infty \rangle; \ \ \ \langle -\infty \mid e^{\pm H} = \langle -\infty \mid. \end{eqnarray} Now we can use all these formulas to rewrite our original correlator (\ref{pretaf}) as: \begin{eqnarray} \oldfalse \begin{array}{c} \langle N \mid e^H \ G\ e^{\bar H}\mid N \rangle = \\ = \langle -\infty\mid \left(\prod_{i=-\infty}^{N-1} \tilde\psi_i\right) e^H \ G\ e^{\bar H} \left(\prod_{i=-\infty}^{N-1} \psi_i\right) \mid-\infty\rangle = \\ = \langle -\infty\mid e^{-H} \left(\prod_{i=-\infty}^{N-1}\tilde\psi_i\right) e^H \ G\ e^{\bar H} \left(\prod_{i=-\infty}^{N-1} \psi_i\right) e^{-\bar H}\mid-\infty\rangle = \\ = \langle -\infty\mid \prod_{i=-\infty}^{N-1} \tilde\Psi_i[t] \prod_{j=-\infty}^{N-1} \Psi_j^G[\bar t] \mid-\infty\rangle = \\ = {\rm Det}_{-\infty < i,j < N} \langle -\infty\mid \tilde\Psi_i[t] \Psi_j^G[\bar t] \mid-\infty\rangle = \\ = {\rm Det}_{i,j<0} {\cal H}_{i+N,j+N}.
\end{array} \label{pretafdet} \end{eqnarray} The last two steps here were the introduction of the "$GL(\infty)$-rotated" fermions, \begin{eqnarray} \tilde\Psi_i[t] \equiv e^{-H}\tilde\psi_i e^H; \ \ \ \Psi_j[\bar t] \equiv e^{\bar H} \psi_j e^{-\bar H};\ \ \ \Psi_j^G[\bar t] \equiv G\Psi_j[\bar t] G^{-1}, \label{bigpsi} \end{eqnarray} and the application of the Wick theorem to express the multifermion correlation function through pair correlators \begin{eqnarray} \oldfalse \begin{array}{c} {\cal H}_{ij}(t,\bar t) \equiv \langle -\infty\mid \tilde\Psi_i[t] \Psi_j^G[\bar t] \mid-\infty\rangle = \\ = \langle -\infty\mid \tilde\Psi_i[t]\ G \ \Psi_j[\bar t] \mid-\infty\rangle , \end{array} \label{Hmatrcor} \end{eqnarray} (once again the fact that $G^{-1}\mid-\infty\rangle = \mid-\infty\rangle$ was used). The only non-trivial dynamical information entered through the applicability of the Wick theorem, and for that it was crucial that all the operators $e^H,\ e^{\bar H},\ G$ are $quadratic$ exponents, i.e. they can only modify the shape of the propagator, but do not destroy the quadratic form of the action (the fields remain $free$). This is exactly equivalent to the statement that the "Heisenberg" operators $\Psi[t]$ are just "rotations" of $\psi$, i.e. that the transformations (\ref{bigpsi}) are $linear$. We shall now describe these transformations in a little more explicit form. Namely, their entire time-dependence can be encoded in terms of the "Schur polynomials" $P_n(t)$. These are defined by a very simple generating function (which we already encountered many times in the theory of matrix models): \begin{eqnarray} \sum_{n\geq 0} P_n(t)z^n = \exp\left({\sum_{k=1}^{\infty} t_kz^k}\right) \end{eqnarray} (i.e. $P_0 = 1,\ \ P_1 = t_1,\ \ P_2 = \frac{t_1^2}{2} + t_2$ etc.), and satisfy the relation \begin{eqnarray} \frac{\partial P_n}{\partial t_k} = P_{n-k}.
\end{eqnarray} Since \begin{eqnarray} \exp\left({\sum_{k=1}^{\infty} t_kz^k}\right) = \prod_{k>0} \left(\sum_{n_k\geq 0}\frac{1}{n_k!}\ t_k^{n_k}z^{kn_k}\right), \nonumber \end{eqnarray} the Schur polynomials can also be represented as \begin{eqnarray} P_n(t) = \sum_{\stackrel{\{n_k\}}{\sum_{k>0} kn_k = n}} \left(\prod_{k> 0}\frac{1}{n_k!}\ t_k^{n_k}\right). \label{Shupoex} \end{eqnarray} Now, since \begin{eqnarray} e^{-B}Ae^B = A + [A,B] + \frac{1}{2!}[[A,B],B] + \frac{1}{3!}[[[A,B],B],B] + \ldots \nonumber \end{eqnarray} and \begin{eqnarray} \phantom. [\tilde\psi_i,J_k] = \tilde\psi_{i+k}, \ \ [ [\tilde\psi_i,J_{k_1}], J_{k_2}] = \tilde\psi_{i+k_1+k_2}, \ \ldots, \nonumber \end{eqnarray} we have for every fixed $k$: \begin{eqnarray} e^{-t_kJ_k} \tilde\psi_i e^{t_kJ_k} = \sum_{n_k\geq 0} \frac{t_k^{n_k}}{n_k!}\ \tilde\psi_{i+kn_k}. \nonumber \end{eqnarray} It remains to note that all the harmonics of $J$ in $H = \sum_{k>0} t_kJ_k$ commute with each other, to obtain: \begin{eqnarray} \oldfalse \begin{array}{c} \tilde\Psi_i(t) = e^{-H} \tilde\psi_i e^H = \left( \prod_{k>0} e^{-t_kJ_k}\right) \tilde\psi_i \left( \prod_{k>0}e^{t_kJ_k}\right) = \\ = \sum_{n\geq 0}\tilde\psi_{i+n} \left( \sum_{\stackrel{\{n_k\}}{\sum_{k>0} kn_k = n}} \left(\prod_{k> 0}\frac{1}{n_k!}\ t_k^{n_k}\right)\right) \stackrel{(\ref{Shupoex})}{=} \\ = \sum_{n\geq 0}\tilde\psi_{i+n} P_n(t) = \sum_{l\geq i}\tilde\psi_{l} P_{l-i}(t).
\end{array} \end{eqnarray} Similarly, the relation $[J_{-k},\psi_j] = \psi_{j+k}$ implies that \begin{eqnarray} \Psi_j(\bar t) = e^{\bar H}\psi_j e^{-\bar H} = \sum_{n\geq 0}\psi_{j+n} P_n(\bar t) = \sum_{m\geq j}\psi_{m}P_{m-j}(\bar t) \label{bigpsishur} \end{eqnarray} and finally\footnote{ Eqs.(\ref{bigpsishur}) can also be interpreted as representations of the Schur polynomials in terms of fermionic correlators in the bare vacuum: \begin{eqnarray} &P_m(\bar t) = \langle -\infty \mid \tilde\psi_{j+m}e^{\bar H} \psi_j \mid-\infty \rangle\ ; \nonumber \\ &P_m( t) = \langle -\infty \mid \tilde\psi_i e^{H} \psi_{i+m} \mid-\infty \rangle\ \nonumber \end{eqnarray} } \begin{eqnarray} &{\cal H}_{ij} = \sum_{\stackrel{l\geq i}{m\geq j}} \langle -\infty \mid \tilde\psi_l\ G\ \psi_m \mid-\infty \rangle\ P_{l-i}(t)P_{m-j}(\bar t) = \nonumber \\ & = \sum_{\stackrel{l\geq i}{m\geq j}} T_{lm} P_{l-i}(t)P_{m-j}(\bar t), \label{HversTlm} \end{eqnarray} which also implies that \begin{eqnarray} \frac{\partial{\cal H}_{ij}}{\partial t_k} = {\cal H}_{i+k,j}; \nonumber \\ \frac{\partial{\cal H}_{ij}}{\partial\bar t_k} = {\cal H}_{i,j+k} . \label{todaeqforH} \end{eqnarray} The matrix \begin{eqnarray} T_{lm} \equiv \langle -\infty \mid \tilde\psi_l\ G\ \psi_m \mid-\infty \rangle \end{eqnarray} is the one which defines the fermion rotations under the action of the $GL(\infty)$ group element $G$: \begin{eqnarray} &G\psi_mG^{-1} = \sum_{l\in Z} \psi_lT_{lm}; \nonumber \\ &G^{-1}\tilde\psi_l G = \sum_{m\in Z} T_{lm}\tilde\psi_m,\ {\rm or}\ G\tilde\psi_l G^{-1} = \sum_{m\in Z} (T^{-1})_{lm}\tilde\psi_m. \end{eqnarray} If $G=1$, $T_{lm} = \delta_{lm}$. If all $t_k = \bar t_k = 0$, ${\cal H}_{ij} = T_{ij}$.
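Both the generating-function definition of the Schur polynomials and the relation $\partial{\cal H}_{ij}/\partial t_k = {\cal H}_{i+k,j}$ can be checked mechanically. A sympy sketch with a toy matrix $T_{lm}$ of finite support (an arbitrary choice, made only so that all sums terminate):

```python
import sympy as sp
from itertools import product

K = 6                                  # truncation order for the Schur polynomials
t    = sp.symbols('t1:7')              # t_1 .. t_6
tbar = sp.symbols('s1:7')              # bar-t_1 .. bar-t_6
z = sp.symbols('z')

def schur(times):
    """P_0 .. P_K from exp(sum_k times[k-1] z^k) = sum_n P_n z^n."""
    gen = sp.exp(sum(times[k-1]*z**k for k in range(1, K+1)))
    ser = sp.expand(sp.series(gen, z, 0, K+1).removeO())
    return [ser.coeff(z, n) for n in range(K+1)]

P, Pb = schur(t), schur(tbar)

# P_0 = 1, P_1 = t_1, P_2 = t_1^2/2 + t_2, and dP_n/dt_k = P_{n-k}
assert P[0] == 1 and P[1] == t[0]
assert sp.expand(P[2] - (t[0]**2/2 + t[1])) == 0
for n in range(K+1):
    for k in range(1, n+1):
        assert sp.expand(sp.diff(P[n], t[k-1]) - P[n-k]) == 0

# H_{ij} = sum_{l>=i, m>=j} T_{lm} P_{l-i}(t) P_{m-j}(bar-t), toy T on 0<=l,m<=2
T = {(l, m): sp.Integer(1 + l + 2*m) for l in range(3) for m in range(3)}

def H(i, j):
    return sp.expand(sum(c*P[l-i]*Pb[m-j] for (l, m), c in T.items()
                         if l >= i and m >= j))

# dH_{ij}/dt_k = H_{i+k,j}
for i, j, k in product([-2, -1], [-2, -1], [1, 2]):
    assert sp.expand(sp.diff(H(i, j), t[k-1]) - H(i+k, j)) == 0
```

The derivative relation holds termwise because differentiating $P_{l-i}$ with respect to $t_k$ lowers its index by $k$, which is the same as shifting $i \rightarrow i+k$.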
\subsection{Toda-lattice $\tau$-function and linear reductions of Toda hierarchy} In the previous subsection we derived the formula \begin{eqnarray} \tau_N\{t,\bar t\mid G\} = \left.{\rm Det}\right._{i,j<0} {\cal H}_{i+N,j+N} \label{todatau} \end{eqnarray} for the basic correlator, which defines the "Toda-lattice $\tau$-function". For obvious reasons the $\bar t$ are often referred to as negative times. The $\tau$-function can be normalized by dividing by the same quantity with all time-variables vanishing, but this is not always convenient. Eq.(\ref{todatau}) has generalizations: when similar matrix elements in a multifermion system are considered, one arrives at "multicomponent Toda" (or AKNS) $\tau$-functions. Generalizations to arbitrary conformal models should be considered as well. It also has particular "reductions", of which the most important are: KP (Kadomtsev-Petviashvili), forced (semi-infinite) and Toda-chain $\tau$-functions. This is the subject to be discussed in this subsection. The idea of a linear reduction is that the form of the operator $G$ or, what is the same, of the matrix $T_{lm}$ in eq.(\ref{HversTlm}) can be adjusted in such a way that $\tau_N\{t,\bar t\mid G\}$ becomes independent of some variables, i.e. the equation(s) \begin{eqnarray} \left(\sum_k \alpha_k\frac{\partial}{\partial t_k} + \sum_k \bar\alpha_k\frac{\partial}{\partial \bar t_k} + \sum_k \beta_k D_N(k) + \gamma \right) \tau_N\{t,\bar t\mid G\} = 0 \label{redtotau1} \end{eqnarray} can be solved as equations for $G$ for all the values of $t,\bar t$ and $N$ at once. (In (\ref{redtotau1}) $D_N(k)f_N \equiv f_{N+k} - f_N$.) In this case the system of integrable equations (hierarchy), arising from the Hirota equation for $\tau$, gets reduced, and one usually speaks of a "reduced hierarchy". Usually equation (\ref{redtotau1}) is imposed directly on the matrix ${\cal H}_{ij}$; then (\ref{redtotau1}) itself is of course just a corollary.
We shall refer to the situation when (\ref{redtotau1}) is fulfilled for {\it any} $t, \bar t, N$ as a "strong reduction". It is often reasonable to consider also "weak reductions", when (\ref{redtotau1}) is satisfied on particular infinite-dimensional hyperplanes in the space of time-variables. A weak reduction is usually a property of the entire $\tau$-function as well, but it is not expressible in the form of a local linear equation satisfied identically for {\it all} values of $t, \bar t, N$. Now we proceed to concrete examples: {\it Toda-chain hierarchy}. This is a {\it strong} reduction. The corresponding constraint (\ref{redtotau1}) is just \begin{eqnarray} \frac{\partial {\cal H}_{ij}}{\partial t_k} = \frac{\partial {\cal H}_{ij}}{\partial \bar t_k}, \end{eqnarray} or, because of (\ref{todaeqforH}), ${\cal H}_{i+k,j} = {\cal H}_{i,j+k}$. It has an obvious solution: \begin{eqnarray} {\cal H}_{i,j} = \hat{\cal H}_{i+j}, \end{eqnarray} i.e. ${\cal H}_{ij}$ is expressed in terms of a one-index quantity $\hat{\cal H}_i$. It is, however, not enough to state the restrictions on ${\cal H}_{ij}$: they should be fulfilled for all $t$ and $\bar t$ at once, i.e. should be resolvable as equations for $T_{lm}$. In the case under consideration this is simple: $T_{lm}$ should be such that \begin{eqnarray} T_{lm} = \hat T_{l+m}. \end{eqnarray} Indeed, then \begin{eqnarray} &{\cal H}_{ij} = \sum_{l,m} T_{lm} P_{l-i}(t) P_{m-j}(\bar t) = \sum_{l,m} \hat T_{l+m} P_{l-i}(t)P_{m-j}(\bar t) = \nonumber \\ &= \sum_{n\geq 0} \hat T_{n+i+j} \left(\sum_{k=0}^n P_k(t)P_{n-k}(\bar t)\right), \nonumber \end{eqnarray} and \begin{eqnarray} \hat{\cal H}_i = \sum_{n\geq 0} \hat T_{n+i} \left(\sum_{k=0}^n P_k(t)P_{n-k}(\bar t)\right). \label{todachainH} \end{eqnarray} {\it Volterra hierarchy}.
The Toda-chain $\tau$-function can be further {\it weakly} reduced to satisfy the identity \begin{eqnarray} \left.\frac{\partial \tau_{2N}}{\partial t_{2k+1}}\right| _{\{t_{2l+1}=0\}} = 0, \ \ \ {\rm for\ all}\ k, \label{volterrared} \end{eqnarray} i.e. $\tau_{2N}$ is required to be an even function of all the odd times $t_{2l+1}$ (this is an example of a "global characterization" of the weak reduction). Note that (\ref{volterrared}) is imposed only on the {\it Toda-chain} $\tau$-function with {\it even} values of the zero-time. Then (\ref{volterrared}) will hold whenever the $\hat{\cal H}_i$ in (\ref{todachainH}) are even (odd) functions of $t_{\rm odd}$ for even (odd) values of $i$. Since the Schur polynomials $P_k(t)$ are even (odd) functions of the odd times for even (odd) $k$, it is enough that the sum in (\ref{todachainH}) goes over even (odd) $n$ when $i$ is even (odd). In other words, the restriction on $T_{lm}$ is that \begin{eqnarray} T_{lm} = \hat T_{l+m}, \ \ \ {\rm and} \ \ \ \hat T_{2k+1} = 0 \ \ {\rm for\ all}\ k. \end{eqnarray} {\it Forced hierarchies}. This is another important example of a strong reduction. It also provides an example of {\it singular} $\tau$-functions, which arise when $G = \exp \left(\sum A_{mn}\psi_m\tilde\psi_n\right)$ blows up and normal-ordered operators should be used to define regularized $\tau$-functions. The forced hierarchy appears when $G$ can be represented in the form \cite{KMMOZ} $G = G_0P_+$, where the projection operator $P_+$ is such that \begin{eqnarray} P_+ \mid N \rangle = \mid N \rangle \ \ {\rm for}\ N\geq N_0, \nonumber \\ P_+ \mid N \rangle = 0 \ \ {\rm for}\ N < N_0. \label{Pplus} \end{eqnarray} An explicit expression for this operator is\footnote{ The normal-ordering sign $\ : \ \ :\ $ means that all the operators $\tilde\psi$ stand to the {\it left} of all the operators $\psi$. The product on the r.h.s. obviously implies both the property (\ref{Pplus}) and the projection property $P_+^2 = P_+$.
} \begin{eqnarray} P_+ = \ :\exp \left( - \sum_{l<N_0} \tilde\psi_l\psi_l\right): \ = \prod_{l<N_0} (1 - \tilde\psi_l\psi_l) = \prod_{l < N_0} \psi_l\tilde\psi_l. \nonumber \end{eqnarray} Because of (\ref{Pplus}), $P_+\mid -\infty \rangle = 0$, and the identity $G\mid -\infty \rangle = \mid -\infty \rangle$, which was essentially used in the derivation in (4.27), can be satisfied only if $G_0$ is singular and $T_{lm} = \infty$. In order to avoid this problem one usually introduces, in the vicinity of such singular points of the universal module space, a sort of normalized (forced) $\tau$-function $\tau_N^f \equiv \frac{\tau_N}{\tau_{N_0}}$. One can check that now $T^f_{lm} = \infty$ for all $l,m < N_0$, and $\tau^f$ can be represented as the determinant of a finite-dimensional matrix \cite{UT},\cite{KMMOZ}: \begin{eqnarray} &\tau_N^f = {\rm Det}_{N_0\leq i,j < N} {\cal H}^f_{ij}\ \ \ {\rm for}\ \ N>N_0; \nonumber \\ &\tau_{N_0}^f = 1; \nonumber \\ &\tau_{N}^f = 0\ \ \ {\rm for }\ \ N < N_0. \end{eqnarray} For $N>N_0$ we now have the determinant of a {\it finite}-dimensional $(N-N_0)\times (N-N_0)$ matrix. The choice of $N_0$ is not really essential, therefore it is better to put $N_0 = 0$ in order to simplify the formulas, the phrasing, and the relation with the discrete matrix models ($N_0$ is easily restored if everywhere $N$ is substituted by $N-N_0$). For forced hierarchies one can also represent $\tau^f_N$ as \begin{eqnarray} \tau_N^f = {\rm Det}_{0\leq i,j < N} \partial_1^i\bar\partial_1^j {\cal H}^f, \end{eqnarray} where ${\cal H}^f = {\cal H}^f_{00}$ and $\partial_1 = \frac{\partial}{\partial t_1}$, $\bar \partial_1 = \frac{\partial}{\partial \bar t_1}$.
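The equivalence of the two determinant representations of $\tau^f_N$ follows from $\partial_1 {\cal H}_{ij} = {\cal H}_{i+1,j}$. A sympy sketch with only $t_1$ and $\bar t_1$ switched on (then $P_n(t) = t_1^n/n!$) and a toy finite-support matrix $T_{lm}$ (an arbitrary choice for illustration):

```python
import sympy as sp

t1, s1 = sp.symbols('t1 s1')        # t_1 and bar-t_1; all higher times switched off
N, L = 3, 5                         # determinant size and support of the toy T-matrix

# with only t_1 on, the Schur polynomials collapse to P_n = t_1^n/n!
P  = [t1**n/sp.factorial(n) for n in range(2*L)]
Pb = [s1**n/sp.factorial(n) for n in range(2*L)]

# toy matrix T_{lm} with support 0 <= l,m < L
T = [[sp.Integer(1 + (3*l + m) % 5) for m in range(L)] for l in range(L)]

def H(i, j):
    return sum(T[l][m]*P[l-i]*Pb[m-j] for l in range(i, L) for m in range(j, L))

def deriv(expr, i, j):
    for _ in range(i): expr = sp.diff(expr, t1)
    for _ in range(j): expr = sp.diff(expr, s1)
    return expr

# Det_{0<=i,j<N} H_{ij}  versus  Det_{0<=i,j<N} d_1^i dbar_1^j H_{00}
det1 = sp.Matrix(N, N, lambda i, j: H(i, j)).det()
det2 = sp.Matrix(N, N, lambda i, j: deriv(H(0, 0), i, j)).det()

assert sp.simplify(det1 - det2) == 0
```

Each derivative of $P_n = t_1^n/n!$ simply lowers $n$ by one, so differentiating ${\cal H}_{00}$ reproduces ${\cal H}_{ij}$ entry by entry.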
For the {\it forced Toda-chain} hierarchy this turns into an even simpler expression: \begin{eqnarray} \tau_N^f = {\rm Det}_{0\leq i,j < N} \partial_1^{i+j}\hat{\cal H}^f, \end{eqnarray} while for the {\it forced Volterra} case we get a product of two Toda-chain $\tau$-functions with half the value of $N$ \cite{Bowick}: \begin{eqnarray} \tau_{2N}^f &= \left({\rm Det}_{0\leq i,j < N} \partial_2^{i+j}\hat{\cal H}^f\right) \cdot \left( {\rm Det}_{0\leq i,j < N} \partial_2^{i+j} (\partial_2 \hat{\cal H}^f)\right) = \nonumber \\ &= \tau_N^f[\hat{\cal H}^f]\cdot \tau_N^f[\partial_2\hat{\cal H}^f]. \end{eqnarray} The forced $\tau^f_N$ can {\it always} be represented in the form of a scalar-product matrix model. Indeed, \begin{eqnarray} {\cal H}_{ij} = \sum T_{lm}P_{l-i}(t)P_{m-j}(\bar t) = \oint\oint e^{U(h)+\bar U(\bar h)} h^i\bar h^j T(h,\bar h) dhd\bar h, \end{eqnarray} where $T(h,\bar h) \equiv \sum_{lm} T_{lm} h^{-l-1}\bar h^{-m-1}$, and $e^{U(h)} = e^{\sum_{k>0}t_kh^k} = \sum_{l\geq 0} h^lP_l(t)$. Then, since ${\rm Det}_{0\leq i,j < N}h_j^i = \Delta_N(h)$ (this is where it is essential that the hierarchy is forced), \begin{eqnarray} {\rm Det}_{0\leq i,j < N} {\cal H}_{ij} = \prod_i \oint\oint e^{U(h_i)+\bar U(\bar h_i)} T(h_i,\bar h_i)dh_id\bar h_i \cdot \Delta_N(h)\Delta_N(\bar h), \end{eqnarray} i.e. we obtain a scalar-product model with \begin{eqnarray} d\mu_{h,\bar h} = e^{U(h)+\bar U(\bar h)}T(h,\bar h) dhd\bar h. \end{eqnarray} The inverse is also true: the partition function of every scalar-product model is a forced Toda-lattice $\tau$-function; see section 4.7 for more details. {\it KP hierarchy}. In this case we simply ignore the dependence of the $\tau$-function on the times $\bar t$. Every Toda-lattice $\tau$-function can also be considered as a KP $\tau$-function: just the operator $G^{KP} \equiv Ge^{\bar H}$ (a point of the Grassmannian) becomes $\bar t$-dependent. Usually the $N$-dependence is also eliminated; this can be considered as a slightly more sophisticated change of $G$.
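The step from ${\rm Det}\,{\cal H}$ to the eigenvalue integral with two Vandermonde factors is an instance of the Andreief (Heine) identity. A numeric sketch with a discrete toy measure $d\mu_{h,\bar h}$ (points and weights chosen arbitrarily; the explicit $1/N!$ below is the combinatorial factor that the text absorbs into the normalization):

```python
import numpy as np
from itertools import product
from math import factorial

rng = np.random.default_rng(0)

N, M = 3, 6                       # determinant size, atoms of the discrete measure
h    = rng.uniform(0.5, 2.0, M)   # positions h_a
hbar = rng.uniform(0.5, 2.0, M)   # positions hbar_a
w    = rng.uniform(0.5, 1.5, M)   # weights of d mu (playing the role of e^{U+barU} T)

# moment matrix H_{ij} = sum_a w_a h_a^i hbar_a^j
Hm = np.array([[np.sum(w*h**i*hbar**j) for j in range(N)] for i in range(N)])
lhs = np.linalg.det(Hm)

# Andreief: Det H = (1/N!) sum over N-tuples of prod(w) * Delta_N(h) * Delta_N(hbar)
rhs = 0.0
for a in product(range(M), repeat=N):
    V1 = np.linalg.det(np.array([[h[a[j]]**i for j in range(N)] for i in range(N)]))
    V2 = np.linalg.det(np.array([[hbar[a[j]]**i for j in range(N)] for i in range(N)]))
    rhs += np.prod(w[list(a)])*V1*V2
rhs /= factorial(N)

assert np.isclose(lhs, rhs)
```

Tuples with repeated indices contribute nothing, since a Vandermonde determinant with two equal points vanishes; this is why the sum over tuples reproduces the Cauchy-Binet sum over subsets up to the $N!$ factor.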
When $N$ is fixed, extra changes of the field-variables are allowed, including the transformation from the Ramond to the Neveu-Schwarz sector etc. Often the KP hierarchy is formulated from the very beginning in terms of Neveu-Schwarz (antiperiodic) fermionic fields (associated with principal representations of Kac-Moody algebras), i.e. the expansions in the first line of (\ref{notfermcor}) are in half-integer powers of $z$: $\psi_{NS}( z) = \sum_{n\in Z}\psi_n z^{n-\frac{1}{2}} d z^{1/2}$. Given a KP $\tau$-function one can usually construct a Toda-lattice one with the {\it same} $G$, by introducing, in an appropriate way, the dependencies on $\bar t$ and $N$. For this purpose $\tau^{KP}$ should be represented in the form of (\ref{todatau}): \begin{eqnarray} \tau^{KP}\{t\mid G\} = {\rm Det}_{i,j<0} {\cal H}_{ij}^{KP}, \label{addkptau1} \end{eqnarray} where ${\cal H}_{ij}^{KP} = \sum_l T_{lj} P_{l-i}(t)$. Since $T_{lm} $ is a function of $G$ only, it does not change when we build up a Toda-lattice $\tau$-function: \begin{eqnarray} &\tau_N\{t,\bar t\mid G\} = {\rm Det}_{i,j<0} {\cal H}_{i+N,j+N}; \nonumber \\ &{\cal H}_{ij} = \sum_{l,m}T_{lm} P_{l-i}(t)P_{m-j}(\bar t) = \sum_m {\cal H}_{im}^{KP} P_{m-j}(\bar t). \end{eqnarray} Then \begin{eqnarray} \tau^{KP}\{ t\mid G\} = \tau_0\{t,0\mid G\}. \end{eqnarray} Going in the opposite direction, when the Toda-lattice $\tau$-function is considered as a KP $\tau$-function, \begin{eqnarray} \tau_0\{t,\bar t\mid G\} = \tau^{KP}\{t \mid \tilde G(\bar t)\}; \nonumber \\ \tilde {\cal H}^{KP}_{ij} = \sum_m {\cal H}_{im}P_{m-j}(\bar t)\ \ {\rm and} \nonumber \\ \tilde T_{lj}\{\tilde G(\bar t)\} = \sum_m T_{lm}\{G\}P_{m-j}(\bar t). \label{addkptau2} \end{eqnarray} The KP reduction in its turn has many further weak reductions (KdV and Boussinesq being the simplest examples). We shall mention them again in section 4.9 below, after the Miwa transformation of the representation (\ref{todatau}) is considered in the next subsection.
\subsection{Fermion correlator in Miwa coordinates} Let us now return to the original correlator (\ref{pretaf}) and discuss in a little more detail the implications of the bosonization identity (\ref{bosid}). In order not to write down integrals of $J$, we introduce the scalar field:\footnote{ One can consider $\phi $ as introduced for simplicity of notation, but it should be kept in mind that the scalar-field representation is in fact more fundamental for {\it generic} $\tau$-functions, not related to the level $k=1$ Kac-Moody algebras (this phenomenon is well known in conformal field theory, see \cite{GMMOS} for more details). } \begin{eqnarray} \phi(z) = \sum_{k\in Z,\ k\neq 0}\frac{J_{-k}}{k}z^k + \phi_0 + J_0\log z, \end{eqnarray} such that $\partial\phi(z) = J(z)$. Then (\ref{bosid}) states that: \begin{eqnarray} :\psi( \lambda)\tilde\psi(\tilde \lambda):\ = \ :e^{\phi(\tilde \lambda)-\phi( \lambda)}: \label{bosid'} \end{eqnarray} "Normal ordering" here means nothing more than the requirement to neglect all mutual contractions (or correlators) of the operators between $:\ \ :$ when the Wick theorem is applied to evaluate correlation functions. One can also get rid of the normal-ordering sign on the l.h.s. of (\ref{bosid'}), then \begin{eqnarray} \psi( \lambda)\tilde\psi(\tilde \lambda) = \ :e^{\phi(\tilde \lambda)}:\ :e^{-\phi( \lambda)}: \label{bosid"'} \end{eqnarray} In the distinguished coordinates on the sphere, where the free-field propagator is just $\log( z-\tilde z)$, one also has: \begin{eqnarray} \psi( z)\tilde\psi(\tilde z) = \frac{1}{ z-\tilde z}\ :\psi( z)\tilde\psi(\tilde z): \nonumber \end{eqnarray} Our task now is to express the operators $e^H$ and $e^{\bar H}$ through the field $\phi$. This is simple: \begin{eqnarray} H = \oint_0 U( z)J( z) = \oint_0 U( z)\partial\phi( z) = - \oint_0 \phi( z)\partial U( z). \end{eqnarray} Here, as usual, $U( z) = \sum_{k>0} t_k z^k$ and the integral is taken around $ z = 0$.
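The contour-integral manipulation $H = \oint U\partial\phi = -\oint \phi\,\partial U$ can be checked on truncated Laurent series, with $\oint$ picking the coefficient of $z^{-1}$ (the $2\pi i$ absorbed, as in the text). In the sketch below $\phi$ is truncated to its $k\neq 0$ modes; the $\phi_0$ and $J_0\log z$ pieces do not contribute to these residues since $U(0)=0$:

```python
import sympy as sp

z = sp.symbols('z')
K = 3
t = sp.symbols('t1:4')                                   # t_1, t_2, t_3
J = {n: sp.Symbol('J_%d' % n) for n in range(-K, K+1)}   # a few symbolic modes of J(z)

U   = sum(t[k-1]*z**k for k in range(1, K+1))            # U(z) = sum_{k>0} t_k z^k
Jz  = sum(J[n]*z**(-n-1) for n in range(-K, K+1))        # J(z) = sum_n J_n z^{-n-1}
# phi truncated to its k != 0 modes, so that d(phi)/dz matches J(z) up to J_0/z
phi = sum(-J[n]/sp.Integer(n)*z**(-n) for n in range(-K, K+1) if n != 0)

def res(expr):
    """oint around z = 0: the coefficient of z^{-1}."""
    return sp.expand(expr).coeff(z, -1)

H1 = res(U*Jz)                   # oint U(z) J(z)
H2 = -res(phi*sp.diff(U, z))     # -oint phi(z) dU(z)
target = sum(t[k-1]*J[k] for k in range(1, K+1))         # H = sum_{k>0} t_k J_k

assert sp.expand(H1 - target) == 0
assert sp.expand(H2 - target) == 0
```

Both contour expressions project out exactly the positive harmonics $J_k$ paired with $t_k$, which is the content of the formula above.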
This is very similar to the generic linear functional of $\phi_-(\lambda) \equiv -\sum_{k>0}\frac{1}{k}J_k\lambda^{-k}$, \begin{eqnarray} H = \int \phi_-(\lambda)f(\lambda) d\lambda, \label{hamneg} \end{eqnarray} one should only require that\footnote{ As is usual nowadays, a factor of $2\pi i$ is assumed to be included into the definition of the contour integral $\oint$. } \begin{eqnarray} \partial U( z) = \int \frac{f(\lambda)}{ z - \lambda}d\lambda, \nonumber \end{eqnarray} i.e. \begin{eqnarray} U( z) = \int\log\left(1-\frac{ z}{\lambda}\right) f(\lambda)d\lambda. \end{eqnarray} In terms of the time-variables this means that \begin{eqnarray} t_k = -\frac{1}{k}\int \lambda^{-k}f(\lambda)d\lambda. \label{miwatransint} \end{eqnarray} Here we required that $U( z = 0) = 0$; sometimes it is more natural to introduce also \begin{eqnarray} t_0 = \int \log \lambda\ f(\lambda)d\lambda. \end{eqnarray} This change from the time-variables to the "time density" $f(\lambda)$ is known as the Miwa transformation. In order to establish the relation with the fermionic representation and also with matrix models we shall need it in "discretized" form: \begin{eqnarray} t_k &= \frac{\xi}{k}\left(\sum_{\gamma} \lambda_\gamma^{-k} - \sum_\gamma \tilde \lambda_\gamma^{-k} \right), \nonumber \\ t_0 &= -\xi\left(\sum_\gamma \log \lambda_\gamma - \sum_\gamma\log \tilde \lambda_\gamma\right). \label{miwatrans} \end{eqnarray} We traded the integral over $\lambda$ for a discrete sum, i.e. the density function $f(\lambda)$ is a combination of $\delta$-functions peaked at some points $\lambda_\gamma,\ \tilde \lambda_\gamma$. This is of course just another basis in the space of the linear functionals, but the change from one basis to another one is highly non-trivial. The point is that we selected the basis where the amplitudes of different $\delta$-functions are the {\it same}: the parameter $\xi$ in (\ref{miwatrans}) is {\it independent} of $\gamma$. 
Thus the real parameters are just the positions of the points $\lambda_\gamma, \ \tilde \lambda_\gamma$, while the amplitude is defined by the density of these points in the integration (summation) domain. This domain does not need to be {\it a priori} specified: it can be the real line, any other contour, or, better, some Riemann surface. The parameter $\xi$ is also unnecessary to introduce, because bases with different $\xi$ are essentially equivalent. We shall soon put it equal to {\it one}, but not before the Miwa transformation is discussed in a little more detail. Our next steps will be as follows. Substituting (\ref{miwatrans}) into (\ref{hamneg}) gives: \begin{eqnarray} H = -\xi\sum_\gamma \phi_-(\lambda_\gamma) + \xi\sum_\gamma\phi_-(\tilde\lambda_\gamma). \end{eqnarray} In fact, what we need is not the operator $H$ itself, but the state which is created when $e^H$ acts on the vacuum state $\langle N\mid$. Then, since $\langle N\mid J_m = 0$ for $m<0$, $\langle N\mid e^{-\xi\phi_-(\lambda)}$ is essentially equivalent to $\langle N\mid e^{-\xi\phi(\lambda)}$ with $\phi_-(\lambda)$ substituted by the entire $\phi(\lambda)$. If $\xi = 1 $, $e^{-\phi(\lambda)}$ can further be traded for $\psi(\lambda)$, and we obtain an expression for the correlator (\ref{pretaf}) in which $e^H$ is substituted by a product of operators $\psi(\lambda_\gamma)$. The same is of course true for $e^{\bar H}$. Then the Wick theorem can be applied, and a new type of determinant formulas arises, for example: \begin{eqnarray} \tau \sim \frac{\Delta(\lambda, \tilde\lambda)}{\Delta^2(\lambda)\Delta^2(\tilde\lambda)} {\rm det}_{\gamma\delta} \langle N\mid \psi(\lambda_\gamma) \tilde\psi(\tilde\lambda_\delta)\ G \ \mid N\rangle \end{eqnarray} It can also be obtained directly from (\ref{pretafdet}), (\ref{Hmatrcor}) and (\ref{HversTlm}) by the Miwa transformation. The rest of this subsection describes this derivation in somewhat more detail. The first task is to substitute $\phi_-$ by $\phi$.
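The discrete Miwa transformation can be verified numerically: the series $\sum_{k>0} t_k z^k$ built from the Miwa times must reproduce $U(z) = \xi\sum_\gamma\left[\log(1-z/\tilde\lambda_\gamma) - \log(1-z/\lambda_\gamma)\right]$, with the signs as in (\ref{miwatrans}). A numpy sketch with arbitrarily chosen points lying outside the disk where $z$ sits:

```python
import numpy as np

xi   = 1.0
lam  = np.array([2.0, 3.0, -2.5])     # points lambda_gamma (all with |lam| > |z|)
tlam = np.array([4.0, -3.5, 5.0])     # points tilde-lambda_gamma
zval = 0.7                            # evaluation point inside all the disks

K = 400                               # enough terms: |z/lambda|^K is negligible
ks = np.arange(1, K+1)

# Miwa times t_k = (xi/k) * (sum lam^{-k} - sum tlam^{-k})
tk = (xi/ks)*(np.sum(lam[None, :]**(-ks[:, None]), axis=1)
            - np.sum(tlam[None, :]**(-ks[:, None]), axis=1))

series = np.sum(tk*zval**ks)

# U(z) = xi * sum_gamma [ log(1 - z/tlam) - log(1 - z/lam) ]
U = xi*np.sum(np.log(1 - zval/tlam) - np.log(1 - zval/lam))

assert np.isclose(series, U)
```

The geometric convergence (here $|z/\lambda| \leq 0.35$) is what makes the truncation of the $k$-sum harmless.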
For this purpose we introduce the operator \begin{eqnarray} \sum_{k=-\infty}^{\infty} t_kJ_k = H_+ + H_-, \end{eqnarray} where $H_+ = \sum_{k>0} t_kJ_k$ is just our old $H$, $H_- = \sum_{k\geq 0} t_{-k}J_{-k}$, and the "negative times" $t_{-k}$ are defined by "analytical continuation" of the same formulas (\ref{miwatransint}) and (\ref{miwatrans}): \begin{eqnarray} t_{-k} = \frac{1}{k} \int \lambda^k f(\lambda)d\lambda = -\frac{\xi}{k} \left(\sum_\gamma \lambda_\gamma^k - \sum_\gamma \tilde\lambda^k_\gamma \right). \end{eqnarray} Then \begin{eqnarray} \sum_{k=-\infty}^{\infty} t_kJ_k = H_+ + H_- = -\xi\left(\sum_\gamma \phi(\lambda_\gamma) - \sum_\gamma \phi(\tilde\lambda_\gamma) \right). \label{miwadetder1} \end{eqnarray} Further, \begin{eqnarray} e^{H_+ + H_-} = e^{-\frac{1}{2}s(t)} e^{H_+}e^{H_-} = e^{\frac{1}{2}s(t)} e^{H_-}e^{H_+}, \end{eqnarray} where \begin{eqnarray} \oldfalse \begin{array}{c} s(t) \equiv \sum_{k>0} kt_kt_{-k} = -\xi^2 \sum_{k>0}\frac{1}{k} \left(\sum_\gamma \left(\lambda_\gamma^{-k} - \tilde\lambda_\gamma^{-k}\right) \sum_\delta \left(\lambda_\delta^k - \tilde\lambda_\delta^k \right)\right) = \\ = \xi^2 \log \left(\left. \prod_{\gamma,\delta}\right.^\prime\ \frac{(1-\frac{\lambda_\delta}{\lambda_\gamma}) (1-\frac{\tilde\lambda_\delta}{\tilde\lambda_\gamma})} {(1-\frac{\tilde\lambda_\delta}{\lambda_\gamma}) (1-\frac{\lambda_\delta}{\tilde\lambda_\gamma})}\right) \ + \ {\rm const}, \end{array} \end{eqnarray} where the prime means that the terms with $\gamma = \delta$ are excluded from the product in the numerator and accounted for in the infinite "constant" added on the r.h.s.
In other words, \begin{eqnarray} e^{\frac{1}{2}s(t)} &= {\rm const}\cdot \left( \frac{\prod_{\gamma > \delta} (\lambda_\gamma - \lambda_\delta) (\tilde\lambda_\gamma - \tilde\lambda_\delta)} {\prod_\gamma \prod_\delta (\lambda_\gamma - \tilde\lambda_\delta)} \right)^{\xi^2} = \nonumber \\ &= {\rm const}\cdot \left( \frac{\Delta^2(\lambda) \Delta^2(\tilde\lambda)}{\Delta(\lambda, \tilde\lambda)} \right)^{\xi^2}. \label{miwadetder2} \end{eqnarray} Since $\ \langle N\mid J_m = 0\ $ for all $m<0$, we have $\langle N\mid e^{H_-} = \langle N\mid$, and therefore \begin{eqnarray} \langle N\mid e^H \equiv \langle N\mid e^{H_+} = \langle N\mid e^{H_-}e^{H_+} = e^{-\frac{1}{2}s(t)}\langle N\mid e^{H_+ + H_-}. \end{eqnarray} {}From eq.(\ref{miwadetder1}), \begin{eqnarray} e^{H_+ + H_-} = {\rm const}\cdot \prod_\gamma \ :e^{-\xi\phi(\lambda_\gamma)}: \ :e^{\xi\phi(\tilde\lambda_\gamma)}: \end{eqnarray} where "const" is exactly the same as in (\ref{miwadetder2}). If $\xi = 1$, eq.(\ref{bosid"'}) can be used to write:\footnote{ The choice of $\xi$ can be dictated by particular purposes. Here we impose the requirement that the Miwa transform represent $e^H = e^{H_{Cartan}}$ as a product of dimension-1/2 operators; this is the most natural from the point of view of the Hirota equations and simplifies the relation with integrable hierarchies. However, in sections 2.7 and 2.8 we used another requirement (and there $\xi = \frac{1}{\sqrt{2}}$ rather than $\xi = 1$). There the 1-matrix model was considered, which is characterized by an especially simple form of the $full$ Hamiltonian (a product of dimension-zero operators), and it was more important to adjust the operators which arise from $e^{H_{Cartan}}$ after the Miwa transform to have simple correlators with $e^{A\psi\tilde\psi}$. When analyzing the 1-matrix model from this point of view one should also keep in mind that it was actually represented in s.2.3 in terms of {\it two} complex fermions.
The screening charges are $Q^{(+)} = \oint e^{\sqrt{2}\phi} = \oint\tilde\psi_1\psi_2 = \oint e^{\phi_1-\phi_2},\ \ Q^{(-)} = \oint e^{-\sqrt{2}\phi} = \oint\tilde\psi_2\psi_1 = \oint e^{\phi_2 - \phi_1}$, while $\phi = \frac{1}{\sqrt{2}}(\phi_1 - \phi_2)$. The Hamiltonian is $H_{Cartan} = \frac{1}{\sqrt{2}}\sum_k t_kJ_k = \frac{1}{2} \sum_k t_k(J_k^1 - J_k^2)$, and the Miwa transformation generates insertions of the operators $\chi_1\tilde\chi_2$, where $\chi_1$ and $\tilde\chi_2$ have dimension 1/8 (rather than 1/2 as in the one (complex)-fermion system considered in $this$ section). } \begin{eqnarray} \langle N\mid e^H \ = \ \frac{\Delta(\lambda,\tilde\lambda)} {\Delta^2(\lambda) \Delta^2(\tilde\lambda)} \langle N \mid \prod_\gamma \psi(\lambda_\gamma) \prod_\gamma \tilde\psi(\tilde\lambda_\gamma) \end{eqnarray} Similarly, \begin{eqnarray} e^{\bar H} \mid N \rangle = \prod_\delta \psi(\bar\lambda_\delta) \prod_\delta \tilde\psi(\tilde{\bar\lambda}_\delta) \mid N \rangle \frac{\Delta(\bar\lambda,\tilde{\bar\lambda})} {\Delta^2(\bar\lambda) \Delta^2(\tilde{\bar\lambda})}, \end{eqnarray} where \begin{eqnarray} \bar t_k = - \frac{1}{k} \sum_\delta \left(\bar\lambda_\delta^k - \tilde{\bar\lambda}_\delta^k\right) \end{eqnarray} and we used the fact that $J_m \mid N \rangle = 0 $ for all $m>0$. Finally, \begin{eqnarray} \oldfalse \begin{array}{c} \tau_N\{t, \bar t \mid G\} = \langle N \mid e^H \ G \ e^{\bar H} \mid N \rangle = \frac{\Delta(\lambda,\tilde\lambda)} {\Delta^2(\lambda) \Delta^2(\tilde\lambda)} \frac{\Delta(\bar\lambda,\tilde{\bar\lambda})} {\Delta^2(\bar\lambda) \Delta^2(\tilde{\bar\lambda})} \cdot \\ \cdot \langle N \mid \prod_\gamma \psi(\lambda_\gamma) \prod_\gamma \tilde\psi(\tilde\lambda_\gamma)\ G\ \prod_\delta \psi(\bar\lambda_\delta) \prod_\delta \tilde\psi(\tilde{\bar\lambda}_\delta) \mid N \rangle .
\end{array} \label{tautodamiwdet} \end{eqnarray} Singularities at the coinciding points are completely eliminated from this expression, since the poles and zeroes of the correlator are canceled by those coming from the Vandermonde determinants. Let us now put $N=0$ and define the normalized $\tau$-function \begin{eqnarray} \hat \tau_0\{t,\bar t \mid G\} \equiv \frac{\tau_0\{t,\bar t\mid G\}}{\tau_0\{0,0\mid G\}}, \end{eqnarray} i.e. divide the r.h.s. of (\ref{tautodamiwdet}) by $\langle 0 \mid G \mid 0 \rangle$. The Wick theorem now allows one to rewrite the correlator on the r.h.s. as the determinant of a block matrix: \begin{eqnarray} \oldfalse {\rm det}\left(\begin{array}{cc} \frac{\langle 0 \mid \psi(\lambda_\gamma) \tilde\psi(\tilde\lambda_\delta)\ G \ \mid 0 \rangle}{\langle 0 \mid \ G \ \mid 0 \rangle} & \frac{\langle 0 \mid \psi(\lambda_\gamma)\ G \ \tilde\psi(\tilde{\bar\lambda}_\delta) \mid 0 \rangle}{\langle 0 \mid \ G \ \mid 0 \rangle} \\ -\frac{\langle 0 \mid \tilde\psi(\tilde\lambda_\delta)\ G \ \psi(\bar\lambda_\gamma) \mid 0 \rangle}{\langle 0 \mid \ G \ \mid 0 \rangle} & \frac{\langle 0 \mid \ G\ \psi(\bar\lambda_\gamma) \tilde\psi(\tilde{\bar\lambda}_\delta) \mid 0 \rangle}{\langle 0 \mid \ G \ \mid 0 \rangle} \end{array} \right) \label{blockform} \end{eqnarray} Special choices of the points $\lambda_\gamma, \ldots, \tilde{\bar\lambda}_\delta$ can lead to simpler formulas. If $\tilde{\bar\lambda}_\gamma \rightarrow {\bar\lambda}_\gamma$, so that $\bar t_k \rightarrow 0$, the matrix elements in the lower right block of (\ref{blockform}) blow up, so that the off-diagonal blocks can be neglected.
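The cancellation of singularities noted above rests on the classical Cauchy determinant identity, ${\rm det}_{ij}\frac{1}{x_i+y_j} = \frac{\Delta(x)\Delta(y)}{\prod_{i,j}(x_i+y_j)}$, which also underlies the combination $\frac{\Delta^2(\lambda)\Delta^2(\tilde\lambda)}{\Delta(\lambda,\tilde\lambda)}$ in (\ref{miwadetder2}). A minimal numerical check (the sample points are arbitrary):

```python
import numpy as np
from itertools import combinations

# Cauchy determinant identity (sample points are arbitrary):
#   det_{ij} 1/(x_i + y_j) = Delta(x) Delta(y) / prod_{i,j} (x_i + y_j),
# with Delta(v) = prod_{i<j} (v_j - v_i).  It is this identity that makes the
# Vandermonde prefactors cancel the poles/zeroes of the fermionic correlator.
x = np.linspace(1.0, 2.0, 5)
y = np.linspace(1.2, 2.2, 5)

def vandermonde(v):
    return np.prod([v[j] - v[i] for i, j in combinations(range(len(v)), 2)])

lhs = np.linalg.det(1.0 / (x[:, None] + y[None, :]))
rhs = vandermonde(x) * vandermonde(y) / np.prod(x[:, None] + y[None, :])
assert abs(lhs - rhs) < 1e-8 * abs(rhs)
```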
Then \begin{eqnarray} \oldfalse \begin{array}{c} \tau_0\{t,\bar t \mid G\} \rightarrow \tau^{KP}\{t \mid G\} = \frac{\langle 0 \mid e^H \ G \ \mid 0 \rangle}{\langle 0 \mid \ G \ \mid 0 \rangle} = \\ = \frac{\Delta(\lambda,\tilde\lambda)} {\Delta^2(\lambda) \Delta^2(\tilde\lambda)} {\rm det}_{\gamma\delta} \frac{\langle 0 \mid \psi(\lambda_\gamma) \tilde\psi(\tilde\lambda_\delta)\ G\ \mid 0 \rangle}{\langle 0 \mid \ G \ \mid 0 \rangle} . \end{array} \label{kpdetrep} \end{eqnarray} This function no longer depends on the $\bar t$-times and is just a KP $\tau$-function. The matrix element \begin{eqnarray} \varphi(\lambda,\tilde\lambda) = \frac{\langle 0 \mid \psi(\lambda)\tilde\psi( \tilde\lambda)\ G\ \mid 0 \rangle}{\langle 0 \mid G \mid 0 \rangle} \end{eqnarray} is singular when $\lambda \rightarrow \tilde\lambda$: $\varphi(\lambda,\tilde\lambda) \rightarrow \frac{1}{\lambda - \tilde\lambda}$. If now all $\tilde\lambda \rightarrow \infty$ in (\ref{kpdetrep}), \begin{eqnarray} \tau^{KP}\{t \mid G\} = \frac{{\rm det}_{\gamma\delta}\varphi_\delta(\lambda_\gamma)}{\Delta(\lambda)}, \label{KPdetmain} \end{eqnarray} where \begin{eqnarray} \varphi_\delta(\lambda) \equiv \langle 0 \mid \psi(\lambda) \left(\partial^{\delta-1}\tilde\psi\right)(\infty)\ G \ \mid 0 \rangle \sim \lambda^{\delta -1} \left(1 + {\cal O}\left(\frac{1}{\lambda}\right)\right). \end{eqnarray} This is the main determinant representation of the KP $\tau$-function in the Miwa parametrization. Starting from the representation (\ref{KPdetmain}) one can restore the corresponding matrix ${\cal H}^{KP}_{ij}$ in eq.(\ref{addkptau1}) \cite{Toda}: \begin{eqnarray} {\cal H}^{KP}_{ij}\{t\} = \oint z^i\varphi_{-j}(z) e^{\sum_k t_kz^k} dz, \label{addkptau3} \end{eqnarray} i.e. \begin{eqnarray} T_{lj}^{KP} = \oint z^l\varphi_{-j}(z). \end{eqnarray} Then obviously $\displaystyle{\frac{\partial {\cal H}^{KP}_{ij}}{\partial t_k} = {\cal H}^{KP}_{i+k,j}}$.
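The property $\frac{\partial {\cal H}^{KP}_{ij}}{\partial t_k} = {\cal H}^{KP}_{i+k,j}$ is also easy to test numerically; in the sketch below the choice $\varphi_{-j}(z) = z^{-j-1}$ is purely illustrative (it corresponds to no particular $G$), and the contour integral is evaluated on the unit circle:

```python
import numpy as np

# H_{ij}(t) = (1/2 pi i) oint z^i phi_{-j}(z) exp(sum_k t_k z^k) dz, with the
# illustrative choice phi_{-j}(z) = z^(-j-1); quadrature on the unit circle.
M = 2048
z = np.exp(2j * np.pi * np.arange(M) / M)

def H(i, j, t):
    integrand = z ** (i - j - 1) * np.exp(sum(tk * z ** k for k, tk in t.items()))
    return np.mean(integrand * z)  # dz = i z d(theta) absorbs the 1/(2 pi i)

t = {1: 0.3, 2: -0.1, 3: 0.05}
i, j, k, eps = 2, 4, 2, 1e-6
tp = dict(t); tp[k] += eps
tm = dict(t); tm[k] -= eps
dH = (H(i, j, tp) - H(i, j, tm)) / (2 * eps)   # d H_{ij} / d t_k
assert abs(dH - H(i + k, j, t)) < 1e-6          # equals H_{i+k, j}
```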
Now we need to prove that the $\tau$-function is given simultaneously by $\frac{{\rm det}\ \varphi_\gamma(\lambda_\delta)}{\Delta(\lambda)}$ and by ${\rm Det} {\cal H}^{KP}_{ij}\{t\}$. In order to compare these two expressions one should take $t_k = \frac{1}{k}\sum_\gamma^n \lambda_\gamma^{-k}$, so that \begin{eqnarray} \exp\left({\sum_{k>0} t_kz^k}\right) = \prod_{\gamma =1}^n \frac{\lambda_\gamma}{\lambda_\gamma - z} = \left(\prod_\gamma^n \lambda_\gamma\right) \sum_\gamma \frac{(-)^\gamma}{z-\lambda_\gamma} \frac{\Delta_\gamma(\lambda)}{\Delta(\lambda)}, \end{eqnarray} where \begin{eqnarray} \Delta_\gamma(\lambda) = \prod_{\stackrel{\alpha>\beta}{\alpha,\beta\neq \gamma}}(\lambda_\alpha - \lambda_\beta) = \frac{\Delta(\lambda)}{\prod_{\alpha\neq \gamma}(\lambda_\alpha - \lambda_\gamma)}, \end{eqnarray} and \begin{eqnarray} \left.{\cal H}^{KP}_{ij}\right|_ {t_k = \frac{1}{k}\sum_\gamma^n \lambda_\gamma^{-k}} = \left(\prod_\gamma^n \lambda_\gamma\right) \sum_\gamma \frac{(-)^{\gamma+1}\Delta_\gamma(\lambda)}{\Delta(\lambda)} \lambda_\gamma^i\varphi_{-j}(\lambda_\gamma). \label{derivkpvertod1} \end{eqnarray} As long as $n$ is kept finite, the determinant of the infinite-size matrix (\ref{derivkpvertod1}) vanishes, $\displaystyle{\left.{\rm Det}_{i,j<0}{\cal H}^{KP}_{ij}\right|_{t_k = \frac{1}{k}\sum_\gamma^n \lambda_\gamma^{-k}} = 0}$, since it is obvious from (\ref{derivkpvertod1}) that the rank of the matrix equals $n$. Therefore let us consider the maximal non-vanishing determinant, \begin{eqnarray} \oldfalse \begin{array}{c} \left.{\rm Det}_{-n\leq i,j<0}{\cal H}^{KP}_{ij}\right|_{t_k = \frac{1}{k}\sum_\gamma^n \lambda_\gamma^{-k}} = \\ = \left(\prod_\gamma^n \lambda_\gamma\right)^n {\rm det}_{i\gamma} \left( \frac{(-)^{\gamma+1}\Delta_\gamma(\lambda)}{\lambda_\gamma^i\Delta(\lambda)} \right) \cdot {\rm det}_{\gamma j}\varphi_{j}(\lambda_\gamma) = \\ = \frac{{\rm det}_{\gamma j}\varphi_{j}(\lambda_\gamma)}{\Delta(\lambda)}.
\end{array} \label{derivkpvertod2} \end{eqnarray} Here we used the fact that the determinant of a product of matrices is the product of their determinants, and reversed the signs of $i$ and $j$. Also used were some simple relations: \begin{eqnarray} \oldfalse \begin{array}{lc} &\prod_{\gamma=1}^n \frac{\Delta_\gamma(\lambda)}{\Delta(\lambda)} = \frac{1}{\Delta^2(\lambda)}, \nonumber \\ &{\rm det}_{i\gamma}\frac{1}{\lambda_\gamma^i} = \left(\prod_\gamma^n \lambda_\gamma\right)^{-1}\Delta(1/\lambda), \\ &\Delta(1/\lambda) = \prod_{\alpha>\beta}\left(\frac{1}{\lambda_\alpha} - \frac{1}{\lambda_\beta}\right) = (-)^{n(n-1)/2} \Delta(\lambda) \left(\prod_\gamma^n\lambda_\gamma\right)^{-(n-1)}, \\ {\rm thus}& \\ &\left(\prod_\gamma^n\lambda_\gamma\right)(-)^{n(n-1)/2} \prod_{\gamma=1}^n \frac{\Delta_\gamma(\lambda)}{\Delta(\lambda)} {\rm det}_{i\gamma}\frac{1}{\lambda_\gamma^i} = \frac{1}{\Delta(\lambda)}. \end{array} \end{eqnarray} Since (\ref{derivkpvertod2}) is true for any $n$, one can claim that in the limit $n \rightarrow \infty$ we recover the statement that $\tau^{KP}\{t\} = {\rm Det}_{i,j<0} {\cal H}_{ij}^{KP}$ with ${\cal H}_{ij}^{KP}$ given by eq.(\ref{derivkpvertod1}) (that formula does not refer directly to the Miwa parametrization and is defined for any $t$ and any $j<0$ and $i$). This relation between the $\varphi_\gamma$'s and ${\cal H}_{ij}^{KP}$ can now be used to introduce negative times $\bar t_k$ according to the rule (\ref{addkptau2}). Especially simple is the prescription for zero-time: ${\cal H}_{ij} \rightarrow {\cal H}_{i+N,j+N}$, which, when expressed in terms of $\varphi$, just implies that \begin{eqnarray} \frac{{\rm det}\ \varphi_\gamma(\lambda_\delta)}{\Delta(\lambda)} \rightarrow \frac{{\rm det}\ \varphi_{\gamma +N}(\lambda_\delta)} {({\rm det}\Lambda)^N\Delta(\lambda)}.
\end{eqnarray} Generalizations of (\ref{addkptau3}), like \begin{eqnarray} {\cal H}_{ij}\{t,\bar t\} = \oint \oint z^i\bar z^j \langle 0 \mid \psi(z) \ G\ \tilde\psi(\bar z)\mid 0 \rangle e^{\sum_{k} (t_kz^k + \bar t_k\bar z^k)} dzd\bar z, \end{eqnarray} can also be considered. \subsection{Matrix models versus $\tau$-functions.} We are now prepared to return to our main subject and discuss integrability properties of eigenvalue matrix models. The claim is that partition functions of all these models, when considered as functions of the time-variables (parametrizing the shapes of potentials), are in fact $\tau$-functions of (perhaps, multicomponent) Toda-lattice and/or KP type. (Interesting non-eigenvalue models are believed to be related to integrable systems of more general type, not restricted to level $k=1$ Kac-Moody algebras.) Partition functions are, however, not generic Toda or KP $\tau$-functions: first, they usually belong to some reduced hierarchies; second, the relevant operators $G$ (points of the Grassmannian) are restricted to stay in peculiar domains of the universal module space, specified by "string equations". The string equation is in fact nothing but the set of Ward-identities (Virasoro or $W$-constraints in the examples under investigation), which are now interpreted as equations on $G$. The very possibility of such an interpretation is highly non-trivial and reflects some deep relation between the constraints and the integrable structure. In the case of Virasoro constraints this is not a puzzle, because the Virasoro algebra is a symmetry (covariance) of the hierarchy; the situation with other constraints is less clear (see the footnote at section 4.3).
In fact, when applied to a $\tau$-function of an appropriately reduced hierarchy, the infinitely many constraints usually become dependent, and it is enough to impose only the lowest Virasoro constraint $L_{-1}\tau = 0$ (or ${\cal L}_{-p}\tau = 0$, where $p$ is the degree of reduction), in order to recover the entire set \cite{FKN}. It is this lowest constraint (or rather its $t_1$-derivative, $\frac{\partial}{\partial t_1}(L_{-1}\tau) = 0$) that traditionally carries the name of "string equation". It is often much simpler to deduce than the entire set of identities, which is important in practical applications (especially because determinant formulas, which imply integrability, can also be simpler to find in some situations than the Ward identities). In order to give a complete description of some sort of (matrix) models from the point of view of integrability theory it is enough to specify the hierarchy to which it belongs - the partition function being interpreted as a $\tau$-function, \begin{eqnarray} Z_{\rm model}\{t\} = \tau \{t \mid G_{\rm model}\} \end{eqnarray} - and the string equation, which serves to fix the operator $G$ - the point in the universal module space.\footnote{ As we argued in the Introduction and in section 2.1, the word "matrix" can probably be omitted if generic Lagrangians are considered in other models of quantum field theory. Also the universal module space (where the moduli are those of bundles over {\it spectral} Riemann surfaces) can (and should) be treated as a "space of theories". It is one of the great puzzles (and beauties) of string theory that Riemann surfaces appear both in the world-sheet and spectral "dimensions". See \cite{GLM} for more discussion on this issue.} After that it becomes an internal (yet unsolved) problem of integrability theory to explain what is so special about the set of points $\{G_{\rm model}\}$ in this space. (We shall touch this problem in the next subsection, devoted to Kac-Schwarz operators.)
Alternatively, if there is nothing special, it is an (unsolved) problem of matrix model theory to find the models associated with {\it any} points $G$ in the universal module space (or explain what is an obstacle, if any). We now proceed to the description of particular matrix models from this point of view. As everywhere in these notes, we consider only the most important classes: scalar-product, conformal (multicomponent) and Generalized Kontsevich models (GKM). All other examples (like models of complex, orthogonal, unitary etc. matrices) can be included into consideration with more or less effort (see \cite{MMMM} and \cite{Bowick} for the cases of complex and unitary models respectively), but they do not add much to the general theory that we are now considering. String equations will be discussed in the next subsection. {\it Scalar-product models}. These were exhaustively discussed in sections 3.5-3.7. We remind that all conventional multimatrix models (with inter-matrix interaction of the form $\exp ({\rm Tr} H^{(\alpha)}H^{(\alpha+1)})$) belong to this class. The crucial formulas are: \begin{eqnarray} Z_N &= {\rm Det}_N {\cal H}_{ij}^f = {\rm Det}_{0 \leq i,j \leq N-1} {\cal H}_{ij}^f = \nonumber \\ &= {\rm Det}_{-N \leq i,j < 0}{\cal H}_{i+N,j+N}^f; \nonumber \\ {\cal H}_{ij}^f &= \frac{\partial^2}{\partial t_i\partial \bar t_j} {\cal H}^f = \left(\frac{\partial}{\partial t_1}\right)^i \left(\frac{\partial}{\partial \bar t_1}\right)^j {\cal H}^f. \label{scapro1} \end{eqnarray} Here \begin{eqnarray} {\cal H}_{ij}^f = \langle h^i \mid \bar h^j\rangle = \int d\hat\mu_{h,\bar h}e^{U(h)+\bar U(\bar h)} h^i\bar h^j.
\end{eqnarray} Further, \begin{eqnarray} e^{U(h)} = e^{\sum_{k\geq 0} t_kh^k} = \sum_l h^lP_l(t); \nonumber \\ e^{\bar U(\bar h)} = e^{\sum_{k\geq 0}\bar t_k\bar h^k} = \sum_m \bar h^mP_m(\bar t) \end{eqnarray} and thus \begin{eqnarray} {\cal H}_{ij}^f &= \sum_{l,m} \langle\langle h^{i+l}\mid \bar h^{j+m}\rangle \rangle P_l(t)P_m(\bar t) = \sum_{l,m} T_{lm}^f P_{l-i}(t)P_{m-j}(\bar t), \nonumber \\ T_{lm}^f &= \langle\langle h^{l}\mid \bar h^{m}\rangle\rangle, \end{eqnarray} where the scalar product $\langle\langle \ \mid \ \rangle\rangle$ is w.r.to the measure $d\hat\mu_{h,\bar h}$ (while $\langle \ \mid\ \rangle$ is w.r.to $d\mu_{h,\bar h} = e^{U(h)+\bar U(\bar h)} d\hat\mu_{h,\bar h}$). One would immediately recognize in these formulas the representation (\ref{todatau}) of the Toda-lattice $\tau$-function, were there no additional restriction that the determinant in (\ref{scapro1}) is over a finite-dimensional $N\times N$ matrix (the indices are constrained: $i,j\geq -N$). This can be automatically taken into account if we require that \begin{eqnarray} T_{lm}^f = \infty \ \ {\rm for\ all}\ l,m<0, \end{eqnarray} and identify $Z_N$ as a $\tau$-function $\tau^f$ of the {\it forced} Toda-lattice hierarchy (thus the superscript $f$ carried by ${\cal H}$ and $T$). We conclude that the partition function of any scalar-product model is a $\tau^f$-function of the forced Toda-lattice hierarchy. Let us now consider them as KP $\tau$-functions. This means that the $\bar t$-dependence is simply ignored. However, $N$ will be preserved explicitly as a parameter, labeling the KP $\tau$-function. After the Miwa transformation $t_k = -\frac{1}{k}\sum_\gamma\lambda_\gamma^{-k} - r_k$, described in section 3.7, we get: \begin{eqnarray} Z_N = \hat Z_N \frac{{\rm det}_{\gamma\delta}\hat Q_{N+\gamma-1}(\lambda_\delta)}{\Delta(\lambda)}, \end{eqnarray} where $\hat Q$ are orthogonal polynomials w.r.to the measure $d\hat\nu_{h,\bar h} = e^{-\sum_k r_kh^k}d\hat\mu_{h,\bar h}$.
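Note that the normalization here is consistent: for {\it any} family of monic polynomials $\hat Q_k$ of degree $k$ one has ${\rm det}_{\gamma\delta}\hat Q_{\gamma-1}(\lambda_\delta) = \Delta(\lambda)$ (row operations reduce the matrix to the Vandermonde one), so at $N=0$ the determinant ratio equals 1. A quick numerical illustration, with monic Hermite polynomials as an arbitrary choice of orthogonal family:

```python
import numpy as np
from itertools import combinations

# For ANY monic polynomials Q_k of degree k, row operations give
#   det_{gamma delta} Q_{gamma-1}(lambda_delta) = Delta(lambda).
# Checked with monic ("probabilists'") Hermite polynomials,
#   He_{k+1} = x He_k - k He_{k-1}.
def monic_hermite(k, x):
    p_prev, p = np.ones_like(x), x.copy()
    if k == 0:
        return p_prev
    for m in range(1, k):
        p_prev, p = p, x * p - m * p_prev
    return p

lam = np.linspace(-2.0, 2.0, 6)
Q = np.array([monic_hermite(g, lam) for g in range(6)])   # row g = He_g(lam)
delta = np.prod([lam[j] - lam[i] for i, j in combinations(range(6), 2)])
assert abs(np.linalg.det(Q) - delta) < 1e-8 * abs(delta)
```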
We conclude that in the framework of the KP hierarchy the scalar-product models are distinguished by the fact that the corresponding $\varphi_\gamma(\lambda)$ in (\ref{KPdetmain}) are {\it polynomials} rather than infinite series in powers of $\lambda^{-1}$. {\it 1-Matrix model}. This is a particular example of a scalar-product model with a {\it local} measure $d\mu_{h,\bar h} = e^{U(h)+\bar U(\bar h)}\delta(h-\bar h)dhd\bar h$. In this case \begin{eqnarray} {\cal H}^f_{ij} = \langle h^i \mid \bar h^j \rangle = \langle h^{i+j} \rangle = \frac{\partial}{\partial t_{i+j}}{\cal H}^f = \left(\frac{\partial}{\partial t_1}\right)^{i+j}{\cal H}^f. \end{eqnarray} Thus in this case we deal with the (forced) Toda-chain reduction of the Toda-lattice hierarchy. At the end of this section 4 we use orthogonal polynomials to present a detailed description of 1-matrix models as a Toda-chain $\tau$-function. This model can alternatively be defined as the Gaussian Kontsevich model: see section 3.8. The fact that the partition function is a $\tau$-function then follows from the general statement for GKM, see below. The fact that it is a {\it forced} $\tau$-function is related to the property $c_{-N} = 0$, mentioned at the end of s.3.8 (and proved in s.3.9). Also the reduction to the Toda-chain hierarchy can be observed directly in terms of GKM: see ref.\cite{Toda} for more details. {\it Multicomponent (conformal) matrix models}. These are related to multicomponent hierarchies, with $\tau$-functions representable as correlators in multi-fermion systems. An example of a determinant formula which substitutes for (\ref{todatau}) in the 2-component case is given at the end of section 3.5, where it is derived from consideration of the relevant matrix model \cite{comamo}. For a derivation of the same determinant formula in the theory of $\tau$-functions see ref.\cite{HOS}. The generic theory of multicomponent hierarchies is now making its first steps, and we do not review it in these notes.
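For the 1-matrix (Toda-chain) case above, ${\cal H}^f_{ij} = \langle h^{i+j}\rangle$ is a Hankel matrix of moments, and its determinant can be checked against the familiar product of norms of monic orthogonal polynomials. A sketch with the Gaussian weight $e^{-h^2/2}$ (our illustrative choice): ${\rm det}_{0\le i,j<N}\, m_{i+j} = \prod_{k=0}^{N-1}k!\cdot(2\pi)^{N/2}$.

```python
import numpy as np
from math import factorial, sqrt, pi

# Moments of the Gaussian weight e^{-h^2/2}:  m_{2k} = (2k-1)!! sqrt(2 pi),
# odd moments vanish.  The Hankel determinant det_{0<=i,j<N} m_{i+j} (the
# forced Toda-chain tau-function at t = 0 for this weight) equals the product
# of norms of the monic Hermite polynomials, prod_{k<N} k! * (2 pi)^(N/2).
def moment(j):
    if j % 2:
        return 0.0
    dfact = 1.0
    for m in range(1, j, 2):   # (j-1)!! = 1*3*...*(j-1)
        dfact *= m
    return dfact * sqrt(2.0 * pi)

N = 4
hankel = np.array([[moment(i + j) for j in range(N)] for i in range(N)])
expected = np.prod([factorial(k) for k in range(N)]) * (2.0 * pi) ** (N / 2)
assert abs(np.linalg.det(hankel) - expected) < 1e-9 * expected
```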
See \cite{Kavdl} for the group-theory approach to the problem. {\it Generalized Kontsevich model}. Determinant formulas for this case are derived in section 3.3. The most important expression is \begin{eqnarray} Z_V\{N,T\} = \frac{1}{({\rm det}\Lambda)^N}\frac{{\rm det}_{\gamma\delta}\varphi_{\gamma+N}(\lambda_\delta)}{\Delta(\lambda)}, \label{gkmdet} \end{eqnarray} where \begin{eqnarray} \varphi_\gamma(\lambda) &= \frac{1}{\sqrt{2\pi}} e^{-\lambda V'(\lambda) + V(\lambda)} \sqrt{V''(\lambda)} \int x^{\gamma-1}e^{-V(x) + V'(\lambda)x}dx = \nonumber \\ &= \lambda^{\gamma-1}(1 + {\cal O}(\lambda^{-1})); \label{gkmdet1} \end{eqnarray} and \begin{eqnarray} \varphi_\gamma(\lambda) = {\cal A}\varphi_{\gamma-1}(\lambda) = {\cal A}^{\gamma-1}\Phi(\lambda). \label{gkmKasch} \end{eqnarray} For $N=0$ this is just the representation characteristic of the KP $\tau$-function in the Miwa parametrization $T_k = \frac{1}{k}{\rm tr}\Lambda^{-k}$, see eq.(\ref{KPdetmain}) above. Thus \begin{eqnarray} Z_V\{T\} = \tau^{KP}\{T \mid G_V\}, \end{eqnarray} where it is the operator $G$ (the point in the Grassmannian) that depends on the shape of the potential $V(X)$. We also remind that the only way in which $Z$ depends on the size of the matrix $n$ is through the {\it domain of variation} of the time variables $T$.
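A simple consistency check of the normalization in (\ref{gkmdet}): at the trivial point of the Grassmannian, $\varphi_\gamma(\lambda) = \lambda^{\gamma-1}$ exactly, one has ${\rm det}_{\gamma\delta}\lambda_\delta^{\gamma+N-1} = ({\rm det}\Lambda)^N\Delta(\lambda)$, so $Z = 1$ for every $N$:

```python
import numpy as np
from itertools import combinations

# Trivial point of the Grassmannian: phi_gamma(lambda) = lambda^(gamma-1).
# Then det_{gd} lambda_d^(gamma+N-1) = (prod_d lambda_d)^N * Delta(lambda),
# so the determinant ratio in the GKM formula equals 1 for every N.
n, N = 5, 3
lam = np.linspace(0.5, 2.0, n)
shifted = np.array([[l ** (g + N) for l in lam] for g in range(n)])  # g = gamma-1
delta = np.prod([lam[j] - lam[i] for i, j in combinations(range(n), 2)])
Z = np.linalg.det(shifted) / (np.prod(lam) ** N * delta)
assert abs(Z - 1.0) < 1e-9
```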
If (\ref{gkmdet}) is extended to the full Toda-lattice $\tau$-function, by introduction of negative times, we get \cite{Toda}: \begin{eqnarray} \oldfalse \begin{array}{c} Z_V\{T,N,\bar T\} = \frac{{\cal C}_V^{-1}(\Lambda)}{({\rm det}\Lambda)^N} e^{-\sum_{k>0}\bar T_k{\rm tr}\Lambda^{-k}} \times \\ \times\int_{n\times n} dX ({\rm det}X)^N\exp\left( - {\rm tr}V(X) + {\rm tr}\Lambda X + \sum_{k>0} \bar T_k {\rm tr}X^{-k} \right) \end{array} \end{eqnarray} When {\it this} extended partition function is considered as a KP $\tau$-function, we have instead of (\ref{gkmdet}): \begin{eqnarray} Z_V\{T,N,\bar T\} = \frac{1}{({\rm det}\Lambda)^N}\frac{{\rm det}_{\gamma\delta}\varphi_{\gamma+N}^{\{\hat V\}} (\lambda_\delta)}{\Delta(\lambda)}, \label{gkmdet2} \end{eqnarray} and the relevant $\varphi$-functions are \begin{eqnarray} \varphi_{\gamma+N}^{\{\hat V\}}(\lambda) &= \frac{1}{\sqrt{2\pi}} e^{-\lambda V'(\lambda) + \hat V(\lambda)} \sqrt{V''(\lambda)} \int x^{\gamma-1} e^{-\hat V(x) + V'(\lambda)x}dx = \nonumber \\ &= \lambda^{N+\gamma-1}(1 + {\cal O}(\lambda^{-1})), \end{eqnarray} with \begin{eqnarray} \hat V(x) &\equiv V(x) - N\log x - \sum_{k>0}\bar T_kx^{-k}, \nonumber \\ V(x) &= \hat V_+(x) \end{eqnarray} (where $\hat V_+(x)$ is the positive-power fragment of the Laurent series $\hat V(x)$). The functions $\varphi_\gamma(\lambda)$ in (\ref{gkmdet1}) are just equal to $\displaystyle{ \left.\varphi_\gamma^{\{\hat V\}}(\lambda)\right|_{\bar T = 0}}$. \subsection{String equations and general concept of reduction} The role of the string equation is to fix the point $G$ in the universal module space (UMS) associated with the particular matrix model, so that the partition function, considered as a function of the time variables, will appear as the corresponding $\tau$-function of a fixed shape. In this sense the idea behind the string equation is exactly the same as that behind the reduction of integrable hierarchies.
The difference is that {\it linear} reductions, as defined in section 4.5 above, are not enough to fix $G$ unambiguously: they just specify certain subsets in the Grassmannian, which are still infinite-dimensional. The reason why it is usually linear reductions that are considered in the conventional theory of integrable hierarchies is that they are associated with the simplest possible - Kac-Moody - subalgebras in the entire $GL(\infty)$. String equations, even their simplest examples, are usually fragments of more complicated algebras - Virasoro and $W$-algebras - and are in fact considerably more restrictive. Moreover, the string equation is usually a {\it distinguished} fragment, because it usually belongs to the Virasoro component of the Ward identities, and Virasoro is still a {\it Lie} subalgebra of $GL(\infty)$. This is what makes the problem of string equations very similar to the "classical" one of linear reduction. More specifically, in order to include string equations (and in fact the entire set of Virasoro - but not $W$ - constraints) into the consideration of reduction it is enough to allow the coefficients in (\ref{redtotau1}) to depend on $t$ and $\bar t$, without changing the order of time-derivatives. Of course, there are no obvious reasons to think that {\it any} point $G$ in the UMS can be selected by imposing this kind of linear-derivative constraints on the $\tau$-function, and further investigation may require an essential generalization of such a restricted notion of the string equation. However, some of the eigenvalue matrix models are already known to possess string equations of such a simple type, associated with Virasoro subalgebras of $GL(\infty)$. We do not go into details of the general theory - it is far from complete yet - but instead present several examples of how string equations arise in particular matrix models.
These examples also illustrate the simplifications arising when only the string equations, and not the entire sets of Ward identities, need to be derived. In particular, it is clear that in cases when $\tau$ is represented as ${\rm Det}_{ij} {\cal H}_{ij}$, a linear differential equation imposed on ${\cal H}_{ij}$ will give rise to a similar equation on $\tau$ itself. Most of the known string equations can be derived with the help of this technical idea. They are usually associated with invariance of integrals under {\it constant} shifts of integration variables $\delta h =\ const$ in scalar-product and other discrete models, and with the action of the operator ${\rm tr} \frac{\partial}{\partial L_{tr}}$ in GKM. For somewhat more involved ideas associated with string equations see \cite{GMoo}. {\it Scalar-product models}. The string equation can easily be deduced for a very specific type of measure $d\hat\mu_{h,\bar h}$. Since the integral \begin{eqnarray} {\cal H}_{ij} = \int h^i\bar h^j e^{U(h)+\bar U(\bar h)} d\hat\mu_{h,\bar h} \end{eqnarray} is invariant under the shift of the integration variable $\delta h =\ const$, \begin{eqnarray} \int h^i\bar h^j e^{U(h)+\bar U(\bar h)} d\hat\mu_{h,\bar h} \left[ ih^{-1} + \frac{\partial U(h)}{\partial h} + \frac{\partial}{\partial h} \log (d\hat\mu_{h,\bar h}) \right] = 0, \end{eqnarray} or \begin{eqnarray} i{\cal H}_{i-1,j} + \sum_{k>0} kt_k\frac{\partial}{\partial t_{k-1}} {\cal H}_{ij} + \left[ S\left(\frac{\partial}{\partial t},\frac{\partial}{\partial \bar t} \right)\right]_{ij} = 0. \end{eqnarray} The string equation arises straightforwardly when the operator $S$ is linear. This is true if $\log (d\hat\mu_{h,\bar h}) \sim hf(\bar h)$ with any function $f(\bar h)$. If the measure $d\hat\mu_{h,\bar h}$ is also required to be symmetric in $h$ and $\bar h$, we obtain the conventional 2-matrix model as the only example: \begin{eqnarray} d\hat\mu_{h,\bar h} = e^{ch\bar h}dhd\bar h.
\end{eqnarray} The equation for ${\cal H}_{ij}$ is: \begin{eqnarray} \phantom.\left(\sum_{k>0} kt_k\frac{\partial}{\partial t_{k-1}} + c\frac{\partial}{\partial \bar t_1}\right) {\cal H}_{ij} = - i{\cal H}_{i-1,j}. \label{hfor2mamo} \end{eqnarray} Its implication for $\hat \tau_N$ is: \begin{eqnarray} \left(\sum_{k>0} kt_k\frac{\partial}{\partial t_{k-1}} + c\frac{\partial}{\partial \bar t_1}\right) \hat\tau_N = 0, \label{sefor2mamo} \end{eqnarray} since the r.h.s. of (\ref{hfor2mamo}) does not contribute to the determinant (the entries in the $i$-th row are proportional to those in the $(i-1)$-th row). In the particular case of the {\it 1-matrix model}, $c=0$, and we recognize the lowest Virasoro constraint $L_{-1}\hat\tau_N = 0$. Traditionally the name of string equation is given not to the $L_{-1}$-constraint itself, but to its $t_1$-derivative: $\frac{\partial}{\partial t_1}(L_{-1}\hat\tau_N) = 0$. For the 2-matrix model, (\ref{sefor2mamo}) is the lowest ($m=1,\ n=0$) component of the Ward identities $\left(\tilde W_{n-m}^{(m+1)}(t) - (-)^{m+n}c^{n+1} \tilde W_{m-n}^{(n+1)}(\bar t)\right)\hat\tau_N = 0$. Of course, there is also a similar equation with $t \leftrightarrow \bar t$. {\it Multicomponent (conformal) models}. The crucial feature of these models is that the inter-matrix interaction, when rewritten in terms of eigenvalues, usually contains only the differences $\ h_i^{({\Bf\alpha})} - h_j^{({\Bf\beta})}\ $. Thus there is usually covariance under the {\it simultaneous} shift of all eigenvalues $\ \delta h_i^{({\Bf\alpha})} = const\ $ by the {\it same} constant. This gives rise to a string equation of the form \begin{eqnarray} \left( \sum_{{\Bf\alpha}} L_{-1}^{({\Bf\alpha})} \right) \tau_{\bf N} = 0. \end{eqnarray} See \cite{comamo} for details. {\it Generalized Kontsevich Model}.
In order to derive the string equation one should act on the partition function $Z_V\{T_k = \frac{1}{k}{\rm tr}\Lambda^{-k}\} = {\cal C}_V^{-1} {\cal F}_V\{L = V'(\Lambda)\}$ with the operator ${\rm tr}\frac{\partial}{\partial L_{tr}} = {\rm tr}\frac{1}{V''(\Lambda)} \frac{\partial}{\partial \Lambda_{tr}}$. We can rewrite the result of this action in terms of time-derivatives, \begin{eqnarray} {\rm tr}\frac{\partial}{\partial L_{tr}} \log Z_V\{T\} = -\sum_{k>0} \left({\rm tr}\frac{1}{V''(\Lambda)\Lambda^{k+1}}\right) \frac{\partial}{\partial T_k}\log Z_V\{T\}. \end{eqnarray} Alternatively we can use the fact that $\ \displaystyle{ {\rm tr}\frac{\partial}{\partial L_{tr}} = \sum_\gamma \frac{1}{V''(\lambda_\gamma)}\frac{\partial}{\partial\lambda_\gamma}}\ $, $l = V'(\lambda)$, and the explicit expression for $Z_V$ in terms of eigenvalues (Miwa coordinates), \begin{eqnarray} \oldfalse \begin{array}{c} Z_V \sim e^{{\rm tr}V(\Lambda) - {\rm tr}\Lambda V'(\Lambda)} \sqrt{\prod_\gamma V''(\lambda_\gamma)}\ \frac{{\rm det}\ \hat \varphi_\gamma(\lambda_\delta)}{\Delta(\lambda)} \sim \\ {\sim} \frac{{\rm det}\ \varphi_\gamma(\lambda_\delta)}{\Delta(\lambda)}, \end{array} \label{detformzv} \end{eqnarray} to get: \begin{eqnarray} \oldfalse \begin{array}{c} \left({\rm tr}\frac{\partial}{\partial L_{tr}}\right) \log Z_V\{T\} = \\ = \frac{1}{2} {\rm tr}\frac{V'''(\Lambda)}{(V''(\lambda))^2} + \frac{1}{2}\sum_{\gamma > \delta} \frac{V''(\lambda_\gamma) - V''(\lambda_\delta)}{\lambda_\gamma - \lambda_\delta} \cdot \frac{1}{V''(\lambda_\gamma)V''(\lambda_\delta)} - \\ - {\rm tr} \Lambda + \sum_\beta \frac{\partial}{\partial l_\beta} \log{\rm det}_{\gamma\delta} \hat \varphi_\gamma(l_\delta). \end{array} \end{eqnarray} Comparison of these two expressions gives: \begin{eqnarray} \oldfalse \begin{array}{c} \frac{{\cal L}_{-1}^{(V)} Z_V}{Z_V} \equiv \frac{1}{Z_V}\left[ \sum_{k>0} \left({\rm tr}\frac{1}{V''(\Lambda)\Lambda^{k+1}}\right) \frac{\partial}{\partial T_k} + \right. \\ \left.
+ \frac{1}{2}\sum_{\gamma > \delta} \frac{V''(\lambda_\gamma) - V''(\lambda_\delta)}{\lambda_\gamma - \lambda_\delta} \cdot \frac{1}{V''(\lambda_\gamma)V''(\lambda_\delta)} - \frac{\partial}{\partial T_1}\right] Z_V = \\ = - \frac{\partial}{\partial T_1}\log Z_V + {\rm tr} \Lambda - \sum_\beta \frac{\partial}{\partial l_\beta} \log{\rm det}_{\gamma\delta} \hat \varphi_\gamma(l_\delta). \end{array} \label{steqend} \end{eqnarray} One can show that the r.h.s. is equal to zero, and thus the string equation arises in the form \begin{eqnarray} {\cal L}_{-1}^{(V)} Z_V = 0. \end{eqnarray} If the potential is monomial, $V_p = \frac{X^{p+1}}{p+1}$, then $r_k = -\frac{p}{p+1}\delta_{k,p+1}$ and \begin{eqnarray} \oldfalse \begin{array}{c} {\cal L}_{-1}^{V_p} \rightarrow {\cal L}_{-p} \equiv \\ \equiv \frac{1}{p} \left[ \sum_{k>0} (k+p)(T_{k+p} + r_{k+p})\frac{\partial}{\partial T_k} + \frac{1}{2} \sum_{k=1}^{p-1} k(p-k)T_kT_{p-k}\right]. \end{array} \label{asa} \end{eqnarray} The technical idea behind the proof \cite{GKM} is to represent \begin{eqnarray} \frac{\partial}{\partial T_1}\log Z_V = {\rm Res} \frac{Z_V\{T_k + \frac{1}{k\lambda^k}\}d\lambda}{Z_V\{T_k\}}, \end{eqnarray} and make use of the second determinant representation in (\ref{detformzv}) in both the denominator and the numerator: \begin{eqnarray} \frac{\partial}{\partial T_1}\log Z_V = {\rm Res}\frac{d\lambda}{\prod_{\gamma=1}^n(\lambda - \lambda_\gamma)} \cdot\frac{ {\rm det}\left( \begin{array}{cc} \varphi_\delta(\lambda_\gamma) & \varphi_{n+1}(\lambda_\gamma) \\ \varphi_\delta(\lambda) & \varphi_{n+1}(\lambda) \end{array}\right)} {{\rm det}\ \varphi_\delta(\lambda_\gamma)}. \label{as0} \end{eqnarray} Now we recall that \begin{eqnarray} \varphi_\gamma(\lambda) \sim \lambda^{\gamma -1} \left( 1 + {\cal O}(\lambda^{-1})\right).
\label{as1} \end{eqnarray} At some moment we shall need even more: in fact \begin{eqnarray} &\varphi_\gamma(\lambda) \sim \lambda^{\gamma -1} \left( 1 + {\cal O}(\lambda^{-2})\right), \ \ {\rm i.e.} \nonumber \\ &\varphi_\gamma(\lambda) = \lambda^{\gamma -1} + c_\gamma\lambda^{\gamma-2} + \ldots, \ \ {\rm and}\ \ c_\gamma = 0 \ \ {\rm for\ any} \ \gamma. \label{as2} \end{eqnarray} This is a rather delicate property of GKM; it follows from two facts: first, $\displaystyle{\varphi_1 = 1 + {\cal O}\left(\frac{V''''}{(V'')^2}, \frac{(V''')^2}{(V'')^3}\right)}$, thus $c_1 = 0$, and second, the Kac-Schwarz operator ${\cal A}$, defined in eq.(\ref{gkmKasch}) above, does not contain contributions with the zeroth power of $\lambda$, thus $c_{\gamma +1} = c_\gamma$. (For example, if $V(x) = \frac{x^2}{2} + ax$, $\varphi_\gamma(\lambda) = \frac{1}{\sqrt{2\pi}} \int x^{\gamma -1} e^{-\frac{1}{2}(x-\lambda)^2}dx = \lambda^{\gamma-1} + 0\cdot \lambda^{\gamma -2} + \ldots$: the dangerous terms with $a$ simply do not show up in the expression for $\varphi_\gamma$.) After this comment we can come back to the evaluation of (\ref{as0}). The product in the denominator, which arose from the Vandermonde determinant, is already proportional to $\lambda^n$: $\prod_{\gamma =1}^n(\lambda - \lambda_\gamma) = \lambda^n\left(1 + {\cal O}(\lambda^{-1})\right)$. Because of this and the asymptotic formulas (\ref{as1}), it is clear that if the determinant in the numerator of (\ref{as0}) is rewritten as a linear combination of $n\times n$ determinants with the coefficients $\varphi_\gamma(\lambda)$ from the last row, only the items with $\gamma \geq n$ can contribute. There are two such items: $\gamma = n$ and $\gamma = n+1$.
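The Gaussian example in parentheses above can be made completely explicit: $\frac{1}{\sqrt{2\pi}}\int x^{\gamma-1}e^{-\frac{1}{2}(x-\lambda)^2}dx$ is the Gaussian average of $(\lambda+\xi)^{\gamma-1}$, and the coefficient $c_\gamma$ of $\lambda^{\gamma-2}$ is proportional to the first Gaussian moment, hence zero. A short exact check:

```python
from math import comb

# phi_gamma(lambda) = E[(lambda + xi)^(gamma-1)], xi standard Gaussian, for
# V = x^2/2 + a x (the a-dependence cancels, as noted in the text).  Expand
# via the binomial theorem and Gaussian moments E[xi^k] = (k-1)!! (even k),
# 0 (odd k), and check  phi_gamma = lambda^(gamma-1) + 0*lambda^(gamma-2) + ...
def gaussian_moment(k):
    if k % 2:
        return 0
    out = 1
    for m in range(1, k, 2):
        out *= m
    return out

def phi_coeffs(gamma):
    # coefficients of lambda^(gamma-1), lambda^(gamma-2), ... in phi_gamma
    d = gamma - 1
    return [comb(d, k) * gaussian_moment(k) for k in range(d + 1)]

for gamma in range(1, 10):
    coeffs = phi_coeffs(gamma)
    assert coeffs[0] == 1          # leading term lambda^(gamma-1)
    if gamma >= 2:
        assert coeffs[1] == 0      # c_gamma = 0, as claimed
```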
In the expansion of the $(n+1)\times (n+1)$ determinant, $\varphi_{n+1}(\lambda) $ is multiplied by ${\rm det}\ \varphi_{\gamma}(\lambda_\delta)$, which exactly cancels with the determinant in the denominator, and the relevant contribution is \begin{eqnarray} {\rm Res}\ \frac{\varphi_{n+1}(\lambda)d\lambda}{\prod_{\gamma =1}^n(\lambda - \lambda_\gamma)} = c_{n+1} + \sum_\gamma \lambda_\gamma = \nonumber \\ = c_{n+1} + {\rm tr}\Lambda. \end{eqnarray} The item with $\varphi_n(\lambda)$ is \begin{eqnarray} \frac{{\rm det} \left(\varphi_1(\lambda_\gamma) \ldots \varphi_{n-1}(\lambda_\gamma)\ \varphi_{n+1}(\lambda_\gamma)\right)} {{\rm det} \left(\varphi_1(\lambda_\gamma) \ldots \varphi_{n-1}(\lambda_\gamma)\ \varphi_{n}(\lambda_\gamma)\right)} {\rm Res}\ \frac{\varphi_n(\lambda)d\lambda}{\prod_{\gamma =1}^n(\lambda - \lambda_\gamma)}. \label{detratvpn1} \end{eqnarray} The remaining residue is just unity. The determinant in the numerator differs from the one in the denominator by replacing the column with entries $\varphi_{n}(\lambda_\gamma)$ by that with entries $\varphi_{n+1}(\lambda_\gamma)$. At last we can return to eq.(\ref{steqend}) and recall that $\frac{\partial}{\partial l}\hat\varphi_\gamma(l) = \hat\varphi_{\gamma +1}(l)$, thus \begin{eqnarray} \oldfalse \begin{array}{c} \sum_\beta \frac{\partial}{\partial l_\beta} \log{\rm det}_{\gamma\delta}\hat\varphi_\delta(l_\gamma) = \\ \frac{{\rm det} \left(\hat\varphi_1(l_\gamma) \ldots \hat\varphi_{n-1}(l_\gamma)\ \hat\varphi_{n+1}(l_\gamma)\right)} {{\rm det} \left(\hat\varphi_1(l_\gamma) \ldots \hat\varphi_{n-1}(l_\gamma)\ \hat\varphi_{n}(l_\gamma)\right)}, \end{array} \label{detvarphil} \end{eqnarray} which is just the same as (\ref{detratvpn1}), since $\hat\varphi_\delta$ differs from $\varphi_\delta$ by the $\delta$-independent factor $e^{V(\lambda)-\lambda V'(\lambda)}\sqrt{V''(\lambda)}$. Thus we conclude that the r.h.s. of (\ref{steqend}) is equal to $-c_{n+1}$, which actually vanishes, as was explained several lines above.
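The residue evaluation used above is elementary partial fractions: for a monic degree-$n$ polynomial $\varphi = \lambda^n + c\,\lambda^{n-1} + \ldots$, the coefficient of $\lambda^{-1}$ in $\varphi(\lambda)/\prod_\gamma(\lambda - \lambda_\gamma)$ - equivalently the sum of the residues at the finite poles - equals $c + \sum_\gamma\lambda_\gamma$. A numerical sketch (with an arbitrary sample polynomial and sample points):

```python
import numpy as np

# Res of phi(lambda)/prod_g (lambda - lam_g) "at infinity", computed as the
# sum of residues at the finite simple poles, for monic phi of degree n:
#   sum_g phi(lam_g) / prod_{b != g} (lam_g - lam_b) = c + sum_g lam_g.
n, c = 6, 0.7
lam = np.linspace(-1.0, 1.5, n)                   # distinct sample points
coeffs = [1.0, c, 0.3, -0.2, 0.1, 0.05, -0.4]     # descending powers, arbitrary tail
residue_sum = sum(
    np.polyval(coeffs, lam[g]) / np.prod([lam[g] - lam[b] for b in range(n) if b != g])
    for g in range(n))
assert abs(residue_sum - (c + lam.sum())) < 1e-8
```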
Two things deserve attention in this derivation. First, it was absolutely crucial that we had $\frac{\partial}{\partial T_1}\log Z_V$ on the r.h.s. of (\ref{steqend}) to make it vanish, and therefore $\frac{\partial}{\partial T_1}$ immediately appears in the expression for the ${\cal L}_{-1}^{(V)}$ operator on the l.h.s. (this is the origin of the $r_k$-corrections in (\ref{asa})). Second, the result is both simple and natural, but the proof is full of technical details and looks somewhat artificial. It becomes even more involved when the general formula (\ref{tkderiv}) for the $T_k$-derivatives of $Z_V$ with $1\leq k\leq p$ \cite{Krich} is derived, which plays an important role in the theory of GKM and its applications to the theory of quantum gravity. The proof of the string equation is just a particular case of that formula, since, using the integral representation of $\hat\varphi(l)$, one can represent the r.h.s. of (\ref{detvarphil}) as $\frac{1}{Z_V}\langle {\rm tr} X \rangle$, where $\langle \ \ \rangle$ now stands for the average defined by the Kontsevich integral. Thus \begin{eqnarray} {\cal L}_{-1}^{(V)} Z_V \stackrel{(\ref{steqend})}{=} -\frac{\partial}{\partial T_1}Z_V + \langle {\rm tr}\Lambda - {\rm tr}X \rangle \stackrel{(\ref{tkderiv})}{=} 0. \end{eqnarray} \subsection{On the theory of GKM} We remind that GKM is an abbreviation for the Generalized Kontsevich Model. This theory naturally provides a broad enough collection of topics for a separate big section in these notes. However, we decided not to include such a detailed presentation, because the theory of GKM seems too incomplete now. First, we believe that the natural invariant formulation - of which the existing matrix integral is only a specific realization - is still lacking.
Second, GKM is not yet generalized enough to fulfil its main purpose of incorporating information about {\it all} the models of $2d$ gravity (in fact it should include even more: the entire theory of integrable hierarchies and geometrical quantization). Third, though the whole approach is very conceptual and deep, many {\it proofs}, as available nowadays, are still very technical and long. All this implies that the proper view on the subject of GKM still needs to be found. At the moment we could describe two complementary approaches: one starting from integral representations, another - from the Duistermaat-Heckman (localization) theory and Fourier analysis on group manifolds. Though intimately related, these two approaches are still technically different in too many respects. The second one is more fundamental (since ordinary integrals arise from discrete sums either in special limits or in the cases of infinite-dimensional algebras, and, more importantly, since the integral representation is only one of many possible ways to define the quantities of interest). However, many of the most important results obtained in the first approach do not yet have their proper names and exact counterparts in the second one. We believe that this whole issue will be very much clarified in the near future and decided to postpone a detailed review till that time. What we cannot avoid in these notes is giving at least a {\it list} of topics already included in the theory of GKM, and this is the purpose of the present subsection. The Kontsevich model with $V=\frac{X^3}{3}$ was derived by Maxim Kontsevich \cite{Ko} from the original definition of topological $2d$ gravity, given by E.Witten \cite{WitTG} in terms of a generating functional for Chern classes of certain bundles over Riemann surfaces.
Generalization of this reasoning (when more bundles are taken into consideration) leads to the theory of Landau-Ginzburg Gravity (LGG), which is believed to be the same as GKM, though not all the proofs are available yet.\footnote{ Intermediate results include the study of the spherical approximation to LGG, which exhibits the structures peculiar to "quasiclassical integrable hierarchies" (of which the Bateman hierarchy, to be briefly mentioned in section 5.2 below, is an example), and which also arise in the "quasiclassical approximation" to GKM. For some results in this direction see \cite{WittenN2},\cite{KriDu},\cite{TakTak},\cite{LP}, \cite{Krich},\cite{KhMa} and references therein. } The crucial feature of non-perturbative partition functions, as we discussed at the beginning of section 2, is their intrinsic integrability. For $2d$ gravity this general idea acquires a very concrete formulation: partition functions are usually just $\tau$-functions of conventional integrable hierarchies; moreover, for LGG associated with minimal models, these are just ordinary multicomponent Toda hierarchies.\footnote{One can say that this is natural: both such models and Toda hierarchies are associated with the level $k=1$ Kac-Moody algebras and the corresponding simplified versions of the WZNW model. However, too much still remains to be clarified about this "obvious" connection. } M.Kontsevich found a representation for the generating functional in the form of a matrix integral, i.e. formulated a matrix model, which later allowed one to prove Witten's conjecture that the functional is in fact a $\tau$-function. The concept of GKM as a {\it universal} matrix model, including all the information about generic (eigenvalue?) matrix models and thus all the models of $2d(?)$ gravity, was introduced in \cite{GKM}, and the analogue of the Kontsevich model with arbitrary potential $V(X)$, i.e.
expression \begin{eqnarray} \oldfalse \begin{array}{c} \left.Z_V\{T\}\right|_{T_k = \frac{1}{k}{\rm tr}\Lambda^{-k}} = C_V(\Lambda)^{-1}{\cal F}_V\left(V'(\Lambda)\right) \sim \\ \sim \frac{\sqrt{{\rm det} V''(\Lambda)}} {(2\pi)^{n^2/2} e^{{\rm tr}\left(\Lambda V'(\Lambda) - V(\Lambda)\right)}} \int_{n\times n} dX e^{- {\rm tr} V(X) + {\rm tr} V'(\Lambda) X} \end{array} \label{GKoM} \end{eqnarray} was proposed as an intermediate step in this direction.\footnote{ We remind that ${\rm det} V''(\Lambda)$ is defined in a somewhat tricky way, see s.2.5 above. The same matrix integral (\ref{GKoM}) was also considered in refs.\cite{MaSe},\cite{Direv},\cite{Ko2},\cite{AvM}. } This (still restricted) version of GKM is already enough to unify all the $(p, 1)$-models of $2d$ gravity. In some sense, $(p,q)$-models with $q\neq 1$ are also included, but in a very non-transparent way (using analytical continuation), which does not even explicitly respect the $p \leftrightarrow q$ symmetry. The partition function of such GKM, $Z_V\{T\}$, depends on two types of variables: the time-variables $\hat T_k$ and the potential $V$. Formally these two types of variables are absolutely different, $V$ being responsible for the choice of the particular LGG model or, which is essentially the same, of the particular reduction of the Toda-lattice or KP hierarchy, while the $\hat T_k$ are parameters of the generating functional of all correlation functions in this particular model. But of course, since we deal with the exact (non-perturbative) approach, there is almost no real difference between these two types of dependencies - on the model (vacuum state) and on the $\hat T$'s: the model can be changed by a non-infinitesimal shift of the $\hat T$-variables.
Technically, in GKM this is reflected in an identity of the form \cite{Krich}: \begin{eqnarray} {\cal Z}_{V_p}\{T\} = f_p(r\mid \hat T_k + r_k)\cdot \tau\{\hat T_k + r_k\mid G_p\} \label{Krich'} \end{eqnarray} where $r_k = \frac{p}{k(p-k)}{\rm Res}\left(V'(\mu)\right)^{1-\frac{k}{p}}d\mu$ provide a specific parametrization of the potentials $V$, which are here assumed to be arbitrary polynomials of degree $p$, and $f_p$ is some simple function: \begin{eqnarray} f_p(r\mid \hat T_k + r_k) = \exp -\frac{1}{2} \sum_{i,j} A_{ij}(r) (\hat T_i + r_i)(\hat T_j + r_j); \nonumber \\ A_{ij} = {\rm Res}\left(V'(\mu)\right)^{i/p}d\left(V'(\mu)\right)^{j/p} = \frac{\partial^2\log \tau_0^{(p)}}{\partial t_i\partial t_j}, \end{eqnarray} and $\tau_0^{(p)}$ is a $\tau$-function of the "quasiclassical hierarchy". What is important is that $G_p$ (which defines the shape of the $\tau$-function as a function of $\hat T + r$) and $f_p$ depend only on the degree $p$, but not on the other details of the shape of the potential. This is a deep formula. It accounts for two phenomena at once. First, it says that $Z$ depends on the {\it sum} of $\hat T$ and $r$.\footnote{ In Miwa parametrization $\hat T_k = \frac{1}{k}{\rm tr}\left(V'_p(\Lambda)\right)^{-k/p}$. Throughout these notes we used different time-variables $T_k = \frac{1}{k}{\rm tr}\Lambda^{-k}$, which are independent of the potential $V$; instead, the $V$-dependence of $Z_V$ - which we did not really study - was rather nontrivial. If expressed in terms of $\hat T$, the partition function $\hat Z_V\{\hat T+r\} = Z_V\{T\}$ becomes almost independent of $V$: it changes - abruptly - {\it only} when the degree $p$ of the potential changes. This second type of description is of course in better accordance with the symmetries of the particular model - which are different in different "vacua" (for different $p$). Therefore it is the variables $\hat T + r$, rather than $T$, which arise naturally in the Ward identities - as we saw in sections 2.5 and 2.6.
$T$'s and $\hat T$'s are suited for different purposes: $T$'s are nice when the {\it universality} aspects of GKM are concerned, while $\hat T$'s arise when specific features of particular models (orbits, vacua) are considered. } Second, the dependence on $V$ is not {\it quite} smooth: when the degree of the potential changes, the shapes of the functions $f$ and $\tau$ also change abruptly. Another side of the same phenomenon is that the partition function $Z_V\{T\}$, which in principle is well defined as a matrix integral for all choices of $V$ and $L$ (thus $\hat T$) at once, is in fact singular at some points: there are phase transitions, manifesting the switch from one LGG model to another. After a phase transition the original integral expression becomes somewhat symbolic: it defines the partition function only in the sense of analytical continuation, and it is a separate problem to find an integral representation adequate in the new phases. In practice, what is nicely described by the integral representation of GKM in the form of eq.(\ref{GKoM}) are the $(p,1)$-models, with $p+1$ being just the power of the potential $V(x)$. What is not yet found is an analogous representation for the $(p,q)$-models with $q\neq 1$ (it can involve multiple matrix integrals, and the universal model is supposed to be "matrix quantum mechanics in external fields"). Derivation of the crucial formula (\ref{Krich'}) in any approach - starting from GKM in the form of either LGG or matrix integrals - is still very tedious. In the matrix-model representation it relies upon the identity \cite{Krich} \begin{eqnarray} \oldfalse \begin{array}{c} \frac{\partial Z_V}{\partial T_k} = \langle {\rm tr}\Lambda^k - {\rm tr} X^k \rangle \equiv \\ \equiv {\cal C}_V^{-1} \int \left({\rm tr}\Lambda^k - {\rm tr} X^k\right) e^{-{\rm tr}V_p(X) + {\rm tr}V_p'(\Lambda)X} dX \ \ \ {\rm for}\ 1\leq k\leq p, \end{array} \label{tkderiv} \end{eqnarray} which looks trivial but is rather hard to derive.
(The proof of the string equation in GKM at the end of the previous subsection is the simplest example of this kind of exercise.) Certainly some simple derivation "in two lines" should exist, but it has not been found yet. Formulas of this kind are very important for all aspects of GKM theory. Besides other things, they are just necessary to actually evaluate correlation functions in the $(p,1)$-models of $2d$ gravity, of which $Z_V\{T\}$ is a generating functional. If instead of these "physical" questions one asks about integrability theory, identities of this sort also play an important role. For example, looking at (\ref{tkderiv}) for the special value $k=p$ and the special choice of monomial potential $V_p(X) = \frac{X^{p+1}}{p+1}$, one can note that the r.h.s. vanishes: this is just a Ward identity, reflecting invariance under the shift of the integration variable, $\delta X =\ const$. This is the simplest version of a more general statement:\footnote{ This property was technically implicit in Kontsevich's original work \cite{Ko} for $p+1=3$, where it was related to certain combinatorial identities. A tricky proof, relying upon properties of $\tau$-functions, was given for any $p$ in \cite{GKM}. An example of a straightforward proof, again for $p+1=3$ - just in terms of Kontsevich matrix integrals - can be found in ref.\cite{DFIZ}. } \begin{eqnarray} {\rm if}\ V_p(X) = \frac{X^{p+1}}{p+1}, \ \ \ {\rm then} \ \ \ \ \frac{\partial Z_V}{\partial T_{pk}} = 0 \ \ {\rm for\ all\ } k\in Z_+. \label{redcondk} \end{eqnarray} Looking from the point of view of integrable hierarchies, one immediately recognizes (\ref{redcondk}) as an example of the reduction condition (\ref{redtotau1}). It corresponds to the so-called $p$-reduction of the KP hierarchy, of which KdV ($p=2$) and Boussinesq ($p=3$) are the most celebrated examples.
We refer to \cite{GKM} and \cite{Krich} for all details and references; the only thing to mention here is that the slightly weaker version of the constraint (\ref{redcondk}), \begin{eqnarray} \frac{\partial Z_V}{\partial T_{pn}} = a_n = {\rm const}, \label{redcondk'} \end{eqnarray} where $a_n$ do not depend on any time variables, can be simply expressed in Miwa parametrization: it is just the statement that the $\varphi$-functions in \begin{eqnarray} Z_V = \frac{{\rm det}_{\gamma\delta}\varphi_\gamma(\lambda_\delta)}{\Delta(\lambda)} \nonumber \end{eqnarray} satisfy the $p$-reduction condition: \begin{eqnarray} \lambda^p \varphi_\gamma(\lambda) = \sum_{\delta = 1}^{\gamma+p} \hat{\cal V}_{\gamma\delta}\varphi_\delta(\lambda). \label{predcogra} \end{eqnarray} This is a restrictive relation, because the $\varphi$'s are infinite series in $1/\lambda$, while on the r.h.s. of (\ref{predcogra}) there is only a finite number of terms. In GKM it is satisfied for a monomial potential just as a corollary of the Gross-Newman equation or, more exactly, of the Ward identity for the integral \begin{eqnarray} \varphi_\gamma(\lambda) \sim \int x^{\gamma-1} e^{-V(x)+V'(\lambda)x} dx. \nonumber \end{eqnarray} Indeed, the integral does not change under the shift $\delta x = \ const$, and this implies: \begin{eqnarray} \int x^{\gamma-1} \left( V'(x) - V'(\lambda) - \frac{\gamma -1}{x}\right) e^{-V(x)+V'(\lambda)x} dx = 0, \nonumber \end{eqnarray} i.e. \begin{eqnarray} \sum_{k=1}^{p+1} kv_k \left(\varphi_{\gamma +k-1}(\lambda) - \lambda^{k-1}\varphi_\gamma(\lambda)\right) - (\gamma-1)\varphi_{\gamma -1} = 0. \label{calvdef} \end{eqnarray} If only $v_{p+1}\neq 0$, this leads to an identity of the required form (\ref{predcogra}).
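For the Gaussian potential $V=x^2/2$ (so $p=1$, $v_2=\frac{1}{2}$ and all other $v_k=0$), eq.(\ref{calvdef}) reduces to the recursion $\varphi_{\gamma+1} = \lambda\varphi_\gamma + (\gamma-1)\varphi_{\gamma-1}$; the $\gamma$-independent prefactor of $\varphi_\gamma$ drops out of this relation, so it can be checked directly on the polynomial parts $E[(\lambda+\xi)^{\gamma-1}]$, $\xi$ standard normal. An illustrative Python sketch (all helper names are ours):

```python
from math import comb

def normal_moment(k):
    # E[xi^k] for a standard normal xi
    if k % 2:
        return 0
    m = 1
    for j in range(1, k, 2):
        m *= j
    return m

def phi(gamma):
    """Ascending coefficient list of the polynomial part of phi_gamma for V = x^2/2,
    i.e. E[(lam + xi)^{gamma-1}]."""
    n = gamma - 1
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        coeffs[n - k] = comb(n, k) * normal_moment(k)
    return coeffs

def shift(p):               # multiply polynomial by lam
    return [0] + p

def add(p, q, s=1):         # p + s*q, padding to equal length
    m = max(len(p), len(q))
    p = p + [0] * (m - len(p)); q = q + [0] * (m - len(q))
    return [a + s * b for a, b in zip(p, q)]

# Ward identity for monomial V, p = 1: phi_{gamma+1} = lam*phi_gamma + (gamma-1)*phi_{gamma-1}
for gamma in range(2, 8):
    assert phi(gamma + 1) == add(shift(phi(gamma)), phi(gamma - 1), s=gamma - 1)
```

For example $\varphi_4 = \lambda^3+3\lambda$ indeed equals $\lambda\,(\lambda^2+1) + 2\,\lambda$, which is just Stein's identity $E[\xi f(\xi)] = E[f'(\xi)]$ in disguise.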
This description of reduction can be modified to allow for non-monomial potentials, making use of the concept of "equivalent hierarchies", see \cite{Tak},\cite{Krich}: in this framework the reduction condition is \begin{eqnarray} V'(\lambda)\varphi_\gamma(\lambda) = \sum_\delta {\cal V}_{\gamma\delta}\varphi_\delta(\lambda), \label{redcond'} \end{eqnarray} but the classes of essentially different reductions are labeled by the degree of the potential only. As we already discussed in the previous subsection, linear constraints like (\ref{predcogra}) are not restrictive enough to fix the shape of the $\tau$-function (the point $G$ in the universal moduli space) unambiguously: the string equation should also be imposed. If expressed in terms of the $\varphi$'s, the string equation is just the property (\ref{gkmKasch}): \begin{eqnarray} \varphi_{\gamma +1} = {\cal A}\varphi_\gamma, \label{recondigkm} \end{eqnarray} where the Kac-Schwarz operator \begin{eqnarray} {\cal A} = \frac{1}{V''(\lambda)}\frac{\partial}{\partial \lambda} - \frac{1}{2}\frac{V'''(\lambda)}{(V''(\lambda))^2} + \lambda. \end{eqnarray} It has an obvious generalization of the form \begin{eqnarray} {\cal A}_{p,q} = \frac{\partial}{\partial V'_p( \lambda)} - \frac{1}{2}\frac{V'''_p(\lambda)}{(V''_p(\lambda))^2} + Q'_q(\lambda), \label{KaschopQ} \end{eqnarray} where $Q_q(\lambda)$ is a polynomial of degree $q+1$ and (\ref{recondigkm}) is substituted by \begin{eqnarray} \varphi_{\gamma +q} = {\cal A}_{p,q}\varphi_\gamma. \end{eqnarray} This generalization is naturally related to the string equation in the $(p,q)$ models, see \cite{KhMa} and references therein.
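For the Gaussian potential the Kac-Schwarz operator degenerates to ${\cal A} = \frac{\partial}{\partial\lambda} + \lambda$ (since $V''=1$, $V'''=0$), and the property $\varphi_{\gamma+1} = {\cal A}\varphi_\gamma$ can be checked directly on the polynomials $\varphi_\gamma = E[(\lambda+\xi)^{\gamma-1}]$ from the Gaussian example above. An illustrative Python sketch (function names are ours):

```python
from math import comb

def normal_moment(k):
    if k % 2:
        return 0
    m = 1
    for j in range(1, k, 2):
        m *= j
    return m

def phi(gamma):
    # ascending coefficients of E[(lam + xi)^{gamma-1}], xi standard normal
    n = gamma - 1
    c = [0] * (n + 1)
    for k in range(n + 1):
        c[n - k] = comb(n, k) * normal_moment(k)
    return c

def deriv(p):                      # d/dlam on ascending coefficients
    return [i * p[i] for i in range(1, len(p))]

def kac_schwarz(p):                # A = d/dlam + lam  for V = x^2/2
    shifted = [0] + p              # lam * p
    d = deriv(p)
    d = d + [0] * (len(shifted) - len(d))
    return [a + b for a, b in zip(shifted, d)]

for gamma in range(1, 8):
    assert kac_schwarz(phi(gamma)) == phi(gamma + 1)
```

E.g. ${\cal A}$ applied to $\varphi_2 = \lambda$ gives $1 + \lambda^2 = \varphi_3$, in agreement with (\ref{recondigkm}).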
A generic $(p,q)$ LGG model can be described by a system of constraints, \begin{eqnarray} (\lambda^p - {\cal V}_p)\{\varphi\} = 0, \nonumber \\ {\cal A}_{p,q}\{\varphi\} = 0, \label{constrset} \end{eqnarray} where both operators ${\cal V}_p$ and ${\cal A}_{p,q}$ are not uniquely fixed by choosing $p$ and $q$, and there is also a freedom to change variables $\lambda \rightarrow f(\lambda)$ and to make a triangular transformation of the basis, $\varphi_\gamma \rightarrow \varphi_\gamma + \sum_{\delta <\gamma}C_{\gamma\delta}\varphi_\delta$. Altogether the set of equations (\ref{constrset}) modulo these allowed transformations is finite-dimensional, of dimension $(p-1)(q-1)$; this is the dimension of the moduli space of LGG models with given $p$ and $q$. The Kontsevich integral can now be used to establish the duality transformation from the $(p,q)$ to the $(q,p)$ model \cite{KhMa}: \begin{eqnarray} Z_{V,Q}(\Lambda) = C_{V,Q}^{-1}(\Lambda) \int_{n\times n} dX e^{-{\rm tr}S_{V,Q}(X,\Lambda) + {\rm tr}V'(\Lambda)Q'(X)} Z_{Q,V}(X). \label{VQGKM} \end{eqnarray} Here \begin{eqnarray} S_{V,Q}(x,\lambda) = \int^x V'(y)Q''(y)dy = \int^x V'(y)dQ'(y). \end{eqnarray} As usual, $C_{V,Q}(\Lambda)$ is the quasiclassical approximation to the integral, and $Z_{V,Q}(\Lambda) \equiv \frac{{\rm det}_{\gamma\delta} \varphi_\gamma(\lambda_\delta)}{\Delta(\lambda)}$, where the $\varphi$'s are solutions to (\ref{constrset}) with ${\cal V}_p$ and ${\cal A}_{p,q}$ defined by eqs.(\ref{redcond'}) and (\ref{KaschopQ}) respectively.\footnote{ Also, the expression for the $r_k$-variables is now modified: \begin{eqnarray} r_k = \frac{p}{k(p-k)}{\rm Res} (V'_p(\mu))^{1-k/p}dQ'_q(\mu). \nonumber \end{eqnarray} For monomial $V_p$ and $Q_q$, $\ r_k = -\frac{p}{p+q}\delta_{k,p+q}$. } This relation does {\it not} provide any formula for $Z_{V_p,Q_q}(\Lambda)$ unless $q=1$. The case of $q=1$ is distinguished because $Z_{Q_1,V_p}$ is trivial.
Indeed, the 1-reduction constraint $\lambda\varphi_\gamma = \varphi_{\gamma+1} + \sum_{\delta \leq\gamma}{\cal V}_{\gamma\delta} \varphi_\delta$ implies that ${\rm det}_{\gamma\delta}\varphi_\gamma(\lambda_\delta) = \Delta(\lambda)\prod_\delta \varphi_1(\lambda_\delta)$, thus $Z_{Q_1,V_p} = \exp \sum_k a_kT_k$, which is essentially the same as $Z_{Q_1,V_p} = 1$,\footnote{ Since $\varphi_1(\lambda) = 1 +\sum_{k>0}b_k\lambda^{-k}$, $\ \log \varphi_1(\lambda) = \sum_{k>0}\frac{a_k}{k}\lambda^{-k}$, and the sum $\sum_\delta \log\varphi_1(\lambda_\delta) = \sum_{k>0}\frac{a_k}{k}\left(\sum_\delta \lambda_\delta^{-k}\right) = \sum_{k>0} a_kT_k$. Addition of any {\it linear} combination of time-variables to $\log\tau$ does not essentially change the $\tau$-function. For example, the ordinary integrable equations (like KdV or KP) are usually written in terms of variables like $u = \frac{\partial^2}{\partial T_1^2}\log\tau$, which are {\it second} derivatives of $\log\tau$. } and (\ref{VQGKM}) is just our old formula (\ref{GKoM}) for the $(p,1)$ version of GKM. (In fact $Q_1(X) \sim X^2$, and $Z_{Q_1,V_p}$ is nothing but the Gaussian Kontsevich model. It is trivial when the "zero-time" $N=0$, as we assume here.) A matrix model realization of $Z_{V_p,Q_q}$ for $q\neq 1$ is as yet unknown. This is not the only important further generalization of GKM (\ref{GKoM}). Another one is implied by the formula for ${\cal F}_V$ in terms of eigenvalues from section 3.3, \begin{eqnarray} {\cal F}_V \sim \prod_{\gamma = 1}^n \int dx_\gamma e^{-V(x_\gamma)} \Delta^2(x) I(x,l).
\label{KIverIZ} \end{eqnarray} As was already mentioned in section 3.3, the Itzykson-Zuber integral \begin{eqnarray} I(x,l) \sim \int [DU] e^{{\rm tr} UXU^\dagger L} \sim \frac{{\rm det}_{\gamma\delta} e^{x_\gamma l_\delta}} {\Delta(x)\Delta(l)} \end{eqnarray} is in fact a coadjoint orbit integral and has a group theoretical interpretation: under certain conditions it turns into a character $\chi_R(g) = {\rm Tr}_R g$ of the group $GL(n)$. Here $g \equiv e^L$ is considered as a group element, and the representation $R$ is labeled by integer-valued parameters $m_1,\ldots,m_n$ - essentially the lengths of rows in the Young diagram. The exact statement is: \begin{eqnarray} I(m,l)\cdot\frac{\Delta(l)}{\Delta(g)} = \frac{{\rm det}_{\gamma\delta} g_\gamma^{m_\delta}}{\Delta(m)\Delta(g)} = \frac{\chi_R(g)}{d_R}, \end{eqnarray} i.e. in order to get a character we should integrate over matrices $X$ with integer-valued eigenvalues.\footnote{ The ratio \begin{eqnarray} \frac{\Delta(l)}{\Delta(g)} = \prod_{\gamma >\delta} \frac{l_\gamma - l_\delta}{e^{l_\gamma} - e^{l_\delta}} \nonumber \end{eqnarray} is the usual correction factor, which is the price for the possibility to reduce the quantum-mechanical problem of motion on the orbit to a {\it single} matrix integral. The full problem of matrix quantum mechanics can and should be considered as a multi-matrix (in fact, infinite-matrix) generalization of GKM (\ref{GKoM}), which incorporates all the $(p,q)$ LGG models. } The dimension $d_R$ of the representation can also be expressed in terms of the $m$-variables: $d_R = \Delta(m)$. As to the traces ${\rm tr} X^k = \sum_\gamma x_\gamma^k \rightarrow \sum_\gamma m_\gamma^k$, which appear in the action of GKM, they are very similar to the $k$-th Casimir eigenvalue $C_k(R)$ (though not exactly the same).
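For $n=2$ eigenvalues the ratio $\chi_R(g) = {\rm det}_{\gamma\delta}\, g_\gamma^{m_\delta}/\Delta(g)$ is just the bialternant (Weyl) form of a Schur polynomial, which can be compared with the explicit expressions $s_{(2)} = x^2+xy+y^2$ and $s_{(1,1)} = xy$; here $m = (m_1,m_2)$ are the shifted row lengths $(\lambda_1+1,\lambda_2)$. A quick exact check in Python (the function name `bialternant` is our illustrative helper):

```python
from fractions import Fraction as F

def det2(a, b, c, d):
    return a * d - b * c

def bialternant(m, xs):
    """chi_R(g) = det(x_i^{m_j}) / Vandermonde(x), for n = 2 eigenvalues."""
    x, y = xs
    num = det2(x ** m[0], x ** m[1], y ** m[0], y ** m[1])
    return num / (x - y)

pts = [(F(2), F(3)), (F(1, 2), F(5)), (F(-3), F(7, 4))]
for x, y in pts:
    # partition (2,0): m = (3,0), Schur s_(2) = x^2 + x*y + y^2
    assert bialternant((3, 0), (x, y)) == x * x + x * y + y * y
    # partition (1,1): m = (2,1), Schur s_(1,1) = x*y
    assert bialternant((2, 1), (x, y)) == x * y
```

The Vandermonde in the denominator cancels the antisymmetry of the determinant in the numerator, leaving a symmetric polynomial in the eigenvalues, exactly as the character should be.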
Thus we see that the integral in (\ref{KIverIZ}) is in fact very similar to \begin{eqnarray} {\cal F}_V^{qu}\{g,\bar g\} \equiv \sum_R \chi_R(\bar g)\chi_R(g) e^{-\sum_{k=0}^{\infty}v_kC_k(R)}, \label{quanGKM} \end{eqnarray} evaluated at the point $\bar g = I$. The only real difference is that instead of the integral we have a sum over discrete values of $m$ (a sum over all the representations, or a {\it model} of $GL(n)$). This "discretized" (quantum?) GKM is more general than the continuum one, which can be obtained from it by various limiting procedures. It is now obvious that the theory of the discretized GKM largely overlaps with that of $2d$ Yang-Mills theory. The simplest ingredient of this theory is the classical result \cite{chartau} that $GL(N)$ characters are in fact (singular) Toda-lattice and KP $\tau$-functions. Moreover, the entire sum on the r.h.s. of (\ref{quanGKM}), if considered as a function of $T_k = \frac{1}{k}{\rm tr} g^k$, ${\bar T}_k = \frac{1}{k}{\rm tr} \bar g^k$, is in fact a Toda-lattice $\tau$-function. There are also features parallel to (\ref{Krich'}). We refer to \cite{ITEP12} for a few more details about the discretized GKM (see also the recent paper \cite{GoNe}). This is one more very important direction of further investigation of GKM. \subsection{1-Matrix model versus Toda-chain hierarchy} At the end of this section we use the explicit example of the discrete 1-matrix model \cite{GMMMO} to illustrate how the more familiar Lax description of integrable hierarchies arises from determinant formulas. This example will also be useful in Section 5.3 below, when one of the ways to take the double-scaling continuum limit of the 1-matrix model is discussed. The Lax representation usually appears after some coordinate system is chosen in the Grassmannian. In the example we are now considering, this system is introduced by the use of orthogonal polynomials.
We already know from section 3.6 that the partition function of the 1-matrix model (which is a {\it one}-component model) is given by \begin{eqnarray} Z_N = \ {\rm Det}_{0 \leq i,j \leq N-1} \langle h^i\mid h^j\rangle\ = \prod_{i=0}^{N-1} e^{\phi_i} = Z_1 \prod_{i=1}^{N-1} R_i^{N-i}, \end{eqnarray} where the last two representations are in terms of the norms of orthogonal polynomials \begin{eqnarray} \langle Q_n \mid Q_m \rangle = e^{\phi_n}\delta_{nm} \label{orthocond} \end{eqnarray} and the parameters of the 3-term relation \begin{eqnarray} \oldfalse \begin{array}{c} hQ_n(h) = Q_{n+1}(h) + c_nQ_n(h) + R_nQ_{n-1}(h), \\ Z_1 = e^{\phi_0} = \langle 1 \mid 1 \rangle , \ \ \ \ R_n = e^{\phi_n-\phi_{n-1}}. \end{array} \nonumber \end{eqnarray} Of course all the information is contained in the determinant formula together with the rule which defines the time-dependence of ${\cal H}_{ij}^f = \langle h^i\mid h^j\rangle = \hat{\cal H}_{i+j}^f$: \begin{eqnarray} \oldfalse \begin{array}{c} \frac{\partial {\cal H}_{ij}^f}{\partial t_k} = {\cal H}_{i+k,j}^f = {\cal H}_{i,j+k}^f ,\ \ {\rm or} \\ \frac{\partial \hat{\cal H}_i^f}{\partial t_k} = \hat{\cal H}_{i+k}^f. \end{array} \end{eqnarray} (The possibility to express everything in terms of $\hat{\cal H}_i^f$ with a single matrix index $i$ is a feature of the Toda-chain reduction of the generic Toda-lattice hierarchy.) However, in order to reveal the standard Lax representation we need to go into somewhat more involved considerations. Namely, we consider the representation of two operators in the basis of orthogonal polynomials. First, \begin{eqnarray} h^k Q_n(h) = \sum_{m=0}^{n+k} \frac{\langle n \mid h^k \mid m \rangle}{\langle m \mid m \rangle} Q_m(h) = \sum_{m=0}^{n+k} \gamma_{nm}^{(k)}Q_m(h) \end{eqnarray} (here simplified notation is introduced: $\langle n \mid f(h) \mid m \rangle \equiv \langle Q_n \mid f(h) \mid Q_m \rangle$ and $\displaystyle{\gamma_{nm}^{(k)} \equiv \frac{\langle n \mid h^k \mid m \rangle}{\langle m \mid m \rangle}}$).
Second, \begin{eqnarray} \oldfalse \begin{array}{c} \frac{\partial Q_n(h)}{\partial t_k} = -\sum_{m=0}^{n-1} \frac{\langle n \mid h^k \mid m \rangle}{\langle m \mid m \rangle} Q_m(h) = -\sum_{m=0}^{n-1}\gamma_{nm}^{(k)}Q_m(h), \\ \frac{\partial \phi_n}{\partial t_k} = \frac{\langle n \mid h^k \mid n \rangle}{\langle n \mid n \rangle} = \gamma_{nn}^{(k)}. \end{array} \end{eqnarray} (These last relations arise from differentiation of the orthogonality condition (\ref{orthocond}): \begin{eqnarray} \oldfalse \begin{array}{c} e^{\phi_n}\frac{\partial \phi_n}{\partial t_k}\delta_{nm} = \frac{\partial \langle Q_n \mid Q_m \rangle}{\partial t_k} = \nonumber \\ = \langle \frac{\partial Q_n}{\partial t_k} \mid Q_m \rangle + \langle Q_n \mid \frac{\partial Q_m}{\partial t_k} \rangle + \langle Q_n\mid h^k \mid Q_m \rangle \end{array} \end{eqnarray} by looking at the cases $m<n$ and $m=n$ respectively.) {}From these relations one immediately derives the Lax-like formula: \begin{eqnarray} \frac{\partial \gamma_{nm}^{(k)}}{\partial t_q} = - \sum_{l=m-k}^{n-1} \gamma_{nl}^{(q)}\gamma_{lm}^{(k)} + \sum_{l=m+1}^{n+k} \gamma_{nl}^{(k)}\gamma_{lm}^{(q)} \label{laxrep1mamo} \end{eqnarray} or, in matrix form, \begin{eqnarray} \frac{\partial \gamma^{(k)}}{\partial t_q} = [ R\gamma^{(q)}, \gamma^{(k)}], \end{eqnarray} where \begin{eqnarray} \left(R\gamma^{(k)}\right)_{mn} \equiv \left\{ \begin{array}{c} -\gamma_{mn}^{(k)} \ \ {\rm if} \ m>n, \\ \gamma_{mn}^{(k)} \ \ {\rm if} \ m<n \end{array} \right. \end{eqnarray} (We remind that usually the $R$-matrix acts on a function $f(h) = \sum_{n = -\infty}^{+\infty}f_nh^n$ according to the rule $Rf(h) = \sum_{n\geq l}f_nh^n - \sum_{n<l}f_nh^n$ with some "level" $l$.)
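The determinant formula and the 3-term relation above are easy to verify explicitly for a small discrete measure: build the monic orthogonal polynomials by Gram-Schmidt, then check that ${\rm Det}\,\langle h^i\mid h^j\rangle = \prod e^{\phi_i}$ and that $hQ_n = Q_{n+1}+c_nQ_n+R_nQ_{n-1}$ with $R_n = e^{\phi_n-\phi_{n-1}}$. A self-contained Python sketch in exact rational arithmetic (the measure and all helper names are an arbitrary choice of ours, for illustration only):

```python
from fractions import Fraction as F

# discrete measure: support points and positive weights
pts = [F(k) for k in (-2, -1, 0, 1, 2, 3)]
wts = [F(1), F(2), F(1), F(3), F(1), F(2)]

def inner(p, q):
    """<p|q> = sum_i w_i p(h_i) q(h_i), polynomials as ascending coeff lists."""
    def ev(c, x):
        v = F(0)
        for a in reversed(c):
            v = v * x + a
        return v
    return sum(w * ev(p, h) * ev(q, h) for h, w in zip(pts, wts))

def sub(p, q, s):               # p - s*q, padding to equal length
    m = max(len(p), len(q))
    p = p + [F(0)] * (m - len(p)); q = q + [F(0)] * (m - len(q))
    return [a - s * b for a, b in zip(p, q)]

# monic orthogonal polynomials Q_0..Q_4 by Gram-Schmidt on 1, h, h^2, ...
Q = []
for n in range(5):
    q = [F(0)] * n + [F(1)]     # h^n
    for m in range(n):
        q = sub(q, Q[m], inner(q, Q[m]) / inner(Q[m], Q[m]))
    Q.append(q)
norms = [inner(Q[n], Q[n]) for n in range(5)]   # e^{phi_n}

# Z_N = Det <h^i|h^j> = prod_{i<N} e^{phi_i}   (check for N = 4)
N = 4
H = [[inner([F(0)] * i + [F(1)], [F(0)] * j + [F(1)]) for j in range(N)] for i in range(N)]

def det(M):                     # exact determinant by Gaussian elimination
    M = [row[:] for row in M]
    d = F(1)
    for c in range(len(M)):
        piv = next(r for r in range(c, len(M)) if M[r][c] != 0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]; d = -d
        d *= M[c][c]
        for r in range(c + 1, len(M)):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

prod = F(1)
for n in range(N):
    prod *= norms[n]
assert det(H) == prod

# three-term recurrence: h Q_n = Q_{n+1} + c_n Q_n + R_n Q_{n-1}, R_n = e^{phi_n - phi_{n-1}}
for n in range(1, 4):
    hQ = [F(0)] + Q[n]          # h * Q_n
    c_n = inner(hQ, Q[n]) / norms[n]
    R_n = norms[n] / norms[n - 1]
    resid = sub(sub(sub(hQ, Q[n + 1], F(1)), Q[n], c_n), Q[n - 1], R_n)
    assert all(a == 0 for a in resid)
```

The recurrence coefficient $R_n$ comes out as the ratio of consecutive norms automatically: $\langle hQ_n\mid Q_{n-1}\rangle = \langle Q_n\mid hQ_{n-1}\rangle = \langle Q_n\mid Q_n\rangle$, since $hQ_{n-1}$ is monic of degree $n$.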
These $\gamma^{(k)}$ are not symmetric matrices, but one can also rewrite all the formulas above in terms of symmetric ones: \begin{eqnarray} {\cal L}_{mn}^{(k)} \equiv e^{\frac{1}{2}(\phi_n - \phi_m)}\gamma_{mn}^{(k)} = \frac{\langle m \mid h^k \mid n \rangle}{\sqrt{\langle m \mid m \rangle \langle n \mid n \rangle}}. \end{eqnarray} {}From eqs.(\ref{laxrep1mamo}) one can easily deduce the Toda equations for $\phi_n$: \begin{eqnarray} \oldfalse \begin{array}{c} \frac{\partial^2\phi_n} {\partial t_k\partial t_l} = \frac{\partial}{\partial t_k} \frac{\langle n \mid h^l \mid n \rangle}{\langle n \mid n \rangle} = \\ = \left( \sum_{m>n} - \sum_{m<n}\right) \frac{\langle n \mid h^k \mid m \rangle \langle m \mid h^l \mid n \rangle} {\langle m \mid m \rangle\langle n \mid n \rangle}, \end{array} \end{eqnarray} where the r.h.s. can be expressed in terms of $R_m = e^{\phi_m - \phi_{m-1}}$. In particular, \begin{eqnarray} \frac{\partial^2\phi_n}{\partial t_1\partial t_1} = R_{n+1} - R_n = e^{\phi_{n+1} - \phi_n} - e^{\phi_n - \phi_{n-1}}. \end{eqnarray} Let us also mention that in this formalism the Ward identities (Virasoro constraints) follow essentially from the relation \begin{eqnarray} \left( \frac{\partial}{\partial h}\right)^\dagger = - \frac{\partial}{\partial h} - \sum_{k>0} kt_k h^{k-1}, \end{eqnarray} where Hermitean conjugation is taken with respect to the scalar product $\langle\ \mid\ \rangle$. For example, this relation implies that \begin{eqnarray} \langle Q_n \mid \frac{\partial Q_n}{\partial h} \rangle = - \langle \frac{\partial Q_n}{\partial h} \mid Q_n \rangle - \sum_{k>0} kt_k \langle Q_n \mid h^{k-1} \mid Q_n \rangle. \end{eqnarray} Now we note that $\frac{\partial Q_n}{\partial h}$ is a polynomial of degree $n-1$, thus ${\langle Q_n \mid \frac{\partial Q_n}{\partial h} \rangle = 0}$.
(In fact $$\displaystyle{ \frac{\partial Q_n}{\partial h} = -\sum_{k>0} kt_k \left(\sum_{m=0}^{n-1} \gamma_{nm}^{(k-1)}Q_m \right) = -\sum_{k>0} kt_k \frac{\partial Q_n}{\partial t_{k-1}}}.)$$ Also we recall that $\langle Q_n \mid h^{k-1} \mid Q_n \rangle = \langle Q_n \mid Q_n \rangle \frac{\partial \phi_n}{\partial t_{k-1}}$, and obtain: \begin{eqnarray} \sum_{k>0} kt_k\frac{\partial \phi_n}{\partial t_{k-1}} = 0 \end{eqnarray} for any $n$. This should be supplemented by the relation $\frac{\partial \phi_n}{\partial t_0} = \phi_n$. In order to get the lowest Virasoro constraint (string equation), $L_{-1}Z_N = 0$ or $L_{-1}\log Z_N = 0$, it is enough just to sum over $n$ from $0$ to $N-1$. For more details about the 1-matrix model, the Toda-chain hierarchy and the application of the formalism of orthogonal polynomials in this context see \cite{GMMMO}. \ \bigskip \section{Continuum limits of discrete matrix models} \setcounter{equation}{0} \subsection{What is continuum limit} The continuum limit of matrix models is, of course, the crucial issue for their physical applications whenever these models are interpreted as discrete (lattice) approximations to a continuum theory. The very first thing to be kept in mind is that this is $not$ the only possible view on matrix models. Another approach considers them as describing $topological$ (and thus also, in a certain sense, "discrete") properties of the theory. Such models, when appearing in the field of, say, quantum gravity (which after all is a sort of pure topological theory), do $not$ require any continuum limit to be taken: their discrete nature (the occurrence of {\it integer}-valued matrix indices) reflects $not$ a discrete approximation to the space-time (which does not really exist in quantum gravity), but rather the essential discreteness of the underlying structures: the topology of the moduli spaces of geometries.
An example of matrix models which allow for this kind of interpretation - in terms of the topology of moduli spaces of bundles over Riemann surfaces - is provided by the Kontsevich models, and this is why they usually do not require any continuum limit and why we once called them "continuous matrix models" in the Introduction to these notes. The models which are usually interpreted in the more traditional way - as lattice theories - are represented by our "discrete" models, with the 1-matrix, conventional and "conformal" multimatrix models included in this class. More sophisticated examples are provided by "$c=1$"-theories, the Kazakov-Migdal model and, say, Wilson's QCD (and infinitely many other lattice theories). It is not a surprise that continuum limits of some discrete models provide theories of Kontsevich type: this happens whenever the continuum theory is supposed to have a kind of topological nature. This is usually the case for quantum gravity (which, as we said, is conceptually a topological theory in the "moduli space of geometries" - a notion which is already made more or less explicit in the $2d$ case), but in principle this can also be true for many other theories, including the exhaustive quantum theory of Yang-Mills fields (again, there is already considerable progress in this direction, as far as the $2d$ Yang-Mills model is concerned). There should not be confusion about the presence of gauge particles in dimensions greater than 2 (for Yang-Mills) and 3 (for gravity): there is no reason to prevent a generic topological theory from possessing a continuum spectrum of excitations, though an explicit analogue of the Kontsevich-like description of such situations has not yet been found (as we mentioned many times, it should probably rely upon non-eigenvalue models).
We shall not discuss the non-trivial history of the invention and understanding of all these notions (the crucial steps being the discovery of the "multiscaling continuum limits" \cite{Kmamo},\cite{mamo}, which preserve the integrable structure of discrete models in the continuum case; the hypothesis of equivalence of quantum and topological $2d$ gravities \cite{WitTG} and its proof \cite{Witko},\cite{MMM}, provided by the discovery of Kontsevich models \cite{Ko} as a peculiar and powerful tool for the description of the topology of the moduli spaces). Instead, following the main line of these notes, we shall concentrate on the intrinsic relation between (multiscaling) continuum limits and integrability: the notion of continuum limits is in fact built into the theory of integrable hierarchies and the underlying representation theory of Kac-Moody algebras. In the case of the eigenvalue models the central issue here is the interrelation between the Toda-lattice and KP hierarchies, or even its narrower aspect: the elimination of the zero-time $N$, present in the Toda-lattice case. {}From the representation-theory (or conformal-field-theory, which is essentially the same) point of view, the point is that the zero-time (which labels the filling level of the Dirac sea in the fermionic picture) is associated with the zero-modes of the $scalar$ field, and its elimination is just the change of boundary conditions which eliminates zero-modes. The simplest example of this "twisting" procedure is just the transformation from periodic to antiperiodic scalars - it still preserves the possibility of a fermionic description (where it looks like a switch from the Ramond to the Neveu-Schwarz sector), and thus does not take us out of the field of conventional integrable hierarchies. In representation theory one can interpret the same operation just as a switch from the homogeneous to the principal representation, which are associated with the Toda-lattice and KP hierarchies respectively.
This remarkably simple description is of course far from obvious if one investigates the continuum limit in a naive way, without taking the integrable structure into account explicitly, but just sending the number of degrees of freedom in the discrete theory (i.e. the matrix size $N$) to infinity (together with the inverse lattice spacing, if any). We refer to the classical review \cite{Migrev} for a discussion of what the naive continuum limits in lattice gauge theories are, i.e. what the conditions are for getting the second-order phase transitions, which allow for a continuum-like scaling behaviour in the vicinity of the critical point, with critical exponents defining all the continuum physics, from the quantum dimension of the space-time to the spectrum of particles. The problem with naive continuum limits is that they can easily destroy the integrable structure of the theory (the underlying hidden symmetries), unless special precaution is taken: the critical point (which is in fact a low-codimensional hypersurface in the infinite-dimensional space of parameters) should be approached from certain directions, so that the Ward identities are not explicitly broken. As soon as this word - Ward identities - is pronounced, we already get into the field of integrable systems and the issue can be discussed inside this field. The above-mentioned switch from periodic to antiperiodic fields is of course apparent if the discrete and continuous Virasoro constraints (represented by formulas (1.2) and (1.3) in the Introduction) are compared, but this is {\it a posteriori} information, because so far we interpreted the "continuous Virasoro constraints" as the Ward identities for the $V=X^3$-Kontsevich model, and it still remains to be explained why the Kontsevich model is indeed what arises after the continuum limit is taken. The simplest approach to {\it this} problem is to make use of the identity between the discrete 1-matrix model and the Gaussian Kontsevich model \cite{ChMa}, established in section 3.8 above.
Then the $X^3$-model arises in the large-$N$ limit just when the matrix integral is evaluated by the steepest descent method \cite{Toda}. We shall present this simple calculation in the last subsection below, but before that we take a somewhat more direct (and complicated) approach in order to reveal at least some of the ideas underlying the entire theory of continuum limits. \subsection{From Toda-chain to KdV} We begin with the simplest existing example: the continuum limit in which the lowest equation of the "Volterra hierarchy", \begin{eqnarray} \frac{\partial R_n}{\partial t} = - R_n(R_{n+1}-R_{n-1}), \label{lowVolt} \end{eqnarray} turns into the lowest KdV equation: \begin{eqnarray} \frac{\partial r}{\partial T_3} = - \frac{1}{3}r''' - 2rr'. \label{lowKdV} \end{eqnarray} The Volterra hierarchy is a reduction of the Toda-chain hierarchy, with $R_n = e^{\phi_n-\phi_{n-1}}$, arising when all the odd-times $t_{2k+1} = 0$ and all $\phi_n$ are supposed to be independent of them. Therefore this hierarchy is clearly related to the discrete 1-matrix model. We will turn to the study of the 1-matrix model in the next subsection, but here we just address the transformation from (\ref{lowVolt}) to (\ref{lowKdV}) \cite{Nov},\cite{GMMMO}. The basic idea of taking the continuum limit is to change the discrete "zero-time" $n$ for a {\it continuum} variable $x$ (to be after all substituted by $T_1$ of the continuous hierarchy). In other words, the idea is to consider a subset of functions $R_n$ which satisfy the Volterra equation and depend on $n$ very smoothly, so that they can actually be substituted by a smooth function $R(x)$. This is a very natural thing to do, of course, when one is interested in the large-$n$ limit of the equation.
Namely, one substitutes (\ref{lowVolt}) by \begin{eqnarray} \frac{\partial R(x)}{\partial t} = - R(x)(R(x+\epsilon)-R(x-\epsilon)), \label{lowVoltcon} \end{eqnarray} and takes the limit $\epsilon \rightarrow 0$, which, after the rescaling $x \rightarrow \epsilon x$, gives rise to the "Bateman equation", \begin{eqnarray} \frac{\partial R(x)}{\partial t} = - R(x)R'(x). \label{Bateq} \end{eqnarray} This is a very interesting equation (see \cite{Fair} for a description of the amusements of the related theory, which is in fact intimately related to the theory of jets). However, it is much simpler than the KdV equation (for example, it is {\it completely} integrable in the most trivial sense of the word: the entire set of solutions, satisfying {\it any} boundary conditions, can be immediately written down, see \cite{Fair}). The KdV equation can be considered as a sort of "quantization" of (\ref{Bateq}) (unfortunately this very interesting subject has not yet attracted enough attention and is not studied well enough). Remarkably, the Bateman equation is not the {\it only} possible limit of the Volterra equation: a fine-tuning procedure ("double-scaling limit") exists which can provide a less trivial - KdV - equation \cite{Nov}. Indeed, imagine that in the continuum limit $R_n$ tends to a constant $R_0$, and the function $r(x)$ arises only as a scaling approximation to this constant: $R(x) = R_0(1 + \epsilon ^sr(x))$. Then the leading term at the r.h.s. of (\ref{lowVoltcon}) is $-R(x)\left(R(x+\epsilon)-R(x-\epsilon)\right) = -2\epsilon R_0^2\epsilon^s r'(x) (1 + {\cal O}(\epsilon^2,\epsilon^s))$, and instead of (\ref{Bateq}) we would get: \begin{eqnarray} \frac{\partial r}{\partial t} = - 2\epsilon R_0 r'(x) \left(1 + {\cal O}(\epsilon^2,\epsilon^s)\right).
\nonumber \end{eqnarray} This equation is even simpler than (\ref{Bateq}) - it is just linear - but in fact it is too simple to preserve its form: by a simple change of variables\footnote{ This change of variables is implied by the relation: \begin{eqnarray} \frac{\partial}{\partial t} + 2\epsilon R_0 \frac{\partial}{\partial x} = \left( \frac{\partial \tilde t}{\partial t} + 2\epsilon R_0 \frac{\partial \tilde t}{\partial x}\right) \frac{\partial}{\partial \tilde t} + \left( \frac{\partial \tilde x}{\partial t} + 2\epsilon R_0 \frac{\partial \tilde x}{\partial x}\right) \frac{\partial}{\partial \tilde x} = \epsilon^3 R_0 \frac{\partial}{\partial \tilde t}. \end{eqnarray} } \begin{eqnarray} \tilde x = x - 2\epsilon R_0 t, \\ \tilde t = \epsilon^3 R_0 t \label{chavacont} \end{eqnarray} it can be transformed into \begin{eqnarray} \frac{\partial r}{\partial \tilde t} = \epsilon^{-2} {\cal O}(\epsilon^2,\epsilon^s), \nonumber \end{eqnarray} and the terms at the r.h.s. also deserve being taken into account. Then we get: \begin{eqnarray} \frac{\partial r(x)}{\partial t} = -2\epsilon R_0 \left(1 + \epsilon^sr(x)\right) \left(r'(x) + \frac{1}{6}\epsilon^2r'''(x) + {\cal O}(\epsilon^4)\right) = \nonumber \\ = -2\epsilon R_0 \left( r'(x) + \frac{1}{6}\epsilon^2r'''(x) + \epsilon^s rr'(x) + \epsilon^2{\cal O}(\epsilon^2, \epsilon^s)\right) \nonumber \end{eqnarray} and, after the change of variables (\ref{chavacont}), \begin{eqnarray} \frac{\partial r(\tilde x)}{\partial \tilde t} = - \frac{1}{3} r'''(\tilde x) - 2\epsilon^{s-2}rr'(\tilde x) + {\cal O}(\epsilon^2, \epsilon^s). \nonumber \end{eqnarray} It is now clear that the choice $s=2$ is distinguished (a critical point), and at this point we get: \begin{eqnarray} \frac{\partial r}{\partial T_3} = -\frac{1}{3}\frac{\partial^3r}{\partial T_1^3} - 2r\frac{\partial r}{\partial T_1}, \end{eqnarray} where the new notation $T_1$ and $T_3$ is introduced for $\tilde x$ and $\tilde t$ respectively.
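The algebra above is mechanical and easy to verify with a computer algebra system. The following sketch (our illustration, not part of the original derivation) checks that, for the distinguished choice $s=2$, the r.h.s. of the Volterra equation reproduces the drift term $-2\epsilon R_0 r'$ together with the KdV combination $-\epsilon^3 R_0(\frac{1}{3}r''' + 2rr')$ through order $\epsilon^4$; a polynomial with free coefficients stands in for a generic smooth profile $r(x)$:

```python
import sympy as sp

x, eps, R0 = sp.symbols('x epsilon R_0')
# generic smooth profile r(x), represented by a polynomial with free coefficients
a = sp.symbols('a0:8')
r = sum(a[i]*x**i for i in range(8))

# double-scaling ansatz R(x) = R0*(1 + eps^2 r(x)), i.e. the case s = 2
R = lambda y: R0*(1 + eps**2 * r.subs(x, y))
rhs = -R(x)*(R(x + eps) - R(x - eps))            # r.h.s. of the Volterra equation

# since dR/dt = R0*eps^2 * dr/dt, divide out the prefactor
drdt = sp.expand(rhs/(R0*eps**2))

expected = sp.expand(-2*eps*R0*r.diff(x)
                     - eps**3*R0*(sp.Rational(1, 3)*r.diff(x, 3) + 2*r*r.diff(x)))

d = sp.expand(drdt - expected)
# all terms below O(eps^5) agree: drift plus the KdV combination
assert all(sp.expand(d.coeff(eps, k)) == 0 for k in range(5))
```

Repeating the computation with general $s$ shows why $s=2$ is critical: the nonlinear term enters at order $\epsilon^{s+1}$ and matches the dispersive $\epsilon^3$ term only at $s=2$.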
This is already the KdV equation (\ref{lowKdV}), and our conclusion is: while the naive continuum limit of the Volterra equation is just the simple Bateman equation, the scaling limit can be fine-tuned so that the KdV equation arises instead. The crucial ingredient of this adjustment is the change of time-variables $\{t\} \longrightarrow \{T\}$, which involves the singular parameter $\epsilon$. The procedure can be easily generalized to the entire Volterra hierarchy, and fine tuning allows one to get the entire KdV hierarchy in the limit $\epsilon \rightarrow 0$. Usually the transformation from $\{t\}$ to the "Kazakov variables" $\{T\}$ (they are a little different from those originally introduced by V.Kazakov in \cite{Kmamo}) is a linear {\it triangular} transformation. An important detail is that this procedure requires restriction to only the {\it even} time-variables $t_{2m},\ m\geq 0$. (If odd times are also involved, a {\it pair} of KdV hierarchies arises in the continuum limit - this is not a "minimal" case.) Thus the "irreducible" realization of the continuum limit requires a {\it reduction} of the original hierarchy. This can also be seen from the fact that the lowest KdV equation arises from the lowest Volterra equation, which is related to the {\it second} equation of the Toda-chain hierarchy. Unfortunately this simple piece of theory (continuum limits in terms of hierarchies) has never been worked out in full detail (for the entire Toda-lattice hierarchy, its multicomponent generalizations and their reductions). As we already mentioned, this theory will involve the general relation between the homogeneous and principal representations of the (level $k=1$) Kac-Moody algebras. \subsection{Double-scaling limit of 1-matrix model} Now we proceed to the discussion of a slightly different approach to continuum limits, which is directly adjusted to the needs of matrix models.
The naive idea \cite{mamo},\cite{FKN} is to forget about integrability and just look at the Ward identities (Virasoro constraints in the 1-matrix case) and take a continuum limit of these identities. This approach makes close contact with the standard technique of "loop equations" (Makeenko-Migdal equations \cite{MaMi}) in the theory of matrix models, of which Virasoro and $W$-constraints are just particular examples.\footnote{One of the puzzles in the theory of non-eigenvalue models is to identify the group-theoretical meaning of generic loop equations: they are usually introduced as equations of motion rather than as Ward identities (see discussion at the beginning of Section 2 above), and thus their implications are more obscure and the technical means to deal with them are much more restricted. When a group-theory description is found, it will very soon reveal the (generalized) integrable structure of non-eigenvalue models, and it will be a big step forward in the whole theory. } However, careful analysis of the continuum limit of the discrete Virasoro constraints \cite{MMMM} makes it clear that the procedure is far less simple than one could think in advance (usually derivations are not very careful and details are just "put under the carpet"). The crucial problem is that we want the peculiar (double-scaling) rather than the naive limit, and, as we mentioned in the previous subsection, this also requires a certain reduction (elimination of the odd-times $t_{2m+1}$).
If parity symmetry (w.r.to the change $H \rightarrow -H$ in the original matrix integral) is taken into account, one can easily throw away the {\it first} derivatives w.r.to the odd-times $t_{2m+1}$, just because $\displaystyle{\left.\frac{\partial Z_N}{\partial t_{2m+1}}\right| _{t_{2k+1} = 0} = 0}$, but this is no longer true as far as the {\it second} derivatives $\displaystyle{\left.\frac{\partial^2 Z_N}{\partial t_{2m+1}\partial t_{2l-1}} \right| _{t_{2k+1} = 0}}$ are concerned, which appear in (the "quantum piece" of) the Virasoro constraints (1.2). It is a highly non-trivial feature of loop equations (having its origin in their integrable structure!) that in the continuum limit these terms can in fact be carefully eliminated. The point is that the second derivatives of $\log Z_N$ appear to be {\it local} objects, in the sense that they depend only on $Z_{\tilde N}$ with the difference $\mid \tilde N-N\mid \leq m+l$, which does {\it not} blow up as $N \rightarrow \infty$ in the continuum limit. Moreover, the differences $\displaystyle{\frac{\partial^2 \log Z_N}{\partial t_{2m+1}\partial t_{2l-1}} - \frac{\partial^2 \log Z_N}{\partial t_{2m}\partial t_{2l}}}$ almost tend to zero, leaving some simple (though vitally important) correction to the arising continuous loop equations. This locality property allows one to get rid of these dangerous odd-time derivatives, substituting them just by second derivatives w.r.to the even-times. Since such a substitution is possible only for {\it logarithms} of $Z_N$, the continuous constraints appear to be imposed on the {\it square root} of the original partition function (or on the $\frac{1}{p}$-th power in the case of the $(p-1)$-component conformal models). Another aspect of this trick to deal with the odd-time derivatives is that it makes the entire derivation depend on the fact that the theory is integrable - this is what guarantees the above-mentioned locality.
Since the way to reveal integrability by looking at the loop equations themselves is not yet very well understood, the whole calculation becomes not quite self-contained (but of course, if we know everything about the integrable structure this is not a real drawback, it is just a limitation of the particular approach starting from the loop equations). In particular, this is the only loophole which is still not filled in the description of the continuum limit of conformal (multi-component) matrix models, which in all other respects goes exactly in parallel with the 1-component (1-matrix) case.\footnote{ It transforms discrete $W$-constraints into continuum $W$-constraints, which in their turn arise from the GKM with the appropriate potential \cite{GKM}, \cite{Mikh}. Unfortunately, since the GKM-interpretation of {\it discrete} multicomponent models (like the one existing in the 1-matrix case, see s.3.8) is yet unknown, the direct way to take their continuum limit - like the one to be described in the next subsection for the 1-matrix case - is also yet unavailable. For more details about conformal matrix models, their integrable structure and continuum limits see refs.\cite{comamo}. } We shall now briefly describe the steps of this calculation for the 1-matrix model, referring for all the details to refs.\cite{MMMM} and \cite{GMMMMO}. Our previous discussion already contains the motivations for the main steps, so we do not need to go into detailed explanations. The manipulations below, involving Kazakov variables, can look a little artificial, but we repeat that they can be interpreted as a switch from the Toda-type to KP-type hierarchies, which, as we already saw in the previous subsection, is naturally associated with the double-scaling continuum limit.
We start from the discrete Virasoro constraints (1.2), rewritten in terms of a generating functional ("stress tensor" on the spectral plane): \begin{eqnarray} L_-(z) Z_N = 0, \end{eqnarray} where \begin{eqnarray} L_-(z) = \sum_{n\geq -1}^{\infty} L_n z^{-n-2} = \frac{1}{2}\left( J^2(z)\right)_-, \end{eqnarray} and \begin{eqnarray} \oldfalse \begin{array}{c} J(z) = \partial \phi(z) = \sum_{n= -\infty}^{\infty}J_n z^{-n-1}; \\ \phi(z) = \frac{1}{\sqrt{2}}\sum_{k\geq 0}t_kz^k - \sqrt{2} \sum_{k>0} \frac{z^{-k}}{k}\frac{\partial }{\partial t_k}; \\ J_{-k} = \sqrt{2}\frac{\partial }{\partial t_k}; \ \ J_k = \frac{1}{\sqrt{2}}kt_k, \ \ k\geq 0; \\ \frac{\partial}{\partial t_0}Z_N = NZ_N. \end{array} \end{eqnarray} Next, we need to reduce the original partition function: \begin{eqnarray} Z_N\{t\} \longrightarrow Z_N^{\rm red}\{t_{\rm even}\} \equiv Z_N\{t_{\rm odd}=0, t_{\rm even}\}. \end{eqnarray} All odd Virasoro generators $L_{2n+1}$ act trivially on $Z_N^{\rm red}$, since $\displaystyle{\left.\frac{\partial Z_N}{\partial t_{2k+1}}\right| _{t_{\rm odd}=0} = 0}$, and we need to consider only $L_{2n}$. Introduce also\footnote{ Note that $\displaystyle{ \left.\phi^{\rm red}(z) \neq \phi(z) \right|_{t_{\rm odd}=0}}$ and similarly $\displaystyle{ \left.L_{2n}^{\rm red}(z) \neq L_{2n} \right|_{t_{\rm odd}=0}}$: some factors of $2$ in (\ref{redfields}) are responsible for this discrepancy. In fact $L^{\rm red}$ are related to the generators of the Virasoro constraints in the {\it complex}-matrix model \cite{MMMM}, \begin{eqnarray} Z_N^C = \int dM \exp \left(\sum_{k\geq 0}t_{2k} {\rm Tr} (MM^\dagger)^k\right) \nonumber \end{eqnarray} and in the continuum limit $\displaystyle{Z_N^C \sim \sqrt{Z_{2N}^{\rm red}}}$.
} \begin{eqnarray} \phi^{\rm red}(z) &\equiv \frac{1}{\sqrt{2}}\sum_{k\geq 0} t_{2k}z^{2k} - \sqrt{2}\sum_{k>0} \frac{z^{-2k}}{k}\frac{\partial}{\partial t_{2k}}; \nonumber \\ L^{\rm red}(z) &= \frac{1}{2}\left(\partial \phi^{\rm red}(z)\right)^2; \nonumber \\ L_{2n}^{\rm red} &\equiv \sum_{k>0} kt_{2k}\frac{\partial}{\partial t_{2k+2n}} + \sum_{k=0}^n \frac{\partial^2}{\partial t_{2k}\partial t_{2n-2k}}. \label{redfields} \end{eqnarray} Now we have two issues to be discussed separately. The first one is the change from $t_{2k}$ to the Kazakov variables $T_{2m+1}$. The second is the difference between the constraints imposed on $Z^{\rm red}$ and $Z$. The simplest way to {\it describe} Kazakov variables is to introduce one more - antiperiodic - scalar field, \begin{eqnarray} \Phi(u) = \frac{1}{\sqrt{2}} \sum_{k\geq 0} T_{2k+1} u^{k+\frac{1}{2}} - \sqrt{2} \sum_{k\geq 0} \frac{u^{-k-\frac{1}{2}}}{k+\frac{1}{2}} \frac{\partial}{\partial \tilde T_{2k+1}}. \end{eqnarray} Here $\tilde T$ and $T$ are related by the transformation \begin{eqnarray} T_{2k+1} = \tilde T_{2k+1} + \epsilon^2 \frac{k}{k+\frac{1}{2}}\tilde T_{2k-1} + 2\epsilon N\delta_{k,0}. \end{eqnarray} Impose now the relation: \begin{eqnarray} \partial\phi^{\rm red}(z) &= \frac{1}{\epsilon^2}U^{-1} \partial\Phi(u) U; \nonumber \\ z^2 &= 1 + \epsilon^2 u, \label{preKaz} \end{eqnarray} and in the continuum limit $\epsilon$ is assumed to vanish. This is a relation which maps homogeneous representations into principal ones, but its invariant meaning (especially from the point of view of conformal field theory) does not seem to be well enough understood. Anyhow, this relation establishes a correspondence between $t_{\rm even}$ and $T$.
Namely, comparing the coefficients in front of the positive powers of $u$ on both sides of this equation, we get: \begin{eqnarray} T_{2k+1} &= \frac{1}{2}\epsilon^{2k+1} \sum_{m\geq k}^{\infty} \frac{g_m \Gamma(m+\frac{1}{2})}{(m-k)!\Gamma(k+\frac{3}{2})}, \ \ k\geq 0; \nonumber \\ g_m &= mt_{2m},\ \ m\geq 1; \ \ \ \ g_0 = 2N. \label{Kazchava} \end{eqnarray} The inverse transformation looks like \begin{eqnarray} g_m = 2\sum_{k\geq m} (-)^{k-m} \frac{T_{2k+1}\Gamma(k+\frac{3}{2})} {\epsilon^{2k+1}(k-m)!\Gamma(m+\frac{1}{2})} . \end{eqnarray} Now, \begin{eqnarray} \frac{\partial}{\partial t_{2k}} = \frac{1}{2}\sum_{m=0}^{k-1} \frac{\Gamma(k+\frac{1}{2})\epsilon^{2k+1}} {(k-m-1)!\Gamma(m+\frac{3}{2})}\frac{\partial}{\partial\tilde T_{2m+1}}, \end{eqnarray} and using this formula when comparing the negative powers of $u$ we find: \begin{eqnarray} U &= \exp\left(\sum_{m,n} A_{mn}\tilde T_{2m+1}\tilde T_{2n+1}\right), \nonumber \\ A_{mn} &= 2\frac{(-)^{m+n}}{\epsilon^{2(m+n+1)}}\cdot \frac{\Gamma(m+\frac{3}{2})\Gamma(n+\frac{3}{2})} {m!n!(m+n+1)(m+n+2)}. \label{Umatrix} \end{eqnarray} The square of the relation (\ref{preKaz}) is: \begin{eqnarray} \left(\partial\phi^{\rm red}\right)^2(z) = \frac{1}{\epsilon^4}U^{-1} \left(\partial\Phi\right)^2(u) U, \label{preKazVir} \end{eqnarray} or \begin{eqnarray} \sum_{p\geq 0}L_{2p}^{\rm red} z^{-2p-2} = \frac{1}{\epsilon^4}U^{-1} \left(\sum_{n\geq -1} \tilde{\cal L}_{2n} u^{-n-2}\right) U.
\end{eqnarray} This equality implies that \begin{eqnarray} \oldfalse \begin{array}{c} U^{-1}\tilde{\cal L}_{2n} U = \epsilon^4 \sum_{p\geq 0}L_{2p}^{\rm red} \oint_{\infty}\frac{u^{n+1}du}{z^{2p+2}} = \\ = \epsilon^{-2n} \sum_{p=0}^{n+1} (-)^{n+1-p} C^p_{n+1} L_{2p}^{\rm red}, \end{array} \end{eqnarray} since \begin{eqnarray} \oldfalse \begin{array}{c} \epsilon^4\oint_{\infty}\frac{u^{n+1}du}{z^{2p+2}} = \epsilon^4\oint_{\infty}\frac{u^{n+1}du}{(1+\epsilon^2 u)^{p+1}} = \frac{1}{\epsilon^{2n}}\frac{\Gamma(-p)}{(n+1-p)!\Gamma(-n-1)} = \nonumber \\ = \frac{(-)^{n+p+1}}{\epsilon^{2n}}\frac{(n+1)!}{p!(n+1-p)!} = \frac{(-)^{n+1-p}}{\epsilon^{2n}}C_{n+1}^p. \end{array} \end{eqnarray} Explicit expressions for the generators $\tilde{\cal L}_{2n}$ (which are harmonics of the stress tensor $\frac{1}{2}\left(\partial\Phi\right)^2(u)$ of the antiperiodic field $\Phi(u)$) are: \begin{eqnarray} \oldfalse \begin{array}{c} \tilde{\cal L}_{-2} = \sum_{k\geq 1} (k+\frac{1}{2}) T_{2k+1} \frac{\partial}{\partial \tilde T_{2(k-1)+1}} + \frac{T_1^2}{4}; \\ \tilde{\cal L}_{0} = \sum_{k\geq 0} (k+\frac{1}{2}) T_{2k+1} \frac{\partial}{\partial \tilde T_{2k+1}}; \\ \tilde{\cal L}_{2n} = \sum_{k\geq 0} (k+\frac{1}{2}) T_{2k+1} \frac{\partial}{\partial \tilde T_{2(k+n)+1}} + \\ + \frac{1}{4}\sum_{k=0}^{n-1} \frac{\partial^2}{\partial \tilde T_{2k+1}\partial \tilde T_{2(n-k-1)+1}} - \frac{(-)^n}{16\epsilon^{2n}}; \ \ n>0. \end{array} \label{vircotil} \end{eqnarray} So far everything that was done was just a change of variables, and all relations were exact for any $\epsilon$; no limits were taken. The operators (\ref{vircotil}) are very similar to the ${\cal L}_{2n}$ arising in the "continuous Virasoro constraints" (1.3), imposed on the partition function of the $X^3$-Kontsevich model. There are, however, two discrepancies. First, $\frac{\partial}{\partial \tilde T}$ appears in (\ref{vircotil}) instead of the $\frac{\partial}{\partial T}$ in the generators of (1.3).
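Both algebraic ingredients of this construction can be checked symbolically. The sketch below (ours; the truncation order $M$ is arbitrary) verifies that the triangular map (\ref{Kazchava}) and its claimed inverse compose to the identity, and that the contour integral indeed produces $(-)^{n+1-p}\epsilon^{-2n}C^p_{n+1}$, with $\oint_\infty$ understood as extraction of the coefficient of $u^{-1}$ in the expansion around $u=\infty$, computed here after substituting $u=1/w$:

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
half = sp.Rational(1, 2)

# --- round trip of the Kazakov change of variables, truncated at order M ---
M = 5
g = sp.symbols('g0:6')                      # g_m = m*t_{2m}, with g_0 = 2N
T = [half*eps**(2*k+1)
     * sum(g[m]*sp.gamma(m + half)/(sp.factorial(m - k)*sp.gamma(k + half + 1))
           for m in range(k, M + 1))
     for k in range(M + 1)]
for m in range(M + 1):
    back = 2*sum((-1)**(k - m)*T[k]*sp.gamma(k + half + 1)
                 / (eps**(2*k+1)*sp.factorial(k - m)*sp.gamma(m + half))
                 for k in range(m, M + 1))
    assert sp.simplify(back - g[m]) == 0    # inverse formula recovers g_m exactly

# --- the contour-integral coefficients ---
w = sp.symbols('w')
def contour_coeff(n, p):
    # u^{n+1} du / (1+eps^2 u)^{p+1} with u = 1/w becomes w^{p-n} (w+eps^2)^{-(p+1)} dw
    f = w**(p - n) * (w + eps**2)**(-(p + 1))
    return eps**4 * sp.series(f, w, 0, 2).removeO().coeff(w, 1)

for n in range(4):
    for p in range(n + 2):
        target = (-1)**(n + 1 - p)*sp.binomial(n + 1, p)*eps**(-2*n)
        assert sp.simplify(contour_coeff(n, p) - target) == 0
```

The round trip works because the composition of the two triangular maps produces the inner sum $\sum_j (-1)^j {l-m \choose j}/(l-m)! = \delta_{lm}$, so no limit $\epsilon \rightarrow 0$ is needed at this stage.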
One can argue that this difference is not really essential, since $\tilde T_{2k+1}$ and $T_{2k+1}$ differ by terms which are proportional to $\epsilon^2$ and thus vanish in the continuum limit $\epsilon \rightarrow 0$. (Note, however, that this reasoning can be applied only to each particular constraint $\tilde{\cal L}_{2n}Z = 0$, $n \geq -1$, not to the entire generating functional, where different terms are summed, multiplied by different powers of $\epsilon$.) The second discrepancy is a little more serious: it is the occurrence of an extra term $\displaystyle{\frac{(-)^{n+1}}{16\epsilon^{2n}}}$ for all $n\geq 0$ (this difference is present for $n=0$ as well, because ${\cal L}_0$ contains the term $\frac{1}{16}$, which is lacking in (\ref{vircotil})). This extra term cannot be eliminated by just taking the continuum limit: moreover, it blows up instead of vanishing when $\epsilon \rightarrow 0$. Remarkably enough, this term disappears when we turn to the consideration of the actual Virasoro constraints, not just a formal choice of time variables. It cancels completely against the other potential source of problems in the derivation of the continuous Ward identities. We proceed now to the most sophisticated matter of this whole subsection. The point is that, as we already mentioned before, the reduction of the discrete Virasoro constraint $L_{2n}Z_N = 0$ contains some non-vanishing terms with the odd-time derivatives: \begin{eqnarray} \oldfalse \begin{array}{c} \left(\sum_{k>0} 2kt_{2k} \frac{\partial}{\partial t_{2k+2n}} + 2 \sum_{k=0}^{n} \frac{\partial^2}{\partial t_{2k}\partial t_{2n-2k}}\right) Z_N^{\rm red} = \\ = \left(\sum_{k=0}^{n} \frac{\partial^2}{\partial t_{2k}\partial t_{2n-2k}} - \sum_{k=0}^{n-1} \frac{\partial^2} {\partial t_{2k+1}\partial t_{2n-2k-1}}\right) Z_N^{\rm red}. \end{array} \label{colivirder0} \end{eqnarray} We added an extra term with the second even-time derivatives to both sides of the identity, in order to get at the r.h.s.
a combination which has a chance to vanish in the continuum limit. (This formula still needs to be corrected, see eq.(\ref{colivirder3}) below.) In order to find a rigorous reason for the elimination of the terms at the r.h.s., we need to appeal to the explicit formulas from the last subsection of section 4 (no simpler way is known so far). The crucial formula which we need is: \begin{eqnarray} \frac{\partial^2\phi_n} {\partial t_k\partial t_l} = \frac{\partial}{\partial t_k} \frac{\langle n \mid h^l \mid n \rangle}{\langle n \mid n \rangle} = \left( \sum_{m>n} - \sum_{m<n}\right) \frac{\langle n \mid h^k \mid m \rangle \langle m \mid h^l \mid n \rangle} {\langle m \mid m \rangle\langle n \mid n \rangle}, \label{colivirder1} \end{eqnarray} and the most important feature of it is its $R$-matrix structure (the fact that a {\it difference} occurs at the r.h.s.). This structure implies an almost complete cancellation of terms when we sum over $n$ in order to get $\log Z_N = \sum_{0}^{N-1} \phi_n$, leaving only a finite sum of length {\it independent} of $N$: \begin{eqnarray} \frac{\partial^2\log Z_N} {\partial t_k\partial t_l} = \sum_{0 < j \leq {\rm min}(k,l)} \left( \sum_{n = N-j}^{N-1} \frac{\langle n \mid h^k \mid n+j\rangle \langle n+j \mid h^l \mid n \rangle} {\langle n \mid n \rangle\langle n+j \mid n+j \rangle}\right). \end{eqnarray} The finite sum at the r.h.s. can be expressed in terms of $R_n = e^{\phi_n-\phi_{n-1}}$, which are exactly the quantities that satisfy the equations of the Volterra hierarchy and tend to a {\it constant} (denoted by $R_0$ in the previous subsection) in the continuum limit. The locality property - the finiteness of the sum following from (\ref{colivirder1}) - implies that this r.h.s. tends to a constant value as $N \rightarrow \infty$.
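The cancellation mechanism behind this locality is elementary telescoping: the summand is symmetric under $n \leftrightarrow m$, so in the antisymmetrized double sum all pairs with both indices below $N$ cancel, and only the boundary pairs $n < N \leq m \leq n + {\rm min}(k,l)$ survive. A small numerical sketch (ours; the random symmetric band matrix mimics $\langle n \mid h^k \mid m\rangle\langle m \mid h^l \mid n \rangle/(\hbox{norms})$, whose bandwidth is ${\rm min}(k,l)$):

```python
import numpy as np

rng = np.random.default_rng(0)
size, K, N = 40, 3, 25   # K plays the role of min(k,l); N = number of polynomials

# symmetric band matrix: f[n, m] is zero whenever |n - m| > K
f = rng.normal(size=(size, size))
f = f + f.T
f[np.abs(np.subtract.outer(np.arange(size), np.arange(size))) > K] = 0.0

# l.h.s.: sum over n < N of (sum_{m>n} - sum_{m<n}) f[n, m]
lhs = sum(f[n, n+1:].sum() - f[n, :n].sum() for n in range(N))

# r.h.s.: only boundary terms survive, n in [N-K, N-1], m in [N, n+K]
rhs = sum(f[n, N:n+K+1].sum() for n in range(N - K, N))

assert np.isclose(lhs, rhs)
```

The boundary sum manifestly involves a number of terms independent of $N$, which is the locality used in the text.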
This constant does not completely cancel in the difference \begin{eqnarray} \left(\sum_{k=0}^{n} \frac{\partial^2}{\partial t_{2k}\partial t_{2n-2k}} - \sum_{k=0}^{n-1} \frac{\partial^2} {\partial t_{2k+1}\partial t_{2n-2k-1}}\right) \log Z_N^{\rm red}, \label{colivirder2} \end{eqnarray} and the remaining contribution appears to be exactly what is necessary to cancel the dangerous term $\displaystyle{\frac{(-)^{n+1}}{16\epsilon^{2n}}}$ which appeared in the difference between $\tilde{\cal L}_n$ and ${\cal L}_n$. We refer to ref.\cite{MMMM} for more details about these cancellations, and the only thing to discuss in the rest of this subsection is the difference between the r.h.s. of (\ref{colivirder0}) and (\ref{colivirder2}). In the second expression the second derivatives are taken of $\log Z$, while they are of $Z$ itself in the first one. Of course, \begin{eqnarray} \frac{\partial^2 \log Z_N^{\rm red}} {\partial t_{2k+1}\partial t_{2n-2k-1}} = \frac{1}{Z_N^{\rm red}} \frac{\partial^2 Z_N^{\rm red}} {\partial t_{2k+1}\partial t_{2n-2k-1}} \nonumber \end{eqnarray} but this is $not$ true for the even derivatives. So, identity (\ref{colivirder0}) still needs to be transformed a little more in order to contain exactly (\ref{colivirder2}) at its r.h.s. If this is achieved, the l.h.s. acquires an additional contribution and turns into \begin{eqnarray} \oldfalse \begin{array}{c} \sum_{k>0} 2kt_{2k} \frac{\partial Z_N^{\rm red}}{\partial t_{2k+2n}} + \sum_{k=0}^{n} \left( 2\frac{\partial^2 Z_N^{\rm red}} {\partial t_{2k}\partial t_{2n-2k}} - \frac{1}{Z_N^{\rm red}} \frac{\partial Z_N^{\rm red}} {\partial t_{2k}}\frac{\partial Z_N^{\rm red}}{\partial t_{2n-2k}}\right) = \\ = 4\sqrt{Z_N^{\rm red}} L_{2n}^{\rm red} \sqrt{Z_N^{\rm red}}.
\end{array} \label{colivirder3} \end{eqnarray} As a result of all this reasoning we conclude that the double-scaling continuum limit of the reduced 1-matrix model can be described by the following relation: \begin{eqnarray} \lim_{{\rm d.s.}\ \epsilon \rightarrow 0,\ N \rightarrow\infty} \sqrt{Z_N^{\rm red}\{t_{\rm even}\}} \ = \ U^{-1}Z_{V = \frac{X^3}{3}}\{T\}, \end{eqnarray} where the factor $U$ is defined in (\ref{Umatrix}), the relation between the $t$- and $T$-variables is given by (\ref{Kazchava}), and $Z_{V = \frac{X^3}{3}}\{T\}$ is the partition function of the $X^3$-Kontsevich model. The motivation for this conclusion is that both sides of the equation satisfy the same continuous Virasoro constraints (1.3). This whole derivation can be straightforwardly generalized to the case of the multiscaling limit in conformal matrix models, and the analogous relation contains roots of the $p$-th degree, see \cite{comamo} for a detailed discussion. \subsection{From Gaussian to $X^3$ Kontsevich model} We shall now abandon these complicated matters and give a simple illustration of how things can work if expressed in adequate terms. Namely, as an alternative to the sophisticated procedure involving the explicit switch to Kazakov variables and the study of limits of Ward identities (loop equations), we shall just use the equivalence of the discrete 1-matrix model and the Gaussian Kontsevich model, proved in section 3.8 above, in order to take the continuum limit of this simplest Kontsevich model. This procedure, suggested in ref.\cite{Toda}, appears to be just a standard evaluation of the integral in the large-$N$ limit by the steepest descent method. It is important here that the GKM is not sensitive to the size $n$ of the matrix in the Kontsevich integral; therefore this limit, when expressed in terms of the GKM, has nothing to do with infinitely large matrices.
The relation to be proved below is \begin{eqnarray} \lim_{d.s.\hbox{ }N\rightarrow \infty }{\cal F}_{\{\hat V\}} = {\cal F}^2_{\{V\}}, \label{coligako} \end{eqnarray} where $\hat V(X) = \frac{1}{2}X^2 - N\log X$ and $V(X) = \frac{1}{3}X^3$. Very naively, what happens as $N \rightarrow \infty$ is that in the Kontsevich integral, \begin{eqnarray} \int dX \exp {\rm tr}\left(-\frac{1}{2}X^2 + N\log X + \Lambda X\right) \end{eqnarray} a stationary point arises at $X = X_0$, such that \begin{eqnarray} X_0 = \frac{N}{X_0} + \Lambda. \end{eqnarray} The expansion of this action in powers of $\tilde X = \gamma^{-1} (X - X_0)$ comes entirely from the logarithmic piece: \begin{eqnarray} S - S_0 &= \frac{\gamma^2}{2}\tilde X^2 - N\left( \log\left(1 + \frac{\gamma \tilde X}{X_0}\right) - \frac{\gamma \tilde X}{X_0}\right) = \nonumber \\ &= \frac{\gamma^2}{2}\left( 1 + \frac{N}{X_0^2}\right) \tilde X^2 + \sum_{k\geq 3} \frac{N}{k}\left(-\gamma\frac{\tilde X}{X_0}\right)^k. \label{actexpan} \end{eqnarray} In the continuum limit $\gamma$ should be adjusted in such a way that the quadratic term is finite, i.e. ${\gamma \sim \left( 1 + \frac{N}{X_0^2}\right)^{-1/2}}$. Now, if $\Lambda$ remains finite as $N \rightarrow \infty$, then $X_0 \sim \sqrt{N}$, $\gamma \sim 1$ and all the terms with $k\geq 3$ in the sum are damped as $\gamma^kNX_0^{-k} \sim N^{1-\frac{k}{2}}$. This is the naive continuum limit. However, it is clear that one can instead ask $\Lambda$ to behave more adequately - to blow up together with $N$ - and fine-tune the way in which it tends to infinity so that at least the first term with $k=3$ also survives. For this purpose $\Lambda$ and thus $X_0$ should scale in such a way that both quantities $\displaystyle{\gamma^2\left( 1 + \frac{N}{X_0^2}\right)}$ and $\displaystyle{\frac{N\gamma^3}{X_0^3}}$ remain finite.
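Both the closed form and the expansion (\ref{actexpan}) can be verified with sympy. In this check (ours; $S$ denotes minus the exponent of the integrand, and the saddle condition $\Lambda = X_0 - N/X_0$ is built in):

```python
import sympy as sp

X0, Xt, g, N = sp.symbols('X_0 X_t gamma N', positive=True)

Lam = X0 - N/X0                              # stationary point: X0 = N/X0 + Lambda
S = lambda X: X**2/2 - N*sp.log(X) - Lam*X   # minus the exponent of the integrand

dS = S(X0 + g*Xt) - S(X0)
# claimed closed form: quadratic piece plus a pure-log remainder
closed = g**2*Xt**2/2 - N*(sp.log(1 + g*Xt/X0) - g*Xt/X0)
assert sp.simplify(sp.logcombine(sp.expand(dS - closed), force=True)) == 0

# its expansion: (gamma^2/2)(1 + N/X0^2) Xt^2 + sum_{k>=3} (N/k)(-gamma Xt/X0)^k
ser = sp.series(dS, Xt, 0, 6).removeO()
expect = (g**2/2*(1 + N/X0**2)*Xt**2
          + sum(sp.Rational(1, k)*N*(-g*Xt/X0)**k for k in range(3, 6)))
assert sp.simplify(ser - expect) == 0
```

The check confirms in particular that the quadratic coefficient picks up the factor $(1 + N/X_0^2)$, which drives the fine tuning of $\gamma$ described in the text.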
This requirement in the case of the latter expression means that $\gamma \sim X_0N^{-1/3}$ and then \begin{eqnarray} \gamma^2\left( 1 + \frac{N}{X_0^2}\right) \sim \frac{N + X_0^2}{N^{2/3}}. \nonumber \end{eqnarray} This is never finite, unless the leading parts of $N$ and $-X_0^2$ cancel, i.e. $N + X_0^2 = {\cal O}(N^{2/3})$ as $N \rightarrow \infty$. This in turn implies that $X_0 \sim i\sqrt{N}$ and $\Lambda \rightarrow 2X_0 \sim 2i\sqrt{N}$ should be pure imaginary. One can also check that the terms with $k>3$ in the sum (\ref{actexpan}) all tend to zero in this specific limit. Thus we are left with a model which has only cubic and quadratic terms in the action. By a simple shift of variables the quadratic term can be traded for a linear one, and we get a description of the theory in the vicinity of the stationary point in terms of an $X^3$-Kontsevich model. In practice things are a little more complicated, because the reduction to even times should also be taken into account. However, this does not really add too many new problems. We require that only the even times ${t_{2k} = {1\over 2k} {\rm tr}{1\over \Lambda ^{2k}}}$ remain non-vanishing, while all the odd times vanish: ${t_{2k+1} = {1\over 2k+1} {\rm tr}{1\over \Lambda ^{2k+1}} = 0}$. This obviously implies that the matrix $\Lambda$ should be of block form: \begin{eqnarray} \Lambda = \left( \begin{array}{cc} {\cal M} & 0\\0 & -{\cal M} \end{array}\right) \end{eqnarray} and, therefore, the matrix integration variable is also naturally decomposed into block form: \begin{eqnarray} X = \left( \begin{array}{cc} {\cal X} & {\cal Z}\\{\cal Z} & {\cal Y} \end{array} \right) . \end{eqnarray} Then \begin{eqnarray} \oldfalse \begin{array}{c} {\cal F}_{\{\hat V=X^2/2-N\log X\}} = \int d{\cal X}d{\cal Y}d^2{\cal Z}\ \\ {\rm det} ({\cal X}{\cal Y}-\bar {\cal Z} {1\over {\cal Y}}{\cal Z}{\cal Y})^N e^{-{\rm tr}\{|{\cal Z}|^2+{\cal X}^2/2+{\cal Y}^2/2-{\cal M}{\cal X}+ {\cal M}{\cal Y}\}}.
\end{array} \end{eqnarray} To take the limit $N \rightarrow \infty $, one should assume a certain scaling behaviour of ${\cal X}$, ${\cal Y}$ and ${\cal Z}$. Moreover, our previous naive consideration gave us some feeling of what the {\it fine-tuned} scaling behaviour can look like. So we take \begin{eqnarray} \oldfalse \begin{array}{c} {\cal X} = \gamma (i\beta I + x), \\ {\cal Y} = \gamma (-i\beta I + y), \\ {\cal Z} = \gamma \zeta , \\ {\cal M} = \gamma ^{-1}(i\alpha I + m) \end{array} \end{eqnarray} with some large real $\alpha $, $\beta $ and $\gamma $. Expressed in these variables, the action becomes: \begin{eqnarray} \oldfalse \begin{array}{c} {\rm tr}\left(|{\cal Z}|^2 + {\cal X}^2/2 + {\cal Y}^2/2 - {\cal M}{\cal X} + {\cal M}{\cal Y} - N\log ({\cal X}{\cal Y} - \bar {\cal Z}{1\over {\cal Y}}{\cal Z}{\cal Y})\right) = \\ = \gamma ^2{\rm tr}\left(\frac{1}{2}(i\beta I + x)^2 + {1\over 2}(i\beta I -y)^2 + |\zeta |^2\right) - \\ - {\rm tr}(i\alpha I + m)(2i\beta I + x - y) - \\ - N{\rm tr} \log \ \beta ^2\gamma ^2\left(1 - i {x-y\over \beta } + {xy\over \beta ^2} - {|\zeta |^2\over \beta ^2}(1 + {\cal O}(1/\beta ))\right) = \end{array} \end{eqnarray} $$ = [2\alpha \beta - \beta ^2\gamma ^2 - 2N\ \log \ \beta \gamma ] {\rm tr}\ I - 2i\beta \ {\rm tr}\ m + \eqno{(A)} $$ $$ + i\left(\beta \gamma ^2 - \alpha + {N\over \beta }\right) ({\rm tr}\ x - {\rm tr}\ y) + {1\over 2}\left(\gamma ^2 - {N\over \beta ^2}\right) ({\rm tr}\ x^2 + {\rm tr}\ y^2) + \eqno{(B)} $$ $$ + \left(\gamma ^2 + {N\over \beta ^2}\right) {\rm tr} |\zeta |^2 - \eqno{(C)} $$ $$ - {\rm tr}\ mx + {\rm tr}\ my + {iN\over 3\beta ^3}{\rm tr}(x^3 - y^3) + \eqno{(D)} $$ $$ + {\cal O}(N/\beta ^4) + {\cal O}\left(|\zeta |^2 {N\over \beta ^3}\right). \eqno{(E)} $$ We want to adjust the scaling behaviour of $\alpha $, $\beta $ and $\gamma $ in such a way that only the terms in the line $(D)$ survive. This goal is achieved in several steps.
The line $(A)$ describes the normalization of the functional integral; it does not contain $x$ and $y$ and is thus of no interest for us at the moment. The two terms in the line $(B)$ are eliminated by adjustment of $\alpha $ and $\gamma $: \begin{eqnarray} \gamma ^2 = {N\over \beta ^2}\hbox{ , } \alpha = {2N\over \beta }\hbox{ .} \end{eqnarray} As we shall see soon, $\gamma ^2 = N/\beta ^2$ is large in the limit $N\rightarrow \infty $. Thus, the term $(C)$ implies that the fluctuations of the $\zeta $-field are severely suppressed, and this is what makes the terms of the second type in the line $(E)$ negligible. More generally, this is the reason for the integral $Z_{\{\hat V\}}$ to split into a product of two independent integrals, leading to the square of the partition function in the limit $N\rightarrow \infty $ (this splitting is evident since, if ${\cal Z}$ can be neglected, the only mixing term $\displaystyle{\log \det \left( \begin{array}{cc} {\cal X} & {\cal Z}\\{\cal Z} & {\cal Y} \end{array} \right)} $ turns into $\log {\cal X}{\cal Y} = \log {\cal X} + \log {\cal Y}$). Thus, we remain with a single free parameter $\beta $, which can be adjusted so that \begin{eqnarray} {\beta ^3\over N}\rightarrow {\rm const}\ \ \ \hbox{ as }\ \ \ N\rightarrow \infty \nonumber \\ (i.e\hbox{. } \beta \sim N^{1/3},\ \gamma^2 \ \sim N^{1/3},\ \alpha \sim N^{2/3} ), \end{eqnarray} making the terms in the last line $(E)$ vanish and the third term in the line $(D)$ finite. This proves the statement (\ref{coligako}) in a rather straightforward way. Unfortunately no generalization of this procedure to other discrete models has been found so far, the main problem being the identification of GKM-type realizations of other (for example, conformal) discrete matrix models. \ \section{Conclusion} We have come to the end of our brief review of the facts that are already known about the relation between matrix models and integrable hierarchies.
There are still several topics which are already discussed in the literature but not presented in these notes. First of all, we did not discuss the relation between matrix models and theories of topological (Landau-Ginzburg) gravity (LGG). This field has been developing fast during the last months and will soon be ready for inclusion in this kind of review. The list of things which are already clarified enough includes the realization of the Ward identities in the form of "recursion relations" for topological gravity \cite{WitTG}. Also the relation of the quasiclassical hierarchies, arising in the spherical approximation to topological theories \cite{KriDu}, to the integrable structure of the Generalized Kontsevich model is more or less understood \cite{Krich}. Of special importance is the chapter of this theory which provides a matrix-model description of moduli spaces associated with Riemann surfaces \cite{Penner},\cite{Ko}. What still deserves better understanding is the axiomatic construction of topological gravity, similar to the remarkably simple construction of topological LG models (before they are coupled to $2d$ gravity) in terms of the Grothendieck residues and chiral rings \cite{chiri}: see \cite{Rabi} for a very nice presentation of the latter case and \cite{LP} for the first big steps towards a similar construction in the former case. Also the relation to the theory of non-conformal LG models \cite{CeVa} deserves to be clarified. A piece which is essentially lacking so far is a clear description of the minimal $(p,q)$-models coupled to $2d$ gravity in the case of $p\neq 1$. In this situation the Generalized Kontsevich model is known to describe nothing more than the duality transformation between $(p,q)$ and $(q,p)$ models \cite{KhMa}, rather than the models themselves. This subject is also connected with the theory of the Kac-Schwarz operator \cite{KaSch}.
The work in this direction is extremely important for the understanding of the unification of various string models and of essential symmetries of the future string field theory (in particular, generic BRST and Batalin-Vilkovisky symmetries are very close analogues of the complete sets of Ward identities, as described in the general framework in the beginning of section 2). All these things would constitute a natural next section of these notes, but we chose to wait a little longer until further clarification is achieved in this fragment of the theory. Second, we did not touch at all upon the physical interpretations of matrix models, which include quantum gravity, Yang-Mills theory and many other possible applications. This should be a subject of very different reviews, for which the whole content of these notes is just a piece of the techniques involved in the study of physical phenomena. Third, the biggest $terra\ incognita$ in this branch of science, which remained beyond the scope of these notes, is the theory of non-eigenvalue matrix models, which are related to physical theories in space-time dimensions $d\geq 2$. It is indeed a $terra\ incognita$, at least from the point of view of the semi-rigorous analysis which we are reviewing. The recent breakthrough in this field is due to the appearance of the Kazakov-Migdal model \cite{KazMi} (see also the latest review \cite{SeWe} and references therein), which for the first time opened the possibility to treat a wide class of non-eigenvalue models by exact methods of localization theory (other names for this field, which in fact is growing into a generic theory of integrability, are the Duistermaat-Heckman theorem or Fourier analysis on group manifolds). The work in this direction is, however, only at the early stages, and this is why we decided not to present the first non-systematized results in these notes.
A part of it which is very close to being satisfactorily understood is the "boundary model" of the $c=1$ string ("$d=2$ dilaton gravity"), a very important one from the point of view of general string theory. For the present state of knowledge about this model see \cite{VeVe}; its relation to integrability theory is partly revealed in refs. \cite{DiMo} and \cite{KhMa}. In the domain which $was$ actually reviewed, the weakest points are the theory of continuum limits and that of the multicomponent hierarchies. These theories, when developed, can also help to move in the most important direction, mentioned many times in the text above: towards the creation of a more general theory of integrability. The next natural step, when approached from this side, should be a generalization of conventional integrable hierarchies which would lift the restriction to level $k=1$ simply-laced Kac-Moody algebras and unitary representations. The emerging theory will of course have much to do with both localization theory and non-eigenvalue matrix models, and when it is created we shall find ourselves at a new level of understanding, one step closer to the goal of constructing the entire building of string theory (mathematical physics), and it will probably provide us with unexpected new means for investigating the features of the real physical world around us. \section{Acknowledgements} These notes reflect the content of the lectures given at the University of Amsterdam and NIKHEF in February-March 1993. I am deeply indebted to colleagues at Amsterdam for their patience and attention, and especially to Sander Bais for his hospitality and encouragement. It is a pleasure to thank my collaborators and friends, mentioned by name at the end of the Introduction, for the lessons that they taught me during our work on the subject of matrix models. I also acknowledge the support of FOM during the time when these notes were being written. \bigskip
\section{Introduction.} We consider the semi-classical Schr{\"o}dinger operator \begin{equation}\label{1.1} P=P(h)=-h^2\Delta +V(x),\,\,x\in{\mathbb R}^n,\end{equation} with the symbol $p(x,\xi)=\xi^2+V(x).$ If the spectrum of (\ref{1.1}) is discrete near some energy $E$ and the real-valued potential $V$ is smooth, then it is known (\cite{Guillemin1996}, \cite{Zelditch2004}, \cite{IantchenkoSjostrandZworski2002}) that the spectrum of $P(h)$ in a small fixed neighborhood of $E$ as $h\rightarrow 0$ determines the Birkhoff normal form of the Hamiltonian $p(x,\xi)=\xi^2 +V(x).$ In \cite{GuilleminUribe2007} it was shown that the classical Birkhoff normal form of $p(x,\xi)$ at a non-degenerate minimum $x_0$ of $V$ determines the Taylor series of the potential, provided the eigenvalues of the Hessian are linearly independent over ${\mathbb Q}$ and $V$ satisfies a symmetry condition near $x_0.$ This result was applied to prove that the low-lying eigenvalues of the semi-classical operator $P$ determine the Taylor series of the potential at $x_0.$ In this note we study a similar question for resonances. In \cite{Zworski2007} it was indicated how the inverse spectral results based on wave invariants translate to inverse results for resonances (see also \cite{Zelditch2002}). We consider a special situation, as in \cite{Sjostrand1986} and \cite{KaidiKerdelhue2000}, when the resonances can be calculated explicitly as the eigenvalues of the semi-classical Birkhoff normal form. We suppose that the general assumptions of Helffer-Sj{\"o}strand in \cite{HelfferSjostrand1986} are fulfilled, so that we can define resonances in some fixed neighborhood of $E_0\in{\mathbb R}$ when $h>0$ is small enough. We suppose also that $V$ is an analytic potential which extends to a holomorphic function in a set $$\{ x\in{\mathbb C}^n;\,\,|{\rm Im\,} x| <\frac{1}{C}\langle{\rm Re\,} x\rangle\}$$ with $V(x)\rightarrow 0$ when $x\rightarrow \infty$ in that set.
Here $\langle s\rangle=(1+|s|^2)^{1/2}.$ We will use the notation ${\rm neigh}\,({E,{\mathbb R}})$ or ${\rm neigh}(E)$ for a real neighborhood of $E\in{\mathbb R}.$ Following \cite{GerardSjostrand1987} the trapped set $K(E_0)$ is $K (E_0)=\{\rho\in p^{-1}(E_0);\,\,\exp tH_p(\rho)\not\rightarrow \infty,\,\,t\rightarrow\pm\infty\},$ which is the union of trapped trajectories in $p^{-1}(E_0).$ Here $H_p$ is the Hamilton field of $p(x,\xi).$ We assume that the union of trapped trajectories in $p^{-1}(E_0)$ is just the point $(0,0):$ \begin{equation}\label{trapped point} K(E_0)=(0,0). \end{equation} Then $0$ is the unique critical point of $V$ with critical value $E_0.$ We suppose that $0$ is a non-degenerate critical point of $V$ with signature $(n-d,d):$ $$V(0)=E_0,\,\,V'(0)=0,\,\,{\rm sgn}\,V''(0)=(n-d,d),$$ so that $V''(0)$ is non-degenerate and \begin{equation}\label{V} V(x)=E_0+\sum_{j=1}^{n-d} u_j^2x_j^2 -\sum_{j=n-d+1}^n u_j^2x_j^2 +{\mathcal O}(|x|^3).\end{equation} Kaidi and Kerdelhue showed in \cite{KaidiKerdelhue2000} how to adapt the Helffer-Sj{\"o}strand theory and realize $P=-h^2\Delta +V(x)$ as acting in $H(\Lambda)$-spaces, where $\Lambda\subset{\mathbb C}^{2n}$ is an IR-manifold which coincides with $T^*({\mathbb R}^{n-d}\oplus e^{i\pi/4}{\mathbb R}^{d})$ near $(0,0)$ and has the property that $\forall\epsilon >0,\,\,\exists\delta >0$ such that $(x,\xi)\in\Lambda,\,\,{\rm dist}\,((x,\xi),(0,0)) >\epsilon$ $\Rightarrow$ $|p(x,\xi) -E_0| >\delta.$ Then resonances can essentially (modulo an argument using a Grushin reduction) be viewed as an eigenvalue problem for $P$ after the complex scaling $ x_j=e^{i\pi/4} \tilde{x}_j,$ $\tilde{x}_j\in{\mathbb R},$ $n-d+1\leq j\leq n.$ We suppose also that the coefficients $u_j$ in (\ref{V}) satisfy the non-resonance condition: \begin{equation}\label{non-res} \sum_{j=1}^n k_j u_j=0,\,\, k_j\in{\mathbb Z}\,\,\Rightarrow\,\,k_1=k_2=\ldots=k_n= 0.
\end{equation} Under these assumptions a result of Kaidi and Kerdelhue \cite{KaidiKerdelhue2000} gives all resonances in a disc $D(E_0, h^\delta)$ of center $E_0$ and radius $h^\delta.$ Here $\delta$ can be any fixed constant and $h>0$ is small enough depending on $\delta.$ We cite this result later in Theorem \ref{KaKe}. A consequence of the main result of this note is the following: \begin{theorem}\label{TheoremMain} Assume $V$ is symmetric with respect to reflections about the coordinate axes, i.e. for any choice of signs \begin{equation}\label{symmetric} V(x_1,\ldots,x_n)=V(\pm x_1,\ldots,\pm x_n).\end{equation} In addition, assume that \begin{equation}\label{expansion}V(x)=E_0+\sum_{j=1}^{n-d} u_j^2x_j^2 -\sum_{j=n-d+1}^n u_j^2x_j^2 +{\mathcal O}(|x|^4),\end{equation} where $u_1,\ldots,u_n$ are positive numbers satisfying (\ref{non-res}). Then, given $N>0$ there exists a $\delta >0$ such that the resonances in $D(E_0, h^\delta)$ for $0<h<h_0$ determine the first $N$ terms in the Taylor series of $V$ at zero. \end{theorem} In dimension $n=1,$ $d=1,$ the resonances generated by the maximum of the potential (barrier-top resonances) are of the form $ \simeq V(0)-ih(-V''(0)/2)^{1/2} (2k+1)+\ldots,$ $k=0,1,\ldots,$ $V(0)=E_0.$ Yves Colin de Verdi{\`e}re and Victor Guillemin have recently shown in \cite{CdVGuillemin2008} that one can drop the condition that the potential is even. Namely, instead of (\ref{symmetric}) and (\ref{expansion}) it is enough to suppose that in the expansion $V(x)=E_0-ux^2+\sum_{j=3}^\infty a_jx^j$ the coefficients $u>0$ and $a_3$ do not vanish. Then all $a_j$'s are determined from the coefficients of the quantum Birkhoff normal form once we have chosen the sign of $a_3.$ The classical Birkhoff normal form alone is not enough to recover the potential.
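In the barrier-top case the mechanism is transparent for the purely quadratic model (a sketch for the quadratic part only; the cubic and higher terms are what the Birkhoff normal form takes care of): for $V(x)=E_0-u^2x^2$ the complex scaling $x=e^{i\pi/4}\tilde{x}$ gives

```latex
P-E_0 \;=\; -h^2\partial_x^2 - u^2x^2
\;\longrightarrow\;
\frac{1}{i}\left(-h^2\partial_{\tilde{x}}^2 + u^2\tilde{x}^2\right),
\qquad x=e^{i\pi/4}\tilde{x},
```

and the harmonic-oscillator spectrum $uh(2k+1)$ of the scaled operator yields the resonances $E_0-iuh(2k+1),$ $k=0,1,\ldots,$ in agreement with the barrier-top formula above, since here $u=(-V''(0)/2)^{1/2}.$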
In dimension $n=2,$ $d=1,$ Sj{\"o}strand (see \cite{Sjostrand2003}) showed that the saddle-point resonances are given by the eigenvalues of the Birkhoff normal form in a whole $h$-independent neighborhood of $E_0.$ Thus the full Taylor series of $V$ is determined and, using analyticity, the full potential can be recovered from the resonances. To prove Theorem \ref{TheoremMain} we use that, under the non-resonance condition (\ref{non-res}), the Schr{\"o}dinger operator $P$ can be transformed into the semi-classical or quantum Birkhoff normal form (see \cite{Sjostrand1992}) \begin{equation}\label{Bnf} E_0+\tilde{P}\equiv U^*PU, \end{equation} where $U$ is an analytic unitary Fourier integral operator microlocally defined near $(0,0)$ and $\tilde{P}$ is a pseudodifferential operator with symbol \begin{equation}\label{F} F\sim\sum_{j=0}^\infty h^jF_j(\imath_1,\ldots,\imath_{n-d} , \jmath_{n-d+1},\ldots, \jmath_{n} ), \,\,\imath_j=\xi_j^2 +x_j^2,\,\,\jmath_j=\xi_{j}^2-x_{j}^2, \end{equation} with $F_j$ analytic and principal symbol \begin{equation}\label{F0} F_0=\sum_{j=1}^{n-d} u_j\imath_j +\sum_{j=n-d+1}^n u_j\jmath_j +{\mathcal O}(|(\imath,\jmath)|^2). \end{equation} The equivalence relation $\equiv$ means equality to infinite order at $(0,0)$ (see \cite{IantchenkoSjostrand2002}).
The result of \cite{KaidiKerdelhue2000} shows that, modulo error terms of order ${\mathcal O}(h^\infty),$ the resonances of $P$ in an $h^\delta$-neighborhood of $E_0$ are approximated by the eigenvalues of its quantum Birkhoff normal form at $(0,0)$ after the complex scaling $ x_j=e^{i\pi/4} \tilde{x}_j,$ $\tilde{x}_j\in{\mathbb R},$ $n-d+1\leq j\leq n,$ namely $$\tilde{F} \sim \sum_{j=0}^\infty h^jF_j(\imath_1,\ldots,\imath_{n-d},\frac{1}{i}\tilde{\imath}_{n-d+1},\ldots,\frac{1}{i}\tilde{\imath}_{n}),$$ where $F$ is as in (\ref{F}) and $\frac{1}{i}\tilde{\imath}_j=\frac{1}{i}(\tilde{\xi}_j^2+\tilde{x}_j^2)=\xi_j^2-x_j^2,$ $\xi_j=e^{-i\pi/4}\tilde{\xi}_j,$ $x_j=e^{i\pi/4}\tilde{x}_j.$ We denote $\tilde{F}_j(\imath_1,\ldots,\imath_{n-d},\tilde{\imath}_{n-d+1},\ldots,\tilde{\imath}_{n})= F_j(\imath_1,\ldots,\imath_{n-d},\frac{1}{i}\tilde{\imath}_{n-d+1}, \ldots,\frac{1}{i}\tilde{\imath}_{n}).$ \begin{theorem}[Kaidi-Kerdelhue]\label{KaKe} The resonances of $P$ in the rectangle $]E_0-\epsilon_0,E_0+\epsilon_0[ -i[0,h^{\delta}]$ are simple, labeled by $k\in{\mathbb N}^n$, and of the form $$E_0+\sum_{j=0}^\infty h^j\tilde{F}_j((2k_1+1)h,\ldots,(2k_n+1)h)$$ where $$\tilde{F}_j\in C^\infty({\rm neigh}(0)),\,\, \tilde{F}_0(\imath)=\sum_{j=1}^{n-d} u_j\imath_j -\sum_{j=n-d+1}^n iu_j\imath_j +\mathcal{O}(|\imath |^2),\,\,\tilde{F}_1(\imath)=V(0)-E_0=0.$$ \end{theorem} The main result of this note is the following: \begin{lemma}\label{lemma1} Assume (\ref{non-res}), (\ref{symmetric}) and (\ref{expansion}). Then the classical Birkhoff normal form $F_0$ determines the Taylor series of $V$ at the origin. \end{lemma} We show in Section \ref{Section1} how this lemma follows from \cite{GuilleminUribe2007}.
The main idea of the proof is that the complex scaling reduces the principal symbol of $P$ to the form $H(x,\xi)=\sum_{j=1}^n\omega_j(\xi_j^2+x_j^2)+{\mathcal O}(|x|^3)$, which is similar to the Hamiltonian considered in \cite{GuilleminUribe2007}, with the only difference that the coefficients $\omega_j$ for $n-d+1\leq j\leq n$ are complex numbers. We show in Section \ref{Section1} that the method of Guillemin and Uribe can still be applied.\\ \\ {\em Acknowledgements.} The author thanks the anonymous referee for numerous comments and suggestions. \section{Classical Birkhoff canonical form, proof of Lemma \ref{lemma1}}\label{Section1} Conjugating the Hamiltonian $p(x,\xi)=\xi^2+ V(x),$ with $V$ as in (\ref{expansion}), by the linear symplectomorphism $$ x_i\mapsto u_i^{1/2}x_i,\,\,\xi_i\mapsto u_i^{-1/2}\xi_i,\,\,i=1,\ldots, n,$$ one can assume without loss of generality that $$ p=E_0+H_1+V_2\equiv E_0+\sum_{j=1}^{n-d} u_j(\xi_j^2 +x_j^2) +\sum_{j=n-d+1}^n u_j(\xi_j^2-x_j^2) +V_2(x_1^2,\ldots,x_n^2),$$ where $V_2(s_1,\ldots, s_n)={\mathcal O} (|s|^2).$ We denote $H=H_1+V_2.$ Then resonances can essentially (see Introduction) be viewed as an eigenvalue problem for $P$ after the complex scaling \begin{equation}\label{tildes} x_j=e^{i\pi/4} \tilde{x}_j,\,\,\tilde{x}_j\in{\mathbb R},\,\,n-d+1\leq j\leq n.
\end{equation} The principal symbol of the scaled operator becomes $\tilde{p}(\ldots,\tilde{x},\ldots,\tilde{\xi})=E_0+\tilde{H}_1+\tilde{V}_2,$ where the new $\tilde{H}=\tilde{H}_1+\tilde{V}_2$ is equal to $$H(x_1,\ldots, x_{n-d}, e^{i\pi /4} \tilde{x}_{n-d+1},\ldots, e^{i\pi /4} \tilde{x}_n, \xi_1,\ldots, \xi_{n-d}, e^{-i\pi /4} \tilde{\xi}_{n-d+1},\ldots, e^{-i\pi /4} \tilde{\xi}_n).$$ With $x_j=e^{i\pi/4}\tilde{x}_j,$ $\xi_j=e^{-i\pi/4}\tilde{\xi}_j,$ for $n-d+1\leq j\leq n,$ we have $\xi_j^2-x_j^2=(\tilde{\xi}_j^2 +\tilde{x}_j^2)/i,$ and omitting the tildes we get $$ H(x,\xi)=\sum_{j=1}^{n-d} u_j(\xi_j^2 +x_j^2) +\sum_{j=n-d+1}^n \frac{1}{i}u_j(\xi_j^2+x_j^2) +V_2(x_1^2,\ldots,x_n^2)$$ with all $x_j,\xi_j$ real, which can be identified with the restriction of the old $H$ to the IR-manifold $\Lambda\subset{\mathbb C}^{2n}.$ Then one can follow Guillemin-Uribe \cite{GuilleminUribe2007}, keeping in mind that for $n-d+1\leq j\leq n,$ the $u_j$ are replaced by $u_j/i.$ We have \begin{equation}\label{H1} H_1=\sum_{j=1}^{n-d} u_j(\xi_j^2+ x_j^2) +\sum_{j=n-d+1}^n \frac{1}{i}u_j(\xi_j^2+x_j^2).
\end{equation} As in \cite{GuilleminUribe2007} we introduce complex coordinates, $z_j=x_j +i\xi_j,$ with real $x_j,\xi_j.$ In these coordinates $x_j^2+\xi_j^2=z_j\overline{z}_j=|z_j|^2.$ The Hamiltonian vector field $$\nu=\sum_j\frac{\partial H_1}{\partial\xi_j}\frac{\partial}{\partial x_j}-\frac{\partial H_1}{\partial x_j}\frac{\partial}{\partial \xi_j}$$ becomes the vector field $$\frac{2}{i}\sum_{j=1}^{n-d}u_j\left(z_j\frac{\partial}{\partial z_j} -\overline{z}_j\frac{\partial}{\partial\overline{z}_j}\right) -2 \sum_{j=n-d+1}^{n}u_j\left(z_j\frac{\partial}{\partial z_j} -\overline{z}_j\frac{\partial}{\partial\overline{z}_j}\right).$$ Then the proof of \cite{GuilleminUribe2007}, where the $u_j$ for $n-d+1\leq j\leq n$ are substituted by $ u_j/i,$ can be applied, and we get inductively that for $N=1,2,\ldots$ there exists a neighborhood, ${\mathcal O},$ of $x=\xi=0,$ and a complex canonical transformation, $\kappa:\,\,{\mathcal O}\rightarrow{\mathbb C}^{2n}$, such that \begin{equation}\label{classicalBnf} \kappa^*H=\sum_{j=1}^N H_j+R_{N+1}+R_{N+1}', \end{equation} where \begin{enumerate} \item[a)] The $H_j$ are homogeneous polynomials of degree $2j$ of the form $H_j=h_j(x_1^2 +\xi_1^2,\ldots,x_{n}^2 +\xi_{n}^2),$ with $H_1$ given in (\ref{H1}). \item[b)] $R_N$ is homogeneous of degree $2N$ and of the form $R_N=W_N+R_N^\sharp,$ where $W_N$ consists of the terms homogeneous of degree $2N$ in the Taylor series of $V(x_1^2,\ldots,x_n^2)$ at $x=0,$ and $R_N^\sharp$ is an artifact of the previous inductive steps. \item[c)] $R_N'$ vanishes to order $2N+2$ at the origin and is of the form $R_N'=V-\sum_{k=2}^NV_k+S_N,$ where $S_N$ is another artifact of the inductive process. In addition, $R_N'$ is even. \end{enumerate} Using this induction argument Guillemin and Uribe show that one can read off from the $H_j$'s the first $N$ terms in the Taylor expansion of $V(s_1,\ldots,s_n)$ at $s=0.$ This argument is invariant under complex scaling.
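The coefficient $(2/i)u_j$ of $z_j\partial_{z_j}$ above can be verified by direct computation. The following sketch (a numerical sanity check introduced here, not taken from the paper) evaluates $\nu(z)=\partial_\xi H\cdot\partial_x z-\partial_x H\cdot\partial_\xi z$ for a single degree of freedom with $H=u(\xi^2+x^2)$ and compares it with $(2/i)u\,z$, for real $u$ (elliptic directions) and $u/i$ (scaled hyperbolic directions) alike:

```python
# For H = u*(xi^2 + x^2), the Hamiltonian field
#   nu = (dH/dxi) d/dx - (dH/dx) d/dxi
# acts on z = x + i*xi via nu(z) = (dH/dxi)*(dz/dx) - (dH/dx)*(dz/dxi).
def nu_z(u, x, xi):
    dH_dxi = 2 * u * xi
    dH_dx = 2 * u * x
    # dz/dx = 1, dz/dxi = i
    return dH_dxi * 1 - dH_dx * 1j

for u in (0.7, 0.7 / 1j):                    # elliptic u_j and hyperbolic u_j/i
    for (x, xi) in ((0.3, -1.2), (1.5, 0.4)):
        z = x + 1j * xi
        # nu(z) = 2u*xi - 2i*u*x = -2i*u*(x + i*xi) = (2/i)*u*z
        assert abs(nu_z(u, x, xi) - (2 / 1j) * u * z) < 1e-12
```

The same algebra gives $\nu(\overline{z})=-(2/i)u\,\overline{z}$, reproducing the $-\overline{z}_j\partial_{\overline{z}_j}$ part of the displayed vector field.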
This completes the proof of Lemma \ref{lemma1}. \phantom{=}\hfill\qed\vspace{0.5cm} Recalling the tildes introduced by (\ref{tildes}) and letting $N$ tend to infinity in (\ref{classicalBnf}), we obtain the classical Birkhoff normal form $$\sum_{j=1}^\infty \tilde{H}_j(x_1^2 +\xi_1^2,\ldots,x_{n-d}^2 +\xi_{n-d}^2,\tilde{x}_{n-d+1}^2 +\tilde{\xi}_{n-d+1}^2,\ldots, \tilde{x}_{n}^2 +\tilde{\xi}_{n}^2)$$ with $\tilde{H}_1$ as in (\ref{H1}). Then after scaling back to ${\mathbb R}^n\times{\mathbb R}^n$ we get the classical Birkhoff normal form as in (\ref{F0}): $$F_0=\sum_{j=1}^\infty H_j(\xi_1^2+x_1^2 ,\ldots,\xi_{n-d}^2+x_{n-d}^2 , \xi_{n-d+1}^2-x_{n-d+1}^2,\ldots, \xi_{n}^2-x_{n}^2 ),$$ with $$H_1=\sum_{j=1}^{n-d} u_j(\xi_j^2 +x_j^2) +\sum_{j=n-d+1}^n u_j(\xi_j^2-x_j^2).$$ The construction of the quantum Birkhoff normal form (\ref{Bnf}) is well known (see for example \cite{Sjostrand1992}).
\section{Introduction} Blockchain technology, which originated from Bitcoin, has been growing rapidly in recent years \cite{b1}. The blockchain leverages cryptographic techniques, distributed ledgers and consensus algorithms to provide a trusted and decentralized service for several applications \cite{b2,b3,b4,b5,b6}. Depending on the user authorization mechanisms, blockchains can be mainly categorized into permissionless and permissioned ones \cite{b7}. Ethereum is a programmable permissionless blockchain platform that achieves business logic based on specific smart contracts \cite{b8,b9}. On the other hand, Hyperledger Fabric is an open-source enterprise-grade permissioned blockchain platform with a highly modular and configurable architecture \cite{b10}. It integrates fine-grained access control, an immutable ledger and pluggable consensus protocols. Due to those advantages, Hyperledger Fabric is used in many scenarios, such as emission trading, insurance and education \cite{b11,b12,b13,b14}. The performance of Hyperledger Fabric has been widely studied \cite{b15,b16}. The performance differences between Hyperledger Fabric version 0.6 and version 1.0 were evaluated in \cite{b17}, which indicated that version 1.0 significantly improves the system performance. A repeatable evaluation methodology was proposed to assess the performance of Hyperledger Fabric and Ethereum in \cite{b18}. The consensus protocols of those two platforms were compared in \cite{b19}, where Hyperledger Fabric achieves better performance than Ethereum. Moreover, extensive experiments with varying parameters on Hyperledger Fabric v1.0 were conducted to study the impact of various system configurations \cite{b20}. The experimental results indicated that endorsement policy verification, sequential policy validation of transactions in a block, and state validation and commit are the three major performance bottlenecks.
Based on those analyses, several optimizations to improve the overall performance were introduced. However, those experiment-based analyses of Hyperledger Fabric lack scalability and a theoretical basis. In order to further analyze the performance characteristics of this blockchain framework, it is imperative to model its transaction flow using a mathematical approach. Due to the complexity of the transaction flow and system configurations, the performance of a Hyperledger-based system is affected by several factors, which makes modeling the system difficult. In this paper, we analyze the blockchain performance based on Hyperledger Fabric. As a formal mathematical theory designed for modeling concurrency, causality and conflict, GSPN provides a graphical approach to decompose the request processing flow of the Hyperledger-based system into multiple phases. Moreover, the GSPN-based model can be simulated to obtain performance metrics such as the latency and throughput of each phase at a non-steady state, which is suitable for the scenario considered here. To sum up, our major contributions are summarized as follows: \begin{enumerate} \item An analytical model is proposed to depict the transaction flow of a Hyperledger-based system using GSPN, and this model is validated by experiments. \item We analyze how different ordering strategies affect the system performance and identify the performance bottleneck based on the proposed model. \item In response to the identified bottleneck above, a mathematical configuration selection approach is proposed to determine the configuration parameters of the ordering service in order to achieve the maximum throughput of this system. \end{enumerate} To validate the proposed model and configuration selection approach, a running system is set up on a cloud server. Furthermore, our work has important guiding significance for the practical use of Hyperledger-based systems.
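As a toy illustration of this phase-wise viewpoint (a plain Monte-Carlo sketch, not the GSPN model proposed later; the phase names and mean delays are hypothetical), one can approximate each phase of the transaction flow by an independent exponentially distributed delay and estimate the end-to-end latency of a single uncontended transaction:

```python
import random

def simulate_latency(phase_means, n_samples=20000, seed=0):
    """Toy phase model: each transaction passes sequentially through
    phases whose delays are independent exponential random variables.
    No queueing or contention is modeled."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += sum(rng.expovariate(1.0 / m) for m in phase_means)
    return total / n_samples

# hypothetical mean delays (s): HTTP, endorsement, ordering, validation
means = [0.002, 0.010, 0.050, 0.020]
print(simulate_latency(means))  # close to sum(means) = 0.082
```

Even this crude model makes the point exploited later: the mean end-to-end latency is the sum of the per-phase means, so the slowest phase (here, ordering) dominates.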
The remainder of this paper is organized as follows. In Section II, we investigate the related work. Section III introduces a Hyperledger-based system and its transaction flow. Then an analytic model based on GSPN and a configuration selection approach to achieve the best system performance are proposed in Section IV. Next, Section V validates our model and approach by conducting extensive experiments on a running system. Finally, Section VI outlines the main conclusions. \section{Related Work} Compared with the experiment-based performance analyses mentioned before, performance modeling of Hyperledger Fabric is a more fundamental and scalable way to analyze the characteristics of this blockchain framework. To depict the system accurately, an appropriate mathematical theory is needed. As a modeling approach, Petri Nets is a formal mathematical theory with a rigorous mathematical foundation and an intuitive graphical representation. Derived from Petri Nets, the Stochastic Petri Nets (SPN) associate an exponentially distributed delay with the firing of each transition to provide a clear and intuitive formalism for generating Markov processes. Based on SPN, the Generalised Stochastic Petri Nets (GSPN) add immediate transitions and inhibitor arcs to prevent the model from becoming exceedingly large. Moreover, the Stochastic Reward Nets (SRN) introduce more primitives than GSPN to enhance the expressive ability. All those nets are widely used in system performance modeling and analysis \cite{b21,b22,b23,b24}. As for modeling Hyperledger Fabric, an SRN model of the PBFT consensus based on Hyperledger Fabric v0.6 was presented to discuss how the number of peers affects the consensus latency in \cite{b25}. Furthermore, an overall performance model of Hyperledger Fabric v1.0+ was proposed in \cite{b26}, which discussed the impact of different parameters on system performance such as latency, throughput and utilization.
The proposed model was validated by using a test framework named Hyperledger Caliper. However, these studies only analyzed the performance characteristics based on simulation; they did not provide a method to obtain appropriate configuration parameters for the system. \section{System Overview} \subsection{System Architecture} A Hyperledger-based emission trading system is proposed to solve the defects of existing centralized systems \cite{b9}. The system uses two organizations to represent the environmental agency and the trading center respectively, which share the same ledger through a single channel. By integrating the characteristics of blockchain, polluters can obtain a credible trading service through this system. As a typical implementation of Hyperledger Fabric v1.2, the architecture of the blockchain system is given in Fig. 1. \begin{figure}[htbp] \centerline{\includegraphics[width=\linewidth]{system.pdf}} \caption{System Architecture.} \label{fig1} \end{figure} This system consists of an HTTP server, a Web application and the blockchain network. The HTTP server plays the role of a Fabric client to interact with the underlying network by integrating a specific Fabric SDK. Using the RESTful APIs provided by the HTTP server, the web application can provide a variety of services for users. The blockchain network contains two distinct organizations, one channel which connects those organizations, and an orderer node to provide the ordering service. This node adopts the solo consensus protocol to guarantee the consistency of the distributed ledger. Each organization has a Fabric CA and a local peer. The Fabric CA issues certificates for participants to enforce the access control policy. The peer holds an immutable ledger based on LevelDB and installs customized chaincode which implements the business logic. Each peer in this system plays the roles of both an endorser (endorsing transactions) and a committer (holding the ledger).
In addition, all nodes run in separate Docker containers. \subsection{Transaction Flow} Requests in Hyperledger Fabric are divided into two kinds, i.e., query and invoke, depending on whether the ledger is modified. Fig. 2 depicts the transaction flow of a typical invoke-type request. A complete transaction consists of the following phases. \begin{figure}[htbp] \centerline{\includegraphics[width=\linewidth]{transaction_flow.pdf}} \caption{Transaction Flow.} \label{fig2} \end{figure} \textbf{HTTP Phase.} The client sends an HTTP request to the server, aiming to interact with the blockchain network. The HTTP server extracts the essential parameters from the request body and then constructs a transaction proposal using the provided SDK. The generated proposal is signed with the client's credentials and contains details of the specific chaincode. The proposal is then sent to an endorsing peer to endorse the transaction. \textbf{Endorsement Phase.} Any peer that has installed the chaincode can play the role of an endorsing peer. When receiving a transaction proposal, each endorsing peer executes the following tasks. Firstly, the peer verifies the identity of the submitter to check whether it is authorized to invoke the chaincode. Secondly, the peer executes the chaincode to generate the response value and the read-write set without modifying the world state. Thirdly, the endorser signs the proposal response with its identity and sends it back to the client. Finally, the client collects responses from multiple endorsers and verifies whether they satisfy the endorsement policy. The system adopts an `or' policy, so a single endorser is enough. \begin{figure*}[htbp] \centerline{\includegraphics[width=\linewidth]{petri.pdf}} \caption{GSPN Model of System.} \label{fig3} \end{figure*} \textbf{Ordering Phase.} Once the transaction is fully endorsed, the client integrated in the HTTP server broadcasts it to the ordering service.
According to the specific configuration, the ordering service orders all the transactions and packages them into blocks following the configured strategy. It then signs the blocks and delivers them to the leading peers using the gossip protocol. The critical parameters of the ordering service are listed as follows. \begin{itemize} \item \textbf{BatchTimeout:} The amount of time to wait before creating a batch. \item \textbf{MaxMessageCount:} The maximum number of messages batched into a block. \item \textbf{PreferredMaxBytes:} The maximum number of bytes allowed for the serialized messages in a block. \end{itemize} When one of the above conditions is satisfied (e.g., the number of transactions queuing in the ordering service reaches MaxMessageCount), a sequence of transactions is batched into a new block. Since transactions are of similar size, in general only BatchTimeout and MaxMessageCount need to be adjusted. \textbf{Validation \& Committing Phase.} After receiving blocks from the ordering service, the leading peers disseminate them to all the peers belonging to the same channel and organization. The peers first verify the signature of the blocks and then check all the transactions within them. If all the transactions pass the endorsement validation and the read-write set validation, the blocks are appended to the ledger and the world state is updated. \textbf{Response Phase.} By registering an event listener in the ChannelEventHub, the HTTP server receives a notification when the target transaction has been committed into a block and appended to the ledger. A registered callback function collects the details of this event and forms the response data in JSON format. \section{The GSPN-based Analytic Model of System} In this section, an analytic model for the Hyperledger-based system described in Section III is proposed. We first introduce the basic elements of GSPN and the modeling assumptions.
Then the proposed model is decomposed into multiple phases, each of which is described in detail. Finally, we present a configuration selection approach to determine the network parameters. \subsection{Preliminaries} A typical GSPN model consists of the following basic elements. \textbf{Places.} Circular nodes describe places, which represent conditions or local system states. \textbf{Transitions.} Rectangular boxes describe transitions, which represent events that occur in the system. The firing of a transition moves the system state from one place to another. \textbf{Tokens.} Black dots or numbers describe the tokens residing in places, which represent the state quantity that a place holds. \textbf{Arcs.} Arcs specify the relationship between places and transitions. The weight of an input arc represents the number of tokens consumed by the transition firing, and the weight of an output arc represents the number of tokens produced in the output place. The default weight of an arc is 1. \textbf{Immediate Transitions.} Thick bars describe immediate transitions, which represent events that are assumed to take no time. \textbf{Inhibitor Arcs.} An inhibitor arc from a place to a transition means the transition cannot fire if there is a token in the place. Based on these basic elements, a performance model for the system described in Section III can be proposed. The modeling relies on the following assumptions. \begin{itemize} \item The arrival of requests is a Poisson process. \item The size of each transaction is constant. \item The transmission latency between processing phases is negligible. \end{itemize} \subsection{Model Description} The proposed analytic model based on GSPN is shown in Fig. 3.
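The GSPN elements above (places holding tokens, timed and immediate transitions, weighted arcs) can be sketched in a few lines of code. This is only an illustrative toy, not the pntool implementation used later; all names are ours.

```python
import random
from dataclasses import dataclass

@dataclass
class Place:
    name: str
    tokens: int = 0  # number of tokens currently residing in the place

@dataclass
class Transition:
    name: str
    inputs: list          # (place, arc_weight) pairs consumed on firing
    outputs: list         # (place, arc_weight) pairs produced on firing
    rate: float = None    # exponential firing rate; None for an immediate transition

    def enabled(self):
        # enabled when every input place holds at least the arc weight in tokens
        return all(p.tokens >= w for p, w in self.inputs)

    def fire(self, rng):
        assert self.enabled()
        for p, w in self.inputs:
            p.tokens -= w
        for p, w in self.outputs:
            p.tokens += w
        # immediate transitions take no time; timed ones draw an exponential delay
        return 0.0 if self.rate is None else rng.expovariate(self.rate)

# example: T_in moves a queued request into service only when a server is idle
wait = Place("P_wait_h", tokens=3)
idle = Place("P_idle_h", tokens=1)
serve = Place("P_serve_h")
t_in = Transition("T_in", inputs=[(wait, 1), (idle, 1)], outputs=[(serve, 1)])

rng = random.Random(0)
t_in.fire(rng)
print(wait.tokens, idle.tokens, serve.tokens)  # 2 0 1
```

An inhibitor arc would add the opposite check (fire only if the place is empty); we omit it here for brevity.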
The overall model depicts the transaction flow in detail and consists of the following five phases. \textbf{HTTP Phase.} This phase represents the process by which the HTTP server receives a request and sends the transaction proposal to the endorser. It can be regarded as a fundamental processing unit. Fig. 4 shows this unit decomposed from the overall model. \begin{figure} \centerline{\includegraphics[width=\linewidth]{mm1.pdf}} \caption{Decomposed model of HTTP server.} \label{fig4} \end{figure} The meanings of the places and transitions are as follows. \begin{itemize}[leftmargin=1.5cm] \item[$T_{arr}$:] a transition that represents the arrival of a new request. \item[$P_{wait\_h}$:] a place that represents a queuing request; the number of tokens $\#(P_{wait\_h})$ denotes the queue length. \item[$P_{serve\_h}$:] a place that represents a request being processed. \item[$P_{idle\_h}$:] a place that represents an idle server; the number of tokens $\#(P_{idle\_h})$ denotes the number of idle servers. \item[$T_{in}$:] an immediate transition whose enable predicate is $\#(P_{wait\_h})>0$ \& $\#(P_{idle\_h})>0$, i.e., there are idle servers and queuing requests. \item[$P_{next}$:] a place that represents the next processing phase. \end{itemize} The performance metrics of each place in this model can be obtained by simulation. Assume that during the total simulation time $T$, the transition $T_{arr}$ fires $X$ times, i.e., place $P_{wait\_h}$ generates $X$ tokens; the firing intervals follow an exponential distribution with parameter $\lambda$. Let $\varphi_P$ denote the number of tokens that have departed from place $P$ and $\tau_{P_i}$ the stay time of the $i$-th token in place $P$. The performance metrics of place $P$ can then be derived as follows. \begin{itemize} \item Throughput: \begin{equation} \theta_P = \frac{\varphi_P}{T}. \end{equation} \item Average Latency: \begin{equation} L_P = \frac{\sum_{i}\tau_{P_i}}{\varphi_P}.
\end{equation} \item Queuing Length: \begin{equation} Q_P = \theta_P L_P = \frac{\sum_{i}\tau_{P_i}}{T}. \end{equation} \end{itemize} This processing unit describes a typical M/M/1 queuing model, whose total latency equals the queuing time plus the service time, i.e., \begin{equation} \delta_H = L_{P_{wait\_h}} + L_{P_{serve\_h}}. \end{equation} \textbf{Endorsement Phase.} This phase describes the endorsement process in the transaction flow. Since the system adopts the `or' endorsement policy, one endorser per request is enough, so there is only one fundamental processing unit in this phase. Following the above analysis, the total latency of this phase, $\delta_E$, can be derived. \textbf{Ordering Phase.} This phase describes the ordering service of the system. Because block packaging, signing and delivering are executed sequentially, a single place $P_{serve\_o}$ represents the processing of the orderer node. Different from the above phases, the arc between place $P_{wait\_o}$ and the immediate transition $T_{in}$ has weight $N$, which means the enable predicate of $T_{in}$ is $\#(P_{wait\_o})\ge N$ \& $\#(P_{idle\_o})>0$. When $T_{in}$ fires, $N$ tokens in $P_{wait\_o}$ are absorbed and one token is created in $P_{serve\_o}$. In the actual system, this weight corresponds to packing $N$ transactions into a block. Similarly, the total latency of this phase, $\delta_O$, can be derived. \textbf{Committing Phase.} When a peer node receives a new block, it performs a series of validations (e.g., MVCC) and then commits the block to its local ledger. These processes are abstracted into a single place $P_{serve\_c}$ to simplify the model. Since the two peers in the system perform the commit operation synchronously, the HTTP server listens to the events of both peers at the same time; thus, there are two parallel processing units in this phase. The total latency of this phase is determined by the unit with the larger latency, i.e.,
\begin{equation} \delta_C = \max(L_{P_{wait\_c0}} + L_{P_{serve\_c0}},\ L_{P_{wait\_c1}} + L_{P_{serve\_c1}}). \end{equation} \textbf{Response Phase.} In order to approximate the real system, an extra transition represents the network latency between the application and the HTTP server, expressed as $\delta_T=L_{P_{end}}$. The total latency of the system is then \begin{equation} \Delta=\delta_H+\delta_E+\delta_O+\delta_C+\delta_T. \end{equation} Moreover, a request has been completely handled once it arrives at the place $P_{end}$. Therefore, the throughput of the system equals the throughput of place $P_{end}$, i.e., \begin{equation} \Theta =\theta_{P_{end}}. \end{equation} Based on the proposed analytic model, the system performance metrics can be obtained conveniently through simulation once the rates of all transitions are determined. \begin{figure*}[ht] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{latency_1.pdf} \caption{$N=1$} \end{subfigure} ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{latency_2.pdf} \caption{$N=2$} \end{subfigure} ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{latency_3.pdf} \caption{$N=3$} \end{subfigure} \caption{System Latency with Different $N$.} \end{figure*} \begin{figure*} \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{throughput_1.pdf} \caption{$N=1$} \end{subfigure} ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{throughput_2.pdf} \caption{$N=2$} \end{subfigure} ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{throughput_3.pdf} \caption{$N=3$} \end{subfigure} \caption{System Throughput with Different $N$.} \end{figure*} \subsection{Network Parameter Determination} The most time-consuming operations in the ordering phase are signing and delivering the blocks \cite{b18}.
It is feasible to reduce the block generation rate by adjusting the configuration of the ordering service, which guarantees that the ordering phase is not the performance bottleneck. Moreover, the experimental results in \cite{b24} indicate that at a high request arrival rate, the performance bottleneck of the Hyperledger Fabric framework lies in either the endorsement phase or the committing phase (we verify this in the next section). In addition, the number of transactions contained in a block (i.e., the block size) has a great influence on system performance. Under this premise, we can discuss the impact of the different parameters of the ordering service in order to find the configuration that optimizes system performance (i.e., makes the system achieve the maximum throughput). The overall throughput is selected as the performance indicator rather than the overall latency, for the following reasons. \begin{itemize} \item Many related studies indicate that throughput is more important than latency in a Hyperledger-based system \cite{b18,b24}. \item As the request arrival rate increases, the system latency can grow without bound while the throughput reaches a saturation point. \item The overall latency of a Hyperledger-based system is greatly affected by the network configuration parameters and the request arrival rate. The latency obtained from experiments therefore does not represent the real processing time, because it contains uncertain waiting time induced by these parameters. The throughput, in contrast, is insensitive to the request arrival rate once the rate is large enough to reach the saturation point. \end{itemize} The proposed GSPN-based analytic model indicates that the blockchain system is composed of multiple successive M/M/1 queues. Thus, the system throughput equals the lowest throughput among the phases.
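This min-rate behavior of queues in tandem can be checked with a small event-driven simulation. The sketch below is ours, with illustrative service rates loosely inspired by the measured phase rates, not the calibrated model: once the arrival rate exceeds every service rate, the completion rate converges to the slowest stage.

```python
import random

def tandem_throughput(arrival_rate, service_rates, n_requests=20000, seed=1):
    """Simulate M/M/1 queues in series and return completions per unit time."""
    rng = random.Random(seed)
    # arrival times at the first queue (Poisson process)
    t, arrivals = 0.0, []
    for _ in range(n_requests):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)
    # each stage is a FIFO server: departure = max(arrival, server free) + service
    for mu in service_rates:
        free_at, departures = 0.0, []
        for a in arrivals:
            start = max(a, free_at)
            free_at = start + rng.expovariate(mu)
            departures.append(free_at)
        arrivals = departures  # departures feed the next stage
    return n_requests / arrivals[-1]

# hypothetical per-phase service rates (requests/s); the middle stage dominates
rates = [500.0, 143.0, 83.0]
print(tandem_throughput(arrival_rate=1000.0, service_rates=rates))
# saturated throughput is close to min(rates) = 83
```

The same reasoning underlies the parameter selection below: raising the effective committing rate moves the bottleneck to the next-slowest phase.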
Clearly, different values of $N$ in the model greatly affect the arrival rate of the committing phase, which in turn affects the system performance. Considering the ordering service configuration described earlier, the number of transactions within a single block is determined by the packaging strategy. Let $\lambda$ denote the arrival rate of requests, $t$ the BatchTimeout, $n$ the MaxMessageCount, $\mu_e$ the service rate of endorsement and $\mu_c$ the service rate of committing, where $\mu_c$ is determined by $N$ through a function $f(N)$. Our goal is to find appropriate $n$ and $t$ that maximize the system throughput, i.e., \begin{equation} \Theta_{max} =\max\min(N f(N),\mu_e), \end{equation} where $N$ is determined by $n$ and $t$ through a function $g(n,t)$, i.e., \begin{equation} g(n,t)= \begin{cases} n, & \lambda \ge \frac{n}{t},\\ \lfloor t\lambda \rfloor + 1, & 0 < \lambda < \frac{n}{t}. \end{cases}\label{n} \end{equation} \begin{figure*} \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{queue_1.pdf} \caption{$N=1$} \end{subfigure} ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{queue_3.pdf} \caption{$N=3$} \end{subfigure} ~ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{queue_5.pdf} \caption{$N=5$} \end{subfigure} \caption{Queuing Length with Different $N$.} \end{figure*} \begin{figure*} \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{phase_latency_1.pdf} \caption{$N=1$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{phase_latency_3.pdf} \caption{$N=3$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{phase_latency_5.pdf} \caption{$N=5$} \end{subfigure} \caption{Average Latency of Each Phase with Different $N$.} \end{figure*} It is clear that we can adjust $N$ so that the throughput reaches the maximum value $\mu_e$, provided that
$\lambda \ge \mu_e$. Since $\lambda$ is uncertain in practical use, it is inappropriate to determine the configuration parameters based on it, so it is better to fix it at a constant value. The $n$ and $t$ obtained under the premise $\lambda=\mu_e$ still ensure that the system throughput reaches $\mu_e$ when $\lambda>\mu_e$; thus \eqref{n} can be simplified by taking $\lambda=\mu_e$. Moreover, $n$ and $t$ can be considered separately, which removes the impact of $\lambda$ and guarantees that $\mu_e$ is always the smallest term in the formula, so that $t$ cannot become the bottleneck (e.g., when $t$ and $\lambda$ are small, the number of transactions arriving within $t$ may be less than $n$, which limits $N$). Therefore, \begin{equation} \begin{split} \Theta_{max} &=\max\min(\mu_e, n f(n),\\ &(\lfloor t\mu_e \rfloor + 1) f(\lfloor t\mu_e \rfloor + 1)).\label{max} \end{split} \end{equation} Then, once $\mu_e$ and $f(N)$ are determined, the appropriate configuration parameters can be obtained to achieve the maximum throughput of the system. \section{Performance Evaluation \& Optimization} \subsection{System Configuration} To validate the proposed model, the system described in Section III is set up on an Aliyun cloud server with 4 Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz processors and 8GB RAM, running Ubuntu 14.04 (64 bit). Various experiments are conducted on this running system by sending HTTP requests. Simulating the GSPN-based model first requires determining the firing times of all transitions. For the HTTP and endorsement phases, the latency can be measured by printing timestamps at specific code points in the HTTP server. For the ordering and committing phases, the timing information can be extracted from log files after adjusting the log level of the relevant Docker containers from INFO to DEBUG. Finally, for the response phase, an extra empty interface is integrated in the HTTP server to obtain the end-to-end latency.
The average latencies of all phases are summarized in Table I. \begin{table}[ht] \centering \setlength{\tabcolsep}{4.2mm}% \caption{Model Parameters.} \label{Model Parameters} \begin{tabular}{|c|c|c|c|c|c|} \hline Transition & T$_h$ & T$_e$ & T$_o$ & T$_c$ & T$_n$ \\ \hline Latency (ms) & 2 & 7 & 12 & 27 & 10 \\ \hline Rate (s$^{-1}$) & 500 & 143 & 83 & 37 & 100 \\ \hline \end{tabular} \end{table} \subsection{Model Validation} Based on the running system, Locust\footnote{https://www.locust.io/} is used to evaluate the overall latency and throughput of the HTTP interfaces under increasing request arrival rates and different values of $N$. The throughput is measured in Requests Per Second (RPS). Moreover, a Matlab toolbox named pntool\footnote{http://www.pntool.ac.tuiasi.ro/} is used to simulate the GSPN-based model under the same conditions. This tool provides the performance metrics (e.g., latency, throughput, queuing length) for all the transitions and places of the model. By analyzing the simulation results, the overall latency and throughput can be determined. Fig. 5 and Fig. 6 compare the experimental results with the simulation results. The experimental results agree well with the simulation results, showing that the proposed model describes the actual system accurately.
Furthermore, the overall latency increases rapidly once the throughput reaches the saturation point, and the maximum throughput improves greatly as $N$ increases, which indicates that the configuration parameter $N$ has a great influence on system performance. \subsection{System Bottleneck Analysis} System bottleneck analysis is significant for performance improvement. For a Hyperledger-based system, once the bottleneck (i.e., the endorsement, ordering or committing phase) is identified, the optimization goal is determined. It is very hard to obtain the performance metrics of each processing phase through experiments, as the official SDK does not provide corresponding APIs for developers. Thus, we analyze the system bottleneck based on the proposed GSPN model, which has been validated above. Because each place in this model represents a specific processing phase, performance metrics for each phase, such as the average latency, can be obtained by simulating the model. We take the average latency and queuing length of each phase (i.e., of places such as $P_{wait\_h}$) as metrics of system busyness; longer queuing lengths and latencies indicate worse performance. Fig. 7 shows the average queuing length of the different phases for different values of $N$, i.e., 1, 3 and 5. When $N$ is small, the queuing length of the committing phase increases significantly while the other two curves remain stable. However, when $N=5$, the queuing length of the endorsement phase increases rapidly. This is because increasing $N$ is equivalent to decreasing the arrival rate of the committing phase. When $N$ is large enough, the service rate of the committing phase exceeds the request arrival rate while that of the endorsement phase falls below it. Fig. 8 shows the average latency of the three major processing phases for different $N$, and the results match the previous analysis.
Therefore, the committing phase is the system bottleneck for small $N$, while the endorsement phase becomes the bottleneck for large $N$. Determining a suitable value of $N$ is thus an effective way to improve performance in the practical use of a Hyperledger-based system. \subsection{Performance Optimization} The performance metrics in Fig. 6 show that the value of $N$ has a great influence on the maximum throughput of the Hyperledger-based system. In Section IV, a mathematical method was proposed to determine the configuration parameters of the ordering service so as to achieve the maximum throughput. In light of the bottleneck analysis in the previous subsection, the proposed method is in effect an approach to improve the performance of the committing phase. According to \eqref{max}, once the function $f(N)$ is determined, the appropriate $n$ and $t$ can be obtained; at that point, the system bottleneck is the endorsement phase. We use curve fitting to approximate $f(N)$. The experimental data and the fitting result are shown in Fig. 9. The latency of the committing phase is linear in $N$ within a certain range, with \begin{equation} h(N)=1000/f(N)=25.06+1.57N. \end{equation} Therefore, the maximum throughput of the system is \begin{equation} \Theta_{max}=\max\min(\mu_e, \frac{1000}{\frac{25.06}{n}+1.57}, \frac{1000}{\frac{25.06}{\lfloor t\mu_e \rfloor + 1}+1.57}). \end{equation} Solving (12) yields \begin{equation} \Theta_{max}=\mu_e, \end{equation} \begin{equation} n \ge \lceil \frac{25.06\mu_e}{1000-1.57\mu_e} \rceil, \quad t \ge \frac{26.63\mu_e - 1000}{\mu_e (1000-1.57\mu_e)}. \end{equation} According to the results in Table I, $\mu_e=143$. The theoretical results therefore show that the system can reach a maximum throughput of 143 RPS when $n \ge 5$ and $t \ge 0.026$ (s).
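The thresholds above can be reproduced numerically from the fitted latency model. The sketch below is ours; the constants 25.06 and 1.57 are the fitted values quoted in the text, and the block-size rate $N f(N)$ is the effective committing throughput in transactions per second.

```python
import math

# fitted committing-phase latency (ms): h(N) = 1000/f(N) = 25.06 + 1.57*N,
# so the effective committing throughput is N*f(N) = 1000*N/(25.06 + 1.57*N).
def commit_throughput(N):
    return 1000.0 * N / (25.06 + 1.57 * N)

mu_e = 143.0  # endorsement service rate from Table I (requests/s)

# smallest n with n*f(n) >= mu_e, i.e. n >= 25.06*mu_e / (1000 - 1.57*mu_e)
n_min = math.ceil(25.06 * mu_e / (1000.0 - 1.57 * mu_e))

# matching lower bound on the BatchTimeout t (seconds)
t_min = (26.63 * mu_e - 1000.0) / (mu_e * (1000.0 - 1.57 * mu_e))

print(n_min, round(t_min, 3))  # 5 0.025  (~0.0253 s, i.e. the ~0.026 s in the text)
```

With these values, `commit_throughput(5)` is about 152 requests/s, safely above $\mu_e = 143$, so the endorsement phase is the remaining bottleneck.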
Note that $t$ is determined under the premise $\lambda=\mu_e$; for larger $\lambda$, the impact of $t$ can be ignored. Nevertheless, restricting the range of $t$ prevents $t$ from becoming the bottleneck. In general, it is not advisable to assign $n$ and $t$ large values in a practical environment, because large values lead to poor performance at low arrival rates $\lambda$. To validate the theoretical results above, a series of experiments has been conducted on the running system. For each $N$, we gradually increased the arrival rate in Locust until the throughput reached the saturation point. Fig. 10 depicts the relationship between $N$ and the maximum system throughput and compares the simulation and experimental results. Our goal is to determine the maximum throughput and the corresponding $N$. The experimental results show that when $N$ is larger than 5, the maximum throughput plateaus at around 145 RPS. Thus, continuing to increase $N$ beyond 5 does not significantly improve the throughput, which is in line with the theoretical results (i.e., 143 RPS when $n \ge 5$ and $t \ge 0.026$ (s)). Therefore, our proposed approach can determine the appropriate configuration of the ordering service. For other Hyperledger-based systems with different underlying networks, the proposed analytic method can also be applied to achieve a higher system throughput. The steps to determine the configuration parameters are as follows: \begin{itemize} \item Step 1: Obtain the service rate of the endorsement phase ($\mu_e$) and the committing phase ($\mu_c$) by conducting experiments on the system. \item Step 2: Determine which processing phase of the transaction flow is the system bottleneck (the endorsement phase in our case). \item Step 3: Calculate $t$ (BatchTimeout) and $n$ (MaxMessageCount) by the proposed formula. \item Step 4: Confirm the conclusion through experiments.
\end{itemize} This analysis method can also be applied to more complex systems with more organizations and channels. The only difference lies in how to obtain $\mu_e$ and $\mu_c$, because the number of channels and peers affects the service rates. Thus, even though the system in this paper contains only two organizations and one channel, the proposed analysis method is scalable and general. \begin{figure}[htbp] \centerline{\includegraphics[width=\linewidth]{curve.pdf}} \caption{Latency of Committing Phase with Different $N$.} \label{fig9} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[width=\linewidth]{max_throughput.pdf}} \caption{Maximum throughput with Different $N$.} \label{fig10} \end{figure} \section{Conclusion} In this paper, we have proposed a GSPN model for a blockchain system based on Hyperledger Fabric v1.2. The model depicts the transaction flow of Hyperledger Fabric in detail and aims to evaluate system performance. Extensive experiments have been conducted on a running system to validate the model. The results show that the number of transactions in a block significantly affects system performance. Based on this model, we have analyzed the performance bottleneck under different configurations of the ordering service. As the number of transactions in a block increases, the system bottleneck shifts from the committing phase to the endorsement phase. In addition, we have presented a configuration selection approach to determine the configuration parameters of the ordering service so as to achieve the maximum throughput. Our results show that when the number of transactions in a block is at least 5 and the batch timeout is at least 0.026 seconds, the throughput can reach 143 RPS, which is in line with the real system. Furthermore, the conclusions of this paper are instructive for future work. Our next plan is to further improve the overall system performance by optimizing the endorsement phase.
For example, adding more endorsers to each organization to load-balance the endorsement workload is an effective approach. \section*{Acknowledgment} This work is supported by the Chinese National Natural Science Foundation (61671089) and the China Unicom Network Technology Research Institute.
\section{Introduction} Her X-1 is an extensively-studied accreting X-ray pulsar, discovered with the \textit{UHURU} satellite \citep{tananbaum72}. The pulsar has a spin period of $1.24$ s and resides in a binary system with an orbital period of 1.7 days \citep{leahy14}. Her X-1 was the first accreting neutron star (NS) in which a cyclotron line was discovered \citep{trumper78}, at an energy of $\sim 40$ keV. Although this energy varies with time and X-ray flux \citep[e.g.][]{staubert16}, it provides a direct measurement of the pulsar magnetic field of a few times $10^{12}$ G. Her X-1 shows peculiar X-ray variability over a $35$-day cycle, originating from the precession of a warped accretion disk \citep[][see also Fig. \ref{fig:lc}]{scott99}: the cycle starts in the bright Main High (MH) state, offering an unobscured view of the central X-ray source. This is followed by the Low State (LS), where the X-ray flux drops by $\sim 99\%$ and only reflection off the face of the companion and an accretion disk corona are visible \citep{abdallah15}. The LS is interspersed with the Short High (SH) state, which reaches a few tens of percent of the original MH-state flux. This variability is geometric, and the central X-ray source does not vary intrinsically on the $35$-day timescale. The companion star of Her X-1 has a mass of $2.2M_{\odot}$ \citep{reynolds97,leahy14}. At the simplest level, accreting NSs are classified by donor star mass into low-mass X-ray binaries (LMXBs, $\lesssim 1 M_{\odot}$), high-mass X-ray binaries (HMXBs, $\gtrsim 10 M_{\odot}$) and the rare intermediate-mass X-ray binaries (IMXBs) in between. Her X-1 is an IMXB but combines characteristics of both other classes: as in LMXBs, it accretes through Roche-lobe overflow and an accretion disk \citep{scott99}, while most HMXBs accrete from the wind or circumstellar disk of the donor.
In addition, Her X-1 has the strong magnetic field and slow spin that are typically seen in HMXBs; NS LMXBs instead tend to have weaker magnetic fields of $B \lesssim 10^9$ G and, if pulsations are seen, these are typically at millisecond periods \citep{patruno12}. Another observational difference between HMXBs and LMXBs is the presence of radio emission and inferred jets. LMXBs very commonly show synchrotron emission from jets, which is correlated with the X-ray emission from the accretion flow \citep[][]{migliari06,tudor17,gusinskaia17}, similar to accreting black holes \citep[BHs;][]{merloni03,falcke04}. In contrast, among NS HMXBs jets have only been observed in Cir X-1, a young NS that might have a high-mass donor \citep{johnston16}. As jet formation is still poorly understood, it is unclear which properties of NS LMXBs and HMXBs could explain this apparent systematic difference: the spin period, the magnetic field, or the presence of an accretion disk might all play a vital role. As Her X-1 shares characteristics of both classes, it can help us understand the difference between their jet-launching abilities. In this Letter, we present the discovery of radio emission from Her X-1. We present the observations and results in Sections 2 and 3, and afterwards discuss the origin of the emission and the implications for our understanding of jet formation in LMXBs and HMXBs. We also consider the possibilities for future observations. \section{Observations} \subsection{Radio} We observed Her X-1 with the Karl G. Jansky Very Large Array (VLA) on 06 June 2013 (MJD 56449) from 02:12:01 to 03:26:38 UT, for a total of $\sim 54$ minutes of on-source observing time (project ID: 13A-352, PI: Degenaar). We observed the target in X-band between $8$ and $10$ GHz in two basebands, while the array was in C-configuration, yielding a synthesized beam of $3.24"\times1.8"$ (position angle $8.57^{\rm o}$).
We used J1331+305 and J1635+3808 ($5.3^{\rm o}$ from the target) as the primary and secondary calibrators, respectively. The observation was calibrated and imaged following standard procedures with the Common Astronomy Software Applications package (CASA) v4.7.2 \citep{mcmullin07}. We did not encounter any significant RFI or calibration issues. Using CASA's multi-frequency, multi-scale \textsc{clean} task, we imaged Stokes I and V to make a source model of the field. With Briggs weighting and the robustness parameter set to $0$ to balance sensitivity and resolution, we reached an RMS noise of $4.8$ $\mu$Jy\,beam$^{-1}$. We fit a point source in the image plane by forcing the fit of an elliptical Gaussian with the FWHM and orientation of the synthesized beam. We also imaged the $8$--$9$ and $9$--$10$ GHz basebands individually with the same approach as the full band. As we did not observe a polarization calibrator, beam squint can affect our circular polarization estimates away from the pointing centre by a few percent. \subsection{X-rays} We examined the X-ray properties of Her X-1 during the VLA epoch in order to obtain a simultaneous X-ray flux and determine the source's phase in the $35$-day precession cycle. To measure the X-ray flux, we extracted the \textit{MAXI}/Gas Slit Camera \citep[GSC;][]{matsuoka09} spectrum for the MJD of the VLA observation from the \textit{MAXI} website (\href{http://maxi.riken.jp}{http://maxi.riken.jp}). We extracted the spectrum for the full MJD to ensure a sufficient number of counts for a basic characterisation of the spectrum. We also obtained the \textit{MAXI}/GSC and \textit{Swift}/Burst Alert Telescope \citep[BAT;][]{krimm13} long-term X-ray lightcurves of Her X-1. Fig. \ref{fig:lc} shows the \textit{MAXI} and \textit{Swift} light curves, clearly showing the $35$-day cycle and revealing that Her X-1 was in the first low state of its precession cycle.
Finally, we also downloaded the \textit{MAXI} spectrum on MJD 56437, the peak of the prior MH state, to estimate the unobscured X-ray flux. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{paper_herx-1.pdf} \caption{\textit{MAXI}/GSC and \textit{Swift}/BAT daily X-ray lightcurves of Her X-1 around the radio epoch on MJD 56449. The $35$-day cyclic variability, due to precession of the warped accretion disk, is clearly visible. The VLA epoch is shown by the dashed line.} \label{fig:lc} \end{center} \end{figure} \section{Results} \subsection{Radio} Her X-1 is detected at a flux density $S_{\nu} = 38.7 \pm 4.8$ $\mu$Jy at $9$ GHz, with a significance of $8\sigma$. A zoom of the target field is shown in Fig. \ref{fig:image}. For a distance of $d = 6.1$ kpc \citep[e.g.][]{leahy14} and defining the radio luminosity $L_R = 4\pi\nu S_{\nu}d^2$, this corresponds to $L_R = (1.6 \pm 0.2)\times10^{28}$ erg s$^{-1}$. The source is also detected in the $8$--$9$ and $9$--$10$ GHz bands separately, at $42.2 \pm 6.8$ and $36.2 \pm 6.8$ $\mu$Jy, respectively. However, the low significance means that the radio spectral index is poorly constrained: $\alpha = -0.7 \pm 5.3$, where $S_{\nu} \propto \nu^{\alpha}$. We measured a position of $\rm RA = 16^{\rm h}57^{\rm m}49^{\rm s}.792\pm0^{\rm s}.027$ and $\rm Dec = +35^{\rm o}20'32".578 \pm 0".225$, where the uncertainties equal the synthesized beam size divided by the signal-to-noise of the detection. This position is consistent within the $1\sigma$ errors with the best known position of Her X-1, from the infrared 2MASS survey \citep{skrutskie06}, which is shown in Fig. \ref{fig:image} as well, and with the lower-accuracy positions at other wavelengths. Hence, this is unlikely to be a background source. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{paperplot.png} \caption{VLA image of Her X-1 at $9$ GHz. The black cross indicates the best known position, from the infrared 2MASS survey.
In the bottom left corner, we show the half-power contour of the synthesized beam.} \label{fig:image} \end{center} \end{figure} \subsection{X-rays} We fit the two downloaded \textit{MAXI} $2$--$20$ keV spectra of Her X-1 to determine the flux on the MJD of the radio observation and at the height of the previous MH state. As the latter is only a short ($\sim 120$ s) exposure, both spectra contain few photons ($\sim 145$ and $40$ photons, respectively) and are only suitable for a very simple fit. In both cases, we used \textsc{xspec} to fit an absorbed (\textsc{tbabs}) blackbody (\textsc{bbodyrad}) spectrum. We fix the $N_H$ in both cases, as the data quality is not sufficient to determine it directly. We set $N_H = 1.0\times10^{22}$ cm$^{-2}$ for the obscured LS \citep{inam05} and $N_H = 1.7\times10^{20}$ cm$^{-2}$ in the HS \citep{furst13}. This yields $0.5$--$10$ keV X-ray fluxes of $\sim 9\times10^{-11}$ erg s$^{-1}$ cm$^{-2}$ during the radio observation and $\sim 3\times10^{-9}$ erg s$^{-1}$ cm$^{-2}$ during the MH state. The latter is slightly lower than the typical range of X-ray fluxes of Her X-1 in the MH state of $5\times10^{-9}$ to $10^{-8}$ erg s$^{-1}$ cm$^{-2}$ \citep{staubert16}. This difference might be due to the short exposure of the \textit{MAXI} spectrum, combined with the dips often seen during the MH state \citep{igna11}. \section{Discussion} \label{sec:discussion} We present the first radio detection of the IMXB Her X-1, at a flux density of $S_{\nu} = 38.7 \pm 4.8$ $\mu$Jy. Her X-1 has been the subject of multiple radio searches, but, similar to most NS HMXBs, it had hitherto never been detected. \citet{coe80} observed Her X-1 every day of an entire $35$-day precession cycle. The source was not detected in any of the observations, with $3\sigma$ upper limits of $9$ mJy. \citet{nelson88} observed Her X-1 twice in a large sample study of X-ray binaries and cataclysmic variables, reaching a $5\sigma$ upper limit of $1.3$ mJy.
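For reference, the flux-density-to-luminosity conversion quoted in the Results ($L_R = 4\pi\nu S_{\nu}d^2$ with $d = 6.1$ kpc) can be reproduced numerically. The sketch below is ours, not part of the original analysis; the constants are standard cgs values:

```python
import math

JY_CGS = 1e-23     # 1 Jy in erg s^-1 cm^-2 Hz^-1
KPC_CM = 3.086e21  # 1 kpc in cm

def radio_luminosity(s_nu_ujy, nu_ghz, d_kpc):
    """L_R = 4*pi*nu*S_nu*d^2 in erg/s, for a flux density in micro-Jy."""
    s_nu = s_nu_ujy * 1e-6 * JY_CGS
    d = d_kpc * KPC_CM
    return 4.0 * math.pi * (nu_ghz * 1e9) * s_nu * d**2

# Her X-1: 38.7 uJy at 9 GHz, d = 6.1 kpc
L_R = radio_luminosity(38.7, 9.0, 6.1)
print(f"L_R ~ {L_R:.1e} erg/s")  # ~1.6e28 erg/s, as quoted in the text
```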
In this discussion, we will first compare the radio properties of Her X-1 with different classes of accreting NSs. Subsequently, we will discuss the origin of the radio emission and implications for future research. \subsection{Comparison with NS LMXBs and HMXBs} Radio detections and jets are ubiquitous in disk-accreting, weak magnetic field NS LMXBs \citep[][]{migliari06,tudor17,gusinskaia17}. While $L_X$ and $L_R$ do appear to be related for these types of NS systems, no universal relation has emerged \citep{tudor17}. Most relevant for the comparison with Her X-1 are the handful of LMXBs containing a slow pulsar. With the exception of one (see below), none of these sources have been detected in the radio. 2A 1822-371 and 4U 1626-67 have unconstraining upper limits on their radio flux of $200$ $\mu$Jy \citep{fender00}. GRO 1744-28 (The Bursting Pulsar) does have deep \textit{ATCA} upper limits during its small 2017 outburst \citep{russell17}. Finally, for the mildly recycled $11$-Hz pulsar IGR J1748-2466 no radio upper limits are known. As stated, a single slow pulsar in an LMXB has been detected: the symbiotic X-ray binary GX 1+4 was recently discovered in radio ({\color{blue} Van den Eijnden et al., submitted}). In this type of source, the NS accretes from the stellar wind of an evolved low-mass companion. The origin of the radio emission in GX 1+4 cannot be unambiguously inferred. Other symbiotic X-ray binaries have not been targeted by radio campaigns and it is thus unknown whether radio emission occurs in more sources of this type. Given the current upper limits or lack of observations, new, deep radio observations are needed to infer whether Her X-1 is an outlier among slow pulsars in LMXBs or whether radio emission occurs more commonly among such sources. Her X-1 also shares characteristics with many of the NS HMXBs: a strong magnetic field ($\gtrsim 10^{12}$ G) and a slow spin.
Radio detections of HMXBs are relatively rare \citep{duldig79,nelson88,fender00,migliari11b}: the NS Cir X-1 launches resolved radio jets \citep{tudose06}, and is likely an HMXB (see \citealt{johnston16} for a recent discussion). However, no magnetic field estimate is known for Cir X-1. Two other NS HMXBs have been detected in radio. Most notably, the wind-accreting HMXB GX 301-2 was detected over multiple radio epochs \citep{pestalozzi09}. However, the flux levels were consistent with those expected from the stellar wind, and the claimed transient outflow component in the emission has not been confirmed \citep{migliari11b}. Additionally, the Be/X-ray binary A 1118-61, consisting of a NS and a Be companion, was detected in only one out of eight observations by \citet{duldig79}. Due to the crowded field, this detection might not be related to the Be/X-ray binary. Both these (possible) detections are thus not conclusive about the presence of a jet. There exist numerous radio non-detections of NS HMXBs. However, for most of these sources, the radio upper limits \citep[ranging from hundreds of $\mu$Jy to mJy levels;][]{duldig79,nelson88,fender00} are not constraining compared with NS LMXBs, and deep observations with current-generation radio telescopes might reveal these sources. Only the Be/X-ray binaries A 0535+26 \citep{tudose10} and X Per, and the wind-accreting NS HMXB 4U 2206+54 \citep{migliari06} have radio luminosity upper limits of $\lesssim 5\times10^{27}$ erg s$^{-1}$, below our Her X-1 measurement. However, these sources were observed at much lower (more than an order of magnitude) X-ray luminosity, making any direct comparison with Her X-1 difficult. \subsection{The emission mechanism and physical origin} Three radio emission mechanisms are relatively unlikely to explain our detection of Her X-1. Firstly, thermal emission would require too high a density of emitting material on too large a scale.
Secondly, we imaged Stokes V in addition to Stokes I and did not detect the target, setting a $3\sigma$ upper limit on the circular polarization of $37\%$. Coherent emission should be highly circularly polarized and can thus be excluded. Finally, free-free emission from a strong stellar wind, as observed in the HMXB GX 301-2 \citep{pestalozzi09}, is unlikely: while a wind might be present \citep{leahy15}, its strength implies a flux density over two orders of magnitude lower than our detection \citep{wright75}. In contrast, synchrotron emission is consistent with the observed radio properties. In the following, we will discuss possible physical origins of such synchrotron emission. Firstly, synchrotron-emitting shocks could occur in the interaction between the disk and the magnetosphere or in the accretion column onto the magnetic poles. However, the Compton limit on the brightness temperature of $10^{12}$ K sets a lower limit on the size of the emitting region of $\gtrsim 7.5\times10^4$ km. We can estimate the size of the magnetosphere $R_{\rm m}$ by rewriting equation 1 from \citet{cackett09}: \begin{equation} \begin{split} R_{\rm m} = k& \left(\frac{B}{1.2\times10^5 \rm G}\right)^{4/7} \left(\frac{f_{\rm ang}}{\eta}\frac{F_{\rm bol}}{10^{-9} \rm erg~s^{-1}~cm^{-2}}\right)^{-4/14} \\ &\left(\frac{M}{1.4M_{\odot}}\right)^{-8/7} \left(\frac{R}{10\rm km}\right)^{-12/7} \left(\frac{D}{5 \rm kpc}\right)^{-4/7} \text{ } R_g \end{split} \end{equation} where $k$ is a geometry factor relating spherical and disk accretion, typically assumed to be $0.5$ for disk accretion, $B$ is the magnetic field strength, $f_{\rm ang}$ is the anisotropy correction factor, $\eta$ is the accretion efficiency, $F_{\rm bol}$ is the bolometric flux, and $M$, $R$ and $D$ are the mass, radius and distance of the NS. We use $B \sim 3\times10^{12}$ G \citep{staubert16}, $k=0.5$, $f_{\rm ang}=1$, $\eta=0.1$, $M=1.4M_{\odot}$, $R=10$ km \citep{leahy04} and $D=6.1$ kpc \citep{leahy14}.
As $R_m$ scales inversely with flux, the maximum magnetospheric size can be estimated with the LS $2$--$10$ keV X-ray flux without a bolometric correction (i.e. $9\times10^{-11}$ erg s$^{-1}$ cm$^{-2}$): this yields $R_m \approx 1.7\times10^4$ km, smaller than the minimum emission region size. As the low flux during the LS of Her X-1 originates from a geometric effect, it is actually more accurate to use the MH state bolometric flux; for the measured MH state flux of $3\times10^{-9}$ erg s$^{-1}$ cm$^{-2}$, $R_m$ is even smaller at $\sim 0.7\times10^4$ km. Hence, shocks can be excluded as well, assuming that they indeed occur at the magnetosphere and not further out in the accretion flow. Another possibility is that we observe a propeller-driven outflow: if the magnetosphere spins faster than the disk where the magnetic and gas pressure are equal, it creates a centrifugal barrier that can either trap the disk \citep{dangelo10} or expel the material \citep{illarionov75,campana98}. The latter has, for instance, recently been inferred through X-ray monitoring in several strong magnetic field accreting NSs \citep[e.g. two Be/X-ray binaries;][]{tsygankov16b} and might explain the recent radio detection of GX 1+4 ({\color{blue} Van den Eijnden et al., submitted}). For a given NS magnetic field and spin period, one can estimate the maximum $L_X$ for which the magnetosphere can still create this centrifugal barrier as \citep[e.g.][]{campana02}: \begin{equation} \begin{split} L_{X,\rm max} \approx 4\times10^{37} &k^{7/2} \left(\frac{B}{10^{12} \rm G}\right)^{2} \left(\frac{P}{1\rm s}\right)^{-7/3} \\ &\left(\frac{M}{1.4M_{\odot}}\right)^{-2/3} \left(\frac{R}{10 \rm km}\right)^{5} \text{ } \rm erg~s^{-1} \end{split} \label{eq:prop} \end{equation} where $P$ is the pulsar spin and all other parameters are already defined.
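Both scaling relations used above can be evaluated with a short script; the sketch below does so for the parameter values adopted in the text. It is ours, not part of the original analysis: the helper names are invented, the constants are standard cgs values, and $R_g = GM/c^2$:

```python
import math

G, C = 6.674e-8, 2.998e10          # cgs gravitational constant, speed of light
MSUN, KPC_CM = 1.989e33, 3.086e21  # solar mass (g), kpc (cm)

def r_m_km(F_bol, B=3e12, k=0.5, f_ang=1.0, eta=0.1, M=1.4, R=10.0, D=6.1):
    """Magnetospheric radius (km) from the rewritten Cackett et al. (2009) relation."""
    r_g_km = G * (M * MSUN) / C**2 / 1e5
    return (k * (B / 1.2e5)**(4 / 7)
            * ((f_ang / eta) * (F_bol / 1e-9))**(-2 / 7)
            * (M / 1.4)**(-8 / 7) * (R / 10.0)**(-12 / 7)
            * (D / 5.0)**(-4 / 7) * r_g_km)

def l_x_max(B=3e12, P=1.24, k=0.5, M=1.4, R=10.0):
    """Maximum X-ray luminosity (erg/s) allowing a propeller, Eq. (2)."""
    return (4e37 * k**3.5 * (B / 1e12)**2 * P**(-7 / 3)
            * (M / 1.4)**(-2 / 3) * (R / 10.0)**5)

def l_x(F, D=6.1):
    """Isotropic X-ray luminosity (erg/s) from a flux F (erg/s/cm^2)."""
    return 4.0 * math.pi * (D * KPC_CM)**2 * F

print(f"R_m (LS flux)  ~ {r_m_km(9e-11):.1e} km")   # ~1.6e4 km
print(f"R_m (MH flux)  ~ {r_m_km(3e-9):.1e} km")    # ~6e3 km
print(f"L_X,max        ~ {l_x_max():.1e} erg/s")    # ~1.9e37 erg/s
print(f"L_X (LS)       ~ {l_x(9e-11):.1e} erg/s")   # ~4e35 erg/s
```

These reproduce, to within rounding of the adopted constants, the $\sim 1.7\times10^4$ km and $\sim 0.7\times10^4$ km magnetospheric radii, the $\approx 1.9\times10^{37}$ erg s$^{-1}$ propeller limit, and the $\approx 4\times10^{35}$ erg s$^{-1}$ LS luminosity quoted here and in the next paragraph.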
For a magnetic field of $\sim 3\times 10^{12}$ G, a spin period of $1.24$ s and standard NS parameters, we estimate $L_{X,\rm max} \approx 1.9\times10^{37}$ erg s$^{-1}$ for Her X-1. To assess whether a magnetic propeller could be at play in Her X-1, we need to compare this maximum X-ray luminosity with the correct $L_X$ of Her X-1. During the LS radio epoch, $L_X \approx 4\times10^{35}$ erg s$^{-1}$, comfortably below the upper limit for the propeller effect. However, the actual, unobscured X-ray luminosity is the more accurate probe of the relevant physical properties (i.e. the mass accretion rate that balances the magnetic pressure). During the prior MH state, Her X-1 reached $L_X \approx 1.2\times10^{37}$ erg s$^{-1}$ between $2$--$10$ keV. With a bolometric correction, the X-ray luminosity of Her X-1's MH state typically reaches $(2.5-5)\times10^{37}$ erg s$^{-1}$ \citep{staubert16}. This is of comparable magnitude to the $L_{X,\rm max}$ estimate, although it should be noted that not every MH state reaches the same luminosity \citep[e.g.][]{staubert16} and Eq. \ref{eq:prop} is merely an estimate. However, in other accreting NSs propellers have been linked to a simultaneous decrease in X-ray flux and pulsation strength, and such behaviour has not been observed in Her X-1 during its regular $35$-day cycle. Finally, we might observe a compact, synchrotron-emitting radio jet, similar to those seen in NS LMXBs with weaker ($\lesssim 10^9$ G) magnetic fields. If we compare Her X-1's radio properties with the NS LMXB sample in the $L_X$/$L_R$ diagram \citep[see e.g.][for recent versions]{tudor17,gusinskaia17}, we see that these are consistent with several AMXP observations if we assume the LS flux. While this comparison is interesting, as AMXPs have $\sim3-4$ orders of magnitude weaker magnetic fields and faster spins, we should actually again use the estimated unobscured (i.e. MH state) flux.
In that comparison, the $L_R$ of Her X-1 is three to ten times lower than hard-state and several soft-state Atoll sources at similar $L_X$, and more similar to that of jet-quenched sources \citep[e.g.][]{gusinskaia17}. As jet formation is poorly understood, the cause of jet quenching is puzzling. It is observed in all BH LMXBs if and when they transition into the soft spectral state \citep[e.g.][]{gallo03}, but the picture is more ambiguous in accreting NSs: only in a handful of sources is quenching observed \citep[see e.g.][]{millerjones10, gusinskaia17}. If jet formation requires large scale-height poloidal magnetic fields, quenching might be explained as follows: in the hard spectral state, the accretion disk might be truncated away from the compact object \citep[e.g.][]{done07} as a radiatively-inefficient accretion flow \citep[RIAF;][]{narayan95} or corona replaces the inner disk, providing the required fields. As the disk moves inwards during the transition to softer spectral states, the RIAF disappears or the corona is cooled, breaking the jet formation mechanism. If the above scenario indeed underlies jet quenching, it is difficult to reconcile with the low $L_R$ in Her X-1: there, the strong stellar magnetic field prevents the disk from moving inwards. However, this stellar field might instead hamper the initial formation of a RIAF or corona, also effectively quenching the jet. Alternatively, the jet formation might be partially suppressed as the disk pressure cannot dominate and twist the strong magnetic field \citep{massi08}, or as the magnetic field prevents the formation of a boundary layer at the NS surface, which might play a role in NS jet formation \citep{livio99}. \subsection{Implications for future research} Out of the considered origins for the radio emission (shocks, a propeller outflow and a compact jet), a jet appears most compatible with both the correlated X-ray and radio properties and with the known properties of the Her X-1 system.
The presence of a jet in Her X-1 automatically implies that (the combination of) a strong magnetic field and a slow spin do not completely impede jet formation \citep[as suggested by e.g.][]{massi08}. This would imply that our understanding of jet formation, in the presence of a strong NS magnetic field, needs to be revisited. Additionally, it opens up the possibility of observing radio emission from several currently undetected sources: for instance, the LMXBs containing slow X-ray pulsars and Be/X-ray binaries accreting from a (small) disk would be prime targets for such studies, as current-generation radio telescopes (e.g. \textit{VLA}, \textit{ATCA}) reach sensitivities orders of magnitude below the current typical upper limits for these sources. In order to confidently confirm the jet nature of the radio emission in Her X-1, new observations are necessary. A measurement of the radio spectral index, combined with a linear polarization measurement, and a search for extended structure or a jet-break in the spectrum, could reveal the emission mechanism. If a jet is indeed present, this opens up interesting possibilities to better understand Her X-1 itself. For instance, observations at different states during its $35$-day cycle could independently confirm that precession causes this cycle, as the jet likely emits from further out without being obscured. Additionally, a jet might be a better tracer of the mass accretion rate onto the NS than the X-rays, if it is indeed not influenced by obscuration. That could possibly also allow a more detailed study of Her X-1's rare off state, wherein barely any X-ray emission is observed for extended periods of time. \section*{Acknowledgements} JvdE and TDR acknowledge the hospitality of ICRAR Curtin, where part of this research was carried out, and support from the Leids Kerkhoven-Bosscha Fonds. JvdE and ND are supported by a Vidi grant from the Netherlands Organization for Scientific Research (NWO) awarded to ND.
TDR is supported by a Veni grant from the NWO. JCAM-J is the recipient of an Australian Research Council Future Fellowship (FT140101082). \input{output.bbl} \bsp \label{lastpage} \end{document}
\section{Introduction} For the Conway notation of knots, the reader can consult the seminal paper by J.~Conway \cite{2}, where this notation is introduced, the paper by A.~Caudron \cite{3}, and the books \cite{4,5}. In particular, drawings of all knots up to $n=10$ crossings according to Conway notation, where every knot is represented by a single diagram, are given in Appendix C of the book "Knots and links" by D.~Rolfsen \cite{4}. In Sections 2, 3 and 4 we compute ascending numbers for 64 knots with at most $n=10$ crossings and determine upper and lower bounds of ascending numbers for all knots up to $n=10$ crossings. For twist knots, i.e., knots of the family $p\,2$ ($p\ge 1$) in the Conway notation, the ascending number is one, and for all other knots $a(K)\ge 2$, i.e., $a(K)\ge \max(u(K),2)$. This means that if there is a diagram $\widetilde{K}$ of a knot $K$ with $a(\widetilde{K})=u(K)$, then $a(K)=a(\widetilde{K})=u(K)$. Except for several knots, the unknotting numbers of knots with at most $n=10$ crossings are known, and they are given in "Tables of knot invariants" by C.~Livingston and J.C.~Cha \cite{6}. Bridge numbers of knots with $n\le 10$ crossings are given in the same tables, but they are not useful for our purpose because for all knots with $n\le 10$ the bridge number is 2 or 3. In order to improve the upper bound given by the inequality (1), which follows from the minimal crossing number, we computed ascending numbers of all minimal diagrams. As an additional improvement, for some knots we obtained upper bounds from ascending numbers of some of their non-minimal diagrams. For all computations we used the program "LinKnot" \cite{5}. In Section 5 we prove a theorem on the signature that enables the computation of general formulae for the signature of alternating knot families given by their Conway symbols.
These general formulae enabled us to recognize the families of knots, obtained from alternating generating knots with at most $n=10$ crossings, for which the unknotting numbers are determined by the signatures computed in Section 6. In Section 7 we consider some families of knots whose ascending numbers coincide with the unknotting number. The ascending number of a link $L$ is described in the paper "Ascending number of knots and links" by M.~Ozawa \cite{1}. In our paper we restrict the consideration of ascending numbers to knots, so we repeat the definitions of basic terms from \cite{1}. A knot diagram is {\it based} if a base point (different from the crossing points) is specified on the diagram, and {\it oriented} if an orientation is assigned to it. Let $K$ be a knot and $\widetilde{K}$ be a based oriented diagram of $K$. The {\it descending diagram} of $\widetilde{K}$, denoted by $d(\widetilde{K})$, is obtained as follows: beginning at the basepoint of $\widetilde{K}$ and proceeding in the direction specified by the orientation, change the crossings as necessary so that each crossing is first encountered as an over-crossing. Note that $d(\widetilde{K})$ is the diagram of a trivial knot. \begin{definition} Let $K$ be a knot and let $\widetilde{K}$ be a based oriented diagram of $K$. The {\it ascending number} of $\widetilde{K}$ is defined as the number of crossings at which $\widetilde{K}$ and $d(\widetilde{K})$ differ, and is denoted by $a(\widetilde{K})$. The ascending number of $K$ is defined as the minimum of $a(\widetilde{K})$ over all based oriented diagrams $\widetilde{K}$ of $K$, and is denoted by $a(K)$ \cite{1}.
\end{definition} Among the theorems proved in \cite{1}, we recall four that give upper and lower bounds for ascending numbers of knots: \begin{enumerate} \item for a non-trivial knot $K$, we have $$a(K)\le \lfloor{{c(K)-1}\over 2}\rfloor \eqno (1)$$ \noindent where $c(K)$ is the minimum crossing number of $K$, and $\lfloor x\rfloor$ denotes the integer part of $x$; \item for every non-trivial knot $K$, we have $$a(K)\ge u(K)$$ \noindent where $u(K)$ is the unknotting number of $K$; \item the ascending number of a knot $K$ is one {\it iff} $K$ is a twist knot; \item for a knot $K$, we have $$a(K)\ge b(K)-1$$ \noindent where $b(K)$ is the bridge number of $K$. \end{enumerate} \section{Ascending numbers of knots up to 8 crossings} Ascending numbers of knots up to $n=8$ crossings are given in the tables of the paper \cite{1} and illustrated by based oriented knot diagrams realizing the minimal ascending number; diagrams that coincide with minimal crossing diagrams are omitted there. Among knots whose minimal diagram realizes the ascending number, we recognized two more: $7_6=2\,2\,1\,2$ and $8_{12}=2\,2\,2\,2$, illustrated in Fig. 1. For knots $8_{16}=.2.2\,0$ and $8_{17}=.2.2$ we found non-minimal diagrams with diagram ascending number equal to 2, so $a(8_{16})=2$ and $a(8_{17})=2$. In the corresponding tables every knot is given in classical Conway notation {\it Con} \cite{4}, followed by the unknotting number $u$ \cite{6}, the upper bound $a_d$ for the ascending number (obtained mostly from minimal diagrams), and the ascending number $a$. For knots with unknown ascending numbers a sequence is given, beginning with lower bounds and ending with the best known upper bound (e.g., $[2,3]$).
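The quantity $a(\widetilde{K})$ can be computed directly from a Gauss code of a diagram: traverse the code from each possible basepoint (a cyclic shift), in both orientations, count the crossings first encountered as an under-crossing, and take the minimum. The sketch below is ours (the computations in this paper were done with the program "LinKnot", not with this code); the Gauss codes are the standard alternating ones:

```python
def diagram_ascending_number(gauss):
    """Minimum, over basepoints (cyclic shifts) and both orientations, of the
    number of crossings first met as an under-crossing 'U'; exactly those
    crossings must be switched to obtain the descending diagram d(K~)."""
    def changes(seq):
        first = {}
        for label, ou in seq:
            first.setdefault(label, ou)  # record first encounter per crossing
        return sum(1 for ou in first.values() if ou == 'U')
    n = len(gauss)
    variants = [gauss, list(reversed(gauss))]  # both orientations
    return min(changes(seq[i:] + seq[:i]) for seq in variants for i in range(n))

# Alternating Gauss codes of the trefoil 3_1 = 3 and the figure-eight 4_1 = 2 2:
trefoil = [(1, 'O'), (2, 'U'), (3, 'O'), (1, 'U'), (2, 'O'), (3, 'U')]
fig8 = [(1, 'O'), (2, 'U'), (3, 'O'), (4, 'U'),
        (2, 'O'), (1, 'U'), (4, 'O'), (3, 'U')]
print(diagram_ascending_number(trefoil))  # 1, matching a(3_1) = 1
print(diagram_ascending_number(fig8))     # 1, matching a(4_1) = 1 (twist knots)
print((3 - 1) // 2, (4 - 1) // 2)         # upper bounds from inequality (1): 1 1
```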
For knots up to $n=8$ crossings, the computation of ascending numbers corresponding to all minimal diagrams gives no improvement of the upper bound obtained from the crossing number, but for many knots with $n=9$ or $n=10$ crossings it yields the upper bound 3 instead of the upper bound 4 obtained from the crossing number. \begin{figure}[th] \centerline{\psfig{file=afig001.eps,width=2.20in}} \vspace*{8pt} \caption{(a) Knot $7_6$; (b) knot $8_{12}$.\label{fig1}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig002.eps,width=2.20in}} \vspace*{8pt} \caption{(a) Knot $8_{16}$; (b) knot $8_{17}$.\label{fig2}} \end{figure} \footnotesize \noindent \begin{tabular}{|c|c|c|c|c||c|c|c|c|c|} \hline $K$ & $Con$ & $u$ & $a_d$ & $a$ & $K$ & $Con$ & $u$ & $a_d$ & $a$ \\ \hline $3_1$ & $3$ & 1 & 1 & 1 & $7_1$ & $7$ & 3 & 3 & 3 \\ \hline $4_1$ & $2\,2$ & 1 & 1 & 1 & $7_2$ & $5\,2$ & 1 & 3 & 1 \\ \hline $5_1$ & $5$ & 2 & 2 & 2 & $7_3$ & $4\,3$ & 2 & 3 & 2 \\ \hline $5_2$ & $3\,2$ & 1 & 2 & 1 & $7_4$ & $3\,1\,3$ & 2 & 3 & 2 \\ \hline $6_1$ & $4\,2$ & 1 & 2 & 1 & $7_5$ & $3\,2\,2$ & 2 & 3 & 2 \\ \hline $6_2$ & $3\,1\,2$ & 1 & 2 & 2 & $7_6$ & $2\,2\,1\,2$ & 1 & 2 & 2 \\ \hline $6_3$ & $2\,1\,1\,2$ & 1 & 2 & 2 & $7_7$ & $2\,1\,1\,1\,2$ & 1 & 2 & 2 \\ \hline \end{tabular} \normalsize \bigskip \scriptsize \noindent \begin{tabular}{|c|c|c|c|c||c|c|c|c|c|} \hline $K$ & $Con$ & $u$ & $a_d$ & $a$ & $K$ & $Con$ & $u$ & $a_d$ & $a$ \\ \hline $8_1$ & $6\,2$ & 1 & 3 & 1 & $8_{12}$ & $2\,2\,2\,2$ & 2 & 2 & 2 \\ \hline $8_2$ & $5\,1\,2$ & 2 & 3 & $[2,3]$ & $8_{13}$ & $3\,1\,1\,1\,2$ & 1 & 3 & 2 \\ \hline $8_3$ & $4\,4$ & 2 & 3 & 2 & $8_{14}$ & $2\,2\,1\,1\,2$ & 1 & 3 & 2 \\ \hline $8_4$ & $4\,1\,3$ & 2 & 3 & 2 & $8_{15}$ & $2\,1,2\,1,2$ & 2 & 3 & 2 \\ \hline $8_5$ & $3,3,2$ & 2 & 3 & $[2,3]$ & $8_{16}$ & $.2.2\,0$ & 2 & 3 & 2 \\ \hline $8_6$ & $3\,3\,2$ & 2 & 3 & 2 & $8_{17}$ & $.2.2$ & 1 & 3 & 2 \\ \hline $8_7$ & $4\,1\,1\,2$ & 1 & 3 & $[2,3]$ & $8_{18}$ & $8^*$ & 2 & 2 & 2
\\ \hline $8_8$ & $2\,3\,1\,2$ & 2 & 3 & 2 & $8_{19}$ & $3,3,-2$ & 3 & 3 & 3 \\ \hline $8_9$ & $3\,1\,1\,3$ & 1 & 3 & $[2,3]$ & $8_{20}$ & $3,2\,1,-2$ & 1 & 2 & 2 \\ \hline $8_{10}$ & $2\,1,3,2$ & 2 & 3 & $[2,3]$ & $8_{21}$ & $2\,1,2\,1,-2$ & 1 & 2 & 2 \\ \hline $8_{11}$ & $3\,2\,1\,2$ & 1 & 3 & 2 & & & & & \\ \hline \end{tabular} \normalsize \bigskip \section{Ascending numbers of knots with 9 crossings} According to \cite{1}, for knots with $n=9$ crossings ascending numbers are known only for six knots: $a(9_3)=3$, $a(9_4)=2$, $a(9_6)=3$, $a(9_7)=2$, $a(9_{47})=2$, and $a(9_{48})=2$, and they were determined by M.~Okuda. For knots $9_{47}$ and $9_{48}$ they can be determined from their minimal diagrams. Hence, for 23 new non-trivial knots\footnote{For knots $9_1$ and $9_2$ it is trivial to conclude that $a(9_1)=4$ and $a(9_2)=1$.} with $n=9$ crossings we obtained their ascending numbers. Based oriented diagrams corresponding to these knots are illustrated in Figs. 3-10. All these alternating knots with $n=9$ crossings are given by non-minimal based oriented diagrams realizing their ascending numbers. For all remaining knots with $n=9$ crossings, except for the knot $9_{40}$, by computing diagram ascending numbers for all minimal crossing diagrams (or for some non-alternating diagrams in the case of knots $9_{29}$ and $9_{39}$) we succeeded in reducing the set of possible values of the ascending number to $[2,3]$ (meaning 2 or 3).
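The bracketed entries in these tables arise by combining the lower bound $a(K)\ge\max(u(K),2)$, valid for every knot that is not a twist knot, with the best diagrammatic upper bound $a_d$. A small helper of ours making this bookkeeping explicit:

```python
def ascending_bracket(u, a_d):
    """Possible values of a(K) for a non-twist knot, given the unknotting
    number u (lower bound via a(K) >= max(u, 2)) and the best diagrammatic
    upper bound a_d."""
    lo = max(u, 2)
    return lo if lo == a_d else list(range(lo, a_d + 1))

print(ascending_bracket(3, 3))  # 3         -- e.g. 9_49: a = 3 is determined
print(ascending_bracket(1, 3))  # [2, 3]    -- e.g. 9_26
print(ascending_bracket(2, 4))  # [2, 3, 4] -- e.g. 10_109
```

For twist knots the theorem of \cite{1} gives $a(K)=1$ directly, regardless of $a_d$.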
\bigskip \scriptsize \noindent \begin{tabular}{|c|c|c|c|c||c|c|c|c|c|} \hline $K$ & $Con$ & $u$ & $a_d$ & $a$ & $K$ & $Con$ & $u$ & $a_d$ & $a$ \\ \hline $9_1$ & $9$ & 4 & 4 & 4 & $9_{26}$ & $3\,1\,1\,1\,1\,2$ & 1 & 3 & $[2,3]$ \\ \hline $9_2$ & $7\,2$ & 1 & 4 & 1 & $9_{27}$ & $2\,1\,2\,1\,1\,2$ & 1 & 3 & $[2,3]$ \\ \hline $9_3$ & $6\,3$ & 3 & 4 & 3 & $9_{28}$ & $2\,1,2\,1,2+$ & 1 & 3 & $[2,3]$ \\ \hline $9_4$ & $5\,4$ & 2 & 4 & 2 & $9_{29}$ & $.2.2\,0.2$ & 2 & 4 & $[2,3]$ \\ \hline $9_5$ & $5\,1\,3$ & 2 & 4 & 2 & $9_{30}$ & $2\,1\,1,2\,1,2$ & 1 & 3 & $[2,3]$ \\ \hline $9_6$ & $5\,2\,2$ & 3 & 4 & 3 & $9_{31}$ & $2\,1\,1\,1\,1\,1\,2$ & 2 & 3 & $[2,3]$ \\ \hline $9_7$ & $3\,4\,2$ & 2 & 4 & 2 & $9_{32}$ & $.2\,1.2\,0$ & 2 & 3 & $[2,3]$ \\ \hline $9_8$ & $2\,4\,1\,2$ & 2 & 3 & 2 & $9_{33}$ & $.2\,1.2$ & 1 & 3 & $[2,3]$ \\ \hline $9_9$ & $4\,2\,3$ & 3 & 4 & 3 & $9_{34}$ & $8^*2\,0$ & 1 & 3 & 2 \\ \hline $9_{10}$ & $3\,3\,3$ & 3 & 4 & 3 & $9_{35}$ & $3,3,3$ & 3 & 4 & 3 \\ \hline $9_{11}$ & $4\,1\,2\,2$ & 2 & 3 & $[2,3]$ & $9_{36}$ & $2\,2,3,2$ & 2 & 3 & $[2,3]$ \\ \hline $9_{12}$ & $4\,2\,1\,2$ & 1 & 3 & 2 & $9_{37}$ & $3,2\,1,2\,1$ & 2 & 3 & 2 \\ \hline $9_{13}$ & $3\,2\,1\,3$ & 3 & 4 & 3 & $9_{38}$ & $.2.2.2$ & 3 & 4 & 3 \\ \hline $9_{14}$ & $4\,1\,1\,1\,2$ & 1 & 3 & 2 & $9_{39}$ & $2:2:2\,0$ & 1 & 4 & $[2,3]$ \\ \hline $9_{15}$ & $2\,3\,2\,2$ & 2 & 3 & 2 & $9_{40}$ & $9^*$ & 2 & 4 & $[2,3,4]$ \\ \hline $9_{16}$ & $3,3,2+$ & 3 & 4 & 3 & $9_{41}$ & $2\,0:2\,0:2\,0$ & 2 & 3 & $[2,3]$ \\ \hline $9_{17}$ & $2\,1\,3\,1\,2$ & 2 & 3 & $[2,3]$ & $9_{42}$ & $2\,2,3,-2$ & 1 & 2 & 2 \\ \hline $9_{18}$ & $3\,2\,2\,2$ & 2 & 4 & 2 & $9_{43}$ & $2\,1\,1,3,-2$ & 2 & 3 & $[2,3]$ \\ \hline $9_{19}$ & $2\,3\,1\,1\,2$ & 1 & 3 & 2 & $9_{44}$ & $2\,2,2\,1,-2$ & 1 & 2 & 2 \\ \hline $9_{20}$ & $3\,1\,2\,1\,2$ & 2 & 3 & $[2,3]$ & $9_{45}$ & $2\,1\,1,2\,1,-2$ & 1 & 2 & 2 \\ \hline $9_{21}$ & $3\,1\,1\,2\,2$ & 1 & 3 & 2 & $9_{46}$ & $3,3,-3$ & 2 & 2 & 2 \\ \hline $9_{22}$ & $2\,1\,1,3,2$ & 1 & 3 
& $[2,3]$ & $9_{47}$ & $8^*-2\,0$ & 2 & 2 & 2 \\ \hline $9_{23}$ & $2\,2\,1\,2\,2$ & 2 & 4 & 2 & $9_{48}$ & $2\,1,2\,1,-3$ & 2 & 2 & 2 \\ \hline $9_{24}$ & $3,2\,1,2$ & 1 & 3 & $[2,3]$ & $9_{49}$ & $-2\,0:-2\,0:-2\,0$ & 3 & 3 & 3 \\ \hline $9_{25}$ & $2\,2,2\,1,2$ & 2 & 3 & 2 & & & & & \\ \hline \end{tabular} \bigskip \normalsize \begin{figure}[th] \centerline{\psfig{file=afig003.eps,width=3.40in}} \vspace*{8pt} \caption{(a) Knot $9_5$; (b) knot $9_8$; (c) knot $9_9$.\label{fig3}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig004.eps,width=3.40in}} \vspace*{8pt} \caption{(a) Knot $9_{10}$; (b) knot $9_{12}$; (c) knot $9_{13}$.\label{fig4}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig005.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $9_{14}$; (b) knot $9_{15}$; (c) knot $9_{16}$.\label{fig5}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig006.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $9_{18}$; (b) knot $9_{19}$; (c) knot $9_{21}$.\label{fig6}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig007.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $9_{23}$; (b) knot $9_{25}$; (c) knot $9_{34}$.\label{fig7}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig008.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $9_{35}$; (b) knot $9_{37}$; (c) knot $9_{38}$.\label{fig8}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig009.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $9_{42}$; (b) knot $9_{44}$; (c) knot $9_{45}$.\label{fig9}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig010.eps,width=2.0in}} \vspace*{8pt} \caption{(a) Knot $9_{46}$; (b) knot $9_{49}$.\label{fig10}} \end{figure} \section{Ascending numbers of knots with 10 crossings} For knots with $n=10$ crossings, none of the ascending numbers (except for the twist knot $10_1=8\,2$ with $a(10_1)=1$) were known. In this paper we computed ascending numbers for 39 knots with $n=10$ crossings.
For some of the remaining knots, by using all minimal or some non-minimal diagrams, we succeeded in improving the upper and lower bounds for the ascending number to the set $[2,3]$. Because among 10-crossing knots there are some with unknown unknotting number ($[2,3]$, meaning 2 or 3), the corresponding bounds for the ascending number are denoted by $(2,3)$ instead of $[2,3]$, and $(2,3,4)$ instead of $[2,3,4]$. If in any of these cases the unknotting number is equal to its lower bound, this will be a counterexample to the Bernhard-Jablan Conjecture \cite{5,7,8}. Based oriented diagrams of 38 knots with $n=10$ crossings for which we succeeded in computing the ascending numbers are illustrated in Figs. 11-23. \scriptsize \noindent \begin{tabular}{|c|c|c|c|c||c|c|c|c|c|} \hline $K$ & $Con$ & $u$ & $a_d$ & $a$ & $K$ & $Con$ & $u$ & $a_d$ & $a$ \\ \hline $10_1$ & $8\,2$ & 1 & 4 & 1 & $10_{84}$ & $.2\,2.2$ & 1 & 4 & $[2,3,4]$ \\ \hline $10_2$ & $7\,1\,2$ & 3 & 4 & $[3,4]$ & $10_{85}$ & $.4.2\,0$ & 2 & 4 & $[2,3]$ \\ \hline $10_3$ & $6\,4$ & 2 & 4 & 2 & $10_{86}$ & $.3\,1.2\,0$ & 2 & 4 & $[2,3]$ \\ \hline $10_4$ & $6\,1\,3$ & 2 & 4 & 2 & $10_{87}$ & $.2\,2.2\,0$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_5$ & $6\,1\,1\,2$ & 2 & 4 & $[2,3,4]$ & $10_{88}$ & $.2\,1.2\,1$ & 1 & 3 & $[2,3]$ \\ \hline $10_6$ & $5\,3\,2$ & 3 & 4 & 3 & $10_{89}$& $2\,1.2\,1\,0$ & 2 & 3 & $[2,3]$ \\ \hline $10_7$ & $5\,2\,1\,2$ & 1 & 4 & 2 & $10_{90}$ & $.3.2.2$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_8$& $5\,1\,4$ & 2 & 3 & $[2,3]$ & $10_{91}$ & $.3.2.2\,0$ & 1 & 4 & $[2,3]$ \\ \hline $10_9$ & $5\,1\,1\,3$ & 2 & 3 & $[2,3]$ & $10_{92}$ & $.2\,1.2.2\,0$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{10}$ & $5\,1\,1\,1\,2$ & 1 & 4 & 2 & $10_{93}$ & $.3.2\,0.2$ & 2 & 4 & $[2,3]$ \\ \hline $10_{11}$ & $4\,3\,3$ & $[2,3]$ & 4 & $(2,3)$ & $10_{94}$ & $.3\,0.2.2$ & 2 & 4 & $[2,3]$ \\ \hline $10_{12}$ & $4\,3\,1\,2$ & 2 & 4 & $[2,3]$ & $10_{95}$ & $.2\,1\,0.2.2$ & 1 & 4 & $[2,3,4]$ \\ \hline $10_{13}$ & $4\,2\,2\,2$ & 2 & 3 & 2 & $10_{96}$ &
$.2.2\,1.2$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{14}$ & $4\,2\,1\,1\,2$ & 2 & 4 & $[2,3]$ & $10_{97}$ & $.2.2\,1\,0.2$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{15}$ & $4\,1\,3\,2$ & 2 & 4 & $[2,3]$ & $10_{98}$ & $.2.2.2.2\,0$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{16}$ & $4\,1\,2\,3$ & 2 & 4 & $[2,3]$ & $10_{99}$& $.2.2.2\,0.2\,0$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{17}$ & $4\,1\,1\,4$ & 1 & 4 & $[2,3,4]$ & $10_{100}$ & $3:2:2$ & $[2,3]$ & 4 & $(2,3)$ \\ \hline $10_{18}$& $4\,1\,1\,2\,2$ & 1 & 4 & 2 & $10_{101}$ & $2\,1:2:2$ & 3 & 4 & $[3,4]$ \\ \hline $10_{19}$ & $4\,1\,1\,1\,3$ & 2 & 4 & $[2,3]$ & $10_{102}$ & $3:2:2\,0$ & 1 & 4 & $[2,3,4]$ \\ \hline $10_{20}$ & $3\,5\,2$ & 2 & 4 & 2 & $10_{103}$ & $3\,0:2:2$ & 3 & 4 & $3$ \\ \hline $10_{21}$ & $3\,4\,1\,2$ & 2 & 4 & $[2,3]$ & $10_{104}$ & $3:2\,0:2\,0$ & 1 & 4 & $[2,3,4]$ \\ \hline $10_{22}$ & $3\,3\,1\,3$ & 2 & 4 & $[2,3]$ & $10_{105}$ & $2\,1:2\,0:2\,0$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{23}$ & $3\,3\,1\,1\,2$ & 1 & 4 & $[2,3]$ & $10_{106}$ & $3\,0:2:2\,0$ & 2 & 4 & $[2,3]$ \\ \hline $10_{24}$ & $3\,2\,3\,2$ & 2 & 4 & 2 & $10_{107}$ & $2\,1\,0:2:2\,0$ & 1 & 4 & $[2,3,4]$ \\ \hline $10_{25}$ & $3\,2\,2\,1\,2$ & 2 & 4 & $[2,3]$ & $10_{108}$ & $3\,0:2\,0:2\,0$ & 2 & 4 & $[2,3]$ \\ \hline $10_{26}$ & $3\,2\,1\,1\,3$ & 1 & 4 & $[2,3]$ & $10_{109}$& $2.2.2.2$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{27}$ & $3\,2\,1\,1\,1\,2$ & 1 & 4 & $[2,3]$ & $10_{110}$ & $2.2.2.2\,0$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{28}$& $3\,1\,3\,1\,2$ & 2 & 4 & $[2,3]$ & $10_{111}$ & $2.2.2\,0.2$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{29}$ & $3\,1\,2\,2\,2$ & 2 & 3 & $[2,3]$ & $10_{112}$ & $8^*3$ & 2 & 3 & $[2,3]$ \\ \hline $10_{30}$ & $3\,1\,2\,1\,1\,2$ & 1 & 4 & $[2,3]$ & $10_{113}$ & $8^*2\,1$ & 1 & 3 & $[2,3]$ \\ \hline $10_{31}$ & $3\,1\,1\,3\,2$ & 1 & 4 & 2 & $10_{114}$ & $8^*3\,0$ & 1 & 3 & $[2,3]$ \\ \hline $10_{32}$ & $3\,1\,1\,1\,2\,2$ & 2 & 3 & $[2,3]$ & $10_{115}$ & $8^*2\,0.2\,0$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{33}$ & $3\,1\,1\,1\,1\,3$ & 1 & 4 & 
$[2,3]$ & $10_{116}$ & $8^*2:2$ & 2 & 4 & $[2,3]$ \\ \hline $10_{34}$ & $2\,5\,1\,2$ & 2 & 4 & 2 & $10_{117}$ & $8^*2:2\,0$ & 2 & 4 & $[2,3]$ \\ \hline $10_{35}$ & $2\,4\,2\,2$ & 2 & 4 & 2 & $10_{118}$ & $8^*2:.2$ & 1 & 4 & $[2,3,4]$ \\ \hline $10_{36}$ & $2\,4\,1\,1\,2$ & 2 & 4 & 2 & $10_{119}$& $8^*2:.2\,0$ & 1 & 4 & $[2,3,4]$ \\ \hline $10_{37}$ & $2\,3\,3\,2$ & 2 & 4 & 2 & $10_{120}$ & $8^*2\,0::2\,0$ & 3 & 4 & $[3,4]$ \\ \hline $10_{38}$& $2\,3\,1\,2\,2$ & 2 & 4 & 2 & $10_{121}$ & $9^*2\,0$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{39}$ & $2\,2\,3\,1\,2$ & 2 & 4 & $[2,3]$ & $10_{122}$ & $9^*.2\,0$ & 2 & 3 & $[2,3]$ \\ \hline $10_{40}$ & $2\,2\,2\,1\,1\,2$ & 2 & 4 & $[2,3]$ & $10_{123}$ & $10^*$ & 2 & 3 & $[2,3]$ \\ \hline \end{tabular} \bigskip \noindent \begin{tabular}{|c|c|c|c|c||c|c|c|c|c|} \hline $K$ & $Con$ & $u$ & $a_d$ & $a$ & $K$ & $Con$ & $u$ & $a_d$ & $a$ \\ \hline $10_{41}$ & $2\,2\,1\,2\,1\,2$ & 2 & 3 & $[2,3]$ & $10_{124}$ & $5,3,-2$ & 4 & 4 & 4 \\ \hline $10_{42}$ & $2\,2\,1\,1\,1\,1\,2$ & 1 & 3 & $[2,3]$ & $10_{125}$ & $5,2\,1,-2$ & 2 & 3 & $[2,3]$ \\ \hline $10_{43}$ & $2\,1\,2\,2\,1\,2$ & 2 & 3 & $[2,3]$ & $10_{126}$ & $4\,1,3,-2$ & 2 & 3 & $[2,3]$ \\ \hline $10_{44}$ & $2\,1\,2\,1\,1\,1\,2$ & 1 & 3 & $[2,3]$ & $10_{127}$ & $4\,1,2\,1,-2$ & 2 & 3 & $[2,3]$ \\ \hline $10_{45}$ & $2\,1\,1\,1\,1\,1\,1\,2$ & 2 & 3 & $[2,3]$ & $10_{128}$ & $3\,2,3,-2$ & 3 & 3 & 3 \\ \hline $10_{46}$ & $5,3,2$ & 3 & 4 & $[3,4]$ & $10_{129}$& $3\,2,2\,1,-2$ & 1 & 3 & $[2,3]$ \\ \hline $10_{47}$ & $5,2\,1,2$ & $[2,3]$ & 4 & $(2,3,4)$ & $10_{130}$ & $3\,1\,1,3,-2$ & 1 & 3 & $[2,3]$ \\ \hline $10_{48}$& $4\,1,3,2$ & 3 & 4 & $[2,3,4]$ & $10_{131}$ & $3\,1\,1,2\,1,-2$ & 1 & 3 & $[2,3]$ \\ \hline $10_{49}$ & $4\,1,2\,1,2$ & 3 & 4 & 3 & $10_{132}$ & $2\,3,3,-2$ & 1 & 3 & 2 \\ \hline $10_{50}$ & $3\,2,3,2$ & 2 & 4 & $[2,3,4]$ & $10_{133}$ & $2\,3,2\,1,-2$ & 1 & 3 & 2 \\ \hline $10_{51}$ & $3\,2,2\,1,2$ & $[2,3]$ & 4 & $(2,3,4)$ & $10_{134}$ & $2\,2\,1,3,-2$ & 3 & 3 & 3 \\ \hline
$10_{52}$ & $3\,1\,1,3,2$ & 2 & 4 & $[2,3]$ & $10_{135}$ & $2\,2\,1,2\,1,-2$ & 2 & 3 & 2 \\ \hline $10_{53}$ & $3\,1\,1,2\,1,2$ & 3 & 4 & 3 & $10_{136}$ & $2\,2,2\,2,-2$ & 1 & 2 & 2 \\ \hline $10_{54}$ & $2\,3,3,2$ & $[2,3]$ & 4 & $(2,3,4)$ & $10_{137}$ & $2\,2,2\,1\,1,-2$ & 1 & 2 & 2 \\ \hline $10_{55}$ & $2\,3,2\,1,2$ & 2 & 4 & 2 & $10_{138}$ & $2\,1\,1,2\,1\,1,-2$ & 2 & 3 & $[2,3]$ \\ \hline $10_{56}$ & $2\,2\,1,3,2$ & 2 & 4 & $[2,3]$ & $10_{139}$& $4,3,-2\,1$ & 4 & 4 & 4 \\ \hline $10_{57}$ & $2\,2\,1,2\,1,2$ & 2 & 4 & $[2,3]$ & $10_{140}$ & $4,3,-3$ & 2 & 3 & $[2,3]$ \\ \hline $10_{58}$& $2\,2,2\,2,2$ & 2 & 3 & $[2,3]$ & $10_{141}$ & $4,2\,1,-3$ & 1 & 3 & $[2,3]$ \\ \hline $10_{59}$ & $2\,2,2\,1\,1,2$ & 1 & 3 & $[2,3]$ & $10_{142}$ & $3\,1,3,-2\,1$ & 3 & 4 & 3 \\ \hline $10_{60}$ & $2\,1\,1,2\,1\,1,2$ & 1 & 3 & $[2,3]$ & $10_{143}$ & $3\,1,3,-3$ & 1 & 3 & $[2,3]$ \\ \hline $10_{61}$ & $4,3,3$ & $[2,3]$ & 4 & $(2,3,4)$ & $10_{144}$ & $3\,1,2\,1,-3$ & 2 & 3 & $[2,3]$ \\ \hline $10_{62}$ & $4,3,2\,1$ & 2 & 4 & $[2,3,4]$ & $10_{145}$ & $2\,2,3,-2\,1$ & 2 & 3 & 2 \\ \hline $10_{63}$ & $4,2\,1,2\,1$ & 2 & 4 & $[2,3]$ & $10_{146}$ & $2\,2,2\,1,-3$ & 1 & 2 & 2 \\ \hline $10_{64}$ & $3\,1,3,3$ & 2 & 4 & $[2,3,4]$ & $10_{147}$ & $2\,1\,1,3,-3$ & 1 & 2 & 2 \\ \hline $10_{65}$ & $3\,1,3,2\,1$ & 2 & 4 & $[2,3,4]$ & $10_{148}$ & $(3,2)\,(3,-2)$ & 2 & 3 & $[2,3]$ \\ \hline $10_{66}$ & $3\,1,2\,1,2\,1$ & 3 & 4 & $[3,4]$ & $10_{149}$& $(3,2)\,(2\,1,-2)$ & 2 & 3 & $[2,3]$ \\ \hline $10_{67}$ & $2\,2,3,2\,1$ & 2 & 4 & $[2,3,4]$ & $10_{150}$ & $(2\,1,2)\,(3,-2)$ & 2 & 3 & $[2,3]$ \\ \hline $10_{68}$& $2\,1\,1,3,3$ & 2 & 4 & $[2,3,4]$ & $10_{151}$ & $(2\,1,2)\,(2\,1,-2)$ & 2 & 3 & $[2,3]$ \\ \hline $10_{69}$ & $2\,1\,1,2\,1,2\,1$ & 2 & 3 & $[2,3]$ & $10_{152}$ & $(3,2)\,-(3,2)$ & 4 & 4 & 4 \\ \hline $10_{70}$ & $2\,2,3,2+$ & 2 & 3 & $[2,3]$ & $10_{153}$ & $(3,2)\,-(2\,1,2)$ & 2 & 4 & $[2,3,4]$ \\ \hline $10_{71}$ & $2\,2,2\,1,2+$ & 1 & 3 & $[2,3]$ & $10_{154}$ &
$(2\,1,2)\,-(2\,1,2)$ & 3 & 4 & $[3,4]$ \\ \hline $10_{72}$ & $2\,1\,1,3,2+$ & 2 & 4 & $[2,3,4]$ & $10_{155}$ & $-3:2:2$ & 2 & 3 & $[2,3]$ \\ \hline $10_{73}$ & $2\,1\,1,2\,1,2+$ & 1 & 3 & $[2,3]$ & $10_{156}$ & $-3:2:2\,0$ & 1 & 3 & $[2,3]$ \\ \hline $10_{74}$ & $3,3,2\,1+$ & 2 & 4 & $[2,3,4]$ & $10_{157}$ & $-3:2\,0:2\,0$ & 2 & 3 & $[2,3]$ \\ \hline $10_{75}$ & $2\,1,2\,1,2\,1+$ & 2 & 3 & $[2,3]$ & $10_{158}$ & $-3\,0:2:2$ & 2 & 3 & $[2,3]$ \\ \hline $10_{76}$ & $3,3,2++$ & $[2,3]$ & 4 & $(2,3)$ & $10_{159}$& $-3\,0:2:2\,0$ & 1 & 2 & 2 \\ \hline $10_{77}$ & $3,2\,1,2++$ & $[2,3]$ & 4 & $(2,3)$ & $10_{160}$ & $-3\,0:2\,0:2\,0$ & 2 & 2 & 2 \\ \hline $10_{78}$& $2\,1,2\,1,2++$ & 2 & 3 & $[2,3]$ & $10_{161}$ & $3:-2\,0:-2\,0$ & 3 & 3 & 3 \\ \hline $10_{79}$ & $(3,2)\,(3,2)$ & $[2,3]$ & 4 & $(2,3,4)$ & $10_{162}$ & $-3\,0:-2\,0:-2\,0$ & 2 & 3 & $[2,3]$ \\ \hline $10_{80}$ & $(3,2)\,(2\,1,2)$ & 3 & 4 & 3 & $10_{163}$ & $8^*-3\,0$ & 2 & 2 & 2 \\ \hline $10_{81}$ & $(2\,1,2)\,(2\,1,2)$ & 2 & 4 & $[2,3]$ & $10_{164}$ & $8^*2:-2\,0$ & 1 & 3 & $[2,3]$ \\ \hline $10_{82}$ & $.4.2$ & 1 & 4 & $[2,3]$ & $10_{165}$ & $8^*2:.-2\,0$ & 2 & 3 & $[2,3]$ \\ \hline $10_{83}$ & $.3\,1.2$ & 2 & 4 & $[2,3,4]$ & $$ & $$ & & & \\ \hline \end{tabular} \bigskip \normalsize \begin{figure}[th] \centerline{\psfig{file=afig011.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_3$; (b) knot $10_4$; (c) knot $10_6$.\label{fig11}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig012.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_7$; (b) knot $10_{10}$; (c) knot $10_{11}$.\label{fig12}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig013.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{13}$; (b) knot $10_{18}$; (c) knot $10_{20}$.\label{fig13}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig014.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{24}$; (b) knot $10_{31}$; (c) knot $10_{34}$.\label{fig14}} \end{figure} \begin{figure}[th] 
\centerline{\psfig{file=afig015.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{35}$; (b) knot $10_{36}$; (c) knot $10_{37}$.\label{fig15}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig016.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{38}$; (b) knot $10_{49}$; (c) knot $10_{53}$.\label{fig16}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig017.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{55}$; (b) knot $10_{80}$; (c) knot $10_{124}$.\label{fig17}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig018.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{128}$; (b) knot $10_{132}$; (c) knot $10_{133}$.\label{fig18}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig019.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{134}$; (b) knot $10_{135}$; (c) knot $10_{136}$.\label{fig19}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig020.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{137}$; (b) knot $10_{139}$; (c) knot $10_{142}$.\label{fig20}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig021.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{145}$; (b) knot $10_{146}$; (c) knot $10_{147}$.\label{fig21}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig022.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{152}$; (b) knot $10_{159}$; (c) knot $10_{160}$.\label{fig22}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afig023a.eps,width=3.0in}} \vspace*{8pt} \caption{(a) Knot $10_{103}$; (b) knot $10_{161}$; (c) knot $10_{163}$. \label{fig23}} \end{figure} \section{Signature and alternating knot families} \begin{definition} Let $S$ denote the set of numbers in the unreduced\footnote{The Conway notation is called {\it unreduced} if $1$'s denoting elementary tangles in vertices are not omitted in symbols of polyhedral links.} Conway symbol $C(L)$ of a link $L$.
Given $C(L)$ and an arbitrary (non-empty) subset $\tilde S=\{a_1, a_2, \ldots, a_m \}$ of $S$, the family $F_{\tilde S}(L)$ of knots or links derived from $L$ is constructed by substituting each $a_i \in \tilde S$, $a_i \neq 1$, in $C(L)$ by $sgn(a_i)\,(|a_i|+n)$, for $n \in {N^+}$. \end{definition} For even integers $n \geq 0$ this construction preserves the number of components, i.e., we obtain (sub)families of links with the same number of components. If all parameters in the Conway symbol of a knot or link are 1, 2, or 3, such a link is called {\it generating}. K.~Murasugi \cite{9} defined the {\it signature} $\sigma _K$ of a knot $K$ as the signature of the matrix $S_K+{S_K}^T$, where ${S_K}^T$ is the transpose of $S_K$, and $S_K$ is the Seifert matrix of the knot $K$. For alternating knots, the signature can be computed by using a combinatorial formula derived by P.~Traczyk \cite{10}. We will use this formula, proved by J.~Przytycki, in the following form, taken from \cite{11}, Theorem 7.8, Part (2): \begin{theorem} If $D$ is a reduced alternating diagram of an oriented knot, then $$\sigma_D=-{1\over 2}w+{1\over 2}(W-B)=-{1\over 2}w+{1\over 2}(|D_{s+}|-|D_{s-}|),$$ where $w$ is the writhe of $D$, $W$ is the number of white regions in the checkerboard coloring of $D$, which is for alternating minimal diagrams equal to the number of cycles $|D_{s+}|$ in the state $s+$, and $B$ is the number of black regions in the checkerboard coloring of $D$, equal to the number of cycles $|D_{s-}|$ in the state $s-$. \end{theorem} After introducing an orientation of a knot, every $n$-twist (chain of digons) becomes {\it parallel} or {\it anti-parallel}. For signs of crossings and checkerboard coloring we use the conventions shown in Fig. 24.
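The formula in the preceding theorem reduces the signature of a reduced alternating diagram to simple counting. As a minimal illustration (the function name is our own; the test values $w=-4$, $W=B=9$ are those of the generating knot $3\,2,2\,2,3\,1,3$ used in Example 1 below), it can be evaluated in a few lines of Python:

```python
def alternating_signature(w, W, B):
    """Signature of a reduced alternating knot diagram via
    sigma = -w/2 + (W - B)/2 (Traczyk/Przytycki), where w is the
    writhe and W, B are the numbers of white and black regions in
    the checkerboard coloring."""
    # Since sigma is an integer, -w + (W - B) is always even,
    # so the integer division below is exact.
    return (-w + (W - B)) // 2

# Generating knot 3 2,2 2,3 1,3: w = -4, W = B = 9, giving sigma = 2.
print(alternating_signature(-4, 9, 9))
```

Because the numerator $-w+(W-B)$ has the parity of an integer signature, no rounding issues arise even for negative writhe.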
\begin{figure}[th] \centerline{\psfig{file=achainsnew.eps,width=1.80in}} \vspace*{8pt} \caption{(a) Positive crossing and negative crossing (b) parallel positive twist; (c) parallel negative twist; (d) antiparallel positive twist; (e) antiparallel negative twist. \label{f24}} \end{figure} \begin{lemma} By replacing $n$-twist ($n\ge 2$) by $(n+2)$-twist in the Conway symbol of an alternating knot $K$, the signature changes by $-2$ if the replacement is made in a parallel twist with positive crossings, the signature changes by $+2$ if the replacement is made in a parallel twist with negative crossings, and remains unchanged if the replacement is made in an anti-parallel twist. \end{lemma} {\bf Proof}: According to the preceding theorem: \begin{enumerate} \item by adding a full twist in a parallel positive $n$-twist the writhe changes by $+2$, the number of the white regions $W$ remains unchanged, the number of black regions $B$ increases by $+2$, and the signature changes by $-2$; \item by adding a full twist in a parallel negative $n$-twist the writhe changes by $-2$, the number of white regions $W$ increases by $2$, the number of black regions $B$ remains unchanged, and the signature increases by 2; \item by adding a full twist in an anti-parallel positive $n$-twist the writhe changes by $+2$, the number of white regions $W$ increases by 2, the number of black regions $B$ remains unchanged, and the signature remains unchanged; \item by adding a full twist in an anti-parallel negative $n$-twist the writhe changes by $-2$, the number of white regions $W$ remains unchanged, the number of black regions $B$ increases by 2, and the signature remains unchanged. \end{enumerate} \begin{figure}[th] \centerline{\psfig{file=ajozef01new.eps,width=1.00in}} \vspace*{8pt} \caption{Knot family $(2p_1+1)\,(2p_2),(2p_3)\,(2p_4),(2p_5+1)\,1,(2p_6+1)$ beginning with knot $3\,2,2\,2,3\,1,3$. 
\label{f25}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=ajozef02.eps,width=1.00in}} \vspace*{8pt} \caption{Knot family $(2p_1+1):(2p_2):(2p_3)$ beginning with knot $3:2:2$. \label{f26}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=apretzel.eps,width=1.00in}} \vspace*{8pt} \caption{Pretzel knot $p_1,p_2,\ldots ,p_n$. \label{f27}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=ajozef03.eps,width=4.20in}} \vspace*{8pt} \caption{(a) Knot $t_1,\ldots ,t_n+t$; (b) knot $(t_1,\ldots ,t_m)\,(t_1',\ldots ,t_n')$. \label{f28}} \end{figure} \begin{theorem} The signature $\sigma _K$ of an alternating knot $K$ given by its Conway symbol is $$\sigma _K=\sum_P -2[{{n_i} \over 2}]c_i+2c_0,$$ where the sum is taken over all parallel twists $n_i$, $c_i\in \{1,-1\}$ is the sign of the crossings belonging to the parallel twist $n_i$, and $2c_0$ is an integer constant which can be computed from the signature of the generating knot. \end{theorem} The proof of this theorem follows directly from the preceding Lemma, which shows that only additions of full twists to parallel twists in a Conway symbol result in a change of the signature, and that every such addition changes the signature by $-2c_i$. Notice that the assumption that twist replacements are made in standard Conway symbols, i.e., Conway symbols with maximal twists, is essential for the computation of general formulae for the signature of alternating knot families. {\bf Example 1}: For the family of Montesinos knots with the Conway symbol of the form $(2p_1+1)\,(2p_2),(2p_3)\,(2p_4),(2p_5+1)\,1,(2p_6+1)$ (Fig. 25), beginning with the generating knot $3\,2,2\,2,3\,1,3$, the parallel twists with negative crossings are $2p_2$ and $2p_6+1$, the parallel twist with positive crossings is $2p_5+1$, and the remaining twists are anti-parallel. Hence, the signature is $\sigma =2p_2-2p_5+2p_6+2c_0$.
Since the writhe of the generating knot $G=3\,2,2\,2,3\,1,3$ is $w=-4$ and its checkerboard coloring has $W=9$ white and $B=9$ black regions, its signature is $2$. Evaluating the formula $\sigma =2p_2-2p_5+2p_6+2c_0$ for $\sigma_G=2$, $p_2=1$, $p_5=1$, and $p_6=1$, we obtain $c_0=0$. Hence, the general formula for the signature of knots belonging to the family $(2p_1+1)\,(2p_2),(2p_3)\,(2p_4),(2p_5+1)\,1,(2p_6+1)$ is $2p_2-2p_5+2p_6$. {\bf Example 2}: For the family of polyhedral knots with the Conway symbol of the form $(2p_1+1):(2p_2):(2p_3)$ ($p_1\ge 1$, $p_2\ge 1$, $p_3\ge 1$), beginning with the knot $3:2:2$ (Fig. 26), all twists are parallel twists with positive crossings, and the formula for the signature is $-2p_1-2p_2-2p_3+2$, i.e., $c_0=1$. The constant $c_0$ is computed from the signature of the generating knot $3:2:2$, which is equal to $-4$. {\bf Example 3}: Let us consider pretzel knots and links (Fig. 27) given by the Conway symbol $p_1,\ldots ,p_n$ ($n\ge 3$). We obtain knots if all $p_i$ ($i=1,...,n$) are odd and $n$ is an odd number, or if one twist is even, and all the others are odd. If all twists are odd and $n$ is an odd number, all twists are anti-parallel, and the signature is $\sigma _K=n-1$ for every such knot. If $n=3$, for the pretzel knots of the form $(2p_1+1),(2p_2+1),(2q)$, the twists $2p_1+1$ and $2p_2+1$ are parallel with positive crossings, the twist $2q$ is anti-parallel, and the signature is $\sigma _K=2p_1+2p_2$. For $n\ge 4$, for pretzel knots consisting of an even number of odd twists and one even twist, $2p_1+1$, $\ldots $, $2p_{2k}+1$, $2q$, all odd twists are parallel with positive crossings, the even twist $2q$ is anti-parallel, and the signature is $\sigma _K=2p_1+2p_2+\ldots +2p_{2k}$.
For $n\ge 4$, for pretzel knots consisting of an odd number of odd twists and one even twist, $2p_1+1$, $\ldots $, $2p_{2k+1}+1$, $2q$, all twists are parallel with positive crossings, and the signature is $\sigma _K=2p_1+2p_2+\ldots +2p_{2k+1}+2q$. Hence, for this class of pretzel knots we conclude that their unknotting number is given by the formula $u_K=p_1+p_2+\ldots +p_{2k+1}+q.$ {\bf Example 4}: Let us consider knots of the form $t_1,\ldots ,t_n+t$ ($n\ge 3$), where $t_i$ and $t$ are twists (Fig. 28a). If the twists of an odd length are denoted by $p$, and twists of an even length by $q$, we have six possible cases: \begin{enumerate} \item if the tangle $t_1,\ldots ,t_n$ consists of $2k$ odd twists $p_1,\ldots,p_{2k}$, and the tangle $t$ is an odd twist $p$, the signature is given by the formula $2k+2[{p\over 2}]$ \item if the tangle $t_1,\ldots ,t_n$ consists of $2k+1$ odd twists $p_1,\ldots,p_{2k+1}$, and the tangle $t$ is an even twist $q$, the signature is given by the formula $2k+q$ \item if the tangle $t_1,\ldots ,t_n$ consists of $2k+1$ odd twists $p_1,\ldots,p_{2k+1}$ and an even twist $q_1$, and the tangle $t$ is an odd twist $p$, the signature is given by the formula $\sum_{i=1}^{2k+1}2[{p_i \over 2}]$ \item if the tangle $t_1,\ldots ,t_n$ consists of $2k+1$ odd twists $p_1,\ldots,p_{2k+1}$ and an even twist $q_1$, and the tangle $t$ is an even twist $q$, the signature is given by the formula $\sum_{i=1}^{2k+1}2[{p_i \over 2}]+q_1$ \item if the tangle $t_1,\ldots ,t_n$ consists of $2k$ odd twists $p_1,\ldots,p_{2k}$ and an even twist $q_1$, and the tangle $t$ is an odd twist $p$, the signature is given by the formula $\sum_{i=1}^{2k}2[{p_i \over 2}]+q_1$ \item if the tangle $t_1,\ldots ,t_n$ consists of $2k$ odd twists $p_1,\ldots,p_{2k}$ and an even twist $q_1$, and the tangle $t$ is an even twist $q$, the signature is given by the formula $\sum_{i=1}^{2k}2[{p_i \over 2}].$ \end{enumerate} {\bf Example 5}: As a more complex example, we provide
general formulae for the signature of knots of the type $(t_1,\ldots ,t_m)\,(t_1',\ldots ,t_n')$ ($m\ge 2$, $n\ge 2$), where twists are denoted by $t_i$ or $t_i'$ (Fig. 28b). If the twists of an odd length are denoted by $p$, and twists of an even length by $q$, we have seven possible cases: \begin{enumerate} \item if the first tangle $t_1,\ldots ,t_m$ consists of $2k$ odd twists $p_1,\ldots,p_{2k}$, and the second tangle $t_1',\ldots ,t_n'$ consists of $2r+1$ odd twists $p_1',\ldots,p_{2r+1}'$, the signature is given by the formula $$\sum_{i=1}^{2r+1}2[{p_i' \over 2}]+2k$$ \item if the first tangle $t_1,\ldots ,t_m$ consists of $2k$ odd twists $p_1,\ldots,p_{2k}$, and the second tangle $t_1',\ldots ,t_n'$ consists of $2r$ odd twists $p_1',\ldots,p_{2r}'$, the signature is given by the formula $$\sum_{i=1}^{2k}2[{p_i \over 2}]-\sum_{i=1}^{2r}2[{p_i' \over 2}]$$ \item if the first tangle $t_1,\ldots ,t_m$ consists of $2k$ odd twists $p_1,\ldots,p_{2k}$ and one even twist $q_1$, and the second tangle $t_1',\ldots ,t_n'$ consists of $2r+1$ odd twists $p_1',\ldots,p_{2r+1}'$, the signature is given by the formula $$\sum_{i=1}^{2k}2[{p_i \over 2}]+q_1+2r$$ \item if the first tangle $t_1,\ldots ,t_m$ consists of $2k+1$ odd twists $p_1,\ldots,p_{2k+1}$ and one even twist $q_1$, and the second tangle $t_1',\ldots ,t_n'$ consists of $2r+1$ odd twists $p_1',\ldots,p_{2r+1}'$, the signature is given by the formula $$\sum_{i=1}^{2k+1}2[{p_i \over 2}]+2r$$ \item if the first tangle $t_1,\ldots ,t_m$ consists of $2k$ odd twists $p_1,\ldots,p_{2k}$ and one even twist $q_1$, and the second tangle $t_1',\ldots ,t_n'$ consists of $2r$ odd twists $p_1',\ldots,p_{2r}'$ and one even twist $q_1'$, the signature is given by the formula $$\sum_{i=1}^{2k}2[{p_i \over 2}]-\sum_{i=1}^{2r}2[{p_i' \over 2}]$$ \item if the first tangle $t_1,\ldots ,t_m$ consists of $2k$ odd twists $p_1,\ldots,p_{2k}$ and one even twist $q_1$, and the second tangle $t_1',\ldots ,t_n'$ consists of $2r+1$ odd
twists $p_1',\ldots,p_{2r+1}'$ and one even twist $q_1'$, the signature is given by the formula $$\sum_{i=1}^{2k}2[{p_i \over 2}]-\sum_{i=1}^{2r+1}2[{p_i' \over 2}]-q_1'$$ \item if the first tangle $t_1,\ldots ,t_m$ consists of $2k+1$ odd twists $p_1,\ldots,p_{2k+1}$ and one even twist $q_1$, and the second tangle $t_1',\ldots ,t_n'$ consists of $2r+1$ odd twists $p_1',\ldots,p_{2r+1}'$ and one even twist $q_1'$, the signature is given by the formula $$\sum_{i=1}^{2k+1}2[{p_i \over 2}]-\sum_{i=1}^{2r+1}2[{p_i' \over 2}]+q_1-q_1'.$$ \end{enumerate} \section{Unknotting numbers of knot families} K.~Murasugi \cite{9} proved the lower bound $u(K)\geq \frac{|\sigma _K|}{2}$ for the unknotting number of knots. Using this criterion, for many (sub)families of knots we can confirm that their $BJ$-unknotting numbers, i.e., the unknotting numbers computed according to the Bernhard-Jablan Conjecture \cite{5}, are the actual unknotting numbers of these (sub)families. The following table lists the (sub)families with this property obtained from knots with at most $n=8$ crossings: the first column gives the first knot belonging to the family, the second its Conway symbol, the third the general Conway symbol, the fourth the general formula for the signature, the fifth the unknotting number confirmed by the signature, and the sixth the conditions under which this unknotting number holds\footnote{Conditions for unknotting numbers are determined from the experimental results obtained for knots up to $n=20$ crossings.}.
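Checking a family against Murasugi's criterion is mechanical once a general signature formula is available: whenever $|\sigma_K|/2$ coincides with the $BJ$-unknotting number, the latter is confirmed as the actual unknotting number. The following Python sketch illustrates the check (the helper name is our own; the sample data are the family of $7_5$ from the table below, with $\sigma=2p_1+2p_3$ and $BJ$-unknotting number $p_1+p_3$):

```python
def murasugi_confirms(sigma, u_bj):
    """True when Murasugi's lower bound |sigma|/2 meets the
    BJ-unknotting number u_bj, which confirms u = u_bj."""
    return u_bj == abs(sigma) // 2

# Family (2p1+1)(2p2)(2p3) beginning with 7_5 = 3 2 2:
# sigma = 2*p1 + 2*p3, and the BJ-unknotting number is p1 + p3,
# so the criterion confirms the family for all parameter values.
for p1 in range(1, 5):
    for p3 in range(1, 5):
        assert murasugi_confirms(2 * p1 + 2 * p3, p1 + p3)
```

When the $BJ$-unknotting number exceeds $|\sigma_K|/2$, the check fails and the criterion is inconclusive, which is why the table records conditions on the parameters.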
\medskip \scriptsize \noindent \begin{tabular}{|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|} \hline $K$ & $Con$ & $Fam$ & $\sigma $ & $u$ & $Cond$ \\ \hline $3_1$ & $3$ & $(2p_1+1)$ & $2p_1$ & $p_1$ & $$ \\ \hline $4_1$ & $2\,2$ & $(2p_1)\,(2p_2)$ & $0$ & $$ & $$ \\ \hline $5_2$ & $3\,2$ & $(2p_1+1)\,(2p_2)$ & $2p_2$ & $p_2$ & $$ \\ \hline $6_2$ & $3\,1\,2$ & $(2p_1+1)\,1\,(2p_2)$ & $2p_1$ & $p_1$ & $p_1\ge p_2$ \\ \hline $6_3$ & $2\,1\,1\,2$ & $(2p_1)\,1\,1\,(2p_2)$ & $2p_1-2p_2$ & $|p_1-p_2|$ & $p_1\neq p_2$ \\ \hline $7_4$ & $3\,1\,3$ & $(2p_1+1)\,1\,(2p_2+1)$ & $2$ & $$ & $$ \\ \hline $7_5$ & $3\,2\,2$ & $(2p_1+1)\,(2p_2)\,(2p_3)$ & $2p_1+2p_3$ & $p_1+p_3$ & $$ \\ \hline $7_6$ & $2\,2\,1\,2$ & $(2p_1)\,(2p_2)\,1\,(2p_3)$ & $2p_3$ & $p_3$ & $p_2\le p_3$ \\ \hline $7_7$ & $2\,1\,1\,1\,2$ & $(2p_1)\,1\,1\,1\,(2p_2)$ & $0$ & $$ & $$ \\ \hline $8_5$ & $3,3,2$ & $(2p_1+1),(2p_2+1),(2p_3)$ & $2p_1+2p_2$ & $p_1+p_2$ & $p_1\ge p_3$ or $p_2\ge p_3$ \\ \hline $8_6$ & $3\,3\,2$ & $(2p_1+1)\,(2p_2+1)\,(2p_3)$ & $2p_1$ & $$ & $$ \\ \hline $8_8$ & $2\,3\,1\,2$ & $(2p_1)\,(2p_2+1)\,1\,(2p_3)$ & $2p_1-2p_3$ & $p_3-p_1$ & $p_3-p_1>p_2$ \\ \hline $8_9$ & $3\,1\,1\,3$ & $(2p_1+1)\,1\,1\,(2p_2+1)$ & $2p_1-2p_2$ & $|p_1-p_2|$ & $p_1\neq p_2$ \\ \hline $8_{10}$ & $3,2\,1,2$ & $\,\,(2p_1+1),(2p_2)\,1,(2p_3)\,\,$ & $\,\,2p_1-2p_2+2p_3\,\,$ & $p_1-p_2+p_3$ & $p_3>p_2$ \\ \hline $8_{11}$ & $3\,2\,1\,2$ & $(2p_1+1)\,(2p_2)\,1\,(2p_3)$ & $2p_2$ & $p_2$ & $p_2\ge p_3$ \\ \hline $8_{12}$ & $2\,2\,2\,2$ & $(2p_1)\,(2p_2)\,(2p_3)\,(2p_4)$ & $0$ & $$ & $$ \\ \hline $8_{13}$ & $3\,1\,1\,1\,2$ & $(2p_1+1)\,1\,1\,1\,(2p_2)$ & $2p_2-2$ & $p_2-1$ & $p_2-1>p_1$ \\ \hline $8_{14}$ & $2\,2\,1\,1\,2$ & $(2p_1)\,(2p_2)\,1\,1\,(2p_3)$ & $2p_1$ & $p_1$ & $p_2\le p_3$ \\ \hline $8_{15}$ & $2\,1,2\,1,2$ & $(2p_1)\,1,(2p_2)\,1,(2p_3)$ & $2p_1+2p_2$ & $p_1+p_2$ & $$ \\ \hline $8_{16}$ & $.2.2\,0$ & $.(2p_1).(2p_2)\,0$ & $2p_1+2p_2-2$ & $p_1+p_2-1$ & $$ \\
\hline $8_{17}$ & $.2.2$ & $.(2p_1).(2p_2)$ & $2p_1-2p_2$ & $|p_1-p_2|$ & $p_1=1,p_2>1$ \\ $$ & $$ & $$ & $$ & $$ & or $p_2=1,p_1>1$ \\ \hline \end{tabular} \scriptsize \noindent \begin{tabular}{|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|} \hline $K$ & $Con$ & $Fam$ & $\sigma $ & $u$ & $Cond$ \\ \hline $9_{10}$ & $3\,3\,3$ & $(2p_1+1)\,(2p_2+1)\,(2p_3+1)$ & $2p_2+2$ & $$ & $$ \\ \hline $9_{13}$ & $3\,2\,1\,3$ & $(2p_1+1)\,(2p_2)\,1\,(2p_3+1)$ & $2p_1+2$ & $$ & $$ \\ \hline $9_{15}$ & $2\,3\,2\,2$ & $(2p_1)\,(2p_2+1)\,(2p_3)\,(2p_4)$ & $2p_1$ & $$ & $$ \\ \hline $9_{16}$ & $3,3,2+$ & $(2p_1+1),(2p_2+1),(2p_3)+$ & $2p_1+2p_2+2p_3$ & $p_1+p_2+p_3$ & $$ \\ \hline $9_{17}$ & $2\,1\,3\,1\,2$ & $(2p_1)\,1\,(2p_2+1)\,1\,(2p_3)$ & $-2p_2$ & $p_2$ & $p_1+p_3\le p_2$ \\ \hline $9_{18}$ & $3\,2\,2\,2$ & $(2p_1+1)\,(2p_2)\,(2p_3)\,(2p_4)$ & $2p_2+2p_4$ & $p_2+p_4$ & $$ \\ \hline $9_{19}$ & $2\,3\,1\,1\,2$ & $(2p_1)\,(2p_2+1)\,1\,1\,(2p_3)$ & $0$ & $$ & $$ \\ \hline $9_{20}$ & $3\,1\,2\,1\,2$ & $(2p_1+1)\,1\,(2p_2)\,1\,(2p_3)$ & $2p_1+2p_3$ & $p_1+p_3$ & $p_1+p_3\ge p_2$ \\ \hline $9_{21}$ & $3\,1\,1\,2\,2$ & $(2p_1+1)\,1\,1\,(2p_2)\,(2p_3)$ & $2$ & $$ & $$ \\ \hline $9_{22}$ & $2\,1\,1,3,2$ & $(2p_1)\,1\,1,(2p_2+1),(2p_3)$ & $-2p_2$ & $p_2$ & $p_1+p_3-1\le p_2$ \\ \hline $9_{23}$ & $2\,2\,1\,2\,2$ & $(2p_1)\,(2p_2)\,1\,(2p_3)\,(2p_4)$ & $2p_1+2p_4$ & $p_1+p_4$ & $$ \\ \hline $9_{24}$ & $2\,1,3,2+$ & $(2p_1)\,1,(2p_2+1),(2p_3)+$ & $2p_1-2p_2$ & $$ & $$ \\ \hline $9_{25}$ & $2\,2,2\,1,2$ & $(2p_1)\,(2p_2),(2p_3)\,1,(2p_4)$ & $-2p_3$ & $$ & $$ \\ \hline $9_{26}$ & $3\,1\,1\,1\,1\,2$ & $(2p_1+1)\,1\,1\,1\,1\,(2p_2)$ & $2p_1$ & $p_1$ & $$ \\ \hline $9_{27}$ & $2\,1\,2\,1\,1\,2$ & $(2p_1)\,1\,(2p_2)\,1\,1\,(2p_3)$ & $2p_3-2p_2$ & $p_3-p_2$ & $p_2<p_3$ \\ \hline $9_{28}$ & $2\,1,2\,1,2+$ & $(2p_1)\,1,(2p_2)\,1,(2p_3)+$ & $2p_1+2p_2-2p_3$ & $p_1+p_2-p_3$ & $p_3\le p_1$ or $p_3\le p_2$ \\ \hline $9_{29}$ & $.2.2\,0.2$ &
$.(2p_1).(2p_2)\,0.(2p_3)$ & $-2p_2$ & $$ & $$ \\ \hline $9_{30}$ & $2\,1\,1,2\,1,2$ & $(2p_1)\,1\,1,(2p_2)\,1,(2p_3)$ & $2p_2-2p_3$ & $p_3-p_2$ & $p_1+p_2\le p_3$ \\ \hline $9_{31}$ & $2\,1\,1\,1\,1\,1\,2$ & $(2p_1)\,1\,1\,1\,1\,1\,(2p_2)$ & $2p_1-2p_2-2$ & $p_1+p_2-1$ & $p_1+p_2>2$ \\ \hline $9_{32}$ & $.2\,1.2\,0$ & $.(2p_1)\,1.(2p_2)\,0$ & $2p_2$ & $$ & $$ \\ \hline $9_{33}$ & $.2\,1.2$ & $.(2p_1)\,1.(2p_2)$ & $-2p_2+2$ & $p_2-1$ & $p_2-p_1\ge 2$ \\ \hline $9_{34}$ & $8^*2\,0$ & $8^*(2p_1)\,0$ & $0$ & $$ & $$ \\ \hline $9_{35}$ & $3,3,3$ & $(2p_1+1),(2p_2+1),(2p_3+1)$ & $2$ & $$ & $$ \\ \hline $9_{36}$ & $2\,2,3,2$ & $(2p_1)\,(2p_2),(2p_3+1),(2p_4)$ & $2p_3+2p_4$ & $p_3+p_4$ & $p_2\le p_4$ \\ \hline $9_{37}$ & $2\,1,2\,1,3$ & $(2p_1)\,1,(2p_2)\,1,(2p_3+1)$ & $0$ & $$ & $$ \\ \hline $9_{38}$ & $.2.2.2$ & $.(2p_1).(2p_2).(2p_3)$ & $2p_2+2$ & $$ & $$ \\ \hline $9_{39}$ & $2:2:2\,0$ & $(2p_1):(2p_2):(2p_3)\,0$ & $2$ & $$ & $$ \\ \hline $9_{41}$ & $2\,0:2\,0:2\,0$ & $(2p_1)\,0:(2p_2)\,0:(2p_3)\,0$ & $0$ & $$ & $$ \\ \hline \end{tabular} \bigskip \begin{landscape} \scriptsize \noindent \begin{tabular}{|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|} \hline $K$ & $Con$ & $Fam$ & $\sigma $ & $u$ & $Cond$ \\ \hline $10_{22}$ & $3\,3\,1\,3$ & $(2p_1+1)\,(2p_2+1)\,1\,(2p_3+1)$ & $2p_1-2p_3$ & $p_3-p_1$ & $p_3-p_1>p_2$ \\ \hline $10_{23}$ & $3\,3\,1\,1\,2$ & $(2p_1+1)\,(2p_2+1)\,1\,1\,(2p_3)$ & $2p_2-2p_3+2$ & $p_2-p_3+1$ & $p_3\le p_2$ \\ \hline $10_{24}$ & $3\,2\,3\,2$ & $(2p_1+1)\,(2p_2)\,(2p_3+1)\,(2p_4)$ & $2p_2$ & $$ & $$ \\ \hline $10_{25}$ & $3\,2\,2\,1\,2$ & $(2p_1+1)\,(2p_2)\,(2p_3)\,1\,(2p_4)$ & $2p_1+2p_3$ & $p_1+p_3$ & $p_4\le p_3$ \\ \hline $10_{26}$ & $3\,2\,1\,1\,3$ & $(2p_1+1)\,(2p_2)\,1\,1\,(2p_3+1)$ & $2p_2-2p_3$ & $|p_2-p_3|$ & $p_3-p_2>p_1$ or $p_2>p_3$ \\ \hline $10_{27}$ & $3\,2\,1\,1\,1\,2$ & $(2p_1+1)\,(2p_2)\,1\,1\,1\,(2p_3)$ & $2p_1-2p_3+2$ & $|p_1-p_3+1|$ & $p_3\le p_1, p_2=2$ \\ $$ & $$ & $$ & $$
& $$ & or $p_3-p_1>p_2+1$ \\ \hline $10_{28}$ & $3\,1\,3\,1\,2$ & $(2p_1+1)\,1\,(2p_2+1)\,1\,(2p_3)$ & $-2p_3+2$ & $p_3-1$ & $p_1+p_2+1<p_3$ \\ \hline $10_{29}$ & $3\,1\,2\,2\,2$ & $(2p_1+1)\,1\,(2p_2)\,(2p_3)\,(2p_4)$ & $2p_1$ & $$ & $$ \\ \hline $10_{30}$ & $3\,1\,2\,1\,1\,2$ & $(2p_1+1)\,1\,(2p_2)\,1\,1\,(2p_3)$ & $2$ & $$ & $$ \\ \hline $10_{31}$ & $3\,1\,1\,3\,2$ & $(2p_1+1)\,1\,1\,(2p_2+1)\,(2p_3)$ & $-2p_3+2$ & $$ & $$ \\ \hline $10_{32}$ & $3\,1\,1\,1\,2\,2$ & $(2p_1+1)\,1\,1\,1\,(2p_2)\,(2p_3)$ & $2p_1-2p_3$ & $|p_1-p_3|$ & $p_1>p_2+p_3$ \\ $$ & $$ & $$ & $$ & $$ & or $p_2=2, p_3>p_1$ \\ \hline $10_{33}$ & $3\,1\,1\,1\,1\,3$ & $(2p_1+1)\,1\,1\,1\,1\,(2p_2+1)$ & $0$ & $$ & $$ \\ \hline $10_{37}$ & $2\,3\,3\,2$ & $(2p_1)\,(2p_2+1)\,(2p_3+1)\,(2p_4)$ & $2p_1-2p_4$ & $$ & $$ \\ \hline $10_{38}$ & $2\,3\,1\,2\,2$ & $(2p_1)\,(2p_2+1)\,1\,(2p_3)\,(2p_4)$ & $-2p_4$ & $$ & $$ \\ \hline $10_{39}$ & $2\,2\,3\,1\,2$ & $(2p_1)\,(2p_2)\,(2p_3+1)\,1\,(2p_4)$ & $2p_1+2p_3$ & $p_1+p_3$ & $p_1+p_2+p_3\le p_4$ \\ \hline $10_{40}$ & $2\,2\,2\,1\,1\,2$ & $(2p_1)\,(2p_2)\,(2p_3)\,1\,1\,(2p_4)$ & $2p_1+2p_3-2p_4$ & $|p_1+p_3-p_4|$ & $p_4<p_3$ \\ $$ & $$ & $$ & $$ & $$ & or $p_4>p_1+p_2+p_3$ \\ \hline $10_{41}$ & $2\,2\,1\,2\,1\,2$ & $(2p_1)\,(2p_2)\,1\,(2p_3)\,1\,(2p_4)$ & $2p_3$ & $p_3$ & $p_2+p_4\le p_3$ \\ \hline $10_{42}$ & $2\,2\,1\,1\,1\,1\,2$ & $(2p_1)\,(2p_2)\,1\,1\,1\,1\,(2p_3)$ & $-2p_3+2$ & $$ & $$ \\ \hline $10_{43}$ & $2\,1\,2\,2\,1\,2$ & $(2p_1)\,1\,(2p_2)\,(2p_3)\,1\,(2p_4)$ & $2p_1-2p_4$ & $$ & $$ \\ \hline $10_{44}$ & $2\,1\,2\,1\,1\,1\,2$ & $(2p_1)\,1\,(2p_2)\,1\,1\,1\,(2p_3)$ & $2p_1$ & $p_1$ & $p_2+p_3\le p_1+1$ \\ \hline $10_{45}$ & $2\,1\,1\,1\,1\,1\,1\,2$ & $(2p_1)\,1\,1\,1\,1\,1\,1\,(2p_2)$ & $0$ & $$ & $$ \\ \hline $10_{50}$ & $3\,2,3,2$ & $(2p_1+1)\,(2p_2),(2p_3+1),(2p_4)$ & $2p_2+2p_3$ & $p_2+p_3$ & $p_4\le p_2$ \\ \hline $10_{51}$ & $3\,2,2\,1,2$ & $(2p_1+1)\,(2p_2),(2p_3)\,1,(2p_4)$ & $2p_2+2p_4-2p_3$ & $$ & $$ \\ \hline $10_{52}$ & $3\,1\,1,3,2$ &
$(2p_1+1)\,1\,1,(2p_2+1),(2p_3)$ & $-2p_2-2p_3+2$ & $p_2+p_3-1$ & $p_3-p_1\ge 2$ \\ \hline $10_{53}$ & $3\,1\,1,2\,1,2$ & $(2p_1+1)\,1\,1,(2p_2)\,1,(2p_3)$ & $2p_2+2$ & $$ & $$ \\ \hline $10_{54}$ & $2\,3,3,2$ & $(2p_1)\,(2p_2+1),(2p_3+1),(2p_4)$ & $2p_1-2p_3-2p_4$ & $p_3+p_4-p_1$ & $p_4>p_1+p_2$ \\ \hline $10_{55}$ & $2\,3,2\,1,2$ & $(2p_1)\,(2p_2+1),(2p_3)\,1,(2p_4)$ & $2p_1+2p_3$ & $p_1+p_3$ & $$ \\ \hline $10_{56}$ & $2\,2\,1,3,2$ & $(2p_1)\,(2p_2)\,1,(2p_3+1),(2p_4)$ & $2p_1+2p_3$ & $p_1+p_3$ & $p_4\le p_1+p_2$ \\ \hline $10_{57}$ & $2\,2\,1,2\,1,2$ & $(2p_1)\,(2p_2)\,1,(2p_3)\,1,(2p_4)$ & $2p_1+2p_4-2p_3$ & $p_1+p_4-p_3$ & $p_3<p_4$ \\ \hline $10_{58}$ & $2\,1\,1,2\,1\,1,2$ & $(2p_1)\,1\,1,(2p_2)\,1\,1,(2p_3)$ & $0$ & $$ & $$ \\ \hline $10_{59}$ & $2\,2,2\,1\,1,2$ & $(2p_1)\,(2p_2),(2p_3)\,1\,1,(2p_4)$ & $2p_4$ & $p_4$ & $p_2+p_3-1\le p_4$ \\ \hline $10_{60}$ & $2\,1\,1,2\,1\,1,2$ & $(2p_1)\,1\,1,(2p_2)\,1\,1,(2p_3)$ & $0$ & $$ & $$ \\ \hline $10_{64}$ & $3\,1,3,3$ & $(2p_1+1)\,1,(2p_2+1),(2p_3+1)$ & $2p_1-2p_2-2p_3$ & $p_2+p_3-p_1$ & max$(p_2,p_3)>p_1$ \\ \hline \end{tabular} \end{landscape} \bigskip \begin{landscape} \scriptsize \noindent \begin{tabular}{|c|c|c|c|c|c|} \hline $K$ & $Con$ & $Fam$ & $\sigma $ & $u$ & $Cond$ \\ \hline $10_{65}$ & $3\,1,3,2\,1$ & $(2p_1+1)\,1,(2p_2+1),(2p_3)\,1$ & $2p_2-2p_3+2$ & $$ & $$ \\ \hline $10_{66}$ & $3\,1,2\,1,2\,1$ & $(2p_1+1)\,1,(2p_2)\,1,(2p_3)\,1$ & $2p_1+2p_2+2p_3$ & $p_1+p_2+p_3$ & $$ \\ \hline $10_{68}$ & $2\,1\,1,3,3$ & $(2p_1)\,1\,1,(2p_2+1),(2p_3+1)$ & $2p_1-2$ & $$ & $$ \\ \hline $10_{67}$ & $2\,2,3,2\,1$ & $(2p_1)\,(2p_2),(2p_3+1),(2p_4)\,1$ & $2p_1$ & $$ & $$ \\ \hline $10_{69}$ & $2\,1\,1,2\,1,2\,1$ & $(2p_1)\,1\,1,(2p_2)\,1,(2p_3)\,1$ & $2p_1$ & $$ & $$ \\ \hline $10_{70}$ & $2\,2,3,2+$ & $(2p_1)\,(2p_2),(2p_3+1),(2p_4)+$ & $2p_3$ & $$ & $$ \\ \hline $10_{71}$ & $2\,2,2\,1,2+$ & $(2p_1)\,(2p_2),(2p_3)\,1,(2p_4)+$ & $2p_4-2p_3$ & $$ & $$ \\ \hline $10_{72}$ & $2\,1\,1,3,2+$ & 
$(2p_1)\,1\,1,(2p_2+1),(2p_3)+$ & $-2p_2-2p_3$ & $p_2+p_3$ & $p_1-1\le p_2+p_3$ \\ \hline $10_{73}$ & $2\,1\,1,2\,1,2+$ & $(2p_1)\,1\,1,(2p_2)\,1,(2p_3)+$ & $2p_2$ & $p_2$ & $p_3=1$ \\ \hline $10_{74}$ & $3,3,2\,1+$ & $(2p_1+1),(2p_2+1),(2p_3)\,1+$ & $2$ & $$ & $$ \\ \hline $10_{75}$ & $2\,1,2\,1,2\,1+$ & $(2p_1)\,1,(2p_2)\,1,(2p_3)\,1+$ & $0$ & $$ & $$ \\ \hline $10_{76}$ & $3,3,2+2$ & $(2p_1+1),(2p_2+1),(2p_3)+(2p_4)$ & $2p_1+2p_2$ & $$ & $$ \\ \hline $10_{77}$ & $3,2\,1,2+2$ & $(2p_1+1),(2p_2)\,1,(2p_3)+(2p_4)$ & $2p_1+2p_3-2p_2$ & $$ & $$ \\ \hline $10_{78}$ & $2\,1,2\,1,2+2$ & $(2p_1)\,1,(2p_2)\,1,(2p_3)+(2p_4)$ & $2p_1+2p_2$ & $p_1+p_2$ & $p_4\le p_1+p_2$ \\ \hline $10_{79}$ & $(3,2)\,(3,2)$ & $((2p_1+1),(2p_2))\,((2p_3+1),(2p_4))$ & $2p_1+2p_2-2p_3-2p_4$ & $$ & $$ \\ \hline $10_{80}$ & $(3,2)\,(2\,1,2)$ & $((2p_1+1),(2p_2))\,((2p_3)\,1,(2p_4))$ & $2p_1+2p_2+2p_3$ & $p_1+p_2+p_3$ & $$ \\ \hline $10_{81}$ & $(2\,1,2)\,(2\,1,2)$ & $((2p_1)\,1,(2p_2))\,((2p_3)\,1,(2p_4))$ & $2p_1-2p_3$ & $$ & $$ \\ \hline $10_{83}$ & $.3\,1.2$ & $.(2p_1+1)\,1.(2p_2)$ & $-2p_2+2$ & $p_2-1$ & $p_2>p_1+1$ \\ \hline $10_{84}$ & $.2\,2.2$ & $.(2p_1)\,(2p_2).(2p_3)$ & $2p_1+2p_3$ & $p_1+p_3$ & $p_2=1$ or $p_3\ge 2$ \\ \hline $10_{86}$ & $.3\,1.2\,0$ & $.(2p_1+1)\,1.(2p_2)\,0$ & $2p_2$ & $$ & $$ \\ \hline $10_{87}$ & $.2\,2.2\,0$ & $.(2p_1)\,(2p_2).(2p_3)\,0$ & $2p_1-2p_3$ & $$ & $$ \\ \hline $10_{88}$ & $.2\,1.2\,1$ & $.(2p_1)\,1.(2p_2)\,1$ & $0$ & $$ & $$ \\ \hline $10_{89}$ & $.2\,1.2\,1\,0$ & $.(2p_1)\,1.(2p_2)\,1\,0$ & $2$ & $$ & $$ \\ \hline $10_{90}$ & $.3.2.2$ & $.(2p_1+1).(2p_2).(2p_3)$ & $2p_2-2p_3$ & $p_3-p_2$ & $p_3>p_1+p_2$ \\ \hline $10_{91}$ & $.3.2.2\,0$ & $.(2p_1+1).(2p_2).(2p_3)\,0$ & $2p_1-2p_2-2p_3+2$ & $p_2+p_3-p_1-1$ & $p_1\le p_3, p_2>1$ \\ \hline $10_{92}$ & $.2\,1.2.2\,0$ & $.(2p_1)\,1.(2p_2).(2p_3)\,0$ & $2p_1+2p_2$ & $p_1+p_2$ & $p_3-1\le p_1$ \\ \hline $10_{93}$ & $.3.2\,0.2$ & $.(2p_1+1).(2p_2)\,0.(2p_3)$ & $-2p_2-2p_3$ & $p_2+p_3$ & $p_1<p_2$ or 
$p_3>p_1+1$ \\ \hline $10_{94}$ & $.3\,0.2.2$ & $.(2p_1+1)\,0.(2p_2).(2p_3)$ & $2p_1+2p_2-2p_3$ & $p_1+p_2-p_3$ & $p_1-p_3\ge 1$ or $p_2-p_3\ge 1$ \\ \hline $10_{95}$ & $.2\,1\,0.2.2$ & $.(2p_1)\,1\,0.(2p_2).(2p_3)$ & $2p_1-2p_2-2$ & $p_2-p_1+1$ & $p_1=p_3=1$ \\ \hline $10_{96}$ & $.2.2\,1.2$ & $.(2p_1).(2p_2)\,1.(2p_3)$ & $0$ & $$ & $$ \\ \hline $10_{97}$ & $.2.2\,1\,0.2$ & $.(2p_1).(2p_2)\,1\,0.(2p_3)$ & $2$ & $$ & $$ \\ \hline $10_{98}$ & $.2.2.2.2\,0$ & $.(2p_1).(2p_2).(2p_3).(2p_4)\,0$ & $2p_1+2p_3$ & $p_1+p_3$ & $p_4\le p_1$ or $p_4\le p_3$ \\ \hline $10_{99}$ & $.2.2.2\,0.2\,0$ & $.(2p_1).(2p_2).(2p_3)\,0.(2p_4)\,0$ & $2p_1+2p_4-2p_2-2p_3$ & $$ & $$ \\ \hline $10_{100}$ & $3:2:2$ & $(2p_1+1):(2p_2):(2p_3)$ & $2p_1+2p_2+2p_3-2$ & $p_1+p_2+p_3-1$ & $p_2>1$ or $p_3>1$ \\ \hline $10_{101}$ & $2\,1:2:2$ & $(2p_1)\,1:(2p_2):(2p_3)$ & $2p_1+2$ & $$ & $$ \\ \hline $10_{102}$ & $3:2:2\,0$ & $(2p_1+1):(2p_2):(2p_3)\,0$ & $2p_3-2p_2$ & $p_2-p_3$ & $p_3=1,p_2-p_1>1$ \\ \hline $10_{103}$ & $3\,0:2:2$ & $(2p_1+1)\,0:(2p_2):(2p_3)$ & $2p_2+2p_3-2$ & $$ & $$ \\ \hline $10_{104}$ & $3:2\,0:2\,0$ & $(2p_1+1):(2p_2)\,0:(2p_3)\,0$ & $2p_1-2p_2-2p_3+2$ & $p_1-p_2-p_3+1$ & $p_1>p_2, p_3=1$ \\ $$ & $$ & $$ & $$ & $$ & or $p_1>p_3, p_2=1$ \\ \hline \end{tabular} \bigskip \noindent \begin{tabular}{|c|c|c|c|c|c|} \hline $K$ & $Con$ & $Fam$ & $\sigma $ & $u$ & $Cond$ \\ \hline $10_{105}$ & $2\,1:2\,0:2\,0$ & $(2p_1)\,1:(2p_2)\,0:(2p_3)\,0$ & $2p_1$ & $p_1$ & $p_1>p_2+p_3$ \\ \hline $10_{106}$ & $3\,0:2:2\,0$ & $(2p_1+1)\,0:(2p_2):(2p_3)\,0$ & $2p_1+2p_3-2p_2$ & $p_1+p_3-p_2$ & $p_2+1\le p_1$ \\ \hline $10_{107}$ & $2\,1\,0:2:2\,0$ & $(2p_1)\,1\,0:(2p_2):(2p_3)\,0$ & $2p_1-2$ & $$ & $$ \\ \hline $10_{108}$ & $3\,0:2\,0:2\,0$ & $(2p_1+1)\,0:(2p_2)\,0:(2p_3)\,0$ & $-2p_2-2p_3+2$ & $p_2+p_3-1$ & $p_2>p_1+2$ or $p_3>p_1+2$ \\ \hline $10_{109}$ & $2.2.2.2$ & $(2p_1).(2p_2).(2p_3).(2p_4)$ & $2p_1+2p_3-2p_2-2p_4$ & $p_1+p_3-p_2-p_4$ & $p_1\ge p_2+p_4$ \\ \hline $10_{110}$ & $2.2.2.2\,0$ & 
$(2p_1).(2p_2).(2p_3).(2p_4)\,0$ & $-2p_4$ & $p_4$ & $p_4\ge p_1+p_3$ \\ \hline $10_{111}$ & $2.2.2\,0.2$ & $(2p_1).(2p_2).(2p_3)\,0.(2p_4)$ & $2p_2+2p_3$ & $p_2+p_3$ & $p_2+p_3\ge p_4\ge p_1$ \\ \hline $10_{112}$ & $8^*3$ & $8^*(2p_1+1)$ & $2p_1$ & $p_1$ & $p_1\ge 2$ \\ \hline $10_{113}$ & $8^*2\,1$ & $8^*(2p_1)\,1$ & $2p_1$ & $p_1$ & $p_1\ge 2$ \\ \hline $10_{114}$ & $8^*3\,0$ & $8^*(2p_1+1)\,0$ & $0$ & $$ & $$ \\ \hline $10_{115}$ & $8^*2\,0.2\,0$ & $8^*(2p_1)\,0.(2p_2)\,0$ & $0$ & $$ & $$ \\ \hline $10_{116}$ & $8^*2:2$ & $8^*(2p_1):(2p_2)$ & $2p_1+2p_2-2$ & $p_1+p_2-1$ & $p_1\ge 2$ or $p_2\ge 2$ \\ \hline $10_{117}$ & $8^*2:2\,0$ & $8^*(2p_1):(2p_2)\,0$ & $2p_2$ & $$ & $$ \\ \hline $10_{118}$ & $8^*2:.2$ & $8^*(2p_1):.(2p_2)$ & $2p_1-2p_2$ & $|p_1-p_2|$ & $p_1\ge 2,p_2=1$ \\ $$ & $$ & $$ & $$ & $$ & $p_2\ge 2,p_1=1$ or $|p_1-p_2|\ge 2$ \\ \hline $10_{119}$ & $8^*2:.2\,0$ & $8^*(2p_1):.(2p_2)\,0$ & $-2p_2+2$ & $p_2-1$ & $p_2-p_1\ge 2$ \\ \hline $10_{120}$ & $8^*2\,0::2\,0$ & $8^*(2p_1)\,0::(2p_2)\,0$ & $4$ & $$ & $$ \\ \hline $10_{121}$ & $9^*2\,0$ & $9^*(2p_1)\,0$ & $2$ & $$ & $$ \\ \hline $10_{122}$ & $9^*.2\,0$ & $9^*.(2p_1)\,0$ & $0$ & $$ & $$ \\ \hline \end{tabular} \bigskip \normalsize In the following table we provide the same results for link families obtained from generating links with at most $n=9$ crossings. 
\medskip \scriptsize \noindent \begin{tabular}{|c|c|c|c|c|c|} \hline $K$ & $Con$ & $Fam$ & $\sigma $ & $u$ & $Cond$ \\ \hline $2_1^2$ & $2$ & $(2p_1)$ & $-2p_1+1$ & $p_1$ & $$ \\ \hline $5_1^2$ & $2\,1\,2$ & $(2p_1)\,1\,(2p_2)$ & $-2p_1+1$ & $p_1$ & $p_1>p_2$ \\ \hline $6_2^2$ & $3\,3$ & $(2p_1+1)\,(2p_2+1)$ & $-2p_1-1$ & $$ & $$ \\ \hline $6_3^2$ & $2\,2\,2$ & $(2p_1)\,(2p_2)\,(2p_3)$ & $-2p_1-2p_3+1$ & $p_1+p_3$ & $$ \\ \hline $7_2^2$ & $3\,1\,1\,2$ & $(2p_1+1)\,1\,1\,(2p_2)$ & $-2p_1+2p_2$ & $|p_1-p_2|$ & $$ \\ \hline $7_3^2$ & $2\,3\,2$ & $(2p_1)\,(2p_2+1)\,(2p_3)$ & $-2p_1+1$ & $$ & $$ \\ \hline $7_4^2$ & $3,2,2$ & $(2p_1+1),(2p_2),(2p_3)$ & $-2p_1-2p_3+1$ & $p_1+p_3$ & $p_2=1$ \\ \hline $7_5^2$ & $2\,1,2,2$ & $(2p_1)\,1,(2p_2),(2p_3)$ & $2p_2+2p_3-2p_1+1$ & $$ & $$ \\ \hline $7_6^2$ & $.2$ & $.(2p_1)$ & $-2p_1+1$ & $$ & $$ \\ \hline $8_4^2$ & $3\,2\,3$ & $(2p_1+1)\,(2p_2)\,(2p_3+1)$ & $-2p_1-2p_3-1$ & $$ & $$ \\ \hline $8_5^2$ & $3\,1\,2\,2$ & $(2p_1+1)\,1\,(2p_2)\,(2p_3)$ & $-2p_1-1$ & $$ & $$ \\ \hline $8_7^2$ & $2\,1\,2\,1\,2$ & $(2p_1)\,1\,(2p_2)\,1\,(2p_3)$ & $-2p_1-2p_3+1$ & $p_1+p_3$ & $p_1+p_3\ge p_2$ \\ \hline $8_9^2$ & $2\,2,2,2$ & $(2p_1)\,(2p_2),(2p_3),(2p_4)$ & $-2p_3-2p_4+1$ & $$ & $$ \\ \hline $8_{10}^2$ & $2\,1\,1,2,2$ & $(2p_1)\,1\,1,(2p_2),(2p_3)$ & $2p_3-1$ & $p_3$ & $p_1=p_2=1$ \\ \hline $8_{11}^2$ & $3,2,2+$ & $(2p_1+1),(2p_2),(2p_3)+$ & $-2p_1+1$ & $$ & $$ \\ \hline $8_{12}^2$ & $2\,1,2,2+$ & $(2p_1)\,1,(2p_2),(2p_3)+$ & $-2p_1+2p_2-1$ & $p_1-p_2+1$ & $p_2=1$ \\ \hline \end{tabular} \noindent \begin{tabular}{|c|c|c|c|c|c|} \hline $K$ & $Con$ & $Fam$ & $\sigma $ & $u$ & $Cond$ \\ \hline $8_{13}^2$ & $.2\,1$ & $.(2p_1)\,1$ & $-1$ & $$ & $$ \\ \hline $8_{14}^2$ & $.2:2$ & $.(2p_1):(2p_2)$ & $1$ & $$ & $$ \\ \hline $9_6^2$ & $3\,3\,1\,2$ & $(2p_1+1)\,(2p_2+1)\,1\,(2p_3)$ & $-2p_1+2p_3-1$ & $p_3-p_1$ & $p_3\ge p_1+p_2$ \\ \hline $9_7^2$ & $3\,2\,1\,1\,2$ & $(2p_1+1)\,(2p_2)\,1\,1\,(2p_3)$ & $-2p_1+1$ & $p_1+1$ & $p_2=1,p_3\le p_1+2$ \\ \hline 
$9_8^2$ & $3\,1\,3\,2$ & $(2p_1+1)\,1\,(2p_2+1)\,(2p_3)$ & $-2p_1+2p_3-1$ & $p_1-1$ & $p_1\ge p_2+p_3$ \\ \hline $9_9^2$ & $3\,1\,1\,1\,3$ & $(2p_1+1)\,1\,1\,1\,(2p_2+1)$ & $-2p_1+1$ & $p_1$ & $p_1-p_2\ge 1$ \\ \hline $9_{11}^2$ & $2\,2\,2\,1\,2$ & $(2p_1)\,(2p_2)\,(2p_3)\,1\,(2p_4)$ & $-2p_1-2p_3+1$ & $p_1+p_3$ & $p_3\ge p_4$ \\ \hline $9_{12}^2$ & $2\,2\,1\,1\,1\,2$ & $(2p_1)\,(2p_2)\,1\,1\,1\,(2p_3)$ & $-2p_1+2p_3-1$ & $p_3-p_1$ & $p_3>p_1+p_2$ \\ $$ & $$ & $$ & $$ & $p_1-p_3+1$ & $p_2=1, p_1>p_3$ \\ \hline $9_{15}^2$ & $3\,2,2,2$ & $(2p_1+1)\,(2p_2),(2p_3),(2p_4)$ & $-2p_2-2p_4+1$ & $$ & $$ \\ \hline $9_{16}^2$ & $3\,1\,1,2,2$ & $(2p_1+1)\,1\,1,(2p_2),(2p_3)$ & $2p_2+2p_3-3$ & $$ & $$ \\ \hline $9_{17}^2$ & $2\,3,2,2$ & $(2p_1)\,(2p_2+1),(2p_3),(2p_4)$ & $-2p_2+2p_3+2p_4-1$ & $$ & $$ \\ \hline $9_{18}^2$ & $2\,2\,1,2,2$ & $(2p_1)\,(2p_2)\,1,(2p_3),(2p_4)$ & $-2p_1-2p_4+1$ & $p_1+p_4$ & $p_3=1$ \\ \hline $9_{21}^2$ & $3\,1,3,2$ & $(2p_1+1)\,1,(2p_2+1),(2p_3)$ & $-2p_1+2p_2+2p_3-1$ & $p_2+p_3-p_1$ & $p_3>p_1$ \\ \hline $9_{22}^2$ & $3\,1,2\,1,2$ & $(2p_1+1)\,1,(2p_2)\,1,(2p_3)$ & $-2p_1-2p_2-1$ & $$ & $$ \\ \hline $9_{23}^2$ & $3,3,2\,1$ & $(2p_1+1),(2p_2+1),(2p_3)\,1$ & $-2p_1-2p_2+2p_3-1$ & $p_1+p_2-p_3+1$ & $p_1\ge p_3$ or $p_2\ge p_3$ \\ \hline $9_{24}^2$ & $2\,1,2\,1,2\,1$ & $(2p_1)\,1,(2p_2)\,1,(2p_3)\,1$ & $-2p_1-2p_2-2p_3-1$ & $p_1+p_2+p_3$ & $$ \\ \hline $9_{25}^2$ & $2\,2,2,2+$ & $(2p_1)\,(2p_2),(2p_3),(2p_4)+$ & $-2p_2+1$ & $$ & $$ \\ \hline $9_{26}^2$ & $2\,1\,1,2,2+$ & $(2p_1)\,1\,1,(2p_2),(2p_3)+$ & $-1$ & $$ & $$ \\ \hline $9_{27}^2$ & $3,2,2+2$ & $(2p_1+1),(2p_2),(2p_3)+(2p_4)$ & $-2p_1-2p_3+1$ & $$ & $$ \\ \hline $9_{28}^2$ & $2\,1,2,2+2$ & $(2p_1)\,1,(2p_2),(2p_3)+(2p_4)$ & $-2p_1+2p_2+2p_3-1$ & $$ & $$ \\ \hline $9_{29}^2$ & $(3,2)\,(2,2)$ & $((2p_1+1),(2p_2))\,((2p_3),(2p_4))$ & $2p_1+2p_2-2p_3-2p_4-1$ & $$ & $$ \\ \hline $9_{30}^2$ & $(2\,1,2)\,(2,2)$ & $((2p_1)\,1,(2p_2))\,((2p_3),(2p_4))$ & $-2p_1-2p_3-2p_4+1$ & $p_1+p_3+p_4$ & $$ \\ 
\hline $9_{32}^2$ & $.3\,1$ & $.(2p_1+1)\,1$ & $-1$ & $$ & $$ \\ \hline $9_{33}^2$ & $.2\,2$ & $.(2p_1)\,(2p_2)$ & $-2p_1+1$ & $$ & $$ \\ \hline $9_{34}^2$ & $.3.2$ & $.(2p_1+1).(2p_2)$ & $-2p_1+2p_2-1$ & $p_1$ & $p_1>1,p_2=1$ \\ \hline $9_{35}^2$ & $.3.2\,0$ & $.(2p_1+1).(2p_2)\,0$ & $-2p_1-2p_2+1$ & $p_1+p_2$ & $$ \\ \hline $9_{36}^2$ & $.3:2$ & $.(2p_1+1):(2p_2)$ & $-2p_1+2p_2-1$ & $p_1-p_2+1$ & $p_1>1,p_2=1$ \\ \hline $9_{37}^2$ & $.3:2\,0$ & $.(2p_1+1):(2p_2)\,0$ & $-2p_1+2p_2-1$ & $$ & $$ \\ \hline $9_{38}^2$ & $.2\,1:2\,0$ & $.(2p_1)\,1:(2p_2)\,0$ & $-2p_1+3$ & $$ & $$ \\ \hline $9_{39}^2$ & $.2.2.2\,0$ & $.(2p_1).(2p_2).(2p_3)\,0$ & $-2p_1+2p_2+2p_3-1$ & $p_2+p_3-p_1$ & $p_3>p_1$ \\ \hline $9_{40}^2$ & $2:2:2$ & $(2p_1):(2p_2):(2p_3)$ & $-2p_1-2p_2-2p_3+3$ & $$ & $$ \\ \hline $9_{41}^2$ & $2:2\,0:2\,0$ & $(2p_1):(2p_2)\,0:(2p_3)\,0$ & $-2p_1+2p_2+2p_3-1$ & $p_2+p_3-p_1$ & $p_2>p_1$ or $p_3>p_1$ \\ \hline $9_{42}^2$ & $8^*2$ & $8^*(2p_1)$ & $-2p_1+1$ & $p_1$ & $p_1\ge 2$ \\ \hline $6_1^3$ & $2,2,2$ & $(2p_1),(2p_2),(2p_3)$ & $-2p_1+2p_2-2p_3$ & $$ & $$ \\ \hline $7_1^3$ & $2,2,2+$ & $(2p_1),(2p_2),(2p_3)+$ & $-2p_1+2$ & $$ & $$ \\ \hline $8_3^2$ & $3\,1,2,2$ & $(2p_1+1)\,1,(2p_2),(2p_3)$ & $-2p_1+2p_2+2p_3-2$ & $$ & $$ \\ \hline $8_3^3$ & $2,2,2+2$ & $(2p_1),(2p_2),(2p_3)+(2p_4)$ & $-2p_1-2p_3+2$ & $$ & $$ \\ \hline $8_4^3$ & $(2,2)\,(2,2)$ & $((2p_1),(2p_2))\,((2p_3),(2p_4))$ & $-2p_1-2p_2+2p_3+2p_4$ & $$ & $$ \\ \hline $8_5^3$ & $.3$ & $.(2p_1+1)$ & $-2p_1$ & $$ & $$ \\ \hline $8_6^3$ & $.2:2\,0$ & $.(2p_1):(2p_2)\,0$ & $-2p_1+2p_2$ & $$ & $$ \\ \hline \end{tabular} \end{landscape} \noindent \begin{tabular}{|c|c|c|c|c|c|} \hline $K$ & $Con$ & $Fam$ & $\sigma $ & $u$ & $Cond$ \\ \hline $9_1^3$ & $2\,1\,2,2,2$ & $(2p_1)\,1\,(2p_2),(2p_3),(2p_4)$ & $-2p_1-2p_3-2p_4+2$ & $$ & $$ \\ \hline $9_2^3$ & $2\,1\,1\,1,2,2$ & $(2p_1)\,1\,1\,1,(2p_2),(2p_3)$ & $-2p_1+2p_3$ & $$ & $$ \\ \hline $9_3^3$ & $3,2,2,2$ & $(2p_1+1),(2p_2),(2p_3),(2p_4)$ & $-2p_1-2p_4+2$ & $$ & $$ \\ \hline $9_4^3$ & $2\,1,2,2,2$ & 
$(2p_1)\,1,(2p_2),(2p_3),(2p_4)$ & $-2p_1+2p_2+2p_4-2$ & $$ & $$ \\ \hline $9_6^3$ & $3\,1,2,2+$ & $(2p_1+1)\,1,(2p_2),(2p_3)+$ & $-2p_1+2p_2-2$ & $$ & $$ \\ \hline $9_7^3$ & $2,2,2+3$ & $(2p_1),(2p_2),(2p_3)+(2p_4+1)$ & $-2p_1+2$ & $$ & $$ \\ \hline $9_8^3$ & $(2,2+)\,(2,2)$ & $((2p_1),(2p_2)+)\,((2p_3),(2p_4))$ & $-2p_1+2p_3+2p_4$ & $$ & $$ \\ \hline $9_9^3$ & $(2,2)\,1\,(2,2)$ & $((2p_1),(2p_2))\,1\,((2p_3),(2p_4))$ & $-2p_1-2p_4+2$ & $$ & $$ \\ \hline $9_{10}^3$ & $.2\,1\,1$ & $.(2p_1)\,1\,1$ & $-2p_1$ & $$ & $$ \\ \hline $9_{11}^3$ & $.2\,1:2$ & $.(2p_1)\,1:2$ & $-2p_1+2$ & $$ & $$ \\ \hline $9_{12}^3$ & $.(2,2)$ & $.((2p_1),(2p_2))$ & $-2p_1+2$ & $$ & $$ \\ \hline $8_1^4$ & $2,2,2,2$ & $(2p_1),(2p_2),(2p_3),(2p_4)$ & $-2p_1-2p_4+3$ & $$ & $$ \\ \hline \end{tabular} \bigskip \normalsize \section{Ascending numbers of alternating knot families} Our next goal was to compute the ascending numbers of alternating knots belonging to certain families with known unknotting numbers, and to find their based oriented diagrams realizing the ascending number. These results are described in Theorems 5.1-5.4 for the families beginning with knots with $n\le 8$ crossings, in Theorems 5.5-5.9 for the families beginning with knots with $n=9$ crossings, and in Theorems 5.10-5.15 for the families beginning with knots with $n=10$ crossings. For all these families, except for the first and the last family, the ascending numbers are computable only from non-minimal diagrams. Hence, in Figs. 30-39 every family is represented by its minimal diagram (a) and a non-minimal based oriented diagram (b) giving the corresponding ascending number. \begin{theorem} For knots $3_1=3$, $5_1=5$, $7_1=7$, $9_1=9$, $\ldots $ of the family $2p+1$ ($p\ge 1$), the minimal ascending number is $a(K)=u(K)=p$, and it is realized on the minimal diagrams (Fig. 29). 
\end{theorem} \begin{theorem} For knots $7_3=4\,3$, $9_3=6\,3$, $\ldots $ of the family $(2p)\,3$ ($p\ge 2$) the minimal ascending number is $a(K)=u(K)=p$, and it is realized on the non-minimal diagrams of the form $((((1,(-1,(((1^{2p-2}),-1),-1))),1),-1),-1,-1)$ (Fig. 30b), where by $1^{2p-2}$ is denoted the sequence $1,\ldots ,1$ of the length $2p-2$. \end{theorem} \begin{theorem} For knots $7_5=3\,2\,2$, $9_6=5\,2\,2$, $9_9=4\,2\,3$, $\ldots $ of the family $(2p+1)\,2\,(2q)$ ($p\ge 1$, $q\ge 1$), the minimal ascending number is $a(K)=u(K)=p+q$, and it is realized on the non-minimal diagrams of the form $(2p+1),-2\,1,(2q)$ (Fig. 31b). \end{theorem} \begin{theorem} For knots $8_{15}=2\,1,2\,1,2$, $10_{49}=4\,1,2\,1,2$, $\ldots $ of the family $(2p)\,1,$ $(2q)\,1,2$ ($p\ge 1$, $q\ge 1$) the minimal ascending number is $a(K)=u(K)=p+q$, and it is realized on the non-minimal diagrams of the form $(2p)\,1,(2q)\,1,-2,1$ (Fig. 32b). \end{theorem} The absolute value of the signature is $2p$ for the knot family $2p+1$ ($p\ge 1$), $2p$ for the family $(2p)\,3$ ($p\ge 2$), $2p+2q$ for the family $(2p+1)\,2\,(2q)$ ($p\ge 1$, $q\ge 1$), and $2p+2q$ for the family $(2p)\,1,(2q)\,1,2$ ($p\ge 1$, $q\ge 1$). K.~Murasugi \cite{9} proved the lower bound $u(K)\geq \frac{|\sigma _K|}{2}$ for the unknotting number. Hence, half of the absolute value of the signature is $p$ for the family $2p+1$ ($p\ge 1$) from Theorem 5.1, $p$ for the family $(2p)\,3$ ($p\ge 2$) from Theorem 5.2, $p+q$ for the family $(2p+1)\,2\,(2q)$ ($p\ge 1$, $q\ge 1$) from Theorem 5.3, and $p+q$ for the family $(2p)\,1,(2q)\,1,2$ ($p\ge 1$, $q\ge 1$) from Theorem 5.4. 
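As a worked instance of this bound (added here for illustration), for the family $2p+1$ the signature value $|\sigma _K|=2p$ stated above gives
\[
u(K)\;\geq\;\frac{|\sigma _K|}{2}\;=\;\frac{2p}{2}\;=\;p,
\]
and since by Theorem 5.1 the ascending number $a(K)=p$ is realized on the minimal diagram, the bound is attained and $a(K)=u(K)=p$.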
The proof of Theorems 5.1-5.4 follows from the fact that in each of these families the unknotting number is equal to half of the absolute value of the signature, and it is realized on the minimal diagrams of the knots belonging to the families from Theorems 5.1-5.4, respectively. Hence, the unknotting number for these families is equal to the minimal-diagram unknotting number. For all these families we effectively constructed diagrams whose minimal ascending number equals the unknotting number, so it follows that for each knot of these families $a(K)=u(K)=\frac{|\sigma _K|}{2}$. We provide similar theorems for certain families beginning with nine- and ten-crossing knots; their proofs are analogous to the proofs of Theorems 5.1-5.4. For the rational knot family $5\,(2p)$ ($p\ge 2$) from Theorem 5.5 the absolute value of the signature is $2p$; for the rational knot family $(2p+1)\,4\,(2q)$ ($p\ge 1$, $q\ge 1$) from Theorem 5.6 it is $2p+2q$; for the knot family $3,3,(2p)+$ ($p\ge 1$) from Theorem 5.7 it is $2p+4$; for the rational knot family $3\,2\,2\,(2p)$ ($p\ge 1$) from Theorem 5.8 it is $2p+2$; for the rational knot family $(2p)\,2\,1\,2\,(2q)$ ($p\ge 1$, $q\ge 1$) from Theorem 5.9 it is $2p+2q$; for the knot family $(2p)\,3,(2q)\,1,2$ ($p\ge 1$, $q\ge 1$) from Theorem 5.10 it is $2p+2q$; and for the knot family $(3,2)\,((2p)\,1,2)$ ($p\ge 1$) from Theorem 5.11 it is $2p+4$. By similar arguments applied to non-alternating knots we can conclude that for the knot family $(2p)\,2\,1,(2q+1),-2$ ($p\ge 1$, $q\ge 1$) from Theorem 5.12 the absolute value of the signature is $2p+2q$, and for the knot family $-3\,0:(2p)\,0:2\,0$ ($p\ge 1$) from Theorem 5.13 it is $2p+2$. 
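The equalities $a(K)=u(K)=\frac{|\sigma _K|}{2}$ asserted above reduce, once the diagrams are constructed, to simple arithmetic identities between the tabulated signatures and the stated ascending numbers. The following Python sketch (illustrative only; the list transcribes the family formulas stated in the text, and the parameter ranges are arbitrary) checks these identities mechanically.

```python
# Sanity check: for each family, the ascending/unknotting number stated in the
# theorems equals half the absolute value of the signature for that family.
# The entries transcribe the formulas from the text; q is a dummy parameter
# for the one-parameter families.

families = [
    # (family, |sigma|(p, q), a(K) = u(K))
    ("2p+1",            lambda p, q: 2*p,       lambda p, q: p),
    ("(2p) 3",          lambda p, q: 2*p,       lambda p, q: p),
    ("(2p+1) 2 (2q)",   lambda p, q: 2*p + 2*q, lambda p, q: p + q),
    ("(2p) 1,(2q) 1,2", lambda p, q: 2*p + 2*q, lambda p, q: p + q),
    ("5 (2p)",          lambda p, q: 2*p,       lambda p, q: p),
    ("3,3,(2p)+",       lambda p, q: 2*p + 4,   lambda p, q: p + 2),
    ("3 2 2 (2p)",      lambda p, q: 2*p + 2,   lambda p, q: p + 1),
    ("(2p) 2 1 2 (2q)", lambda p, q: 2*p + 2*q, lambda p, q: p + q),
]

for name, abs_sigma, asc in families:
    for p in range(1, 10):
        for q in range(1, 10):
            # Murasugi bound is attained: a = u = |sigma| / 2.
            assert abs_sigma(p, q) == 2 * asc(p, q), name

print("a(K) = u(K) = |sigma|/2 holds for all sampled parameters")
```

Running the sketch confirms the identities for all sampled parameter values; of course it only checks the arithmetic, not the diagrammatic constructions themselves.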
\begin{theorem} For knots $9_4=5\,4$, $\ldots $ of the family $5\,(2p)$ ($p\ge 2$), the minimal ascending number is $a(K)=u(K)=p$, and it is realized on the non-minimal diagrams of the form $((1,(-1,(((((1,(1,(-1,-1))),1),-1),-1),-1))),1^{2p-2})$ (Fig. 33b), where by $1^{2p-2}$ is denoted the sequence $1,\ldots ,1$ of the length $2p-2$. \end{theorem} \begin{theorem} For knots $9_7=3\,4\,2$, $\ldots $ of the family $(2p+1)\,4\,(2q)$ ($p\ge 1$, $q\ge 1$) the minimal ascending number is $a(K)=u(K)=p+q$, and it is realized on the non-minimal diagrams of the form $(((-1,(1,((((-1,((1^{2p}),1)),-1),1),1))),-1),$ $(-1)^{2q})$ (Fig. 34b), where by $1^{2p}$ and $(-1)^{2q}$ are denoted the sequence $1,\ldots ,1$ of the length $2p$, and $-1,\ldots ,-1$ of the length $2q$, respectively. \end{theorem} \begin{theorem} For knots $9_{16}=3,3,2+$, $\ldots $ of the family $3,3,(2p)+$ ($p\ge 1$) the minimal ascending number is $a(K)=u(K)=p+2$, and it is realized on the non-minimal diagrams of the form $-(1,1)\,1\,1,-(1,1)\,1\,1,((-1)^{2p+1})\,1$ (Fig. 35b), where by $(-1)^{2p+1}$ is denoted the sequence $-1,\ldots ,-1$ of the length $2p+1$. \end{theorem} \begin{theorem} For knots $9_{18}=3\,2\,2\,2$, $\ldots $ of the family $3\,2\,2\,(2p)$ ($p\ge 1$), the minimal ascending number is $a(K)=u(K)=p+1$, and it is realized on non-minimal diagrams of the form $(((((1,(-1,(((-1,(1,1)),-1),-1))),1),-1),-1),(-1)^{2p})$ (Fig. 36b), where by $(-1)^{2p}$ is denoted the sequence $-1,\ldots ,-1$ of the length $2p$. \end{theorem} \begin{theorem} For knots $9_{23}=2\,2\,1\,2\,2$, $\ldots $ of the family $(2p)\,2\,1\,2\,(2q)$ ($p\ge 1$, $q\ge 1$), the minimal ascending number is $a(K)=u(K)=p+q$, and it is realized on the non-minimal diagrams of the form $((((((((1,(1,(1^{2p}))),-1),-1),1),1),-1),-1),$ $(-1)^{2q})$ (Fig. 37b), where by $1^{2p}$ and $(-1)^{2q}$ are denoted the sequence $1,\ldots ,1$ of the length $2p$, and $-1,\ldots ,-1$ of the length $2q$, respectively. 
\end{theorem} \begin{theorem} For knots $10_{50}=2\,3,2\,1,2$, $\ldots $ of the family $(2p)\,3,(2q)\,1,2$ ($p\ge 1$, $q\ge 1$), the minimal ascending number is $a(K)=u(K)=p+q$, and it is realized on the non-minimal diagrams of the form $(((-1,(-1,-(1^{2p}))),1),1),((((-(1^{2q}),1),-1),1),$ $1),-(1,1),1$ (Fig. 38b), where by $1^{2p}$ and $1^{2q}$ are denoted the sequence $1,\ldots ,1$ of the length $2p$, and $1,\ldots ,1$ of the length $2q$, respectively. \end{theorem} \begin{theorem} For knots $10_{80}=(3,2)\,(2\,1,2)$, $\ldots $ of the family $(3,2)\,((2p)\,1,2)$ ($p\ge 1$), the minimal ascending number is $a(K)=u(K)=p+2$, and it is realized on the non-minimal diagrams of the form $(((1,-(1,1)),1),-(1,1),1)$ $(((((-(1^{2p}),1),-1),1),1),$ $-(1,1),1)$ (Fig. 39b), where by $1^{2p}$ is denoted the sequence $1,\ldots ,1$ of the length $2p$. \end{theorem} \begin{figure}[th] \centerline{\psfig{file=afam01.eps,width=1.2in}} \vspace*{8pt} \caption{Family $2p+1$ ($p\ge 1$) and its corresponding minimal based oriented diagram giving the ascending number $p$. \label{fig29}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afam02.eps,width=2.4in}} \vspace*{8pt} \caption{(a) Family $(2p)\,3$ ($p\ge 2$) with the ascending number $p$ (b). \label{fig30}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afam03.eps,width=2.4in}} \vspace*{8pt} \caption{(a) Family $(2p+1)\,2\,(2q)$ ($p\ge 1$, $q\ge 1$) with the ascending number $p+q$ (b) . \label{fig31}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afam04.eps,width=2.4in}} \vspace*{8pt} \caption{(a) Family $(2p)\,1,(2q)\,1,2$ ($p\ge 1$, $q\ge 1$) with the ascending number $p+q$ (b) . \label{fig32}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afam05.eps,width=2.4in}} \vspace*{8pt} \caption{(a) Family $5\,(2p)$ ($p\ge 2$) with the ascending number $p$ (b). 
\label{fig33}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afam06.eps,width=2.4in}} \vspace*{8pt} \caption{(a) Family $(2p+1)\,4\,(2q)$ ($p\ge 1$, $q\ge 1$) with the ascending number $p+q$ (b). \label{fig34}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afam07.eps,width=2.4in}} \vspace*{8pt} \caption{(a) Family $3,3,(2p)+$ ($p\ge 1$) with the ascending number $p+2$ (b). \label{fig35}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afam08.eps,width=2.4in}} \vspace*{8pt} \caption{(a) Family $3\,2\,2\,(2p)$ ($p\ge 1$) with the ascending number $p+1$ (b). \label{fig36}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afam09.eps,width=2.8in}} \vspace*{8pt} \caption{(a) Family $(2p)\,2\,1\,2\,(2q)$ ($p\ge 1$, $q\ge 1$) with the ascending number $p+q$ (b). \label{fig37}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afam10.eps,width=2.8in}} \vspace*{8pt} \caption{(a) Family $(2p)\,3,(2q)\,1,2$ ($p\ge 1$, $q\ge 1$) with the ascending number $p+q$ (b). \label{fig38}} \end{figure} \begin{figure}[th] \centerline{\psfig{file=afam11.eps,width=2.8in}} \vspace*{8pt} \caption{(a) Family $(3,2)\,((2p)\,1,2)$ ($p\ge 1$) with the ascending number $p+2$ (b). \label{fig39}} \end{figure} \bigskip \noindent {\bf References}